Article

Conditional Optimal Sets and the Quantization Coefficients for Some Uniform Distributions

by Evans Nyanney 1, Megha Pandey 2 and Mrinal Kanti Roychowdhury 1,*
1 School of Mathematical and Statistical Sciences, University of Texas Rio Grande Valley, 1201 West University Drive, Edinburg, TX 78539-2999, USA
2 School of Mathematics, Northwest University, Xi’an 710069, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(15), 2350; https://doi.org/10.3390/math13152350
Submission received: 20 June 2025 / Revised: 20 July 2025 / Accepted: 21 July 2025 / Published: 23 July 2025
(This article belongs to the Section A: Algebra and Logic)

Abstract

Bucklew and Wise (1982) showed that the quantization dimension of an absolutely continuous probability measure on a given Euclidean space is constant and equals the Euclidean dimension of the space, and that the quantization coefficient exists as a finite positive number. In this paper, by giving different examples, we show that the quantization coefficients for absolutely continuous probability measures defined on the same Euclidean space can be different. We take uniform distributions as prototypes of absolutely continuous probability measures. In addition, we also calculate the conditional optimal sets of n-points and the nth conditional quantization errors for these uniform distributions in constrained and unconstrained scenarios.

1. Introduction

Quantization is the process of approximating a continuous-valued signal by a discrete set of values. It is a fundamental concept with widespread applications in engineering and technology. We refer to [1,2,3] for surveys on the subject and comprehensive lists of references to the literature; see also [4,5,6,7].
Definition 1.
Let P be a Borel probability measure on $\mathbb{R}^k$ equipped with a metric d induced by a norm $\|\cdot\|$ on $\mathbb{R}^k$. Let S be a nonempty closed subset of $\mathbb{R}^k$. Let $\beta \subseteq \mathbb{R}^k$ be given with $\operatorname{card}(\beta) = \ell$ for some $\ell \in \mathbb{N}$. Then, for $n \in \mathbb{N}$ with $n \geq \ell$, the nth conditional constrained quantization error for P with respect to the constraint S and the conditional set β is defined by
$V_n := V_n(P) = \inf_{\alpha} \Big\{ \int \min_{a \in \alpha \cup \beta} d(x,a)^2 \, dP(x) : \alpha \subseteq S, \ 0 \leq \operatorname{card}(\alpha) \leq n - \ell \Big\},$ (1)
where card ( A ) represents the cardinality of the set A.
Definition 2.
A set $\alpha \cup \beta$, where $\alpha \subseteq S$ and $P(M(b \,|\, \alpha \cup \beta)) > 0$ for all $b \in \beta$, for which the infimum in (1) exists and which contains no fewer than ℓ and no more than n elements, is called a conditional constrained optimal set of n-points for P with respect to the constraint S and the conditional set β.
We assume that $\int d(x,0)^2 \, dP(x) < \infty$ to make sure that the infimum in (1) exists (see [8]). For a finite set $\gamma \subseteq \mathbb{R}^2$ and $a \in \gamma$, by $M(a|\gamma)$ we denote the set of all elements in $\mathbb{R}^2$ which are nearest to a among all the elements in γ, i.e., $M(a|\gamma) = \{x \in \mathbb{R}^2 : d(x,a) = \min_{b \in \gamma} d(x,b)\}$. $M(a|\gamma)$ is called the Voronoi region in $\mathbb{R}^2$ generated by $a \in \gamma$.
Let $(V_n(P))$ be a strictly decreasing sequence, and write $V_\infty(P) := \lim_{n\to\infty} V_n(P)$. The numbers
$\underline{D}(P) := \liminf_{n\to\infty} \frac{2\log n}{-\log(V_n(P) - V_\infty(P))} \quad \text{and} \quad \overline{D}(P) := \limsup_{n\to\infty} \frac{2\log n}{-\log(V_n(P) - V_\infty(P))}$ (2)
are called the conditional lower and the conditional upper constrained quantization dimensions of the probability measure P, respectively. If $\underline{D}(P) = \overline{D}(P)$, the common value is called the conditional constrained quantization dimension of P and is denoted by D(P). For any $\kappa > 0$, the two numbers $\liminf_{n\to\infty} n^{2/\kappa}(V_n(P) - V_\infty(P))$ and $\limsup_{n\to\infty} n^{2/\kappa}(V_n(P) - V_\infty(P))$ are, respectively, called the κ-dimensional conditional lower and conditional upper constrained quantization coefficients for P. If both of them are equal, then it is called the κ-dimensional conditional constrained quantization coefficient for P, and is denoted by $\lim_{n\to\infty} n^{2/\kappa}(V_n(P) - V_\infty(P))$.
If there is no conditional set, then by the nth constrained quantization error for P with respect to the constraint $S \subseteq \mathbb{R}^k$, it is meant that
$V_n := V_n(P) = \inf_{\alpha} \Big\{ \int \min_{a \in \alpha} d(x,a)^2 \, dP(x) : \alpha \subseteq S \text{ and } 1 \leq \operatorname{card}(\alpha) \leq n \Big\},$ (3)
and then the numbers D(P) and $\lim_{n\to\infty} n^{2/\kappa}(V_n(P) - V_\infty(P))$, if they exist, are called the constrained quantization dimension and the κ-dimensional constrained quantization coefficient for P, respectively. A set $\alpha \subseteq S$ for which the infimum in (3) exists is called a constrained optimal set of n-points for P.
If there is no constraint, i.e., if S = R k , then by the nth conditional unconstrained quantization error with respect to the conditional set β , it is meant that
$V_n := V_n(P) = \inf_{\alpha} \Big\{ \int \min_{a \in \alpha \cup \beta} d(x,a)^2 \, dP(x) : \alpha \subseteq \mathbb{R}^k, \ 0 \leq \operatorname{card}(\alpha) \leq n - \ell \Big\},$ (4)
and then the numbers D(P) and $\lim_{n\to\infty} n^{2/\kappa}(V_n(P) - V_\infty(P))$, if they exist, are called the conditional unconstrained quantization dimension and the κ-dimensional conditional unconstrained quantization coefficient for P, respectively. A set $\alpha \cup \beta$ for which the infimum in (4) exists is called a conditional unconstrained optimal set of n-points for P.
If there is no constraint and no conditional set, then by the nth unconditional quantization error, it is meant that
$V_n := V_n(P) = \inf_{\alpha} \Big\{ \int \min_{a \in \alpha} d(x,a)^2 \, dP(x) : \alpha \subseteq \mathbb{R}^k, \ 1 \leq \operatorname{card}(\alpha) \leq n \Big\},$ (5)
and then the numbers D(P) and $\lim_{n\to\infty} n^{2/\kappa}(V_n(P) - V_\infty(P))$, if they exist, are called the unconstrained quantization dimension and the κ-dimensional unconstrained quantization coefficient for P, respectively. A set $\alpha \subseteq \mathbb{R}^k$ for which the infimum in (5) exists is called an optimal set of n-means for P. It is known that if the support of P contains infinitely many elements, then an optimal set of n-means contains exactly n elements, and $V_\infty = \lim_{n\to\infty} V_n = 0$.
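To make the error functionals in (1)–(5) concrete, the following small numerical sketch (our own illustration; the function name and the choice of a uniform distribution on [0, 1] are not from the paper) approximates the distortion $\int \min_{a \in \alpha \cup \beta} d(x,a)^2 \, dP(x)$ of a given finite set for a one-dimensional uniform distribution:

import numpy as np

def distortion(points, a=0.0, b=1.0, num=200_000):
    # Riemann-sum approximation of the integral of min_{p in points} (x - p)^2
    # against the uniform density 1/(b - a) on [a, b].
    x = np.linspace(a, b, num)
    d2 = np.min((x[:, None] - np.asarray(points, dtype=float)[None, :]) ** 2, axis=1)
    return d2.mean()

# With the conditional set beta = {0, 1} and alpha empty, the value is close to
# 1/12, which agrees with the closed forms obtained for line segments in Section 3.
print(distortion([0.0, 1.0]))                  # approx. 0.0833
print(distortion(np.linspace(0.0, 1.0, 5)))    # approx. 1/192, i.e., about 0.0052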
Constrained quantization and conditional quantization have recently been introduced by Pandey and Roychowdhury (see [8,9]). After the introduction of constrained quantization, quantization theory now has two classifications: constrained quantization and unconstrained quantization. Unconstrained quantization is traditionally known simply as quantization. Thus, the nth unconditional quantization error, given by (5), will traditionally be referred to as the nth quantization error. Likewise, the unconstrained quantization dimension and the κ-dimensional unconstrained quantization coefficient for P will be referred to as the quantization dimension and the κ-dimensional quantization coefficient for P, respectively. For some other papers in the direction of constrained quantization and conditional quantization, one can see [10,11,12,13]. For unconstrained quantization, one can see [1,2,3,5,6,14,15,16,17,18,19,20] and the references therein.
A seminal result by Bucklew and Wise (see [21]) established that for absolutely continuous probability measures defined on Euclidean spaces, the quantization dimension equals the Euclidean dimension of the space, and the quantization coefficient exists as a finite, positive constant. This paper is motivated by the observation that although the quantization dimension remains invariant, the quantization coefficient may vary, even among absolutely continuous probability measures defined on the same Euclidean space. To investigate this phenomenon, we consider uniform distributions as prototypical examples of absolutely continuous measures.
The primary objectives of this paper are as follows:
  • To demonstrate, through various examples, that the quantization coefficients for absolutely continuous probability measures on the same Euclidean space can differ;
  • To compute conditional optimal sets of n-points and the corresponding nth quantization errors under constrained and unconstrained scenarios;
  • To examine the asymptotic quantization behavior in the presence and absence of conditional sets, particularly when the conditional set does not affect the quantization coefficient.
Additionally, the paper explores how the geometry of the support, such as line segments, circles, and boundaries of regular polygons, affects the quantization coefficient. This investigation highlights the structural dependencies of quantization efficiency, with a focus on determining how the shape and size of the support influence the values of the quantization coefficients.

Delineation

In Section 2, we give the basic preliminaries. In Section 3, for a uniform distribution on a line segment taking different conditional sets, we calculate the conditional optimal sets of n-points and the nth conditional quantization errors. Then, for each conditional set, we calculate the quantization coefficient, and see that the quantization coefficient does not depend on the conditional set but depends on the length of the line segment. In Section 4, we calculate the conditional optimal sets of n-points, the nth conditional quantization errors, the conditional quantization dimension, and the conditional quantization coefficient in a constrained scenario for a uniform distribution defined on a circle of radius r with respect to a given conditional set and a constraint. In addition, for the same probability distribution, we investigate the optimal sets of n-means and the nth quantization errors, and the quantization coefficient in the unconstrained scenario. From the work in this section, we see that the quantization coefficient for a uniform distribution defined on a circle depends on the radius of the circle. In Section 5, we calculate the conditional optimal sets of n-points, the nth conditional quantization errors, and the conditional quantization coefficient for a uniform distribution defined on the boundary of a regular polygon which is inscribed in a circle of radius r with respect to a given conditional set. From the work in this section, we see that the quantization coefficient for a uniform distribution defined on the boundary of a regular m-sided polygon depends on both the number of sides of the polygon and the length of the sides.

2. Preliminaries

For any two elements $(a, b)$ and $(c, d)$ in $\mathbb{R}^2$, we write
$\rho((a,b),(c,d)) := (a-c)^2 + (b-d)^2,$
which gives the squared Euclidean distance between the two elements ( a , b ) and ( c , d ) . Let p and q be two elements that belong to an optimal set of n-points for some positive integer n, and let e be an element on the boundary of the Voronoi regions of the elements p and q. Since the boundary of the Voronoi regions of any two elements is the perpendicular bisector of the line segment joining the two elements, we have
$\rho(p,e) - \rho(q,e) = 0.$
We call such an equation a canonical equation.
Let P be a Borel probability measure on R which is uniform on its support the closed interval [ a , b ] . Then, the probability density function f for P is given by
$f(x) = \begin{cases} \frac{1}{b-a} & \text{if } a \leq x \leq b, \\ 0 & \text{otherwise}. \end{cases}$ (6)
Hence, we have d P ( x ) = P ( d x ) = f ( x ) d x for any x R , where d denotes the differential.
Let us now state the following proposition. For the details of the proof, see [13].
Proposition 1
([13]). Let P be a uniform distribution on the closed interval [a,b] and let $c, d \in [a,b]$ be such that $a < c < d < b$. For $n \in \mathbb{N}$ with $n \geq 2$, let $\alpha_n$ be a conditional unconstrained optimal set of n-points for P with respect to the conditional set $\beta = \{c,d\}$ such that $\alpha_n$ contains k elements from the closed interval [a,c], ℓ elements from the closed interval [c,d], and m elements from the closed interval [d,b] for some $k, \ell, m \in \mathbb{N}$ with $k, m \geq 1$ and $\ell \geq 2$. Then, $k + \ell + m = n + 2$,
$\alpha_n \cap [a,c] = \Big\{ a + \frac{(2j-1)(c-a)}{2k-1} : 1 \leq j \leq k \Big\}, \quad \alpha_n \cap [c,d] = \Big\{ c + \frac{j-1}{\ell-1}(d-c) : 1 \leq j \leq \ell \Big\}, \quad \text{and} \quad \alpha_n \cap [d,b] = \Big\{ d + \frac{2(j-1)(b-d)}{2m-1} : 1 \leq j \leq m \Big\}$
with the conditional unconstrained quantization error
$V_n := V_{k,\ell,m}(P) = \frac{1}{3(b-a)} \Big( \frac{(c-a)^3}{(2k-1)^2} + \frac{1}{4}\,\frac{(d-c)^3}{(\ell-1)^2} + \frac{(b-d)^3}{(2m-1)^2} \Big).$
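As a quick numerical sanity check of Proposition 1 (our own sketch, not part of [13]; the concrete values of a, b, c, d, k, ℓ, m below are chosen arbitrarily), one can assemble $\alpha_n$ from the three displayed pieces and compare the numerically integrated distortion with the closed form for $V_{k,\ell,m}(P)$:

import numpy as np

a, b, c, d = 0.0, 1.0, 0.3, 0.7   # a < c < d < b
k, l, m = 2, 3, 2                 # k, m >= 1 and l >= 2, so n = k + l + m - 2 = 5

# The three pieces of alpha_n from Proposition 1; c and d are shared by adjacent
# pieces, and the repeated values do not affect the distortion.
left   = [a + (2*j - 1)*(c - a)/(2*k - 1) for j in range(1, k + 1)]
middle = [c + (j - 1)*(d - c)/(l - 1)     for j in range(1, l + 1)]
right  = [d + 2*(j - 1)*(b - d)/(2*m - 1) for j in range(1, m + 1)]
alpha_n = np.array(left + middle + right)

x = np.linspace(a, b, 400_000)
numeric = np.min((x[:, None] - alpha_n[None, :]) ** 2, axis=1).mean()
closed_form = ((c - a)**3/(2*k - 1)**2
               + 0.25*(d - c)**3/(l - 1)**2
               + (b - d)**3/(2*m - 1)**2) / (3*(b - a))
print(numeric, closed_form)       # the two values agree to several decimal places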
Remark 1.
If nothing is specified, by conditional optimal sets of n-points and the conditional quantization errors, it is meant the conditional unconstrained optimal sets of n-points and the conditional unconstrained quantization errors, respectively.
The following theorem motivates us to make Remarks 3, 5 and 8.
Theorem 1 
(see [9]). In both constrained and unconstrained quantization, the lower and upper quantization dimensions and the lower and upper quantization coefficients for a Borel probability measure do not depend on the conditional set.
Remark 2.
Given that the underlying spaces for all considered probability measures P in this work are one dimensional, their quantization dimensions are given by D ( P ) = 1 (see [21]). Hence, in the sequel we are mostly interested in calculating the quantization coefficients for different uniform distributions, though in some cases we also calculate the conditional optimal sets of n-points and the nth conditional quantization errors in constrained and unconstrained scenarios.
In the following sections, we give the main results of the paper.

3. Conditional Optimal Sets of n-Points and the Quantization Coefficients for Uniform Distributions on Line Segments

Without any loss of generality, we can take the line segment to be a closed interval [a,b], where $0 < a < b < +\infty$. Let P be the uniform distribution defined on the closed interval [a,b]. Then, the probability density function f for P is given by (6).
The following theorem is a consequence of Proposition 1, where c = a and d = b , c = d = a , and c = d = b , respectively.
Theorem 2.
Let P be the uniform distribution on the line segment joining a and b, where $a, b \in \mathbb{R}$ with $a < b$. Then we have the following:
(i) The conditional optimal set of n-points with respect to the conditional set $\beta := \{a, b\}$ is
$\Big\{ a + \frac{j-1}{n-1}(b-a) : 1 \leq j \leq n \Big\}$ with conditional quantization error $V_n = \frac{(b-a)^2}{12(n-1)^2}$;
(ii) The conditional optimal set of n-points with respect to the conditional set $\beta := \{a\}$ is
$\Big\{ a + \frac{2(j-1)(b-a)}{2n-1} : 1 \leq j \leq n \Big\}$ with conditional quantization error $V_n = \frac{(b-a)^2}{3(2n-1)^2}$;
(iii) The conditional optimal set of n-points with respect to the conditional set $\beta := \{b\}$ is
$\Big\{ a + \frac{(2j-1)(b-a)}{2n-1} : 1 \leq j \leq n \Big\}$ with conditional quantization error $V_n = \frac{(b-a)^2}{3(2n-1)^2}$.
Theorem 3.
Let P be the uniform distribution on the line segment joining a and b, where $a, b \in \mathbb{R}$ with $a < b$. Then, the conditional quantization coefficients for P with respect to the conditional sets $\{a, b\}$, $\{a\}$, and $\{b\}$ exist as finite positive numbers, and each equals $\frac{(b-a)^2}{12}$.
Proof. 
By Theorem 2 (i), we obtain the nth conditional quantization error for the uniform distribution P with respect to the conditional set $\beta := \{a, b\}$ as
$V_n = \frac{(b-a)^2}{12(n-1)^2}.$
Then,
$V_\infty = \lim_{n\to\infty} V_n = 0, \quad \text{yielding} \quad \lim_{n\to\infty} n^2 (V_n - V_\infty) = \frac{(b-a)^2}{12}.$
Similarly, if the conditional set is $\beta := \{a\}$ or $\beta := \{b\}$, by (ii) and (iii) in Theorem 2, we obtain $\lim_{n\to\infty} n^2 (V_n - V_\infty) = \frac{(b-a)^2}{12}$. □
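As a numerical illustration of this limit (ours, not part of the proof), one can tabulate $n^2 V_n$ for the closed-form error sequences of Theorem 2 and watch both expressions approach $(b-a)^2/12$:

a, b = 0.0, 2.0                              # any line segment; predicted coefficient (b - a)**2 / 12
for n in (10, 100, 1000, 10000):
    v_ab = (b - a)**2 / (12 * (n - 1)**2)    # conditional set {a, b}
    v_a  = (b - a)**2 / (3 * (2*n - 1)**2)   # conditional set {a} (or {b})
    print(n, n**2 * v_ab, n**2 * v_a)
print("predicted coefficient:", (b - a)**2 / 12)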
Remark 3.
By Theorems 1 and 3, we see that the quantization coefficient for the uniform distribution P on a line segment depends on the length of the line segment, and does not depend on the conditional sets.

4. Optimal Sets of n-Points and Quantization Coefficients for Uniform Distributions on Circles: Conditional Constrained and Unconstrained Scenarios

In this section, we have two subsections. In the first subsection, we calculate the conditional constrained optimal sets of n-points and the nth conditional constrained quantization errors, the conditional constrained quantization dimension, and the conditional constrained quantization coefficient for a uniform distribution P defined on a circle of radius r with respect to a given conditional set and a constraint. In the second subsection, for the same probability distribution, we investigate the optimal sets of n-means and the nth quantization errors, and the quantization coefficient in the unconstrained scenario.
Let L be the circle of radius r. Without any loss of generality, we can take the equation of the circle as $x_1^2 + x_2^2 = r^2$, i.e., the parametric representation of the circle is given by $L := \{(x_1, x_2) : x_1 = r\cos\theta, \ x_2 = r\sin\theta \text{ for } 0 \leq \theta \leq 2\pi\}$. Notice that any point on the circle can be written as $(r\cos\theta, r\sin\theta)$, which will be identified with θ, where $0 \leq \theta \leq 2\pi$. Let the positive direction of the $x_1$-axis cut the circle at the point A, i.e., A is represented by the parametric value $\theta = 0$. Let s be the distance of a point on L along the arc starting from the point A in the counterclockwise direction. Then,
$ds = \sqrt{\Big(\frac{dx_1}{d\theta}\Big)^2 + \Big(\frac{dx_2}{d\theta}\Big)^2}\, d\theta = r\, d\theta,$
where d stands for differential. Then, the probability density function (pdf) $f(x_1, x_2)$ for P is given by
$f(x_1, x_2) = \begin{cases} \frac{1}{2\pi r} & \text{if } (x_1, x_2) \in L, \\ 0 & \text{otherwise}. \end{cases}$
Thus, we have $dP(s) = P(ds) = f(x_1, x_2)\, ds = \frac{1}{2\pi}\, d\theta$. Moreover, we know that if $\hat\theta$ radians is the central angle subtended by an arc S of the circle, then the length of S is $r\hat\theta$, and
$P(S) = \int_S dP(s) = \frac{1}{2\pi}\int_S d\theta = \frac{\hat\theta}{2\pi}.$

4.1. Conditional Quantization in Constrained Scenario

In this subsection, to investigate the conditional quantization in a constrained scenario for the uniform distribution P on the circle L, we take the circle L as the constraint and the set $\{(r, 0)\}$ as the conditional set. Let us define a function
$T : L \to [0, 2\pi r] \quad \text{such that} \quad T(\theta) := T((r\cos\theta, r\sin\theta)) = r\theta,$
where $0 \leq \theta \leq 2\pi$. Then, notice that $T : L \setminus \{(r, 0)\} \to (0, 2\pi r)$ is a bijective function. Let Q be the image measure of P under the function T, i.e., $Q = T_{*}P$, so that for any Borel subset $A \subseteq [0, 2\pi r]$, we have
$Q(A) = T_{*}P(A) = P(T^{-1}(A)).$
Lemma 1.
The image measure Q is a uniform distribution on $[0, 2\pi r]$.
Proof. 
Since P is a uniform distribution on L, we can assume that P is also a uniform distribution on $L \setminus \{(r, 0)\}$, as deleting (or adding) a finite number of points from (or to) the support of a continuous probability measure does not change the distribution. Take any $[c, d] \subseteq (0, 2\pi r)$, where $0 < c < d < 2\pi r$. Since T is a bijection, there exist $\theta_1, \theta_2$, where $0 < \theta_1 < \theta_2 < 2\pi$, such that $T(\theta_1) = r\theta_1 = c$ and $T(\theta_2) = r\theta_2 = d$. Then,
$Q([c,d]) = P(T^{-1}([c,d])) = P(\{(r\cos\theta, r\sin\theta) : \theta_1 \leq \theta \leq \theta_2\}) = \frac{\theta_2 - \theta_1}{2\pi} = \frac{r\theta_2 - r\theta_1}{2\pi r} = \frac{d-c}{2\pi r}.$
Notice that $Q([c,d]) = \lambda([c,d])$, where λ is the normalized Lebesgue measure on $(0, 2\pi r)$. Hence, we can conclude that Q is a uniform distribution on $(0, 2\pi r)$, i.e., Q is a uniform distribution on $[0, 2\pi r]$. □
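Lemma 1 can also be seen empirically (our own illustration, not part of the proof): sample P uniformly on L, push the samples through T, and compare a few empirical quantiles with those of the uniform distribution on $[0, 2\pi r]$:

import numpy as np

rng = np.random.default_rng(0)
r = 2.0
theta = rng.uniform(0.0, 2.0*np.pi, size=1_000_000)   # sample of P on L, parametrized by theta
s = r*theta                                            # image sample under T(theta) = r*theta
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    # empirical quantile of T(P) versus the quantile q * 2*pi*r of the uniform law on [0, 2*pi*r]
    print(q, np.quantile(s, q), q*2.0*np.pi*r)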
Notation 1.
For any two elements $c, d \in [0, 2\pi r]$, by $\ell(\{c, d\})$ it is meant the distance between the two elements c, d, i.e., $\ell(\{c, d\}) = d - c$. Similarly, for any two elements $\theta_1, \theta_2 \in L$ with $\theta_1 < \theta_2$, by $\ell(\{\theta_1, \theta_2\})$ it is meant the arc distance between the two elements $\theta_1$ and $\theta_2$, i.e., the length of the arc on L subtended by the angle $\theta_2 - \theta_1$, i.e., $\ell(\{\theta_1, \theta_2\}) = r(\theta_2 - \theta_1)$.
Lemma 2.
The function $T : L \setminus \{(r, 0)\} \to (0, 2\pi r)$ preserves the distance.
Proof. 
Take any $c, d \in (0, 2\pi r)$ such that $0 < c < d < 2\pi r$. The lemma will be proved if we can show that $\ell(\{c, d\}) = \ell(T^{-1}(\{c, d\}))$. Since $T : L \setminus \{(r, 0)\} \to (0, 2\pi r)$ is a bijection, there exist $\theta_1, \theta_2 \in L \setminus \{(r, 0)\}$ such that $T(\theta_1) = r\theta_1 = c$ and $T(\theta_2) = r\theta_2 = d$. Then,
$\ell(T^{-1}(\{c, d\})) = \ell(\{\theta_1, \theta_2\}) = r(\theta_2 - \theta_1) = d - c = \ell(\{c, d\}).$
Thus, the lemma follows. □
The following lemma is a consequence of Proposition 1, where a = c = 0 and b = d = 2 π r .
Lemma 3.
The conditional unconstrained optimal set of n-points for the uniform distribution Q with respect to the conditional set $\{0, 2\pi r\}$ is given by $\Big\{ \frac{(j-1)\,2\pi r}{n-1} : 1 \leq j \leq n \Big\}$ with conditional unconstrained quantization error $V_n(Q) = \frac{\pi^2 r^2}{3(n-1)^2}$.
Remark 4.
Let $\{a_1, a_2, \ldots, a_n\}$ be a conditional unconstrained optimal set of n-points for Q with respect to the conditional set $\{0, 2\pi r\}$, where $a_1 = 0$ and $a_n = 2\pi r$. Since both P and Q are uniform distributions, Q is the image measure of P under the function T, and $T : L \setminus \{(r, 0)\} \to (0, 2\pi r)$ preserves the distance, we can say that the set $T^{-1}(\{a_1, a_2, \ldots, a_n\})$, i.e., the set $\{T^{-1}(a_j) : 1 \leq j \leq n-1\}$, forms a conditional constrained optimal set of $(n-1)$-points for P with respect to the conditional set $\{(r, 0)\}$ and the constraint L, as $T^{-1}(0) = T^{-1}(2\pi r) = (r, 0)$.
Let us now prove the following theorems, which give the main results in this subsection.
Theorem 4.
Let P be the uniform distribution on the circle of radius r with center (0, 0). Then, the set $\big\{ \big( r\cos\frac{2(j-1)\pi}{n}, \, r\sin\frac{2(j-1)\pi}{n} \big) : 1 \leq j \leq n \big\}$ forms a conditional constrained optimal set of n-points with respect to the conditional set $\{(r, 0)\}$ and the constraint L with conditional constrained quantization error
$V_n = 2r^2 \Big( 1 - \frac{n}{\pi} \sin\frac{\pi}{n} \Big).$
Proof. 
By Lemma 3, we know that the set $\big\{ \frac{(j-1)\,2\pi r}{n-1} : 1 \leq j \leq n \big\}$ forms a conditional unconstrained optimal set of n-points for Q with respect to the conditional set $\{0, 2\pi r\}$, where $n \geq 2$. Hence, by Remark 4, the set $\big\{ T^{-1}\big(\frac{(j-1)\,2\pi r}{n-1}\big) : 1 \leq j \leq n-1 \big\}$ forms a conditional constrained optimal set of $(n-1)$-points for P with respect to the conditional set $\{(r, 0)\}$ and the constraint L, where $n - 1 \geq 1$. Now, notice that
$\big\{ T^{-1}\big(\tfrac{(j-1)\,2\pi r}{n-1}\big) : 1 \leq j \leq n-1 \big\} = \big\{ \tfrac{(j-1)\,2\pi}{n-1} : 1 \leq j \leq n-1 \big\} = \big\{ \big( r\cos\tfrac{(j-1)\,2\pi}{n-1}, \, r\sin\tfrac{(j-1)\,2\pi}{n-1} \big) : 1 \leq j \leq n-1 \big\},$
where a point on L is identified with its parametric angle θ. Hence, replacing n by $n + 1$, we deduce that the set $\big\{ \big( r\cos\frac{2(j-1)\pi}{n}, \, r\sin\frac{2(j-1)\pi}{n} \big) : 1 \leq j \leq n \big\}$ forms a conditional constrained optimal set of n-points with respect to the conditional set $\{(r, 0)\}$ and the constraint L. Due to rotational symmetry, we obtain the conditional constrained quantization error as
$V_n = n \, (\text{distortion error contributed by the element } (r, 0)) = \frac{n}{2\pi} \int_{-\pi/n}^{\pi/n} \rho((r\cos\theta, r\sin\theta), (r, 0)) \, d\theta = 2r^2 \Big( 1 - \frac{n}{\pi}\sin\frac{\pi}{n} \Big).$ □
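As a numerical check of Theorem 4 (our own sketch, with arbitrarily chosen r and n), one can place the n stated points on the circle and integrate the squared distance to the nearest one over θ:

import numpy as np

r, n = 1.5, 7
angles = 2.0*np.pi*np.arange(n)/n                                  # the angles 2*(j-1)*pi/n
pts = np.column_stack((r*np.cos(angles), r*np.sin(angles)))        # candidate optimal set on L

theta = np.linspace(0.0, 2.0*np.pi, 200_000)
circle = np.column_stack((r*np.cos(theta), r*np.sin(theta)))
# rho is the squared Euclidean distance; dP = d(theta)/(2*pi), so the mean over theta is V_n
d2 = np.min(((circle[:, None, :] - pts[None, :, :])**2).sum(axis=2), axis=1)
print(d2.mean(), 2*r**2*(1 - (n/np.pi)*np.sin(np.pi/n)))           # the two values agree closely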
Theorem 5.
Let P be the uniform distribution on the circle of radius r with center (0, 0). Then, with respect to the conditional set $\{(r, 0)\}$ and the constraint L, the conditional constrained quantization dimension D(P) exists and equals one, and the conditional constrained quantization coefficient for P exists as a finite positive number and equals $\frac{\pi^2 r^2}{3}$, i.e., $\lim_{n\to\infty} n^2 (V_n - V_\infty) = \frac{\pi^2 r^2}{3}$.
Proof. 
By Theorem 4, we obtain the nth conditional constrained quantization error for the uniform distribution P with respect to the conditional set $\{(r, 0)\}$ and the constraint L as $V_n = 2r^2 \big( 1 - \frac{n}{\pi}\sin\frac{\pi}{n} \big)$. Then, $V_\infty = \lim_{n\to\infty} V_n = 0$. Since $\sin\frac{\pi}{n} = \frac{\pi}{n} - \frac{\pi^3}{6n^3} + O(n^{-5})$, we have $V_n = \frac{\pi^2 r^2}{3n^2} + O(n^{-4})$. Hence,
$D(P) = \lim_{n\to\infty} \frac{2\log n}{-\log(V_n - V_\infty)} = 1 \quad \text{and} \quad \lim_{n\to\infty} n^2 (V_n - V_\infty) = \frac{\pi^2 r^2}{3}.$ □
Remark 5.
By Theorem 1, we know that quantization dimension and the quantization coefficient in constrained and unconstrained cases do not depend on the conditional set. Thus, by Theorem 5, we see that constrained quantization dimension of the uniform distribution P with respect to the constraint L equals one, which is the dimension of the underlying space where the support of the probability measure is defined, and does not depend on the radius r of the circle. This fact is not true, in general, in constrained quantization; for example, one can see [8,11]. However, we see that the constrained quantization coefficient for the uniform distribution P with respect to the constraint L depends on the radius r of the circle.

4.2. Quantization in Unconstrained Scenario

In this subsection, we investigate the optimal sets of n-means, the nth quantization errors, and the quantization coefficient for the uniform distribution P when there is no constraint and no conditional set. The following theorem is a generalized version of a similar result that appears in [22].
Theorem 6.
Let $\alpha_n$ be an optimal set of n-means for the uniform distribution P on the circle $x_1^2 + x_2^2 = r^2$ for $n \in \mathbb{N}$. Then,
$\alpha_n := \Big\{ \frac{nr}{\pi} \Big( \sin\big(\tfrac{\pi}{n}\big) \cos\big(\tfrac{(2j-1)\pi}{n}\big), \ \sin\big(\tfrac{\pi}{n}\big) \sin\big(\tfrac{(2j-1)\pi}{n}\big) \Big) : j = 1, 2, \ldots, n \Big\}$
forms an optimal set of n-means, and the corresponding quantization error is given by $r^2 \big( 1 - \frac{n^2}{\pi^2} \sin^2\frac{\pi}{n} \big)$.
Proof. 
Let $\alpha_n := \{a_1, a_2, \ldots, a_n\}$ be an optimal set of n-means for P. Let the boundaries of the Voronoi regions of $a_k$ intersect L at the points given by the parameters $\theta_{k-1}$ and $\theta_k$ such that $\theta_{k-1} < \theta_k$ for $1 \leq k \leq n$. Without any loss of generality, we can assume that $\theta_0 = 0$ and $\theta_n = 2\pi$. Since P is a uniform distribution and the circle L is rotationally symmetric, without going into the details of the calculations, we see that
$\theta_1 - \theta_0 = \theta_2 - \theta_1 = \theta_3 - \theta_2 = \cdots = \theta_n - \theta_{n-1} = \frac{\theta_n - \theta_0}{n} = \frac{2\pi}{n},$
implying $\theta_k = \frac{2\pi k}{n}$ for $0 \leq k \leq n$. It is well known that in unconstrained quantization, the elements in an optimal set are the conditional expectations (centroids) of their own Voronoi regions. Hence, for $1 \leq k \leq n$, we have
$a_k = \frac{2\pi}{\theta_k - \theta_{k-1}} \int_{\theta_{k-1}}^{\theta_k} \frac{1}{2\pi} (r\cos\theta, r\sin\theta) \, d\theta = \frac{r}{\theta_k - \theta_{k-1}} \big( \sin\theta_k - \sin\theta_{k-1}, \ \cos\theta_{k-1} - \cos\theta_k \big),$
yielding
$a_k = \frac{nr}{\pi} \Big( \sin\big(\tfrac{\pi}{n}\big) \cos\big(\tfrac{(2k-1)\pi}{n}\big), \ \sin\big(\tfrac{\pi}{n}\big) \sin\big(\tfrac{(2k-1)\pi}{n}\big) \Big).$
Let $V_n$ be the nth quantization error. Due to symmetry, the distortion errors contributed by the $a_k$ in their own Voronoi regions are equal for all $1 \leq k \leq n$. Again, notice that $\theta_0 = 0$, $\theta_1 = \theta_0 + \frac{2\pi}{n} = \frac{2\pi}{n}$, and $a_1 = \frac{r}{\theta_1} \big( \sin\theta_1, \ 1 - \cos\theta_1 \big)$. Hence,
$V_n = \frac{n}{2\pi} \int_{\theta_0}^{\theta_1} \rho((r\cos\theta, r\sin\theta), a_1) \, d\theta = r^2 \Big( 1 - \frac{n^2}{\pi^2} \sin^2\frac{\pi}{n} \Big).$ □
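A similar numerical check for Theorem 6 (ours, for illustration only; r and n chosen arbitrarily): recompute the centroid of the first Voronoi arc and the resulting distortion, and compare both with the closed forms.

import numpy as np

r, n = 1.5, 7
theta = np.linspace(0.0, 2.0*np.pi/n, 200_000)         # the arc [theta_0, theta_1] = [0, 2*pi/n]

# Centroid of the arc under the conditional uniform measure.
a1_numeric = np.array([(r*np.cos(theta)).mean(), (r*np.sin(theta)).mean()])
a1_closed  = (n*r/np.pi)*np.sin(np.pi/n)*np.array([np.cos(np.pi/n), np.sin(np.pi/n)])
print(a1_numeric, a1_closed)

# V_n = (n / (2*pi)) * integral over the arc of rho(point, a_1); this reduces to the
# mean of the squared distance over the arc because the arc has length 2*pi/n in theta.
d2 = (r*np.cos(theta) - a1_closed[0])**2 + (r*np.sin(theta) - a1_closed[1])**2
print(d2.mean(), r**2*(1 - (n**2/np.pi**2)*np.sin(np.pi/n)**2))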
By Theorem 6, we have $V_n = r^2 \big( 1 - \frac{n^2}{\pi^2}\sin^2\frac{\pi}{n} \big)$, and hence $\lim_{n\to\infty} n^2 V_n = \frac{\pi^2 r^2}{3}$, which motivates us to give the following theorem.
Theorem 7.
Let P be the uniform distribution on the circle of radius r with center (0, 0). Then, the quantization coefficient for P exists as a finite positive number and equals $\frac{\pi^2 r^2}{3}$.
Remark 6.
Theorem 7 implies that the quantization coefficient for a uniform distribution on a circle of radius r, though it exists as a finite positive number, depends on the radius r of the circle.

5. Conditional Optimal Sets and the Quantization Coefficients for the Uniform Distributions on the Boundaries of the Regular Polygons

In this section, for a uniform distribution defined on the boundary of a regular m-sided polygon, we calculate the conditional optimal sets of n-points and the nth conditional quantization errors, and the conditional quantization coefficient, taking the conditional set as the set of all vertices of the polygon.
Let P be the uniform distribution defined on the boundary L of a regular m-sided polygon $A_1 A_2 \cdots A_m$ for some $m \geq 3$. Without any loss of generality, we can assume that the polygon is inscribed in the circle $x^2 + y^2 = r^2$, which has center O(0, 0) and radius r, with the Cartesian coordinates of the vertex $A_1$ being (r, 0). Let θ be the central angle subtended by each side of the polygon, and let $\theta_j$ be the polar angles of the vertices $A_j$. Then, we have $\theta = \frac{2\pi}{m}$ and $\theta_j = \frac{(j-1)2\pi}{m}$, so the Cartesian coordinates of the vertices $A_j$ are given by $(r\cos\theta_j, r\sin\theta_j)$. Hence, if ℓ is the length of each of the sides $A_j A_{j+1}$ for $1 \leq j \leq m$, where the vertex $A_{m+1}$ is identified with the vertex $A_1$, then we have
$\ell = \text{length of } A_j A_{j+1} = \sqrt{\rho((r\cos\theta_j, r\sin\theta_j), (r\cos\theta_{j+1}, r\sin\theta_{j+1}))} = 2r\sin\frac{\pi}{m}.$
The probability density function (pdf) f for the uniform distribution P is given by $f(x, y) = \frac{1}{m\ell}$ for all $(x, y) \in A_1 A_2 \cdots A_m$, and zero otherwise. Moreover, we can write
$L = \bigcup_{j=1}^{m} L_j, \quad \text{where } L_j \text{ represents the side } A_j A_{j+1} \text{ for } 1 \leq j \leq m.$
Notice that
$A_j A_{j+1} = \{ (1-t)(r\cos\theta_j, r\sin\theta_j) + t\,(r\cos\theta_{j+1}, r\sin\theta_{j+1}) : 0 \leq t \leq 1 \}$
for $1 \leq j \leq m$. Write
$a_j := -\sec\frac{\pi}{m} \Big( j\sin\frac{2\pi(j-1)}{m} - (j-1)\sin\frac{2\pi j}{m} \Big),$ (7)
$b_j := -2\sin\frac{\pi}{m}\,\csc\frac{2\pi}{m} \Big( (j-1)\cos\frac{2\pi j}{m} - j\cos\frac{2\pi(j-1)}{m} \Big),$ (8)
$c_j := 2(j-1)\,r\sin\frac{\pi}{m}.$ (9)
Let us consider the affine transformation
$T : L \to \big[0, \, 2mr\sin\tfrac{\pi}{m}\big] \quad \text{such that} \quad T(x, y) = a_j x + b_j y \ \text{ if } (x, y) \in A_j A_{j+1},$
where $a_j$ and $b_j$ are given by (7) and (8) for all $1 \leq j \leq m$. Then, notice that $T : L \setminus \{(r, 0)\} \to \big(0, \, 2mr\sin\tfrac{\pi}{m}\big)$ is a bijective function. Let Q be the image measure of P under the function T, i.e., $Q = T_{*}P$, so that for any Borel subset $A \subseteq \big[0, \, 2mr\sin\tfrac{\pi}{m}\big]$, we have
$Q(A) = T_{*}P(A) = P(T^{-1}(A)).$
By the distance between any two elements in $\big[0, \, 2mr\sin\tfrac{\pi}{m}\big]$, it is meant the Euclidean distance between the two elements. On the other hand, by the distance between any two elements on L, it is meant the distance between the two elements measured along the polygonal boundary L in the counterclockwise direction. Let $T_j$ be the restriction of the mapping T to the set $A_j A_{j+1}$, i.e., $T_j = T|_{A_j A_{j+1}}$ for $1 \leq j \leq m$. Notice that each $T_j$ is a bijective function.
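The role of the coefficients in (7)–(9) is easiest to see numerically: on each side, T is the linear map sending the vertex $A_j$ to $c_j$ and $A_{j+1}$ to $c_{j+1}$, so it unrolls the boundary onto the interval $\big[0, 2mr\sin\frac{\pi}{m}\big]$. A small check (our own illustration, with an arbitrary choice of m and r):

import numpy as np

m, r = 5, 2.0
A = lambda j: np.array([r*np.cos(2*np.pi*(j - 1)/m), r*np.sin(2*np.pi*(j - 1)/m)])  # vertex A_j
c = lambda j: 2.0*(j - 1)*r*np.sin(np.pi/m)                                          # c_j from (9)

def coeffs(j):
    # The coefficients a_j and b_j from (7) and (8).
    a_j = -(1.0/np.cos(np.pi/m))*(j*np.sin(2*np.pi*(j - 1)/m) - (j - 1)*np.sin(2*np.pi*j/m))
    b_j = -2.0*np.sin(np.pi/m)/np.sin(2*np.pi/m)*((j - 1)*np.cos(2*np.pi*j/m) - j*np.cos(2*np.pi*(j - 1)/m))
    return a_j, b_j

for j in range(1, m + 1):
    a_j, b_j = coeffs(j)
    print(j, np.isclose(a_j*A(j)[0] + b_j*A(j)[1], c(j)),              # T(A_j)     == c_j
             np.isclose(a_j*A(j + 1)[0] + b_j*A(j + 1)[1], c(j + 1)))  # T(A_{j+1}) == c_{j+1}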
Lemma 4.
The function $T : L \setminus \{(r, 0)\} \to \big(0, \, 2mr\sin\tfrac{\pi}{m}\big)$ preserves the distance.
Proof. 
Notice that
$\big[0, \, 2mr\sin\tfrac{\pi}{m}\big] = \bigcup_{j=1}^{m} [c_j, c_{j+1}] \quad \text{and} \quad T(A_j A_{j+1}) = T_j(A_j A_{j+1}) = [c_j, c_{j+1}],$
where the $c_j$ are given by (9) for all $1 \leq j \leq m+1$. Since the length of $A_j A_{j+1}$ equals the length of the closed interval $[c_j, c_{j+1}]$, and $T_j$ is a bijection, we can say that $T : L \setminus \{(r, 0)\} \to \big(0, \, 2mr\sin\tfrac{\pi}{m}\big)$ preserves the distance. Thus, the lemma follows. □
The following lemma, which is similar to Lemma 1, also holds here.
Lemma 5.
The image measure Q is a uniform distribution on $\big[0, \, 2mr\sin\tfrac{\pi}{m}\big]$.
Remark 7.
Let $\{a_1, a_2, \ldots, a_n\}$ be a conditional unconstrained optimal set of n-points for Q with respect to the conditional set $\{c_j : 1 \leq j \leq m+1\}$ such that $a_1 < a_2 < \cdots < a_n$. Then, by the definition of a conditional set, we have $n \geq m+1$. Moreover, notice that $a_1 = c_1 = 0$ and $a_n = c_{m+1} = 2mr\sin\frac{\pi}{m}$. Since both P and Q are uniform distributions, Q is the image measure of P under the function T, and $T : L \setminus \{(r, 0)\} \to \big(0, \, 2mr\sin\tfrac{\pi}{m}\big)$ preserves the distance, we can say that the set $T^{-1}(\{a_1, a_2, \ldots, a_n\})$, i.e., the set $\{T^{-1}(a_j) : 1 \leq j \leq n-1\}$, forms a conditional unconstrained optimal set of $(n-1)$-points for P with respect to the conditional set $\{T^{-1}(c_j) : 1 \leq j \leq m\}$, i.e., with respect to the conditional set $\{A_j : 1 \leq j \leq m\}$.
Lemma 6.
Let $\gamma_n$ be a conditional optimal set of n-points for the uniform distribution Q with respect to the conditional set $\{c_j : 1 \leq j \leq m+1\}$ for any $n \geq m+1$. Let $n_j = \operatorname{card}(\gamma_n \cap [c_j, c_{j+1}])$ for $1 \leq j \leq m$. Then, $n_j \geq 2$, $n_1 + n_2 + \cdots + n_m = n + m - 1$, and $|n_i - n_j| = 0$ or 1 for all $1 \leq i, j \leq m$.
Proof. 
Let $\gamma_n$ be a conditional optimal set of n-points and let the $n_j$ be the positive integers as defined in the hypothesis. Notice that each of the sets $\gamma_n \cap [c_j, c_{j+1}]$ always contains the end elements $c_j, c_{j+1}$ for $1 \leq j \leq m$, where $n \geq m+1$. Moreover, except for the two elements $c_1$ and $c_{m+1}$, all the end elements $c_j$ are counted twice. Hence, $n_j \geq 2$ and $n_1 + n_2 + \cdots + n_m = n + (m + 1 - 2) = n + m - 1$. Since Q is a uniform distribution and the lengths of the intervals $[c_j, c_{j+1}]$ for $1 \leq j \leq m$ are all equal, the proof of $|n_i - n_j| = 0$ or 1 for all $1 \leq i, j \leq m$ is routine. □
Let us now give the following proposition.
Proposition 2.
Let $\gamma_n$ be a conditional optimal set of n-points and $V_n(Q)$ be the nth conditional quantization error for the uniform distribution Q with respect to the conditional set $\{c_j : 1 \leq j \leq m+1\}$ for any $n \geq m+1$. Let $n = mk + 1 + q$, where $0 \leq q < m$. Then, if $q = 0$, we have
$\gamma_n = \bigcup_{j=1}^{m} \Big\{ c_j + \frac{2(i-1)r\sin\frac{\pi}{m}}{k} : 1 \leq i \leq k+1 \Big\} \quad \text{with} \quad V_n(Q) = \frac{r^2\sin^2\frac{\pi}{m}}{3k^2}.$ (10)
On the other hand, if $0 < q < m$, then there are $\binom{m}{q}$ possible sets $\gamma_n$, and one such set is given by
$\gamma_n = \bigcup_{j=1}^{q} \Big\{ c_j + \frac{2(i-1)r\sin\frac{\pi}{m}}{k+1} : 1 \leq i \leq k+2 \Big\} \cup \bigcup_{j=q+1}^{m} \Big\{ c_j + \frac{2(i-1)r\sin\frac{\pi}{m}}{k} : 1 \leq i \leq k+1 \Big\} \quad \text{with} \quad V_n(Q) = \frac{r^2 q \sin^2\frac{\pi}{m}}{3m(k+1)^2} + \frac{r^2 (m-q) \sin^2\frac{\pi}{m}}{3mk^2}.$ (11)
Proof. 
First assume that $q = 0$; then we have $n = mk + 1$, i.e., $\gamma_n$ contains $k - 1$ elements from each of the intervals $[c_j, c_{j+1}]$ in addition to the boundary elements $c_j$ and $c_{j+1}$. Hence, if $n_j = \operatorname{card}(\gamma_n \cap [c_j, c_{j+1}])$, we have $n_j = k - 1 + 2 = k + 1$. Hence, by (i) of Theorem 2, we have
$\gamma_n = \bigcup_{j=1}^{m} \Big\{ c_j + \frac{i-1}{k}(c_{j+1} - c_j) : 1 \leq i \leq k+1 \Big\} = \bigcup_{j=1}^{m} \Big\{ c_j + \frac{2(i-1)r\sin\frac{\pi}{m}}{k} : 1 \leq i \leq k+1 \Big\}$
with
$V_n = \sum_{j=1}^{m} \frac{(c_{j+1} - c_j)^2}{12mk^2} = \sum_{j=1}^{m} \frac{r^2\sin^2\frac{\pi}{m}}{3mk^2} = \frac{r^2\sin^2\frac{\pi}{m}}{3k^2}.$
On the other hand, if $0 < q < m$, then due to Lemma 6, we can assume that $\gamma_n$ contains k elements from each of the first q intervals in addition to the boundary elements, and $\gamma_n$ contains $k - 1$ elements from each of the remaining $m - q$ intervals in addition to the boundary elements, implying $n_1 = n_2 = \cdots = n_q = k + 2$ and $n_{q+1} = n_{q+2} = \cdots = n_m = k + 1$. Hence, the expressions for $\gamma_n$ and the corresponding nth conditional quantization errors are obtained by (i) of Theorem 2 as
$\gamma_n = \bigcup_{j=1}^{q} \Big\{ c_j + \frac{2(i-1)r\sin\frac{\pi}{m}}{k+1} : 1 \leq i \leq k+2 \Big\} \cup \bigcup_{j=q+1}^{m} \Big\{ c_j + \frac{2(i-1)r\sin\frac{\pi}{m}}{k} : 1 \leq i \leq k+1 \Big\}$
with
$V_n = \sum_{j=1}^{q} \frac{(c_{j+1} - c_j)^2}{12m(k+1)^2} + \sum_{j=q+1}^{m} \frac{(c_{j+1} - c_j)^2}{12mk^2} = \sum_{j=1}^{q} \frac{r^2\sin^2\frac{\pi}{m}}{3m(k+1)^2} + \sum_{j=q+1}^{m} \frac{r^2\sin^2\frac{\pi}{m}}{3mk^2},$
yielding
$V_n = \frac{r^2 q \sin^2\frac{\pi}{m}}{3m(k+1)^2} + \frac{r^2 (m-q) \sin^2\frac{\pi}{m}}{3mk^2}.$
Notice that if $n = mk + 1 + q$, the optimal set $\gamma_n$ can be constructed in $\binom{m}{q}$ ways. □
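A direct numerical check of Proposition 2 (ours, illustrative only; the values of m, r, k, q below are arbitrary): build $\gamma_n$ for a concrete configuration, integrate the distortion for the uniform distribution Q on $\big[0, 2mr\sin\frac{\pi}{m}\big]$, and compare with the closed form:

import numpy as np

m, r, k, q = 4, 1.0, 3, 2                  # n = m*k + 1 + q = 15
ell = 2.0*r*np.sin(np.pi/m)                # common length of the intervals [c_j, c_{j+1}]

# k_j equally spaced points per interval (left endpoint included); the right endpoint of
# the last interval is appended once at the end, so card(gamma_n) = m*k + 1 + q = n.
pieces = []
for j in range(1, m + 1):
    kj = k + 1 if j <= q else k            # the first q intervals receive one extra point
    pieces.append((j - 1)*ell + np.arange(kj)*ell/kj)
gamma_n = np.concatenate(pieces + [np.array([m*ell])])

x = np.linspace(0.0, m*ell, 200_000)
numeric = np.min((x[:, None] - gamma_n[None, :])**2, axis=1).mean()
closed  = (r**2*q*np.sin(np.pi/m)**2/(3*m*(k + 1)**2)
           + r**2*(m - q)*np.sin(np.pi/m)**2/(3*m*k**2))
print(len(gamma_n), numeric, closed)       # 15, and two nearly equal error values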
The following two theorems give the main results in this section.
Theorem 8.
Let $\alpha_n$ be a conditional optimal set of n-points and $V_n(P)$ be the nth conditional quantization error for the uniform distribution P with respect to the conditional set $\{A_j : 1 \leq j \leq m\}$ for any $n \geq m$. Let $n = mk + q$, where $0 \leq q < m$. Then, if $q = 0$, we have
$\alpha_n = \bigcup_{j=1}^{m} T_j^{-1}\Big( \Big\{ c_j + \frac{2(i-1)r\sin\frac{\pi}{m}}{k} : 1 \leq i \leq k+1 \Big\} \Big) \quad \text{with} \quad V_n(P) = \frac{r^2\sin^2\frac{\pi}{m}}{3k^2}.$ (12)
On the other hand, if $0 < q < m$, then there are $\binom{m}{q}$ possible sets $\alpha_n$, and one such set is given by
$\alpha_n = \bigcup_{j=1}^{q} T_j^{-1}\Big( \Big\{ c_j + \frac{2(i-1)r\sin\frac{\pi}{m}}{k+1} : 1 \leq i \leq k+2 \Big\} \Big) \cup \bigcup_{j=q+1}^{m} T_j^{-1}\Big( \Big\{ c_j + \frac{2(i-1)r\sin\frac{\pi}{m}}{k} : 1 \leq i \leq k+1 \Big\} \Big) \quad \text{with} \quad V_n(P) = \frac{r^2 q \sin^2\frac{\pi}{m}}{3m(k+1)^2} + \frac{r^2 (m-q) \sin^2\frac{\pi}{m}}{3mk^2}.$ (13)
Proof. 
For $n = mk + 1 + q$, where $0 \leq q < m$, let $\gamma_n$ be a conditional optimal set of n-points for Q, as given by Proposition 2, with respect to the conditional set $\{c_j : 1 \leq j \leq m+1\}$ for $n \geq m+1$. First assume that $q = 0$. Then, $\gamma_n$ is given by (10), and hence, by Remark 7, the set $\alpha_n$ given by (12) forms a conditional optimal set of n-points with respect to the conditional set $\{A_j : 1 \leq j \leq m\}$. Next, assume that $0 < q < m$. Then, $\gamma_n$ is given by (11), and hence, by Remark 7, the set $\alpha_n$ given by (13) forms a conditional optimal set of n-points with respect to the conditional set $\{A_j : 1 \leq j \leq m\}$. If $n = mk + q$, the corresponding optimal set for Q can be constructed in $\binom{m}{q}$ ways, and so can the set $\alpha_n$. Recall that the bijective functions $T_j$ preserve the distance as well as the collinearity of the elements in each interval $[c_j, c_{j+1}]$. Hence, the $(mk+q)$th conditional quantization error with respect to the uniform distribution P remains the same as the $(mk+1+q)$th conditional quantization error with respect to the uniform distribution Q. Thus, the expressions for the quantization errors $V_n(P)$ given by (12) and (13) follow from the expressions given by (10) and (11). □
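The same check can be run directly on the polygon boundary (again our own illustration, with arbitrary m, r, k, q): place the points of $\alpha_n$ on the sides via the parametrization of $A_jA_{j+1}$, sample the boundary uniformly, and compare the sampled distortion with $V_n(P)$ from Theorem 8:

import numpy as np

m, r, k, q = 4, 1.0, 3, 2                                  # n = m*k + q points on the boundary
vertex = lambda j: np.array([r*np.cos(2*np.pi*(j - 1)/m), r*np.sin(2*np.pi*(j - 1)/m)])

def on_side(j, t):
    # Points (1 - t) A_j + t A_{j+1} on the side A_j A_{j+1} for an array t in [0, 1).
    return (1 - t)[:, None]*vertex(j)[None, :] + t[:, None]*vertex(j + 1)[None, :]

alpha, boundary = [], []
for j in range(1, m + 1):
    kj = k + 1 if j <= q else k
    alpha.append(on_side(j, np.arange(kj)/kj))             # left vertex included; the right vertex belongs to the next side
    boundary.append(on_side(j, np.linspace(0.0, 1.0, 50_000, endpoint=False)))
alpha, boundary = np.concatenate(alpha), np.concatenate(boundary)

d2 = np.min(((boundary[:, None, :] - alpha[None, :, :])**2).sum(axis=2), axis=1)
closed = (r**2*q*np.sin(np.pi/m)**2/(3*m*(k + 1)**2)
          + r**2*(m - q)*np.sin(np.pi/m)**2/(3*m*k**2))
print(len(alpha), d2.mean(), closed)                       # len(alpha) = m*k + q = n = 14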
Theorem 9.
Let P be the uniform distribution defined on the boundary of a regular m-sided polygon inscribed in a circle of radius r with center (0, 0). Then, the conditional quantization coefficient for P exists as a finite positive number and equals $\frac{1}{3} m^2 r^2 \sin^2\frac{\pi}{m}$, i.e., $\lim_{n\to\infty} n^2 (V_n(P) - V_\infty(P)) = \frac{1}{3} m^2 r^2 \sin^2\frac{\pi}{m}$.
Proof. 
Let $n \in \mathbb{N}$ be such that $n \geq m$. Then, there exists a unique positive integer k such that $n = mk + q$ for some $0 \leq q < m$. Then, by Theorem 8, we have
$V_n(P) = \frac{r^2 q \sin^2\frac{\pi}{m}}{3m(k+1)^2} + \frac{r^2 (m-q) \sin^2\frac{\pi}{m}}{3mk^2} = \frac{m^2 r^2 \sin^2\frac{\pi}{m}\,\big( m^2 + 2mn - 3mq + n^2 - 4nq + 3q^2 \big)}{3(n-q)^2 (m+n-q)^2}.$
Then, we see that $V_\infty(P) = \lim_{n\to\infty} V_n(P) = 0$. In fact, we have
$\lim_{n\to\infty} n^2 (V_n(P) - V_\infty(P)) = \frac{1}{3} m^2 r^2 \sin^2\frac{\pi}{m}.$ □
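Finally, the limit in this proof can be observed numerically (our own check, with an arbitrary m and r) by evaluating the closed-form $V_n(P)$ along a sequence of n:

import numpy as np

m, r = 6, 1.0
for n in (50, 500, 5000, 50000):
    k, q = divmod(n, m)                    # n = m*k + q with 0 <= q < m
    v_n = (r**2*q*np.sin(np.pi/m)**2/(3*m*(k + 1)**2)
           + r**2*(m - q)*np.sin(np.pi/m)**2/(3*m*k**2))
    print(n, n**2*v_n)
print("predicted:", m**2*r**2*np.sin(np.pi/m)**2/3)        # = 3.0 for m = 6, r = 1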
Remark 8.
By Theorems 1 and 9, the quantization coefficient for the uniform distribution P defined on the boundary of a regular m-sided polygon is $\frac{1}{3} m^2 r^2 \sin^2\frac{\pi}{m}$, which is a finite positive number but not a universal constant: it depends on both m and r, where m is the number of sides of the polygon and r is the radius of the circle in which the polygon is inscribed. Since the side length is $\ell = 2r\sin\frac{\pi}{m}$, the coefficient can also be written as $\frac{m^2\ell^2}{12}$. This leads us to conclude that the quantization coefficient for a uniform distribution defined on the boundary of a regular m-sided polygon depends on both the number of sides of the polygon and the length of the sides.

6. Conclusions and Future Work

In this paper, we studied the conditional quantization theory for uniform distributions supported on various geometric structures, specifically line segments, circles, and boundaries of regular polygons. For each case, we computed the conditional optimal sets of n-points and the corresponding nth conditional quantization errors under both constrained and unconstrained scenarios. We established that while the quantization dimension remains invariant—coinciding with the Euclidean dimension of the underlying space—the quantization coefficient is sensitive to the geometry of the support. Notably, we showed the following:
  • For uniform distributions on line segments, the quantization coefficient depends solely on the length of the segment and is independent of the conditional set;
  • For distributions on circles, the quantization coefficient depends on the radius of the circle;
  • For distributions on boundaries of regular polygons, the quantization coefficient depends on both the radius of the circumscribing circle and the number of polygon sides.
These findings affirm that the quantization coefficient, in contrast to the quantization dimension, reflects finer structural properties of the support, such as size and shape.
Building on the results of this paper, several promising directions for future research can be identified:
  • Extension to Non-Uniform Distributions: Investigating conditional quantization for absolutely continuous, non-uniform distributions (e.g., exponential or beta distributions) on bounded geometric supports remains an open and compelling problem.
  • Quantization on Fractal Supports: Analyzing conditional quantization on self-similar and self-affine fractal sets (e.g., Cantor sets, Koch curves, and Sierpiński gaskets) could further enrich the theory and reveal new structural dependencies.
  • Algorithmic and Computational Aspects: Developing efficient algorithms to numerically compute optimal sets of n-points and corresponding quantization coefficients for complex supports and arbitrary distributions could bridge theory with practical applications.
  • Applications to Information Theory and Signal Processing: Exploring how conditional and constrained quantization strategies can be leveraged in source coding, image compression, and sensor network optimization may lead to impactful applications in engineering.

Author Contributions

Investigation, E.N. and M.P.; Writing—review & editing, M.K.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gersho, A.; Gray, R.M. Vector Quantization and Signal Compression; Kluwer Academic Publishers: Alphen aan den Rijn, The Netherlands, 1992. [Google Scholar]
  2. Gray, R.M.; Neuhoff, D.L. Quantization. IEEE Trans. Inf. Theory 1998, 44, 2325–2383. [Google Scholar] [CrossRef]
  3. Zamir, R. Lattice Coding for Signals and Networks: A Structured Coding Approach to Quantization, Modulation, and Multiuser Information Theory; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  4. Gray, R.M.; Kieffer, J.C.; Linde, Y. Locally optimal block quantizer design. Inf. Control 1980, 45, 178–198. [Google Scholar] [CrossRef]
  5. Gyorgy, A.; Linder, T. On the structure of optimal entropy-constrained scalar quantizers. IEEE Trans. Inf. Theory 2002, 48, 416–427. [Google Scholar] [CrossRef]
  6. Zador, P. Asymptotic quantization error of continuous signals and the quantization dimension. IEEE Trans. Inf. Theory 1982, 28, 139–149. [Google Scholar] [CrossRef]
  7. Abaya, E.F.; Wise, G.L. Some remarks on the existence of optimal quantizers. Stat. Probab. Lett. 1984, 2, 349–351. [Google Scholar] [CrossRef]
  8. Pandey, M.; Roychowdhury, M.K. Constrained quantization for probability distributions. J. Fractal Geom. 2025, 11, 319–341. [Google Scholar] [CrossRef]
  9. Pandey, M.; Roychowdhury, M.K. Conditional constrained and unconstrained quantization for probability distributions. arXiv 2023, arXiv:2312.02965. [Google Scholar]
  10. Hamilton, C.; Nyanney, E.; Pandey, M.; Roychowdhury, M.K. Conditional constrained and unconstrained quantization for uniform distributions on regular polygons. Real Anal. Exch. 2025, 1–45. [Google Scholar] [CrossRef]
  11. Pandey, M.; Roychowdhury, M.K. Constrained quantization for a uniform distribution with respect to a family of constraints. arXiv 2023, arXiv:2309.11498. [Google Scholar]
  12. Biteng, P.; Caguiat, M.; Deb, D.; Roychowdhury, M.K.; Villanueva, B. Constrained quantization for a uniform distribution. Houst. J. Math. 2024, 50, 121–142. [Google Scholar]
  13. Biteng, P.; Caguiat, M.; Dominguez, T.; Roychowdhury, M.K. Conditional quantization for uniform distributions on line segments and regular polygons. Mathematics 2025, 13, 1024. [Google Scholar] [CrossRef]
  14. Du, Q.; Faber, V.; Gunzburger, M. Centroidal Voronoi tessellations: Applications and algorithms. SIAM Rev. 1999, 41, 637–676. [Google Scholar] [CrossRef]
  15. Graf, S.; Luschgy, H. The quantization of the Cantor distribution. Math. Nachrichten 1997, 183, 113–133. [Google Scholar] [CrossRef]
  16. Graf, S.; Luschgy, H. Quantization for Probability Measures with Respect to the Geometric Mean Error. Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 2004; Volume 136, pp. 687–717. [Google Scholar]
  17. Graf, S.; Luschgy, H. Foundations of Quantization for Probability Distributions; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  18. Kesseböhmer, M.; Niemann, A.; Zhu, S. Quantization dimensions of compactly supported probability measures via Rényi dimensions. Trans. Am. Math. Soc. 2023, 376, 4661–4678. [Google Scholar] [CrossRef]
  19. Pollard, D. Quantization and the method of k-means. IEEE Trans. Inf. Theory 1982, 28, 199–205. [Google Scholar] [CrossRef]
  20. Pötzelberger, K. The Quantization Dimension of Distributions. In Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 2001; Volume 131, pp. 507–519. [Google Scholar]
  21. Bucklew, J.; Wise, G. Multidimensional asymptotic quantization theory with rth power distortion measures. IEEE Trans. Inf. Theory 1982, 28, 239–247. [Google Scholar] [CrossRef]
  22. Rosenblatt, J.; Roychowdhury, M.K. Uniform distributions on curves and quantization. Commun. Korean Math. Soc. 2023, 38, 431–450. [Google Scholar]