Article

Research on the Characteristics of Joint Distribution Based on Minimum Entropy

1 School of Mathematical Sciences, Capital Normal University, Beijing 100048, China
2 School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(6), 972; https://doi.org/10.3390/math13060972
Submission received: 27 January 2025 / Revised: 24 February 2025 / Accepted: 6 March 2025 / Published: 14 March 2025

Abstract

This paper focuses on the extreme-value problem of Shannon entropy for joint distributions with specified marginals, a subject of growing interest. It introduces a theorem showing that the coupling with minimal entropy must be essentially order-preserving, whereas the coupling with maximal entropy aligns with independence. This means that the minimum-entropy coupling in a two-dimensional system can be brought, by exchanging rows and columns of the joint distribution matrix, to an upper triangular discrete joint distribution. Consequently, entropy is interpreted as a measure of system disorder. This manuscript's key academic contribution is in clarifying the physical meaning behind optimal-entropy coupling, where a special ordinal relationship is pinpointed and methodically outlined. Furthermore, it offers a computational approach for order-preserving coupling as a practical illustration.

1. Introduction and Statement of the Result

The concept of entropy was introduced in thermodynamics and statistical mechanics as a measure of uncertainty or disorganization in a physical system [1,2]. In 1877, L. Boltzmann [2] gave the probabilistic interpretation of entropy and found the famous formula $S = \kappa \log W$. Roughly speaking, entropy is the logarithm of the number of ways in which the physical system can be configured. The second law of thermodynamics says that the entropy of a closed system cannot decrease.
To reveal the physics of information, C. Shannon [3] introduced entropy into communication theory. Let $X$ be a discrete random element with alphabet $\mathcal{X}$ and probability mass function $p = \{p(x) = P(X = x) : x \in \mathcal{X}\}$; the entropy of $X$ (or $p$) is defined by

$H(X) = H(p) := -\sum_{x \in \mathcal{X}} p(x) \log p(x).$ (1)
Clearly, $H(X)$, which is called the Shannon entropy, takes its minimum 0 when $X$ is degenerate and takes its maximum $\log |\mathcal{X}|$ when $X$ is uniformly distributed. In this sense, entropy is a measure of the uncertainty of a random element.
In the theory of information, the definition of entropy is extended to a pair of random variables as follows. Let $(X,Y)$ be a two-dimensional random vector in $\mathcal{X} \times \mathcal{Y}$ with a joint distribution $P = \{p(x,y) : x \in \mathcal{X}, y \in \mathcal{Y}\}$; the joint entropy of $(X,Y)$ (or $P$) is defined by

$H(X,Y) = H(P) := -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x,y) \log p(x,y).$ (2)
Another important concept for ( X , Y ) is mutual information, which is a measure of the amount of information that one random variable contains about the other. It is defined by
$I(X,Y) := \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)},$ (3)

where $\{p(x) : x \in \mathcal{X}\}$ and $\{p(y) : y \in \mathcal{Y}\}$ are the marginal distributions of $X$ and $Y$. By definition, one has

$I(X,Y) = H(X) + H(Y) - H(X,Y).$ (4)
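To fix ideas, Identity (4) can be checked numerically. The following Python sketch (an illustration added here, not part of the original derivation; the joint matrix is an arbitrary choice of ours) computes the entropies and the mutual information from a joint distribution matrix:

    import math

    def entropy(probs):
        # Shannon entropy -sum p log p (natural log; 0 log 0 := 0)
        return -sum(v * math.log(v) for v in probs if v > 0)

    P = [[0.2, 0.1],
         [0.1, 0.6]]                    # a joint distribution p(x, y)
    p = [sum(row) for row in P]         # marginal of X (row sums)
    q = [sum(col) for col in zip(*P)]   # marginal of Y (column sums)

    H_XY = entropy(v for row in P for v in row)
    I = sum(P[i][j] * math.log(P[i][j] / (p[i] * q[j]))
            for i in range(2) for j in range(2) if P[i][j] > 0)
    # the identity I(X,Y) = H(X) + H(Y) - H(X,Y)
    assert abs(I - (entropy(p) + entropy(q) - H_XY)) < 1e-12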
Note that in some settings, the maximum of the mutual information (over the input distributions) is called the channel capacity, which plays a key role in information theory through Shannon's famous second theorem, the Channel Coding Theorem [3].
For basic concepts and properties in information theory, readers may refer to [4] and the references therein. For entropy and its developments in various topics of pure mathematics, readers may pay attention to the series of talks given by Xiang-Dong Li [5].
By (4), for the given marginals $\{p(x) : x \in \mathcal{X}\}$ and $\{p(y) : y \in \mathcal{Y}\}$, maximizing $I(X,Y)$ and minimizing $H(X,Y)$ are two sides of the same coin. Inferring an unknown joint distribution of two random variables with given marginals is an old problem in the area of probabilistic inference. As far as we know, the problem goes back at least to Fréchet [6] and Hoeffding [7,8], who studied the question of identifying the extremal joint distributions that maximize (or minimize, respectively) the correlation. For more studies in this area and more applications in pure and applied sciences, readers may refer to [9,10,11,12,13,14], etc.
In this paper, we consider the following setting of the problem described above. For simplicity, suppose $\mathcal{X} = \mathcal{Y} = \{1, 2, \ldots, n\}$, $n \ge 2$, and let $p = (p_1, p_2, \ldots, p_n)$ and $q = (q_1, q_2, \ldots, q_n)$ be two discrete probability distributions on $\mathcal{X}$. Let $X$ and $Y$ be the random variables in $\mathcal{X}$ with distributions $p$ and $q$, respectively; then, we seek a minimum-entropy two-dimensional random vector $(X,Y)$ in $\mathcal{X} \times \mathcal{X}$ with the marginals $p$ and $q$.
One strategy to solve the problem mentioned above is to calculate the exact value of the minimum entropy $H(X,Y)$. Our efforts in this direction will be reported in a forthcoming paper [15]: building on the local optimization lemmas given in Section 2, we obtain an algorithm that calculates the exact value of the minimal joint entropy for any given marginals $p$ and $q$. Note that the computational complexity of our algorithm is $n^2 2^n$.
Another strategy to study the problem is to seek the unknown special structure of a minimum-entropy coupling $(X,Y)$. Clearly, in cases where $X$ and $Y$ are independent, the joint entropy $H(X,Y)$ takes the maximum $H(X) + H(Y)$. That is to say, the independent structure (maybe the most disordered one) of $(X,Y)$ determines the maximum entropy, but what special structure in a coupling $(X,Y)$ will determine the minimum entropy of a two-dimensional random system? The main goal of the present paper is to establish such a structure.
Denote by $\mathcal{P}_n$ the set of all discrete probability distributions on $\mathcal{X} = \{1, 2, \ldots, n\}$. For each $p \in \mathcal{P}_n$, let $F_p$ be the cumulative distribution function, defined by

$F_p(i) := \sum_{k=1}^{i} p_k, \quad 1 \le i \le n.$ (5)
Recall that a permutation $\sigma$ is a bijective map from $\mathcal{X} = \{1, 2, \ldots, n\}$ onto itself. For any $p = (p_1, p_2, \ldots, p_n) \in \mathcal{P}_n$, define $\sigma p := (p_{\sigma(1)}, p_{\sigma(2)}, \ldots, p_{\sigma(n)})$. From Equation (1), one has

$H(p) = H(\sigma p),$ (6)

which holds for any permutation $\sigma$. For a random variable $X$ with distribution $p$, the random variable $\sigma X$ has the distribution $\sigma^{-1} p$, where $\sigma^{-1}$ is the inverse of $\sigma$.
For any $p, q \in \mathcal{P}_n$, denote by $C(p,q)$ the set of all joint distributions $P$ with the marginals $p, q$. For any $P \in C(p,q)$, suppose $(X,Y)$ is distributed according to $P$. For any permutation pair $(\sigma, \pi)$, denote by $P_{(\sigma,\pi)}$ the joint distribution of $(\sigma X, \pi Y)$. Then,

$P_{(\sigma,\pi)} \in C(\sigma^{-1} p, \pi^{-1} q) \quad \text{and} \quad H(P_{(\sigma,\pi)}) = H(P).$ (7)
For any $1 \le k, l \le n$, denote by $E(k,l) = (e_{i,j})_{n \times n}$ the matrix such that $e_{i,i} = 1$ for $i \neq k, l$; $e_{k,l} = e_{l,k} = 1$; and $e_{i,j} = 0$ otherwise. Let $\mathcal{E} := \{E(k,l) : 1 \le k, l \le n\}$. For any $n$-th-order probability matrix $P$, let $P' = E(k,l) \cdot P$ (resp. $P \cdot E(k,l)$); then, $P'$ is the matrix obtained from $P$ by exchanging the positions of its $k$-th and $l$-th rows (resp. columns). Furthermore, for any $P \in C(p,q)$ and any finite sequences $\{E_k \in \mathcal{E} : 1 \le k \le s\}$ and $\{E'_l \in \mathcal{E} : 1 \le l \le t\}$, there exists a unique permutation pair $(\sigma, \pi)$ such that

$P_{(\sigma,\pi)} = E_1 \cdot E_2 \cdots E_s \cdot P \cdot E'_t \cdot E'_{t-1} \cdots E'_2 \cdot E'_1.$ (8)
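As a concrete illustration of (7) and (8), the following Python sketch (the example matrix is an arbitrary choice of ours) exchanges rows and columns of a joint distribution matrix and confirms that the joint entropy is unchanged:

    import math

    def H(M):
        return -sum(v * math.log(v) for row in M for v in row if v > 0)

    def exchange(M, k, l, rows=True):
        # multiply by E(k, l): exchange rows (resp. columns) k and l
        Q = [row[:] for row in M]
        if rows:
            Q[k], Q[l] = Q[l], Q[k]
        else:
            for row in Q:
                row[k], row[l] = row[l], row[k]
        return Q

    P = [[0.05, 0.15, 0.10],
         [0.20, 0.05, 0.05],
         [0.10, 0.10, 0.20]]
    Q = exchange(exchange(P, 0, 2), 1, 2, rows=False)   # one choice of (sigma, pi)
    assert abs(H(P) - H(Q)) < 1e-12                     # joint entropy is invariant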
Definition 1. 
For any p , q P n , suppose ( X , Y ) is distributed according to P C ( p , q ) . We note the following:
1. 
( X , Y ) , or P, is order-preserving, if P ( X Y ) = 1 ;
2. 
( X , Y ) , or P, is essentially order-preserving, if for some permutation pair ( σ , π ) ,
P ( σ X π Y ) = 1 .
$(X,Y)$ being order-preserving is also called "$X$ being stochastically dominated by $Y$" in probability theory. Clearly, it holds if and only if, for any $1 \le j < i \le n$, $P(X = i, Y = j) = p_{i,j} = 0$, i.e., the joint distribution $P = (p_{i,j})$ is upper triangular. For an essentially order-preserving $P \in C(p,q)$, with the permutation pair $(\sigma, \pi)$ given in Definition 1, $P_{(\sigma,\pi)}$ is upper triangular.
For any $p, q \in \mathcal{P}_n$, by Strassen's Theorem [16] on stochastic domination, there exists a coupling $P \in C(p,q)$ such that $P$ is upper triangular if and only if $F_q \le F_p$, where $F_p$ and $F_q$ are defined in (5). Note that in the literature on information theory (see [11,17], etc.), people usually say that "$q$ is majorized by $p$" when $F_q \le F_p$.
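For illustration, the following Python sketch checks the criterion $F_q \le F_p$ and, when it holds, builds one explicit upper triangular coupling by filling each column $j$ from the rows $i \le j$; this construction (ours, added for illustration) only certifies existence and is generally not the minimum-entropy coupling. The marginals below are illustrative assumptions:

    def upper_triangular_coupling(p, q, tol=1e-12):
        """Return an upper triangular P in C(p, q) if F_q <= F_p, else None."""
        n = len(p)
        Fp = Fq = 0.0
        for i in range(n):               # Strassen's criterion, checked pointwise
            Fp, Fq = Fp + p[i], Fq + q[i]
            if Fq > Fp + tol:
                return None              # no coupling with P(X <= Y) = 1 exists
        P = [[0.0] * n for _ in range(n)]
        rem = list(p)                    # row mass not yet assigned
        for j in range(n):               # fill column j from rows i <= j
            need = q[j]
            for i in range(j + 1):
                take = min(need, rem[i])
                P[i][j] += take
                rem[i] -= take
                need -= take
            assert need < tol            # guaranteed by F_q <= F_p
        return P

    p = [0.2, 0.3, 0.5]
    q = [0.1, 0.3, 0.6]                  # here F_q <= F_p, so a coupling exists
    P = upper_triangular_coupling(p, q)
    assert all(P[i][j] == 0 for i in range(3) for j in range(i))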
Now, we turn to the following optimization problem: find $\tilde{P}$ such that

$H(\tilde{P}) = \inf_{P \in C(p,q)} H(P).$ (9)

Note that $C(p,q)$ forms a compact subset of $\mathbb{R}^{n^2}$; the existence of such a $\tilde{P}$ follows from the continuity of the entropy function $H$. Note that in this paper, we also call $\tilde{P}$ the minimum-entropy coupling.
What key structural characteristics of a coupling $(X,Y)$ govern the minimum entropy of the two-dimensional stochastic system in the optimization problem (9)? Specifically, what mathematical properties must the joint distribution of $(X,Y)$ satisfy to achieve the global minimum of the joint entropy? Is there an intrinsic connection between such optimal coupling structures and the order-preserving properties of the variables, as suggested by entropy minimization principles? We state our main result as follows.
Theorem 1. 
Suppose $n \ge 2$ and $p, q \in \mathcal{P}_n$. If $\tilde{P} \in C(p,q)$ solves the optimization problem (9), and $(X,Y)$ is distributed according to $\tilde{P}$, then $(X,Y)$, or $\tilde{P}$, is essentially order-preserving.
Theorem 1 shows that the coupling with the minimal entropy must be essentially order-preserving, whereas the coupling with the maximal entropy aligns with independence. This means that the minimum-entropy coupling in a two-dimensional discrete system must be an upper triangular discrete joint distribution, which can be reached by exchanging the rows and columns of the joint distribution matrix, based on Theorem 3. Consequently, entropy is interpreted as a measure of system disorder.
Another structure of the minimum-entropy coupling, a tree structure in the sense of graph theory, is discovered in [15]. Based on this structure, we develop an algorithm to obtain the exact value of the minimal entropy.
The rest of the paper is arranged as follows. In Section 2.1, we first develop the local optimization lemmas, Lemma 2 and Lemma 3. Then, for any $P \in C(p,q)$, by using the local optimization lemmas, we construct a $P' \in C(p,q)$ such that $H(P') \le H(P)$. In Section 2.2, by Lemma 2, developed in Section 2.1, we prove Theorem 1. In Section 3, for $n = 6$, the independent coupling $P$ of $p, q \in \mathcal{P}_6$ is given as an example for Theorem 3, and we optimize $P$ to an upper triangular $P'$.

2. Local Optimization and Proof of Result

2.1. Local Optimization Lemmas

Suppose $A = (a_{i,j})_{n \times n}$ is a nonnegative matrix (i.e., all its entries are nonnegative) such that $C := \sum_{i=1}^{n} \sum_{j=1}^{n} a_{i,j} > 0$. We generalize the definition of entropy to the nonnegative matrix $A$ as

$H(A) = -h(A) := -\sum_{i=1}^{n} \sum_{j=1}^{n} a_{i,j} \log a_{i,j}.$ (10)

Let $P = C^{-1} A$, a probability matrix; then,

$H(A) = C \, H(P) - C \log C.$ (11)
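Relation (11) is easy to verify numerically; the sketch below (with an arbitrary nonnegative matrix of ours) normalizes $A$ by its total mass $C$ and compares both sides:

    import math

    def H(M):
        # H(A) = -sum a log a, also for nonnegative (non-probability) matrices
        return -sum(v * math.log(v) for row in M for v in row if v > 0)

    A = [[0.6, 0.2],
         [0.4, 0.8]]
    C = sum(v for row in A for v in row)        # total mass, here C = 2
    P = [[v / C for v in row] for row in A]     # normalized probability matrix
    assert abs(H(A) - (C * H(P) - C * math.log(C))) < 1e-12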
For any $c > 0$, define $h_c(x) = x \log x + (c - x) \log(c - x)$ for $x \in [0, c]$ (with the convention $0 \log 0 := 0$). Before the local optimization lemmas, we state the following simple property of the function $h_c$ without proof.
Lemma 1. 
For any closed interval $[a, b] \subseteq [0, c]$, $\max\{h_c(x) : x \in [a,b]\} = h_c(a) \vee h_c(b) = h_c(a) \, I_{\{a \le c-b\}} + h_c(b) \, I_{\{a > c-b\}}$.
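Since Lemma 1 is stated without proof, a quick numerical check may be helpful. The sketch below (ours, with randomized intervals) confirms that the maximum of the convex function $h_c$ over $[a,b]$ is attained at the endpoint selected by the indicator in Lemma 1:

    import math, random

    def h(x, c):
        # h_c(x) = x log x + (c - x) log(c - x), with 0 log 0 := 0
        t = lambda u: u * math.log(u) if u > 0 else 0.0
        return t(x) + t(c - x)

    random.seed(0)
    for _ in range(1000):
        c = random.uniform(0.1, 2.0)
        a, b = sorted(random.uniform(0, c) for _ in range(2))
        grid_max = max(h(a + (b - a) * k / 500, c) for k in range(501))
        endpoint = h(a, c) if a <= c - b else h(b, c)
        assert endpoint >= grid_max - 1e-9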
Lemma 2. 
For any second-order nonnegative matrix $A = (a_{i,j})_{2 \times 2}$, suppose that $a_{1,1} \wedge a_{2,2} \ge a_{1,2} \wedge a_{2,1}$ and denote $b = a_{1,2} \wedge a_{2,1}$. Let $A' = (a'_{i,j})_{2 \times 2}$ be such that $a'_{i,i} = a_{i,i} + b$, $i = 1, 2$, and $a'_{i,j} = a_{i,j} - b$, $i \neq j$. Then, $H(A') \le H(A)$. Furthermore, if $b > 0$, then $H(A) > H(A')$.
Proof. 
Without loss of generality, assume that $a_{1,1} \ge a_{2,1} \ge a_{1,2}$. Note that in this case, one has $b = a_{1,2}$ and $a'_{1,2} = 0$. For any $x \in [0, b]$, define

$A(x) = \begin{pmatrix} a_{1,1} + x & a_{1,2} - x \\ a_{2,1} - x & a_{2,2} + x \end{pmatrix}.$

Let $c_1 = a_{1,1} + a_{2,1}$ and $c_2 = a_{1,2} + a_{2,2}$; then, $h(A(x)) = h_{c_1}(a_{1,1} + x) + h_{c_2}(a_{1,2} - x)$.
By Lemma 1 and the assumption given above, $\max\{h_{c_1}(a_{1,1} + x) : x \in [0,b]\} = h_{c_1}(a_{1,1} + b)$ and $\max\{h_{c_2}(a_{1,2} - x) : x \in [0,b]\} = h_{c_2}(0)$. Thus, $\max\{h(A(x)) : x \in [0,b]\} = h(A(b))$ and $h(A) = h(A(0)) \le h(A(b)) = h(A')$; then, by definition, $H(A') \le H(A)$.
If $b = a_{1,2} > 0$, then $a_{1,1} \ge a_{2,1} \ge a_{1,2} > 0$. By Lemma 1, $h_{c_1}(a_{1,1}) < h_{c_1}(a_{1,1} + a_{1,2})$; this implies that $h(A) < h(A')$ and $H(A) > H(A')$. □
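In computational form, the Lemma 2 move reads as follows (a sketch; the example matrix is ours and satisfies the hypothesis $a_{1,1} \wedge a_{2,2} \ge a_{1,2} \wedge a_{2,1}$ with $b > 0$, so the entropy drop is strict):

    import math

    def H(M):
        return -sum(v * math.log(v) for row in M for v in row if v > 0)

    def lemma2_move(A):
        # add b = min(a12, a21) to the diagonal, subtract it off the diagonal
        (a11, a12), (a21, a22) = A
        b = min(a12, a21)
        return [[a11 + b, a12 - b], [a21 - b, a22 + b]]

    A = [[0.4, 0.1],
         [0.2, 0.3]]            # min diagonal 0.3 >= min off-diagonal 0.1 = b > 0
    B = lemma2_move(A)          # [[0.5, 0.0], [0.1, 0.4]]
    assert H(B) < H(A)          # the entropy drop is strict since b > 0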
Lemma 3. 
For any second-order nonnegative matrix $A = (a_{i,j})_{2 \times 2}$, suppose that $a_{1,1} + a_{1,2} \ge a_{2,1} + a_{2,2}$, $a_{1,1} + a_{2,1} \ge a_{1,2} + a_{2,2}$, and $a_{1,1} + a_{1,2} \ge a_{1,1} + a_{2,1}$. Let $b = a_{1,2} \wedge a_{2,1}$ and define $A'$ as in Lemma 2; then, $H(A') \le H(A)$.
Proof. 
By Relation (11), without loss of generality, assume $A$ to be a probability matrix. Rewrite the probability matrix as $P = (p_{i,j})_{2 \times 2}$, and write $p = (p_1, p_2) = (p, 1-p)$ and $q = (q_1, q_2) = (q, 1-q)$ for its marginals, i.e., $p_{1,1} + p_{1,2} = p$, $p_{2,1} + p_{2,2} = 1-p$, $p_{1,1} + p_{2,1} = q$, and $p_{1,2} + p_{2,2} = 1-q$. By the conditions of the lemma, one has $p \ge 1-p$, $q \ge 1-q$, and $p \ge q \ge \frac{1}{2}$.
For any $x \in [0, 1-p]$, let

$P(x) = \begin{pmatrix} q - x & p - q + x \\ x & 1 - p - x \end{pmatrix}.$

Clearly, $P(x)$ is a joint probability distribution matrix with marginals $p, q$, and $P(p_{2,1}) = P$ and $P(0) = P'$. In particular, when $x$ runs over the interval $[0, 1-p]$, $P(x)$ runs over $C(p,q)$.
Now, $h(P(x)) = h_p(q - x) + h_{1-p}(x)$. It follows immediately from Lemma 1 that $h_{1-p}(x)$ and $h_p(q - x)$ take their maximums simultaneously when $x = 0$; then, $h(P) = h(P(p_{2,1})) \le h(P(0)) = h(P')$ and $H(P') \le H(P)$. □
From the statements of Lemmas 2 and 3, for any second-order nonnegative matrix $A$, one may easily construct a nonnegative matrix $A'$, possessing the same row and column sums as $A$, such that $H(A') \le H(A)$.
As a consequence of Lemma 3, the optimization problem (9) for n = 2 is solved as follows.
Corollary 1. 
For $p, q \in \mathcal{P}_2$ such that $p = (p, 1-p)$, $q = (q, 1-q)$, and $p \ge q \ge \frac{1}{2}$, let

$\tilde{P} = \begin{pmatrix} q & p - q \\ 0 & 1 - p \end{pmatrix};$

then,

$H(\tilde{P}) = \inf_{P \in C(p,q)} H(P) = -\left[ q \log q + (1-p) \log(1-p) + (p-q) \log(p-q) \right].$
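The closed form in Corollary 1 can be cross-checked against a brute-force scan of the one-parameter family $P(x)$ from the proof of Lemma 3 (a sketch; the values $p = 0.7$, $q = 0.6$ are an arbitrary admissible choice of ours):

    import math

    def H(entries):
        return -sum(v * math.log(v) for v in entries if v > 0)

    p, q = 0.7, 0.6                     # an admissible choice: p >= q >= 1/2
    closed_form = H([q, p - q, 1 - p])  # entropy of the coupling in Corollary 1

    # scan the one-parameter family P(x) from the proof of Lemma 3
    grid_min = min(H([q - x, p - q + x, x, 1 - p - x])
                   for k in range(1001) for x in [(1 - p) * k / 1000])
    assert closed_form <= grid_min + 1e-9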
Now, as a direct consequence of Lemmas 2 and 3, we obtain the following local optimization theorem.
Theorem 2. 
Suppose $p, q \in \mathcal{P}_n$, $n \ge 2$, and $P \in C(p,q)$. For any $1 \le i < k \le n$ and $1 \le j < l \le n$, let

$A = \begin{pmatrix} p_{i,j} & p_{i,l} \\ p_{k,j} & p_{k,l} \end{pmatrix}$

be the second-order submatrix of $P$. Let $A'$ be given as in Lemma 2 or Lemma 3, and let $P'$ be the matrix obtained from $P$ by $A'$ taking the place of $A$. Then, $P' \in C(p,q)$ and $H(P') \le H(P)$.
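Algorithmically, Theorem 2 amounts to a single in-place update of four entries. The sketch below (example matrix ours; the selected submatrix satisfies the hypothesis of Lemma 2) applies the move to the submatrix on rows $\{i,k\}$ and columns $\{j,l\}$ and verifies that both marginals are preserved while the entropy does not increase:

    import math

    def H(M):
        return -sum(v * math.log(v) for row in M for v in row if v > 0)

    def local_move(P, i, k, j, l):
        # Lemma 2 move on the submatrix with rows {i, k} and columns {j, l}
        Q = [row[:] for row in P]
        b = min(Q[i][l], Q[k][j])
        Q[i][j] += b; Q[k][l] += b
        Q[i][l] -= b; Q[k][j] -= b
        return Q

    P = [[0.10, 0.05, 0.15],
         [0.20, 0.10, 0.05],
         [0.05, 0.10, 0.20]]
    Q = local_move(P, 0, 1, 0, 1)
    assert all(abs(sum(r1) - sum(r2)) < 1e-12 for r1, r2 in zip(P, Q))
    assert all(abs(sum(c1) - sum(c2)) < 1e-12 for c1, c2 in zip(zip(*P), zip(*Q)))
    assert H(Q) <= H(P) + 1e-12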

2.2. Proof of Theorem 1

In this subsection, we give the proof of Theorem 1. The strategy of the proof is to apply the local optimization Lemma 2 repeatedly. First of all, we have the following lemma.
Lemma 4. 
Let $A = (a_{i,j})_{n \times n}$, $n \ge 2$, be an $n$-th-order nonnegative matrix. Suppose that

$a_{1,1} = \max_{1 \le i,j \le n} a_{i,j}, \qquad \sum_{j=1}^{n} a_{1,j} \ge \sum_{i=1}^{n} a_{i,1}$ (13)

$\Big(\text{resp. } a_{n,n} = \max_{1 \le i,j \le n} a_{i,j}, \qquad \sum_{j=1}^{n} a_{n,j} \le \sum_{i=1}^{n} a_{i,n}\Big).$

Then, by using the local optimization procedure developed in Lemma 2 and Theorem 2 at most $2(n-1)$ times, one can finally transform $A$ into $A' = (a'_{i,j})_{n \times n}$ such that $H(A') \le H(A)$,

$\sum_{i=1}^{n} a'_{i,j} = \sum_{i=1}^{n} a_{i,j}, \qquad \sum_{j=1}^{n} a'_{i,j} = \sum_{j=1}^{n} a_{i,j}, \qquad 1 \le i, j \le n,$ (14)

and

$a'_{1,1} = \sum_{i=1}^{n} a_{i,1}, \qquad a'_{i,1} = 0, \quad 2 \le i \le n$ (15)

$\Big(\text{resp. } a'_{n,n} = \sum_{j=1}^{n} a_{n,j}, \qquad a'_{n,j} = 0, \quad 1 \le j \le n-1\Big).$

Furthermore, $H(A') = H(A)$ if and only if $a'_{1,1} = a_{1,1}$ (resp. $a'_{n,n} = a_{n,n}$).
Proof. 
Let $I = \{i : a_{i,1} > 0, \ 2 \le i \le n\}$ and $J = \{j : a_{1,j} > 0, \ 2 \le j \le n\}$, and denote by $|I|$ and $|J|$ the cardinalities of $I$ and $J$. Without loss of generality, assume that $i_0 \in I$ and $j_0 \in J$ satisfy

$a_{i_0,1} = \min_{i \in I} a_{i,1} \le \min_{j \in J} a_{1,j} = a_{1,j_0}.$

Write $b = a_{i_0,1}$, and renew $A$ by changing the second-order submatrix

$\begin{pmatrix} a_{1,1} & a_{1,j_0} \\ a_{i_0,1} & a_{i_0,j_0} \end{pmatrix} \quad \text{to} \quad \begin{pmatrix} a_{1,1} + b & a_{1,j_0} - b \\ a_{i_0,1} - b & a_{i_0,j_0} + b \end{pmatrix}.$

Then, the renewed $A$ still satisfies the conditions of Lemma 2, and so, by this lemma, its entropy $H(A)$ decreases strictly. Note that after one use of the local optimization procedure, the number $|I| + |J|$ for the renewed matrix $A$ decreases by 1.
Repeat the above procedure until $I = \emptyset$ and write $A'$ for the final renewal of $A$; thus, $A'$ is obtained as required. □
In the situation given in the statement of Lemma 4, we denote by $L_n$ the corresponding composite optimization procedure and write $A' = L_n(A)$.
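For concreteness, here is a Python sketch of $L_n$ in the first case (the example matrix is ours and satisfies Condition (13): $a_{1,1}$ is the largest entry and the first row sum dominates the first column sum); the procedure repeatedly applies the Lemma 2 move against row 1 until the first column below $a_{1,1}$ is exhausted:

    import math

    def H(M):
        return -sum(v * math.log(v) for row in M for v in row if v > 0)

    def L_n(A):
        A = [row[:] for row in A]
        n = len(A)
        while True:
            I = [i for i in range(1, n) if A[i][0] > 0]
            J = [j for j in range(1, n) if A[0][j] > 0]
            if not I or not J:
                return A
            i0 = min(I, key=lambda i: A[i][0])   # smallest positive entry of column 1
            j0 = min(J, key=lambda j: A[0][j])   # smallest positive entry of row 1
            b = min(A[i0][0], A[0][j0])          # the Lemma 2 amount
            A[0][0] += b; A[i0][j0] += b
            A[i0][0] -= b; A[0][j0] -= b

    A = [[0.30, 0.10, 0.20],
         [0.05, 0.15, 0.05],
         [0.10, 0.00, 0.05]]    # a11 maximal; row-1 sum 0.60 >= column-1 sum 0.45
    B = L_n(A)
    assert all(abs(B[i][0]) < 1e-12 for i in (1, 2)) and H(B) < H(A)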
Now, we finish the proof of Theorem 1 by proving the following theorem.
Theorem 3. 
For any $p, q \in \mathcal{P}_n$, $n \ge 2$, and $P \in C(p,q)$, there exist a permutation pair $(\bar{\sigma}, \bar{\pi})$ and an upper triangular $P' \in C(\bar{\sigma}^{-1} p, \bar{\pi}^{-1} q)$ such that $H(P') \le H(P)$. Furthermore, in cases where $P_{(\sigma,\pi)}$ is not upper triangular for any permutation pair $(\sigma, \pi)$, $H(P) > H(P')$.
Proof. 
For $n = 2$, Theorem 3 follows from Lemma 2 immediately. For general $n > 2$ and $P = (p_{i,j})_{n \times n} \in C(p,q)$, suppose $p_{i_0,j_0} = \max_{1 \le i,j \le n} p_{i,j}$. In the case of

$\sum_{j=1}^{n} p_{i_0,j} \ge \sum_{i=1}^{n} p_{i,j_0} \quad \Big(\text{resp. } \sum_{j=1}^{n} p_{i_0,j} \le \sum_{i=1}^{n} p_{i,j_0}\Big),$

let $A = E(1,i_0) \cdot P \cdot E(1,j_0)$ (resp. $E(n,i_0) \cdot P \cdot E(n,j_0)$), where $E(1,i_0)$, $E(1,j_0)$, $E(n,i_0)$, and $E(n,j_0) \in \mathcal{E}$. By (8), for some given permutation pair $(\sigma_1, \pi_1)$, $A = P_{(\sigma_1,\pi_1)}$, and $A$ satisfies Condition (13). By Lemma 4, Equations (14) and (15) hold for $A' = L_n(A)$.
Note that by (14), $A$ and $A'$ are both probability matrices in $C(\sigma_1^{-1} p, \pi_1^{-1} q)$, and $H(P) = H(A) \ge H(A')$.
Now, $A'$ has the form

$A' = \begin{pmatrix} a'_{1,1} & a'_{1,2} & \cdots & a'_{1,n-1} & a'_{1,n} \\ 0 & a'_{2,2} & \cdots & a'_{2,n-1} & a'_{2,n} \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & a'_{n-1,2} & \cdots & a'_{n-1,n-1} & a'_{n-1,n} \\ 0 & a'_{n,2} & \cdots & a'_{n,n-1} & a'_{n,n} \end{pmatrix}$ (16)

$\left(\text{resp. } \begin{pmatrix} a'_{1,1} & a'_{1,2} & \cdots & a'_{1,n-1} & a'_{1,n} \\ a'_{2,1} & a'_{2,2} & \cdots & a'_{2,n-1} & a'_{2,n} \\ \vdots & \vdots & & \vdots & \vdots \\ a'_{n-1,1} & a'_{n-1,2} & \cdots & a'_{n-1,n-1} & a'_{n-1,n} \\ 0 & 0 & \cdots & 0 & a'_{n,n} \end{pmatrix}\right).$
Consider the following $(n-1)$-th-order submatrix $\bar{A} = (\bar{a}_{i,j})_{(n-1) \times (n-1)}$ of $A'$, with $\bar{a}_{i,j} = a'_{i+1,j+1}$ (resp. $a'_{i,j}$) for $1 \le i, j \le n-1$:

$\bar{A} = \begin{pmatrix} a'_{2,2} & \cdots & a'_{2,n-1} & a'_{2,n} \\ \vdots & & \vdots & \vdots \\ a'_{n-1,2} & \cdots & a'_{n-1,n-1} & a'_{n-1,n} \\ a'_{n,2} & \cdots & a'_{n,n-1} & a'_{n,n} \end{pmatrix} \quad \left(\text{resp. } \begin{pmatrix} a'_{1,1} & a'_{1,2} & \cdots & a'_{1,n-1} \\ a'_{2,1} & a'_{2,2} & \cdots & a'_{2,n-1} \\ \vdots & \vdots & & \vdots \\ a'_{n-1,1} & a'_{n-1,2} & \cdots & a'_{n-1,n-1} \end{pmatrix}\right),$

and suppose $a'_{i_1,j_1}$ is the entry of $\bar{A}$ such that $a'_{i_1,j_1} = \max_{1 \le i,j \le n-1} \bar{a}_{i,j} = \max_{2 \le i,j \le n} a'_{i,j}$ (resp. $\max_{1 \le i,j \le n-1} a'_{i,j}$).
For simplicity, we only treat the first case of $\bar{A}$ (the resp. case in the brackets can be treated similarly); of course, in this case, one has $2 \le i_1, j_1 \le n$.
In the subcase when

$\sum_{i=2}^{n} a'_{i,j_1} \le \sum_{j=2}^{n} a'_{i_1,j} \quad \Big(\text{resp. } \sum_{i=2}^{n} a'_{i,j_1} \ge \sum_{j=2}^{n} a'_{i_1,j}\Big),$

let $A'' = (a''_{i,j})_{n \times n} = E(2,i_1) \cdot A' \cdot E(2,j_1)$ (resp. $E(n,i_1) \cdot A' \cdot E(n,j_1)$). Clearly, by (8), for some given permutation pair $(\sigma_2, \pi_2)$, $A'' = A'_{(\sigma_2,\pi_2)} \in C((\sigma_1 \sigma_2)^{-1} p, (\pi_1 \pi_2)^{-1} q)$, and $H(P) \ge H(A') = H(A'')$. Here, we note the following:
  • $A''$ has the same form as $A'$ given in Equation (16), i.e., all entries except for $a''_{1,1} = a'_{1,1}$ in the first column of $A''$ are zero. In fact, by transforming $P$ to $A''$, we finished the first step of upper-triangularization;
  • $\bar{\bar{A}} := (a''_{i,j} : 2 \le i, j \le n)$, the $(n-1)$-th-order submatrix of $A''$, satisfies the condition of Lemma 4 for $n-1$.
At this moment, we have finished the first step of the optimization of $P$ by transforming $P$ to $A''$. In fact, we are now in a position to begin the second step by applying Lemma 4 to $\bar{\bar{A}}$ for $n-1$.
Repeat the above procedure for $n-1$, $n-2$, $\ldots$, and finally for 2; $A$ is then transformed to $A^{(4)}$, $A^{(6)}$, $\ldots$, and finally to $P' = A^{(2(n-1))}$. Additionally, there exist permutation pair sequences $\{(\sigma_k, \pi_k) : 1 \le k \le n\}$ such that

$A^{(2k)} \in C\big((\sigma_1 \cdots \sigma_{k+1})^{-1} p, \ (\pi_1 \cdots \pi_{k+1})^{-1} q\big), \quad 1 \le k \le n-1,$

and

$H(P) = H(A) \ge H(A') \ge H(A'') \ge \cdots \ge H(A^{(2(n-1))}) = H(P').$
Now, let $\bar{\sigma} := \sigma_1 \cdots \sigma_n$ and $\bar{\pi} := \pi_1 \cdots \pi_n$. Then, $P' \in C(\bar{\sigma}^{-1} p, \bar{\pi}^{-1} q)$, and by our construction, $P'$ is upper triangular.
If $P_{(\sigma,\pi)}$ is not upper triangular for any permutation pair $(\sigma, \pi)$, then at least one optimization step carried out above is concrete, i.e., when Lemma 2 is used, the corresponding $b$ is strictly positive. Then, by Lemma 2 and Theorem 2, $H(P) > H(P')$. □
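The proof above is constructive, and the following Python sketch mirrors it under stated assumptions: at each stage, the largest entry of the active block is moved by row/column exchanges to the top-left corner when its block row sum dominates its block column sum (first case), and to the bottom-right corner otherwise (the resp. case), after which the corresponding line of the block is absorbed by Lemma 2 moves. The helper names (H, swap_rows, swap_cols, absorb_first, absorb_last, upper_triangulize) and all tie-breaking choices are ours, so intermediate matrices need not match the hand computation in Section 3:

    import math

    def H(M):
        # joint entropy -sum a log a (natural log; zero entries contribute 0)
        return -sum(v * math.log(v) for row in M for v in row if v > 0)

    def swap_rows(A, a, b):
        A[a], A[b] = A[b], A[a]

    def swap_cols(A, a, b):
        for row in A:
            row[a], row[b] = row[b], row[a]

    def absorb_first(A, lo, hi, tol=1e-12):
        # Lemma 4, first case: absorb column lo below (lo, lo) into A[lo][lo]
        while True:
            I = [i for i in range(lo + 1, hi + 1) if A[i][lo] > tol]
            J = [j for j in range(lo + 1, hi + 1) if A[lo][j] > tol]
            if not I or not J:         # J stays nonempty while I is, up to rounding
                return
            i0 = min(I, key=lambda i: A[i][lo])
            j0 = min(J, key=lambda j: A[lo][j])
            b = min(A[i0][lo], A[lo][j0])
            A[lo][lo] += b; A[i0][j0] += b
            A[i0][lo] -= b; A[lo][j0] -= b

    def absorb_last(A, lo, hi, tol=1e-12):
        # Lemma 4, "resp." case: absorb row hi left of (hi, hi) into A[hi][hi]
        while True:
            J = [j for j in range(lo, hi) if A[hi][j] > tol]
            I = [i for i in range(lo, hi) if A[i][hi] > tol]
            if not J or not I:
                return
            j0 = min(J, key=lambda j: A[hi][j])
            i0 = min(I, key=lambda i: A[i][hi])
            b = min(A[hi][j0], A[i0][hi])
            A[hi][hi] += b; A[i0][j0] += b
            A[hi][j0] -= b; A[i0][hi] -= b

    def upper_triangulize(P):
        A = [row[:] for row in P]
        lo, hi = 0, len(A) - 1
        while lo < hi:
            # locate the largest entry of the active block A[lo..hi][lo..hi]
            i0, j0 = max(((i, j) for i in range(lo, hi + 1)
                          for j in range(lo, hi + 1)),
                         key=lambda t: A[t[0]][t[1]])
            row_sum = sum(A[i0][j] for j in range(lo, hi + 1))
            col_sum = sum(A[i][j0] for i in range(lo, hi + 1))
            if row_sum >= col_sum:
                swap_rows(A, lo, i0); swap_cols(A, lo, j0)
                absorb_first(A, lo, hi); lo += 1
            else:
                swap_rows(A, hi, i0); swap_cols(A, hi, j0)
                absorb_last(A, lo, hi); hi -= 1
        return A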

3. An Example

Theorem 1 shows that the coupling with the minimal entropy must be essentially order-preserving. How can one construct such an order-preserving coupling structure? In this section, we offer a computational approach for an order-preserving coupling as a practical illustration. Without loss of generality, let us consider the following example. Let $n = 6$, $p = (0.3, 0.15, 0.08, 0.4, 0.03, 0.04)$, $q = (0.18, 0.15, 0.44, 0.02, 0.03, 0.18)$, and let $P \in C(p,q)$ be the independent coupling of $p, q$:

$P = \begin{pmatrix} 0.054 & 0.045 & 0.132 & 0.006 & 0.009 & 0.054 \\ 0.027 & 0.0225 & 0.066 & 0.003 & 0.0045 & 0.027 \\ 0.0144 & 0.012 & 0.0352 & 0.0016 & 0.0024 & 0.0144 \\ 0.072 & 0.06 & 0.176 & 0.008 & 0.012 & 0.072 \\ 0.0054 & 0.0045 & 0.0132 & 0.0006 & 0.0009 & 0.0054 \\ 0.0072 & 0.006 & 0.0176 & 0.0008 & 0.0012 & 0.0072 \end{pmatrix}.$
Now, we begin to optimize $P$ to an upper triangular $P'$, as follows.
First, since the largest entry of $P$ is $p_{4,3} = 0.176$ and $q_3 = 0.44 > p_4 = 0.4$, by exchanging the positions of the 4th and 6th rows and then the positions of the 3rd and 6th columns of $P$, we obtain

$A = \begin{pmatrix} 0.054 & 0.045 & 0.054 & 0.006 & 0.009 & 0.132 \\ 0.027 & 0.0225 & 0.027 & 0.003 & 0.0045 & 0.066 \\ 0.0144 & 0.012 & 0.0144 & 0.0016 & 0.0024 & 0.0352 \\ 0.0072 & 0.006 & 0.0072 & 0.0008 & 0.0012 & 0.0176 \\ 0.0054 & 0.0045 & 0.0054 & 0.0006 & 0.0009 & 0.0132 \\ 0.072 & 0.06 & 0.072 & 0.008 & 0.012 & 0.176 \end{pmatrix}.$
By using Lemma 2 on the submatrices

$\begin{pmatrix} a_{k,k} & a_{k,6} \\ a_{6,k} & a_{6,6} \end{pmatrix} \quad \text{for } k = 1, 2, \ldots, 5 \text{ in turn},$
we renew and optimize $A$ to

$A = \begin{pmatrix} 0.126 & 0.045 & 0.054 & 0.006 & 0.009 & 0.06 \\ 0.027 & 0.0825 & 0.027 & 0.003 & 0.0045 & 0.006 \\ 0.0144 & 0.012 & 0.0496 & 0.0016 & 0.0024 & 0 \\ 0.0072 & 0.006 & 0.0072 & 0.0088 & 0.0012 & 0.0096 \\ 0.0054 & 0.0045 & 0.0054 & 0.0006 & 0.0129 & 0.0012 \\ 0 & 0 & 0.0368 & 0 & 0 & 0.3632 \end{pmatrix};$
then, using Lemma 2 on the submatrix

$\begin{pmatrix} a_{1,3} & a_{1,6} \\ a_{6,3} & a_{6,6} \end{pmatrix} = \begin{pmatrix} 0.054 & 0.06 \\ 0.0368 & 0.3632 \end{pmatrix},$
we optimize $A$ to the following

$A' = \begin{pmatrix} 0.126 & 0.045 & 0.0908 & 0.006 & 0.009 & 0.0232 \\ 0.027 & 0.0825 & 0.027 & 0.003 & 0.0045 & 0.006 \\ 0.0144 & 0.012 & 0.0496 & 0.0016 & 0.0024 & 0 \\ 0.0072 & 0.006 & 0.0072 & 0.0088 & 0.0012 & 0.0096 \\ 0.0054 & 0.0045 & 0.0054 & 0.0006 & 0.0129 & 0.0012 \\ 0 & 0 & 0 & 0 & 0 & 0.4 \end{pmatrix}.$
In this situation, we denote

$\bar{A} = \begin{pmatrix} 0.126 & 0.045 & 0.0908 & 0.006 & 0.009 \\ 0.027 & 0.0825 & 0.027 & 0.003 & 0.0045 \\ 0.0144 & 0.012 & 0.0496 & 0.0016 & 0.0024 \\ 0.0072 & 0.006 & 0.0072 & 0.0088 & 0.0012 \\ 0.0054 & 0.0045 & 0.0054 & 0.0006 & 0.0129 \end{pmatrix};$
since $\bar{a}_{1,1} = 0.126$ is the largest entry and

$\sum_{j=1}^{5} \bar{a}_{1,j} = 0.2768 > \sum_{i=1}^{5} \bar{a}_{i,1} = 0.18,$

we take $A'' = A'$ and $\bar{\bar{A}} = \bar{A}$, and $\bar{\bar{A}}$ satisfies the conditions of Lemma 4 for $n = 5$.
Second, by Lemma 4, $\bar{\bar{A}}$ is optimized to the following:

$\begin{pmatrix} 0.18 & 0.0036 & 0.0782 & 0.006 & 0.009 \\ 0 & 0.1095 & 0.027 & 0.003 & 0.0045 \\ 0 & 0.0264 & 0.0496 & 0.0016 & 0.0024 \\ 0 & 0.006 & 0.0144 & 0.0088 & 0.0012 \\ 0 & 0.0045 & 0.0108 & 0.0006 & 0.0129 \end{pmatrix}.$
Noticing that the largest entry of the matrix

$\begin{pmatrix} 0.1095 & 0.027 & 0.003 & 0.0045 \\ 0.0264 & 0.0496 & 0.0016 & 0.0024 \\ 0.006 & 0.0144 & 0.0088 & 0.0012 \\ 0.0045 & 0.0108 & 0.0006 & 0.0129 \end{pmatrix}$

is $0.1095$, and $0.1095 + 0.0264 + 0.006 + 0.0045 = 0.1464 > 0.1095 + 0.027 + 0.003 + 0.0045 = 0.144$, we transform $A''$ to

$A^{(5)} = \begin{pmatrix} 0.18 & 0.009 & 0.0782 & 0.006 & 0.0036 & 0.0232 \\ 0 & 0.0129 & 0.0108 & 0.0006 & 0.0045 & 0.0012 \\ 0 & 0.0024 & 0.0496 & 0.0016 & 0.0264 & 0 \\ 0 & 0.0012 & 0.0144 & 0.0088 & 0.006 & 0.0096 \\ 0 & 0.0045 & 0.027 & 0.003 & 0.1095 & 0.006 \\ 0 & 0 & 0 & 0 & 0 & 0.4 \end{pmatrix}.$
Finally, by similar procedures, we optimize $A^{(5)}$ to $A^{(6)}$, to $A^{(8)}$, and then to $P' = A^{(10)}$, as follows:

$A^{(6)} = \begin{pmatrix} 0.18 & 0.009 & 0.006 & 0.0782 & 0.0036 & 0.0232 \\ 0 & 0.0174 & 0.0006 & 0.0108 & 0 & 0.0012 \\ 0 & 0.0012 & 0.0118 & 0.0174 & 0 & 0.0096 \\ 0 & 0.0024 & 0.0016 & 0.0736 & 0.0024 & 0 \\ 0 & 0 & 0 & 0 & 0.144 & 0.006 \\ 0 & 0 & 0 & 0 & 0 & 0.4 \end{pmatrix};$

$A^{(8)} = \begin{pmatrix} 0.18 & 0.006 & 0.009 & 0.0782 & 0.0036 & 0.0232 \\ 0 & 0.0118 & 0.0036 & 0.015 & 0 & 0.0096 \\ 0 & 0.0022 & 0.0174 & 0.0092 & 0 & 0.0012 \\ 0 & 0 & 0 & 0.0776 & 0.0024 & 0 \\ 0 & 0 & 0 & 0 & 0.144 & 0.006 \\ 0 & 0 & 0 & 0 & 0 & 0.4 \end{pmatrix};$

$P' = A^{(10)} = \begin{pmatrix} 0.18 & 0.006 & 0.009 & 0.0782 & 0.0036 & 0.0232 \\ 0 & 0.014 & 0.0014 & 0.015 & 0 & 0.0096 \\ 0 & 0 & 0.0196 & 0.0092 & 0 & 0.0012 \\ 0 & 0 & 0 & 0.0776 & 0.0024 & 0 \\ 0 & 0 & 0 & 0 & 0.144 & 0.006 \\ 0 & 0 & 0 & 0 & 0 & 0.4 \end{pmatrix}.$
Clearly, $P'$ is upper triangular, and $P' \in C(p', q')$ with $p' = (0.3, 0.04, 0.03, 0.08, 0.15, 0.4)$ and $q' = (0.18, 0.02, 0.03, 0.18, 0.15, 0.44)$.
By Lemma 2, $P'$ can be further optimized to the upper triangular matrix

$P'' = \begin{pmatrix} 0.18 & 0 & 0 & 0.12 & 0 & 0 \\ 0 & 0.02 & 0 & 0 & 0 & 0.02 \\ 0 & 0 & 0.03 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0.06 & 0 & 0.02 \\ 0 & 0 & 0 & 0 & 0.15 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0.4 \end{pmatrix}.$
Using Lemma 2 on the following submatrix of $P''$,

$\begin{pmatrix} 0.02 & 0.02 \\ 0 & 0.02 \end{pmatrix},$

and then exchanging the positions of its 2nd and 4th rows, we obtain

$P''' = \begin{pmatrix} 0.18 & 0 & 0 & 0.12 & 0 & 0 \\ 0 & 0.02 & 0 & 0.06 & 0 & 0 \\ 0 & 0 & 0.03 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0.04 \\ 0 & 0 & 0 & 0 & 0.15 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0.4 \end{pmatrix}.$
Note that $P'''$ is upper triangular and $P''' \in C(p'', q'')$ with $p'' = (0.3, 0.08, 0.03, 0.04, 0.15, 0.4)$ and $q'' = q'$.
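For comparison, running the upper_triangulize sketch from Section 2.2 on the marginals of this example starts from the independent coupling, whose entropy is $H(p) + H(q)$, and returns an upper triangular coupling with strictly smaller entropy (its tie-breaking may differ from the hand computation above, so its output need not equal $P'''$):

    # reusing H and upper_triangulize from the sketch in Section 2.2
    p = [0.3, 0.15, 0.08, 0.4, 0.03, 0.04]
    q = [0.18, 0.15, 0.44, 0.02, 0.03, 0.18]
    P = [[pi * qj for qj in q] for pi in p]     # the independent coupling
    T = upper_triangulize(P)
    assert all(abs(T[i][j]) < 1e-9 for i in range(6) for j in range(i))
    print(H(P), H(T))   # H(P) = H(p) + H(q); H(T) is strictly smaller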

4. Conclusions

This study addresses the extremal problem of Shannon entropy for joint distributions with given marginals by establishing a local optimization method that derives the structure of minimum-entropy couplings. We rigorously demonstrate that a minimum-entropy coupling necessitates an order-preserving structure (transformable into an upper triangular matrix via permutations), while the maximum-entropy coupling aligns with the independent distribution, thereby confirming entropy's role in quantifying system disorder and addressing the gap in system coupling optimization. The proposed order-preserving coupling framework, combined with matrix structure analysis, not only explains its implications for communications and bioinformatics but also provides a novel approach to simplifying exact-solution computations through the coupled minimum-entropy system. These findings systematically highlight the critical connection between ordinal relationships and entropy optimization, offering both methodological tools and fresh perspectives for understanding ordered structures in complex systems.

Author Contributions

Theoretical derivation, F.W. and X.-Y.W.; manuscript writing and submission, Y.-J.M.; funding support and discussion, K.-Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China (grant numbers 11471222 and 61973015) and was supported by the Beijing Outstanding Young Scientist Program (No. JWZQ20240101027).

Data Availability Statement

No new data were created in the research.

Acknowledgments

The authors would like to thank Zhao Dong, from the Institute of Mathematics and Systems Science, Chinese Academy of Sciences, for useful discussion and comments.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Boltzmann, L. Weitere Studien über das Wärmegleichgewicht unter Gasmolekülen; Aus der k. k. Hof- und Staatsdruckerei: Vienna, Austria, 1872.
  2. Boltzmann, L. Über die Beziehung zwischen dem zweiten Hauptsatze der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung resp. den Sätzen über das Wärmegleichgewicht. Kais. Akad. Wiss. Wien Math. Naturwiss. Classe 1877, 76, 373–435.
  3. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
  4. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2006.
  5. Li, X.D. Series of Talks on Entropy and Geometry; The Institute of Applied Mathematics, Chinese Academy of Sciences: Beijing, China, 2021.
  6. Fréchet, M. Sur les tableaux de corrélation dont les marges sont données. Ann. Univ. Lyon Sci. Sect. A 1951, 14, 53–77.
  7. Hoeffding, W. Maßstabinvariante Korrelationstheorie. Schriften Math. Inst. Univ. Berl. 1940, 5, 181–233.
  8. Fisher, N.I.; Sen, P.K. (Eds.) Scale-invariant correlation theory. In The Collected Works of Wassily Hoeffding; English translation; Springer: Berlin/Heidelberg, Germany, 1999; pp. 57–107.
  9. Kovačević, M.; Stanojević, I.; Šenk, V. On the hardness of entropy minimization and related problems. In Proceedings of the 2012 IEEE Information Theory Workshop, Lausanne, Switzerland, 3–7 September 2012; pp. 512–516.
  10. Lin, G.D.; Dou, X.; Kuriki, S.; Huang, J.-S. Recent developments on the construction of bivariate distributions with fixed marginals. J. Stat. Distrib. Appl. 2014, 1, 14.
  11. Cicalese, F.; Gargano, L.; Vaccaro, U. Minimum-entropy couplings and their applications. IEEE Trans. Inf. Theory 2019, 65, 3436–3451.
  12. Benes, V.; Stepan, J. (Eds.) Distributions with Given Marginals and Moment Problems; Springer: Berlin/Heidelberg, Germany, 1997.
  13. Cuadras, C.M.; Fortiana, J.; Rodriguez-Lallena, J.A. (Eds.) Distributions with Given Marginals and Statistical Modeling; Springer: Berlin/Heidelberg, Germany, 2002.
  14. Dall'Aglio, G.; Kotz, S.; Salinetti, G. (Eds.) Advances in Probability Distributions with Given Marginals; Springer: Berlin/Heidelberg, Germany, 1991.
  15. Ma, Y.J.; Wang, F.; Wu, X.Y.; Cai, K.Y. Calculating the Minimal Joint Entropy. 2024; in preparation.
  16. Strassen, V. The existence of probability measures with given marginals. Ann. Math. Stat. 1965, 36, 423–439.
  17. Li, C.T. Efficient approximate minimum-entropy coupling of multiple probability distributions. IEEE Trans. Inf. Theory 2021, 67, 5259–5268.