Article

Information Inequalities via Submodularity and a Problem in Extremal Graph Theory

1 Andrew & Erna Viterbi Faculty of Electrical and Computer Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel
2 Faculty of Mathematics, Technion-Israel Institute of Technology, Haifa 3200003, Israel
Entropy 2022, 24(5), 597; https://doi.org/10.3390/e24050597
Submission received: 30 March 2022 / Revised: 19 April 2022 / Accepted: 21 April 2022 / Published: 25 April 2022
(This article belongs to the Special Issue Information and Divergence Measures)

Abstract
The present paper offers, in its first part, a unified approach for the derivation of families of inequalities for set functions which satisfy sub/supermodularity properties, and it applies this approach to derive information inequalities with Shannon information measures. Connections of this approach to a generalized version of Shearer's lemma and to other related results in the literature are considered. Some of the derived information inequalities are new, and known results (such as a generalized version of Han's inequality) are also reproduced in a simple and unified way. In its second part, this paper applies the generalized Han's inequality to analyze a problem in extremal graph theory. This problem is motivated and analyzed from the perspective of information theory, and the analysis leads to generalized and refined bounds. The two parts of this paper are meant to be independently accessible to the reader.

1. Introduction

Information measures and information inequalities are of fundamental importance and wide applicability in the study of feasibility and infeasibility results in information theory, while also offering very useful tools which serve to deal with interesting problems in various fields of mathematics [1,2]. The characterization of information inequalities has been of interest for decades (see, e.g., [3,4] and references therein), mainly triggered by their indispensable role in proving direct and converse results for channel coding and data compression for single and multi-user information systems. Information inequalities, which apply to classical and generalized information measures, have also demonstrated far-reaching consequences beyond the study of the coding theorems and fundamental limits of communication systems. One such remarkable example (among many) is the usefulness of information measures and information inequalities in providing information–theoretic proofs in the field of combinatorics and graph theory (see, e.g., [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22]).
A basic property that is commonly used for the characterization of information inequalities relies on the nonnegativity of the (conditional and unconditional) Shannon entropy of discrete random variables, the nonnegativity of the (conditional and unconditional) relative entropy and of the Shannon mutual information of general random variables, and the chain rules which hold for these classical information measures. A byproduct of these properties is the sub/supermodularity of some classical information measures, which also proves to be useful by taking advantage of the vast literature on sub/supermodular functions and polymatroids [22,23,24,25,26,27,28,29,30,31]. Another instrumental information inequality is the entropy power inequality, which dates back to Shannon [32]. It has been extensively generalized for different types of random variables and generalized entropies, studied in regard to its geometrical relations [33], and it has also been ubiquitously used for the analysis of various information–theoretic problems.
Among the most useful information inequalities are Han's inequality [34], its generalized versions (e.g., [15,25,30,31]), and Shearer's lemma [7] with its generalizations and refinements (e.g., [15,31,35]). In spite of their simplicity, these inequalities prove to be useful in information theory, and in other diverse fields of mathematics and engineering (see, e.g., [6,35]). More specifically in regard to these inequalities, in Proposition 1 of [22], Madiman and Tetali introduced an information inequality which can be specialized to Han's inequality, and which also refines Shearer's lemma while providing a counterpart result. In [30], Tian generalized Han's inequality by relying on the sub/supermodularity of the unconditional/conditional Shannon entropy. Likewise, the work in [31] by Kishi et al. relies on the sub/supermodularity properties of Shannon information measures, and it provides refinements of Shearer's lemma and Han's inequality. Apart from the refinements of these classical and widely-used inequalities in [31], the approach suggested in the present work can be viewed in a sense as a (nontrivial) generalization and extension of a result in [31] (to be explained in Section 3.2).
This work is focused on the derivation of information inequalities via submodularity and nonnegativity properties, and on a problem in extremal graph theory whose analysis relies on an information inequality. The field of extremal graph theory, which is a subfield of extremal combinatorics, was among the early and fast-developing branches of graph theory during the 20th century. Extremal graph theory explores the relations between properties of graphs, such as their order, size, chromatic number, or maximal and minimal degrees, under some constraints on the graph (e.g., by considering graphs of a fixed order, or by forbidding a certain type of subgraph). The interested reader is referred to the comprehensive textbooks [10,36] on the vast field of extremal combinatorics and extremal graph theory.
This paper suggests an approach for the derivation of families of inequalities for set functions, and it applies it to obtain information inequalities with Shannon information measures that satisfy sub/supermodularity and monotonicity properties. Some of the derived information inequalities are new, while some known results (such as the generalized version of Han’s inequality [25]) are reproduced as corollaries in a simple and unified way. This paper also applies the generalized Han’s inequality to analyze a problem in extremal graph theory, with an information–theoretic proof and interpretation. The analysis leads to some generalized and refined bounds in comparison to the insightful results in Theorems 4.2 and 4.3 of [6]. For the purpose of the suggested problem and analysis, the presentation here is self-contained.
The paper is structured as follows: Section 2 provides essential notation and preliminary material for this paper. Section 3 presents a new methodology for the derivation of families of inequalities for set functions which satisfy sub/supermodularity properties (Theorem 1). The suggested methodology is then applied in Section 3 for the derivation of information inequalities by relying on sub/supermodularity properties of Shannon information measures. Section 3 also considers connections of the suggested approach to a generalized version of Shearer's lemma, and to other results in the literature. Most of the results in Section 3 are proved in Section 4. Section 5 applies the generalized Han's inequality to a problem in extremal graph theory (Theorem 2). A byproduct of Theorem 2, which is of interest in its own right, is also analyzed in Section 5 (Theorem 3). The presentation and analysis in Section 5 are accessible to the reader, independently of the earlier material on information inequalities in Section 3 and Section 4. Some additional proofs, mostly for making the paper self-contained or for suggesting an alternative proof, are relegated to the appendices (Appendix A and Appendix B).

2. Preliminaries and Notation

The present section provides essential notation and preliminary material for this paper.
  • $\mathbb{N} \triangleq \{1, 2, \ldots\}$ denotes the set of natural numbers.
  • $\mathbb{R}$ denotes the set of real numbers, and $\mathbb{R}_+$ denotes the set of nonnegative real numbers.
  • $\emptyset$ denotes the empty set.
  • $2^\Omega \triangleq \{\mathcal{A} : \mathcal{A} \subseteq \Omega\}$ denotes the power set of a set $\Omega$ (i.e., the set of all subsets of $\Omega$).
  • $T^c \triangleq \Omega \setminus T$ denotes the complement of a subset $T$ in $\Omega$.
  • $\mathbf{1}\{E\}$ is an indicator of $E$; it is 1 if the event $E$ is satisfied, and zero otherwise.
  • $[n] \triangleq \{1, \ldots, n\}$ for every $n \in \mathbb{N}$.
  • $X^n \triangleq (X_1, \ldots, X_n)$ denotes an $n$-dimensional random vector.
  • $X_S \triangleq (X_i)_{i \in S}$ is a random vector for a nonempty subset $S \subseteq [n]$; if $S = \emptyset$, then $X_S$ is an empty set, and conditioning on $X_S$ is void.
  • Let $X$ be a discrete random variable that takes its values on a set $\mathcal{X}$, and let $P_X$ be the probability mass function (PMF) of $X$. The Shannon entropy of $X$ is given by
    (1)  $H(X) \triangleq -\sum_{x \in \mathcal{X}} P_X(x) \log P_X(x)$,
    where, throughout this paper, we take all logarithms to base 2.
  • The binary entropy function $H_b: [0,1] \to [0, \log 2]$ is given by
    (2)  $H_b(p) \triangleq -p \log p - (1-p) \log(1-p)$,  $p \in [0,1]$,
    where, by continuous extension, the convention $0 \log 0 = 0$ is used.
  • Let $X$ and $Y$ be discrete random variables with a joint PMF $P_{XY}$, and a conditional PMF of $X$ given $Y$ denoted by $P_{X|Y}$. The conditional entropy of $X$ given $Y$ is defined as
    (3a)  $H(X|Y) \triangleq -\sum_{(x,y) \in \mathcal{X} \times \mathcal{Y}} P_{XY}(x,y) \log P_{X|Y}(x|y)$
    (3b)  $= \sum_{y \in \mathcal{Y}} P_Y(y)\, H(X \,|\, Y=y)$,
    and
    (4)  $H(X|Y) = H(X,Y) - H(Y)$.
  • The mutual information between $X$ and $Y$ is symmetric in $X$ and $Y$, and it is given by
    (5a)  $I(X;Y) = H(X) + H(Y) - H(X,Y)$
    (5b)  $= H(X) - H(X|Y)$
    (5c)  $= H(Y) - H(Y|X)$.
  • The conditional mutual information between two random variables $X$ and $Y$, given a third random variable $Z$, is symmetric in $X$ and $Y$, and it is given by
    (6a)  $I(X;Y|Z) = H(X|Z) - H(X|Y,Z)$
    (6b)  $= H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z)$.
  • For continuous random variables, the sums in (1) and (3) are replaced with integrals, and the PMFs are replaced with probability density functions. The entropy of a continuous random variable is called the differential entropy.
  • For an $n$-dimensional random vector $X^n$, the entropy power of $X^n$ is given by
    (7)  $N(X^n) \triangleq \exp\!\left(\frac{2}{n}\, H(X^n)\right)$,
    where the base of the exponent is identical to the base of the logarithm in (1).
We rely on the following basic properties of the Shannon information measures (a numerical sanity check of these properties is sketched in the code example after this list):
  • Conditioning cannot increase the entropy, i.e.,
    (8)  $H(X|Y) \leq H(X)$,
    with equality in (8) if and only if $X$ and $Y$ are independent.
  • Generalizing (4) to $n$-dimensional random vectors gives the chain rule
    (9)  $H(X^n) = \sum_{i=1}^n H(X_i \,|\, X^{i-1})$.
  • The subadditivity property of the entropy is implied by (8) and (9):
    (10)  $H(X^n) \leq \sum_{i=1}^n H(X_i)$,
    with equality in (10) if and only if $X_1, \ldots, X_n$ are independent random variables.
  • Nonnegativity of the (conditional) mutual information: in light of (5) and (8), $I(X;Y) \geq 0$ with equality if and only if $X$ and $Y$ are independent. More generally, $I(X;Y|Z) \geq 0$ with equality if and only if $X$ and $Y$ are conditionally independent given $Z$.
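As a quick numerical illustration (not part of the original article), the following self-contained Python sketch computes these quantities for a randomly drawn joint PMF and verifies (8) and the nonnegativity of the mutual information; the alphabet sizes are arbitrary choices.

import numpy as np

def entropy(p):
    # Shannon entropy (in bits) of a PMF given as a 1-D array
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
P_XY = rng.random((4, 3))
P_XY /= P_XY.sum()                    # random joint PMF of (X, Y)

H_XY = entropy(P_XY.ravel())
H_X = entropy(P_XY.sum(axis=1))       # marginal PMF of X
H_Y = entropy(P_XY.sum(axis=0))       # marginal PMF of Y
H_X_given_Y = H_XY - H_Y              # equation (4)
I_XY = H_X + H_Y - H_XY               # equation (5a)

assert H_X_given_Y <= H_X + 1e-12     # conditioning cannot increase entropy, (8)
assert I_XY >= -1e-12                 # nonnegativity of the mutual information
print(f"H(X)={H_X:.4f}, H(X|Y)={H_X_given_Y:.4f}, I(X;Y)={I_XY:.4f}")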
Let $\Omega$ be a finite and non-empty set, and let $f: 2^\Omega \to \mathbb{R}$ be a real-valued set function (i.e., $f$ is defined for all subsets of $\Omega$). The following definitions are used.
Definition 1.
(Sub/supermodular function). The set function $f: 2^\Omega \to \mathbb{R}$ is submodular if
(11)  $f(T) + f(S) \geq f(T \cup S) + f(T \cap S)$,  $\forall\, S, T \subseteq \Omega$.
Likewise, $f$ is supermodular if $-f$ is submodular.
An identical characterization of submodularity is the diminishing return property (see, e.g., Proposition 2.2 in [23]), where a set function $f: 2^\Omega \to \mathbb{R}$ is submodular if and only if
(12)  $S \subseteq T \subseteq \Omega,\ \omega \in T^c \ \Rightarrow\ f(S \cup \{\omega\}) - f(S) \geq f(T \cup \{\omega\}) - f(T)$.
This means that the larger the set, the smaller the increase in $f$ when a new element is added.
Definition 2.
(Monotonic function). The set function $f: 2^\Omega \to \mathbb{R}$ is monotonically increasing if
(13)  $S \subseteq T \subseteq \Omega \ \Rightarrow\ f(S) \leq f(T)$.
Likewise, $f$ is monotonically decreasing if $-f$ is monotonically increasing.
Definition 3.
(Polymatroid, ground set and rank function). Let $f: 2^\Omega \to \mathbb{R}$ be a submodular and monotonically increasing set function with $f(\emptyset) = 0$. The pair $(\Omega, f)$ is called a polymatroid, $\Omega$ is called a ground set, and $f$ is called a rank function.
Definition 4.
(Subadditive function). The set function $f: 2^\Omega \to \mathbb{R}$ is subadditive if, for all $S, T \subseteq \Omega$,
(14)  $f(S \cup T) \leq f(S) + f(T)$.
A nonnegative and submodular set function is subadditive (this readily follows from (11) and (14)). The next proposition introduces results from [25,28,37]. For the sake of completeness, we provide a proof in Appendix A.
Proposition 1.
Let $\Omega$ be a finite and non-empty set, and let $\{X_\omega\}_{\omega \in \Omega}$ be a collection of discrete random variables. Then, the following holds:
(a) The set function $f: 2^\Omega \to \mathbb{R}$, given by
(15)  $f(T) \triangleq H(X_T)$,  $T \subseteq \Omega$,
is a rank function.
(b) The set function $f: 2^\Omega \to \mathbb{R}$, given by
(16)  $f(T) \triangleq H(X_T \,|\, X_{T^c})$,  $T \subseteq \Omega$,
is supermodular, monotonically increasing, and $f(\emptyset) = 0$.
(c) The set function $f: 2^\Omega \to \mathbb{R}$, given by
(17)  $f(T) \triangleq I(X_T; X_{T^c})$,  $T \subseteq \Omega$,
is submodular with $f(\emptyset) = 0$, but $f$ is not a rank function. The latter holds since the equality $f(T) = f(T^c)$, for all $T \subseteq \Omega$, implies that $f$ is not a monotonic function.
(d) Let $U, V \subseteq \Omega$ be disjoint subsets, and let the entries of the random vector $X_V$ be conditionally independent given $X_U$. Then, the set function $f: 2^V \to \mathbb{R}$, given by
(18)  $f(T) \triangleq I(X_U; X_T)$,  $T \subseteq V$,
is a rank function.
(e) Let $X_\Omega = \{X_\omega\}_{\omega \in \Omega}$ be independent random variables, and let $f: 2^\Omega \to \mathbb{R}$ be given by
(19)  $f(T) \triangleq H\!\left( \sum_{\omega \in T} X_\omega \right)$,  $T \subseteq \Omega$.
Then, $f$ is a rank function.
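The following brute-force Python sketch (an illustration, not part of the original article) verifies Proposition 1a numerically: for a randomly drawn joint PMF of three discrete variables, the set function $f(T) = H(X_T)$ is checked against Definitions 1–3.

from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
P = rng.random((2, 3, 2))
P /= P.sum()                      # random joint PMF of (X_0, X_1, X_2)

def H_subset(T):
    # entropy (bits) of the marginal PMF of the variables indexed by T
    if not T:
        return 0.0
    axes = tuple(i for i in range(3) if i not in T)
    q = P.sum(axis=axes).ravel() if axes else P.ravel()
    q = q[q > 0]
    return float(-np.sum(q * np.log2(q)))

subsets = [frozenset(c) for k in range(4) for c in combinations(range(3), k)]
f = {T: H_subset(T) for T in subsets}

for S in subsets:
    for T in subsets:
        assert f[T] + f[S] >= f[T | S] + f[T & S] - 1e-12   # submodularity (11)
        if S <= T:
            assert f[S] <= f[T] + 1e-12                     # monotonicity (13)
assert f[frozenset()] == 0.0
print("f(T) = H(X_T) passed the rank-function checks of Proposition 1a.")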
The following proposition addresses the setting of general alphabets.
Proposition 2.
For general alphabets, the set functions $f$ in (15) and (17)–(19) are submodular, and the set function $f$ in (16) is supermodular with $f(\emptyset) = 0$. Moreover, the function in (18) remains a rank function, and the function in (19) remains monotonically increasing.
Proof. 
The sub/supermodularity properties in Proposition 1 are preserved due to the nonnegativity of the (conditional) mutual information. The monotonicity property of the functions in (18) and (19) is preserved also in the general alphabet setting due to (A10) and (A14c), and the mutual information in (18) is nonnegative. □
Remark 1.
In contrast to the entropy of discrete random variables, the differential entropy of continuous random variables is not functionally submodular in the sense of Lemma A.2 in [38]. This refers to a different form of submodularity, which was needed by Tao [38] to prove sumset inequalities for the entropy of discrete random variables. A follow-up study in [39] by Kontoyiannis and Madiman required substantially new proof strategies for the derivation of sumset inequalities with the differential entropy of continuous random variables. The basic property which replaces the discrete functional submodularity is the data-processing property of mutual information [39]. In the context of the present work, where the commonly used definition of submodularity is used (see Definition 1), the Shannon entropy of discrete random variables and the differential entropy of continuous random variables are both submodular set functions.
We rely, in this paper, on the following standard terminology for graphs. An undirected graph $G$ is an ordered pair $G = (V, E)$, where $V = V(G)$ is a set of elements, and $E = E(G)$ is a set of 2-element subsets (pairs) of $V$. The elements of $V$ are called the vertices of $G$, and the elements of $E$ are called the edges of $G$. The number of vertices in a finite graph $G$ is called the order of $G$, and the number of edges is called the size of $G$. Throughout this paper, we assume that the graph $G$ is undirected and finite; it is also assumed to be a simple graph, i.e., it has no loops (no edge connects a vertex in $G$ to itself) and there are no multiple edges connecting a pair of vertices in $G$. If $e = \{u, v\} \in E(G)$, then the vertices $u$ and $v$ are the two ends of the edge $e$. The vertices $u$ and $v$ are adjacent (neighbors) if they are connected by an edge in $G$, i.e., if $\{u, v\} \in E(G)$.

3. Inequalities via Submodularity

3.1. A New Methodology

The present subsection presents a new methodology for the derivation of families of inequalities for set functions, and in particular inequalities with information measures. The suggested methodology relies, to a large extent, on the notion of submodularity of set functions, and it is presented in the next theorem.
Theorem 1.
Let $\Omega$ be a finite set with $|\Omega| = n$. Let $f: 2^\Omega \to \mathbb{R}$ with $f(\emptyset) = 0$, and $g: \mathbb{R} \to \mathbb{R}$. Let the sequence $\{t_k^{(n)}\}_{k=1}^n$ be given by
(20)  $t_k^{(n)} \triangleq \frac{1}{\binom{n}{k}} \sum_{T \subseteq \Omega: |T| = k} g\!\left( \frac{f(T)}{k} \right)$,  $k \in [n]$.
(a) If $f$ is submodular, and $g$ is monotonically increasing and convex, then the sequence $\{t_k^{(n)}\}_{k=1}^n$ is monotonically decreasing, i.e.,
(21)  $t_1^{(n)} \geq t_2^{(n)} \geq \ldots \geq t_n^{(n)} = g\!\left( \frac{f(\Omega)}{n} \right)$.
In particular,
(22)  $\sum_{T \subseteq \Omega: |T| = k} g\!\left( \frac{f(T)}{k} \right) \geq \binom{n}{k}\, g\!\left( \frac{f(\Omega)}{n} \right)$,  $k \in [n]$.
(b) If $f$ is submodular, and $g$ is monotonically decreasing and concave, then the sequence $\{t_k^{(n)}\}_{k=1}^n$ is monotonically increasing.
(c) If $f$ is supermodular, and $g$ is monotonically increasing and concave, then the sequence $\{t_k^{(n)}\}_{k=1}^n$ is monotonically increasing.
(d) If $f$ is supermodular, and $g$ is monotonically decreasing and convex, then the sequence $\{t_k^{(n)}\}_{k=1}^n$ is monotonically decreasing.
Proof. 
See Section 4.1. □
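The next Python sketch (illustrative, not from the paper) verifies Theorem 1a numerically, using the submodular entropy set function $f(T) = H(X_T)$ of (15) and the monotonically increasing, convex choice $g(x) = e^x$; the joint PMF is drawn at random.

from itertools import combinations
from math import comb, exp
import numpy as np

rng = np.random.default_rng(2)
n = 4
P = rng.random((2,) * n)
P /= P.sum()                      # random joint PMF of X_1, ..., X_n

def f(T):
    # f(T) = H(X_T), the submodular set function of (15)
    if not T:
        return 0.0
    axes = tuple(i for i in range(n) if i not in T)
    q = P.sum(axis=axes).ravel() if axes else P.ravel()
    q = q[q > 0]
    return float(-np.sum(q * np.log2(q)))

g = exp   # monotonically increasing and convex on the real line

t = [sum(g(f(T) / k) for T in combinations(range(n), k)) / comb(n, k)
     for k in range(1, n + 1)]
assert all(t[i] >= t[i + 1] - 1e-12 for i in range(n - 1))
print("t_k:", [round(x, 4) for x in t])   # decreasing, as Theorem 1a states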
Corollary 1.
Let $\Omega$ be a finite set with $|\Omega| = n$, $f: 2^\Omega \to \mathbb{R}$, and $g: \mathbb{R} \to \mathbb{R}$ be convex and monotonically increasing. If
  • $f$ is a rank function,
  • $g(0) > 0$, or there is $\ell \in \mathbb{N}$ such that $g(0) = \ldots = g^{(\ell-1)}(0) = 0$ with $g^{(\ell)}(0) > 0$,
  • $\{k_n\}_{n=1}^\infty$ is a sequence such that $k_n \in [n]$ for all $n \in \mathbb{N}$, with $k_n \to \infty$ as $n \to \infty$,
then
(23)  $\lim_{n \to \infty} \left\{ \frac{1}{n} \log \sum_{T \subseteq \Omega: |T| = k_n} g\!\left( \frac{f(T)}{k_n} \right) - H_b\!\left( \frac{k_n}{n} \right) \right\} = 0$,
and if $\lim_{n \to \infty} \frac{k_n}{n} = \beta \in [0,1]$, then
(24)  $\lim_{n \to \infty} \frac{1}{n} \log \sum_{T \subseteq \Omega: |T| = k_n} g\!\left( \frac{f(T)}{k_n} \right) = H_b(\beta)$.
Proof. 
See Section 4.2. □
Corollary 2.
Let $\Omega$ be a finite set with $|\Omega| = n$, and let $f: 2^\Omega \to \mathbb{R}$ be submodular and nonnegative with $f(\emptyset) = 0$. Then,
(a) For $\alpha \geq 1$ and $k \in [n-1]$,
(25)  $\sum_{T \subseteq \Omega: |T| = k} \bigl[ f^\alpha(\Omega) - f^\alpha(T) \bigr] \leq c_\alpha(n,k)\, f^\alpha(\Omega)$,
with
(26)  $c_\alpha(n,k) \triangleq \left( 1 - \frac{k^\alpha}{n^\alpha} \right) \binom{n}{k}$.
For $\alpha = 1$, (25) holds with $c_1(n,k) = \binom{n-1}{k}$, regardless of the nonnegativity of $f$.
(b) If $f$ is also monotonically increasing (i.e., $f$ is a rank function), then for $\alpha \geq 1$,
(27)  $\left( \frac{k}{n} \right)^{\alpha-1} \binom{n-1}{k-1} f^\alpha(\Omega) \leq \sum_{T \subseteq \Omega: |T| = k} f^\alpha(T) \leq \binom{n}{k} f^\alpha(\Omega)$,  $k \in [n]$.
Proof. 
See Section 4.3. □
Corollary 2 is next specialized to reproduce Han’s inequality [34], and a generalized version of Han’s inequality (Section 4 of [25]).
Let $X^n = (X_1, \ldots, X_n)$ be a random vector with finite entropies $H(X_i)$ for all $i \in [n]$. The set function $f: 2^{[n]} \to [0, \infty)$, given by $f(T) = H(X_T)$ for all $T \subseteq [n]$, is submodular [25] (see Propositions 1a and 2). From (25), the following holds:
(a) Setting $\alpha = 1$ in (25) implies that, for all $k \in [n-1]$,
(28a)  $\sum_{1 \leq i_1 < \ldots < i_k \leq n} \bigl[ H(X^n) - H(X_{i_1}, \ldots, X_{i_k}) \bigr] \leq \left( 1 - \frac{k}{n} \right) \binom{n}{k} H(X^n)$
(28b)  $= \binom{n-1}{k} H(X^n)$;
(b) Consequently, setting $k = n-1$ in (28) gives
(29)  $\sum_{i=1}^n \bigl[ H(X^n) - H(X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_n) \bigr] \leq H(X^n)$,
which gives Han's inequality.
Further applications of Theorem 1 lead to the next corollary, which partially introduces some known results that have been proved on a case-by-case basis in Theorems 17.6.1–17.6.3 of [1] and Section 2 of [2]. In particular, the monotonicity properties of the sequences in (30) and (32)–(34) were proved in Theorems 1 and 2, and Corollaries 1 and 2 of [2]. Both known and new results are readily obtained here, in a unified way, from Theorem 1. The utility of one of these inequalities in extremal combinatorics is discussed later in this subsection (see Proposition 3), providing a natural generalization of a beautiful combinatorial result in Section 3.2 of [19].
Corollary 3.
Let $\{X_i\}_{i=1}^n$ be random variables with finite entropies. Then, the following holds:
(a) The sequences
(30)  $h_k^{(n)} \triangleq \frac{1}{\binom{n}{k}} \sum_{T \subseteq [n]: |T| = k} \frac{H(X_T)}{k}$,  $k \in [n]$,
(31)  $\ell_k^{(n)} \triangleq \frac{1}{\binom{n}{k}} \sum_{T \subseteq [n]: |T| = k} \frac{I(X_T; X_{T^c})}{k}$,  $k \in [n]$,
are monotonically decreasing in $k$. If $\{X_i\}_{i=1}^n$ are independent, then also the sequence
(32)  $m_k^{(n)} \triangleq \frac{1}{\binom{n-1}{k-1}} \sum_{T \subseteq [n]: |T| = k} H\!\left( \sum_{\omega \in T} X_\omega \right)$,  $k \in [n]$,
is monotonically decreasing in $k$.
(b) The sequence
(33)  $r_k^{(n)} \triangleq \frac{1}{\binom{n}{k}} \sum_{T \subseteq [n]: |T| = k} \frac{H(X_T \,|\, X_{T^c})}{k}$,  $k \in [n]$,
is monotonically increasing in $k$.
(c) For every $r > 0$, the sequences
(34)  $s_k^{(n)}(r) \triangleq \frac{1}{\binom{n}{k}} \sum_{T \subseteq [n]: |T| = k} N^r(X_T)$,  $k \in [n]$,
(35)  $u_k^{(n)}(r) \triangleq \frac{1}{\binom{n}{k}} \sum_{T \subseteq [n]: |T| = k} \exp\!\left( -\frac{r\, H(X_T \,|\, X_{T^c})}{k} \right)$,  $k \in [n]$,
(36)  $v_k^{(n)}(r) \triangleq \frac{1}{\binom{n}{k}} \sum_{T \subseteq [n]: |T| = k} \exp\!\left( \frac{r\, I(X_T; X_{T^c})}{k} \right)$,  $k \in [n]$,
are monotonically decreasing in $k$. If $\{X_i\}_{i=1}^n$ are independent, then also the sequence
(37)  $w_k^{(n)}(r) \triangleq \frac{1}{\binom{n}{k}} \sum_{T \subseteq [n]: |T| = k} N^{r/k}\!\left( \sum_{\omega \in T} X_\omega \right)$,  $k \in [n]$,
is monotonically decreasing in $k$.
Proof. 
The finite entropies of $\{X_i\}_{i=1}^n$ assure that the entropies involved in the sequences (30)–(37) are finite. Item (a) follows from Theorem 1a, where the submodular set functions $f$ which correspond to (30)–(32) are given in (15), (17) and (19), respectively, and $g$ is the identity function on the real line. The identity $k \binom{n}{k} = n \binom{n-1}{k-1}$ is used for (32). Item (b) follows from Theorem 1c, where $f$ is the supermodular function in (16) and $g$ is the identity function on the real line. We next prove Item (c). The sequence (34) is monotonically decreasing by Theorem 1a, where $f$ is the submodular function in (15), and $g: \mathbb{R} \to \mathbb{R}$ is the monotonically increasing and convex function defined as $g(x) = \exp(2rx)$ for $x \in \mathbb{R}$ (with $r > 0$). The sequence (35) is monotonically decreasing by Theorem 1d, where $f$ is the supermodular function in (16), and $g: \mathbb{R} \to \mathbb{R}$ is the monotonically decreasing and convex function defined as $g(x) = \exp(-rx)$ for $x \in \mathbb{R}$. The sequence (36) is monotonically decreasing by Theorem 1a, where $f$ is the submodular function in (17) and $g$ is the monotonically increasing and convex function defined as $g(x) = \exp(rx)$ for $x \in \mathbb{R}$. Finally, the sequence (37) is monotonically decreasing by Theorem 1a, where $f$ is the submodular function in (19) and $g$ is the monotonically increasing and convex function defined as $g(x) = \exp(2rx)$ for $x \in \mathbb{R}$. □
Remark 2.
From Proposition 2, since the proof of Corollary 3 only relies on the sub/supermodularity property of $f$, the random variables $\{X_i\}_{i=1}^n$ do not need to be discrete in Corollary 3. In the reproduction of Han's inequality as an application of Corollary 2, the random variables $\{X_i\}_{i=1}^n$ likewise do not need to be discrete, since $f$ is not required to be nonnegative if $\alpha = 1$ (only the submodularity of $f$ in (15) is required, which holds due to Proposition 2).
The following result exemplifies the utility of the monotonicity result of the sequence (30) in extremal combinatorics. It also generalizes the result in Section 3.2 of [19], an achievable upper bound on the cardinality of a finite set of points in the three-dimensional Euclidean space, expressed as a function of the cardinalities of its projections onto each of the planes $XY$, $XZ$ and $YZ$. The next result provides an achievable upper bound on the cardinality of a finite set of points in an $n$-dimensional Euclidean space, expressed as a function of the cardinalities of its projections onto each of the $k$-dimensional coordinate subspaces, for an arbitrary $k < n$.
Proposition 3.
Let $\mathcal{P} \subseteq \mathbb{R}^n$ be a finite set of points in the $n$-dimensional Euclidean space with $|\mathcal{P}| \triangleq M$. Let $k \in [n-1]$, and $\ell \triangleq \binom{n}{k}$. Let $\mathcal{R}_1, \ldots, \mathcal{R}_\ell$ be the projections of $\mathcal{P}$ on each of the $k$-dimensional coordinate subspaces of $\mathbb{R}^n$, and let $|\mathcal{R}_j| \triangleq M_j$ for all $j \in [\ell]$. Then,
(38)  $|\mathcal{P}| \leq \left( \prod_{j=1}^\ell M_j \right)^{1/\binom{n-1}{k-1}}$.
Let $R \triangleq \frac{\log M}{n}$, and $R_j \triangleq \frac{\log M_j}{k}$ for all $j \in [\ell]$. An equivalent form of (38) is given by the inequality
(39)  $R \leq \frac{1}{\ell} \sum_{j=1}^\ell R_j$.
Moreover, if $M_1 = \ldots = M_\ell$ and $M_1^{1/k} \in \mathbb{N}$, then (38) and (39) are satisfied with equality if $\mathcal{P}$ is a grid of points in $\mathbb{R}^n$ with $M_1^{1/k}$ points on each dimension (so, $M = M_1^{n/k}$).
Proof. 
Pick uniformly at random a point $X^n = (X_1, \ldots, X_n) \in \mathcal{P}$. Then,
(40)  $H(X^n) = \log |\mathcal{P}|$.
The sequence in (30) is monotonically decreasing, so $h_k^{(n)} \geq h_n^{(n)}$, which is equivalent to
(41)  $\binom{n-1}{k-1} H(X^n) \leq \sum_{T \subseteq [n]: |T| = k} H(X_T)$.
Let $S_1, \ldots, S_\ell$ be the $k$-subsets of the set $[n]$, ordered in a way such that $M_j$ is the cardinality of the projection of the set $\mathcal{P}$ on the $k$-dimensional subspace whose coordinates are the elements of the subset $S_j$. Then, (41) can be expressed in the form
(42)  $\binom{n-1}{k-1} H(X^n) \leq \sum_{j=1}^\ell H(X_{S_j})$,
and also
(43)  $H(X_{S_j}) \leq \log M_j$,  $j \in [\ell]$,
since the entropy of a random variable is upper bounded by the logarithm of the number of its possible values. Combining (40), (42) and (43) gives
(44)  $\binom{n-1}{k-1} \log |\mathcal{P}| \leq \sum_{j=1}^\ell \log M_j$.
Exponentiating both sides of (44) gives (38). In addition, using the identity $\binom{n}{k} = \frac{n}{k} \binom{n-1}{k-1}$ gives (39) from (44). Finally, the sufficiency condition for equalities in (38) or (39) can be easily verified; it is obtained if $\mathcal{P}$ is a grid of points in $\mathbb{R}^n$ with the same finite number of projections on each dimension. □
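The equality case of Proposition 3 can be checked directly; the following short Python sketch (an illustration, not from the paper) does so for a grid in $\mathbb{R}^3$ with $m = 4$ points per axis and $k = 2$.

from itertools import combinations, product
from math import comb

n, k, m = 3, 2, 4
P = set(product(range(m), repeat=n))          # a grid with m points per axis

prod_Mj = 1
for axes in combinations(range(n), k):        # the k-dimensional coordinate subspaces
    proj = {tuple(x[i] for i in axes) for x in P}
    prod_Mj *= len(proj)                      # M_j = m^k for the grid

bound = prod_Mj ** (1 / comb(n - 1, k - 1))   # RHS of (38)
print(len(P), round(bound, 6))                # both equal 64: equality in (38)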

3.2. Connections to a Generalized Version of Shearer’s Lemma and Other Results in the Literature

The next proposition is a known generalized version of Shearer’s Lemma.
Proposition 4.
Let $\Omega$ be a finite set, let $\{S_j\}_{j=1}^M$ be a finite collection of subsets of $\Omega$ (with $M \in \mathbb{N}$), and let $f: 2^\Omega \to \mathbb{R}$ be a set function.
(a) If $f$ is nonnegative and submodular, and every element in $\Omega$ is included in at least $d \geq 1$ of the subsets $\{S_j\}_{j=1}^M$, then
(45)  $\sum_{j=1}^M f(S_j) \geq d\, f(\Omega)$.
(b) If $f$ is a rank function, $\mathcal{A} \subseteq \Omega$, and every element in $\mathcal{A}$ is included in at least $d \geq 1$ of the subsets $\{S_j\}_{j=1}^M$, then
(46)  $\sum_{j=1}^M f(S_j) \geq d\, f(\mathcal{A})$.
The first part of Proposition 4 was pointed out in Section 1.5 of [35], and the second part of Proposition 4 is a generalization of Remark 1 and inequality (47) in [20]. We provide a (somewhat different) proof of Proposition 4a, as well as a self-contained proof of Proposition 4b in Appendix B.
Let $\{X_i\}_{i=1}^n$ be discrete random variables, and consider the set function $f: 2^{[n]} \to \mathbb{R}_+$ which is defined as $f(\mathcal{A}) = H(X_\mathcal{A})$ for all $\mathcal{A} \subseteq [n]$. Since $f$ is a rank function [25], Proposition 4 then specializes to Shearer's lemma [7] and a modified version of this lemma (see Remark 1 of [20]).
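The following short Python check (illustrative, not from the paper) instantiates Proposition 4a with $f(T) = H(X_T)$, using the cover of $\Omega = \{0, 1, 2\}$ by its three 2-element subsets, in which every element appears $d = 2$ times.

import numpy as np

rng = np.random.default_rng(4)
P = rng.random((2, 2, 2))
P /= P.sum()                      # random joint PMF of three binary variables

def H(T):
    # entropy (bits) of the marginal over the coordinates in T
    axes = tuple(i for i in range(3) if i not in T)
    q = P.sum(axis=axes).ravel() if axes else P.ravel()
    q = q[q > 0]
    return float(-np.sum(q * np.log2(q)))

covers = [(0, 1), (1, 2), (0, 2)]   # each i in {0,1,2} is covered d = 2 times
assert sum(H(S) for S in covers) >= 2 * H((0, 1, 2)) - 1e-12   # inequality (45)
print("Shearer's lemma verified on a random joint PMF.")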
In light of Proposition 1e and Proposition 4b, Corollaries 4 and 5 are obtained as follows.
Corollary 4.
Let $\{X_i\}_{i=1}^n$ be independent discrete random variables, $\{S_j\}_{j=1}^M$ be subsets of $[n]$, and $\mathcal{A} \subseteq [n]$. If each element in $\mathcal{A}$ belongs to at least $d \geq 1$ of the sets $\{S_j\}_{j=1}^M$, then
(47)  $d\, H\!\left( \sum_{i \in \mathcal{A}} X_i \right) \leq \sum_{j=1}^M H\!\left( \sum_{i \in S_j} X_i \right)$.
In particular, if every $i \in [n]$ is included in at least $d \geq 1$ of the subsets $\{S_j\}_{j=1}^M$, then
(48)  $d\, H\!\left( \sum_{i=1}^n X_i \right) \leq \sum_{j=1}^M H\!\left( \sum_{i \in S_j} X_i \right)$.
Remark 3.
Inequality (48) is also a special case of [37] (Theorem 2), and they coincide if every element i [ n ] is included in a fixed number (d) of the subsets { S j } j = 1 M .
A specialization of Corollary 4 gives the next result.
Corollary 5.
Let $\{X_i\}_{i=1}^n$ be independent and discrete random variables with finite variances. Then, the following holds:
(a) For every $k \in [n-1]$,
(49)  $H\!\left( \sum_{i=1}^n X_i \right) \leq \frac{1}{\binom{n-1}{k-1}} \sum_{T \subseteq [n]: |T| = k} H\!\left( \sum_{\omega \in T} X_\omega \right)$,
and equivalently,
(50)  $N\!\left( \sum_{i=1}^n X_i \right) \leq \left( \prod_{T \subseteq [n]: |T| = k} N\!\left( \sum_{\omega \in T} X_\omega \right) \right)^{1/\binom{n-1}{k-1}}$.
(b) For every $k \in [n-1]$,
(51)  $N\!\left( \sum_{i=1}^n X_i \right) \leq \frac{1}{\binom{n}{k}} \sum_{T \subseteq [n]: |T| = k} N^{n/k}\!\left( \sum_{\omega \in T} X_\omega \right)$,
where (51) is in general looser than (50), with equivalence if $\{X_i\}_{i=1}^n$ are i.i.d.; in particular,
(52a)  $N\!\left( \sum_{i=1}^n X_i \right) \leq \left( \prod_{j=1}^n N\!\left( \sum_{i \neq j} X_i \right) \right)^{1/(n-1)}$
(52b)  $\leq \frac{1}{n} \sum_{j=1}^n N^{n/(n-1)}\!\left( \sum_{i \neq j} X_i \right)$.
Proof. 
Let $\{S_j\}_{j=1}^M$ be all the $k$-element subsets of $\Omega = [n]$ (with $M = \binom{n}{k}$). Then, every element $i \in [n]$ belongs to $d = \frac{kM}{n} = \binom{n-1}{k-1}$ such subsets, which then gives (49) as a special case of (48). Alternatively, (49) follows from the monotonicity of the sequence (32) in Corollary 3, which yields $m_k^{(n)} \geq m_n^{(n)}$ for all $k \in [n-1]$. Exponentiating both sides of (49) gives (50). Inequality (51) is a loosened version of (50), which follows by invoking the AM–GM inequality (i.e., the geometric mean of nonnegative real numbers is less than or equal to their arithmetic mean, with equality between these two means if and only if these numbers are all equal), in conjunction with the identity $\frac{k}{n} \binom{n}{k} = \binom{n-1}{k-1}$. Inequalities (50) and (51) are consequently equivalent if $\{X_i\}_{i=1}^n$ are i.i.d. random variables, and (52) is a specialized version of (50) and the loosened inequality (51) by setting $k = n-1$. □
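Inequality (49) can be checked numerically for independent discrete random variables, since the PMF of a sum of independent variables is the convolution of their PMFs. The following Python sketch (illustrative, not from the paper) does so; the alphabet size and the choice $(n, k) = (5, 3)$ are arbitrary.

from itertools import combinations
from math import comb
import numpy as np

def H(p):
    # Shannon entropy (bits) of a PMF given as a 1-D array
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(5)
n, k = 5, 3
pmfs = [rng.random(3) for _ in range(n)]
pmfs = [p / p.sum() for p in pmfs]           # PMFs of independent X_1, ..., X_n

def pmf_of_sum(T):
    q = np.array([1.0])
    for i in T:
        q = np.convolve(q, pmfs[i])          # PMF of the partial sum over T
    return q

lhs = H(pmf_of_sum(range(n)))
rhs = sum(H(pmf_of_sum(T)) for T in combinations(range(n), k)) / comb(n - 1, k - 1)
assert lhs <= rhs + 1e-12                    # inequality (49)
print(f"H(sum of all X_i) = {lhs:.4f} <= {rhs:.4f}")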
The next remarks consider information inequalities in Corollaries 3–5, in light of Theorem 1 here, and some known results in the literature.
Remark 4.
Inequality (49) was derived by Madiman as a special case of Theorem 2 in [37]. The proof of Corollary 5a shows that (49) can also be derived in two different ways, as special cases of both Theorem 1a and Proposition 4a.
Remark 5.
Inequality (51) can also be derived as a special case of Theorem 1a, where $f$ is the rank function in (19), and $g: \mathbb{R} \to \mathbb{R}$ is given by $g(x) \triangleq \exp(2nx)$ for all $x \in \mathbb{R}$. It also follows from the monotonicity property in Corollary 3c, which yields $w_k^{(n)}(n) \geq w_n^{(n)}(n)$ for all $k \in [n-1]$.
Remark 6.
The result in Theorem 8 of [31] is a special case of Theorem 1a here, which follows by taking the function $g$ in Theorem 1a to be the identity function. The flexibility in selecting the function $g$ in Theorem 1 makes it possible to obtain a larger collection of information inequalities. This is in part reflected in a comparison of Corollary 3 here with Corollary 9 of [31]. More specifically, the findings about the monotonicity properties in (30), (31) and (33) were obtained in Corollary 9 of [31], while relying on Theorem 8 of [31] and the sub/supermodularity properties of the considered Shannon information measures. It is noted, however, that the monotonicity results of the sequences (34)–(37) (Corollary 3c) are not implied by Theorem 8 of [31].
Remark 7.
Inequality (52) forms a counterpart of an entropy power inequality by Artstein et al. (Theorem 3 of [40]), where for independent random variables $\{X_i\}_{i=1}^n$ with finite variances:
(53)  $N\!\left( \sum_{i=1}^n X_i \right) \geq \frac{1}{n-1} \sum_{j=1}^n N\!\left( \sum_{i \neq j} X_i \right)$.
Inequality (50), and also its looser version in (51), form counterparts of the generalized inequality by Madiman and Barron, which reads (see inequality (4) in [41]):
(54)  $N\!\left( \sum_{i=1}^n X_i \right) \geq \frac{1}{\binom{n-1}{k-1}} \sum_{T \subseteq [n]: |T| = k} N\!\left( \sum_{\omega \in T} X_\omega \right)$,  $k \in [n-1]$.

4. Proofs

The present section provides proofs of (most of the) results in Section 3.

4.1. Proof of Theorem 1

We prove Item (a), and then readily prove Items (b)–(d). Define the auxiliary sequence
(55)  $f_k^{(n)} \triangleq \frac{1}{\binom{n}{k}} \sum_{T \subseteq \Omega: |T| = k} f(T)$,  $k \in [0:n]$,
averaging $f$ over all $k$-element subsets of the $n$-element set $\Omega \triangleq \{\omega_1, \ldots, \omega_n\}$. Let the permutation $\pi: [n] \to [n]$ be arbitrary. For $k \in [n-1]$, let
(56)  $S_1 \triangleq \{\omega_{\pi(1)}, \ldots, \omega_{\pi(k-1)}, \omega_{\pi(k)}\}$,  $S_2 \triangleq \{\omega_{\pi(1)}, \ldots, \omega_{\pi(k-1)}, \omega_{\pi(k+1)}\}$,
which are $k$-element subsets of $\Omega$ with $k-1$ elements in common. Then,
(57)  $f(S_1) + f(S_2) \geq f(S_1 \cup S_2) + f(S_1 \cap S_2)$,
which holds by the submodularity of $f$ (by assumption), i.e.,
(58)  $f\bigl( \{\omega_{\pi(1)}, \ldots, \omega_{\pi(k)}\} \bigr) + f\bigl( \{\omega_{\pi(1)}, \ldots, \omega_{\pi(k-1)}, \omega_{\pi(k+1)}\} \bigr) \geq f\bigl( \{\omega_{\pi(1)}, \ldots, \omega_{\pi(k+1)}\} \bigr) + f\bigl( \{\omega_{\pi(1)}, \ldots, \omega_{\pi(k-1)}\} \bigr)$.
Averaging the terms on both sides of (58) over all the $n!$ permutations $\pi$ of $[n]$ gives
(59a)  $\frac{1}{n!} \sum_\pi f\bigl( \{\omega_{\pi(1)}, \ldots, \omega_{\pi(k)}\} \bigr) = \frac{k!\,(n-k)!}{n!} \sum_{T \subseteq \Omega: |T| = k} f(T)$
(59b)  $= \frac{1}{\binom{n}{k}} \sum_{T \subseteq \Omega: |T| = k} f(T)$
(59c)  $= f_k^{(n)}$,
and similarly
(60a)  $\frac{1}{n!} \sum_\pi f\bigl( \{\omega_{\pi(1)}, \ldots, \omega_{\pi(k-1)}, \omega_{\pi(k+1)}\} \bigr) = f_k^{(n)}$,
(60b)  $\frac{1}{n!} \sum_\pi f\bigl( \{\omega_{\pi(1)}, \ldots, \omega_{\pi(k+1)}\} \bigr) = f_{k+1}^{(n)}$,
(60c)  $\frac{1}{n!} \sum_\pi f\bigl( \{\omega_{\pi(1)}, \ldots, \omega_{\pi(k-1)}\} \bigr) = f_{k-1}^{(n)}$,
with $f_0^{(n)} = 0$ since, by assumption, $f(\emptyset) = 0$. Combining (58)–(60) gives
(61)  $2 f_k^{(n)} \geq f_{k+1}^{(n)} + f_{k-1}^{(n)}$,  $k \in [n-1]$,
which is rewritten as
(62)  $f_k^{(n)} - f_{k-1}^{(n)} \geq f_{k+1}^{(n)} - f_k^{(n)}$,  $k \in [n-1]$.
Consequently, it follows that
(63a)  $\frac{f_k^{(n)}}{k} - \frac{f_{k+1}^{(n)}}{k+1} = \frac{1}{k} \sum_{j=1}^k \bigl( f_j^{(n)} - f_{j-1}^{(n)} \bigr) - \frac{1}{k+1} \sum_{j=1}^{k+1} \bigl( f_j^{(n)} - f_{j-1}^{(n)} \bigr)$
(63b)  $= \left( \frac{1}{k} - \frac{1}{k+1} \right) \sum_{j=1}^k \bigl( f_j^{(n)} - f_{j-1}^{(n)} \bigr) - \frac{1}{k+1} \bigl( f_{k+1}^{(n)} - f_k^{(n)} \bigr)$
(63c)  $= \frac{1}{k(k+1)} \sum_{j=1}^k \Bigl[ \bigl( f_j^{(n)} - f_{j-1}^{(n)} \bigr) - \bigl( f_{k+1}^{(n)} - f_k^{(n)} \bigr) \Bigr]$
(63d)  $\geq 0$,
where equality (63a) holds since $f_0^{(n)} = 0$, and inequality (63d) holds by (62). The sequence $\bigl\{ f_k^{(n)}/k \bigr\}_{k=1}^n$ is therefore monotonically decreasing, and in particular
(64)  $\frac{f_k^{(n)}}{k} \geq \frac{f_n^{(n)}}{n}$,  $k \in [n]$.
We next prove (25) for $\alpha = 1$, and then proceed to prove Theorem 1. By (64),
(65)  $\frac{f_n^{(n)}}{n} \leq \frac{f_{n-1}^{(n)}}{n-1}$,
where, by (55),
(66)  $f_n^{(n)} = f(\Omega)$,  $f_{n-1}^{(n)} = \frac{1}{n} \sum_{T \subseteq \Omega: |T| = n-1} f(T)$.
Combining (65) and (66) gives
(67)  $(n-1)\, f(\Omega) \leq \sum_{T \subseteq \Omega: |T| = n-1} f(T)$.
Since there are $n$ subsets $T \subseteq \Omega$ with $|T| = n-1$, rearranging terms in (67) gives (25) for $\alpha = 1$; it should be noted that, for $\alpha = 1$, the set function $f$ does not need to be nonnegative for the satisfiability of (25) (however, this will be required for $\alpha > 1$).
We next prove Item (a). By (20), for $k \in [n]$,
(68a)  $t_k^{(n)} = \frac{1}{\binom{n}{k}} \sum_{T \subseteq \Omega: |T| = k} g\!\left( \frac{f(T)}{k} \right)$
(68b)  $= \frac{1}{\binom{n}{k}} \sum_{T = \{t_1, \ldots, t_k\} \subseteq \Omega} g\!\left( \frac{f(\{t_1, \ldots, t_k\})}{k} \right)$.
Fix $\Omega_k \triangleq \{t_1, \ldots, t_k\} \subseteq \Omega$, and let $\widetilde{f}: 2^{\Omega_k} \to \mathbb{R}$ be the restriction of the function $f$ to the subsets of $\Omega_k$. Then, $\widetilde{f}$ is a submodular set function with $\widetilde{f}(\emptyset) = 0$; similarly to (55), (65) and (66), with $f$ replaced by $\widetilde{f}$ and $n$ replaced by $k$, the sequence $\bigl\{ \widetilde{f}_j^{(k)}/j \bigr\}_{j=1}^k$ is monotonically decreasing. Hence, for $k \in [2:n]$,
(69)  $\frac{\widetilde{f}_k^{(k)}}{k} \leq \frac{\widetilde{f}_{k-1}^{(k)}}{k-1}$,
where
(70a)  $\widetilde{f}_k^{(k)} = \widetilde{f}(\Omega_k) = f(\{t_1, \ldots, t_k\})$,
(70b)  $\widetilde{f}_{k-1}^{(k)} = \frac{1}{k} \sum_{T \subseteq \Omega_k: |T| = k-1} \widetilde{f}(T)$
(70c)  $= \frac{1}{k} \sum_{T \subseteq \Omega_k: |T| = k-1} f(T)$
(70d)  $= \frac{1}{k} \sum_{i=1}^k f(\{t_1, \ldots, t_{i-1}, t_{i+1}, \ldots, t_k\})$.
Combining (69) and (70) gives
(71)  $f(\{t_1, \ldots, t_k\}) \leq \frac{1}{k-1} \sum_{i=1}^k f(\{t_1, \ldots, t_{i-1}, t_{i+1}, \ldots, t_k\})$,
and, since by assumption $g$ is monotonically increasing,
(72)  $g\!\left( \frac{f(\{t_1, \ldots, t_k\})}{k} \right) \leq g\!\left( \frac{1}{k} \sum_{i=1}^k \frac{f(\{t_1, \ldots, t_{i-1}, t_{i+1}, \ldots, t_k\})}{k-1} \right)$.
From (68) and (72), for all $k \in [2:n]$,
(73)  $t_k^{(n)} \leq \frac{1}{\binom{n}{k}} \sum_{T = \{t_1, \ldots, t_k\} \subseteq \Omega} g\!\left( \frac{1}{k} \sum_{i=1}^k \frac{f(\{t_1, \ldots, t_{i-1}, t_{i+1}, \ldots, t_k\})}{k-1} \right)$,
and
(74a)  $t_k^{(n)} \leq \frac{1}{k \binom{n}{k}} \sum_{i=1}^k \sum_{T = \{t_1, \ldots, t_k\} \subseteq \Omega} g\!\left( \frac{f(\{t_1, \ldots, t_{i-1}, t_{i+1}, \ldots, t_k\})}{k-1} \right)$
(74b)  $= \frac{n-k+1}{k \binom{n}{k}} \sum_{S \subseteq \Omega: |S| = k-1} g\!\left( \frac{f(S)}{k-1} \right)$
(74c)  $= \frac{k!\,(n-k)!\,(n-k+1)}{n!\,k} \sum_{S \subseteq \Omega: |S| = k-1} g\!\left( \frac{f(S)}{k-1} \right)$
(74d)  $= \frac{(k-1)!\,(n-k+1)!}{n!} \sum_{S \subseteq \Omega: |S| = k-1} g\!\left( \frac{f(S)}{k-1} \right)$
(74e)  $= \frac{1}{\binom{n}{k-1}} \sum_{S \subseteq \Omega: |S| = k-1} g\!\left( \frac{f(S)}{k-1} \right)$
(74f)  $= t_{k-1}^{(n)}$,
where (74a) holds by applying Jensen's inequality to the convex function $g$; (74b) holds since the summand does not depend on which element $t_i$ completes a $(k-1)$-element subset $S = \{t_1, \ldots, t_{i-1}, t_{i+1}, \ldots, t_k\} \subseteq \Omega$ into a $k$-element subset: for every such $S$, there are $n-k+1$ possibilities to extend it by a single element $t_i$ into a $k$-element subset $T = \{t_1, \ldots, t_k\} \subseteq \Omega$; (74c)–(74e) are straightforward, and (74f) holds by the definition in (20). This proves Item (a).
Item (b) follows from Item (a), and similarly Item (d) follows from Item (c), by replacing $g$ with $-g$. Item (c) is next verified. If $f$ is a supermodular set function with $f(\emptyset) = 0$, then (57), (58) and (61)–(63) hold with flipped inequality signs. Hence, if $g$ is monotonically decreasing, then inequalities (72) and (73) are reversed; finally, if $g$ is also concave, then (by Jensen's inequality) (74) holds with a flipped inequality sign, which proves Item (c).

4.2. Proof of Corollary 1

By assumption, $f: 2^\Omega \to \mathbb{R}$ is a rank function, which implies that $0 \leq f(T) \leq f(\Omega)$ for every $T \subseteq \Omega$. Since (by definition) $f$ is submodular with $f(\emptyset) = 0$, and (by assumption) the function $g$ is convex and monotonically increasing, then (from (22), while replacing $k$ with $k_n$)
(75)  $\binom{n}{k_n}\, g\!\left( \frac{f(\Omega)}{n} \right) \leq \sum_{T \subseteq \Omega: |T| = k_n} g\!\left( \frac{f(T)}{k_n} \right) \leq \binom{n}{k_n}\, g\!\left( \frac{f(\Omega)}{k_n} \right)$,  $n \in \mathbb{N}$.
By the second assumption in Corollary 1, for positive values of $x$ that are sufficiently close to zero, we have
  • $g(x) \geq g(0) > 0$ if $g(0) > 0$;
  • $g(x)$ scales like $\frac{1}{\ell!}\, g^{(\ell)}(0)\, x^\ell$ if $g(0) = \ldots = g^{(\ell-1)}(0) = 0$ with $g^{(\ell)}(0) > 0$ for some $\ell \in \mathbb{N}$.
In both cases, it follows that
(76)  $\lim_{x \to 0^+} x \log g(x) = 0$.
In light of (75) and (76), and since (by assumption) $k_n \to \infty$ as $n \to \infty$, it follows that
(77)  $\lim_{n \to \infty} \frac{1}{n} \left\{ \log \sum_{T \subseteq \Omega: |T| = k_n} g\!\left( \frac{f(T)}{k_n} \right) - \log \binom{n}{k_n} \right\} = 0$.
By the following upper and lower bounds on the binomial coefficient:
(78)  $\frac{1}{n+1}\, \exp\!\left( n\, H_b\!\left( \frac{k_n}{n} \right) \right) \leq \binom{n}{k_n} \leq \exp\!\left( n\, H_b\!\left( \frac{k_n}{n} \right) \right)$,
the combination of (77) and (78) gives equality (23). Equality (24) holds as a special case of (23), under the assumption that $\lim_{n \to \infty} \frac{k_n}{n} = \beta \in [0,1]$.

4.3. Proof of Corollary 2

For $\alpha = 1$, Corollary 2 is proved in (67). Fix $\alpha > 1$, and let $g: \mathbb{R} \to \mathbb{R}$ be
(79)  $g(x) \triangleq \begin{cases} x^\alpha, & x \geq 0, \\ 0, & x < 0, \end{cases}$
which is monotonically increasing and convex on the real line. By Theorem 1a,
(80)  $t_k^{(n)} \geq t_n^{(n)}$,  $k \in [n]$.
Since by assumption $f$ is nonnegative, it follows from (20) and (79) that
(81)  $t_k^{(n)} = \frac{1}{\binom{n}{k}} \sum_{T \subseteq \Omega: |T| = k} g\!\left( \frac{f(T)}{k} \right) = \frac{1}{k^\alpha \binom{n}{k}} \sum_{T \subseteq \Omega: |T| = k} f^\alpha(T)$.
Combining (80) and (81) and rearranging terms gives, for all $\alpha > 1$,
(82a)  $\sum_{T \subseteq \Omega: |T| = k} f^\alpha(T) \geq \left( \frac{k}{n} \right)^\alpha \binom{n}{k}\, f^\alpha(\Omega)$
(82b)  $= \left( \frac{k}{n} \right)^{\alpha-1} \binom{n-1}{k-1}\, f^\alpha(\Omega)$,
where equality (82b) holds by the identity $\frac{k}{n} \binom{n}{k} = \binom{n-1}{k-1}$. This further gives
(83a)  $\sum_{T \subseteq \Omega: |T| = k} \bigl[ f^\alpha(\Omega) - f^\alpha(T) \bigr] = \binom{n}{k}\, f^\alpha(\Omega) - \sum_{T \subseteq \Omega: |T| = k} f^\alpha(T)$
(83b)  $\leq \left( 1 - \frac{k^\alpha}{n^\alpha} \right) \binom{n}{k}\, f^\alpha(\Omega)$
(83c)  $= c_\alpha(n,k)\, f^\alpha(\Omega)$,
where equality (83c) holds by the definition in (26). This proves (25) for $\alpha > 1$.
We next prove Item (b). The function $f$ is (by assumption) a rank function, which yields its nonnegativity. Hence, the leftmost inequality in (27) holds by (82). The rightmost inequality in (27) also holds since $f: 2^\Omega \to \mathbb{R}$ is monotonically increasing, which yields $f(T) \leq f(\Omega)$ for all $T \subseteq \Omega$. For $k \in [n]$ and $\alpha \geq 0$ (in particular, for $\alpha \geq 1$),
(84)  $\sum_{T \subseteq \Omega: |T| = k} f^\alpha(T) \leq \binom{n}{k}\, f^\alpha(\Omega)$,
where (84) holds since there are $\binom{n}{k}$ $k$-element subsets $T$ of the $n$-element set $\Omega$, and every summand $f^\alpha(T)$ (with $T \subseteq \Omega$) is upper bounded by $f^\alpha(\Omega)$.

5. A Problem in Extremal Graph Theory

This section applies the generalization of Han’s inequality in (28) to the following problem.

5.1. Problem Formulation

Let $\mathcal{A} \subseteq \{-1, 1\}^n$, with $n \in \mathbb{N}$, and let $\tau \in [n]$. Let $G = G_{\mathcal{A}, \tau}$ be an undirected simple graph with vertex set $V(G) = \mathcal{A}$, where pairs of vertices in $G$ are adjacent (i.e., connected by an edge) if and only if they are represented by vectors in $\mathcal{A}$ whose Hamming distance is less than or equal to $\tau$:
(85)  $\{x^n, y^n\} \in E(G) \ \Longleftrightarrow\ x^n, y^n \in \mathcal{A},\ x^n \neq y^n,\ d_H(x^n, y^n) \leq \tau$.
The question is how large the size of $G$ can be (i.e., how many edges it may have) as a function of the cardinality of the set $\mathcal{A}$, and possibly also of some basic properties of the set $\mathcal{A}$.
This problem and its related analysis generalize and refine, in a nontrivial way, the bound in Theorem 4.2 of [6] which applies to the special case where τ = 1 . The motivation for this extension is next considered.

5.2. Problem Motivation

Constraint coding is common in many data recording systems and data communication systems, where some sequences are more prone to error than others, and a constraint on the sequences that are allowed to be recorded or transmitted is imposed in order to reduce the likelihood of error. Given such a constraint, it is then necessary to encode arbitrary user sequences into sequences that obey the constraint.
From an information–theoretic perspective, this problem can be interpreted as follows. Consider a communication channel $W: \mathcal{X} \to \mathcal{Y}$ with input alphabet $\mathcal{X}$ and output alphabet $\mathcal{Y}$, and suppose that a constraint is imposed on the sequences that are allowed to be transmitted over the channel. As a result of such a constraint, the information sequences are first encoded into codewords by an error-correction encoder, followed by a constrained encoder that maps these codewords into constrained sequences; let them be binary $n$-length sequences from the set $\mathcal{A} \subseteq \{-1, 1\}^n$. A channel modulator then modulates these sequences into symbols from $\mathcal{X}$, and the received sequences at the channel output, with alphabet $\mathcal{Y}$, are first demodulated, and then decoded (in the reverse order of the encoding process) by the constrained decoder and the error-correction decoder.
Consider a channel model where pairs of binary $n$-length sequences from the set $\mathcal{A}$ whose Hamming distance is less than or equal to a fixed number $\tau$ share a common output sequence with positive probability, whereas this ceases to be the case if the Hamming distance is larger than $\tau$. In other words, we assume that, by design, pairs of sequences in $\mathcal{A}$ whose Hamming distance is larger than $\tau$ cannot be confused, in the sense that there does not exist a common output sequence which may possibly be received (with positive probability) at the channel output.
The confusion graph G that is associated with this setup is an undirected simple graph whose vertices represent the n-length binary sequences in A , and pairs of vertices are adjacent if and only if the Hamming distance between the sequences that they represent is not larger than τ . The size of G (i.e., its number of edges) is equal to the number of pairs of sequences in A which may not be distinguishable by the decoder.
Further motivation for studying this problem is considered in the continuation (see Section 5.5).

5.3. Analysis

We next derive an upper bound on the size of the graph $G$. Let $X^n = (X_1, \ldots, X_n)$ be chosen uniformly at random from the set $\mathcal{A} \subseteq \{-1, 1\}^n$, and let $P_{X^n}$ be the PMF of $X^n$. Then,
(86)  $P_{X^n}(x^n) = \begin{cases} \frac{1}{|\mathcal{A}|}, & \text{if } x^n \in \mathcal{A}, \\ 0, & \text{if } x^n \notin \mathcal{A}, \end{cases}$
which implies that
(87)  $H(X^n) = \log |\mathcal{A}|$.
The graph $G$ is an undirected and simple graph with a vertex set $V(G) = \mathcal{A}$ (i.e., the vertices of $G$ are in one-to-one correspondence with the binary vectors in the set $\mathcal{A}$). Its set of edges $E(G)$ consists of the edges which connect all pairs of vertices in $G$ whose Hamming distance is less than or equal to $\tau$. For $d \in [\tau]$, let $E_d(G)$ be the set of edges in $G$ which connect all pairs of vertices in $G$ whose Hamming distance is equal to $d$, so
(88)  $|E(G)| = \sum_{d=1}^\tau |E_d(G)|$.
For $x^n \in \{-1, 1\}^n$, $d \in [n]$, and integers $k_1, \ldots, k_d$ such that $1 \leq k_1 < \ldots < k_d \leq n$, let
(89)  $\widetilde{x}(k_1, \ldots, k_d) \triangleq (x_1, \ldots, x_{k_1 - 1}, x_{k_1 + 1}, \ldots, x_{k_d - 1}, x_{k_d + 1}, \ldots, x_n)$
be a subvector of $x^n$ of length $n - d$, obtained by dropping the bits of $x^n$ in positions $k_1, \ldots, k_d$; if $d = n$, then $(k_1, \ldots, k_n) = (1, \ldots, n)$, and $\widetilde{x}(k_1, \ldots, k_d)$ is an empty vector. By the chain rule for the Shannon entropy,
(90a)  $H(X^n) - H\bigl( \widetilde{X}(k_1, \ldots, k_d) \bigr) = H\bigl( X_{k_1}, \ldots, X_{k_d} \,\big|\, \widetilde{X}(k_1, \ldots, k_d) \bigr)$
(90b)  $= -\sum_{x^n \in \{-1,1\}^n} P_{X^n}(x^n) \log P_{X_{k_1}, \ldots, X_{k_d} | \widetilde{X}(k_1, \ldots, k_d)}\bigl( x_{k_1}, \ldots, x_{k_d} \,\big|\, \widetilde{x}(k_1, \ldots, k_d) \bigr)$
(90c)  $= -\frac{1}{|\mathcal{A}|} \sum_{x^n \in \mathcal{A}} \log P_{X_{k_1}, \ldots, X_{k_d} | \widetilde{X}(k_1, \ldots, k_d)}\bigl( x_{k_1}, \ldots, x_{k_d} \,\big|\, \widetilde{x}(k_1, \ldots, k_d) \bigr)$,
where equality (90c) holds by (86).
where equality (90c) holds by (86).
For $x^n \in \{-1, 1\}^n$, $d \in [n]$, and integers $k_1, \ldots, k_d$ such that $1 \leq k_1 < \ldots < k_d \leq n$, let
(91)  $\overline{x}(k_1, \ldots, k_d) \triangleq (x_1, \ldots, x_{k_1 - 1}, -x_{k_1}, x_{k_1 + 1}, \ldots, x_{k_d - 1}, -x_{k_d}, x_{k_d + 1}, \ldots, x_n)$,
where the bits of $x^n$ in positions $k_1, \ldots, k_d$ are flipped (in contrast to $\widetilde{x}(k_1, \ldots, k_d)$, where the bits of $x^n$ in these positions are dropped), so $\overline{x}(k_1, \ldots, k_d) \in \{-1, 1\}^n$ and $d_H\bigl( x^n, \overline{x}(k_1, \ldots, k_d) \bigr) = d$. Likewise, if $x^n, y^n \in \{-1, 1\}^n$ satisfy $d_H(x^n, y^n) = d$, then there exist integers $k_1, \ldots, k_d$ with $1 \leq k_1 < \ldots < k_d \leq n$ such that $y^n = \overline{x}(k_1, \ldots, k_d)$ (i.e., $k_1, \ldots, k_d$ are the positions, in increasing order, where the vectors $x^n$ and $y^n$ differ).
Let us characterize the set $\mathcal{A}$ by its cardinality, and the following two natural numbers:
(a) If $x^n \in \mathcal{A}$ and $\overline{x}(k_1, \ldots, k_d) \in \mathcal{A}$ for any $(k_1, \ldots, k_d)$ such that $1 \leq k_1 < \ldots < k_d \leq n$, then there are at least $m_d \triangleq m_d(\mathcal{A})$ vectors $y^n \in \mathcal{A}$ whose subvectors $\widetilde{y}(k_1, \ldots, k_d)$ coincide with $\widetilde{x}(k_1, \ldots, k_d)$, i.e., the integer $m_d \geq 2$ satisfies
(92)  $m_d \leq \min_{x^n \in \mathcal{A},\ 1 \leq k_1 < \ldots < k_d \leq n:\ \overline{x}(k_1, \ldots, k_d) \in \mathcal{A}} \bigl| \{ y^n \in \mathcal{A} : \widetilde{y}(k_1, \ldots, k_d) = \widetilde{x}(k_1, \ldots, k_d) \} \bigr|$.
By definition, the integer $m_d$ always exists, and
(93)  $2 \leq m_d \leq \min\{2^d, |\mathcal{A}|\}$.
If no information is available about the value of $m_d$, then it can be taken by default to be equal to 2 (since, by assumption, the two vectors $x^n \in \mathcal{A}$ and $y^n \triangleq \overline{x}(k_1, \ldots, k_d) \in \mathcal{A}$ satisfy the equality $\widetilde{y}(k_1, \ldots, k_d) = \widetilde{x}(k_1, \ldots, k_d)$).
(b) If $x^n \in \mathcal{A}$ and $\overline{x}(k_1, \ldots, k_d) \notin \mathcal{A}$ for any $(k_1, \ldots, k_d)$ such that $1 \leq k_1 < \ldots < k_d \leq n$, then there are at least $\ell_d \triangleq \ell_d(\mathcal{A})$ vectors $y^n \in \mathcal{A}$ whose subvectors $\widetilde{y}(k_1, \ldots, k_d)$ coincide with $\widetilde{x}(k_1, \ldots, k_d)$, i.e., the integer $\ell_d \geq 1$ satisfies
(94)  $\ell_d \leq \min_{x^n \in \mathcal{A},\ 1 \leq k_1 < \ldots < k_d \leq n:\ \overline{x}(k_1, \ldots, k_d) \notin \mathcal{A}} \bigl| \{ y^n \in \mathcal{A} : \widetilde{y}(k_1, \ldots, k_d) = \widetilde{x}(k_1, \ldots, k_d) \} \bigr|$.
By definition, the integer $\ell_d$ always exists, and
(95)  $1 \leq \ell_d \leq \min\{2^d - 1, |\mathcal{A}| - 1\}$.
Likewise, if no information is available about the value of $\ell_d$, then it can be taken by default to be equal to 1 (since $x^n \in \mathcal{A}$ satisfies the requirement about its subvector $\widetilde{x}(k_1, \ldots, k_d)$ in (94)).
In general, it would be preferable to have the largest possible values of $m_d$ and $\ell_d$ (i.e., those satisfying inequalities (92) and (94) with equalities) for obtaining a better upper bound on the size of $G$; this point will be clarified in the sequel. If $d = 1$, then $m_1 = 2$ and $\ell_1 = 1$ are the best possible constants (this holds by the definitions in (92) and (94), and it can also be verified by the coincidence of the upper and lower bounds in (93) for $d = 1$, as well as those in (95)).
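For a concrete set $\mathcal{A}$, the maximal integers satisfying (92) and (94) can be computed by brute force. The following Python sketch (an illustration, not from the paper) does so for the example set $\mathcal{A} = \{x \in \{-1,1\}^n : \sum_i x_i \geq 0\}$, which is an arbitrary choice.

from itertools import combinations, product

n, d = 4, 2
A = {x for x in product((-1, 1), repeat=n) if sum(x) >= 0}  # example set

def flip(x, ks):
    # \bar{x}(k_1, ..., k_d): flip the bits in positions ks
    return tuple(-v if i in ks else v for i, v in enumerate(x))

def drop(x, ks):
    # \tilde{x}(k_1, ..., k_d): drop the bits in positions ks
    return tuple(v for i, v in enumerate(x) if i not in ks)

m_d, ell_d = None, None
for x in A:
    for ks in combinations(range(n), d):
        cnt = sum(drop(y, ks) == drop(x, ks) for y in A)
        if flip(x, ks) in A:
            m_d = cnt if m_d is None else min(m_d, cnt)
        else:
            ell_d = cnt if ell_d is None else min(ell_d, cnt)

if m_d is not None:
    assert 2 <= m_d <= min(2 ** d, len(A))              # consistent with (93)
if ell_d is not None:
    assert 1 <= ell_d <= min(2 ** d - 1, len(A) - 1)    # consistent with (95)
print("m_d =", m_d, " ell_d =", ell_d)  # maximal values satisfying (92), (94)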
If $x^n \in \mathcal{A}$, then we distinguish between the following two cases:
  • If $\overline{x}(k_1, \ldots, k_d) \in \mathcal{A}$, then
    (96)  $P_{X_{k_1}, \ldots, X_{k_d} | \widetilde{X}(k_1, \ldots, k_d)}\bigl( x_{k_1}, \ldots, x_{k_d} \,\big|\, \widetilde{x}(k_1, \ldots, k_d) \bigr) \leq \frac{1}{m_d}$,
    which holds by the way that $m_d$ is defined in (92), and since $X^n$ is randomly selected to be equiprobable on the set $\mathcal{A}$.
  • If $\overline{x}(k_1, \ldots, k_d) \notin \mathcal{A}$, then
    (97)  $P_{X_{k_1}, \ldots, X_{k_d} | \widetilde{X}(k_1, \ldots, k_d)}\bigl( x_{k_1}, \ldots, x_{k_d} \,\big|\, \widetilde{x}(k_1, \ldots, k_d) \bigr) \leq \frac{1}{\ell_d}$,
    which holds by the way that $\ell_d$ is defined in (94), and since $X^n$ is equiprobable on $\mathcal{A}$.
For $d \in [\tau]$ and $1 \leq k_1 < \ldots < k_d \leq n$, it follows from (90), (96) and (97) that
(98)  $H(X^n) - H\bigl( \widetilde{X}(k_1, \ldots, k_d) \bigr) \geq \frac{\log m_d}{|\mathcal{A}|} \sum_{x^n} \mathbf{1}\bigl\{ x^n \in \mathcal{A},\ \overline{x}(k_1, \ldots, k_d) \in \mathcal{A} \bigr\} + \frac{\log \ell_d}{|\mathcal{A}|} \sum_{x^n} \mathbf{1}\bigl\{ x^n \in \mathcal{A},\ \overline{x}(k_1, \ldots, k_d) \notin \mathcal{A} \bigr\}$,
which, by summing on both sides of inequality (98) over all integers $k_1, \ldots, k_d$ such that $1 \leq k_1 < \ldots < k_d \leq n$, yields
(99)  $\sum_{(k_1, \ldots, k_d): 1 \leq k_1 < \ldots < k_d \leq n} \Bigl[ H(X^n) - H\bigl( \widetilde{X}(k_1, \ldots, k_d) \bigr) \Bigr] \geq \frac{\log m_d}{|\mathcal{A}|} \sum_{(k_1, \ldots, k_d): 1 \leq k_1 < \ldots < k_d \leq n} \sum_{x^n} \mathbf{1}\bigl\{ x^n \in \mathcal{A},\ \overline{x}(k_1, \ldots, k_d) \in \mathcal{A} \bigr\} + \frac{\log \ell_d}{|\mathcal{A}|} \sum_{(k_1, \ldots, k_d): 1 \leq k_1 < \ldots < k_d \leq n} \sum_{x^n} \mathbf{1}\bigl\{ x^n \in \mathcal{A},\ \overline{x}(k_1, \ldots, k_d) \notin \mathcal{A} \bigr\}$.
Equality holds in (99) if the minima on the RHS of (92) and (94) are attained by any element in these sets, and if (92) and (94) are satisfied with equalities (i.e., $m_d$ and $\ell_d$ are the maximal integers to satisfy inequalities (92) and (94) for the given set $\mathcal{A}$). Hence, this equality holds in particular for $d = 1$, with the constants $m_1 = 2$ and $\ell_1 = 1$.
The double sum in the first term on the RHS of (99) is equal to
(100)  $\sum_{(k_1, \ldots, k_d): 1 \leq k_1 < \ldots < k_d \leq n} \sum_{x^n} \mathbf{1}\bigl\{ x^n \in \mathcal{A},\ \overline{x}(k_1, \ldots, k_d) \in \mathcal{A} \bigr\} = 2\, |E_d(G)|$,
since every pair of adjacent vertices in $G$ that refer to vectors in $\mathcal{A}$ whose Hamming distance is equal to $d$ is of the form $x^n \in \mathcal{A}$ and $\overline{x}(k_1, \ldots, k_d) \in \mathcal{A}$, and vice versa, and every edge $\{x^n, \overline{x}(k_1, \ldots, k_d)\} \in E_d(G)$ is counted twice in the double summation on the LHS of (100). For calculating the double sum in the second term on the RHS of (99), we first calculate the sum of these two double summations:
(101a)  $\sum_{(k_1, \ldots, k_d): 1 \leq k_1 < \ldots < k_d \leq n} \sum_{x^n} \Bigl[ \mathbf{1}\bigl\{ x^n \in \mathcal{A},\ \overline{x}(k_1, \ldots, k_d) \in \mathcal{A} \bigr\} + \mathbf{1}\bigl\{ x^n \in \mathcal{A},\ \overline{x}(k_1, \ldots, k_d) \notin \mathcal{A} \bigr\} \Bigr]$
(101b)  $= \sum_{(k_1, \ldots, k_d): 1 \leq k_1 < \ldots < k_d \leq n} \sum_{x^n} \mathbf{1}\{ x^n \in \mathcal{A} \}$
(101c)  $= \sum_{(k_1, \ldots, k_d): 1 \leq k_1 < \ldots < k_d \leq n} |\mathcal{A}|$
(101d)  $= \binom{n}{d} |\mathcal{A}|$,
so, subtracting (100) from (101d) gives that
(102)  $\sum_{(k_1, \ldots, k_d): 1 \leq k_1 < \ldots < k_d \leq n} \sum_{x^n} \mathbf{1}\bigl\{ x^n \in \mathcal{A},\ \overline{x}(k_1, \ldots, k_d) \notin \mathcal{A} \bigr\} = \binom{n}{d} |\mathcal{A}| - 2\, |E_d(G)|$.
Substituting (100) and (102) into the RHS of (99) gives that, for all $d \in [\tau]$,
(103a)  $\sum_{(k_1, \ldots, k_d): 1 \leq k_1 < \ldots < k_d \leq n} \Bigl[ H(X^n) - H\bigl( \widetilde{X}(k_1, \ldots, k_d) \bigr) \Bigr] \geq \frac{2\, |E_d(G)| \log m_d}{|\mathcal{A}|} + \frac{\log \ell_d}{|\mathcal{A}|} \left( \binom{n}{d} |\mathcal{A}| - 2\, |E_d(G)| \right)$
(103b)  $= \binom{n}{d} \log \ell_d + \frac{2\, |E_d(G)|}{|\mathcal{A}|} \log \frac{m_d}{\ell_d}$,
with the same necessary and sufficient condition for equality in (103a) as in (99). (Recall that it is in particular an equality for $d = 1$, in which case $m_1 = 2$ and $\ell_1 = 1$.)
By the generalized Han’s inequality in (28),
(104a) ( k 1 , , k d ) : 1 k 1 < < k d n H ( X n ) H X ˜ ( k 1 , , k d ) n 1 d 1 H ( X n ) (104b) = n 1 d 1 log | A | ,
where equality (104b) holds by (87). Combining (103) and (104) yields
n 1 d 1 log | A | n d log d + 2 | E d ( G ) | | A | log m d d ,
and, by the identity n d = n d n 1 d 1 , we get
| E d ( G ) | n 1 d 1 | A | log | A | n d log d 2 log m d d .
This upper bound is specialized, for $d = 1$, to Theorem 4.2 of [6] (where, by definition, $m_1 = 2$ and $\ell_1 = 1$). This gives that the number of edges in $G$ connecting pairs of vertices which refer to binary vectors in $\mathcal{A}$ whose Hamming distance is 1 satisfies
(107)  $|E_1(G)| \leq \tfrac{1}{2}\, |\mathcal{A}| \log_2 |\mathcal{A}|$.
It is possible to select, by default, the values of the integers $m_d$ and $\ell_d$ to be equal to 2 and 1, respectively, independently of the value of $d \in [\tau]$. It therefore follows that the upper bound in (106) can be loosened to
(108)  $|E_d(G)| \leq \tfrac{1}{2} \binom{n-1}{d-1} |\mathcal{A}| \log_2 |\mathcal{A}|$.
This shows that the bound in (108) generalizes the result in Theorem 4.2 of [6], based only on the knowledge of the cardinality of $\mathcal{A}$. Furthermore, the bound (108) can be tightened by the refined bound (106) if the characterization of the set $\mathcal{A}$ allows one to assert values for $m_d$ and $\ell_d$ that are larger than the trivial values of 2 and 1, respectively.
In light of (88) and (108), the number of edges in the graph $G$ satisfies
(109)  $|E(G)| \leq \tfrac{1}{2} \left( \sum_{d=1}^\tau \binom{n-1}{d-1} \right) |\mathcal{A}| \log_2 |\mathcal{A}|$,
and if $\tau \leq \frac{n+1}{2}$, then it follows that
(110)  $|E(G)| \leq \tfrac{1}{2} \exp\!\left( (n-1)\, H_b\!\left( \frac{\tau - 1}{n-1} \right) \right) |\mathcal{A}| \log_2 |\mathcal{A}|$.
Indeed, the transition from (109) to (110) holds by the inequality
(111)  $\sum_{k=0}^{\lfloor \theta n \rfloor} \binom{n}{k} \leq \exp\bigl( n\, H_b(\theta) \bigr)$,  $\theta \in \bigl[ 0, \tfrac{1}{2} \bigr]$,
where the latter bound is asymptotically tight in the exponent of $n$ (for sufficiently large values of $n$).

5.4. Comparison of Bounds

We next consider the tightness of the refined bound (106) and the loosened bound (108). Since $\mathcal{A}$ is a subset of the $n$-dimensional cube $\{-1, 1\}^n$, every point in $\mathcal{A}$ has at most $\binom{n}{d}$ neighbors in $\mathcal{A}$ at Hamming distance $d$, so
(112)  $|E_d(G)| \leq \tfrac{1}{2} \binom{n}{d} |\mathcal{A}|$.
Comparing the bound on the RHS of (106) with the trivial bound in (112) shows that the former bound is useful if and only if
(113)  $\log |\mathcal{A}| - \frac{n}{d} \log \ell_d \leq \frac{n}{d} \log \frac{m_d}{\ell_d}$,
which is obtained by relying on the identity $\binom{n}{d} = \frac{n}{d} \binom{n-1}{d-1}$. Rearranging terms in (113) gives the necessary and sufficient condition
(114)  $|\mathcal{A}| \leq (m_d)^{n/d}$,
which is independent of the value of $\ell_d$. Since, by definition, $m_d \geq 2$, inequality (114) is automatically satisfied if the stronger condition
(115)  $|\mathcal{A}| \leq 2^{n/d}$
is imposed. The latter also forms a necessary and sufficient condition for the usefulness of the looser bound on the RHS of (108) in comparison to (112).
Example 1.
Suppose that the set $\mathcal{A} \subseteq \{-1, 1\}^n$ is characterized by the property that for all $d \in [\tau]$, with a fixed integer $\tau \in [n]$: if $x^n \in \mathcal{A}$ and $\overline{x}(k_1, \ldots, k_d) \in \mathcal{A}$, then all vectors $y^n \in \{-1, 1\}^n$ which coincide with $x^n$ and $\overline{x}(k_1, \ldots, k_d)$ in their $(n-d)$ agreed positions are also included in the set $\mathcal{A}$. Then, for all $d \in [\tau]$, we get by definition that $m_d = 2^d$, which yields $\tau \leq \log_2 |\mathcal{A}|$. Setting $m_d = 2^d$ and the default value $\ell_d = 1$ on the RHS of (106) gives
(116a)  $|E_d(G)| \leq \frac{ \binom{n-1}{d-1} |\mathcal{A}| \left( \log |\mathcal{A}| - \frac{n}{d} \log \ell_d \right) }{ 2 \log \frac{m_d}{\ell_d} }$
(116b)  $= \frac{ \binom{n-1}{d-1} |\mathcal{A}| \log |\mathcal{A}| }{ 2 \log(2^d) }$
(116c)  $= \frac{1}{2d} \binom{n-1}{d-1} |\mathcal{A}| \log_2 |\mathcal{A}|$
(116d)  $= \tfrac{1}{2} \binom{n}{d} |\mathcal{A}| \cdot \frac{\log_2 |\mathcal{A}|}{n}$.
Unless $\mathcal{A} = \{-1, 1\}^n$, the upper bound on the RHS of (116d) is strictly smaller than the trivial upper bound on the RHS of (112). This improvement is consistent with the satisfiability of the (necessary and sufficient) condition in (115), which is strictly satisfied since
(117)  $|\mathcal{A}| < 2^n = (2^d)^{n/d} = (m_d)^{n/d}$.
On the other hand, the looser upper bound on the RHS of (108) gives
(118)  $|E_d(G)| \leq \tfrac{1}{2} \binom{n}{d} |\mathcal{A}| \cdot \frac{d \log_2 |\mathcal{A}|}{n}$,
which is $d$ times larger than the refined bound on the RHS of (116d) (since the latter is based on the exact value of $m_d$ for the set $\mathcal{A}$, rather than taking the default value of 2), and it is worse than the trivial bound if and only if $|\mathcal{A}| > 2^{n/d}$. The latter finding is consistent with (115).
This exemplifies the utility of the refined upper bound on the RHS of (106) in comparison to the bound on the RHS of (108), where the latter generalizes Theorem 4.2 of [6] from the case where $d = 1$ to all $d \in [n]$. As explained above, this refinement is irrelevant in the special case where $d = 1$, though it proves to be useful in general for $d \in [2:n]$ (as exemplified here).
The following theorem summarizes the results of our analysis so far in the present section.
Theorem 2.
Let $\mathcal{A} \subseteq \{-1, 1\}^n$, with $n \in \mathbb{N}$, and let $\tau \in [n]$. Let $G = (V(G), E(G))$ be an undirected, simple graph with vertex set $V(G) = \mathcal{A}$, and edges connecting pairs of vertices in $G$ which are represented by vectors in $\mathcal{A}$ whose Hamming distance is less than or equal to $\tau$. For $d \in [\tau]$, let $E_d(G)$ be the set of edges in $G$ which connect all pairs of vertices that are represented by vectors in $\mathcal{A}$ whose Hamming distance is equal to $d$ (i.e., $|E(G)| = \sum_{d=1}^\tau |E_d(G)|$).
(a) For $d \in [\tau]$, let the integers $m_d \in [2 : \min\{2^d, |\mathcal{A}|\}]$ and $\ell_d \in [\min\{2^d - 1, |\mathcal{A}| - 1\}]$ (be, preferably, the maximal possible values to) satisfy the requirements in (92) and (94), respectively. Then,
(119)  $|E_d(G)| \leq \frac{ \binom{n-1}{d-1} |\mathcal{A}| \left( \log |\mathcal{A}| - \frac{n}{d} \log \ell_d \right) }{ 2 \log \frac{m_d}{\ell_d} }$.
(b) A loosened bound, which only depends on the cardinality of the set $\mathcal{A}$, is obtained by setting the default values $m_d = 2$ and $\ell_d = 1$. It is then given by
(120)  $|E_d(G)| \leq \tfrac{1}{2} \binom{n-1}{d-1} |\mathcal{A}| \log_2 |\mathcal{A}|$,  $d \in [\tau]$,
and, if $\tau \leq \frac{n+1}{2}$, then the (overall) number of edges in $G$ satisfies
(121)  $|E(G)| \leq \tfrac{1}{2} \exp\!\left( (n-1)\, H_b\!\left( \frac{\tau - 1}{n-1} \right) \right) |\mathcal{A}| \log_2 |\mathcal{A}|$.
(c) The refined upper bound on the RHS of (119) and the loosened upper bound on the RHS of (120) improve the trivial bound $\tfrac{1}{2} \binom{n}{d} |\mathcal{A}|$ if and only if $|\mathcal{A}| < (m_d)^{n/d}$ or $|\mathcal{A}| < 2^{n/d}$, respectively (see Example 1).
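The bound in Theorem 2b is easy to test by exhaustive counting on small instances. The following Python sketch (an illustration, not from the paper) draws a random set $\mathcal{A} \subseteq \{-1,1\}^8$ and compares $|E_d(G)|$ with the RHS of (120) for $d = 1, 2, 3$.

from itertools import combinations, product
from math import comb, log2
import random

random.seed(6)
n = 8
cube = list(product((-1, 1), repeat=n))
A = random.sample(cube, 40)                      # a random subset, |A| = 40

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

for d in range(1, 4):
    E_d = sum(hamming(x, y) == d for x, y in combinations(A, 2))
    bound = 0.5 * comb(n - 1, d - 1) * len(A) * log2(len(A))   # RHS of (120)
    assert E_d <= bound + 1e-9
    print(f"d={d}: |E_d(G)| = {E_d} <= {bound:.1f}")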

5.5. Influence of Fixed-Size Subsets of Bits

The result in Theorem 4.2 of [6], which is generalized and refined in Theorem 2 here, is used in [6] to study the total influence of the $n$ variables of an equiprobable random vector $X^n \in \{-1, 1\}^n$ on a subset $\mathcal{A} \subseteq \{-1, 1\}^n$. To this end, let $\overline{X}(i)$ denote the vector obtained from $X^n$ by flipping the bit at its $i$-th position, so $\overline{X}(i) \triangleq (X_1, \ldots, X_{i-1}, -X_i, X_{i+1}, \ldots, X_n)$ for all $i \in [n]$. Then, the influence of the $i$-th variable is defined as
(122)  $I_i(\mathcal{A}) \triangleq \Pr\bigl[ \mathbf{1}\{X^n \in \mathcal{A}\} \neq \mathbf{1}\{\overline{X}(i) \in \mathcal{A}\} \bigr]$,  $i \in [n]$,
and the total influence is defined to be the sum
(123)  $I(\mathcal{A}) \triangleq \sum_{i=1}^n I_i(\mathcal{A})$.
As is shown in Chapters 9 and 10 of [6], influences of subsets of the binary hypercube have far-reaching consequences in the study of threshold phenomena, and in many other areas. As a corollary of (107), it is obtained in Theorem 4.3 of [6] that, for every subset $\mathcal{A} \subseteq \{-1, 1\}^n$,
(124)  $I(\mathcal{A}) \geq 2 \Pr(\mathcal{A}) \log_2 \frac{1}{\Pr(\mathcal{A})}$,
where $\Pr(\mathcal{A}) \triangleq \mathrm{P}[X^n \in \mathcal{A}] = \frac{|\mathcal{A}|}{2^n}$ by the equiprobable distribution of $X^n$ over $\{-1, 1\}^n$.
In light of Theorem 2, the same approach which is used in Section 4.4 of [6] for the transition from (107) to (124) can be also used to obtain, as a corollary, a lower bound on the average total influence over all subsets of d variables. To this end, let k 1 , , k d be integers such that 1 k 1 < < k d n , and the influence of the variables in positions k 1 , , k d be given by
I ( k 1 , , k d ) ( A ) Pr 1 { X n A } 1 X ¯ ( k 1 , , k d ) A .
Then, let the average influence of subsets of d variables be defined as
I ( n , d ) ( A ) 1 n d ( k 1 , , k d ) : 1 k 1 < < k d n I ( k 1 , , k d ) ( A ) .
Hence, by (123) and (126), I ( n , 1 ) ( A ) = 1 n I ( A ) for every subset A { 1 , 1 } n . Let
B ( n , d ) ( A ) ( x n , y n ) : x n A , y n { 1 , 1 } n \ A , d H x n , y n = d ,
be the set of ordered pairs of sequences ( x n , y n ) , where x n , y n { 1 , 1 } n are of Hamming distance d from each other, with x n A and y n A . By the equiprobable distribution of X n on { 1 , 1 } n , we get
$$\begin{aligned}
I^{(n,d)}(\mathcal{A}) &= \binom{n}{d}^{-1} \sum_{(k_1, \ldots, k_d) :\, 1 \le k_1 < \cdots < k_d \le n} \Pr\left[ \mathbb{1}\{X^n \in \mathcal{A}\} \ne \mathbb{1}\{\bar{X}^{(k_1, \ldots, k_d)} \in \mathcal{A}\} \right] && \text{(128a)} \\
&= \frac{2}{\binom{n}{d}} \sum_{(k_1, \ldots, k_d) :\, 1 \le k_1 < \cdots < k_d \le n} \Pr\left[ X^n \in \mathcal{A}, \; \bar{X}^{(k_1, \ldots, k_d)} \notin \mathcal{A} \right] && \text{(128b)} \\
&= \frac{2}{\binom{n}{d}} \cdot \frac{|B^{(n,d)}(\mathcal{A})|}{2^n} && \text{(128c)} \\
&= \frac{|B^{(n,d)}(\mathcal{A})|}{2^{n-1} \binom{n}{d}}. && \text{(128d)}
\end{aligned}$$
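The identity (128) can be validated numerically: the next sketch (illustrative; helper names are ours) computes $I^{(n,d)}(\mathcal{A})$ once from definition (126) and once via $|B^{(n,d)}(\mathcal{A})| / \bigl( 2^{n-1} \binom{n}{d} \bigr)$, and confirms that the two agree:

```python
# Illustrative consistency check of the identity (128): the average influence
# computed from definition (126) equals |B^{(n,d)}(A)| / (2^{n-1} C(n,d)).
import itertools
import math
import random

def avg_influence(A_set, n, d):
    # definition (126), by exhaustive enumeration over positions and points
    acc = 0
    for x in itertools.product((-1, 1), repeat=n):
        for ks in itertools.combinations(range(n), d):
            y = tuple(-v if i in ks else v for i, v in enumerate(x))
            acc += (x in A_set) != (y in A_set)
    return acc / (2 ** n * math.comb(n, d))

def b_count(A_set, n, d):
    # |B^{(n,d)}(A)|: ordered pairs (x, y), x in A, y outside A, d_H(x,y) = d
    cube = list(itertools.product((-1, 1), repeat=n))
    return sum(1 for x in A_set for y in cube
               if y not in A_set
               and sum(a != b for a, b in zip(x, y)) == d)

n = 4
rng = random.Random(2)
cube = list(itertools.product((-1, 1), repeat=n))
A_set = set(rng.sample(cube, 5))
for d in range(1, n + 1):
    lhs = avg_influence(A_set, n, d)
    rhs = b_count(A_set, n, d) / (2 ** (n - 1) * math.comb(n, d))
    assert abs(lhs - rhs) < 1e-12
print("identity (128) confirmed for all d")
```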
Since every point in $\mathcal{A}$ has $\binom{n}{d}$ neighbors at Hamming distance $d$ in the set $\{-1,1\}^n$, it follows that
$$\binom{n}{d} |\mathcal{A}| = 2 |E_d(G)| + |B^{(n,d)}(\mathcal{A})|, \quad \text{(129)}$$
where $G$ is the graph introduced in Theorem 2, and $E_d(G)$ is the set of edges connecting pairs of vertices in $G$ that are represented by vectors in $\mathcal{A}$ of Hamming distance $d$. The factor of 2 on the RHS of (129) arises because every edge whose two endpoints are both in the set $\mathcal{A}$ is counted twice. Hence, by (106) and (129),
$$\begin{aligned}
|B^{(n,d)}(\mathcal{A})| &= \binom{n}{d} |\mathcal{A}| - 2 |E_d(G)| && \text{(130a)} \\
&\ge \binom{n}{d} |\mathcal{A}| - \binom{n-1}{d-1} \, \frac{|\mathcal{A}| \left( \log |\mathcal{A}| - \tfrac{n}{d} \log \ell_d \right)}{\log (m_d / \ell_d)} && \text{(130b)} \\
&= \binom{n}{d} |\mathcal{A}| \left( 1 - \frac{\tfrac{d}{n} \log |\mathcal{A}| - \log \ell_d}{\log (m_d / \ell_d)} \right) && \text{(130c)} \\
&= \binom{n}{d} |\mathcal{A}| \cdot \frac{\log m_d - \tfrac{d}{n} \log |\mathcal{A}|}{\log (m_d / \ell_d)}, && \text{(130d)}
\end{aligned}$$
and the lower bound on the RHS of (130d) is positive if and only if $|\mathcal{A}| < (m_d)^{n/d}$ (see also (114)). It then follows from (128) that the average influence of subsets of $d$ variables satisfies
$$\begin{aligned}
I^{(n,d)}(\mathcal{A}) &\ge \frac{|\mathcal{A}|}{2^{n-1}} \cdot \frac{\log m_d - \tfrac{d}{n} \log |\mathcal{A}|}{\log (m_d / \ell_d)} && \text{(131a)} \\
&= \frac{2 \Pr(\mathcal{A}) \left( \log m_d - \tfrac{d}{n} \log \left( 2^n \Pr(\mathcal{A}) \right) \right)}{\log (m_d / \ell_d)} && \text{(131b)} \\
&= \frac{2 \Pr(\mathcal{A}) \left( \tfrac{d}{n} \log \tfrac{1}{\Pr(\mathcal{A})} - \log \tfrac{2^d}{m_d} \right)}{\log (m_d / \ell_d)}. && \text{(131c)}
\end{aligned}$$
Note that setting $d = 1$, together with the default values $m_d = 2$ and $\ell_d = 1$, on the RHS of (131c) shows that the total influence of the $n$ variables satisfies, for all $\mathcal{A} \subseteq \{-1,1\}^n$,
$$\begin{aligned}
I(\mathcal{A}) &= n \, I^{(n,1)}(\mathcal{A}) && \text{(132a)} \\
&\ge 2 \Pr(\mathcal{A}) \log_2 \frac{1}{\Pr(\mathcal{A})}, && \text{(132b)}
\end{aligned}$$
which specializes to the result of Theorem 4.3 in [6] (see (124)). This gives the following result.
Theorem 3.
Let $X^n$ be an equiprobable random vector over the set $\{-1,1\}^n$, let $d \in [n]$, and let $\mathcal{A} \subseteq \{-1,1\}^n$. Then, the average influence of subsets of $d$ variables of $X^n$, as defined in (126), is lower bounded as follows:
$$I^{(n,d)}(\mathcal{A}) \ge \frac{2 \Pr(\mathcal{A}) \left( \tfrac{d}{n} \log \tfrac{1}{\Pr(\mathcal{A})} - \log \tfrac{2^d}{m_d} \right)}{\log (m_d / \ell_d)}, \quad \text{(133)}$$
where $\Pr(\mathcal{A}) \triangleq \mathbb{P}[X^n \in \mathcal{A}] = |\mathcal{A}| \, 2^{-n}$, and the integers $m_d$ and $\ell_d$ are introduced in Theorem 2. Similarly to the refined upper bound in Theorem 2, the lower bound on the RHS of (133) is informative (i.e., positive) if and only if $|\mathcal{A}| < (m_d)^{n/d}$. The lower bound on the RHS of (133) can be loosened (by setting the default values $m_d = 2$ and $\ell_d = 1$) to
$$I^{(n,d)}(\mathcal{A}) \ge 2 \Pr(\mathcal{A}) \left( \frac{d}{n} \log_2 \frac{1}{\Pr(\mathcal{A})} + 1 - d \right). \quad \text{(134)}$$
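As with Theorem 2, the loosened bound (134) is easy to test exhaustively for small $n$. The following sketch (illustrative only; avg_influence is our helper) does so on random subsets:

```python
# Illustrative check of the loosened bound (134) in Theorem 3.
import itertools
import math
import random

def avg_influence(A_set, n, d):
    # definition (126), by exhaustive enumeration (as in the previous sketch)
    acc = 0
    for x in itertools.product((-1, 1), repeat=n):
        for ks in itertools.combinations(range(n), d):
            y = tuple(-v if i in ks else v for i, v in enumerate(x))
            acc += (x in A_set) != (y in A_set)
    return acc / (2 ** n * math.comb(n, d))

n = 4
rng = random.Random(3)
cube = list(itertools.product((-1, 1), repeat=n))
for _ in range(50):
    A_set = set(rng.sample(cube, rng.randint(1, 2 ** n)))
    p = len(A_set) / 2 ** n
    for d in range(1, n + 1):
        lower = 2 * p * ((d / n) * math.log2(1 / p) + 1 - d)
        assert avg_influence(A_set, n, d) >= lower - 1e-9
print("bound (134) holds on all sampled subsets")
```

For $d \ge 2$ and moderately large $\Pr(\mathcal{A})$, the RHS of (134) is typically negative, in which case the bound holds trivially; the refined bound (133) with larger admissible values of $m_d$ is what makes the cases $d \ge 2$ informative.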

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Proof of Proposition 1

For completeness, we prove Proposition 1, which introduces results from [25,28,37].
Let $\Omega$ be a non-empty finite set, and let $\{X_\omega\}_{\omega \in \Omega}$ be a collection of discrete random variables. We first prove Item (a), showing that the entropy set function $f \colon 2^\Omega \to \mathbb{R}$ in (15) is a rank function.
  • $f(\emptyset) = 0$.
  • Submodularity: If $S, T \subseteq \Omega$, then
$$\begin{aligned}
f(T \cup S) + f(T \cap S) &= H(X_{T \cup S}) + H(X_{T \cap S}) && \text{(A1a)} \\
&= H(X_{T \setminus S}, X_{T \cap S}, X_{S \setminus T}) + H(X_{T \cap S}) && \text{(A1b)} \\
&= H(X_{T \setminus S}, X_{S \setminus T} \,|\, X_{T \cap S}) + 2 H(X_{T \cap S}) && \text{(A1c)} \\
&= H(X_{T \setminus S} \,|\, X_{T \cap S}) + H(X_{S \setminus T} \,|\, X_{T \cap S}) - I(X_{T \setminus S}; X_{S \setminus T} \,|\, X_{T \cap S}) + 2 H(X_{T \cap S}) && \text{(A1d)} \\
&= \bigl[ H(X_{T \setminus S} \,|\, X_{T \cap S}) + H(X_{T \cap S}) \bigr] + \bigl[ H(X_{S \setminus T} \,|\, X_{T \cap S}) + H(X_{T \cap S}) \bigr] - I(X_{T \setminus S}; X_{S \setminus T} \,|\, X_{T \cap S}) && \text{(A1e)} \\
&= H(X_{T \setminus S}, X_{T \cap S}) + H(X_{S \setminus T}, X_{T \cap S}) - I(X_{T \setminus S}; X_{S \setminus T} \,|\, X_{T \cap S}) && \text{(A1f)} \\
&= H(X_T) + H(X_S) - I(X_{T \setminus S}; X_{S \setminus T} \,|\, X_{T \cap S}) && \text{(A1g)} \\
&= f(T) + f(S) - I(X_{T \setminus S}; X_{S \setminus T} \,|\, X_{T \cap S}), && \text{(A1h)}
\end{aligned}$$
    which gives
$$f(T) + f(S) - f(T \cup S) - f(T \cap S) = I(X_{T \setminus S}; X_{S \setminus T} \,|\, X_{T \cap S}) \ge 0. \quad \text{(A2)}$$
    This proves the submodularity of $f$, while also showing that
$$f(T) + f(S) = f(T \cup S) + f(T \cap S) \iff X_{T \setminus S} \perp X_{S \setminus T} \,|\, X_{T \cap S}, \quad \text{(A3)}$$
    i.e., the inequality in (A2) holds with equality if and only if $X_{T \setminus S}$ and $X_{S \setminus T}$ are conditionally independent given $X_{T \cap S}$.
  • Monotonicity: If $S \subseteq T \subseteq \Omega$, then
$$\begin{aligned}
f(S) &= H(X_S) && \text{(A4a)} \\
&\le H(X_S) + H(X_T \,|\, X_S) && \text{(A4b)} \\
&= H(X_T) && \text{(A4c)} \\
&= f(T), && \text{(A4d)}
\end{aligned}$$
    so $f$ is monotonically increasing.
We next prove Item (b). Consider the set function f in (16).
  • $f(\emptyset) = 0$, and $f(\Omega) = H(X_\Omega)$.
  • Supermodularity: If $S, T \subseteq \Omega$, then
$$\begin{aligned}
f(T \cup S) + f(T \cap S) &= H(X_{T \cup S} \,|\, X_{T^c \cap S^c}) + H(X_{T \cap S} \,|\, X_{T^c \cup S^c}) && \text{(A5a)} \\
&= \bigl[ H(X_\Omega) - H(X_{T^c \cap S^c}) \bigr] + \bigl[ H(X_\Omega) - H(X_{T^c \cup S^c}) \bigr] && \text{(A5b)} \\
&= 2 H(X_\Omega) - \bigl[ H(X_{T^c \cap S^c}) + H(X_{T^c \cup S^c}) \bigr] && \text{(A5c)} \\
&\ge 2 H(X_\Omega) - \bigl[ H(X_{T^c}) + H(X_{S^c}) \bigr] && \text{(A5d)} \\
&= \bigl[ H(X_\Omega) - H(X_{T^c}) \bigr] + \bigl[ H(X_\Omega) - H(X_{S^c}) \bigr] && \text{(A5e)} \\
&= H(X_T \,|\, X_{T^c}) + H(X_S \,|\, X_{S^c}) && \text{(A5f)} \\
&= f(T) + f(S), && \text{(A5g)}
\end{aligned}$$
    where inequality (A5d) holds since the entropy set function in (15) is submodular (by Item (a)).
  • Monotonicity: If $S \subseteq T \subseteq \Omega$, then
$$\begin{aligned}
f(S) &= H(X_S \,|\, X_{S^c}) && \text{(A6a)} \\
&\le H(X_S \,|\, X_{T^c}) \qquad (T^c \subseteq S^c) && \text{(A6b)} \\
&\le H(X_T \,|\, X_{T^c}) && \text{(A6c)} \\
&= f(T), && \text{(A6d)}
\end{aligned}$$
    so $f$ is monotonically increasing.
Item (c) follows easily from Items (a) and (b). Consider the set function $f \colon 2^\Omega \to \mathbb{R}$ in (17). Then, for all $T \subseteq \Omega$, $f(T) = I(X_T; X_{T^c}) = H(X_T) - H(X_T \,|\, X_{T^c})$, so $f$ is expressed as a difference of a submodular function and a supermodular function, which gives a submodular function. Furthermore, $f(\emptyset) = 0$; by the symmetry of the mutual information, $f(T) = f(T^c)$ for all $T \subseteq \Omega$, so $f$ is, in general, not monotonic.
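Items (a)–(c) lend themselves to direct numerical verification. The following sketch (ours, for illustration; it is not part of the proof) draws random joint distributions of three binary random variables and checks, over all pairs of subsets, that $T \mapsto H(X_T)$ is a submodular and monotone rank function, that $T \mapsto H(X_T \,|\, X_{T^c})$ is supermodular and monotone, and that $T \mapsto I(X_T; X_{T^c})$ is submodular:

```python
# Illustrative verification of Items (a)-(c) for random joint pmfs of three
# binary random variables (Omega = {0, 1, 2}).
import itertools
import math
import random

rng = random.Random(4)
OMEGA = frozenset({0, 1, 2})
SUBSETS = [frozenset(s) for r in range(4)
           for s in itertools.combinations(sorted(OMEGA), r)]

def random_pmf():
    w = [rng.random() for _ in range(8)]
    total = sum(w)
    return {x: wi / total
            for x, wi in zip(itertools.product((0, 1), repeat=3), w)}

def H(pmf, T):
    # entropy (in bits) of the marginal distribution of the coordinates in T
    marg = {}
    for x, px in pmf.items():
        key = tuple(x[i] for i in sorted(T))
        marg[key] = marg.get(key, 0.0) + px
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

for _ in range(100):
    pmf = random_pmf()
    f = {T: H(pmf, T) for T in SUBSETS}                       # Item (a)
    g = {T: f[OMEGA] - f[OMEGA - T] for T in SUBSETS}         # Item (b)
    h = {T: f[T] + f[OMEGA - T] - f[OMEGA] for T in SUBSETS}  # Item (c)
    for S, T in itertools.product(SUBSETS, repeat=2):
        u, i = S | T, S & T
        assert f[S] + f[T] >= f[u] + f[i] - 1e-9   # submodular
        assert g[S] + g[T] <= g[u] + g[i] + 1e-9   # supermodular
        assert h[S] + h[T] >= h[u] + h[i] - 1e-9   # submodular
        if S <= T:
            assert f[S] <= f[T] + 1e-9 and g[S] <= g[T] + 1e-9
print("Items (a)-(c) verified on all sampled distributions")
```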
We next prove Item (d). Consider the set function $f \colon 2^V \to \mathbb{R}$ in (18); we need to prove that $f$ is submodular under the conditions in Item (d), where $U, V \subseteq \Omega$ are disjoint subsets, and the entries of the random vector $X_V$ are conditionally independent given $X_U$.
  • $f(\emptyset) = I(X_U; X_\emptyset) = 0$.
  • Submodularity: If $S, T \subseteq V$, then
$$\begin{aligned}
f(T \cup S) + f(T \cap S) &= I(X_U; X_{T \cup S}) + I(X_U; X_{T \cap S}) && \text{(A7a)} \\
&= H(X_{T \cup S}) - H(X_{T \cup S} \,|\, X_U) + H(X_{T \cap S}) - H(X_{T \cap S} \,|\, X_U) && \text{(A7b)} \\
&= \bigl[ H(X_{T \cup S}) + H(X_{T \cap S}) \bigr] - \bigl[ H(X_{T \cup S} \,|\, X_U) + H(X_{T \cap S} \,|\, X_U) \bigr] && \text{(A7c)} \\
&= H(X_T) + H(X_S) - I(X_{T \setminus S}; X_{S \setminus T} \,|\, X_{T \cap S}) - \bigl[ H(X_{T \cup S} \,|\, X_U) + H(X_{T \cap S} \,|\, X_U) \bigr], && \text{(A7d)}
\end{aligned}$$
    where equality (A7d) holds by the proof of Item (a) (see (A2)). By the assumption on the conditional independence of the random variables $\{X_v\}_{v \in V}$ given $X_U$, we get
$$\begin{aligned}
H(X_{T \cup S} \,|\, X_U) + H(X_{T \cap S} \,|\, X_U) &= \sum_{\omega \in T \cup S} H(X_\omega \,|\, X_U) + \sum_{\omega \in T \cap S} H(X_\omega \,|\, X_U) && \text{(A8a)} \\
&= \sum_{\omega \in T} H(X_\omega \,|\, X_U) + \sum_{\omega \in S} H(X_\omega \,|\, X_U) && \text{(A8b)} \\
&= H(X_T \,|\, X_U) + H(X_S \,|\, X_U). && \text{(A8c)}
\end{aligned}$$
    Consequently, combining (A7) and (A8) gives
$$\begin{aligned}
f(T \cup S) + f(T \cap S) &= H(X_T) + H(X_S) - I(X_{T \setminus S}; X_{S \setminus T} \,|\, X_{T \cap S}) - \bigl[ H(X_T \,|\, X_U) + H(X_S \,|\, X_U) \bigr] && \text{(A9a)} \\
&= \bigl[ H(X_T) - H(X_T \,|\, X_U) \bigr] + \bigl[ H(X_S) - H(X_S \,|\, X_U) \bigr] - I(X_{T \setminus S}; X_{S \setminus T} \,|\, X_{T \cap S}) && \text{(A9b)} \\
&= I(X_T; X_U) + I(X_S; X_U) - I(X_{T \setminus S}; X_{S \setminus T} \,|\, X_{T \cap S}) && \text{(A9c)} \\
&= f(T) + f(S) - I(X_{T \setminus S}; X_{S \setminus T} \,|\, X_{T \cap S}) && \text{(A9d)} \\
&\le f(T) + f(S), && \text{(A9e)}
\end{aligned}$$
    where inequality (A9e) holds with equality if and only if $X_{T \setminus S}$ and $X_{S \setminus T}$ are conditionally independent given $X_{T \cap S}$.
  • Monotonicity: If $S \subseteq T \subseteq V$, then
$$f(S) = I(X_U; X_S) \le I(X_U; X_T) = f(T), \quad \text{(A10)}$$
    so $f$ is monotonically increasing.
We finally prove Item (e), where it needs to be shown that the entropy of a sum of independent random variables is a rank function. Let $f \colon 2^\Omega \to \mathbb{R}$ be the set function given in (19).
  • $f(\emptyset) = 0$.
  • Submodularity: Let $S, T \subseteq \Omega$. Define
$$U \triangleq \sum_{\omega \in T \cap S} X_\omega, \qquad V \triangleq \sum_{\omega \in S \setminus T} X_\omega, \qquad W \triangleq \sum_{\omega \in T \setminus S} X_\omega. \quad \text{(A11)}$$
    From the independence of the random variables $\{X_\omega\}_{\omega \in \Omega}$, it follows that $U$, $V$ and $W$ are independent. Hence, we get
$$\begin{aligned}
f(T) + f(S) - \bigl[ f(T \cup S) + f(T \cap S) \bigr] &= \bigl[ f(T) - f(T \cap S) \bigr] - \bigl[ f(T \cup S) - f(S) \bigr] && \text{(A12a)} \\
&= \bigl[ H(U + W) - H(U) \bigr] - \bigl[ H(U + V + W) - H(U + V) \bigr] && \text{(A12b)} \\
&= \bigl[ H(U + W) - H(U + W \,|\, W) \bigr] - \bigl[ H(U + V + W) - H(U + V) \bigr] && \text{(A12c)} \\
&= \bigl[ H(U + W) - H(U + W \,|\, W) \bigr] - \bigl[ H(U + V + W) - H(U + V + W \,|\, W) \bigr] && \text{(A12d)} \\
&= I(U + W; W) - I(U + V + W; W) && \text{(A12e)} \\
&\ge I(U + W; W) - I(U + W, V; W), && \text{(A12f)}
\end{aligned}$$
    and
$$\begin{aligned}
I(U + W, V; W) &= I(V; W) + I(U + W; W \,|\, V) && \text{(A13a)} \\
&= I(U + W; W \,|\, V) && \text{(A13b)} \\
&= I(U + W; W). && \text{(A13c)}
\end{aligned}$$
    Combining (A12) and (A13) gives (11).
  • Monotonicity: If $S \subseteq T \subseteq \Omega$, then, since $\{X_\omega\}_{\omega \in \Omega}$ are independent random variables, (A11) implies that $U$ and $W$ are independent and $V = 0$. Hence,
$$\begin{aligned}
f(T) - f(S) &= H(U + W) - H(U) && \text{(A14a)} \\
&= H(U + W) - H(U + W \,|\, W) && \text{(A14b)} \\
&= I(U + W; W) \ge 0. && \text{(A14c)}
\end{aligned}$$
This completes the proof of Proposition 1.
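Item (e) can likewise be probed numerically. The sketch below (illustrative; conv and the particular distributions are our choices) builds three independent random variables with random pmfs on $\{0, 1, 2\}$ and verifies that $f(T) = H\bigl( \sum_{\omega \in T} X_\omega \bigr)$ is submodular and monotone:

```python
# Illustrative verification of Item (e): f(T) = H(sum of X_w, w in T) is a
# rank function for independent discrete random variables.
import itertools
import math
import random

rng = random.Random(5)

def conv(p, q):
    # pmf of the sum of two independent integer-valued random variables
    out = {}
    for a, pa in p.items():
        for b, qb in q.items():
            out[a + b] = out.get(a + b, 0.0) + pa * qb
    return out

def entropy(p):
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

# three independent variables with random pmfs on {0, 1, 2}
dists = []
for _ in range(3):
    w = [rng.random() for _ in range(3)]
    dists.append({k: wi / sum(w) for k, wi in enumerate(w)})

def f(T):
    p = {0: 1.0}  # the empty sum is identically zero, so f(empty set) = 0
    for i in T:
        p = conv(p, dists[i])
    return entropy(p)

SUBSETS = [frozenset(s) for r in range(4)
           for s in itertools.combinations(range(3), r)]
vals = {T: f(T) for T in SUBSETS}
for S, T in itertools.product(SUBSETS, repeat=2):
    assert vals[S] + vals[T] >= vals[S | T] + vals[S & T] - 1e-9
    if S <= T:
        assert vals[S] <= vals[T] + 1e-9
print("Item (e) verified for the sampled distributions")
```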

Appendix B. Proof of Proposition 4

Lemma A1.
Let $\{B_j\}_{j=1}^{\ell}$ (with $\ell \ge 2$) be a sequence of sets that is not a chain (i.e., there is no permutation $\pi \colon [\ell] \to [\ell]$ such that $B_{\pi(1)} \subseteq B_{\pi(2)} \subseteq \cdots \subseteq B_{\pi(\ell)}$). Consider a recursive process where, at each step, a pair of sets that are not related by inclusion is replaced with their intersection and union. Then, there exists such a recursive process that leads to a chain in a finite number of steps.
Proof. 
The lemma is proved by mathematical induction on $\ell$. It holds for $\ell = 2$ since $B_1 \cap B_2 \subseteq B_1 \cup B_2$, so the process halts in a single step. Suppose that the lemma holds for a fixed $\ell \ge 2$ and every sequence of $\ell$ sets that is not a chain. We aim to show that it then also holds for every sequence of $\ell + 1$ sets that is not a chain. Let $\{B_j\}_{j=1}^{\ell+1}$ be such an arbitrary sequence of sets, and consider the subsequence of the first $\ell$ sets $B_1, \ldots, B_\ell$. If it is not a chain, then (by the induction hypothesis) there exists a recursive process as above that transforms it into a chain in a finite number of steps; in either case, we obtain a chain $B'_1 \subseteq B'_2 \subseteq \cdots \subseteq B'_\ell$. If $B'_\ell \subseteq B_{\ell+1}$ or $B_{\ell+1} \subseteq B'_1$, then we get a chain of $\ell + 1$ sets. Otherwise, proceeding with the recursive process where $B'_\ell$ and $B_{\ell+1}$ are replaced with their intersection and union, consider the sequence
$$B'_1, \ldots, B'_{\ell-1}, \; B'_\ell \cap B_{\ell+1}, \; B'_\ell \cup B_{\ell+1}. \quad \text{(A15)}$$
By the induction hypothesis, the first $\ell$ sets in this sequence can be transformed into a chain (in a finite number of steps) by a recursive process as above; this gives a chain of the form $B''_1 \subseteq B''_2 \subseteq \cdots \subseteq B''_{\ell-1} \subseteq B''_\ell$. The first $\ell$ sets in (A15) are all included in $B'_\ell$, so every combination of unions and intersections of these sets is also included in $B'_\ell$. Hence, the considered recursive process leads to a chain of the form
$$B''_1 \subseteq B''_2 \subseteq \cdots \subseteq B''_{\ell-1} \subseteq B''_\ell \subseteq B'_\ell \cup B_{\ell+1}, \quad \text{(A16)}$$
where the last inclusion in (A16) holds since $B''_\ell \subseteq B'_\ell$. The claim thus holds for $\ell + 1$ whenever it holds for a given $\ell \ge 2$, and it holds for $\ell = 2$; it therefore holds by mathematical induction for all integers $\ell \ge 2$. □
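The recursive process of Lemma A1 is easy to simulate. The following sketch (ours; the greedy pair-selection rule is one admissible choice, consistent with the existential statement of the lemma) repeatedly replaces an incomparable pair with its intersection and union until a chain is reached, and also confirms the degree-preservation property used in the proof of Proposition 4a below:

```python
# Illustrative simulation of the recursive process in Lemma A1, with a greedy
# choice of the incomparable pair (one admissible realization of the process).
import itertools
import random

def is_chain(sets):
    s = sorted(sets, key=len)
    return all(a <= b for a, b in zip(s, s[1:]))

def chainify(sets):
    sets = list(sets)
    while not is_chain(sets):
        for i, j in itertools.combinations(range(len(sets)), 2):
            a, b = sets[i], sets[j]
            if not (a <= b or b <= a):
                # replace an incomparable pair with intersection and union
                sets[i], sets[j] = a & b, a | b
                break
    return sets

rng = random.Random(6)
for _ in range(500):
    fam = [frozenset(x for x in range(6) if rng.random() < 0.5)
           for _ in range(4)]
    chain = chainify(fam)
    assert is_chain(chain)
    # the inclusion profile deg(w) is preserved (used in Proposition 4a)
    for w in range(6):
        assert sum(w in s for s in fam) == sum(w in s for s in chain)
print("the process reached a chain in every trial")
```

Termination of this greedy variant follows since $\sum_j |B_j|^2$ strictly increases at each step (the total cardinality is preserved while the pair is pushed to more extreme sizes) and is bounded from above.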
We first prove Proposition 4a. Suppose first that there is a permutation $\pi \colon [M] \to [M]$ such that $S_{\pi(1)} \subseteq S_{\pi(2)} \subseteq \cdots \subseteq S_{\pi(M)}$ is a chain. Since every element in $\Omega$ is included in at least $d$ of these subsets, it must be included in (at least) the $d$ largest sets of this chain, so $S_{\pi(j)} = \Omega$ for every $j \in [M - d + 1 : M]$. Due to the non-negativity of $f$, it follows that
$$\begin{aligned}
\sum_{j=1}^{M} f(S_j) &\ge \sum_{j=M-d+1}^{M} f(S_{\pi(j)}) && \text{(A17a)} \\
&= d \, f(\Omega). && \text{(A17b)}
\end{aligned}$$
Otherwise, if we cannot obtain a chain by permuting the subsets in the sequence $\{S_j\}_{j=1}^{M}$, consider a pair of subsets $S_n$ and $S_m$ that are not related by inclusion, and replace them with their intersection and union. By the submodularity of $f$,
$$\begin{aligned}
\sum_{j=1}^{M} f(S_j) &= \sum_{j \ne n, m} f(S_j) + f(S_n) + f(S_m) && \text{(A18a)} \\
&\ge \sum_{j \ne n, m} f(S_j) + f(S_n \cap S_m) + f(S_n \cup S_m). && \text{(A18b)}
\end{aligned}$$
For all $\omega \in \Omega$, let $\deg(\omega)$ be the number of indices $j \in [M]$ such that $\omega \in S_j$. Replacing $S_n$ and $S_m$ with $S_n \cap S_m$ and $S_n \cup S_m$ leaves the set of values $\{\deg(\omega)\}_{\omega \in \Omega}$ unaffected (indeed, if $\omega \in S_n$ and $\omega \in S_m$, then it belongs to both their intersection and their union; if $\omega$ belongs to exactly one of the sets $S_n$ and $S_m$, then $\omega \in S_n \cup S_m$ and $\omega \notin S_n \cap S_m$; finally, if $\omega \notin S_n$ and $\omega \notin S_m$, then it belongs to neither their intersection nor their union). Now, consider the recursive process in Lemma A1. Since the profile of the number of inclusions of the elements of $\Omega$ is preserved at each step of that process, every element in $\Omega$ continues to belong to at least $d$ sets in the chain obtained at the end of the process. Moreover, in light of (A18), at every step of the recursive process, the sum on the LHS of (A18) cannot increase. Inequality (45) therefore follows from the earlier part of the proof for a chain (see (A17)).
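A numerical illustration of Proposition 4a follows (ours; the particular rank function $f(S) = \log_2(1 + |S|)$ is an arbitrary choice, submodular since it is a concave function of $|S|$ that vanishes at the empty set):

```python
# Illustrative check of Proposition 4(a) with the rank function
# f(S) = log2(1 + |S|): nonnegative, monotone, and submodular.
import math
import random

def f(S):
    return math.log2(1 + len(S))

rng = random.Random(7)
OMEGA = frozenset(range(6))
for _ in range(1000):
    M = rng.randint(2, 6)
    sets = [frozenset(x for x in OMEGA if rng.random() < 0.6)
            for _ in range(M)]
    # d = minimal number of subsets covering any single element of Omega
    d = min(sum(x in s for s in sets) for x in OMEGA)
    if d == 0:
        continue  # the hypothesis of Proposition 4(a) requires d >= 1
    assert sum(f(s) for s in sets) >= d * f(OMEGA) - 1e-9
print("Proposition 4(a) holds in every sampled configuration")
```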
We next prove Proposition 4b. Let $\mathcal{A} \subseteq \Omega$, and suppose that every element in $\mathcal{A}$ is included in at least $d \ge 1$ of the subsets $\{S_j\}_{j=1}^{M}$. For all $j \in [M]$, define $S'_j \triangleq S_j \cap \mathcal{A}$, and consider the sequence $\{S'_j\}_{j=1}^{M}$ of subsets of $\mathcal{A}$. If $f$ is a rank function, then it is monotonically increasing, which yields
$$f(S_j) \ge f(S'_j), \quad j \in [M]. \quad \text{(A19)}$$
Each element of $\mathcal{A}$ is also included in at least $d$ of the subsets $\{S'_j\}_{j=1}^{M}$ (by construction, and since, by assumption, each element in $\mathcal{A}$ is included in at least $d$ of the subsets $\{S_j\}_{j=1}^{M}$). By the non-negativity and submodularity of $f$, Proposition 4a (applied to the ground set $\mathcal{A}$) gives
$$\sum_{j=1}^{M} f(S'_j) \ge d \, f(\mathcal{A}). \quad \text{(A20)}$$
Combining (A19) and (A20) yields (46). This completes the proof of Proposition 4.
Remark A1.
Lemma A1 is weaker than the claim that, in every recursive process as in Lemma A1, the number of pairs of sets that are not related by inclusion strictly decreases at each step. Lemma A1 is, however, sufficient for our proof of Proposition 4a.

References

  1. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006.
  2. Dembo, A.; Cover, T.M.; Thomas, J.A. Information theoretic inequalities. IEEE Trans. Inf. Theory 1991, 37, 1501–1518.
  3. Chan, T. Recent progresses in characterising information inequalities. Entropy 2011, 13, 379–401.
  4. Martin, S.; Padró, C.; Yang, A. Secret sharing, rank inequalities, and information inequalities. IEEE Trans. Inf. Theory 2016, 62, 599–610.
  5. Babu, S.A.; Radhakrishnan, J. An entropy-based proof for the Moore bound for irregular graphs. In Perspectives on Computational Complexity; Agrawal, M., Arvind, V., Eds.; Birkhäuser: Cham, Switzerland, 2014; pp. 173–182.
  6. Boucheron, S.; Lugosi, G.; Massart, P. Concentration Inequalities: A Nonasymptotic Theory of Independence; Oxford University Press: Oxford, UK, 2013.
  7. Chung, F.R.K.; Graham, R.L.; Frankl, P.; Shearer, J.B. Some intersection theorems for ordered sets and graphs. J. Comb. Theory Ser. A 1986, 43, 23–37.
  8. Erdős, P.; Rényi, A. On two problems of information theory. Publ. Math. Inst. Hung. Acad. Sci. 1963, 8, 241–254.
  9. Friedgut, E. Hypergraphs, entropy and inequalities. Am. Math. Mon. 2004, 111, 749–760.
  10. Jukna, S. Extremal Combinatorics with Applications in Computer Science, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2011.
  11. Kaced, T.; Romashchenko, A.; Vereshchagin, N. A conditional information inequality and its combinatorial applications. IEEE Trans. Inf. Theory 2018, 64, 3610–3615.
  12. Kahn, J. An entropy approach to the hard-core model on bipartite graphs. Comb. Probab. Comput. 2001, 10, 219–237.
  13. Kahn, J. Entropy, independent sets and antichains: A new approach to Dedekind’s problem. Proc. Am. Math. Soc. 2001, 130, 371–378.
  14. Madiman, M.; Marcus, A.W.; Tetali, P. Entropy and set cardinality inequalities for partition-determined functions. Random Struct. Algorithms 2012, 40, 399–424.
  15. Madiman, M.; Marcus, A.W.; Tetali, P. Information-theoretic inequalities in additive combinatorics. In Proceedings of the 2010 IEEE Information Theory Workshop, Cairo, Egypt, 6–8 January 2010.
  16. Pippenger, N. An information-theoretic method in combinatorial theory. J. Comb. Theory Ser. A 1977, 23, 99–104.
  17. Pippenger, N. Entropy and enumeration of Boolean functions. IEEE Trans. Inf. Theory 1999, 45, 2096–2100.
  18. Radhakrishnan, J. An entropy proof of Bregman’s theorem. J. Comb. Theory Ser. A 1997, 77, 161–164.
  19. Radhakrishnan, J. Entropy and counting. In Computational Mathematics, Modelling and Algorithms; Narosa Publishers: New Delhi, India, 2001; pp. 1–25.
  20. Sason, I. A generalized information-theoretic approach for bounding the number of independent sets in bipartite graphs. Entropy 2021, 23, 270.
  21. Sason, I. Entropy-based proofs of combinatorial results on bipartite graphs. In Proceedings of the 2021 IEEE International Symposium on Information Theory, Melbourne, Australia, 12–20 July 2021; pp. 3225–3230.
  22. Madiman, M.; Tetali, P. Information inequalities for joint distributions, with interpretations and applications. IEEE Trans. Inf. Theory 2010, 56, 2699–2713.
  23. Bach, F. Learning with submodular functions: A convex optimization perspective. Found. Trends Mach. Learn. 2013, 6, 145–373.
  24. Chen, Q.; Cheng, M.; Bai, B. Matroidal entropy functions: A quartet of theories of information, matroid, design and coding. Entropy 2021, 23, 323.
  25. Fujishige, S. Polymatroidal dependence structure of a set of random variables. Inf. Control 1978, 39, 55–72.
  26. Fujishige, S. Submodular Functions and Optimization, 2nd ed.; Annals of Discrete Mathematics Series; Elsevier: Amsterdam, The Netherlands, 2005; Volume 58.
  27. Iyer, R.; Khargonkar, N.; Bilmes, J.; Asnani, H. Generalized submodular information measures: Theoretical properties, examples, optimization algorithms, and applications. IEEE Trans. Inf. Theory 2022, 68, 752–781.
  28. Krause, A.; Guestrin, C. Near-optimal nonmyopic value of information in graphical models. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (UAI 2005), Edinburgh, UK, 26–29 July 2005; pp. 324–331.
  29. Lovász, L. Submodular functions and convexity. In Mathematical Programming: The State of the Art; Bachem, A., Korte, B., Grötschel, M., Eds.; Springer: Berlin/Heidelberg, Germany, 1983; pp. 235–257.
  30. Tian, C. Inequalities for entropies of sets of subsets of random variables. In Proceedings of the 2011 IEEE International Symposium on Information Theory, Saint Petersburg, Russia, 31 July–5 August 2011; pp. 1950–1954.
  31. Kishi, Y.; Ochiumi, N.; Yanagida, M. Entropy inequalities for sums over several subsets and their applications to average entropy. In Proceedings of the 2014 IEEE International Symposium on Information Theory (ISIT 2014), Honolulu, HI, USA, 30 June–4 July 2014; pp. 2824–2828.
  32. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
  33. Madiman, M.; Melbourne, J.; Xu, P. Forward and reverse entropy power inequalities in convex geometry. In Convexity and Concentration; Carlen, E., Madiman, M., Werner, E.M., Eds.; IMA Volumes in Mathematics and Its Applications; Springer: Berlin/Heidelberg, Germany, 2017; Volume 161, pp. 427–485.
  34. Han, T.S. Nonnegative entropy measures of multivariate symmetric correlations. Inf. Control 1978, 36, 133–156.
  35. Polyanskiy, Y.; Wu, Y. Lecture Notes on Information Theory, version 5. Available online: http://people.lids.mit.edu/yp/homepage/data/itlectures_v5.pdf (accessed on 15 May 2019).
  36. Bollobás, B. Extremal Graph Theory; Academic Press: Cambridge, MA, USA, 1978.
  37. Madiman, M. On the entropy of sums. In Proceedings of the 2008 IEEE Information Theory Workshop, Porto, Portugal, 5–9 May 2008.
  38. Tao, T. Sumset and inverse sumset theory for Shannon entropy. Comb. Probab. Comput. 2010, 19, 603–639.
  39. Kontoyiannis, I.; Madiman, M. Sumset and inverse sumset inequalities for differential entropy and mutual information. IEEE Trans. Inf. Theory 2014, 60, 4503–4514.
  40. Artstein, S.; Ball, K.M.; Barthe, F.; Naor, A. Solution of Shannon’s problem on the monotonicity of entropy. J. Am. Math. Soc. 2004, 17, 975–982.
  41. Madiman, M.; Barron, A. Generalized entropy power inequalities and monotonicity properties of information. IEEE Trans. Inf. Theory 2007, 53, 2317–2329.