Dimension-Free Bounds for the Union-Closed Sets Conjecture

The union-closed sets conjecture states that, in any nonempty union-closed family F of subsets of a finite set, there exists an element contained in at least a proportion 1/2 of the sets of F. Using an information-theoretic method, Gilmer recently showed that there exists an element contained in at least a proportion 0.01 of the sets of such a family F. He conjectured that his technique can be pushed to the constant (3 − √5)/2, which was subsequently confirmed by several researchers, including Sawin. Furthermore, Sawin also showed that Gilmer's technique can be improved to obtain a bound strictly better than (3 − √5)/2, but this new bound was not explicitly given by Sawin. This paper further improves Gilmer's technique to derive new bounds, in optimization form, for the union-closed sets conjecture. These bounds include Sawin's improvement as a special case. By providing cardinality bounds on the auxiliary random variables, we make Sawin's improvement computable and then evaluate it numerically, which yields a bound of approximately 0.38234, slightly better than (3 − √5)/2 ≈ 0.38197.


Introduction
This paper concerns the union-closed sets conjecture, which can be described in information-theoretic language as follows. Note that each set B ⊆ [n] := {1, 2, …, n} uniquely corresponds to a length-n sequence x^n := (x_1, x_2, …, x_n) ∈ Ω^n with Ω := {0, 1}, in the way that x_i = 1 if i ∈ B and x_i = 0 otherwise. So, a family F of subsets of [n] uniquely corresponds to a subset A ⊆ Ω^n. Denote the (element-wise) OR operation for two Ω-valued sequences as x^n ∨ y^n := (x_i ∨ y_i)_{i∈[n]} with x^n, y^n ∈ Ω^n, where ∨ is the binary OR operation. The family F is closed under the union operation (i.e., F ∪ G ∈ F for all F, G ∈ F) if and only if the corresponding set A ⊆ Ω^n is closed under the OR operation (i.e., x^n ∨ y^n ∈ A for all x^n, y^n ∈ A).
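For illustration, this correspondence and the OR-closure check can be carried out with the following minimal Python sketch (the helper names encode and is_or_closed are ours, chosen for this example):

    from itertools import product

    def encode(B, n):
        """Encode a subset B of {1,...,n} as a 0/1 tuple x^n with x_i = 1 iff i in B."""
        return tuple(1 if i in B else 0 for i in range(1, n + 1))

    def is_or_closed(A):
        """Check that x^n v y^n stays in A for all x^n, y^n in A."""
        return all(tuple(x | y for x, y in zip(xs, ys)) in A
                   for xs, ys in product(A, repeat=2))

    n = 3
    family = [set(), {1}, {1, 2}, {1, 2, 3}]   # a union-closed family (a chain)
    A = {encode(B, n) for B in family}
    print(is_or_closed(A))                     # True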
Let A ⊆ Ω^n be closed under the OR operation. Let X^n := (X_1, X_2, …, X_n) be a random vector uniformly distributed on A, and denote P_{X^n} = Unif(A) as its distribution (or probability mass function, PMF). We are interested in estimating

p_A := max_{i∈[n]} P_{X_i}(1),

where P_{X_i} is the distribution of X_i, and hence, P_{X_i}(1) is the proportion of the sets containing the element i among all sets in F. Frankl made the following conjecture.

Conjecture 1 (Union-Closed Sets Conjecture). For any OR-closed A ⊆ Ω^n with A ≠ ∅ and A ≠ {(0, …, 0)}, it holds that p_A ≥ 1/2.
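Under P_{X^n} = Unif(A), the proportion P_{X_i}(1) is simply the fraction of elements of A whose i-th coordinate equals 1, so p_A can be computed directly; a minimal self-contained sketch (the helper name p_A is ours):

    def p_A(A):
        """p_A = max_i P_{X_i}(1) for X^n uniform on a set A of 0/1 tuples."""
        n = len(next(iter(A)))
        return max(sum(x[i] for x in A) / len(A) for i in range(n))

    # The OR-closed set corresponding to the chain {}, {1}, {1,2}, {1,2,3}:
    A = {(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)}
    print(p_A(A))  # 0.75 >= 1/2, consistent with the conjecture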
This conjecture equivalently states that, for any union-closed family F, there exists an element contained in at least a proportion 1/2 of the sets of F. Since it was posed by Peter Frankl in 1979, the union-closed sets conjecture has attracted a great deal of research interest; see, e.g., [6–10]. We refer readers to the survey paper [11] for more details. Gilmer [1] recently made a breakthrough, showing that this conjecture holds with the constant 0.01. Gilmer's method uses a clever idea from information theory in which two independent random vectors are constructed.
Gilmer conjectured that his method can improve the constant to (3 − √5)/2, which was subsequently confirmed by several groups of researchers [2–5]. This constant is known to be the best possible for an approximate version of the union-closed sets problem [3]. Moreover, Sawin [2] further developed Gilmer's idea by allowing the two random vectors to be dependent on each other. Such a technique was in fact used by the present author in several existing works [12–14]. By this technique, Sawin [2] showed that the constant can be improved to a value strictly larger than (3 − √5)/2. However, without cardinality bounds on the auxiliary random variables, Sawin's constant is difficult to compute, and hence, the accurate value of this improved constant was not explicitly given in [2].
The present paper further develops Gilmer's (and Sawin's) technique to derive new constants (or bounds), in optimization form, for the union-closed sets conjecture. These bounds include Sawin's improvement as a special case. By providing cardinality bounds on the auxiliary random variables, we make Sawin's improvement computable, and then evaluate it numerically, which yields a bound of around 0.38234, slightly better than (3 − √5)/2 ≈ 0.38197.

Main Results
To state our result, we need to introduce some notation. Since we only consider distributions on finite alphabets, we do not distinguish between the terms "distributions" and "probability mass functions". For a pair of distributions (P_X, P_Y), a coupling of (P_X, P_Y) is a joint distribution P_{XY} whose marginals are respectively P_X and P_Y. For a distribution P_X defined on a finite alphabet X, a coupling P_{XX′} of (P_X, P_X) is called symmetric if P_{XX′}(x, y) = P_{XX′}(y, x) for all x, y ∈ X. Denote C_s(P_X) as the set of symmetric couplings of (P_X, P_X). Denote δ_x as the Dirac measure with atom at x.
For a joint distribution P_{XY}, the (Pearson) correlation coefficient between (X, Y) ∼ P_{XY} is defined by

ρ_p(X; Y) := Cov(X, Y) / √(Var(X) Var(Y)).

The maximal correlation between (X, Y) ∼ P_{XY} is defined by

ρ_m(X; Y) := sup_{f,g} E[f(X) g(Y)] / √(Var(f(X)) Var(g(Y))),

where the supremum is taken over all pairs of real-valued functions (f, g) such that 0 < Var(f(X)) Var(g(Y)) < ∞; see, e.g., [15]. It holds that ρ_m(X; Y) ∈ [0, 1], and moreover, ρ_m(X; Y) = 0 if and only if X and Y are independent. Moreover, ρ_m(X; Y) is equal to the second largest singular value of the matrix [P_{XY}(x, y)/√(P_X(x) P_Y(y))]_{x,y}. Clearly, the largest singular value of this matrix is equal to 1, with corresponding left and right singular vectors (√(P_X(x)))_x and (√(P_Y(y)))_y. Denote, for p, q, ρ ∈ [0, 1],

z_1 := pq − ρ√(pq(1 − p)(1 − q)) and z_2 := pq + ρ√(pq(1 − p)(1 − q)),

and

ϕ(ρ, p, q) := median{max{p, q, p + q − z_2}, 1/2, min{p + q, p + q − z_1}},   (1)

where median A denotes the median value of the elements of a multiset A. We regard the set in (1) as a multiset, which means, e.g., median{a, a, b} = a. Denote h(a) := −a log_2 a − (1 − a) log_2(1 − a) for a ∈ [0, 1] as the binary entropy function. Define, for t > 0,

Γ(t) := sup_{P_ρ} inf_{P_p} E_{ρ∼P_ρ}[ inf_{P_pq ∈ C_s(P_p): ρ_m(p;q) ≤ ρ} E_{(p,q)∼P_pq}[h(ϕ(ρ, p, q))] ] / E_{p∼P_p}[h(p)],   (2)

where the supremum over P_ρ and the infimum over P_p are both taken over all finitely supported probability distributions on [0, 1], with the infimum further restricted to those P_p such that E[p] ≤ t and E[h(p)] > 0.
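As an illustration of the singular-value characterization and of the median formula in (1), the following sketch (using numpy; the helper names are ours, and the joint PMF is an arbitrary example) computes ρ_m(X; Y) as the second largest singular value of [P_{XY}(x, y)/√(P_X(x) P_Y(y))] and evaluates ϕ(ρ, p, q):

    import numpy as np
    from statistics import median

    def maximal_correlation(P):
        """Second largest singular value of P_XY(x,y)/sqrt(P_X(x) P_Y(y))."""
        Px, Py = P.sum(axis=1), P.sum(axis=0)
        Q = P / np.sqrt(np.outer(Px, Py))
        return np.linalg.svd(Q, compute_uv=False)[1]

    def phi(rho, p, q):
        """phi(rho, p, q) from (1), with the median taken over a multiset."""
        s = rho * np.sqrt(p * q * (1 - p) * (1 - q))
        z1, z2 = p * q - s, p * q + s
        return median([max(p, q, p + q - z2), 0.5, min(p + q, p + q - z1)])

    # For binary pairs, rho_m equals the absolute Pearson correlation:
    P = np.array([[0.4, 0.1], [0.1, 0.4]])
    print(maximal_correlation(P))   # ~0.6
    print(phi(0.0, 0.3, 0.3))       # 0.51 = p + q - pq
    print(phi(1.0, 0.3, 0.3))       # 0.5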
Our main results are as follows.
Theorem 1. If Γ(t) > 1 for some t ∈ (0, 1/2), then p_A ≥ t for any OR-closed A ⊆ Ω^n (i.e., for any union-closed family F, there exists an element contained in at least a proportion t of the sets of F).
The proof of Theorem 1 is given in Section 2, by using a technique based on coupling and entropy. It is essentially the same as the technique used by Sawin [2]. However, prior to Sawin's work, such a technique was used by the present author in several works; see [12–14].
Equivalently, Theorem 1 states that p_A ≥ t_max for any OR-closed A ⊆ Ω^n, where t_max := sup{t ∈ (0, 1/2) : Γ(t) > 1}. To compute Γ(t) or its lower bounds numerically, one needs to upper bound the cardinality of the support of P_p in the outer infimum in (2), since otherwise infinitely many parameters would need to be optimized. This is left for future work. The following gives a computable bound.
If we choose P_ρ = δ_0, then Theorem 1 implies Gilmer's bound in [1], since in this case, the couplings constructed in the proof of Theorem 1 (given in the next section) turn out to be independent, coinciding with Gilmer's construction. On the other hand, if we choose P_ρ = δ_1, then the couplings constructed in our proof are arbitrary. In fact, we can make a choice of P_ρ better than both of these two special cases. As suggested by Sawin [2], we can choose P_ρ = (1 − α)δ_0 + αδ_1, which in fact leads to an optimization over mixtures of independent couplings and arbitrary couplings. This final choice yields the following bound.
Substituting ρ = 0 and ρ = 1, respectively, into ϕ(ρ, p, q) yields

ϕ(0, p, q) = p + q − pq and ϕ(1, p, q) = median{max{p, q}, 1/2, min{p + q, 1}},

where, in the evaluation of ϕ(1, p, q), the following facts were used: 1) for ρ = 1, p + q − z_2 ≤ max{p, q}; and 2) if p + q ≤ 1, then z_1 ≤ 0 and hence min{p + q, p + q − z_1} = p + q, and otherwise, p + q − z_1 ≥ 1 and max{p, q} ≥ 1/2, so replacing p + q − z_1 by 1 does not change the median. By defining

a := (a_1 + a_2)/2 and b := (b_1 + b_2)/2,   (6)

Q_{x,y} := (1/2)δ_{(x,y)} + (1/2)δ_{(y,x)},   (7)

with δ_{(x,y)} denoting the Dirac measure at (x, y), and substituting P_ρ = (1 − α)δ_0 + αδ_1 into (2), we obtain the following computable bound.

Proposition 1. For t ∈ (0, 1/2),

Γ(t) ≥ Γ̃(t) := sup_{α∈[0,1]} inf_{P_pq} g(P_pq, α),   (5)

where

g(P_pq, α) := ((1 − α) E_{(p,q)∼P_p⊗P_p}[h(p + q − pq)] + α E_{(p,q)∼P_pq}[h(ϕ(1, p, q))]) / E_{p∼P_p}[h(p)],

P_p denotes the common marginal of the symmetric distribution P_pq, and the infimum is taken over all distributions P_pq of the form (1 − β)Q_{a_1,a_2} + βQ_{b_1,b_2} such that (1 − β)a + βb ≤ t. Note that the numerator of g(P_pq, α) is concave in P_pq, since by [4, Lemma 5], P_p ↦ E_{(p,q)∼P_p⊗P_p}[h(p + q − pq)] is concave, P_pq ↦ P_p is linear, and the second term in the numerator is linear in P_pq.
As a consequence of the two results above, we have the following corollary.

Corollary 1. For any OR-closed A ⊆ Ω^n, p_A ≥ t̃_max := sup{t ∈ (0, 1/2) : Γ̃(t) > 1}.
The proof of Corollary 1 is given in Section 3. The lower bound in (5), without the cardinality bound on the support of P_pq, was given by Sawin [2], who used it to show p_A > (3 − √5)/2. However, thanks to the cardinality bound, we can numerically compute the best bound on p_A that can be derived using Γ̃(t); that is, p_A ≥ t̃_max for any OR-closed A ⊆ Ω^n, where t̃_max := sup{t ∈ (0, 1/2) : Γ̃(t) > 1}. Numerical results² show that if we set α = 0.035 and t = 0.38234, then the optimal P_pq = (1 − β)Q_{a,a} + βQ_{a,1} with a ≈ 0.3300622 and β ≈ 0.1560676, which leads to the lower bound Γ(t) ≥ 1.00000889. Hence, p_A ≥ 0.38234 for any OR-closed A ⊆ Ω^n. This is slightly better than the previous bound (3 − √5)/2 ≈ 0.38197. The choice of (α, t) in our evaluation is nearly optimal. More decimal places of Sawin's bound (or equivalently, of t̃_max) were computed by Cambie in [16], i.e., 0.382345533366702 ≤ t̃_max ≤ 0.382345533366703, which is attained by the choice α ≈ 0.03560698136437784. This more precise evaluation can also be verified using our code in Footnote 2.
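The reported numbers can be checked directly; the following minimal sketch (assuming g takes the mixture-ratio form given in Proposition 1; the helper names h, phi1, and g are ours) evaluates g at the reported (α, a, β):

    import math

    def h(x):
        """Binary entropy (bits), with h(0) = h(1) = 0."""
        if x <= 0.0 or x >= 1.0:
            return 0.0
        return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

    def phi1(p, q):
        """phi(1, p, q): the value in [max(p,q), min(p+q,1)] closest to 1/2."""
        lo, hi = max(p, q), min(p + q, 1.0)
        return min(max(0.5, lo), hi)

    def g(alpha, a, beta):
        """g(P_pq, alpha) for P_pq = (1-beta) Q_{a,a} + beta Q_{a,1}."""
        # marginal P_p = (1 - beta/2) delta_a + (beta/2) delta_1
        w = [(a, 1 - beta / 2), (1.0, beta / 2)]
        num_indep = sum(wp * wq * h(p + q - p * q) for p, wp in w for q, wq in w)
        num_coupled = (1 - beta) * h(phi1(a, a)) + beta * h(phi1(a, 1.0))
        den = sum(wp * h(p) for p, wp in w)
        return ((1 - alpha) * num_indep + alpha * num_coupled) / den

    alpha, a, beta = 0.035, 0.3300622, 0.1560676
    print(a + beta * (1 - a) / 2)   # E[p] ~ 0.38234 = t
    print(g(alpha, a, beta))        # ~ 1.0000089 > 1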

Proof of Theorem 1
Denote H(X) := −∑_x P_X(x) log P_X(x) as the Shannon entropy of a random variable X ∼ P_X. Let A ⊆ Ω^n be closed under the OR operation. We assume |A| ≥ 2; this is because Theorem 1 holds obviously for singletons A, since for this case, p_A = 1. Let P_{X^n} = Unif(A). So, H(X^n) = log|A| > 0, and by the chain rule, H(X^n) = ∑_{i=1}^n H(X_i|X^{i−1}). Let P_{X^nY^n} ∈ C_s(P_{X^n}) be any symmetric coupling, and denote Z^n := X^n ∨ Y^n. Since A is OR-closed, Z^n ∈ A almost surely, and hence H(Z^n) ≤ log|A| = H(X^n). We hence have

sup_{P_{X^nY^n} ∈ C_s(P_{X^n})} H(Z^n)/H(X^n) ≤ 1.

Relaxing P_{X^n} = Unif(A) to arbitrary distributions such that P_{X_i}(1) ≤ t for all i, we obtain Γ_n(t) ≤ 1, where

Γ_n(t) := inf_{P_{X^n}: P_{X_i}(1) ≤ t, ∀i, H(X^n) > 0} sup_{P_{X^nY^n} ∈ C_s(P_{X^n})} H(Z^n)/H(X^n).   (8)

In other words, if Γ_n(t) > 1 for a given t, then by contradiction, p_A > t.
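To see the inequality H(Z^n) ≤ H(X^n) concretely, the following minimal sketch (with an arbitrary 3-bit example of ours) computes the exact distribution of Z^n = X^n ∨ Y^n under the independent coupling, which is one particular symmetric coupling:

    from itertools import product
    from collections import Counter
    import math

    def entropy(pmf):
        """Shannon entropy (bits) of a PMF given as {outcome: probability}."""
        return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

    A = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]  # OR-closed
    pZ = Counter()
    for x, y in product(A, repeat=2):                 # X^n, Y^n i.i.d. Unif(A)
        z = tuple(a | b for a, b in zip(x, y))
        pZ[z] += 1 / len(A) ** 2                      # Z^n = X^n v Y^n stays in A
    print(entropy(pZ), "<=", math.log2(len(A)))       # H(Z^n) <= log|A| = H(X^n)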
We next show that Γ_n(t) ≥ Γ(t), which implies Theorem 1. To this end, we need the following lemmas.
For two conditional distributions P_{X|U}, P_{Y|V}, denote C(P_{X|U}, P_{Y|V}) as the set of conditional distributions Q_{XY|UV} whose marginals satisfy Q_{X|UV} = P_{X|U} and Q_{Y|UV} = P_{Y|V}. The conditional (Pearson) correlation coefficient of X and Y given U is defined by

ρ_p(X; Y|U) := E[Cov(X, Y|U)] / √(E[Var(X|U)] E[Var(Y|U)]).

The conditional maximal correlation coefficient of X and Y given U is defined by

ρ_m(X; Y|U) := sup_{f,g} E[f(X, U) g(Y, U)] / √(Var(f(X, U)) Var(g(Y, U))),

where the supremum is taken over all real-valued functions f(x, u), g(y, u) such that E[f(X, U)|U] = E[g(Y, U)|U] = 0 almost surely and 0 < Var(f(X, U)) Var(g(Y, U)) < ∞. Note that ρ_m(X; Y|U = u) = ρ_m(X′; Y′) with (X′, Y′) ∼ P_{XY|U=u}.

Lemma 1 (Product Construction of Couplings). Let P_{UV} ∈ C(P_U, P_V) and P_{XY|UV} ∈ C(P_{X|U}, P_{Y|V}). Then P_{UV}P_{XY|UV} ∈ C(P_{UX}, P_{VY}). Moreover, for this coupling,

ρ_m(U, X; V, Y) = max{ρ_m(U; V), ρ_m(X; Y|U, V)}.   (10)

For a conditional distribution P_{X|U} defined on finite alphabets, a conditional coupling P_{XX′|UU′} of (P_{X|U}, P_{X|U}) is called symmetric if P_{XX′|UU′}(x, y|u, v) = P_{XX′|UU′}(y, x|v, u) for all x, y ∈ X, u, v ∈ U. Denote C_s(P_{X|U}) as the set of symmetric conditional couplings of (P_{X|U}, P_{X|U}). Applying the lemma above to symmetric couplings, we have that if the couplings P_{X_iY_i|X^{i−1}Y^{i−1}} ∈ C_s(P_{X_i|X^{i−1}}), i ∈ [n], all have conditional maximal correlation at most ρ, then

P_{X^nY^n} := ∏_{i=1}^n P_{X_iY_i|X^{i−1}Y^{i−1}}   (9)

is a symmetric coupling of (P_{X^n}, P_{X^n}) satisfying ρ_m(X^n; Y^n) ≤ ρ. We hence have that for any ρ ∈ [0, 1],

sup_{P_{X^nY^n} ∈ C_s(P_{X^n})} H(Z^n) ≥ ∑_{i=1}^n sup_{P_{X_iY_i|X^{i−1}Y^{i−1}} ∈ C_s(P_{X_i|X^{i−1}}): ρ_m(X_i;Y_i|X^{i−1},Y^{i−1}) ≤ ρ} H(Z_i|Z^{i−1}),   (11)

where the inequality above follows by Lemma 1 and the chain rule for entropies. In fact, in the derivation above, the i-th distribution P_{X_iY_i|X^{i−1}Y^{i−1}} is chosen as a greedy coupling, in the sense that it only maximizes the i-th objective function H(Z_i|Z^{i−1}), regardless of the other H(Z_j|Z^{j−1}) with j > i (although it indeed affects their values). By the fact that conditioning reduces entropy, it holds that H(Z_i|Z^{i−1}) ≥ H(Z_i|X^{i−1}, Y^{i−1}). Then, the expression at the right-hand side of (11) is further lower bounded by ∑_{i=1}^n g_i(P_{X^{i−1}}, ρ), where g_i(P_{X^{i−1}}, ρ) denotes the infimum, over symmetric couplings P_{X^{i−1}Y^{i−1}} with ρ_m(X^{i−1}; Y^{i−1}) ≤ ρ, of the maximal value of H(Z_i|X^{i−1}, Y^{i−1}). Combining this with (8) and (11), and by noting that ρ ∈ [0, 1] is arbitrary (so that we may also average over ρ ∼ P_ρ), we obtain that

Γ_n(t) = inf_{P_{X^n}: P_{X_i}(1) ≤ t, ∀i} sup_{P_{X^nY^n} ∈ C_s(P_{X^n})} H(Z^n)/H(X^n)
       ≥ sup_{P_ρ} inf_{P_{X^n}: P_{X_i}(1) ≤ t, ∀i} ( ∑_{i=1}^n E_{ρ∼P_ρ}[g_i(P_{X^{i−1}}, ρ)] ) / ( ∑_{i=1}^n H(X_i|X^{i−1}) )   (13)
       ≥ sup_{P_ρ} inf_{P_{X_j}: H(X_j|X^{j−1}) > 0, P_{X_j}(1) ≤ t} E_{ρ∼P_ρ}[g_j(P_{X^{j−1}}, ρ)] / H(X_j|X^{j−1}),   (14)

where
• (13) follows since (a + b)/(c + d) ≥ min{a/c, b/d} for a, b ≥ 0 and c, d > 0, and since H(X_i|X^{i−1}) = 0 implies that X_i is a function of X^{i−1}, and hence, g_i(P_{X^{i−1}}, ρ) = 0;
• the index j in the last line is the optimal i attaining the minimum in (13).
We next further simplify the lower bound in (14). Denote

U := X^{j−1}, V := Y^{j−1}, p := P_{X_j|X^{j−1}}(1|U), q := P_{X_j|X^{j−1}}(1|V).   (15)

So, for any conditional coupling with ρ_m(X_j; Y_j|U, V) ≤ ρ, we have, for each pair (u, v),

ρ_m(X_u; Y_v) = |ρ_p(X_u; Y_v)| ≤ ρ,   (16)

where (X_u, Y_v) ∼ P_{X_jY_j|U=u,V=v}, ρ_p denotes the Pearson correlation coefficient, and (16) follows since the maximal correlation coefficient between two binary random variables is equal to the absolute value of the Pearson correlation coefficient between them; see, e.g., [19]. So, writing r := P(X_u = 1, Y_v = 1), the constraint (16) is equivalent to r ∈ [max{0, p + q − 1, z_1}, min{p, q, z_2}], with z_1, z_2 evaluated at (ρ, p, q). Since P(Z_j = 1|U = u, V = v) = p + q − r, the inner supremum in (14) can be rewritten as

E_{(U,V)} sup_r h(p + q − r).

By the fact that h is increasing on [0, 1/2] and decreasing on [1/2, 1], it holds that the optimal r attaining the supremum in the last line above, denoted by r*, is the median of max{0, p + q − 1, z_1}, p + q − 1/2, and min{p, q, z_2}, which implies h(p + q − r*) = h(ϕ(ρ, p, q)). Recall the definition of ϕ in (1). So, the inner supremum in (14) is equal to E_{(p,q)∼P_pq}[h(ϕ(ρ, p, q))]. We make the following observations. Firstly, H(X_j|X^{j−1}) = E[h(p)] and P_{X_j}(1) = E[p] ≤ t. Secondly, by the definition of maximal correlation, ρ_m(p; q) ≤ ρ_m(U; V) ≤ ρ holds (the first inequality is the data processing inequality), since p, q are respectively functions of U, V; see (15). Lastly, observe that P_{UV} is symmetric, and p, q are obtained from U, V via the same function P_{X|U}(1|·) (since P_{X|U} = P_{Y|V} holds by the symmetry of P_{XY|UV}). Hence, P_pq is symmetric as well. Substituting all of these into (14) yields Γ_n(t) ≥ Γ(t).
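As a sanity check on the median formula for r*, one can maximize h(p + q − r) by grid search over the feasible interval for r and compare the result with h(ϕ(ρ, p, q)); a minimal sketch with arbitrary example values (helper names are ours):

    import math

    def h(x):
        return 0.0 if x <= 0.0 or x >= 1.0 else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

    def phi(rho, p, q):
        s = rho * math.sqrt(p * q * (1 - p) * (1 - q))
        z1, z2 = p * q - s, p * q + s
        return sorted([max(p, q, p + q - z2), 0.5, min(p + q, p + q - z1)])[1]

    def best_h_bruteforce(rho, p, q, steps=10**5):
        """Maximize h(p+q-r) over r in [max{0, p+q-1, z1}, min{p, q, z2}]."""
        s = rho * math.sqrt(p * q * (1 - p) * (1 - q))
        lo, hi = max(0.0, p + q - 1, p * q - s), min(p, q, p * q + s)
        return max(h(p + q - (lo + (hi - lo) * k / steps)) for k in range(steps + 1))

    rho, p, q = 0.3, 0.2, 0.45
    print(best_h_bruteforce(rho, p, q))  # ~ h(phi(rho, p, q))
    print(h(phi(rho, p, q)))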

Proof of Proposition 1
By choosing P_ρ = (1 − α)δ_0 + αδ_1 in (2), we have Γ(t) ≥ sup_{α∈[0,1]} inf_{P_pq} g(P_pq, α), where the infimum is taken over all symmetric, finitely supported P_pq on [0, 1]² such that E[p] ≤ t. It remains to bound the support of the minimizing P_pq. Let B be a finite subset of [0, 1]. Let P_B be the set of symmetric distributions P_pq concentrated on B² such that E[p] ≤ t. By the Krein–Milman theorem, P_B is equal to the closed convex hull of its extreme points. These extreme points are of the form (1 − β)Q_{a_1,a_2} + βQ_{b_1,b_2} with (1 − β)a + βb ≤ t, where recall the definitions a := (a_1 + a_2)/2, b := (b_1 + b_2)/2, and Q_{x,y} := (1/2)δ_{(x,y)} + (1/2)δ_{(y,x)} in (6) and (7). By Carathéodory's theorem, it is easy to see that the convex hull of these extreme points is closed (in the weak topology, or equivalently, in the relative topology on the probability simplex). So, every P_pq supported on a finite set B² ⊆ [0, 1]² such that E[p] ≤ t is a convex combination of the extreme points above, i.e., P_pq = ∑_{i=1}^k γ_i Q_i, where Q_i, i ∈ [k], are extreme points, and γ_i > 0 and ∑_{i=1}^k γ_i = 1. For this distribution, writing g = N/D with N and D denoting the numerator and denominator of g (so that D(P_pq) = E[h(p)]), the concavity of N (noted in Proposition 1) and the linearity of D yield

g(P_pq, α) ≥ ( ∑_{i=1}^k γ_i N(Q_i, α) ) / ( ∑_{i=1}^k γ_i D(Q_i) ) ≥ min_{i: D(Q_i) > 0} g(Q_i, α),

where in the last step, we use the fact that E_{Q_i}[h(p)] = 0 implies Q_i = δ_{(0,0)} (note that t < 1/2), and hence, N(Q_i, α) = 0. Therefore,

inf_{P_pq ∈ P_B} g(P_pq, α) = inf g(P_pq, α),

where the infimum on the right-hand side is taken over all distributions P_pq of the form (1 − β)Q_{a_1,a_2} + βQ_{b_1,b_2} such that (1 − β)a + βb ≤ t. (Recall the definition of a, b in (6).)
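The concavity of P_p ↦ E_{(p,q)∼P_p⊗P_p}[h(p + q − pq)] cited from [4, Lemma 5] can be spot-checked numerically; the following sketch (with an arbitrary support of our choosing) samples random midpoint mixtures and reports the minimal concavity slack, which should be nonnegative:

    import math, random

    def h(x):
        return 0.0 if x <= 0.0 or x >= 1.0 else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

    def F(weights, support):
        """F(P_p) = E_{(p,q) ~ P_p x P_p} h(p + q - p*q)."""
        return sum(wp * wq * h(p + q - p * q)
                   for p, wp in zip(support, weights)
                   for q, wq in zip(support, weights))

    def rand_pmf(k):
        w = [random.random() for _ in range(k)]
        s = sum(w)
        return [x / s for x in w]

    random.seed(0)
    support = [0.1, 0.33, 0.8, 1.0]
    slack = min(
        F([(u + v) / 2 for u, v in zip(w1, w2)], support)
        - (F(w1, support) + F(w2, support)) / 2
        for w1, w2 in ((rand_pmf(4), rand_pmf(4)) for _ in range(2000)))
    print(slack)  # minimal midpoint-concavity slack; should be >= 0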

Discussion
The breakthrough made by Gilmer [1] shows the power of information-theoretic techniques in tackling problems from related fields. In fact, the union-closed sets conjecture has a natural interpretation in the information-theoretic (or coding-theoretic) sense. Consider the memoryless OR multiple-access channel (x^n, y^n) ∈ Ω^{2n} ↦ x^n ∨ y^n ∈ Ω^n. We would like to find a nonempty code A ⊆ Ω^n and generate two independent inputs X^n, Y^n, each following Unif(A), such that the input constraint E[X_i] ≤ t for all i ∈ [n] is satisfied and the output X^n ∨ Y^n remains in A almost surely. The union-closed sets conjecture states that such a code exists if and only if t ≥ 1/2. Based on this information-theoretic interpretation, it is reasonable to expect that information-theoretic techniques work for this conjecture. It is well known that information-theoretic techniques usually work very well for problems with "approximate" constraints, e.g., the channel coding problem with an asymptotically vanishing error probability constraint (or the approximate version of the union-closed sets problem introduced in [3]). It is hard to say whether information-theoretic techniques are sufficient to prove sharp bounds for problems with "exact" constraints, e.g., the zero-error coding problem (or the original version of the union-closed sets conjecture).
Furthermore, as an intermediate result, it has been shown that Γ_n(t) > 1 implies p_A > t for any OR-closed A ⊆ Ω^n. Here, Γ_n(t) is given in (8), expressed in multi-letter (i.e., dimension-dependent) form. By a super-block coding argument, it can be verified that, for a given t > 0, lim_{n→∞} Γ_n(t) exists. It would be interesting to investigate this limit and to prove a single-letter (dimension-independent) expression for it.
For simplicity, in this paper, we only consider the maximal correlation coefficient as the constraint function. In fact, the maximal correlation coefficient used here can be replaced by other functionals. The key property of the maximal correlation coefficient used in this paper is the "tensorization" property in (10) (in fact, only the "≤" part of (10) was used in our proof). In the literature, there is a class of measures of correlation satisfying this property, e.g., the hypercontractivity constant, the strong data processing inequality constant, or, more generally, Φ-ribbons; see [20–22]. (Although the tensorization property in the literature is only defined and proven for independent random variables, this property can be extended to the coupling constructed in (9).) Following the same proof steps given in this paper, one can obtain various variants of Theorem 1 with the maximal correlation coefficient replaced by other quantities, as long as these quantities satisfy the tensorization property. Another potential direction is to replace the Shannon entropy with a class of more general quantities, e.g., Rényi entropies. Unfortunately, however, Rényi entropies do not satisfy the chain rule (unlike the Shannon entropy), which leads to a serious difficulty in single-letterizing the corresponding multi-letter bound, such as Γ_n(t) in (8) (i.e., in making the multi-letter bound dimension-independent).
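The tensorization property can be checked numerically via the singular-value characterization of ρ_m; a minimal sketch for the special case of independent pairs (an assumption made here only for simplicity of the example):

    import numpy as np

    def rho_m(P):
        """Maximal correlation: second singular value of P/sqrt(outer marginals)."""
        Px, Py = P.sum(axis=1), P.sum(axis=0)
        return np.linalg.svd(P / np.sqrt(np.outer(Px, Py)), compute_uv=False)[1]

    # Tensorization for independent pairs: rho_m(P1 (x) P2) = max(rho_m(P1), rho_m(P2)).
    P1 = np.array([[0.4, 0.1], [0.1, 0.4]])
    P2 = np.array([[0.30, 0.20], [0.15, 0.35]])
    P12 = np.kron(P1, P2)   # joint PMF of ((X1,X2), (Y1,Y2)) with independent pairs
    print(rho_m(P12), max(rho_m(P1), rho_m(P2)))  # the two values agree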

Funding:
This work was supported by the NSFC grant 62101286 and the Fundamental Research Funds for the Central Universities of China (Nankai University).

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: Not applicable.