Inapproximability of Maximum Biclique Problems, Minimum k-Cut and Densest At-Least-k-Subgraph from the Small Set Expansion Hypothesis †

Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94709, USA
An extended abstract of this work appeared at ICALP 2017 under a different title “Inapproximability of Maximum Edge Biclique, Maximum Balanced Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis”.
Algorithms 2018, 11(1), 10; https://doi.org/10.3390/a11010010
Submission received: 28 November 2017 / Revised: 29 December 2017 / Accepted: 5 January 2018 / Published: 17 January 2018

Abstract
The Small Set Expansion Hypothesis is a conjecture which roughly states that it is NP-hard to distinguish between a graph with a small subset of vertices whose (edge) expansion is almost zero and one in which all small subsets of vertices have expansion almost one. In this work, we prove conditional inapproximability results with essentially optimal ratios for the following graph problems based on this hypothesis: Maximum Edge Biclique, Maximum Balanced Biclique, Minimum k-Cut and Densest At-Least-k-Subgraph. Our hardness results for the two biclique problems are proved by combining a technique developed by Raghavendra, Steurer and Tulsiani to avoid locality of gadget reductions with a generalization of Bansal and Khot’s long code test, whereas our results for Minimum k-Cut and Densest At-Least-k-Subgraph are shown via elementary reductions.

1. Introduction

Since the PCP theorem was proved two decades ago [1,2], our understanding of approximability of combinatorial optimization problems has grown enormously; tight inapproximability results have been obtained for fundamental problems such as Max-3SAT [3], Max Clique [4] and Set Cover [5,6]. Yet, for other problems, including Vertex Cover and Max Cut, known NP-hardness of approximation results fall short of matching the best known algorithms.
Khot’s introduction of the Unique Games Conjecture (UGC) [7] propelled another wave of development in hardness of approximation that saw many of these open problems resolved (see e.g., [8,9]). Alas, some problems continue to elude even attempts at proving UGC-hardness of approximation. For a class of such problems, the failure stems from the fact that typical reductions are local in nature; many reductions from unique games to graph problems could produce disconnected graphs. If we try to use such reductions for problems that involve some forms of expansion of graphs (e.g., Sparsest Cut), we are out of luck.
One approach to overcome the aforementioned issue is through the Small Set Expansion Hypothesis (SSEH) of Raghavendra and Steurer [10]. To describe the hypothesis, let us introduce some notation. Throughout the paper, we represent an undirected edge-weighted graph $G = (V, E, w)$ by a vertex set V, an edge set E and a weight function $w: E \to \mathbb{R}_{\geq 0}$. We call G d-regular if $\sum_{v: (u,v) \in E} w(u,v) = d$ for every $u \in V$. For a d-regular weighted graph G, the edge expansion $\Phi(S)$ of $S \subseteq V$ is defined as
$$\Phi(S) = \frac{E(S, V \setminus S)}{d \min\{|S|, |V \setminus S|\}},$$
where $E(S, V \setminus S)$ is the total weight of edges across the cut $(S, V \setminus S)$. The small set expansion problem SSE$(\delta, \eta)$, where $\eta, \delta$ are two parameters that lie in $(0, 1)$, can be defined as follows.
Definition 1
(SSE($\delta, \eta$)). Given a regular edge-weighted graph $G = (V, E, w)$, distinguish between:
  • (Completeness) There exists $S \subseteq V$ of size $\delta|V|$ such that $\Phi(S) \leq \eta$.
  • (Soundness) For every $S \subseteq V$ of size $\delta|V|$, $\Phi(S) \geq 1 - \eta$.
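To make the quantities in the definition above concrete, the following is a minimal sketch (our own illustration, not part of the paper) that computes $\Phi(S)$ for a set S in a small d-regular weighted graph; the dictionary-based graph encoding is an assumption of this sketch.

```python
def edge_expansion(n, d, edges, S):
    """Edge expansion Phi(S) of S in a d-regular weighted graph on
    vertices 0..n-1; `edges` maps frozenset({u, v}) -> weight."""
    S = set(S)
    # An edge is cut iff exactly one endpoint lies in S.
    cut = sum(w for e, w in edges.items() if len(e & S) == 1)
    return cut / (d * min(len(S), n - len(S)))

# Toy example: a 4-cycle with unit weights is 2-regular.
edges = {frozenset(e): 1.0 for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
print(edge_expansion(4, 2, edges, {0}))      # single vertex: both its edges are cut -> 1.0
print(edge_expansion(4, 2, edges, {0, 1}))   # adjacent pair: 2 of 4 edges cut -> 0.5
```

A single vertex of a cycle has expansion one (all its incident edges leave S), while a contiguous arc has smaller expansion, matching the intuition that non-expanding sets are "clustered".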
Roughly speaking, SSEH asserts that it is NP-hard to distinguish between a graph that has a small non-expanding subset of vertices and one in which all small subsets of vertices have almost perfect edge expansion. More formally, the hypothesis can be stated as follows.
Conjecture 1
(SSEH [10]). For every $\eta > 0$, there is $\delta = \delta(\eta) \in (0, 1/2]$ such that SSE($\delta, \eta$) is NP-hard.
Interestingly, SSEH not only implies UGC [10], but it is also equivalent to a strengthened version of the latter, in which the graph is required to have almost perfect small set expansion [11].
Since its proposal, SSEH has been used as a starting point for inapproximability of many problems whose hardnesses are not known otherwise. Most relevant to us is the work of Raghavendra, Steurer and Tulsiani (henceforth RST) [11] who devised a technique that exploited structures of SSE instances to avoid locality in reductions. In doing so, they obtained inapproximability of Minimum Bisection, Minimum Balanced Separator, and Minimum Linear Arrangement, all of which are not known to be hard to approximate under UGC.

1.1. Maximum Edge Biclique and Maximum Balanced Biclique

Our first result is an adaptation of RST technique to prove inapproximability of Maximum Edge Biclique (MEB) and Maximum Balanced Biclique (MBB). For both problems, the input is a bipartite graph. The goal for the former is to find a complete bipartite subgraph that contains as many edges as possible whereas, for the latter, the goal is to find a balanced complete bipartite subgraph that contains as many vertices as possible.
Both problems are NP-hard. MBB was stated (without proof) to be NP-hard in Garey and Johnson’s seminal book (p. 196 [12]); several proofs of this exist, such as the one provided in [13]. For MEB, NP-hardness was proved more recently by Peeters [14]. Unfortunately, much less is known when it comes to approximability of both problems. Similar to Maximum Clique, folklore algorithms give an $O(n/\mathrm{polylog}\ n)$ approximation ratio for both MBB and MEB, and no better algorithm is known. However, not even NP-hardness of approximation to within some constant ratio is known for these problems. This is in stark contrast to Maximum Clique, for which strong inapproximability results are known [4,15,16,17]. Fortunately, the situation is not completely hopeless, as the problems are known to be hard to approximate under stronger complexity assumptions.
Feige [18] showed that, assuming that random 3SAT formulae cannot be refuted in polynomial time, both problems cannot be approximated to within $n^{\varepsilon}$ of the optimum in polynomial time for some $\varepsilon > 0$. (While Feige only stated this for MBB, the reduction clearly works for MEB too.) Later, Feige and Kogan [19] proved $2^{(\log n)^{\varepsilon}}$ ratio inapproximability for both problems for some $\varepsilon > 0$, assuming that 3SAT ∉ DTIME($2^{n^{3/4+\delta}}$) for some $\delta > 0$. Moreover, Khot [20] showed, assuming 3SAT ∉ BPTIME($2^{n^{\delta}}$) for some $\delta > 0$, that no polynomial time algorithm achieves $n^{\varepsilon}$-approximation for MBB for some $\varepsilon > 0$. Ambühl et al. [21] subsequently built on Khot’s result and showed a similar hardness for MEB. Recently, Bhangale et al. [22] proved that both problems are hard to approximate to within an $n^{1-\varepsilon}$ factor for every $\varepsilon > 0$, assuming a certain strengthened version of UGC and NP ≠ BPP. (In [22], the inapproximability ratio is only claimed to be $n^{\varepsilon}$ for some $\varepsilon > 0$. However, it is not hard to see that their result in fact implies $n^{1-\varepsilon}$ factor hardness of approximation as well.) In addition, while not stated explicitly, the author’s recent reduction for Densest k-Subgraph [23] yields $n^{1/\mathrm{polyloglog}\ n}$ ratio inapproximability for both problems under the Exponential Time Hypothesis [24] (3SAT ∉ DTIME($2^{o(n)}$)), and this ratio can be improved to $n^{f(n)}$ for any $f \in o(1)$ under the stronger Gap Exponential Time Hypothesis [25,26] (no $2^{o(n)}$-time algorithm can distinguish a fully satisfiable 3SAT formula from one which is only $(1-\varepsilon)$-satisfiable for some $\varepsilon > 0$); these ratios are better than those in [19] but worse than those in [20,21,22].
Finally, it is worth noting that, assuming the Planted Clique Hypothesis [27,28] (no polynomial time algorithm can distinguish between a random graph $G(n, 1/2)$ and one with a planted clique of size $\Omega(\sqrt{n})$), it follows (by partitioning the vertex set into two equal sets and deleting all the edges within each partition) that Maximum Balanced Biclique cannot be approximated to within an $\tilde{O}(\sqrt{n})$ ratio in polynomial time. Interestingly, this does not give any hardness for Maximum Edge Biclique, since the planted clique has only $O(n)$ edges, which is no more than the number of edges in a trivial biclique consisting of any vertex and all of its neighbors.
In this work, we prove strong inapproximability results for both problems, assuming SSEH:
Theorem 1.
Assuming the Small Set Expansion Hypothesis, there is no polynomial time algorithm that approximates MEB or MBB to within an $n^{1-\varepsilon}$ factor of the optimum for every $\varepsilon > 0$, unless NP ⊆ BPP.
We note that the only part of the reduction that is randomized is the gap amplification via randomized graph product [29,30]. If one is willing to assume only that NP ≠ P (and SSEH), our reduction still implies that both problems are hard to approximate to within any constant factor.
Only Bhangale et al.’s result [22] and our result achieve the inapproximability ratio of $n^{1-\varepsilon}$ for every $\varepsilon > 0$; all other results achieve at most an $n^{\varepsilon}$ ratio for some $\varepsilon > 0$. Moreover, only Bhangale et al.’s reduction and ours are candidate NP-hardness reductions, whereas each of the other reductions either uses superpolynomial time [19,20,21,23] or relies on an average-case assumption [18]. It is also worth noting here that, while both Bhangale et al.’s result and our result are based on assumptions which can be viewed as stronger variants of UGC, the two assumptions are incomparable and, to the best of our knowledge, Bhangale et al.’s technique does not apply to SSEH. A discussion on the similarities and differences between the two assumptions can be found in Appendix C.
Along the way, we prove inapproximability of the following hypergraph bisection problem, which may be of independent interest: given a hypergraph $H = (V_H, E_H)$, find a bisection $(T_0, T_1)$ of $V_H$ such that the number of uncut hyperedges is maximized. ($(T_0, T_1)$ is a bisection of $V_H$ if $|T_0| = |T_1| = |V_H|/2$, $T_0 \cap T_1 = \emptyset$ and $V_H = T_0 \cup T_1$.) We refer to this problem as Max UnCut Hypergraph Bisection (MUCHB). Roughly speaking, we show that, assuming SSEH, it is hard to distinguish a hypergraph whose optimal bisection cuts only an ε fraction of hyperedges from one in which every bisection cuts all but an ε fraction of hyperedges:
Lemma 1.
Assuming the Small Set Expansion Hypothesis, for every $\varepsilon > 0$, it is NP-hard to, given a hypergraph $H = (V_H, E_H)$, distinguish between the following two cases:
  • (Completeness) There is a bisection $(T_0, T_1)$ of $V_H$ s.t. $|E_H(T_0)|, |E_H(T_1)| \geq (1/2 - \varepsilon)|E_H|$.
  • (Soundness) For every set $T \subseteq V_H$ of size at most $|V_H|/2$, $|E_H(T)| \leq \varepsilon|E_H|$.
Here $E_H(T) \triangleq \{e \in E_H \mid e \subseteq T\}$ denotes the set of hyperedges that lie completely inside the set $T \subseteq V_H$.
Our result above is similar to Khot’s quasi-random PCP [20]. Specifically, Khot’s quasi-random PCP can be viewed as a hardness result for MUCHB in the setting where the hypergraph is d-uniform; roughly speaking, Khot’s result states that it is hard (if 3SAT ∉ $\bigcap_{\delta > 0}$ BPTIME($2^{n^{\delta}}$)) to distinguish a d-uniform hypergraph in which a $1/2^{d-2}$ fraction of hyperedges are uncut in the optimal bisection from one in which roughly a $1/2^{d-1}$ fraction of hyperedges are uncut in any bisection. Note that the latter is the fraction of uncut hyperedges in random d-uniform hypergraphs, hence the name “quasi-random”. In this sense, Khot’s result provides better soundness at the expense of worse completeness compared to Lemma 1.

1.2. Minimum k-Cut

In addition to the above biclique problems, we prove an inapproximability result for the Minimum k-Cut problem, in which a weighted graph is given and the goal is to find a set of edges with minimum total weight whose removal partitions the graph into (at least) k connected components. The Minimum k-Cut problem has long been studied. When $k = 2$, the problem can be solved in polynomial time simply by solving Minimum $s$-$t$ Cut for every possible pair of s and t. In fact, for any fixed k, the problem was proved to be in P by Goldschmidt and Hochbaum [31], who also showed that, when k is part of the input, the problem is NP-hard. To circumvent this, Saran and Vazirani [32] devised two simple polynomial time $(2 - 2/k)$-approximation algorithms for the problem. In the ensuing years, different approximation algorithms [33,34,35,36,37] have been proposed for the problem, none of which is able to achieve an approximation ratio of $(2 - \varepsilon)$ for some $\varepsilon > 0$ in polynomial time. In fact, Saran and Vazirani themselves conjectured that $(2 - \varepsilon)$-approximation is intractable for the problem [32]. In this work, we show that their conjecture is indeed true if the SSEH holds:
Theorem 2.
Assuming the Small Set Expansion Hypothesis, it is NP-hard to approximate Minimum k-Cut to within a $(2 - \varepsilon)$ factor of the optimum for every constant $\varepsilon > 0$.
Note that the problem was claimed to be APX-hard in [32]. However, to the best of our knowledge, the proof has never been published and no other inapproximability result is known.

1.3. Densest At-Least-k-Subgraph

Our final result is a hardness of approximation for the Densest At-Least-k-Subgraph (DALkS) problem, which can be stated as follows: given an edge-weighted graph, find a subset S of at least k vertices such that the subgraph induced on S has maximum density, which is defined as the ratio between the total weight of edges and the number of vertices. The problem was first introduced by Andersen and Chellapilla [38], who also gave a 3-approximation algorithm for the problem. Shortly after, 2-approximation algorithms for the problem were discovered by Andersen [39] and independently by Khuller and Saha [40]. We show that, assuming SSEH, this approximation guarantee is essentially the best we can hope for:
Theorem 3.
Assuming the Small Set Expansion Hypothesis, it is NP-hard to approximate Densest At-Least-k-Subgraph to within a $(2 - \varepsilon)$ factor of the optimum for every constant $\varepsilon > 0$.
After our manuscript was made available online, we were informed that Theorem 3 was also proved independently by Bergner [41]. To the best of our knowledge, this is the only known hardness of approximation result for DALkS. We remark that DALkS is a variant of the Densest k-Subgraph (DkS) problem, which is the same as DALkS except that the desired set S must have size exactly k. DkS has been extensively studied dating back to the early 90s [10,18,20,23,42,43,44,45,46,47,48,49,50,51,52]. Despite these considerable efforts, its approximability is still wide open. In particular, even though lower bounds have been shown under stronger complexity assumptions [10,18,20,23,50,52] and for LP/SDP hierarchies [49,53,54], not even constant factor NP-hardness of approximation for DkS is known. On the other hand, the best polynomial time algorithm for DkS achieves only an $O(n^{1/4+\varepsilon})$-approximation [49]. Since any inapproximability result for DALkS translates directly to DkS, even proving some constant factor NP-hardness of approximating DALkS would advance our understanding of the approximability of DkS.

2. Inapproximability of Minimum k-Cut

We now proceed to prove our main results. Let us start with the simplest: Minimum k-Cut.
Proof of Theorem 2.
The reduction from SSE($\delta, \eta$) to Minimum k-Cut is simple; the graph G remains the input graph for Minimum k-Cut and we let $k = \delta n + 1$ where $n = |V|$.
Completeness. If there is $S \subseteq V$ of size $\delta n$ such that $\Phi(S) \leq \eta$, then we partition the graph into k groups where the first group is $V \setminus S$ and each of the other groups contains one vertex from S. The edges cut are the edges across the cut $(S, V \setminus S)$ and the edges within the set S itself. The total weight of edges of the former type is $d|S|\Phi(S) \leq \eta d|S|$ and that of the latter type is at most $d|S|/2$. Hence, the total weight of edges cut in this partition is at most $(1/2 + \eta)d|S| = (1/2 + \eta)\delta dn$.
Soundness. Suppose that, for every $S \subseteq V$ of size $\delta n$, $\Phi(S) \geq 1 - \eta$. Let $T_1, \ldots, T_k \subseteq V$ be any k-partition of the graph. Assume without loss of generality that $|T_1| \leq \cdots \leq |T_k|$. Let $A = T_1 \cup \cdots \cup T_i$ where i is the maximum index such that $|T_1 \cup \cdots \cup T_i| \leq \delta n$.
We claim that $|A| \geq \delta n - \sqrt{n}$. To see that this is the case, suppose for the sake of contradiction that $|A| < \delta n - \sqrt{n}$. Since $|A \cup T_{i+1}| > \delta n$, we have $|T_{i+1}| > \sqrt{n}$. Moreover, since $A = T_1 \cup \cdots \cup T_i$, we have $i \leq |A| < \delta n - \sqrt{n}$. As a result, we have $n = |T_1 \cup \cdots \cup T_k| \geq |T_{i+1} \cup \cdots \cup T_k| \geq (k - i)|T_{i+1}| > \sqrt{n} \cdot \sqrt{n} = n$, which is a contradiction. Hence, $|A| \geq \delta n - \sqrt{n}$.
Now, note that, for every $S \subseteq V$ of size $\delta n$, $\Phi(S) \geq 1 - \eta$ implies that $E(S) \leq \eta d\delta n/2$ where $E(S)$ denotes the total weight of all edges within S. Since $|A| \leq \delta n$, we also have $E(A) \leq \eta d\delta n/2$. As a result, the total weight of edges across the cut $(A, V \setminus A)$, all of which are cut by the partition, is at least
$$d|A| - \eta d\delta n \geq (1 - \eta)d\delta n - d\sqrt{n} = \left(1 - \eta - \frac{1}{\delta\sqrt{n}}\right)\delta dn.$$
For every sufficiently small constant $\varepsilon > 0$, by setting $\eta = \varepsilon/20$ and $n \geq 100/(\varepsilon^2\delta^2)$, the ratio between the two cases is at least $(2 - \varepsilon)$, which concludes the proof of Theorem 2. ☐
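The reduction and the completeness partition above can be sketched in a few lines; the encodings of graphs and partitions below are our own illustration, not the paper's notation.

```python
def sse_to_min_kcut(n, delta):
    """The reduction keeps the SSE graph unchanged and only sets
    k = delta*n + 1, forcing an optimal k-cut to isolate ~delta*n vertices."""
    return int(delta * n) + 1

def partition_cut_weight(edges, parts):
    """Total weight of edges whose endpoints land in different parts."""
    part_of = {v: i for i, p in enumerate(parts) for v in p}
    return sum(w for (u, v), w in edges.items() if part_of[u] != part_of[v])

# Completeness-style partition on a toy 4-cycle: V \ S plus singletons from S.
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
k = sse_to_min_kcut(4, 0.5)          # k = 3
parts = [{2, 3}, {0}, {1}]           # S = {0, 1} split into singletons
print(partition_cut_weight(edges, parts))  # cuts (0,1), (1,2), (3,0) -> 3.0
```

The cut pays once for the boundary of S and once for the edges inside S, which is exactly the $(1/2 + \eta)\delta dn$ accounting in the completeness argument.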

3. Inapproximability of Densest At-Least-k-Subgraph

We next prove our inapproximability result for Densest At-Least-k-Subgraph, which is also very simple. For this reduction and the subsequent reductions, it will be more convenient for us to use a different (but equivalent) formulation of SSEH. To state it, we first define a variant of SSE($\delta, \eta$) called SSE($\delta, \eta, M$); the completeness remains the same whereas the soundness is strengthened to include all S of size in $\left[\frac{\delta|V|}{M}, \delta|V|M\right]$.
Definition 2
(SSE($\delta, \eta, M$)). Given a regular edge-weighted graph $G = (V, E, w)$, distinguish between:
  • (Completeness) There exists $S \subseteq V$ of size $\delta|V|$ such that $\Phi(S) \leq \eta$.
  • (Soundness) For every $S \subseteq V$ with $|S| \in \left[\frac{\delta|V|}{M}, \delta|V|M\right]$, $\Phi(S) \geq 1 - \eta$.
The new formulation of the hypothesis can now be stated as follows.
Conjecture 2.
For every $\eta, M > 0$, there is $\delta = \delta(\eta, M) \in (0, 1/2]$ such that SSE($\delta, \eta, M$) is NP-hard.
Raghavendra et al. [11] showed that this formulation is equivalent to the earlier formulation (Conjecture 1); please refer to Appendix A.2 of [11] for a simple proof of this equivalence.
Proof of Theorem 3.
Given an instance $G = (V, E, w)$ of SSE($\delta, \eta, M$), we create an input graph $G' = (V', E', w')$ for Densest At-Least-k-Subgraph as follows. $V'$ consists of all the vertices in V and an additional vertex $v^*$. The weight function $w'$ remains the same as w for all edges in E, whereas $v^*$ has only a self-loop with weight $d\delta n/2$. (If we would like to avoid self-loops, we can replace $v^*$ with two vertices $v^*_1, v^*_2$ with an edge of weight $d\delta n/2$ between them.) In other words, $E' = E \cup \{(v^*, v^*)\}$ and $w'((v^*, v^*)) = d\delta n/2$. Finally, let $k = 1 + \delta n$ where $n = |V|$.
Completeness. If there is $S \subseteq V$ of size $\delta n$ such that $\Phi(S) \leq \eta$, consider the set $S' = S \cup \{v^*\}$. We have $|S'| = k$ and the density of $S'$ is $(d\delta n/2 + E(S))/k$ where $E(S)$ denotes the total weight of edges within S. This can be written as
$$\frac{\delta n}{k} \cdot \frac{d\delta n/2 + E(S)}{\delta n} = \frac{\delta n}{k}\left(\frac{d}{2} + \frac{1 - \Phi(S)}{2}\,d\right) \geq \frac{d\delta n(1 - \eta/2)}{k}.$$
Soundness. Suppose that $\Phi(S) \geq 1 - \eta$ for every $S \subseteq V$ of size $|S| \in [\delta n/M, \delta nM]$. Consider any set $T \subseteq V'$ of size at least k. Let $T' = T \setminus \{v^*\}$ and let $E(T')$ denote the total weight of edges within $T'$. Observe that the density of T is at most $(d\delta n/2 + E(T'))/|T|$. Let us consider the following two cases.
  • $|T'| \leq \delta nM$. In this case, $\Phi(T') \geq 1 - \eta$ and we have
    $$\frac{d\delta n/2 + E(T')}{|T|} \leq \frac{d\delta n}{2k} + d\,\frac{1 - \Phi(T')}{2} \leq \frac{d\delta n}{2k} + d\eta/2 = \frac{d\delta n\left(\frac{1}{2} + \frac{\eta}{2}\left(1 + \frac{1}{\delta n}\right)\right)}{k} \leq \frac{d\delta n(1/2 + \eta)}{k}.$$
  • $|T'| > \delta nM$. In this case, we have
    $$\frac{d\delta n/2 + E(T')}{|T|} < \frac{d}{2M} + \frac{d}{2} = \frac{d\delta n}{k}\left(\frac{1}{2} + \frac{1}{2M}\right)\left(1 + \frac{1}{\delta n}\right) \leq \frac{d\delta n\left(\frac{1}{2} + \frac{1}{\delta n} + \frac{1}{M}\right)}{k}.$$
Hence, in both cases, the density of T is at most $d\delta n\left(1/2 + \max\left\{\eta, \frac{1}{\delta n} + \frac{1}{M}\right\}\right)/k$.
For every sufficiently small constant $\varepsilon > 0$, by picking $\eta = \varepsilon/20$, $M = 40/\varepsilon$ and $n \geq 800/(\varepsilon\delta)$, the ratio between the two cases is at least $(2 - \varepsilon)$, concluding the proof of Theorem 3. ☐
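As an illustration of the construction in the proof above, the following sketch (with our own hypothetical graph encoding) builds $G'$ by adding the self-loop vertex and evaluates the density of a candidate set.

```python
def sse_to_dalks(n, d, delta, edges):
    """Add an extra vertex v* (index n) carrying a self-loop of weight
    d*delta*n/2 and set k = delta*n + 1.  Self-loops contribute their full
    weight to the density numerator in this toy encoding."""
    new_edges = dict(edges)
    new_edges[(n, n)] = d * delta * n / 2
    return new_edges, int(delta * n) + 1

def density(edges, S):
    """Total weight of edges with both endpoints in S, divided by |S|."""
    inside = sum(w for (u, v), w in edges.items() if u in S and v in S)
    return inside / len(S)

# Toy run on a 4-cycle (d = 2) with delta = 1/2: the self-loop weight is 2.0.
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
new_edges, k = sse_to_dalks(4, 2, 0.5, edges)
print(k)                              # 3
print(density(new_edges, {0, 1, 4}))  # (1.0 + 2.0) / 3 = 1.0
```

Taking S' = S ∪ {v*} as in the completeness case, the heavy self-loop guarantees density close to $d\delta n/k$ whenever S itself is non-expanding.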

4. Inapproximability of MEB and MBB

Let us now turn our attention to MEB and MBB. First, note that we can reduce MUCHB to MEB/MBB by simply letting each side of the bipartite graph be a copy of $E_H$ and creating an edge $(e_1, e_2)$ iff $e_1 \cap e_2 = \emptyset$. This immediately shows that Lemma 1 implies the following:
Lemma 2.
Assuming SSEH, for every $\delta > 0$, it is NP-hard to, given a bipartite graph $G = (L, R, E)$ with $|L| = |R| = n$, distinguish between the following two cases:
  • (Completeness) G contains $K_{(1/2-\delta)n, (1/2-\delta)n}$ as a subgraph.
  • (Soundness) G does not contain $K_{\delta n, \delta n}$ as a subgraph.
Here $K_{t,t}$ denotes the complete bipartite graph in which each side contains t vertices.
We provide the full proof of Lemma 2 in Appendix A. We also note that Theorem 1 follows from Lemma 2 by gap amplification via randomized graph product [29,30]. Since this has been analyzed before even for biclique (Appendix D [20]), we defer the full proof to Appendix B.
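The MUCHB-to-biclique step described above is easy to make concrete; the sketch below, with hyperedges represented as frozensets, is our own illustration.

```python
def muchb_to_biclique(hyperedges):
    """Both sides of the bipartite graph are copies of E_H; connect left
    copy e1 to right copy e2 iff the hyperedges are disjoint."""
    m = len(hyperedges)
    return {(i, j) for i in range(m) for j in range(m)
            if not (hyperedges[i] & hyperedges[j])}

# Hyperedges fitting inside one side of a bisection are pairwise disjoint,
# so they form a biclique between the two copies.
H = [frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})]
E = muchb_to_biclique(H)
print((0, 2) in E)  # True: {0,1} and {2,3} are disjoint
print((0, 1) in E)  # False: they share vertex 1
```

The point of the reduction is exactly this correspondence: a large set of hyperedges lying inside $T_0$ (resp. $T_1$) is pairwise disjoint from any hyperedge set inside the other side, producing a large biclique.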
We are now only left to prove Lemma 1; we devote the rest of this section to this task.

4.1. Preliminaries

Before we continue, we need additional notation and preliminaries. For every graph $G = (V, E, w)$ and every vertex v, we write $G(v)$ to denote the distribution on its neighbors weighted according to w. (That is, $\Pr_{u' \sim G(v)}[u' = u]$ is $w((v, u))/d$ if $(v, u) \in E$ and is zero otherwise.) Moreover, we sometimes abuse the notation and write $e \sim G$ to denote a random edge of G weighted according to w.
While our reduction can be understood without the notion of unique games, it is best described in the context of unique games reductions. We provide a definition of unique games below.
Definition 3 
(Unique Game (UG)). A unique game instance $\mathcal{U} = (G = (V, E, W), [R], \{\pi_e\}_{e \in E})$ consists of an edge-weighted graph $G = (V, E, W)$, a label set $[R] = \{1, \ldots, R\}$, and, for each $e \in E$, a permutation $\pi_e: [R] \to [R]$. The goal is to find an assignment $F: V \to [R]$ such that $\mathrm{val}_{\mathcal{U}}(F) \triangleq \Pr_{(u,v) \sim G}[\pi_{(u,v)}(F(u)) = F(v)]$ is maximized; we call an edge $(u, v)$ such that $\pi_{(u,v)}(F(u)) = F(v)$ satisfied.
Khot’s Unique Games Conjecture (UGC) [7] states that, for every $\varepsilon > 0$, it is NP-hard to distinguish a unique game in which there exists an assignment satisfying at least a $(1 - \varepsilon)$ fraction of edges from one in which every assignment satisfies at most an $\varepsilon$ fraction of edges.
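For readers who prefer code to notation, here is a minimal sketch (our own, with a toy encoding of instances) of evaluating the value $\mathrm{val}_{\mathcal{U}}(F)$ of an assignment from the definition above.

```python
def ug_value(edges, perms, F):
    """val_U(F): weighted fraction of constraints with pi_(u,v)(F(u)) = F(v).
    `edges` maps (u, v) -> weight; `perms` maps (u, v) -> a dict encoding
    the permutation on the label set."""
    total = sum(edges.values())
    sat = sum(w for (u, v), w in edges.items() if perms[(u, v)][F[u]] == F[v])
    return sat / total

# Two vertices, labels {0, 1}, one unit-weight constraint whose
# permutation swaps the labels.
edges = {('a', 'b'): 1.0}
perms = {('a', 'b'): {0: 1, 1: 0}}
print(ug_value(edges, perms, {'a': 0, 'b': 1}))  # constraint satisfied -> 1.0
```

Here the assignment F(a) = 0, F(b) = 1 satisfies the single swap constraint; giving both vertices label 0 would satisfy nothing.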
Finally, we need some preliminaries in discrete Fourier analysis. We state here only a few facts that we need. We refer interested readers to [55] for more details about the topic.
For any discrete probability space $\Omega$, every $f: \Omega^R \to [0, 1]$ can be written as $\sum_{\sigma \in [|\Omega|]^R} \hat{f}(\sigma)\phi_\sigma$ where $\{\phi_\sigma\}_{\sigma \in [|\Omega|]^R}$ is the product Fourier basis of $L^2(\Omega^R)$ (see [55] (Chapter 8.1)). The degree-d influence on the j-th coordinate of f is $\mathrm{infl}_j^{\leq d}(f) \triangleq \sum_{\sigma \in [|\Omega|]^R, \sigma_j \neq 1, \#\sigma \leq d} \hat{f}^2(\sigma)$ where $\#\sigma \triangleq |\{i \in [R] \mid \sigma_i \neq 1\}|$. It is well known that $\sum_{j=1}^{R} \mathrm{infl}_j^{\leq d}(f) \leq d$ (see [56] (Proposition 3.8)).
We also need the following theorem. It follows easily from the so-called “It Ain’t Over Till It’s Over” conjecture, which is by now a theorem ([56] (Theorem 4.9)). For more details on how this version follows from there, please refer to [57] (p. 769).
Theorem 4
([56]). For any $\beta, \varepsilon_T, \gamma > 0$, there exist $\kappa > 0$ and $t, d \in \mathbb{N}$ such that, if any functions $f_1, \ldots, f_t: \Omega^R \to \{0, 1\}$, where $\Omega$ is a probability space whose probability of each atom is at least $\beta$, satisfy
$$\forall i \in [t],\ \mathbb{E}_{x \sim \Omega^R}[f_i(x)] \leq 0.99 \quad \text{and} \quad \forall j \in [R],\ \forall 1 \leq i_1 \neq i_2 \leq t,\ \min\{\mathrm{infl}_j^{\leq d}(f_{i_1}), \mathrm{infl}_j^{\leq d}(f_{i_2})\} \leq \kappa,$$
then
$$\Pr_{x \sim \Omega^R,\ D \sim \mathcal{S}_{\varepsilon_T}(R)}\left[\bigwedge_{i=1}^{t} \left(f_i(C_D(x)) \equiv 1\right)\right] < \gamma$$
where $\mathcal{S}_{\varepsilon_T}(R)$ is a random subset of $[R]$ in which each $i \in [R]$ is included independently w.p. $\varepsilon_T$, $C_D(x) \triangleq \{x' \mid x'_{[R] \setminus D} = x_{[R] \setminus D}\}$ and $f_i(C_D(x)) \equiv 1$ is a shorthand for $\forall x' \in C_D(x), f_i(x') = 1$.
We remark that the constant 0.99 in Theorem 4 could be replaced by any constant less than one; we use it only to avoid introducing more parameters.

4.2. Bansal-Khot Long Code Test and A Candidate Reduction

Theorem 4 leads us nicely to the Bansal-Khot long code test [58]. For UGC hardness reductions, one typically needs a long code test (aka dictatorship gadget) which, on input $f_1, \ldots, f_t: \{0,1\}^R \to \{0,1\}$, has the following properties:
  • (Completeness) If $f_1 = \cdots = f_t$ is a long code, the test accepts with large probability. (A long code is simply a j-junta (i.e., a function that depends only on $x_j$) for some $j \in [R]$.)
  • (Soundness) If $f_1, \ldots, f_t$ are balanced (i.e., $\mathbb{E} f_1 = \cdots = \mathbb{E} f_t = 1/2$) and are “far from being a long code”, then the test accepts with low probability.
    A widely-used notion of “far from being a long code”, and the one we will use here, is that the functions do not share a coordinate with large low-degree influence, i.e., for every $j \in [R]$ and every $i_1 \neq i_2 \in [t]$, at least one of $\mathrm{infl}_j^{\leq d}(f_{i_1})$ and $\mathrm{infl}_j^{\leq d}(f_{i_2})$ is small.
The Bansal-Khot long code test works by first picking $x \sim \{0,1\}^R$ and $D \sim \mathcal{S}_{\varepsilon_T}(R)$, and then testing whether every $f_i$ evaluates to 1 on the whole of $C_D(x)$. This can be viewed as an “algorithmic” version of Theorem 4; specifically, the theorem (with $\Omega = \{0,1\}$) immediately implies the soundness property of this test. On the other hand, it is obvious that, if $f_1 = \cdots = f_t$ is a long code, then the test accepts with probability at least $1/2 - \varepsilon_T$.
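A minimal simulation of the test (for $\Omega = \{0,1\}$, with a brute-force enumeration of $C_D(x)$; the encoding is our own sketch, not the paper's) illustrates both the test and the completeness calculation for a dictator function.

```python
import itertools
import random

def bansal_khot_test(fs, R, eps_T, rng=random):
    """One round of the test: pick x and a sparse random subset D of
    coordinates, accept iff every f in fs is 1 on the entire sub-cube
    C_D(x) of points agreeing with x outside D."""
    x = [rng.randrange(2) for _ in range(R)]
    D = [i for i in range(R) if rng.random() < eps_T]
    for ys in itertools.product([0, 1], repeat=len(D)):  # enumerate C_D(x)
        xp = list(x)
        for i, y in zip(D, ys):
            xp[i] = y
        if any(f(tuple(xp)) == 0 for f in fs):
            return False
    return True

# A dictator f(x) = x_3 passes iff x_3 = 1 and coordinate 3 is not in D,
# so the acceptance probability is (1 - eps_T)/2, here 0.45.
rng = random.Random(0)
f = lambda x: x[3]
acc = sum(bansal_khot_test([f, f], 10, 0.1, rng) for _ in range(2000)) / 2000
print(0.3 < acc < 0.6)
```

Note that enumerating $C_D(x)$ costs $2^{|D|}$ evaluations, which is cheap here because $D$ is sparse; the actual reduction never runs the test, it only encodes it into hyperedges.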
Bansal and Khot used this test to prove tight hardness of approximation of Vertex Cover. The reduction is via a natural composition of the test with unique games. Their reduction also gives a candidate reduction from UG to MUCHB, which is stated below in Figure 1.
As is typical for gadget reductions, for $T \subseteq V_H$, we view the indicator functions $f_u(x) \triangleq \mathbb{1}[(u, x) \in T]$ for each $u \in V$ as the intended long codes. If there exists an assignment ϕ to the unique game instance that satisfies nearly all the constraints, then the bisection corresponding to $f_u(x) = x_{\phi(u)}$ cuts only a small fraction of hyperedges, which yields the completeness of MUCHB.
As for the soundness, we want to decode a UG assignment from a set $T \subseteq V_H$ of size at most $|V_H|/2$ which contains at least an ε fraction of hyperedges. In terms of the tests, this corresponds to a collection of functions $\{f_u\}_{u \in V}$ such that $\mathbb{E}_{u \sim V}\mathbb{E}_{x \sim \{0,1\}^R} f_u(x) \leq 1/2$ and the Bansal-Khot test on $f_{v_1}, \ldots, f_{v_t}$ passes with probability at least ε, where $v_1, \ldots, v_t$ are sampled as in Figure 1. Now, if we assume that $\mathbb{E}_x f_u(x) \leq 0.99$ for all $u \in V$, then such decoding is possible via a similar method as in [58], since Theorem 4 can be applied here.
Unfortunately, the assumption $\mathbb{E}_x f_u(x) \leq 0.99$ does not hold for an arbitrary $T \subseteq V_H$ and the soundness property indeed fails. For instance, imagine the constraint graph G of the starting UG instance consisting of two disconnected components of equal size; let $V_0, V_1$ be the sets of vertices in the two components. In this case, the bisection $(V_0 \times \{0,1\}^R, V_1 \times \{0,1\}^R)$ does not cut even a single hyperedge! This is regardless of whether there exists an assignment to the UG that satisfies a large fraction of edges.

4.3. RST Technique and The Reduction from SSE to MUCHB

The issue described above is common for graph problems that involve some form of expansion of the graph. The RST technique [11] was in fact invented specifically to circumvent this issue. It works by first reducing SSE to UG and then exploiting the structure of the constructed UG instance when composing it with a long code test; this allows them to avoid extreme cases such as the one above. There are four parameters in the reduction: $\varepsilon_V, \beta \in (0, 1)$ and $R, k \in \mathbb{N}$ such that R is divisible by k. Before we describe the reduction, let us define additional notation:
  • Let $G^{\otimes R}$ denote the R-tensor graph of $G = (V, E, w)$; the vertex set of $G^{\otimes R}$ is $V^R$ and, for every $A, B \in V^R$, the edge weight between A, B is the product of the weights $w(A_i, B_i)$ in G for all $i \in [R]$.
  • For each $A \in V^R$, let $T_V(A)$ denote the distribution on $V^R$ where the i-th coordinate is set to $A_i$ with probability $1 - \varepsilon_V$ and is uniformly randomly sampled from V otherwise.
  • Let $\Pi_{R,k}$ denote the set of all permutations π of $[R]$ such that, for each $j \in [k]$, $\pi(\{R(j-1)/k + 1, \ldots, Rj/k\}) = \{R(j-1)/k + 1, \ldots, Rj/k\}$.
  • Let $\{0, 1, \perp\}_\beta$ denote the probability space such that the probabilities for 0, 1 are both β/2 and the probability for ⊥ is $1 - \beta$.
The first step of the reduction takes an SSE($\delta, \eta, M$) instance $G = (V, E, w)$ and produces a unique game $\mathcal{U} = (G' = (V', E', W'), [R], \{\pi_e\}_{e \in E'})$ where $V' = V^R$ and the edges are distributed as follows:
  • Sample $A \sim V^R$ and $\tilde{A} \sim T_V(A)$.
  • Sample $B \sim G^{\otimes R}(\tilde{A})$ and $\tilde{B} \sim T_V(B)$.
  • Sample two random permutations $\pi_A, \pi_B \sim \Pi_{R,k}$.
  • Output an edge $e = (\pi_A(\tilde{A}), \pi_B(\tilde{B}))$ with $\pi_e = \pi_B \circ \pi_A^{-1}$.
Here $\varepsilon_V$ is a small constant, k is large and $R/k$ should be thought of as $\Theta(1/\delta)$. When there exists a set $S \subseteq V$ of size $\delta|V|$ with small edge expansion, the intended assignment is, for each $A \in V^R$, to find the first block $j \in [k]$ such that $|A^{(j)} \cap S| = 1$, where $A^{(j)}$ denotes the multiset $\{A_{R(j-1)/k+1}, \ldots, A_{Rj/k}\}$, and let $F(A)$ be the coordinate of the vertex in that intersection. If no such j exists, we assign $F(A)$ arbitrarily. Note that, since $R/k = \Theta(1/\delta)$, $\Pr[|A^{(j)} \cap S| = 1]$ is constant, which means that only a $2^{-\Omega(k)}$ fraction of vertices are assigned arbitrarily. Moreover, it is not hard to see that, for the other vertices, their assignments rarely violate the constraints, as $\varepsilon_V$ and $\Phi(S)$ are small. This yields the completeness. In addition, the soundness was shown in [10,11], i.e., if every $S \subseteq V$ of size $\delta|V|$ has near perfect expansion, then no assignment satisfies many constraints in $\mathcal{U}$ (see Lemma 7).
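The intended assignment described above can be sketched as follows; the flat-tuple encoding of A and the fallback label are our own illustrative choices.

```python
def intended_assignment(A, S, R, k):
    """F(A): scan the k blocks of size R/k; in the first block whose
    multiset of vertices meets S exactly once, answer that coordinate."""
    b = R // k
    for j in range(k):
        block = range(j * b, (j + 1) * b)
        hits = [i for i in block if A[i] in S]
        if len(hits) == 1:
            return hits[0]
    return 0  # arbitrary fallback when no block qualifies

# A has 4 blocks of size 2 over vertices {0,..,5}; the non-expanding set is S = {5}.
A = (0, 1, 5, 2, 5, 5, 3, 4)
print(intended_assignment(A, {5}, 8, 4))  # block 1 = (5, 2) meets S once -> coordinate 2
```

Block 0 misses S entirely and block 2 hits it twice, so the first block with a unique intersection is block 1; two strings that agree on such a block (up to a block-respecting permutation) receive consistent labels, which is why the blocks make the assignment robust to the noise $T_V$.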
The second step is to reduce this UG instance to a hypergraph $H = (V_H, E_H)$. Instead of making the vertex set $V^R \times \{0,1\}^R$ as in the previous candidate reduction, we will instead make $V_H = V^R \times \Omega^R$ where $\Omega = \{0, 1, \perp\}_\beta$ and β is a small constant. This does not seem to make much sense from the UG reduction standpoint, because we typically want to decide which side of the bisection $(A, x) \in V_H$ belongs to according to $x_{F(A)}$, but $x_{F(A)}$ could be ⊥ in this construction. However, it makes sense when we view this as a reduction from SSE directly: let us discard all coordinates i such that $x_i = \perp$ and define $A^{(j,x)} \triangleq \{A_i \mid i \in \{R(j-1)/k+1, \ldots, Rj/k\} \wedge x_i \neq \perp\}$. If there exists a block $j \in [k]$ such that $|A^{(j,x)} \cap S| = 1$, then let $j(A, x)$ be the first such block (i.e., $\min\{j \mid |A^{(j,x)} \cap S| = 1\}$), let $i(A, x)$ be the coordinate in the intersection between $A^{(j(A,x),x)}$ and S, and assign $(A, x)$ to $T_{x_{i(A,x)}}$. (Recall that $(T_0, T_1)$ is the intended bisection.) Otherwise, if no such block exists, assign $(A, x)$ arbitrarily.
Observe that, in the intended solution, the side that $(A, x)$ is assigned to does not change if (1) $A_i$ is modified for some $i \in [R]$ s.t. $x_i = \perp$ or (2) we apply some permutation $\pi \in \Pi_{R,k}$ to both A and x. In other words, we can “merge” two vertices $(A, x)$ and $(A', x')$ that are equivalent through these changes together in the reduction. For notational convenience, instead of merging vertices, we will just modify the reduction so that, if $(A, x)$ is included in some hyperedge, then every $(A', x')$ reachable from $(A, x)$ by these operations is also included in the hyperedge. More specifically, if we define $M_x(A) \triangleq \{A' \in V^R \mid A'_i = A_i \text{ for all } i \in [R] \text{ such that } x_i \neq \perp\}$ corresponding to the first operation, then we add $\pi(A', x)$ to the hyperedge for every $A' \in M_x(A)$ and $\pi \in \Pi_{R,k}$. The full reduction is shown in Figure 2.
Note that the test we apply here is slightly different from the Bansal–Khot test, as ours is over Ω = {0, 1, ⊥}_β instead of the domain {0, 1} used in [58]. Another thing to note is that our vertices and hyperedges are now weighted: the vertices according to the product measure on V^R × Ω^R, and the hyperedges according to the distribution produced by the reduction. We write μ_H to denote the measure on the vertices, i.e., for T ⊆ V^R × {0, 1, ⊥}^R, μ_H(T) = Pr_{A∼V^R, x∼Ω^R}[(A, x) ∈ T], and we abuse the notation E_H(T) and use it to denote the probability that a hyperedge generated as in Figure 2 lies completely inside T. We note here that, while MUCHB as stated in Lemma 1 is unweighted, it is not hard to see that we can go from the weighted version to the unweighted one by copying each vertex and each hyperedge a number of times proportional to its weight. (This is doable since we can pick β, ε_V, ε_T to be rational.)
The advantage of this reduction is that the vertex “merging” makes the gadget reduction non-local; for instance, it is clear that, even if the starting graph has two connected components, the resulting hypergraph is connected. In fact, Raghavendra et al. [11] showed a much stronger quantitative bound. To state it, let us consider any T ⊆ V_H with μ_H(T) = 1/2. From how the hyperedges are defined, we can assume w.l.o.g. that, if (A, x) ∈ T, then π(A′, x) ∈ T for every A′ ∈ M_x(A) and every π ∈ Π_{R,k}. Again, let f_A(x) ≜ 𝟙[(A, x) ∈ T]. The following bound on the variance of E_x f_A(x) is implied by the proof of Lemma 6.6 in [11]:
E_{A∼V^R} [ (E_{x∼Ω^R} f_A(x) − 1/2)² ] ≤ √β.
The above bound implies that, for most A’s, the mean of f_A cannot be too large. This will indeed allow us to ultimately apply Theorem 4 to a certain fraction of the tuples (B̃^1, …, B̃^ℓ) in the reduction, which leads to a UG assignment with non-negligible value.

4.4. Completeness

In the completeness case, we define a bisection similar to the one described above. This bisection indeed cuts only a small fraction of the hyperedges; quantitatively, this yields the following lemma.
Lemma 3.
If there is a set S ⊆ V such that Φ(S) ≤ η and |S| = δ|V| where δ ∈ [k/(10βR), k/(βR)], then there is a bisection (T_0, T_1) of V_H such that E_H(T_0), E_H(T_1) ≥ 1/2 − O(ε_T/β) − O(ℓη/β) − O(ℓε_V/β) − 2^{−Ω(k)}, where O(·) and Ω(·) hide only absolute constants.
Proof. 
Suppose that there exists S ⊆ V of size |S| = δ|V|, where δ ∈ [k/(10βR), k/(βR)], and Φ(S) ≤ η. For A ∈ V^R and x ∈ {0, 1, ⊥}^R, we will use the following notation throughout this proof:
  • For j ∈ [k], let W(A, x, j) denote the set of all coordinates i in the j-th block such that x_i ≠ ⊥ and A_i ∈ S, i.e., W(A, x, j) = {i ∈ {R(j−1)/k+1, …, Rj/k} ∣ A_i ∈ S ∧ x_i ≠ ⊥}.
  • Let j(A, x) denote the first block j with |W(A, x, j)| = 1, i.e., j(A, x) = min{j ∈ [k] ∣ |W(A, x, j)| = 1}. If no such block exists, we set j(A, x) = −1.
  • Let i(A, x) be the only element in W(A, x, j(A, x)). If j(A, x) = −1, let i(A, x) = −1.
To define T_0, T_1, we start by constructing T′_0 ⊆ T_0 and T′_1 ⊆ T_1 as follows: assign each (A, x) ∈ V_H such that j(A, x) ≠ −1 to T′_{x_{i(A,x)}}. Finally, we assign the rest of the vertices arbitrarily to T_0 and T_1 in such a way that μ_H(T_0) = μ_H(T_1). Since T_0 ⊇ T′_0 and T_1 ⊇ T′_1, it suffices to show the desired lower bound for E_H(T′_0) and E_H(T′_1); to avoid cluttered notation, we write T_0, T_1 for T′_0, T′_1 in the rest of the proof. Due to symmetry, it suffices to bound E_H(T_0). Recall that E_H(T_0) = Pr[e ⊆ T_0], where e is generated as detailed in Figure 2.
To compute E_H(T_0), it will be most convenient to make a block-by-block analysis. In particular, for each block j ∈ [k], we define G_j to be the event that j(B, x′) = j for some (B, x′) ∈ e. We will be interested in bounding the following conditional probabilities:
  • c_1 ≜ Pr[j(A, x) = j ∣ j(A, x) > j − 1]
  • c_2 ≜ Pr[G_j ∣ j(A, x) > j ∧ ¬G_1 ∧ ⋯ ∧ ¬G_{j−1}]
  • c_3 ≜ Pr[e ⊈ T_0 ∣ j(A, x) = j ∧ ¬G_1 ∧ ⋯ ∧ ¬G_{j−1}]
Here and throughout the proof, e, A, Ã^1, …, Ã^ℓ, B^1, …, B^ℓ, B̃^1, …, B̃^ℓ are as sampled by our reduction in Figure 2. Note also that, by symmetry between the blocks, c_1, c_2, c_3 do not depend on j.
Before we bound c 1 , c 2 , c 3 , let us see how these probabilities can be used to bound E H ( T 0 ) .
E_H(T_0) = Pr[e ⊆ T_0]
 ≥ Σ_{j=1}^{k} Pr[e ⊆ T_0 ∧ j(A, x) = j]
 ≥ Σ_{j=1}^{k} Pr[e ⊆ T_0 ∧ j(A, x) = j ∧ ¬G_1 ∧ ⋯ ∧ ¬G_{j−1}]
 = Σ_{j=1}^{k} Pr[e ⊆ T_0 ∣ j(A, x) = j ∧ ¬G_1 ∧ ⋯ ∧ ¬G_{j−1}] · Pr[j(A, x) = j ∧ ¬G_1 ∧ ⋯ ∧ ¬G_{j−1}]
 = (1 − c_3) Σ_{j=1}^{k} Pr[j(A, x) = j ∧ ¬G_1 ∧ ⋯ ∧ ¬G_{j−1}]
 = (1 − c_3) Σ_{j=1}^{k} Pr[j(A, x) = j] · Pr[¬G_1 ∧ ⋯ ∧ ¬G_{j−1} ∣ j(A, x) = j].   (1)
The probability that j ( A , x ) = j is in fact simply
Pr[j(A, x) = j] = Pr[j(A, x) = j ∣ j(A, x) > j − 1] · Π_{q=1}^{j−1} Pr[j(A, x) ≠ q ∣ j(A, x) > q − 1] = c_1 (1 − c_1)^{j−1}.
Moreover, Pr [ ¬ G 1 ¬ G j 1 j ( A , x ) = j ] can be written as
Pr[¬G_1 ∧ ⋯ ∧ ¬G_{j−1} ∣ j(A, x) = j] = Π_{q=1}^{j−1} Pr[¬G_q ∣ j(A, x) = j ∧ ¬G_1 ∧ ⋯ ∧ ¬G_{q−1}] = (1 − c_2)^{j−1}.
Plugging these two back into Equation (1), we have
E_H(T_0) ≥ (1 − c_3) Σ_{j=1}^{k} c_1 (1 − c_1)^{j−1} (1 − c_2)^{j−1}
 ≥ c_1 (1 − c_3) Σ_{j=1}^{k} (1 − c_1 − c_2)^{j−1}
 = c_1 (1 − c_3) · (1 − (1 − c_1 − c_2)^k) / (c_1 + c_2)
 = (1 − c_3)(1 − (1 − c_1 − c_2)^k) · 1/(1 + c_2/c_1)
 ≥ (1 − c_3)(1 − (1 − c_1)^k)(1 − c_2/c_1)
 ≥ 1 − c_3 − (1 − c_1)^k − c_2/c_1.   (2)
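As a purely numerical sanity check of the chain of inequalities above, the snippet below compares the exact sum against the final bound in Equation (2); the values of c_1, c_2, c_3 and k are arbitrary illustrative choices, not parameters of the reduction.

```python
import itertools

def exact_sum(c1, c2, c3, k):
    # (1 - c3) * sum_{j=1}^{k} c1 * ((1 - c1)(1 - c2))^(j-1)
    return (1 - c3) * sum(c1 * ((1 - c1) * (1 - c2)) ** (j - 1)
                          for j in range(1, k + 1))

def final_bound(c1, c2, c3, k):
    # 1 - c3 - (1 - c1)^k - c2/c1, the right-hand side of Equation (2)
    return 1 - c3 - (1 - c1) ** k - c2 / c1

# check over a small grid of illustrative values
for c1, c2, c3 in itertools.product([0.05, 0.2, 0.5], [0.01, 0.1], [0.1, 0.4]):
    for k in [1, 5, 20, 100]:
        assert exact_sum(c1, c2, c3, k) >= final_bound(c1, c2, c3, k) - 1e-12
```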
With Equation (2) in mind, we will proceed to bound c_1, c_2, c_3. Before we do so, let us state two inequalities that will be useful: for every i ∈ [R] and p ∈ [ℓ], we have
Pr[B̃^p_i ∈ S ∣ A_i ∉ S] ≤ 2ε_V δ + 2ηδ   (3)
and
Pr[B̃^p_i ∉ S ∣ A_i ∈ S] ≤ 2ε_V + η.   (4)
The first inequality comes from the fact that, for B̃^p_i to be in S when A_i ∉ S, at least one of the following events must occur: (1) A_i ≠ Ã^p_i and Ã^p_i ∈ S, (2) B^p_i ≠ B̃^p_i and B̃^p_i ∈ S, (3) (Ã^p_i, B̃^p_i) ∈ (V \ S) × S. Each of the first two occurs with probability at most ε_V δ, whereas the last event occurs with probability at most ηδ/(1 − δ) ≤ 2ηδ. On the other hand, for the second inequality, at least one of the following events must occur: (1) A_i ≠ Ã^p_i, (2) B^p_i ≠ B̃^p_i, (3) (Ã^p_i, B̃^p_i) ∈ S × (V \ S). Each of the first two occurs with probability at most ε_V, whereas the last event occurs with probability at most η.
Bounding c_1. To compute c_1, observe that Pr[j(A, x) = j ∣ j(A, x) > j − 1] is the probability that, for exactly one i in the j-th block, A_i ∈ S and x_i ≠ ⊥. For a fixed i, this happens with probability βδ. Hence, c_1 = (R/k) · βδ · (1 − βδ)^{R/k − 1}. Since δ ∈ [k/(10βR), k/(βR)], we can conclude that c_1 is simply a constant (i.e., c_1 ∈ [10^{−5}, 0.5]).
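To make the last claim concrete, the following snippet evaluates c_1 = m · p · (1 − p)^{m−1} with m = R/k and p = βδ; the sampled values of m and of mp ∈ [0.1, 1] (the range induced by δ ∈ [k/(10βR), k/(βR)]) are illustrative only.

```python
def c1_value(m, p):
    # c1 = (R/k) * (beta*delta) * (1 - beta*delta)^(R/k - 1),
    # written with m = R/k and p = beta*delta
    return m * p * (1 - p) ** (m - 1)

# delta in [k/(10*beta*R), k/(beta*R)] means m*p lies in [0.1, 1]
for m in [10, 100, 1000, 10**6]:
    for mp in [0.1, 0.3, 0.5, 1.0]:
        val = c1_value(m, mp / m)
        assert 1e-5 <= val <= 0.5  # c1 stays between absolute constants
```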
Bounding c_2. We next bound c_2. If j(A, x) > j, we know that |W(A, x, j)| ≠ 1. Let us consider the following two cases:
  • W(A, x, j) = ∅. Observe that, if G_j occurs, then there exist p ∈ [ℓ] and i ∈ {R(j−1)/k+1, …, Rj/k} such that B̃^p_i ∈ S, and x_i ≠ ⊥ or i ∈ D. For brevity, below we denote the conditional event j(A, x) > j ∧ ¬G_1 ∧ ⋯ ∧ ¬G_{j−1} ∧ W(A, x, j) = ∅ by E. By the union bound, our observation gives the following bound.
    Pr[G_j ∣ E] ≤ Σ_{i=R(j−1)/k+1}^{Rj/k} Pr[∃p ∈ [ℓ], B̃^p_i ∈ S ∧ (x_i ≠ ⊥ ∨ i ∈ D) ∣ E]
     = Σ_{i=R(j−1)/k+1}^{Rj/k} ( Pr[∃p ∈ [ℓ], B̃^p_i ∈ S ∧ x_i ≠ ⊥ ∣ E] + Pr[∃p ∈ [ℓ], B̃^p_i ∈ S ∧ x_i = ⊥ ∧ i ∈ D ∣ E] ).   (5)
    We can now bound the first term by
    Pr[∃p ∈ [ℓ], B̃^p_i ∈ S ∧ x_i ≠ ⊥ ∣ E]
     ≤ Pr[∃p ∈ [ℓ], B̃^p_i ∈ S ∣ x_i ≠ ⊥ ∧ E]
     ≤ Σ_{p=1}^{ℓ} Pr[B̃^p_i ∈ S ∣ x_i ≠ ⊥ ∧ E]
     = Σ_{p=1}^{ℓ} Pr[B̃^p_i ∈ S ∣ A_i ∉ S]   (since W(A, x, j) = ∅)
     ≤ ℓ(2ε_V δ + 2ηδ).   (from Equation (3))   (6)
    Consider the other term in Equation (5). We can rearrange it as follows.
    Pr[∃p ∈ [ℓ], B̃^p_i ∈ S ∧ x_i = ⊥ ∧ i ∈ D ∣ E]
     = ε_T · Pr[∃p ∈ [ℓ], B̃^p_i ∈ S ∧ x_i = ⊥ ∣ i ∈ D ∧ E]
     ≤ ε_T · Pr[∃p ∈ [ℓ], B̃^p_i ∈ S ∣ x_i = ⊥ ∧ i ∈ D ∧ E]
     ≤ ε_T · ( Pr[A_i ∈ S ∣ x_i = ⊥ ∧ i ∈ D ∧ E] + Pr[∃p ∈ [ℓ], B̃^p_i ∈ S ∣ A_i ∉ S ∧ x_i = ⊥ ∧ i ∈ D ∧ E] )
     = ε_T · ( Pr[A_i ∈ S ∣ x_i = ⊥] + Pr[∃p ∈ [ℓ], B̃^p_i ∈ S ∣ A_i ∉ S] )   (from Equation (3))
     ≤ ε_T δ + 2ℓε_V δ + 2ℓηδ.   (7)
    Combining Equations (5)–(7) and using δ ≤ k/(βR) (equivalently, R/k ≤ 1/(βδ)), we have
    Pr[G_j ∣ E] ≤ (1/(βδ)) · (ℓ(2ε_V δ + 2ηδ) + ε_T δ + 2ℓε_V δ + 2ℓηδ) ≤ O(ε_T/β) + O(ℓη/β) + O(ℓε_V/β).
  • |W(A, x, j)| > 1. Let i_1 and i_2 be two different (arbitrary) elements of W(A, x, j). Again, for convenience, we use E to denote the conditional event j(A, x) > j ∧ ¬G_1 ∧ ⋯ ∧ ¬G_{j−1} ∧ {i_1, i_2} ⊆ W(A, x, j). Now, let us first split Pr[G_j ∣ E] as follows.
    Pr[G_j ∣ E] ≤ Pr[i_1 ∈ D] + Pr[i_2 ∈ D] + Pr[G_j ∣ E ∧ i_1, i_2 ∉ D] = 2ε_T + Pr[G_j ∣ E ∧ i_1, i_2 ∉ D].   (8)
    Observe that, when i_1, i_2 ∉ D, we have x′_{i_1}, x′_{i_2} ≠ ⊥ for every x′ ∈ C_D(x). Hence, for G_j to occur, there must be p ∈ [ℓ] such that at least one of B̃^p_{i_1}, B̃^p_{i_2} is not in S. In other words,
    Pr[G_j ∣ E ∧ i_1, i_2 ∉ D] ≤ Pr[∃p ∈ [ℓ], B̃^p_{i_1} ∉ S ∨ B̃^p_{i_2} ∉ S ∣ E ∧ i_1, i_2 ∉ D]
     ≤ Σ_{p=1}^{ℓ} ( Pr[B̃^p_{i_1} ∉ S ∣ E ∧ i_1, i_2 ∉ D] + Pr[B̃^p_{i_2} ∉ S ∣ E ∧ i_1, i_2 ∉ D] )
     = Σ_{p=1}^{ℓ} ( Pr[B̃^p_{i_1} ∉ S ∣ A_{i_1} ∈ S] + Pr[B̃^p_{i_2} ∉ S ∣ A_{i_2} ∈ S] )   (from i_1, i_2 ∈ W(A, x, j))
     ≤ 2ℓ(2ε_V + η).   (from Equation (4))
    Combining this with Equation (8), we have Pr[G_j ∣ E] ≤ O(ε_T) + O(ℓη) + O(ℓε_V).
As a result, we can conclude that c_2 ≤ O(ε_T/β) + O(ℓη/β) + O(ℓε_V/β).
Bounding c_3. Finally, let us bound c_3. First, note that the probability that x_{i(A,x)} = 1 is 1/2 and the probability that i(A, x) ∈ D is ε_T. This means that
c_3 ≤ 1/2 + ε_T + Pr[e ⊈ T_0 ∣ E],
where E is the event x_{i(A,x)} = 0 ∧ i(A, x) ∉ D ∧ j(A, x) = j ∧ ¬G_1 ∧ ⋯ ∧ ¬G_{j−1}.
Moreover, since A_{i(A,x)} ∈ S, Equation (4) and the union bound give
Pr[∃p ∈ [ℓ], B̃^p_{i(A,x)} ∉ S ∣ E] ≤ ℓ(2ε_V + η).
From the above two inequalities, we have
c_3 ≤ 1/2 + ε_T + ℓ(2ε_V + η) + Pr[e ⊈ T_0 ∣ E ∧ ∀p ∈ [ℓ], B̃^p_{i(A,x)} ∈ S].
Conditioned on the above event, e ⊈ T_0 implies that there exist p ∈ [ℓ] and some i ≠ i(A, x) in this (j-th) block such that B̃^p_i ∈ S, and x_i ≠ ⊥ or i ∈ D. We bounded an almost identical probability in the case W(A, x, j) = ∅ when bounding c_2; the same argument gives an upper bound of O(ε_T/β) + O(ℓη/β) + O(ℓε_V/β) on this probability here. Hence,
c_3 ≤ 1/2 + O(ε_T/β) + O(ℓη/β) + O(ℓε_V/β).
By combining our bounds on c 1 , c 2 , c 3 with Equation (2), we immediately arrive at the desired bound:
E_H(T_0) ≥ 1/2 − O(ε_T/β) − O(ℓη/β) − O(ℓε_V/β) − 2^{−Ω(k)}.
 ☐

4.5. Soundness

Let us consider any set T ⊆ V_H such that μ_H(T) ≤ 1/2. We would like to give an upper bound on E_H(T). From how we define the hyperedges, we can assume w.l.o.g. that (A, x) ∈ T if and only if π(A′, x) ∈ T for every A′ ∈ M_x(A) and π ∈ Π_{R,k}. We call such a T Π_{R,k}-invariant.
Let f: V^R × Ω^R → {0, 1} denote the indicator function of T, i.e., f(A, x) = 1 if and only if (A, x) ∈ T. Note that E_{A∼V^R, x∼Ω^R} f(A, x) = μ_H(T) ≤ 1/2. Following notation from [11], we write f_A(x) as a shorthand for f(A, x). In addition, for each A ∈ V^R, we write B̃ ∼ Γ(A) as a shorthand for B̃ generated randomly by sampling Ã ∼ T_V(A), B ∼ G^R(Ã) and B̃ ∼ T_V(B) respectively. Let us restate Raghavendra et al.’s [11] lemma regarding the variance of E_x f_A(x) in a more convenient formulation below.
Lemma 4
([11] (Lemma 6.6)). For every A ∈ V^R, let μ_A ≜ E_{x∼Ω^R} f_A(x). We have
E_{A∼V^R} [ (E_{B̃∼Γ(A)} μ_{B̃} − μ_H(T))² ] ≤ √β.
Note that Lemma 6.6 in [11] requires a symmetrization of f’s, but we do not need it here since T is Π R , k -invariant.
To see how the above lemma helps us decode a UG assignment, observe that, if our test accepts on f_{B̃^1}, …, f_{B̃^ℓ}, x, D, then it also accepts on any subset of the functions (with the same x, D); hence, to apply Theorem 4, it suffices that t of the ℓ functions have means at most 0.99. We will choose ℓ to be large compared to t. Using the above lemma and a standard tail bound, we can argue that Theorem 4 is applicable for almost all tuples B̃^1, …, B̃^ℓ, as stated below.
Lemma 5.
For any positive integer t ≤ 0.01ℓ,
Pr_{A∼V^R, B̃^1,…,B̃^ℓ∼Γ(A)} [ |{i ∈ [ℓ] ∣ μ_{B̃^i} ≤ 0.99}| ≥ t ] ≥ 1 − 10√β − 2^{−ℓ/100}.
Proof. 
First, note that, since μ_H(T) ≤ 1/2, we can use Chebyshev’s inequality and Lemma 4 to arrive at the following bound, which is analogous to Lemma 6.7 in [11]:
Pr_{A∼V^R} [ E_{B̃∼Γ(A)} μ_{B̃} ≥ 0.9 ] ≤ 10√β.   (10)
Let us call A ∈ V^R bad if E_{B̃∼Γ(A)} μ_{B̃} ≥ 0.9, and call the rest of A ∈ V^R good.
For any good A ∈ V^R, Markov’s inequality implies that Pr_{B̃∼Γ(A)}[μ_{B̃} > 0.99] ≤ 0.9/0.99 < 0.95. As a result, an application of the Chernoff bound gives the following inequality.
Pr_{B̃^1,…,B̃^ℓ∼Γ(A)} [ |{i ∈ [ℓ] ∣ μ_{B̃^i} ≤ 0.99}| < t ∣ A is good ] ≤ 2^{−ℓ/100}.   (11)
Finally, observe that Equations (10) and (11) immediately yield the desired bound. ☐

4.5.1. Decoding a Unique Games Assignment

With Lemma 5 ready, we can now decode a UG assignment via a technique similar to that of [58].
Lemma 6.
For any ε_T, γ, β > 0, let t = t(ε_T, γ, β), κ = κ(ε_T, γ, β) and d = d(ε_T, γ, β) be as in Theorem 4. For any integer ℓ ≥ 100t, if there exists T ⊆ V_H such that μ_H(T) ≤ 1/2 and E_H(T) ≥ 2γ + 10√β + 2^{−ℓ/100}, then there exists F: V^R → [R] such that
Pr_{A∼V^R, B̃∼Γ(A), π_A,π_B∼Π_{R,k}} [ π_A^{−1}(F(π_A(Ã))) = π_B^{−1}(F(π_B(B̃))) ] ≥ γκ²/(4d²ℓ²).
Proof. 
The decoding procedure is as follows. For each A ∈ V^R, we construct a set of candidate labels Cand[A] ≜ {j ∈ [R] ∣ infl_j^d(f_A) ≥ κ}. We generate F randomly by, with probability 1/2, setting F(A) to be a random element of Cand[A] and, with probability 1/2, sampling B̃ ∼ Γ(A) and setting F(A) to be a random element of Cand[B̃]. Note that, if the candidate set in question is empty, then we simply pick an arbitrary label.
From our assumption that T is Π_{R,k}-invariant, it follows that, for every A ∈ V^R, π ∈ Π_{R,k} and j ∈ [R], Pr_F[π^{−1}(F(π(A))) = j] = Pr_F[F(A) = j]. In other words, we have
Pr_{F, A∼V^R, B̃∼Γ(A), π_A,π_B∼Π_{R,k}} [ π_A^{−1}(F(π_A(Ã))) = π_B^{−1}(F(π_B(B̃))) ] = Pr_{F, A∼V^R, B̃∼Γ(A)} [ F(Ã) = F(B̃) ].   (12)
Next, note that, from how our reduction is defined, E_H(T) can be written as
E_H(T) = Pr_{A∼V^R, B̃^1,…,B̃^ℓ∼Γ(A), x∼Ω^R, D∼S_{ε_T}(R)} [ Π_{i=1}^{ℓ} f_{B̃^i}(C_D(x)) = 1 ].
From E_H(T) ≥ 2γ + 10√β + 2^{−ℓ/100} and from Lemma 5, we can conclude that
Pr_{A, B̃^1,…,B̃^ℓ, x, D} [ Π_{i=1}^{ℓ} f_{B̃^i}(C_D(x)) = 1 ∧ |{i ∈ [ℓ] ∣ μ_{B̃^i} ≤ 0.99}| ≥ t ] ≥ 2γ.
From Markov’s inequality, we have
γ ≤ Pr_{A, B̃^1,…,B̃^ℓ} [ Pr_{x,D} [ Π_{i=1}^{ℓ} f_{B̃^i}(C_D(x)) = 1 ∧ |{i ∈ [ℓ] ∣ μ_{B̃^i} ≤ 0.99}| ≥ t ] ≥ γ ]
 = Pr_{A, B̃^1,…,B̃^ℓ} [ Pr_{x,D} [ Π_{i=1}^{ℓ} f_{B̃^i}(C_D(x)) = 1 ] ≥ γ ∧ |{i ∈ [ℓ] ∣ μ_{B̃^i} ≤ 0.99}| ≥ t ].
A tuple (A, B̃^1, …, B̃^ℓ) is said to be good if Pr_{x∼Ω^R, D∼S_{ε_T}(R)} [ Π_{i=1}^{ℓ} f_{B̃^i}(C_D(x)) = 1 ] ≥ γ and |{i ∈ [ℓ] ∣ μ_{B̃^i} ≤ 0.99}| ≥ t. For such a tuple, Theorem 4 implies that there exist i_1 ≠ i_2 ∈ [ℓ] and j ∈ [R] s.t. infl_j^d(f_{B̃^{i_1}}), infl_j^d(f_{B̃^{i_2}}) ≥ κ. This means that Cand(B̃^{i_1}) ∩ Cand(B̃^{i_2}) ≠ ∅.
Hence, if we sample a tuple (A, B̃^1, …, B̃^ℓ) at random, and then sample two different B̃′, B̃″ randomly from B̃^1, …, B̃^ℓ, then the tuple is good with probability at least γ and, with probability at least 1/ℓ², we have B̃′ = B̃^{i_1} and B̃″ = B̃^{i_2}. This gives the following bound:
Pr_{A, B̃′, B̃″} [ Cand(B̃′) ∩ Cand(B̃″) ≠ ∅ ] ≥ γ/ℓ².
Now, observe that B̃′ and B̃″ above are distributed in the same way as if we picked both of them independently from Γ(A). Recall that, with probability 1/2, F(A) is a random element of Cand(B̃′) where B̃′ ∼ Γ(A) and, with probability 1/2, F(B̃″) is a random element of Cand(B̃″). Moreover, since the sum of degree-d influences is at most d (Proposition 3.8 of [56]), the candidate sets are of size at most d/κ. As a result, the above bound yields
Pr_{A∼V^R, B̃∼Γ(A)} [ F(A) = F(B̃) ] ≥ γκ²/(4d²ℓ²),
which, together with Equation (12), concludes the proof of the lemma. ☐

4.5.2. Decoding a Small Non-Expanding Set

To relate our decoded UG assignment back to a small non-expanding set in G, we use the following lemma of [11], which roughly states that, with the right parameters, in the soundness case of the SSEH only a small fraction of constraints in the UG instance can be satisfied.
Lemma 7
([11] (Lemma 6.11)). If there exists F: V^R → [R] such that
Pr_{A∼V^R, B̃∼Γ(A), π_A,π_B∼Π_{R,k}} [ π_A^{−1}(F(π_A(Ã))) = π_B^{−1}(F(π_B(B̃))) ] ≥ ζ,
then there exists a set S ⊆ V with |S|/|V| ∈ [ζ/(16R), 3k/(ε_V R)] such that Φ(S) ≤ 1 − ζ/(16k).
By combining the above lemma with Lemma 6, we immediately arrive at the following:
Lemma 8.
For any ε_T, γ, β > 0, let t = t(ε_T, γ, β), κ = κ(ε_T, γ, β) and d = d(ε_T, γ, β) be as in Theorem 4. For any integer ℓ ≥ 100t and any ε_V > 0, if there exists T ⊆ V_H with μ_H(T) ≤ 1/2 such that E_H(T) ≥ 2γ + 10√β + 2^{−ℓ/100}, then there exists a set S ⊆ V with |S|/|V| ∈ [ζ/(16R), 3k/(ε_V R)] such that Φ(S) ≤ 1 − ζ/(16k), where ζ = γκ²/(4d²ℓ²).

4.6. Putting Things Together

We can now deduce inapproximability of MUCHB by simply picking appropriate parameters.
Proof of Lemma 1.
The parameters are chosen as follows:
  • Let β = (ε/30)², γ = ε/6, and k = Ω(log(1/ε)) so that the term 2^{−Ω(k)} in Lemma 3 is at most ε/4.
  • Let ε_T = O(βε) so that the error term O(ε_T/β) in Lemma 3 is at most ε/4.
  • Let t = t(ε_T, γ, β), κ = κ(ε_T, γ, β) and d = d(ε_T, γ, β) be as in Theorem 4.
  • Let ℓ = max{100t, 1000 log(1/ε)} and let ζ = γκ²/(4d²ℓ²) be as in Lemma 8.
  • Let ε_V = O(εβ/ℓ) so that the error term O(ℓε_V/β) in Lemma 3 is at most ε/4.
  • Let η = min{ζ/(32k), O(εβ/ℓ)} so that the error term O(ℓη/β) in Lemma 3 is at most ε/4.
  • Let M = max{16k/(βζ), 3β/ε_V}.
  • Finally, let R = k/(βδ) where δ = δ(η, M) is the parameter from the SSEH (Conjecture 2).
Let G = (V, E, w) be an instance of SSE(η, δ, M) and let H = (V_H, E_H) be the hypergraph resulting from our reduction. If there exists S ⊆ V of size δ|V| with expansion at most η, Lemma 3 implies that there is a bisection (T_0, T_1) of V_H such that E_H(T_0), E_H(T_1) ≥ 1/2 − ε.
As for the soundness, Lemma 8 with our choice of parameters implies that, if there exists a set T ⊆ V_H with μ_H(T) ≤ 1/2 and E_H(T) ≥ ε, then there exists S ⊆ V with |S| ∈ [δ|V|/M, δ|V|M] whose expansion is less than 1 − η. The contrapositive of this yields the soundness property. ☐

5. Conclusions

In this work, we prove essentially tight inapproximability of MEB, MBB, Minimum k-Cut and DALkS based on the SSEH. Our results, especially those for the biclique problems, further demonstrate the applications of the hypothesis, and particularly of the RST technique [11], in proving hardness of graph problems that involve some form of expansion. Given that the technique has been employed for only a handful of problems [11,59], an obvious but intriguing research direction is to try to apply it to other problems. One plausible candidate to this end is the 2-Catalog Segmentation Problem [60], since a natural candidate reduction for this problem fails due to a counterexample similar to that in Section 4.2.
Another interesting question is to derandomize the graph product used in the gap amplification step for the biclique problems. For Maximum Clique, this step has been derandomized before [17,61]; in particular, Zuckerman [17] derandomized Håstad’s result [4] to achieve n^{1−ε}-ratio NP-hardness of approximating Maximum Clique. Without going into too much detail, we note that Zuckerman’s result is based on a construction of dispersers with certain parameters; properties of dispersers then imply soundness of the reduction, whereas completeness follows trivially from the construction since Håstad’s PCP has perfect completeness. Unfortunately, our PCP does not have perfect completeness and, in order to use Zuckerman’s approach, additional properties are required to argue about the completeness of the reduction.

Acknowledgments

I am grateful to Prasad Raghavendra for providing his insights on the Small Set Expansion problem and techniques developed in [10,11] and Luca Trevisan for lending his expertise in PCPs with small free bits. I would also like to thank Haris Angelidakis for useful discussions on Minimum k-Cut and Daniel Reichman for inspiring me to work on the two biclique problems. Furthermore, I thank ICALP 2017 anonymous reviewers for their useful comments and, more specifically, for pointing me to [22]. Finally, I wish to thank Aditya Bhaskara for pointing me to Bergner’s thesis [41] and Lilli Bergner for providing me with a copy of the thesis. This material is based upon work supported by the National Science Foundation under Grants No. CCF 1540685 and CCF 1655215.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Reduction from MUCHB to Biclique Problems

Proof of Lemma 2 from Lemma 1.
The reductions from MUCHB to the two biclique problems are simple. Given a hypergraph H = (V_H, E_H), we create a bipartite graph G = (L, R, E) by letting L = R = E_H and creating an edge (e_1, e_2) for e_1 ∈ L, e_2 ∈ R iff e_1 ∩ e_2 = ∅.
Completeness. If there is a bisection (T_0, T_1) of V_H such that |E_H(T_0)|, |E_H(T_1)| ≥ (1/2 − ε)|E_H|, then E_H(T_0) ⊆ L and E_H(T_1) ⊆ R induce a complete bipartite subgraph of G with at least (1/2 − ε)n vertices on each side.
Soundness. Suppose that, for every T ⊆ V_H of size at most |V_H|/2, we have |E_H(T)| ≤ εn. We will show that G does not contain K_{εn+1, εn+1} as a subgraph. Suppose for the sake of contradiction that it does, and consider one such subgraph; let e_1, …, e_{εn+1} ∈ L and e′_1, …, e′_{εn+1} ∈ R be the vertices of the subgraph. From how the edges are defined, T_0 ≜ e_1 ∪ ⋯ ∪ e_{εn+1} and T_1 ≜ e′_1 ∪ ⋯ ∪ e′_{εn+1} are disjoint. At least one of the two sets is of size at most |V_H|/2; assume without loss of generality that |T_0| ≤ |V_H|/2. This is a contradiction, since T_0 contains at least εn + 1 hyperedges, namely e_1, …, e_{εn+1}.
By picking ε = δ / 2 and n > 2 / ε , we arrive at the desired hardness result. ☐
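As a concrete illustration of the reduction in the proof above, the sketch below builds the bipartite graph from a toy hypergraph; the hypergraph here is a hypothetical example, not an actual MUCHB instance.

```python
from itertools import product

def biclique_instance(hyperedges):
    # L = R = E_H; (e1, e2) is an edge iff the two hyperedges are disjoint
    left = right = list(hyperedges)
    edges = {(a, b) for a, b in product(range(len(left)), range(len(right)))
             if left[a].isdisjoint(right[b])}
    return left, right, edges

# toy hypergraph whose hyperedges respect the bisection ({1,2,3}, {4,5,6})
H_edges = [frozenset({1, 2}), frozenset({2, 3}),
           frozenset({4, 5}), frozenset({5, 6})]
L, R, E = biclique_instance(H_edges)
# hyperedges inside T0 = {1,2,3} and inside T1 = {4,5,6} form a biclique
assert all((a, b) in E for a in [0, 1] for b in [2, 3])
```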

Appendix B. Gap Amplification via Randomized Graph Product

In this section, we provide a full proof of the gap amplification step for the biclique problems, thereby proving Theorem 1. The argument below is almost the same as Khot’s randomized graph product-based analysis (Appendix D of [20]), which is in turn based on the analysis of Berman and Schnitger [29], except that we modify the construction slightly so that the reduction runs in polynomial time. (Using Khot’s result directly would result in quasi-polynomial construction time.)
Specifically, we will prove the following statement, which immediately implies Theorem 1.
Lemma A1.
Assuming SSEH and NP ≠ BPP, for every ε > 0 , no polynomial time algorithm can, given a bipartite graph G = ( L , R , E ) with | L | = | R | = N , distinguish between the following two cases:
  • (Completeness) G contains K_{N^{1−ε}, N^{1−ε}} as a subgraph.
  • (Soundness) G does not contain K_{N^ε, N^ε} as a subgraph.
Proof. 
Let ε > 0 be any constant. Assume without loss of generality that ε < 1. Consider any bipartite graph G = (L, R, E) with |L| = |R| = n. Let δ = 2^{−4/ε}, k = log n and N = (1/δ)^k. We construct a graph G′ = (L′, R′, E′) with |L′| = |R′| = N as follows. For i ∈ [N], pick random elements U_i ∈ L^k and V_i ∈ R^k, and add them to L′ and R′ respectively. Finally, for every U ∈ L′ and V ∈ R′, there is an edge between U and V in G′ if and only if there is an edge in the original graph G between U_{j_1} and V_{j_2} for every j_1, j_2 ∈ [k].
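The construction of G′ can be sketched in code as follows; the parameters and the toy graph below are illustrative and much smaller than the δ, k, N prescribed by the proof.

```python
import random

def randomized_graph_product(n, edges, k, N, rng):
    # Sample N random k-tuples per side; join two tuples iff every
    # cross pair is an edge of the original bipartite graph.
    L2 = [tuple(rng.randrange(n) for _ in range(k)) for _ in range(N)]
    R2 = [tuple(rng.randrange(n) for _ in range(k)) for _ in range(N)]
    E2 = {(a, b) for a in range(N) for b in range(N)
          if all((L2[a][j1], R2[b][j2]) in edges
                 for j1 in range(k) for j2 in range(k))}
    return L2, R2, E2

# toy graph on n = 4 vertices per side with a biclique between S = {0, 1}
# on the left and T = {0, 1} on the right
toy_edges = {(u, v) for u in (0, 1) for v in (0, 1)}
L2, R2, E2 = randomized_graph_product(4, toy_edges, k=2, N=50,
                                      rng=random.Random(0))
# tuples inside S^k and T^k must induce a biclique in the product graph
for a, U in enumerate(L2):
    for b, V in enumerate(R2):
        if set(U) <= {0, 1} and set(V) <= {0, 1}:
            assert (a, b) in E2
```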
Completeness. Suppose that the original graph G contains K_{(1/2−δ)n, (1/2−δ)n} as a subgraph. Let one such subgraph be (S, T) where S ⊆ L and T ⊆ R. Observe that (L′ ∩ S^k, R′ ∩ T^k) induces a biclique in G′. For each i ∈ [N], note that U_i ∈ S^k with probability (1/2 − δ)^k ≥ (1/4)^k = N^{−ε/2}, independently of the others. As a result, from the Chernoff bound, |L′ ∩ S^k| ≥ N^{1−ε} with high probability. Similarly, we also have |R′ ∩ T^k| ≥ N^{1−ε} with high probability. Thus, G′ contains K_{N^{1−ε}, N^{1−ε}} as a subgraph with high probability.
Soundness. Suppose that G does not contain K_{δn, δn} as a subgraph. We will show that, with high probability, G′ does not contain K_{N^ε, N^ε} as a subgraph. To do so, we will first prove the following proposition.
Proposition A1.
For any set A, let P(A) denote the power set of A. Moreover, let F: P(L^k ∪ R^k) → P(L ∪ R) be the “flattening” operation defined by F(A) ≜ ∪_{U∈A} {U_i ∣ i ∈ [k]}. Then, with high probability, we have |F(S)| ≥ δn for every subset S ⊆ L′ of size N^ε and |F(T)| ≥ δn for every subset T ⊆ R′ of size N^ε.
Proof of Proposition A1.
Let us consider the probability that there exists a set S ⊆ L′ of size N^ε such that |F(S)| < δn. This can be bounded as follows.
Pr[∃S ⊆ L′, |S| = N^ε ∧ |F(S)| < δn] ≤ Pr[∃S ⊆ L, |S| < δn ∧ |S^k ∩ L′| ≥ N^ε]
 ≤ Σ_{S⊆L, |S|<δn} Pr[|S^k ∩ L′| ≥ N^ε]
 = Σ_{S⊆L, |S|<δn} Pr[Σ_{i∈[N]} 𝟙[U_i ∈ S^k] ≥ N^ε].
Observe that, for each i ∈ [N], 𝟙[U_i ∈ S^k] is simply an independent Bernoulli random variable with mean (|S|/n)^k < δ^k = 1/N. Hence, by the Chernoff bound, we have
Pr[∃S ⊆ L′, |S| = N^ε ∧ |F(S)| < δn] ≤ Σ_{S⊆L, |S|<δn} 2^{−Ω(N^ε)} ≤ Σ_{S⊆L, |S|<δn} 2^{−Ω(n²)} ≤ 2^{n−Ω(n²)} = 2^{−Ω(n²)}
as desired.
Analogously, with high probability, |F(T)| ≥ δn for every subset T ⊆ R′ of size N^ε, thereby concluding the proof of Proposition A1. ☐
With Proposition A1 ready, let us proceed with our soundness proof. Suppose that the event in Proposition A1 occurs. Consider any subset S ⊆ L′ of size N^ε and any subset T ⊆ R′ of size N^ε. Since |F(S)| ≥ δn, |F(T)| ≥ δn and G does not contain K_{δn, δn} as a subgraph, there exist u ∈ F(S) and v ∈ F(T) such that (u, v) ∉ E. From the definition of G′, this implies that S and T do not induce a biclique in G′. As a result, G′ does not contain K_{N^ε, N^ε} as a subgraph whenever the event in Proposition A1 occurs, which happens with high probability; this concludes our soundness argument.
Since Lemma 2 asserts that distinguishing between the two cases above is NP-hard (assuming the SSEH) and the above reduction takes polynomial time, we can conclude that, assuming the SSEH and NP ≠ BPP, no polynomial time algorithm can distinguish between the two cases stated in the lemma. ☐

Appendix C. Comparison Between SSEH and Strong UGC

In this section, we briefly discuss the similarities and differences between the classical Unique Games Conjecture [7], the Small Set Expansion Hypothesis [10] and the Strong Unique Games Conjecture [58]. Let us start by stating the Unique Games Conjecture, proposed by Khot in his influential work [7]:
Conjecture A1
(Unique Games Conjecture (UGC) [7]). For every ε, η > 0, there exists R = R(ε, η) such that, given a UG instance (G = (V, E, W), [R], {π_e}_{e∈E}) such that G is regular, it is NP-hard to distinguish between the following two cases:
  • (Completeness) There exists an assignment F: V → [R] such that val_U(F) ≥ 1 − ε.
  • (Soundness) For every assignment F: V → [R], val_U(F) ≤ η.
In other words, Khot’s UGC states that it is NP-hard to distinguish a UG instance that is almost satisfiable from one in which only a small fraction of the edges can be satisfied. While the SSEH as stated in Conjecture 1 is not directly a statement about UG instances, it has a strong connection to the UGC. Raghavendra and Steurer [10], in the same work in which they proposed the conjecture, observed that the SSEH is implied by a variant of the UGC in which the soundness is strengthened so that the constraint graph is also required to be a small-set expander (i.e., every small set has near-perfect edge expansion). In a subsequent work, Raghavendra, Steurer and Tulsiani [11] showed that the two conjectures are in fact equivalent. More formally, the following variant of the UGC is equivalent to the SSEH:
Conjecture A2
(UGC with Small-Set Expansion (UGC with SSE) [10]). For every ε, η > 0, there exist δ = δ(ε) > 0 and R = R(ε, η) such that, given a UG instance (G = (V, E, W), [R], {π_e}_{e∈E}) such that G is regular, it is NP-hard to distinguish between the following two cases:
  • (Completeness) There exists an assignment F: V → [R] such that val_U(F) ≥ 1 − ε.
  • (Soundness) For every assignment F: V → [R], val_U(F) ≤ η. Moreover, G satisfies Φ(S) ≥ 1 − ε for every S ⊆ V of size δn.
While our result is based on SSEH (which is equivalent to UGC with SSE), Bhangale et al. [22] relies on another strengthened version of the UGC, which requires the following additional properties:
  • There is not only an assignment that satisfies almost all constraints, but also a partial assignment to almost the whole graph such that every constraint between two assigned vertices is satisfied.
  • The graph in the soundness case has to satisfy the following vertex expansion property: for every not too small subset of V , its neighborhood spans almost the whole graph.
More formally, the conjecture can be stated as follows.
Conjecture A3
(Strong UGC (SUGC) [58]). For every ε, η, δ > 0, there exists R = R(ε, η, δ) such that, given a UG instance (G = (V, E, W), [R], {π_e}_{e∈E}) such that G is regular, it is NP-hard to distinguish between the following two cases:
  • (Completeness) There exists a subset S ⊆ V of size at least (1 − ε)|V| and a partial assignment F: S → [R] such that every edge inside S is satisfied.
  • (Soundness) For every assignment F: V → [R], val_U(F) ≤ η. Moreover, G satisfies |Γ(S)| ≥ (1 − δ)|V| for every S ⊆ V of size δn, where Γ(S) denotes the set of all neighbors of S.
The conjecture was first formulated by Bansal and Khot [58]. We note here that the name “Strong UGC” was not given by Bansal and Khot, but was coined by Bhangale et al. [22]. In fact, the name “Strong UGC” was used earlier by Khot and Regev [8] to denote a different variant of UGC, in which the completeness is strengthened to be the same as in Conjecture A3 but the soundness does not include the vertex expansion property. Interestingly, this variant of UGC is equivalent to the original version of the conjecture [8]. Moreover, as pointed out in [58], it is not hard to see that the soundness property of SUGC can also be achieved by simply adding a complete graph with negligible weight to the constraint graph. In other words, both the completeness and soundness properties of SUGC can be achieved separately. However, it is not known whether SUGC is implied by UGC.
To the best of our knowledge, it is not known whether one of Conjectures A2 and A3 implies the other. In particular, while the soundness cases of both conjectures require certain expansion properties of the graphs, Conjecture A2 deals with edge expansion whereas Conjecture A3 deals with vertex expansion; even though these notions are closely related, they do not imply each other. Moreover, as pointed out earlier, the completeness property of the SUGC is stronger than that of UGC with SSE; we are not aware of any reduction from SSE to UG that achieves this while maintaining the same soundness as in Conjecture A2.
Finally, we note that both the soundness and completeness properties of SUGC are crucial for Bhangale et al.’s reduction [22]; hence, it is unlikely that their technique applies to SSEH. Conversely, our reduction relies crucially on the edge expansion properties of the graph and is thus unlikely to be applicable to SUGC.

References

  1. Arora, S.; Lund, C.; Motwani, R.; Sudan, M.; Szegedy, M. Proof Verification and the Hardness of Approximation Problems. J. ACM 1998, 45, 501–555.
  2. Arora, S.; Safra, S. Probabilistic Checking of Proofs: A New Characterization of NP. J. ACM 1998, 45, 70–122.
  3. Håstad, J. Some Optimal Inapproximability Results. J. ACM 2001, 48, 798–859.
  4. Håstad, J. Clique is Hard to Approximate within n^{1−ε}; Springer: Berlin/Heidelberg, Germany, 1996; pp. 627–636.
  5. Moshkovitz, D. The Projection Games Conjecture and the NP-Hardness of ln n-Approximating Set-Cover. Theory Comput. 2015, 11, 221–235.
  6. Dinur, I.; Steurer, D. Analytical approach to parallel repetition. In Proceedings of the STOC ’14, Forty-Sixth Annual ACM Symposium on Theory of Computing, New York, NY, USA, 31 May–3 June 2014; pp. 624–633.
  7. Khot, S. On the Power of Unique 2-prover 1-round Games. In Proceedings of the STOC ’02, Thirty-Fourth Annual ACM Symposium on Theory of Computing, Montreal, QC, Canada, 19–21 May 2002; pp. 767–775.
  8. Khot, S.; Regev, O. Vertex cover might be hard to approximate to within 2 − ε. J. Comput. Syst. Sci. 2008, 74, 335–349.
  9. Khot, S.; Kindler, G.; Mossel, E.; O’Donnell, R. Optimal Inapproximability Results for MAX-CUT and Other 2-Variable CSPs? SIAM J. Comput. 2007, 37, 319–357.
  10. Raghavendra, P.; Steurer, D. Graph Expansion and the Unique Games Conjecture. In Proceedings of the STOC ’10, Forty-Second ACM Symposium on Theory of Computing, Cambridge, MA, USA, 5–8 June 2010; pp. 755–764.
  11. Raghavendra, P.; Steurer, D.; Tulsiani, M. Reductions Between Expansion Problems. In Proceedings of the CCC ’12, 27th Conference on Computational Complexity, Porto, Portugal, 26–29 June 2012; pp. 64–73.
  12. Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness; W. H. Freeman & Co.: New York, NY, USA, 1979.
  13. Johnson, D.S. The NP-completeness Column: An Ongoing Guide. J. Algorithms 1987, 8, 438–448.
  14. Peeters, R. The Maximum Edge Biclique Problem is NP-complete. Discrete Appl. Math. 2003, 131, 651–654.
  15. Khot, S. Improved Inapproximability Results for MaxClique, Chromatic Number and Approximate Graph Coloring. In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science, Las Vegas, NV, USA, 14–17 October 2001; pp. 600–609.
  16. Khot, S.; Ponnuswami, A.K. Better Inapproximability Results for MaxClique, Chromatic Number and Min-3Lin-Deletion. In Proceedings of the International Colloquium on Automata, Languages, and Programming, Venice, Italy, 10–14 July 2006; pp. 226–237.
  17. Zuckerman, D. Linear Degree Extractors and the Inapproximability of Max Clique and Chromatic Number. Theory Comput. 2007, 3, 103–128.
  18. Feige, U. Relations Between Average Case Complexity and Approximation Complexity. In Proceedings of the STOC ’02, Thirty-Fourth Annual ACM Symposium on Theory of Computing, Montreal, QC, Canada, 19–21 May 2002; pp. 534–543.
  19. Feige, U.; Kogan, S. Hardness of Approximation of the Balanced Complete Bipartite Subgraph Problem; Technical Report; Weizmann Institute of Science: Rehovot, Israel, 2004.
  20. Khot, S. Ruling Out PTAS for Graph Min-Bisection, Dense k-Subgraph, and Bipartite Clique. SIAM J. Comput. 2006, 36, 1025–1071.
  21. Ambühl, C.; Mastrolilli, M.; Svensson, O. Inapproximability Results for Maximum Edge Biclique, Minimum Linear Arrangement, and Sparsest Cut. SIAM J. Comput. 2011, 40, 567–596.
  22. Bhangale, A.; Gandhi, R.; Hajiaghayi, M.T.; Khandekar, R.; Kortsarz, G. Bicovering: Covering Edges With Two Small Subsets of Vertices. SIAM J. Discrete Math. 2016, 31, 2626–2646.
  23. Manurangsi, P. Almost-Polynomial Ratio ETH-Hardness of Approximating Densest k-Subgraph. In Proceedings of the STOC ’17, 49th Annual ACM SIGACT Symposium on Theory of Computing, Montreal, QC, Canada, 19–23 June 2017; pp. 954–961.
  24. Impagliazzo, R.; Paturi, R.; Zane, F. Which Problems Have Strongly Exponential Complexity? J. Comput. Syst. Sci. 2001, 63, 512–530.
  25. Dinur, I. Mildly exponential reduction from gap 3SAT to polynomial-gap label-cover. ECCC Electron. Colloq. Comput. Complex. 2016, 23, 128.
  26. Manurangsi, P.; Raghavendra, P. A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs. In Proceedings of the ICALP ’17, 44th International Colloquium on Automata, Languages, and Programming, Warsaw, Poland, 10–14 July 2017; pp. 78:1–78:15.
  27. Jerrum, M. Large Cliques Elude the Metropolis Process. Random Struct. Algorithms 1992, 3, 347–359.
  28. Kučera, L. Expected Complexity of Graph Partitioning Problems. Discrete Appl. Math. 1995, 57, 193–212.
  29. Berman, P.; Schnitger, G. On the Complexity of Approximating the Independent Set Problem. Inf. Comput. 1992, 96, 77–94.
  30. Blum, A. Algorithms for Approximate Graph Coloring. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1991.
  31. Goldschmidt, O.; Hochbaum, D.S. A polynomial algorithm for the k-cut problem for fixed k. Math. Oper. Res. 1994, 19, 24–37.
  32. Saran, H.; Vazirani, V.V. Finding k Cuts Within Twice the Optimal. SIAM J. Comput. 1995, 24, 101–108.
  33. Naor, J.S.; Rabani, Y. Tree Packing and Approximating k-cuts. In Proceedings of the SODA ’01, Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms, Washington, DC, USA, 7–9 January 2001; pp. 26–27.
  34. Zhao, L.; Nagamochi, H.; Ibaraki, T. Approximating the minimum k-way cut in a graph via minimum 3-way cuts. J. Comb. Optim. 2001, 5, 397–410.
  35. Ravi, R.; Sinha, A. Approximating k-cuts via Network Strength. In Proceedings of the SODA ’02, Thirteenth Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, CA, USA, 6–8 January 2002; pp. 621–622.
  36. Xiao, M.; Cai, L.; Yao, A.C.C. Tight approximation ratio of a general greedy splitting algorithm for the minimum k-way cut problem. Algorithmica 2011, 59, 510–520.
  37. Gupta, A.; Lee, E.; Li, J. An FPT Algorithm Beating 2-Approximation for k-Cut. arXiv 2018, arXiv:1710.08488.
  38. Andersen, R.; Chellapilla, K. Finding Dense Subgraphs with Size Bounds. In Proceedings of the International Workshop on Algorithms and Models for the Web-Graph 2009, Barcelona, Spain, 12–13 February 2009; pp. 25–37.
  39. Andersen, R. Finding large and small dense subgraphs. arXiv 2007, arXiv:cs/0702032.
  40. Khuller, S.; Saha, B. On Finding Dense Subgraphs. In Proceedings of the International Colloquium on Automata, Languages, and Programming, Rhodes, Greece, 5–12 July 2009; pp. 597–608.
  41. Bergner, L. Small Set Expansion. Master’s Thesis, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 2013.
  42. Kortsarz, G.; Peleg, D. On Choosing a Dense Subgraph (Extended Abstract). In Proceedings of the 34th Annual Symposium on Foundations of Computer Science, Palo Alto, CA, USA, 3–5 November 1993; pp. 692–701.
  43. Feige, U.; Seltser, M. On the Densest k-Subgraph Problem; Technical Report; Weizmann Institute of Science: Rehovot, Israel, 1997.
  44. Srivastav, A.; Wolf, K. Finding Dense Subgraphs with Semidefinite Programming. In Proceedings of the International Workshop on Approximation Algorithms for Combinatorial Optimization, Aalborg, Denmark, 18–19 July 1998; pp. 181–191.
  45. Feige, U.; Langberg, M. Approximation Algorithms for Maximization Problems Arising in Graph Partitioning. J. Algorithms 2001, 41, 174–211.
  46. Feige, U.; Kortsarz, G.; Peleg, D. The Dense k-Subgraph Problem. Algorithmica 2001, 29, 410–421.
  47. Asahiro, Y.; Hassin, R.; Iwama, K. Complexity of finding dense subgraphs. Discrete Appl. Math. 2002, 121, 15–26.
  48. Goldstein, D.; Langberg, M. The Dense k Subgraph problem. arXiv 2009, arXiv:0912.5327.
  49. Bhaskara, A.; Charikar, M.; Chlamtac, E.; Feige, U.; Vijayaraghavan, A. Detecting high log-densities: An O(n^{1/4}) approximation for densest k-subgraph. In Proceedings of the STOC ’10, 42nd ACM Symposium on Theory of Computing, Cambridge, MA, USA, 5–8 June 2010; pp. 201–210.
  50. Alon, N.; Arora, S.; Manokaran, R.; Moshkovitz, D.; Weinstein, O. Inapproximability of Densest k-Subgraph from Average Case Hardness. Unpublished Manuscript. 2018.
  51. Bhaskara, A.; Charikar, M.; Vijayaraghavan, A.; Guruswami, V.; Zhou, Y. Polynomial Integrality Gaps for Strong SDP Relaxations of Densest k-subgraph. In Proceedings of the SODA ’12, Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, Kyoto, Japan, 17–19 January 2012; pp. 388–405.
  52. Braverman, M.; Ko, Y.K.; Rubinstein, A.; Weinstein, O. ETH Hardness for Densest-k-Subgraph with Perfect Completeness. In Proceedings of the SODA ’17, Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, Barcelona, Spain, 16–19 January 2017; pp. 1326–1341.
  53. Manurangsi, P. On Approximating Projection Games. Master’s Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2015.
  54. Chlamtác, E.; Manurangsi, P.; Moshkovitz, D.; Vijayaraghavan, A. Approximation Algorithms for Label Cover and The Log-Density Threshold. In Proceedings of the SODA ’17, Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, Barcelona, Spain, 16–19 January 2017; pp. 900–919.
  55. O’Donnell, R. Analysis of Boolean Functions; Cambridge University Press: Cambridge, UK, 2014.
  56. Mossel, E.; O’Donnell, R.; Oleszkiewicz, K. Noise stability of functions with low influences: Invariance and optimality. Ann. Math. 2010, 171, 295–341.
  57. Svensson, O. Hardness of Vertex Deletion and Project Scheduling. Theory Comput. 2013, 9, 759–781.
  58. Bansal, N.; Khot, S. Optimal Long Code Test with One Free Bit. In Proceedings of the FOCS ’09, 50th Annual IEEE Symposium on Foundations of Computer Science, Atlanta, GA, USA, 25–27 October 2009; pp. 453–462.
  59. Louis, A.; Raghavendra, P.; Vempala, S. The Complexity of Approximating Vertex Expansion. In Proceedings of the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS), Berkeley, CA, USA, 26–29 October 2013; pp. 360–369.
  60. Kleinberg, J.; Papadimitriou, C.; Raghavan, P. Segmentation Problems. J. ACM 2004, 51, 263–280.
  61. Alon, N.; Feige, U.; Wigderson, A.; Zuckerman, D. Derandomized Graph Products. Comput. Complex. 1995, 5, 60–75.
Figure 1. A Candidate Reduction from UG to MUCHB.
Figure 2. Reduction from SSE to Max UnCut Hypergraph Bisection.

Manurangsi, P. Inapproximability of Maximum Biclique Problems, Minimum k-Cut and Densest At-Least-k-Subgraph from the Small Set Expansion Hypothesis. Algorithms 2018, 11, 10. https://doi.org/10.3390/a11010010