
A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms

1 Department of Applied Mathematics (KAM), Charles University, 118 00 Prague, Czech Republic
2 Department of Computer Science, Tel Aviv University, Tel Aviv 6997801, Israel
3 Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
4 Google Research, Mountain View, CA 94043, USA
* Authors to whom correspondence should be addressed.
Algorithms 2020, 13(6), 146; https://doi.org/10.3390/a13060146
Submission received: 1 October 2019 / Revised: 31 May 2020 / Accepted: 2 June 2020 / Published: 19 June 2020
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)

Abstract

Parameterization and approximation are two popular ways of coping with NP-hard problems. More recently, the two have also been combined to derive many interesting results. We survey developments in the area both from the algorithmic and hardness perspectives, with emphasis on new techniques and potential future research directions.

1. Introduction

In their seminal papers of the mid-1960s, Cobham [1] and Edmonds [2] independently phrased what is now known as the Cobham–Edmonds thesis. It states that an optimization problem is feasibly solvable if it admits an algorithm with the following two properties:
  • Accuracy: the algorithm should always compute the best possible (optimum) solution.
  • Efficiency: the runtime of the algorithm should be polynomial in the input size n.
Shortly after the Cobham–Edmonds thesis was formulated, the development of the theory of NP-hardness and reducibility identified a whole plethora of problems that are seemingly intractable, i.e., for which algorithms with the above two properties do not seem to exist. Even though the reasons for this phenomenon remain elusive to this day, this has not hindered the development of algorithms for such problems. To obtain an algorithm for an NP-hard problem, at least one of the two properties demanded by the Cobham–Edmonds thesis needs to be relaxed. Ideally, the properties are relaxed as little as possible, in order to stay close to the notion of feasible solvability suggested by the thesis.
A very common approach is to relax the accuracy condition, which means aiming for approximation algorithms [3,4]. The idea here is to use only polynomial time to compute an α-approximation, i.e., a solution that is at most a factor α worse than the optimum solution obtainable for the given input instance. Such an algorithm may also be randomized, i.e., there is either a high probability that the output is an α-approximation, or the runtime is polynomial in expectation.
In a different direction, several relaxations of the efficiency condition have also been proposed. Popular among these is the notion of parameterized algorithms [5,6]. Here the input comes together with some parameter k ∈ ℕ, which describes some property of the input and can be expected to be small in typical applications. The idea is to isolate the seemingly necessary exponential runtime of NP-hard problems to the parameter, while the runtime dependence on the input size n remains polynomial. In particular, the algorithm should compute the optimum solution in f(k) · n^{O(1)} time, for some computable function f: ℕ → ℕ independent of the input size n. If such an algorithm exists for a problem, the problem is fixed-parameter tractable (FPT), and the algorithm is correspondingly referred to as an FPT algorithm.
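As a concrete illustration of this runtime shape (our own sketch, not taken from the survey), consider the classical bounded search tree algorithm for Vertex Cover parameterized by the solution size k, which decides in O(2^k · m) time whether a graph with m edges has a vertex cover of size at most k:

# Our own illustrative sketch: the classical bounded search tree for
# Vertex Cover, an FPT algorithm with runtime O(2^k * m).

def has_vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover of size <= k."""
    if not edges:
        return True  # every edge is covered
    if k == 0:
        return False  # an uncovered edge remains, but no budget is left
    u, v = edges[0]
    # Branch: any vertex cover must contain u or v. Each branch removes one
    # vertex and decrements k, so the search tree has at most 2^k leaves.
    for w in (u, v):
        if has_vertex_cover([e for e in edges if w not in e], k - 1):
            return True
    return False

# Example: a 4-cycle has a vertex cover of size 2 but not of size 1.
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
assert has_vertex_cover(cycle, 2) and not has_vertex_cover(cycle, 1)

Here accuracy is fully retained and only efficiency is relaxed: the exponential part of the runtime is isolated in the parameter k, exactly as in the definition above.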
Approximation and FPT algorithms have been studied extensively for the past few decades, and this has led to a rich literature on algorithmic techniques and deep links to other research fields within mathematics. However, in this process the limitations of these approaches have also become apparent. Some NP-hard problems can fairly be considered feasibly solvable in the respective regimes, as they admit polynomial-time algorithms with rather small approximation factors, or can be shown to be solvable optimally with only a fairly small exponential runtime overhead due to the parameter. However, many problems can also be shown not to admit any reasonable algorithms in either of these regimes, under standard complexity assumptions. Thus, considering only approximation and FPT algorithms, as has mostly been done in the past, we are seemingly stuck in a swamp of problems for which we have substantial evidence that they cannot be feasibly solved.
To find a way out of this dilemma, an obvious possibility is to relax both the accuracy and the efficiency requirements of the Cobham–Edmonds thesis. In this way, we obtain a parameterized α-approximation algorithm, which computes an α-approximation in f(k) · n^{O(1)} time for some computable function f, given an input of size n with parameter k. The study of such algorithms was suggested as far back as the early days of parameterized complexity (cf. [5,7,8,9]), and we refer the readers to the excellent survey of Marx [8] for discussions of the earlier results in the area.
Recently this approach has received increased interest, with many new results obtained in the past few years, both in terms of algorithms and hardness of approximation. The aim of this survey is to give an overview of some of these newer results. We would like to caution the readers that the goal of this survey is not to compile all known results in the field, but rather to give examples that demonstrate the flavor of the questions studied, the techniques used to obtain them, and some potential future research directions. Finally, we remark that on the broader theme of approximation in P, an excellent survey was recently made available by Rubinstein and Williams [10], focusing on the approximability of popular problems in P that admit simple quadratic/cubic algorithms.

Organization of the Survey

The main body of the survey is organized into two sections: one on FPT hardness of approximation (Section 3) and the other on FPT approximation algorithms (Section 4). Before these two main sections, we list some notation and preliminaries in Section 2. Finally, in Section 5, we highlight some open questions and future directions (although many open problems are also detailed throughout Sections 3 and 4).

2. Preliminaries

In this section, we review several notions that will appear regularly throughout the survey. However, we do not include definitions of basic concepts such as W-hardness, para-NP-hardness, APX-hardness, and so forth; the interested reader may refer to [3,4,5,6] for these definitions.
Parameterized approximation algorithms. We briefly specify the different types of algorithms we will consider. As already defined in the introduction, an FPT algorithm computes the optimum solution in f(k) · n^{O(1)} time for some parameter k and computable function f: ℕ → ℕ on inputs of size n. The common choices of parameters are the standard parameters based on solution size, structural parameters, guarantee parameters, and dual parameters.
An algorithm that computes the optimum solution in f(k) · n^{g(k)} time, for some parameter k and computable functions f, g: ℕ → ℕ, is called a slice-wise polynomial (XP) algorithm. If the parameter is the approximation factor, i.e., the algorithm computes a (1 + ε)-approximation in f(ε) · n^{g(ε)} time, then it is called a polynomial-time approximation scheme (PTAS). The latter type of algorithm has been studied avant la lettre for quite a while. This is also true for the corresponding FPT algorithm, which computes a (1 + ε)-approximation in f(ε) · n^{O(1)} time, and is referred to as an efficient polynomial-time approximation scheme (EPTAS). Note that if the standard parameterization of an optimization problem is W[1]-hard, then the optimization problem does not have an EPTAS (unless FPT = W[1]) [11].
Some interesting links between these algorithms, traditionally studied from the perspective of polynomial-time approximation algorithms, and parameterized complexity have been uncovered more recently [8,11,12,13,14,15,16].
As also mentioned in the introduction, a parameterized α-approximation algorithm computes an α-approximation in f(k) · n^{O(1)} time for some parameter k on inputs of size n. If α can be set to 1 + ε for any ε > 0 and the runtime is f(k, ε) · n^{g(ε)}, then we obtain a parameterized approximation scheme (PAS) for parameter k. Note that this runtime is only truly FPT if we assume that ε is constant. If we forbid this and consider ε as a parameter as well, i.e., if the runtime should be of the form f(k, ε) · n^{O(1)}, then we obtain an efficient parameterized approximation scheme (EPAS).
Kernelization. A further topic closely related to FPT algorithms is kernelization. Here, the idea is that an instance is efficiently pre-processed by removing the “easy parts” so that only the NP-hard core of the instance remains. More concretely, a kernelization algorithm takes an instance I with parameter k of some problem and computes a new instance I′ with parameter k′ of the same problem. The runtime of this algorithm is polynomial in the size of the input instance I and in k, while the size of the output I′ and the parameter k′ are bounded as a function of the input parameter k. For optimization problems, it should also be the case that any optimum solution to I′ can be converted to an optimum solution of I in polynomial time. The new instance I′ is called the kernel of I (for parameter k). A fundamental result in fixed-parameter tractability is that an (optimization) problem parameterized by k is FPT if and only if it admits a kernelization algorithm for the same parameter [17]. However, the size of the guaranteed kernel will in general be exponential (or worse) in the input parameter. Therefore, an interesting question is whether an NP-hard problem admits small kernels of polynomial size. This can be interpreted as meaning that the problem has a very efficient pre-processing algorithm, which can be used prior to solving the kernel. It also gives an additional dimension to the parameterized complexity landscape.
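As a concrete illustration of the definition (again our own sketch, not from the survey), the classical kernelization of Buss for Vertex Cover shrinks any instance (I, k) to an equivalent instance with at most k² edges:

# Our own illustrative sketch: Buss's kernelization for Vertex Cover.
# Rule 1: a vertex of degree > k must belong to every cover of size <= k.
# Rule 2: after Rule 1, a yes-instance has at most k^2 edges (each of the
# <= k cover vertices covers <= k edges), so larger instances are rejected.

def buss_kernel(edges, k):
    """Return a reduced instance (edges', k'), or None for a provable no-instance."""
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for u, v in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        high = next((w for w, d in degree.items() if d > k), None)
        if high is not None:  # Rule 1: take `high` into the cover
            edges = [e for e in edges if high not in e]
            k -= 1
            changed = True
    if k < 0 or len(edges) > k * k:
        return None  # Rule 2: no vertex cover of size <= k exists
    return edges, k  # kernel with <= k^2 edges, hence O(k^2) vertices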
Kernelization has played a fundamental role in the development of FPT algorithms, where a pre-processing step is often used to simplify the structure of the input instance. It is therefore only natural to consider such pre-processing algorithms for parameterized approximation algorithms as well. Lokshtanov et al. [18] define an α-approximate kernelization algorithm, which computes a kernel I′ such that any β-approximation to I′ can be converted into an αβ-approximation to the input instance I in polynomial time. Again, the size of I′ and the parameter k′ need to be bounded as a function of the input parameter k, and the algorithm needs to run in polynomial time. The instance I′ is now called an α-approximate kernel. Analogous to exact kernels, any problem has a parameterized α-approximation algorithm if and only if it admits an α-approximate kernel for the same parameter [18], which however might be of exponential size in the parameter.
An α-approximate kernelization algorithm that computes a polynomial-sized kernel, and for which we may set α to 1 + ε for any ε > 0, is called a polynomial-sized approximate kernelization scheme (PSAKS). In this case ε is necessarily considered to be a constant, since any kernelization algorithm needs to run in polynomial time.
We remark here that apart from α-approximate kernels, there is another common workaround for problems with no polynomial kernels, captured using the notion of Turing kernels. There is also a lower bound framework for Turing kernels [19], and the question of approximate kernels for problems that do not even admit Turing kernels is fairly natural to ask. However, we skip this discussion for the sake of brevity.
Finally, note that in the literature there is another notion of approximate kernels, called α-fidelity kernelization [20], which is different from the one mentioned above. Essentially, an α-fidelity kernel is a polynomial-time preprocessing procedure such that an optimal solution to the reduced instance translates to an α-approximate solution to the original. This definition allows a loss of precision in the preprocessing step, but demands that the reduced instance be solved to optimality. See [18] for a detailed discussion of the differences between the two approximate kernel notions.
Complexity-Theoretic Hypotheses. We assume that the readers have basic knowledge of (classic) parameterized complexity theory, including the W-hierarchy, the exponential time hypothesis (ETH), and the strong exponential time hypothesis (SETH). The reader may choose to recapitulate these definitions by referring to [6] (Sections 13 and 14).
We will additionally discuss two hypotheses that may not be standard to the community. The first is the Gap Exponential Time Hypothesis (Gap-ETH), which is a strengthening of ETH. Roughly speaking, it states that even the approximate version of 3SAT cannot be solved in subexponential time; a more formal statement of Gap-ETH can be found in Hypothesis 2. Another hypothesis we will discuss is the Parameterized Inapproximability Hypothesis (PIH), which states that the multicolored version of Densest k-Subgraph is hard to approximate in FPT time. Once again, we do not define PIH formally here; please refer to Hypothesis 1 for a formal statement.

3. FPT Hardness of Approximation

In this section, we focus on showing barriers against obtaining good parameterized approximation algorithms. The analogous field of study in the non-parameterized (NP-hardness) regime is the theory of hardness of approximation. The celebrated PCP Theorem [21,22] and numerous subsequent works have developed a rich set of tools that allowed researchers to show tight inapproximability results for many fundamental problems. In the context of parameterized approximation, the field is still in the nascent stage. Nonetheless, there have been quite a few tools that have already been developed, which are discussed in the subsequent subsections.
We divide this section into two parts. In Section 3.1, we discuss the results and techniques in the area of hardness of parameterized approximation under the standard assumption W[1] ≠ FPT. In Section 3.2, we discuss results and techniques in hardness of parameterized approximation under less standard assumptions, such as the Gap Exponential Time Hypothesis, where the gap is inherent in the assumption and the challenge is to construct gap-preserving reductions.

3.1. W[1]-Hardness of Gap Problems

In this subsection, we discuss the W[1]-hardness of approximation of a few fundamental problems. In particular, we discuss the parameterized inapproximability (i.e., W[1]-hardness of even approximating) of the Dominating Set problem, the (One-Sided) Biclique problem, the Even Set problem, the Shortest Vector problem, and the Steiner Orientation problem. We emphasize here that the main difficulty addressed in this subsection is gap generation, i.e., we focus on how to start from a hard problem (with no gap), say k-Clique (which is the canonical W[1]-complete problem), and reduce it to one of the aforementioned problems while generating a non-trivial gap in the process.

3.1.1. Parameterized Intractability of Biclique and Applications to Parameterized Inapproximability

In this subsubsection, we discuss the parameterized inapproximability of the One-Sided Biclique problem, and show how both that result and its proof technique lead to more inapproximability results.
We begin our discussion by formally stating the k-Biclique problem: we are given as input a graph G and an integer k, and the goal is to determine whether G contains a complete bipartite subgraph with k vertices on each side. The complexity of k-Biclique was a long-standing open problem and was resolved only recently by Lin [23], who showed that it is W[1]-hard. In fact, he showed a much stronger result, and this shall be the focus of attention in this subsubsection.
Theorem 1
([23]). Given a bipartite graph G = (L ∪̇ R, E) and k ∈ ℕ as input, it is W[1]-hard to distinguish between the following two cases:
  • Completeness: There are k vertices in L with at least n^{Θ(1/k)} common neighbors in R;
  • Soundness: Any k vertices in L have at most (k + 6)! common neighbors in R.
We shall refer to the gap problem in the above theorem as the One-Sided k-Biclique problem. To prove the above result, Lin introduced a technique which we shall refer to as Gadget Composition. The gadget composition technique has found more applications since [23]. We provide below a failed approach (given in [23]) to prove the above theorem; nonetheless it gives us good insight into how the gadget composition technique works.
Suppose we can construct a set family T = {S_1, S_2, …, S_n} of subsets of [n], for some integers k, n and h > ℓ (for example, h = n^{1/k} and ℓ = (k + 1)!), such that:
  • Property 1: Any k + 1 distinct subsets in T have intersection size at most ℓ;
  • Property 2: Any k distinct subsets in T have intersection size at least h.
Then we can combine T with an instance of k-Clique to obtain a gap instance of One-Sided k-Biclique as follows. Given a graph G and parameter k with V(G) = [n], we construct our instance of One-Sided k-Biclique, say H = (L ∪̇ R, E(H)), by setting L := E(G) and R := [n], where for any (v_i, v_j) ∈ L and v ∈ [n], we have ((v_i, v_j), v) ∈ E(H) if and only if v ∈ S_i ∩ S_j. Let s := k(k − 1)/2. It is easy to check that if G has a k-vertex clique, say {v_1*, …, v_k*}, then Property 2 implies that |Δ| ≥ h, where Δ := ∩_{i ∈ [k]} S_{v_i*}. It follows that the s vertices in L given by {(v_i*, v_j*) : 1 ≤ i < j ≤ k} are neighbors of every vertex in Δ ⊆ R. On the other hand, if G contains no k-vertex clique, then any s distinct vertices in L (i.e., s edges in G) must have at least k + 1 vertices of G as their endpoints. Say V′ is the set of all vertices contained in these s edges. By Property 1, we know that |Δ′| ≤ ℓ, where Δ′ := ∩_{v ∈ V′} S_v, and thus any s distinct vertices in L have at most ℓ common neighbors in R.
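The following sketch (ours) spells out this composition step in code; the set family T satisfying Properties 1 and 2 is the hypothetical gadget, which, as discussed below, is not actually known to exist:

# Our own sketch of the gadget composition above. T is the assumed set
# family: T[v] is the subset S_v of [n] attached to vertex v of G.

def compose(graph_edges, T):
    """Build the One-Sided Biclique instance H from a k-Clique instance G.

    graph_edges: edges (i, j) of G on vertex set {0, ..., n-1}.
    Returns H as a dict: left vertex (an edge of G) -> set of its right
    neighbors (elements of [n])."""
    H = {}
    for (i, j) in graph_edges:
        # ((v_i, v_j), v) is an edge of H iff v lies in S_i intersect S_j.
        H[(i, j)] = T[i] & T[j]
    return H

# If G has a k-clique, the s = k(k-1)/2 left vertices given by its edges
# share the >= h common right neighbors guaranteed by Property 2; if G has
# no k-clique, Property 1 caps the common neighborhood of any s left
# vertices at l, yielding the gap.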
It is indeed very surprising that this technique can yield non-trivial inapproximability results, as the gap is essentially produced from the gadget and is oblivious to the input! This also stands in stark contrast to the PCP theorem and hardness of approximation results in NP, where all known results were obtained by global transformations of the input. The key difference between the parameterized and NP worlds is the notion of locality. For example, consider the k-Clique problem: if a graph does not have a clique of size k, then, given any k vertices, a random vertex pair among these k vertices fails to be an edge with probability at least 1/k². It is philosophically possible to compose the input graph with a simple error correcting code to amplify this probability to a constant, as we are allowed to blow up the input size by any function of k. In contrast, in the NP world k is not fixed and is of the same magnitude as the input size, and thus we are only allowed to blow up the input size by a poly(n) factor. Nonetheless, we have to point out that the gadgets typically needed to make the gadget composition technique work must be extremely rich in combinatorial structure (they are typically constructed from random objects or algebraic objects), and were previously studied extensively in the area of extremal combinatorics.
Returning to the reduction above from k-Clique to One-Sided k-Biclique, it turns out that we do not know how to construct the set family T, and hence the reduction does not pan out. Nonetheless, Lin constructed a variant of T in which Property 2 is more refined, and the reduction from k-Clique to One-Sided k-Biclique then goes through with slightly more effort.
Before we move on to discussing some applications of Theorem 1 and the gadget composition technique, we remark on known stronger time lower bounds for One-Sided k-Biclique under stronger running time hypotheses. Lin [23] showed a lower bound of n^{Ω(√k)} for One-Sided k-Biclique assuming ETH. We wonder if this can be further improved.
Open Question 1 (Lower bound for One-Sided k-Biclique under ETH and SETH). Can the running time lower bound for One-Sided k-Biclique be improved to n^{Ω(k)} under ETH? Can it be improved to n^{k − o(1)} under SETH?
We remark that a direction to address the above question was detailed in [24]. While on the topic of the k-Biclique problem, it is worth noting that the lower bound of n^{Ω(√k)} for One-Sided k-Biclique assuming ETH yields a running time lower bound of n^{Ω(log k / log log k)} for the k-Biclique problem (due to the soundness parameters in Theorem 1). However, assuming randomized ETH, the running time lower bound for the k-Biclique problem can be improved to n^{Ω(√k)} [23]. Can this improved running time lower bound be obtained under just (deterministic) ETH? Finally, we remark that we shall discuss the hardness of approximation of the k-Biclique problem in Section 3.2.3.
Inapproximability of k-Dominating Set via Gadget Composition. We shall discuss the inapproximability of k-Dominating Set in detail in the next subsubsection. Here we would simply like to highlight how the above framework was used by Chen and Lin [25] and Lin [26] to obtain inapproximability results for k-Dominating Set.
In [25], the authors, starting from Theorem 1, obtain the W[1]-hardness of approximating k-Dominating Set to a factor of almost two. They then amplify the gap to any constant by using a specialized graph product.
We now turn our attention to a recent result of Lin [26], who provided a strong inapproximability result for k-Dominating Set (we refer the reader to Section 3.1.2 for the context of this result). Lin’s proof of inapproximability of k-Dominating Set is a one-step reduction from an instance of k-Set Cover on a universe of size O(log n) (where n is the number of subsets in the given collection) to an instance of k-Set Cover on a universe of size poly(n), with a gap of (log n / log log n)^{1/k}. Lin then uses this gap-producing self-reduction to provide running time lower bounds (under different time hypotheses) for approximating k-Set Cover to a factor of (1 − o(1)) · (log n / log log n)^{1/k}. Recall that k-Dominating Set is essentially [27,28] equivalent to k-Set Cover.
Elaborating, Lin designs a gadget by combining the hypercube partition gadget of Feige [29] with a derandomizing combinatorial object called a universal set to obtain a gap gadget, and then combines the gap gadget with the input k-Set Cover instance (on a small universe but with no gap) to obtain a gap k-Set Cover instance. This is another success story of the gadget composition technique.
Finally, we remark that Lai [30] recently extended Lin’s inapproximability results for dominating set (using the same proof framework) to rule out constant-depth circuits of size f(k) · n^{o(k)} for any computable function f.
Even Set. A recent success story of Theorem 1 is its application to resolve a long-standing open problem called the k-Minimum Distance Problem (also referred to as k-Even Set), where we are given as input a generator matrix A ∈ F_2^{n×m} of a binary linear code and an integer k, and the goal is to determine whether the code has distance at most k. Recall that the distance of a linear code is min_{0 ≠ x ∈ F_2^m} ‖Ax‖_0, where ‖·‖_0 denotes the 0-norm (aka the Hamming norm).
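To make this quantity concrete, here is a brute-force computation of the distance directly from the formula above (our own sketch; it enumerates all 2^m vectors x and is meant purely for intuition):

# Our own sketch: the distance of a binary linear code, computed by brute
# force directly from the definition min over nonzero x of ||Ax||_0.
from itertools import product

def code_distance(A):
    """A is an n x m 0/1 matrix given as a list of rows."""
    n, m = len(A), len(A[0])
    best = None
    for x in product((0, 1), repeat=m):
        if not any(x):
            continue  # skip the zero vector
        weight = sum(sum(A[i][j] * x[j] for j in range(m)) % 2
                     for i in range(n))
        best = weight if best is None else min(best, weight)
    return best

# k-Even Set asks precisely whether code_distance(A) <= k.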
In [31], the authors showed that k-Even Set is W[1]-hard under randomized reductions. The result was obtained by starting from the inapproximability result stated in Theorem 1, followed by a series of intricate reductions. In fact, they proved the following stronger inapproximability result.
Theorem 2
([31]). For any γ ≥ 1, given input (A, k) ∈ F_2^{n×m} × ℕ, it is W[1]-hard (under randomized reductions) to distinguish between:
  • Completeness: The distance of the code generated by A is at most k, and,
  • Soundness: The distance of the code generated by A is more than γ · k.
We emphasize that even to obtain the W[1]-hardness of k-Even Set (with no gap), they needed to start from the gap problem given in Theorem 1.
The proof of the above theorem proceeds by first showing FPT hardness of approximation of the non-homogeneous variant of the k-Minimum Distance Problem, called the k-Nearest Codeword Problem. In the k-Nearest Codeword Problem, we are given a target vector y (in F_2^n) in addition to (A, k), and the goal is to determine whether there is any x (in F_2^m) such that the Hamming norm of Ax − y is at most k. As an intermediate step of the proof of Theorem 2, they showed that the k-Nearest Codeword Problem is W[1]-hard to approximate to any constant factor.
An important intermediate problem studied by [31] to prove the inapproximability of the k-Nearest Codeword Problem was the k-Linear Dependent Set problem: given a set A of n vectors over a finite field F_q and an integer k, decide whether there are k vectors in A that are linearly dependent. They ruled out constant-factor approximation algorithms for this problem running in FPT time. Summarizing, the high-level proof of Theorem 2 follows by reducing One-Sided k-Biclique to k-Linear Dependent Set, which is then reduced to the k-Nearest Codeword Problem, followed by a final randomized reduction to the k-Minimum Distance Problem.
Finally, we note that there is no reason to define the k-Minimum Distance Problem only for binary codes; it can be defined over larger fields as well. It turns out that [31] cannot rule out FPT algorithms for the k-Minimum Distance Problem over F_p with p > 2, when p is fixed and is not part of the input. Thus we have the following open problem.
Open Question 2. Is it W[1]-hard to decide the k-Minimum Distance Problem over F_p with p > 2, when p is fixed and is not part of the input?
Shortest Vector Problem. Theorem 1 (or more precisely the constant inapproximability of k-Linear Dependent Set stated above) was also used to resolve the complexity of the parameterized k-Shortest Vector Problem in lattices, where the input (in the ℓ_p norm) is an integer k ∈ ℕ and a matrix A ∈ ℤ^{n×m} representing the basis of a lattice, and we want to determine whether the shortest (non-zero) vector in the lattice has length at most k, i.e., whether min_{0 ≠ x ∈ ℤ^m} ‖Ax‖_p ≤ k. Again, k is the parameter of the problem. It should also be noted here that (as in [32]), we require the basis of the lattice to be integer valued, which is sometimes not enforced in the literature (e.g., [33,34]). This is because, if A is allowed to be any matrix in ℝ^{n×m}, then parameterization is meaningless, as we can simply scale A down by a large multiplicative factor.
In [31], the authors showed that the k-Shortest Vector Problem is W[1]-hard under randomized reductions. In fact, they proved the following stronger inapproximability result.
Theorem 3
([31]). For any p > 1, there exists a constant γ_p > 1 such that, given input (A, k) ∈ ℤ^{n×m} × ℕ, it is W[1]-hard (under randomized reductions) to distinguish between:
  • Completeness: The ℓ_p norm of the shortest vector of the lattice generated by A is at most k, and,
  • Soundness: The ℓ_p norm of the shortest vector of the lattice generated by A is more than γ_p · k.
Notice that Theorem 2 rules out FPT approximation algorithms with any constant approximation ratio for k-Even Set. In contrast, the above result only proves FPT inapproximability with some constant ratio for the k-Shortest Vector Problem in the ℓ_p norm for p > 1. As with k-Even Set, even to prove the W[1]-hardness of the k-Shortest Vector Problem (with no gap), they needed to start from the gap problem given in Theorem 1.
The proof of the above theorem proceeds by first showing FPT hardness of approximation of the non-homogeneous variant of the k-Shortest Vector Problem, called the k-Nearest Vector Problem. In the k-Nearest Vector Problem, we are given a target vector y (in ℤ^n) in addition to (A, k), and the goal is to determine whether there is any x (in ℤ^m) such that the ℓ_p norm of Ax − y is at most k. As an intermediate step of the proof of Theorem 3, they showed that the k-Nearest Vector Problem is W[1]-hard to approximate to any constant factor. Summarizing, the high-level proof of Theorem 3 follows by reducing One-Sided k-Biclique to k-Linear Dependent Set, which is then reduced to the k-Nearest Vector Problem, followed by a final randomized reduction to the k-Shortest Vector Problem.
An immediate question left open by their work is whether Theorem 3 can be extended to the k-Shortest Vector Problem in the ℓ_1 norm. In other words,
Open Question 3 (Approximation of k-Shortest Vector Problem in the ℓ_1 norm). Is the k-Shortest Vector Problem in the ℓ_1 norm in FPT?

3.1.2. Parameterized Inapproximability of Dominating Set

In the k-Dominating Set problem, we are given an integer k and a graph G on n vertices as input, and the goal is to determine if there is a dominating set of size at most k. It was a long-standing open question to design an algorithm which runs in T(k) · poly(n) time (i.e., FPT time) and finds a dominating set of size at most F(k) · k whenever the graph G has a dominating set of size k, for any computable functions T and F.
The first non-trivial progress on this problem was by Chen and Lin [25], who ruled out the existence of such algorithms (under W[1] ≠ FPT) for all constant functions F (i.e., F ≡ c, where c is any universal constant). We discussed their proof technique in the previous subsubsection. A couple of years later, Karthik C. S. et al. [35] completely settled the question by ruling out the existence of such an algorithm (under W[1] ≠ FPT) for any computable function F. Thus, k-Dominating Set was shown to be totally inapproximable. We elaborate on their proof below.
Theorem 4
([35]). Let F: ℕ → ℕ be any computable function. Given an instance (G, k) of k-Dominating Set as input, it is W[1]-hard to distinguish between the following two cases:
  • Completeness: G has a dominating set of size k.
  • Soundness: Every dominating set of G is of size at least F(k) · k.
The overall proof follows by reducing k-Multicolor Clique to the gap k-Dominating Set with parameters as given in the theorem statement. In the k-Multicolor Clique problem, we are given an integer k and a graph G on vertex set V := V_1 ∪̇ V_2 ∪̇ ⋯ ∪̇ V_k as input, where each V_i is an independent set of cardinality n, and the goal is to determine if there is a clique of size k in G. Following a straightforward reduction from the k-Clique problem, it is fairly easy to see that k-Multicolor Clique is W[1]-hard.
The reduction from k-Multicolor Clique to the gap k-Dominating Set proceeds in two steps. In the first step we reduce k-Multicolor Clique to k-Gap CSP. This is the step where we generate the gap. In the second step, we reduce k-Gap CSP to gap k-Dominating Set. This step is fairly standard and mimics ideas from Feige’s proof of the NP-hardness of approximating the Max Coverage problem [29].
Before we proceed with the details of the above two steps, let us introduce a small technical tool from coding theory that we will need. We need codes known in the literature as good codes; these are binary error correcting codes whose rate and relative distance are both constants bounded away from 0 (see [36] (Appendix E.1.2.5) for definitions). The reader may think of them as follows: for every ℓ ∈ ℕ, we say that C ⊆ {0,1}^ℓ is a good code if (i) |C| = 2^{ρℓ} for some universal constant ρ > 0, and (ii) any distinct c, c′ ∈ C have different values on at least a δ fraction of coordinates, for some universal constant δ > 0. An encoding of C is an injective function E_C : {0,1}^{ρℓ} → C. The encoding is said to be efficient if E_C(x) can be computed in poly(ℓ) time for any x ∈ {0,1}^{ρℓ}.
Let us fix k ∈ ℕ and F: ℕ → ℕ as in the theorem statement, and let K := k(k − 1)/2 denote the number of unordered pairs in [k]. We further define
α := 1 − 1/(K · F(K)^K).
From k-Multicolor Clique to k-Gap CSP. Starting from an instance of k-Multicolor Clique, say G on vertex set V := V_1 ∪̇ V_2 ∪̇ ⋯ ∪̇ V_k, we write down a set of constraints P on a variable set X := {x_{i,j} : i, j ∈ [k], i ≠ j} as follows. For every i, j ∈ [k] with i ≠ j, define E_{i,j} to be the set of all edges in G whose endpoints are in V_i and V_j. An assignment to the variable x_{i,j} is an element of E_{i,j}, i.e., a pair of vertices, one from V_i and the other from V_j. Suppose that x_{i,j} was assigned the edge {v_i, v_j}, where v_i ∈ V_i and v_j ∈ V_j. Then we define x_{i,j}[i] to be v_i and x_{i,j}[j] to be v_j. We define P := {P_1, …, P_k}, where the constraint P_i is satisfied if the values x_{1,i}[i], x_{2,i}[i], …, x_{i−1,i}[i], x_{i+1,i}[i], …, x_{k,i}[i] are all the same. We refer to the problem of determining whether there is an assignment to the variables in X satisfying all the constraints as the k-CSP problem. Notice that while this is a natural way to write k-Multicolor Clique as a CSP (we check that all variables sharing a vertex agree on its assignment), there is no gap yet in the k-CSP problem. In particular, if there is a clique of size k in G, then there is an assignment to the variables of X (assigning the edges of the clique in G to the corresponding variables in X) such that all the constraints in P are satisfied; however, if every clique in G has size less than k, then an assignment to the variables of X may violate only a single constraint in P (and not more).
In order to amplify the gap, we rewrite the set of constraints P in a different way, obtaining a set of constraints P′ on the same variable set X, as follows. Recall that |V_i| = n, and therefore we can label all vertices in V_i by vectors in {0,1}^{log n}. Suppose that x_{i,j} was assigned the edge {v_i, v_j}, where v_i ∈ V_i and v_j ∈ V_j; then, for β ∈ [log n], we define x_{i,j}[i]_β to be the βth coordinate of v_i. We define P′ := {P′_1, …, P′_{log n}}, where the constraint P′_β is satisfied if and only if the following holds for all i ∈ [k]: the values x_{1,i}[i]_β, x_{2,i}[i]_β, …, x_{i−1,i}[i]_β, x_{i+1,i}[i]_β, …, x_{k,i}[i]_β are all the same. Again, notice that there is an assignment to the variables of X satisfying all the constraints in P if and only if the same assignment also satisfies all the constraints in P′.
However, rewriting P as P′ allows us to simply apply the error correcting code C (with parameters ρ and δ, and encoding function E_C) to the constraints in P′ to obtain a gap! In particular, we choose ℓ to be such that ρℓ = log n. Consider a new set of constraints P″, on the same variable set X, as follows. For any z ∈ {0,1}^{log n} and β ∈ [ℓ], we denote by E_C(z)_β the βth coordinate of E_C(z). We define P″ := {P″_1, …, P″_ℓ}, where the constraint P″_β is satisfied if and only if the following holds for all i ∈ [k]: the values E_C(x_{1,i}[i])_β, E_C(x_{2,i}[i])_β, …, E_C(x_{i−1,i}[i])_β, E_C(x_{i+1,i}[i])_β, …, E_C(x_{k,i}[i])_β are all the same.
Notice, as before, that there is an assignment to X satisfying all the constraints in P′ if and only if the same assignment also satisfies all the constraints in P″. However, for every assignment to X that violates at least one constraint in P′, the same assignment violates at least a δ fraction of the constraints in P″. To see this, consider an assignment that violates the constraint P′_1 in P′. This implies that there is some i ∈ [k] such that the values x_{1,i}[i]_1, x_{2,i}[i]_1, …, x_{i−1,i}[i]_1, x_{i+1,i}[i]_1, …, x_{k,i}[i]_1 are not all the same. Let us suppose, without loss of generality, that the values of x_{1,i}[i]_1 and x_{2,i}[i]_1 are different. In other words, we have x_{1,i}[i] ≠ x_{2,i}[i], where we think of x_{1,i}[i], x_{2,i}[i] as (log n)-bit vectors. Let Δ ⊆ [ℓ] be such that β ∈ Δ if and only if E_C(x_{1,i}[i])_β ≠ E_C(x_{2,i}[i])_β. By the distance of the code C, we have |Δ| ≥ δℓ. Finally, notice that for all β ∈ Δ, the assignment does not satisfy the constraint P″_β in P″. We refer to the problem of distinguishing whether there is an assignment to X satisfying all the constraints, or every assignment to X violates a constant fraction of the constraints, as the k-Gap CSP problem.
In order to rule out F(k)-approximation FPT algorithms for k-Dominating Set, we will need that every assignment to X violating at least one constraint in P′ violates at least an α fraction of the constraints (instead of just a δ fraction; note that α is very close to 1, whereas δ can be at most half). To boost the gap [37], we apply a simple repetition/direct-product trick to our constraint system. Starting from P″, we construct a new set of constraints P* on the same variable set X, as follows.
P* := { P*_S : S ∈ [ℓ]^t },
where t := ⌈ log(1 − α) / log(1 − δ) ⌉. For every S ∈ [ℓ]^t, we define P*_S to be satisfied if and only if for all β ∈ S, the constraint P″_β is satisfied.
It is easy to see that P″ and P* have the same set of completely satisfying assignments. However, every assignment to X that violates a δ fraction of the constraints in P″ violates at least an α fraction of the constraints in P*. To see this, consider an assignment that violates a δ fraction of the constraints in P″, say all constraints P″_β with β ∈ Δ ⊆ [ℓ], where |Δ| ≥ δℓ. This assignment satisfies the constraint P*_S if and only if S ∈ ([ℓ] ∖ Δ)^t. This implies that the fraction of constraints in P* that the assignment can satisfy is upper bounded by (1 − δ)^t ≤ 1 − α.
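The parameter choice in this repetition step is a one-line calculation. The following sketch (ours) computes the smallest number t of repetitions boosting a gap of δ to a gap of α, matching the formula for t above:

# Our own sketch of the direct-product parameter choice: t repetitions turn
# a violated fraction of delta into a violated fraction of alpha, since a
# constraint P*_S survives only if all t coordinates of S avoid Delta.
import math

def repetitions(delta, alpha):
    """Smallest t with (1 - delta)^t <= 1 - alpha."""
    t = math.ceil(math.log(1 - alpha) / math.log(1 - delta))
    assert (1 - delta) ** t <= 1 - alpha
    return t

# Example: boosting a gap of delta = 0.1 to alpha = 0.99 takes t = 44
# repetitions; the number of constraints grows from l to l^t, which for
# l = log(n)/rho remains at most f(k) * n^{o(1)}.
print(repetitions(0.1, 0.99))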
From k-Gap CSP to gap k-Dominating Set. In the second part, starting from the aforementioned instance of k-Gap CSP (after boosting the gap), we construct an instance H of k-Dominating Set. The construction is due to Feige [29,38] and proceeds as follows. Let F be the set of all functions from {0,1}^{tk} to the set of unordered pairs in [k], i.e., F := { f : {0,1}^{tk} → {{i,j} : i, j ∈ [k], i ≠ j} }. The graph H is on vertex set U = A ∪̇ B, where A = P* × F and B = E(G), i.e., B is simply the edge set of G. We introduce an edge between all pairs of vertices in B. We introduce an edge between a := (P*_S, f) ∈ A, where S := (s_1, …, s_t) ∈ [ℓ]^t and f ∈ F, and e := {v_i, v_j} ∈ B if and only if the following holds:
∃ τ := (τ_1, …, τ_t) ∈ ({0,1}^k)^t such that f(τ) = {i, j} and, for all r ∈ [t], we have E_C(v_i)_{s_r} = (τ_r)_i and E_C(v_j)_{s_r} = (τ_r)_j.
Notice that the number of vertices in H is |A| + |B| ≤ (log n / ρ)^t · K^{2^{tk}} + n² < η(k) · n^{2.01}, for some computable function η. It is not hard to check that the following hold:
  • (Completeness) If there is an assignment to X that satisfies all constraints in P*, then the corresponding K vertices in B dominate all vertices in the graph H.
  • (Soundness) If each assignment can only satisfy a (1 − α) fraction of the constraints in P*, then any dominating set of H has size at least F(K) · K.
We skip presenting the details of this part of the proof here. The proofs have been derived many times in the literature; if needed, the readers may refer to Appendix A of [35]. This completes our sketch of the proof of Theorem 4.
A few remarks are in order. First, the k-Gap CSP problem described in the proof above is formalized as the k-Maxcover problem in [35] (and was originally introduced in [39]). In particular, the formalism of k-Maxcover (which may be thought of as the parameterized label cover problem) is generic enough to be used as an intermediate gap problem to reduce to both k-Dominating Set (as in [35]) and k-Clique (as in [39]). Moreover, it is robust enough to capture stronger running time lower bounds (under stronger hypotheses); this will be elaborated on below. However, in order to keep the above proof succinct, we skipped introducing the k-Maxcover problem and worked with k-Gap CSP, which was sufficient for the above proof.
Second, Karthik C. S. et al. [35] additionally showed that for all computable functions T, F: ℕ → ℕ and every constant ε > 0:
  • Assuming the Exponential Time Hypothesis (ETH), there is no F(k)-approximation algorithm for k-Dominating Set that runs in T(k) · n^{o(k)} time.
  • Assuming the Strong Exponential Time Hypothesis (SETH), for every integer k ≥ 2, there is no F(k)-approximation algorithm for k-Dominating Set that runs in T(k) · n^{k − ε} time.
In order to establish Theorem 4 and the above two results, Karthik C. S. et al. [35] introduced a framework to prove parameterized hardness of approximation results. In this framework, the objective is to start from either the W[1] ≠ FPT hypothesis, ETH, or SETH, and end up with gap k-Dominating Set, i.e., they design reductions from instances of k-Clique, 3-CNF-SAT, and ℓ-CNF-SAT, respectively, to an instance of gap k-Dominating Set. A prototype reduction in this framework has two modular parts. In the first part, which is specific to the problem they start from, they generate a gap and obtain hardness of gap k-Maxcover. In the second part, they show a gap-preserving reduction from gap k-Maxcover to gap k-Dominating Set, which is essentially the same as the reduction from k-Gap CSP to k-Dominating Set in the proof of Theorem 4.
The first part of a prototype reduction, from the computational problem underlying a hypothesis of interest to gap k-Maxcover, follows by designing an appropriate communication protocol. In particular, the computational problem is first reduced to a constraint satisfaction problem (CSP) over k (or some function of k) variables over an alphabet of size n. The predicate of this CSP depends on the computational problem underlying the hypothesis from which we started. Generalizing ideas from [40], they then show how a protocol computing this predicate in the multiparty communication model (with one player per variable of the CSP) can be combined with the CSP to obtain an instance of gap k-Maxcover. For example, for the W[1] ≠ FPT hypothesis and ETH, the predicate is a variant of the equality function, and for SETH, the predicate is the well-studied disjointness function. The completeness and soundness of the protocols computing these functions translate directly to the completeness and soundness of k-Maxcover.
Third, we recall that Lin [26] recently provided alternate proofs of Theorem 4 and the above mentioned stronger running time lower bounds. While we discussed his proof technique in Section 3.1.1, we would like to discuss his result here. With the right setting of parameters in the proof of Theorem 4 (for example, setting α = 1 − 1/(log n)^{Ω(1/k)}), we can obtain that approximating k-Dominating Set to a factor of (log n)^{1/k³} is W[1]-hard. Lin improved the exponent of 1/k³ in the approximation factor to 1/k. Can this inapproximability be further improved? On the other hand, can we do better than the simple polynomial-time greedy algorithm, which provides a (1 + ln n)-factor approximation? This leads us to the following question:
Open Question 4 (Tight inapproximability of k-Dominating Set). Is there a (log n)^{1 − o(1)}-factor approximation algorithm for k-Dominating Set running in time n^{k^{0.1}}?
We conclude the discussion on k-Dominating Set with an open question on W[2]-hardness of approximation. As noted earlier, k-Dominating Set is a W[2]-complete problem, and Theorem 4 shows that the problem is W[1]-hard to approximate to any factor F(k). However, is there some computable function F for which approximating k-Dominating Set to a factor F(k) is in W[1]? In other words, we have:
Open Question 5 (W[2]-completeness of approximating k-Dominating Set). Can we base total inapproximability of k-Dominating Set on W[2] ≠ FPT?

3.1.3. Parameterized Inapproximability of Steiner Orientation by Gap Amplification

Gap amplification is a widely used technique in the classic literature on (NP-)hardness of approximation (e.g., [41,42,43]). In fact, arguably the simplest proof of the PCP theorem, due to Dinur [43], is via repeated gap amplification. The overall idea is simple: we start with a hardness of approximation result for a problem with a small factor (e.g., 1 + 1/n). At each step, we perform an operation that transforms an instance of our problem into another instance, in such a way that the gap becomes bigger; usually this new instance will also be bigger than our old instance. By repeatedly applying this operation, one can finally arrive at a constant, or even super-constant, factor hardness of approximation.
There are two main parameters that determine the success or failure of such an approach: how large the new instance is compared to the old instance (i.e., the size blow-up), and how large the new gap is compared to the old gap, in each operation. To see how these two come into the picture, let us first consider a case study where a (straightforward) gap amplification procedure does not work: k-Clique. The standard way to amplify the gap for k-Clique is through graph products. Recall that the (tensor) graph product of a graph G = (V, E) with itself, denoted by G², is a graph whose vertex set is V², with an edge between (u_1, u_2) and (v_1, v_2) if and only if (u_1, v_1) ∈ E and (u_2, v_2) ∈ E. It is not hard to check that, if we can find a clique of size t in G², then we can find one of size √t in G (and vice versa). This implies that, if we have an instance of Clique that is hard to approximate to within a factor of (1 + ε), then we may take the graph product with itself, which yields an instance of Clique that is hard to approximate to within a factor of (1 + ε)².
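A minimal sketch (ours) of this squaring step is given below. We adopt the convention that the two coordinates may also be equal (i.e., self-loops are implicitly allowed); under this convention a clique of size t in G yields one of size t² in G², which is what the amplification of the factor from (1 + ε) to (1 + ε)² uses:

# Our own sketch of graph squaring for gap amplification. The `ok` test
# treats equal coordinates as adjacent (implicit self-loops), so that a
# t-clique {v_1, ..., v_t} in G gives the t^2-clique of all pairs in G^2.
from itertools import combinations

def graph_square(vertices, edges):
    adj = {frozenset(e) for e in edges}
    def ok(u, v):
        return u == v or frozenset((u, v)) in adj
    V2 = [(u1, u2) for u1 in vertices for u2 in vertices]
    E2 = [(a, b) for a, b in combinations(V2, 2)
          if ok(a[0], b[0]) and ok(a[1], b[1])]
    return V2, E2

# The vertex set squares with every application; as explained next, this
# size blow-up is exactly what dooms the approach for k-Clique.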
Now, let us imagine that we start with the hard instance of an exact version of k-Clique. We may think of this as being hard to approximate to within a factor of (1 − 1/k). Hence, we may apply the above gap amplification procedure log k times, resulting in an instance of Clique that is hard to approximate to within a factor of (1 − 1/k)^{2^{log k}} = (1 − 1/k)^k, which is a constant bounded away from one (i.e., ≈ 1/e). The bad news here is that the number of vertices of the final graph is n^{2^{log k}} = n^k, where n is the number of vertices of the initial graph. This does not give any lower bound, because we can solve k-Clique in the original graph in n^{O(k)} time trivially! In the next subsection, we will see a simple way to prove hardness of approximating k-Clique, assuming stronger assumptions. However, it remains an interesting and important open question how to prove such hardness from a non-gap assumption:
Open Question 6. Is it W[1]-hard or ETH-hard to approximate k-Clique to within a constant factor in FPT time?
Having seen a failed attempt, we may now move on to a success story. Remarkably, Wlodarczyk [44] recently managed to use gap amplification to prove hardness of approximation for connectivity problems, including the k-Steiner Orientation problem. Here we are given a mixed graph G, whose edges are either directed or undirected, and a set of k terminal pairs {(s_i, t_i)}_{i ∈ [k]}. The goal is to orient all the undirected edges in such a way that maximizes the number of t_i that can be reached from s_i. The problem is known to be in XP [45] but is W[1]-hard even when all terminal pairs can be connected [46]. Starting from this W[1]-hardness, Wlodarczyk [44] devises a gap amplification step that implies a hardness of approximation with factor (log k)^{o(1)} for the problem. Due to the technicality of the gap amplification step, we will not go into the specifics in this survey. However, let us point out the differences between this gap amplification and the (failed) one for Clique above. The key point here is that the new instance of Wlodarczyk’s gap amplification has size of the form f(k) · n instead of n² as in the graph product. This means that, even if we apply Wlodarczyk’s gap amplification step log k times, or, more generally, g(k) times, it only results in an instance of size f(f(⋯f(k)⋯)) · n, with f applied g(k) times, which is still FPT! Since the technique is still quite new, it is an exciting frontier to examine whether other parameterized problems allow similar gap amplification steps.

3.2. Hardness from Gap Hypotheses

In the previous subsection, we saw that several hardness of approximation results can be proved based on standard assumptions. However, as alluded to briefly, some basic problems, including k-Clique, still evade attempts at proving such results. This has motivated several researchers in the community to come up with new assumptions that allow more power and flexibility in proving inapproximability results. We will take a look at two of these hypotheses in this subsection; we note that other assumptions have also been formulated, but we focus on these two since they arguably have been used most often.
The first assumption, called the Parameterized Inapproximability Hypothesis (PIH), can be viewed as a gap analogue of the W[1] ≠ FPT assumption. There are many (equivalent) ways to state PIH. We choose to state it in terms of the inapproximability of the colored version of Densest k-Subgraph. In Multicolored Densest k-Subgraph, we are given a graph G = (V, E) where the vertex set V is partitioned into k parts V_1, …, V_k. The goal is to select k vertices v_1 ∈ V_1, v_2 ∈ V_2, …, v_k ∈ V_k such that {v_1, …, v_k} induces as many edges as possible.
It is easy to see that the exact version of this problem is W[1]-hard, via a straightforward reduction from k-Clique. PIH postulates that even the approximate version of this problem is hard:
Hypothesis 1
(Parameterized Inapproximability Hypothesis (PIH) [47,48]). For some constant ε > 0, there is no (1 + ε)-factor FPT approximation algorithm for Multicolored Densest k-Subgraph.
There are two important remarks about PIH. First, the factor (1 + ε) is not important, and the conjecture remains equivalent even if we state it for a factor C, for any arbitrarily large constant C; this is due to gap amplification via parallel repetition [42]. Second, PIH implies that k-Clique is hard to approximate to within any constant factor:
Lemma 1.
Assuming PIH, there is no constant factor FPT approximation algorithm for k-Clique.
The above result can be shown via a classic reduction of Feige, Goldwasser, Lovász, Safra and Szegedy (henceforth FGLSS) [49], which was one of the first works connecting proof systems and hardness of approximation. Specifically, the FGLSS reduction transforms G into another graph G′ by viewing the edges of G as vertices of G′. Then, we connect two vertices {u_1, v_1} and {u_2, v_2} of G′, except when the union {u_1, v_1} ∪ {u_2, v_2} contains two distinct vertices from the same part; we sketch this reduction in code after the open question below. One can argue that the size of the largest clique in G′ is exactly equal to the number of edges in the optimal solution of Multicolored Densest k-Subgraph on G. As a result, PIH implies hardness of approximation for k-Clique. Interestingly, however, it is not known whether the converse holds, and this remains an interesting open question:
Open Question 7. Does PIH hold if we assume that k-Clique is FPT inapproximable to within any constant factor?
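Returning to Lemma 1, the following sketch (ours) spells out the FGLSS reduction described above, mapping a Multicolored Densest k-Subgraph instance G to the graph G′:

# Our own sketch of the FGLSS reduction: edges of G become vertices of G',
# and two of them are joined unless together they pick two distinct
# vertices from the same part V_i.
from itertools import combinations

def fglss(edges, part_of):
    """edges: pairs (u, v) of G; part_of: maps each vertex to its part index."""
    def compatible(e1, e2):
        chosen = set(e1) | set(e2)
        parts = [part_of[w] for w in chosen]
        return len(parts) == len(set(parts))  # no part is picked twice
    V = [tuple(e) for e in edges]
    E = [(a, b) for a, b in combinations(V, 2) if compatible(a, b)]
    return V, E

# A clique in G' is a set of pairwise-compatible edges of G, i.e., a choice
# of at most one vertex per part inducing all those edges; hence the maximum
# clique size in G' equals the optimum of Multicolored Densest k-Subgraph.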
As demonstrated by the FGLSS reduction, once we have a gap, it is much easier to give a reduction to another hardness of approximation result, because we do not have to create the initial gap ourselves (as in the previous subsection) but only need to preserve or amplify the gap. Indeed, PIH turns out to be a pretty robust hypothesis that gives FPT inapproximability for many problems, including k-Clique, Directed Odd Cycle Traversal [48] and Strongly Connected Steiner Subgraph [50]. We remark that the current situation here is quite similar to the landscape of the classic theory of hardness of approximation before the PCP Theorem [21,22] was proved. There, Papadimitriou and Yannakakis introduced the complexity class MAX-SNP and showed that many optimization problems are hard (or complete) for this class [51]. Later, the PCP Theorem confirmed that these problems are NP-hard. In our case of FPT inapproximability, PIH seems to be a good analogue of MAX-SNP for problems in W[1] and, as mentioned before, PIH has been used as a starting point of many hardness of approximation results. However, there have not yet been many reverse reductions to PIH, and this is one of the motivations behind Open Question 7 above.
Despite the aforementioned applications of PIH, there are still quite a few questions that seem out of reach of PIH, such as whether there is an o(k)-factor FPT approximation for k-Clique, or questions related to running time lower bounds for approximation algorithms. On this front, a stronger conjecture called the Gap Exponential Time Hypothesis (Gap-ETH) is often used instead:
Hypothesis 2
(Gap Exponential Time Hypothesis (Gap-ETH) [52,53]). For some constants ε, δ > 0, there is no O(2^{δn})-time algorithm that can, given a 3CNF formula, distinguish between the following two cases:
  • (Completeness) The formula is satisfiable.
  • (Soundness) Any assignment violates more than an ε fraction of the clauses.
Here n denotes the number of clauses [54].
Clearly, Gap-ETH is a strengthening of ETH, which can be thought of in the above form but with ε = 1/n. Another interesting fact is that Gap-ETH is stronger than PIH. This can be shown via the standard reduction from 3SAT to k-Clique that establishes the N^{Ω(k)} lower bound for the latter. The reduction, due to Chen et al. [55,56], proceeds as follows. First, we partition the set of clauses C into C_1, …, C_k, each of size n/k. For each C_i, we create a part V_i in the new graph, where each vertex corresponds to a partial assignment (to the variables that appear in at least one clause of C_i) that satisfies all the clauses in C_i. Two vertices are connected if the corresponding partial assignments are consistent, i.e., they do not assign a variable to different values.
If there is an assignment that satisfies all the clauses, then clearly the restrictions of this assignment to each C_i correspond to k vertices from different parts that form a clique. On the other hand, it is also not hard to argue that, in the soundness case, the fraction of edges induced by any k vertices from different parts is at most 1 − Θ(ε). Thus, Gap-ETH implies PIH as claimed.
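A minimal sketch (ours) of this reduction is given below; the clause encoding, with each literal given as a pair (variable, is_positive), is our own choice:

# Our own sketch of the reduction from 3SAT to (multicolored) k-Clique:
# split the clauses into k groups, take the satisfying partial assignments
# of each group as one part, and join consistent partial assignments.
from itertools import product

def sat_to_clique(clauses, k):
    """Vertices are (group index, assignment index); returns (parts, edges)."""
    groups = [clauses[i::k] for i in range(k)]  # split clauses into k groups
    parts = []
    for grp in groups:
        grp_vars = sorted({v for clause in grp for (v, _) in clause})
        part = []
        for bits in product((False, True), repeat=len(grp_vars)):
            asg = dict(zip(grp_vars, bits))
            if all(any(asg[v] == pos for (v, pos) in c) for c in grp):
                part.append(asg)  # partial assignment satisfying the group
        parts.append(part)

    def consistent(a, b):  # no variable is given two different values
        return all(a[v] == b[v] for v in a.keys() & b.keys())

    edges = [((i, x), (j, y))
             for i in range(k) for j in range(i + 1, k)
             for x, a in enumerate(parts[i])
             for y, b in enumerate(parts[j]) if consistent(a, b)]
    return parts, edges

# Each group has n/k clauses and hence at most 3n/k variables, so each part
# has at most 2^{3n/k} vertices; this 2^{O(n/k)} instance size is what turns
# an N^{o(k)}-time algorithm for k-Clique into a 2^{o(n)} algorithm for 3SAT.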
Now that we have demonstrated that Gap-ETH is at least as strong as PIH, we may go further and ask how much more we can achieve from Gap-ETH, compared to PIH. The obvious consequence of Gap-ETH is that it can give explicit running time lower bounds for FPT hardness of approximation results. Perhaps more surprisingly, however, it can be used to improve the inapproximability ratio as well. The rest of this subsection is devoted to presenting some of these examples, together with brief overviews of how the proofs of these results work.

3.2.1. Strong Inapproximability of k-Clique

Our first example is the k-Clique problem. Obviously, we can approximate k-Clique to within a factor of k by just outputting any single vertex. It had long been asked whether an o(k)-approximation is achievable in FPT time. As we saw above, PIH implies that a constant-factor FPT approximation does not exist, but does not resolve this question. Nonetheless, assuming Gap-ETH, this question can be resolved in the negative:
Theorem 5
([39]). Assuming Gap-ETH, there is no o(k)-FPT-approximation for k-Clique.
The reduction used in [39] to prove the above inapproximability is just a simple modification of the reduction [55,56] that we saw above for k-Clique. Suppose that we would like to rule out a (k/g)-approximation, where g = g(k) is a function such that lim_{k→∞} g(k) = ∞. The only change in the reduction is that, instead of letting C_1, …, C_k be a partition of the set of clauses C, we let each C_i be a (random) set of Dn/g clauses, for some sufficiently large constant D > 0. The rest of the reduction works as before: for each C_i, we create a vertex corresponding to each partial assignment that satisfies all the clauses in C_i. Two vertices are joined by an edge if and only if they are consistent. This completes the description of the reduction.
To see that the reduction yields Theorem 5, first note that, if there is an assignment satisfying the CNF formula, then we can again take the restrictions of this assignment to C_1, …, C_k; these give k vertices that induce a clique in the graph.
On the other hand, suppose that every assignment violates more than an ε fraction of the clauses. We will argue that there is no clique of size g in the constructed graph. The only property we need from the subsets C_1, …, C_k is that the union of any g of them contains at least a (1 − ε) fraction of the clauses. It is not hard to show that this holds with high probability when D is chosen sufficiently large. Now, suppose for the sake of contradiction that there exists a clique of size g in the graph. Since the vertices corresponding to the same subset C_i form an independent set, these g vertices must come from different subsets; let us call them C_{i_1}, …, C_{i_g}. Because these vertices induce a clique, we can find a global assignment that is consistent with each of them. This global assignment satisfies all the clauses in C_{i_1} ∪ ⋯ ∪ C_{i_g}. However, C_{i_1} ∪ ⋯ ∪ C_{i_g} contains at least a (1 − ε) fraction of all clauses, which contradicts our assumption that every assignment violates more than an ε fraction of the clauses.
Now, if we could o(k)-approximate k-Clique in T(k)·N^{O(1)} time, then we could run this algorithm to distinguish the two cases in Gap-ETH in T(k)·(2^{D·n/g})^{O(1)} = 2^{o(n)} time, which violates Gap-ETH. This concludes our proof sketch. We end by remarking that the reduction may also be viewed as an instantiation of the randomized graph product [41,57,58], and that it can also be derandomized. We omit the details of the latter here; interested readers may refer to [39].

3.2.2. Strong Inapproximability of Multicolored Densest k-Subgraph and Label Cover

For our second example, we return to Multicolored Densest k-Subgraph. Recall that PIH asserts that this problem is hard to approximate to within some constant factor, and we have seen above that Gap-ETH also implies this. On the algorithmic front, however, only the trivial k-approximation is known: pick a vertex that has edges to as many parts as possible, then output that vertex together with one of its neighbors from each such part. It is hence natural to ask whether it is possible to beat this approximation ratio. Assuming Gap-ETH, this question has been answered in the negative, up to lower order terms:
Theorem 6
([59]). Assuming Gap-ETH, there is no k^{1−o(1)}-FPT-approximation for Multicolored Densest k-Subgraph.
An interesting aspect of the above result is that, even in the NP-hardness regime, no hardness of approximation of factor k^γ, for any constant γ > 0, is known. In fact, the problem is closely related to (and is a special case of) a well-known conjecture in the hardness of approximation community called the Sliding Scale Conjecture (SSC) [60,61,62]. (See [59] for more discussion on the relation between the two.) Thus, this is yet another instance where taking a parameterized complexity perspective helps advance knowledge even in the classical setting.
To prove Theorem 6, arguably the most natural reduction is the above reduction for Clique! Note that we now view the vertices corresponding to each subset C_i as forming a part V_i. The argument in the YES case is exactly the same as before: if the formula is satisfiable, then there is a (multicolored) k-clique. However, as the reader might have noticed, the argument in the NO case no longer goes through. In particular, even when the graph is quite dense (e.g., with half of all possible edges present), it may not contain any large clique at all, and hence it is unclear how to recover an assignment that satisfies a large fraction of the constraints.
This obstacle was overcome in [59] by proving an agreement testing theorem (i.e., a direct product theorem) of the following form: given k local functions f_1, …, f_k, where each f_i : S_i → {0,1} is a boolean function whose domain S_i is a subset of a universe U, if some (small) ζ fraction of the pairs agree [63] with each other, then we can find (i.e., "decode") a global function h : U → {0,1} that "approximately agrees" with roughly a ζ fraction of the local functions. The theorem in [59] applies when S_1, …, S_k are sets of size Ω(n).
Due to the technical nature of the definitions, we will not fully formalize the notions in the previous paragraph. Nonetheless, let us sketch how the agreement testing theorem is applied to prove the NO case of our reduction. Suppose for the sake of contradiction that the formula is not (1 − δ)-satisfiable and that there exists a k-subgraph with density ζ ≥ 1/k^{1−o(1)}. Recall that each selected vertex is simply a partial assignment to the variables of some subset of clauses C_i; we may view this as a function f_i : S_i → {0,1}, where S_i denotes the set of variables that appear in C_i. Here the universe U is the set of all variables. With this perspective, we can apply the agreement testing theorem to recover a global function h : U → {0,1} that "approximately agrees" with roughly ζk of the local functions. Notice that, in this context, h is simply a global assignment for the CNF formula. Previously, in the proof of the inapproximability of Clique, we had a global assignment that (perfectly) agrees with g local functions, from which we could conclude that this assignment satisfies all but a δ fraction of the clauses. It turns out that relaxing "perfect agreement" to "approximate agreement" does not affect the proof too much, and the latter still implies that h satisfies all but a δ fraction of the clauses, as desired.
As for the proof of the agreement testing theorem itself, we will not delve too much into detail here. We note, however, that the proof is based on looking at different "agreement levels" and the graphs associated with them. It turns out that such a graph has a certain transitivity property, which allows one to "decode" the global function h. This general approach of looking at different agreement levels and their transitivity properties is standard in the direct product/agreement testing literature [64,65,66]. The main challenge in [59] is to make the proof work for ζ as small as 1/k, which requires a new notion of transitivity.
To end this subsection, we remark that Multicolored Densest k-Subgraph is known as the 2-ary Constraint Satisfaction Problem (2-CSP) in the classical hardness of approximation community. The problem, and in particular its special case called Label Cover, serves as the starting point of almost all known NP-hardness of approximation results (see, e.g., [67,68,69]). The technique in [59] can also be used to show inapproximability for Label Cover with a strong running time lower bound of the form f(k)·N^{Ω(k)} [70]. Due to known reductions, this has numerous consequences. For example, it implies, assuming Gap-ETH, that approximating k-Even Set to within any factor less than two cannot be done in f(k)·N^{o(k)} time, considerably improving the lower bound mentioned in the previous subsection.

3.2.3. Inapproximability of k-Biclique and Densest k-Subgraph

While PIH (or, equivalently, the hardness of Multicolored Densest k-Subgraph) can serve as a starting point for hardness of approximation of many problems, there are some problems for which not even a constant factor hardness is known under PIH, yet strong inapproximability results can be obtained via Gap-ETH. We will see two examples of this here.
First is the k-Biclique problem. Recall that in this problem we are given a bipartite graph and would like to determine whether it contains a complete bipartite subgraph with k vertices on each side. As stated in the previous subsection, despite its close relationship to k-Clique, k-Biclique has turned out to be a much more challenging problem to prove intractability for, and even its W[1]-hardness was established only recently [23]. This difficulty is corroborated by the problem's approximability status in the classical (non-parameterized) regime: while Clique has long been known to be NP-hard to approximate to within an N^{1−o(1)} factor [71], Biclique is not known to be NP-hard to approximate to within even, say, a factor of 1.01 [72,73,74,75]. With this in mind, it is perhaps not a surprise that k-Biclique is not known to be hard to approximate under PIH. Nonetheless, assuming Gap-ETH, we can in fact prove a very strong hardness of approximation for the problem:
Theorem 7
([39]). Assuming Gap-ETH, there is no o ( k ) -FPT-approximation for k-Biclique.
Note that, similar to k-Clique, a k-approximation for Biclique can be easily achieved by outputting a single edge. Hence, in terms of the inapproximability ratio, the above result is tight.
Due to its technicality, we only sketch an outline of the proof of Theorem 7 here. The reduction starts by constructing a graph similar (but not identical) to the one for k-Clique described above. The main properties of this graph are that (i) in the YES case, where the formula is satisfiable, the graph contains many copies of a k-biclique, and (ii) in the NO case, where the formula is not even (1 − δ)-satisfiable, the graph contains few copies of a g-biclique. The construction and these properties were in fact established in [76]. In [39], it was observed that, if we subsample the graph by keeping each vertex independently with probability p, for an appropriate value of p, then (i) ensures that at least one k-biclique survives the subsampling, whereas (ii) ensures that no g-biclique survives. This gives the claimed result.
We remark that, while Theorem 7 seems to resolve the approximability of k-Biclique, one aspect is still not completely understood: the running time lower bound. To demonstrate this, recall that, for k-Clique, the reduction giving hardness of k-vs-g Clique has size 2^{O(n/g)}; this means that we have a running time lower bound of f(k)·N^{Ω(g)} for the problem. This is of course tight, because we can determine whether a graph has a g-clique in N^{O(g)} time. However, for k-Biclique, the known reduction giving hardness of k-vs-g Biclique has size 2^{O(n/√g)}. This results in a running time lower bound of only f(k)·N^{Ω(√g)}. Specifically, for the most basic setting of constant factor approximation, Theorem 7 only rules out algorithms with running time f(k)·N^{o(√k)}. Hence, an immediate question here is:
Open Question 8. Is there an f(k)·N^{o(k)}-time algorithm that approximates k-Biclique to within a constant factor?
To put things into perspective, we note that, even for exact algorithms for k-Biclique, the best running time lower bound is still f(k)·N^{Ω(√k)} [23] (under any reasonable complexity assumption). This means that, to answer Open Question 8, one first has to improve the best known running time lower bound for exact algorithms, which would already be a valuable contribution to the understanding of the problem.
Let us now point out an interesting consequence of Theorem 7 for the Densest k-Subgraph problem. This is the "uncolored" version of the Multicolored Densest k-Subgraph problem defined above: there are no parts V_1, …, V_k, and we can pick any k vertices of the input graph G, with the objective of maximizing the number of induced edges. The approximability status of Densest k-Subgraph very much mirrors that of k-Biclique. Namely, in the parameterized setting, PIH is not known to imply hardness of approximation for Densest k-Subgraph. Furthermore, in the classical (non-parameterized) setting, Densest k-Subgraph is not known [72,76,77,78,79] to be NP-hard to approximate even to within a factor of, say, 1.01. Despite this, Gap-ETH does give a strong inapproximability result for Densest k-Subgraph, as stated below:
Theorem 8
([39]). Assuming Gap-ETH, there is no k^{o(1)}-FPT-approximation for Densest k-Subgraph.
In fact, the above result is a simple consequence of Theorem 7. To see this, recall the following classic result in extremal graph theory, commonly referred to as the Kővári–Sós–Turán (KST) Theorem [80]: any k-vertex graph that does not contain a g-biclique as a subgraph has density at most O(k^{−1/g}). Now, the hardness for k-Biclique from Theorem 7 tells us that there is no FPT time algorithm that can distinguish a graph containing a k-biclique from one that does not even contain a g-biclique, for any g = ω(1). When the graph contains a k-biclique, we have a k-vertex subgraph with density (at least) 1/2. On the other hand, when the graph does not even contain a g-biclique, the KST Theorem ensures that every k-vertex subgraph has density at most O(k^{−1/g}). This gives a gap of Ω(k^{1/g}) for approximating Densest k-Subgraph and finishes the proof sketch of Theorem 8.
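Spelled out, the two cases of the proof sketch are separated by the following density gap (a back-of-the-envelope calculation; the 1/2 counts only the edges of the k-biclique itself):
density(YES)/density(NO) ≥ (1/2)/O(k^{−1/g}) = Ω(k^{1/g}).
Since g may be chosen as any ω(1) function, the ratio k^{1/g} exceeds any prescribed k^{o(1)} bound when g grows sufficiently slowly.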
Unfortunately, Theorem 8 does not yet resolve the FPT approximability of Densest k-Subgraph. In particular, while the hardness is only of the form k^{o(1)}, the best known algorithm (which is the same as that for the multicolored version discussed above) only gives an approximation ratio of k. Hence, we may ask whether this can be improved:
Open Question 9. Is there an o(k)-FPT-approximation algorithm for Densest k-Subgraph?
This should be contrasted with Theorem 6, which essentially resolves the FPT approximability of Multicolored Densest k-Subgraph (up to lower order terms).

4. Algorithms

In this section we survey some of the developments on the algorithmic side in recent years. The organization of this section is according to problem types. We begin with basic packing and covering problems in Section 4.1 and Section 4.2. We then move on to clustering in Section 4.3, network design in Section 4.4, and cut problems in Section 4.5. In Section 4.6 we present width reduction problems.
The algorithms in the above-mentioned subsections compute approximate solutions to problems that are W[1]-hard, so approximation is necessary even when using parameterization. However, one may also aim to obtain faster parameterized runtimes than those of the known FPT algorithms, by sacrificing solution quality. We present some results of this type in Section 4.7.

4.1. Packing Problems

For a packing problem, the task is to select as many combinatorial objects of some mathematical structure (such as a graph or a set system) as possible, under constraints that forbid picking certain objects if others are picked. A basic example is the Independent Set problem, in which a maximum-sized set of pairwise non-adjacent vertices of a graph needs to be found.

4.1.1. Independent Set

The Independent Set problem is notoriously hard in general. Not only is there no polynomial time n^{1−ε}-approximation algorithm [81] for any constant ε > 0, unless P = NP, but also, under Gap-ETH, no g(k)-approximation can be computed in f(k)·n^{O(1)} time [39] for any computable functions f and g, where k is the solution size. On the other hand, for planar graphs a PTAS exists [82]. Hence a natural question is how the problem behaves for graphs that are "close" to being planar.
One way to generalize planar graphs is to consider minor-free graphs, because planar graphs are exactly those excluding K 5 and K 3 , 3 as minors. When parameterizing by the size of an excluded minor, the Independent Set problem is paraNP-hard, since the problem is NP-hard on planar graphs [83]. Nevertheless a PAS can be obtained for this parameter [84].
Theorem 9
([84,85]). Let H be a fixed graph. For H-minor-free graphs, Independent Set admits a (1 + ε)-approximation algorithm that runs in f(H, ε)·n^{O(1)} time for some function f.
This result is part of the large framework of “bidimensionality theory” where any graph in an appropriate minor-closed class has treewidth bounded above in terms of the problem’s solution value, typically by the square root of that value. These properties lead to efficient, often subexponential, fixed-parameter algorithms, as well as polynomial-time approximation schemes, for bidimensional problems in many minor-closed graph classes. The bidimensionality theory is based on algorithmic and combinatorial extensions to parts of the Robertson–Seymour Graph Minor Theory, in particular initiating a parallel theory of graph contractions. The foundation of this work is the topological theory of drawings of graphs on surfaces. We refer the reader to the survey of [86] and more recent papers [85,87,88].
A different way to generalize planar graphs is to consider a planar deletion set, i.e., a set of vertices in the input graph whose removal leaves a planar graph. Taking the size of such a set as a parameter, Independent Set is again paraNP-hard [83]. However, by first finding a minimum-sized planar deletion set, then guessing the intersection of this set with an optimum solution to Independent Set, and finally using the PTAS for planar graphs [82] on the rest, a PAS can be obtained parameterized by the size of a planar deletion set [8]; a sketch follows the theorem below.
Theorem 10
([8]). For the Independent Set problem a (1 + ε)-approximation can be computed in 2^k·n^{O(1/ε)} time for any ε > 0, where k is the size of a minimum planar deletion set.
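A minimal sketch of this algorithm, assuming a hypothetical oracle planar_ptas(H, eps) for the PTAS of [82] and a precomputed planar deletion set D (both assumptions for illustration):

```python
from itertools import combinations

def independent_set_pas(G, D, planar_ptas, eps):
    """(1+eps)-approximate Independent Set given a planar deletion set D.

    G: dict mapping each vertex to its set of neighbors.
    Guesses S = D intersect OPT, removes D and N(S), then runs the
    planar PTAS on the rest.  Time: 2^|D| PTAS calls, i.e. 2^k n^{O(1/eps)}.
    """
    D = set(D)
    best = set()
    for r in range(len(D) + 1):
        for S in map(set, combinations(D, r)):
            # Only independent subsets of D can be D intersect OPT.
            if any(v in G[u] for u in S for v in S):
                continue
            # Delete D and all neighbors of S; the remainder is planar.
            gone = D | {v for u in S for v in G[u]}
            H = {v: G[v] - gone for v in G if v not in gone}
            candidate = S | planar_ptas(H, eps)
            if len(candidate) > len(best):
                best = candidate
    return best
```

No edge can run between S and the PTAS's solution, since all neighbors of S were removed, so each returned candidate is indeed independent in G.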
Ideas using linear programming allow us to generalize and handle larger noise at the expense of worse dependence on ε . Bansal et al. [89] showed that given a graph obtained by adding δ n edges to some planar graph, one can compute a ( 1 + O ( ε + δ ) ) -approximate independent set in time n O ( 1 / ε 4 ) , which is faster than the 2 k n O ( 1 / ε ) running time of Theorem 10 for large k = δ n . Magen and Moharrami [90] showed that for every graph H and ε > 0 , given a graph G = ( V , E ) that can be made H-minor-free after at most δ n deletions and additions of vertices or edges, the size of the maximum independent set can be approximately computed within a factor ( 1 + ε + O ( δ | H | log | H | ) ) in time n f ( ε , H ) . Note that this algorithm does not find an independent set. Recently, Demaine et al. [91] presented a general framework to obtain better approximation algorithms for various problems including Independent Set and Chromatic Number, when the input graph is close to well-structured graphs (e.g., bounded degeneracy, degree, or treewidth).
It is also worth noting here that the Independent Set problem can be generalized to the d-Scattered Set problem, where we are given an (edge-weighted) graph and are asked to select at least k vertices such that the distance between any pair of them is at least d [92]. Recently, some lower and upper bounds on the approximability of the d-Scattered Set problem were provided in [93].
A special case of Independent Set is the Independent Set of Rectangles problem, where a set of axis-parallel rectangles is given in the two-dimensional plane, and the task is to find a maximum sized subset of non-intersecting rectangles. This is a special case, since pairwise intersections of rectangles can be encoded by edges in a graph for which the vertices are the rectangles. Parameterized by the solution size, the problem is W[1]-hard [94], and while a QPTAS is known [95], it is a challenging open question whether a PTAS exists. It was shown [96] however that both a PAS and a PSAKS exist for Independent Set of Rectangles parameterized by the solution size, even for the weighted version.
The runtime of this PAS is f(k, ε)·n^{g(ε)} for some functions f and g, where k is the solution size. Note that the dependence on ε in the degree of the polynomial factor of this algorithm cannot be removed, unless FPT = W[1], since any efficient PAS with runtime f(k, ε)·n^{O(1)} could be used to compute the optimum solution in FPT time by setting ε = 1/(k+1) in the W[1]-hard unweighted version of the problem [94]. However, in the so-called shrinking model, an efficient PAS can be obtained [97] for Independent Set of Rectangles. The parameter in this case is a factor 0 < δ < 1 by which every rectangle is shrunk before computing an approximate solution, which is then compared to the optimum solution without shrinking.
Theorem 11
([96,97]). For the Independent Set of Rectangles problem a (1 + ε)-approximation can be computed in k^{O(k/ε^8)}·n^{O(1/ε^8)} time for any ε > 0, where k is the size of the optimum solution, or in f(δ, ε)·n^{O(1)} time for some computable function f and any ε > 0 and 0 < δ < 1, where δ is the shrinking factor. Moreover, a (1 + ε)-approximate kernel with k^{O(1/ε^8)} rectangles can be computed in polynomial time.
Another special case of Independent Set is the Independent Set on Unit Disk Graphs problem, where, given a set of n unit disks in the Euclidean plane, the task is to determine whether there exists a set of k non-intersecting disks. The problem is NP-hard [98] but admits a PTAS [99]. Marx [94] showed that, when parameterized by the solution size, the problem is W[1]-hard; this also rules out an EPTAS (and even an efficient PAS) for the problem, assuming FPT ≠ W[1]. On the other hand, in [100] the authors give an FPT algorithm for a special case of Independent Set on Unit Disk Graphs in which there is a lower bound on the distance between any pair of centers.

4.1.2. Vertex Coloring

A problem related to Independent Set is the Vertex Coloring problem, for which the vertices need to be colored with integer values, such that no two adjacent vertices have the same color (which means that each color class forms an independent set in the graph). The task is to minimize the number of used colors. For planar graphs the problem has a polynomial time 4 / 3 -approximation algorithm [8] via the celebrated Four Color Theorem, and a better approximation is not possible in polynomial time [101]. Using this algorithm, a 7 / 3 -approximation can be computed in FPT time when parameterizing by the size of a planar deletion set [8]. When generalizing planar graphs by excluding any fixed minor, and taking its size as the parameter, a 2-approximation can be computed in FPT time [102]. Due to the NP-hardness for planar graphs [101], neither of these two parameterizations admits a PAS, unless P = NP.
Theorem 12
([8,102]). For the Vertex Coloring problem
  • a 7/3-approximation can be computed in k^k·n^{O(1)} time, where k is the size of a minimum planar deletion set, and
  • a 2-approximation can be computed in f ( k ) n O ( 1 ) time for some function f, where k is the size of an excluded minor of the input graph.
One way to generalize Vertex Coloring is to view each color class as inducing a graph of maximum degree 0. The Defective Coloring problem [103] correspondingly asks for a coloring of the vertices such that each color class induces a graph of maximum degree at most Δ, for some given Δ. The aim again is to minimize the number of used colors. In contrast to Vertex Coloring, the Defective Coloring problem is W[1]-hard [104] parameterized by treewidth. This parameter measures how "tree-like" a graph is, and is defined as follows.
Definition 1.
A tree decomposition of a graph G = (V, E) is a tree T for which every node is associated with a bag X ⊆ V, such that the following properties hold:
  • the union of all bags is the vertex set V of G,
  • for every edge ( u , v ) of G, there is a node of T for which the associated bag contains u and v, and
  • for every vertex u of G, all nodes of T for which the associated bags contain u, induce a connected subtree of T.
The width of a tree decomposition is the size of the largest bag minus 1 (which implies that a tree has a decomposition of width 1 where each bag contains the endpoints of one edge). The treewidth of a graph is the smallest width of any of its tree decompositions.
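To make Definition 1 concrete, the following toy checker (our own illustration, not from the cited works) verifies the three properties and returns the width:

```python
def tree_decomposition_width(G, T, bags):
    """Return the width of a claimed tree decomposition, or None if
    one of the three properties of Definition 1 fails.

    G: dict vertex -> set of neighbors; T: dict node -> set of adjacent
    nodes (a tree); bags: dict node -> set of vertices of G.
    """
    # (1) The union of all bags is the vertex set of G.
    if set().union(*bags.values()) != set(G):
        return None
    # (2) Every edge of G lies inside some bag.
    for u in G:
        for v in G[u]:
            if not any(u in b and v in b for b in bags.values()):
                return None
    # (3) For each vertex, the nodes whose bags contain it induce a
    # connected subtree of T (checked by a simple traversal).
    for v in set(G):
        nodes = {t for t in T if v in bags[t]}
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            t = stack.pop()
            for s in T[t] & nodes:
                if s not in seen:
                    seen.add(s)
                    stack.append(s)
        if seen != nodes:
            return None
    return max(len(b) for b in bags.values()) - 1  # size of largest bag minus 1
```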
Treewidth is a fundamental graph parameter and will be discussed in more detail in Section 4.6.1. However, it is worth mentioning here that Vertex Coloring is FPT when parameterized by treewidth.
The strong polynomial-time approximation lower bound of n^{1−ε} for Vertex Coloring [81] naturally carries over to the more general Defective Coloring problem. A much improved approximation factor of 2 is possible, though, in FPT time if the parameter is the treewidth [104]. It can be shown, however, that a PAS is not possible in this case, as there is no (3/2 − ε)-approximation algorithm for any ε > 0 parameterized by the treewidth [104], unless FPT = W[1]. A natural question is whether the degree bound Δ of Defective Coloring can be approximated instead of the number of colors. For this setting, a bicriteria PAS parameterized by the treewidth exists [104], which computes a solution with the optimum number of colors in which each color class induces a graph of maximum degree at most (1 + ε)Δ.
Theorem 13
([104]). For the Defective Coloring problem, given a tree decomposition of width k of the input graph,
  • a solution with the optimum number of colors where each color class induces a graph of maximum degree (1 + ε)Δ can be computed in (k/ε)^{O(k)}·n^{O(1)} time for any ε > 0,
  • a 2-approximation (of the optimum number of colors) can be computed in k^{O(k)}·n^{O(1)} time, but
  • no (3/2 − ε)-approximation (of the optimum number of colors) can be computed in f(k)·n^{O(1)} time for any ε > 0 and computable function f, unless FPT = W[1].
The algorithms of the previous theorem build on the techniques of [105], using approximate addition trees in combination with the dynamic programs that yield XP algorithms for these problems. This technique can be applied to various problems (cf. Section 4.2), including a different generalization of Vertex Coloring called Equitable Coloring. Here the aim is to color the vertices of a graph with as few colors as possible, such that every two adjacent vertices receive different colors and all color classes contain the same number of vertices. This generalizes Vertex Coloring, since one may add a sufficiently large independent set (i.e., a set of isolated vertices) to a graph such that the number of colors needed for an optimum Vertex Coloring solution is the same as for an optimum Equitable Coloring solution.
The Equitable Coloring problem is W[1]-hard even when combining the number of colors and the treewidth of the graph as parameters [106]. On the other hand, a PAS exists [105] if the parameter is the cliquewidth of the input graph. This is a weaker parameter than treewidth, as the cliquewidth of a graph is bounded by a function of its treewidth. However, while bounded treewidth graphs are sparse, bounded cliquewidth also allows for dense graphs (such as complete graphs). Formally, a graph of cliquewidth ℓ can be constructed using the following recursive operations, which use labels on the vertices:
  • Introduce(x): create a graph containing a singleton vertex labelled x ∈ {1, …, ℓ}.
  • Union(G_1, G_2): return the disjoint union of two vertex-labelled graphs G_1 and G_2.
  • Join(G, x, y): add to the vertex-labelled graph G all edges connecting a vertex of label x with a vertex of label y.
  • Rename(G, x, y): change the label of every vertex of G with label x to y ∈ {1, …, ℓ}.
A cliquewidth expression with ℓ labels is a recursion tree describing how to construct a graph using the above four operations with labels from the set {1, …, ℓ}. Notice that the cliquewidth of a complete graph is two (see the sketch below), and therefore there are graphs of bounded cliquewidth but unbounded treewidth. As stated earlier, the cliquewidth of a graph is bounded above exponentially in its treewidth, and this dependence is tight for some graph families [107].
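As an illustration, the following sketch (with our own encoding of vertex-labelled graphs as edge-set/label-map pairs) builds K_n using the four operations and only the labels {1, 2}:

```python
def complete_graph_expression(n):
    """Build K_n bottom-up with the four cliquewidth operations, using
    only labels {1, 2}: complete graphs have cliquewidth two.
    A labelled graph is a pair (edges, labels) with labels: vertex -> label.
    """
    def introduce(v, x):
        return (set(), {v: x})

    def union(g1, g2):
        return (g1[0] | g2[0], {**g1[1], **g2[1]})

    def join(g, x, y):
        edges, labels = g
        new = {(u, v) for u in labels for v in labels
               if u < v and {labels[u], labels[v]} == {x, y}}
        return (edges | new, labels)

    def rename(g, x, y):
        edges, labels = g
        return (edges, {v: (y if l == x else l) for v, l in labels.items()})

    g = introduce(0, 1)                  # first vertex, label 1
    for v in range(1, n):
        g = union(g, introduce(v, 2))    # new vertex with label 2
        g = join(g, 1, 2)                # connect it to all label-1 vertices
        g = rename(g, 2, 1)              # merge the labels again
    return g
```

Each round introduces one vertex, joins it to everything built so far, and renames, so only two labels are ever needed.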
The PAS for Equitable Coloring computes a coloring using at most k colors such that the ratio between the sizes of any two color classes is at most 1 + ε. In this sense it is a bicriteria approximation algorithm.
Theorem 14
([105]). For the Equitable Coloring problem, given a cliquewidth expression with ℓ labels for the input graph, a solution with the optimum number of colors, where the ratio between the sizes of any two color classes is at most 1 + ε, can be computed in (kℓ/ε)^{O(kℓ)}·n^{O(1)} time [108,109] for any ε > 0, where k is the optimum number of colors.
A variant of Vertex Coloring is the Min Sum Coloring problem, where, instead of minimizing the number of colors, the aim is to minimize the sum of (integer) colors, where the sum is taken over all vertices. This problem is FPT parameterized by the treewidth [110], but the related Min Sum Edge Coloring problem is NP-hard [111] on graphs of treewidth 2 (while being polynomial time solvable on trees [112]). For this problem the edges need to be colored with integer values, so that no two edges sharing a vertex have the same color, and the aim again is to minimize the total sum of colors. Despite being APX-hard [111] and also paraNP-hard for parameter treewidth, Min Sum Edge Coloring admits a PAS for this parameter [113].
Theorem 15
([113]). For the Min Sum Edge Coloring problem a ( 1 + ε ) -approximation can be computed in f ( k , ε ) n time for any ε > 0 , where k is the treewidth of the input graph.

4.1.3. Subgraph Packing

A special family of packing problems is obtained from subgraph packing. Let H be a fixed "pattern" graph. The H-Packing problem asks, given a "host" graph G, to find a maximum number of vertex-disjoint copies of H. One can also let H be a family of graphs and ask the analogous problem. There is a further choice of whether each copy of H is required to be an induced subgraph or just a subgraph; we focus on the latter case here.
When H is a single graph with k vertices, a simple greedy algorithm, which repeatedly finds an arbitrary copy of H and adds it to the packing, guarantees a k-approximation in time f(H, n)·n, where f(H, n) denotes the time needed to find a copy of H in an n-vertex graph. Following a general result for k-Set Packing, a ((k + 1 + ε)/3)-approximation algorithm that runs in polynomial time for fixed k, ε exists [114]. When H is 2-vertex-connected or a star graph, even for fixed k, it is NP-hard to approximate the problem better than a factor of Ω(k/polylog(k)) [115]. There is no known connected H for which an FPT (or even XP) algorithm achieves a k^{1−δ}-approximation for some δ > 0; in particular, the parameterized approximability of k-Path Packing is wide open. It is conceivable that k-Path Packing admits a parameterized o(k)-approximation algorithm, given the O(log k)-approximation algorithm for k-Path Deletion [116] and an improved kernel for Induced P_3 Packing [117].
When H is the family of all cycles, the problem becomes the Vertex Cycle Packing problem, for which the largest number of vertex-disjoint cycles of a graph needs to be found. No polynomial time O(log^{1/2−ε} n)-approximation is possible for this problem [118] for any ε > 0, unless every problem in NP can be solved in randomized quasi-polynomial time. Furthermore, despite being FPT [119] parameterized by the solution size, Vertex Cycle Packing does not admit any polynomial-sized exact kernel for this parameter [120], unless NP ⊆ coNP/poly. Nevertheless, a PSAKS can be found [18].
Theorem 16
([18]). For the Vertex Cycle Packing problem, a (1 + ε)-approximate kernel of size k^{O(1/(ε log ε))} can be computed in polynomial time, where k is the solution size.

4.1.4. Scheduling

Yet another packing problem on graphs, which has applications in scheduling and bandwidth allocation, is the Unsplittable Flow on a Path problem. Here a path with edge capacities is given, together with a set of tasks, each of which specifies a start and an end vertex on the path and a demand value. The goal is to find the largest number of tasks such that, for each edge on the path, the total demand of the selected tasks whose start and end vertices enclose the edge does not exceed the capacity of the edge. This problem admits a QPTAS [121], but it remains a challenging open question whether a PTAS exists. When parameterizing by the solution size, Unsplittable Flow on a Path is W[1]-hard [122]. However, a PAS exists [122] for this parameter.
Theorem 17
([122]). For the Unsplittable Flow on a Path problem a (1 + ε)-approximation can be computed in 2^{O(k log k)}·n^{g(ε)} time for some computable function g and any ε > 0, where k is the solution size.
Another scheduling problem is Flow Time Scheduling, for which a set of jobs is given, each specified by a processing time, a release date, and a weight. The jobs need to be scheduled on a given number of machines, such that no job is processed before its release date and each job runs on only one machine at a time. Given a schedule, the flow time of a job is the weighted difference between its completion time and its release date, and the task for the Flow Time Scheduling problem is to minimize the sum of all flow times. Two types of schedules are distinguished: in a preemptive schedule a job may be interrupted on one machine and then resumed on another, while in a non-preemptive schedule every job, once started, runs on one machine until its completion. If preemptive schedules are allowed, Flow Time Scheduling has no polynomial time O(log^{1−ε} p)-approximation algorithm [123], unless P = NP, where p is the maximum processing time. For the more restrictive non-preemptive setting, no O(n^{1/2−ε})-approximation can be computed in polynomial time [124], unless P = NP, where n is the number of jobs. The latter lower bound is in fact valid even for a single machine, and thus parameterizing Flow Time Scheduling by the number of machines will not yield any better approximation ratio in this setting. A natural parameter for Flow Time Scheduling is the maximum over all processing times and weights of the given jobs. It is not known whether the problem is FPT or W[1]-hard for this parameter. However, when combining this parameter with the number of machines, a PAS can be obtained [125], despite the strong polynomial time approximation lower bounds.
Theorem 18
([125]). For the Flow Time Scheduling problem a (1 + ε)-approximation can be computed in (mk)^{O(mk^3/ε)}·n^{O(1)} time in the preemptive setting, and in (mk/ε)^{O(mk^5)}·n^{O(1)} time in the non-preemptive setting, for any ε > 0, where m is the number of machines and k is an upper bound on every processing time and weight.

4.2. Covering Problems

For a covering problem the task is to select a set of k combinatorial objects in a mathematical structure, such as a graph or set system (i.e., hypergraph), under constraints that demand that certain other objects be intersected/covered. A basic example is the Set Cover problem, where we are given a set system, which is simply a collection of subsets of a universe, and the goal is to determine whether there are k subsets whose union covers the whole universe.
There are two ways to define optimization versions of covering problems. First, we may view the covering demands as strict constraints and aim to find a solution that minimizes the cost while covering all objects (i.e., relaxing the size-k constraint); this results in a minimization problem. Second, we may view the size constraint as strict and aim to find a solution that covers as many objects as possible; this results in a maximization problem. We divide our discussion mainly into two parts, based on these two types of optimization problems. In Section 4.2.3, we discuss problems related to covering that fall into neither category.

4.2.1. Minimization Variants

We start our discussion with the minimization variants. For brevity, we overload the problem name and use the same name for the minimization variant (e.g., we use Set Cover instead of the more cumbersome Min Set Cover). Later on, we will use different names for the maximization versions; hence, there will be no confusion.
Set Cover, Dominating Set and Vertex Cover. As discussed in detail in Section 3.1.2, Set Cover, and equivalently Dominating Set, are very hard to approximate in the general case. Hence, special cases where some constraints are placed on the set system are often considered. Arguably the most well-studied special case of Set Cover is the Vertex Cover problem, in which the set system is a graph. That is, we would like to find the smallest set of vertices such that every edge has at least one endpoint in the selected set (i.e., the edge is "covered"). Vertex Cover is well known to be FPT [126] and to admit a linear-size kernel [127]. The generalization of Vertex Cover to d-uniform hypergraphs, where the input is a hypergraph and the goal is to find the smallest set of vertices such that every hyperedge contains at least one chosen vertex, is also often referred to as d-Hitting Set in the parameterized complexity community. However, we will mostly use the nomenclature Vertex Cover on d-uniform hypergraphs, because many algorithms generalize well from Vertex Cover on graphs to hypergraphs. Indeed, branching algorithms for Vertex Cover on graphs can easily be generalized to hypergraphs, and hence the latter problem is also FPT. Polynomial-size kernels are also known for Vertex Cover on d-uniform hypergraphs [128].
While Vertex Cover, both on graphs and on d-uniform hypergraphs, is already tractable, approximation can still help make algorithms even faster and kernels even smaller. We defer this discussion to Section 4.7.
Connected Vertex Cover. A popular variant of Vertex Cover is the Connected Vertex Cover problem, for which the computed solution is additionally required to induce a connected subgraph of the input. Just as Vertex Cover, the problem is FPT [129]. However, unlike Vertex Cover, Connected Vertex Cover does not admit a polynomial-size kernel [130], unless NP ⊆ coNP/poly. In spite of this, a PSAKS for Connected Vertex Cover exists:
Theorem 19
([18]). For any ε > 0, a (1 + ε)-approximate kernel with k^{O(1/ε)} vertices can be computed in polynomial time.
The idea behind [18] is quite neat, and we sketch it here. There are two reduction rules: (i) if there exists a vertex of degree more than Δ := 1/ε, just "select" it, and (ii) if some vertex has more than k false twins, i.e., vertices with the same set of neighbors, then simply remove it from the graph. The important observation for (i) is that, since we have to pick either the vertex itself or all of its more than Δ neighbors anyway, we may as well select the vertex even in the second case, because doing so affects the size of the solution by a factor of at most (1 + Δ)/Δ = 1 + ε. For (ii), it is not hard to see that we either select one of the false twins or all of them; hence, if a vertex has more than k false twins, it surely cannot be in the optimal solution. Roughly speaking, these two observations show that this yields a (1 + ε)-approximate kernel. Of course, in the actual proof, "selecting" a vertex needs to be defined more carefully, but we will not do so here. Nonetheless, imagine the final step, when the two reduction rules can no longer be applied. Essentially, we end up with a graph in which some (fewer than (1 + ε)k) vertices are marked as "selected" and all remaining vertices have degree at most Δ. Now, every vertex is either inside the solution, or all of its neighbors are. There are at most k vertices of the first kind. For the second kind, note that these vertices have degree at most Δ and at most k false twins each, which means that there are at most k^{1+Δ} = k^{1+1/ε} such vertices. In other words, the kernel has size k^{O(1/ε)}, as desired. These are the main ideas of the proof; let us stress again that the actual proof is more involved, since we did not define rule (i) formally.
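The following loose sketch mimics the two rules (rule (i) is simplified here; the actual kernelization of [18] implements "selecting" via a more careful annotation of the instance):

```python
def cvc_reduction_rules(G, k, eps):
    """Apply the two rules of the Connected Vertex Cover PSAKS sketch.

    G: dict vertex -> set of neighbors (modified in place).
    Returns the reduced graph and the set of 'selected' vertices.
    """
    delta = int(1 / eps)
    selected = set()
    changed = True
    while changed:
        changed = False
        # Rule (i): a vertex of degree > 1/eps may as well be selected,
        # losing at most a (1 + eps) factor in the solution size.
        for v in list(G):
            if v not in selected and len(G[v]) > delta:
                selected.add(v)
                changed = True
        # Rule (ii): a vertex with > k false twins (same neighborhood)
        # cannot be in an optimal solution, so delete the surplus ones.
        twins = {}
        for v in list(G):
            if v not in selected:
                twins.setdefault(frozenset(G[v]), []).append(v)
        for group in twins.values():
            for v in group[k + 1:]:
                for u in G[v]:
                    G[u].discard(v)
                del G[v]
                changed = True
    return G, selected
```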
Recently, Krithika et al. [131] considered the following structural parameters beyond the solution size: split deletion set, clique cover, and cluster deletion set. In each case, the authors provide a PSAKS for the problem. We will not fully define these parameters here, but we note that the first parameter (the size of a split deletion set) is always no larger than the size of a minimum vertex cover of the graph. In another very recent work, Majumdar et al. [132] give a PSAKS for each of the following parameters, each of which is always no larger than the solution size: the deletion distance of the input graph to the class of cographs, to the class of bounded treewidth graphs, and to the class of all chordal graphs. Hence, these results may be viewed as generalizations of the aforementioned PSAKS from [18].
Connected Dominating Set. Similarly to Connected Vertex Cover, the Connected Dominating Set problem is the variant of Dominating Set for which the solution additionally needs to induce a connected subgraph of the input graph. When placing no restriction on the input graph, the problem is as hard to approximate as Dominating Set. However, for some special classes of graphs, a PSAKS or bi-PSAKS [133] is known; these classes include graphs of bounded expansion, nowhere dense graphs, and d-biclique-free graphs [134].
Covering Problems parameterized by Graph Width Parameters. Several works in the literature also study the approximability of variants of Vertex Cover and Dominating Set parameterized by graph width parameters [105,135]. These variants include:
  • Power Vertex Cover (PVC). Here, along with the input graph, each edge has an integer demand, and we have to assign (power) values to the vertices such that each edge has at least one endpoint with a value at least its demand. The goal is to minimize the total assigned power. Note that this is a generalization of Vertex Cover, where edges have unit demands.
  • Capacitated Vertex Cover (CVC). The problem is similar to Vertex Cover, except that each vertex has a capacity which limits the number of edges that it can cover. Once again, Vertex Cover is the special case of CVC where each vertex's capacity is ∞.
  • Capacitated Dominating Set (CDS). Analogous to CVC, this is a generalization of Dominating Set where each vertex has a capacity and it can only cover/dominate at most that many other vertices.
All problems above are FPT under the standard parameter (i.e., the optimum) [135,136]. However, when parameterizing by the treewidth [137], all three problems become W[1]-hard [135]. (This is in contrast to Vertex Cover and Dominating Set, both of which admit straightforward dynamic programming FPT algorithms parameterized by treewidth.) Despite this, good FPT approximation algorithms are known for these problems. In particular, a PAS is known for PVC [135]. For CVC and CDS, a bicriteria PAS exists [105], which in this case computes a solution of size at most the optimum, such that no vertex capacity is violated by more than a factor of 1 + ε.
The approximation algorithms for CVC and CDS are the result of a more general approach of Lampis [105]. The idea is to execute an "approximate" version of the dynamic programming over a tree decomposition instead of the exact version; this helps reduce the running time from n^{O(w)} to (log n/ε)^{O(w)}, which is FPT. The approach is quite flexible: several approximation algorithms for graph problems, including covering problems, can be obtained via this method, and it also applies to cliquewidth. Please refer to [105] for more details.
Packing-Covering Duality and Erdős–Pósa Property. Given a set system (V, C), where V is the universe and C = {C_1, …, C_m} is a collection of subsets of V, Hitting Set is the problem of computing the smallest S ⊆ V that intersects every C_i, and Set Packing is the problem of computing the largest subcollection C′ ⊆ C such that no two sets in C′ intersect. It can be observed that the optimal value for Hitting Set is at least the optimal value for Set Packing, while the standard LP relaxations for them (the covering LP and the packing LP) have the same optimal value by strong duality. Studying the other direction of the inequality (often called the packing-covering duality) for natural families of set systems has been a central theme in combinatorial optimization. The gap between the covering optimum and the packing optimum is large in general (e.g., Dominating Set/Independent Set), but can be small for some families of set systems (e.g., s-t Cut/s-t Disjoint Paths, and Vertex Cover/Matching, especially in bipartite graphs).
One notion that has been important for both parameterized and approximation algorithms is the Erdős–Pósa property [138]. A family of set systems is said to have the Erdős–Pósa property when there is a function f : N → N such that, for any set system in the family, if the packing optimum is k, then the covering optimum is at most f(k). This immediately implies that the multiplicative gap between these two optima is at most f(k)/k, and constructive proofs of the property for various set systems have led to (f(k)/k)-approximation algorithms. Furthermore, for some problems, including Cycle Packing, the Erdős–Pósa property gives an immediate parameterized algorithm. We refer the reader to a recent survey [139] and the papers [119,140,141].
The original paper of Erdős and Pósa [138] proved the property for set systems (V, C) where there is an underlying graph G = (V, E) and C is the set of its cycles, which corresponds to the pair Cycle Packing/Feedback Vertex Set: every graph either has at least k vertex-disjoint cycles or a feedback vertex set of size at most O(k log k). Many subsequent papers also studied natural set systems arising from graphs, where V is the set of vertices or edges and C denotes a collection of subgraphs of interest. For those set systems, Erdős–Pósa properties are closely related to the Set Packing problems introduced in Section 4.1.3 and the F-Deletion problems introduced in Section 4.6.

4.2.2. Maximization Variants

We now move on to the maximization variants of covering problems. To our knowledge, these covering problems are much less studied in the context of parameterized approximability than their minimization counterparts. In particular, we are only aware of works on the maximization variants of Set Cover and Vertex Cover, which are typically called Max k-Coverage and Max k-Vertex Cover, respectively.
Max k-Coverage. Recall that here we are given a set system and the goal is to select k subsets whose union has maximum size. It is well known that the simple greedy algorithm yields an e/(e−1)-approximation [142]; a sketch is given below. Furthermore, Feige, in his seminal work [29], showed that this is tight: an (e/(e−1) − ε)-approximation is NP-hard for any constant ε > 0. In fact, it has recently been shown that this inapproximability also applies in the parameterized setting. Specifically, under Gap-ETH, an (e/(e−1) − ε)-approximation cannot be achieved in FPT time [143], or even in f(k)·n^{o(k)} time [70]. In other words, the trivial algorithm is tight in terms of running time, the greedy algorithm is tight in terms of approximation ratio, and essentially no trade-off is possible between these two extremes. We remark here that this hardness of approximation is also the basis of the hardness results for k-Median and k-Means [143] (see Section 4.3).
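The greedy algorithm itself is only a few lines; the sketch below (function name ours) picks, k times, the set covering the most yet-uncovered elements:

```python
def greedy_max_coverage(sets, k):
    """Greedy for Max k-Coverage: repeatedly take the set covering the
    most new elements.  Covers at least a 1 - (1 - 1/k)^k >= 1 - 1/e
    fraction of the optimum, i.e. an e/(e-1)-approximation.

    sets: list of Python sets over an arbitrary universe.
    """
    covered, chosen = set(), []
    for _ in range(k):
        i = max(range(len(sets)), key=lambda j: len(sets[j] - covered))
        chosen.append(i)
        covered |= sets[i]
    return chosen, covered
```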
Due to the strong inapproximability result for the general case of Max k-Coverage, different parameters have to be considered in order to obtain a PAS for the problem. An interesting positive result here is for the combination of k and the VC dimension of the set system as parameters, for which a PAS exists, while the exact version of the problem remains W[1]-hard [144].
Max k-Vertex Cover. Another special case of Max k-Coverage is the restriction in which each element belongs to at most d subsets of the system. This corresponds exactly to the maximization variant of the Vertex Cover problem on d-uniform hypergraphs, which we will refer to as Max k-Vertex Cover. Note that, for such set systems, the VC dimension is bounded by log d + 1, and hence the aforementioned PAS of [144] applies here as well. Nonetheless, Max k-Vertex Cover admits a much simpler PAS (and even a PSAKS) than Max k-Coverage parameterized by k and the VC dimension, as we discuss below.
Max k-Vertex Cover was first studied in the context of parameterized complexity by Guo et al. [145], who showed that the problem is W[1]-hard. Marx, in his survey on parameterized approximation algorithms [8], gave a PAS for the problem with running time 2^{Õ(k^3/ε)}. Later, Lokshtanov et al. [18] showed that Marx's approach can be used to give a PSAKS of size O(k^5/ε^2). Both of these results mainly focus on graphs. Later still, Skowron and Faliszewski [146,147,148] gave a more general argument that works for any d-uniform hypergraph and improves both the running time and the kernel size:
Theorem 20
([146]). For the Max k-Vertex Cover problem in d-uniform hypergraphs, a (1 + ε)-approximation can be computed in O*((d/ε)^k) time for any ε > 0. Moreover, a (1 + ε)-approximate kernel with O(dk/ε) vertices can be computed in polynomial time.
The main idea of the above proof is simple and elegant, and hence we include it here. For convenience, we only discuss the graph case, i.e., d = 2. It suffices to give the O(k/ε)-vertex kernel; the PAS then follows immediately by running the brute force algorithm on the output instance of the kernel. The kernel is as simple as it gets: just keep the 2k/ε vertices of highest degree and throw the remaining vertices away! There is a subtle point here: we do not want to throw away the edges linking kept vertices to discarded ones. If self-loops are allowed in a graph, this is not an issue, since we may simply add a self-loop to each kept vertex for every edge whose other endpoint is discarded. When self-loops are not allowed, it is still possible to overcome this issue, at the cost of a slightly larger kernel; we refer the reader to Section 3.2 of [147] for more detail.
Having defined the kernel, let us briefly discuss the intuition behind why it works; a sketch follows below. Let V_{2k/ε} denote the set of the 2k/ε highest-degree vertices. The main argument of the proof is that any optimal solution S can be modified to be entirely contained in V_{2k/ε} while preserving the number of covered edges up to a (1 + ε) factor. The modification is simple: every vertex of S outside of V_{2k/ε} is replaced by a random vertex of V_{2k/ε}. Notice that we always replace a vertex by a higher-degree vertex. Naturally, this should be good in terms of covering more edges, but there is a subtlety: the contribution of high-degree vertices may be "double counted" if a particular edge is covered by both of its endpoints. The size 2k/ε is selected exactly to combat this issue: since the set is large enough, such double counting is rare for random vertices. This finishes our outline of the intuition.
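A minimal sketch of this kernel for graphs might look as follows (the encoding of self-loops is our own choice):

```python
def max_k_vc_kernel(G, k, eps):
    """The 2k/eps highest-degree vertices form the kernel (sketch of
    [146] for graphs, d = 2).  Edges towards discarded vertices are
    kept as self-loops so kept vertices retain their covering power.

    G: dict vertex -> list of neighbors (lists allow repeated self-loops).
    """
    budget = int(2 * k / eps)
    kept = set(sorted(G, key=lambda v: len(G[v]), reverse=True)[:budget])
    # Rewire each edge leaving the kernel into a self-loop at its
    # surviving endpoint; edges inside the kernel stay unchanged.
    return {v: [u if u in kept else v for u in G[v]] for v in kept}
```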
We end by remarking that Max k-Vertex Cover on graphs is already APX-hard [149], and hence the PASes mentioned above once again demonstrate the additional power of FPT approximation algorithms over polynomial-time approximation algorithms.

4.2.3. Other Related Problems

There are several other covering-related problems that do not fall into the two categories we discussed so far. We discuss a couple such problems below.
Min k-Uncovered. The first is the Min k-Uncovered problem, where the input is a set system and we would like to select k sets so as to minimize the number of uncovered elements. With respect to exact solutions, this is of course the same as Set Cover. However, the optimization version is quite different from Max k-Coverage. In particular, since it is hard to determine whether we can find k subsets that cover the whole universe, the problem is not approximable at all in the general case. However, if we restrict ourselves to graphs and hypergraphs (for which we refer to the problem as Min k-Vertex Uncovered), it is possible to get a (randomized) PAS for the problem [146]:
Theorem 21
([146]). For the Min k-Vertex Uncovered problem in d-uniform hypergraphs, a (1 + ε)-approximation can be computed in O*((d/ε)^k) time for any ε > 0.
The algorithm is based on the following simple randomized branching: pick a random uncovered element and branch on all possibilities of selecting a subset that contains it; a sketch is given below. Notice that, since an element belongs to at most d subsets, the branching factor is at most d. The key intuition in the approximation proof is that, when the number of elements covered so far is still much less than in the optimal solution, there is a relatively large probability (roughly ε) that the random element is covered by the optimal solution. If such a "good" element is picked in most branching steps, we end up with a solution close to the optimum. Skowron and Faliszewski [146] formalize this intuition by showing that the algorithm outputs a (1 + ε)-approximate solution with probability roughly ε^k. Hence, by repeating the algorithm (1/ε)^k times, one arrives at the claimed PAS. To the best of our knowledge, it is unknown whether a PSAKS exists for the problem.
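A single run of this branching might be sketched as follows (names and the fallback behavior are ours; the (1/ε)^k repetition loop is omitted):

```python
import random

def min_uncovered_one_run(universe, sets, k, rng=random):
    """One run of the randomized branching behind Theorem 21: branch on
    the <= d sets containing a random uncovered element.  Repeating the
    run O((1/eps)^k) times makes a (1+eps)-approximate outcome likely,
    per the analysis of [146].
    """
    def uncovered(chosen):
        covered = set().union(*(sets[i] for i in chosen)) if chosen else set()
        return [e for e in universe if e not in covered]

    def rec(chosen):
        missing = uncovered(chosen)
        if len(chosen) == k or not missing:
            return chosen  # fewer than k sets only if everything is covered
        e = rng.choice(missing)  # a random uncovered element
        candidates = [i for i in range(len(sets))
                      if e in sets[i] and i not in chosen]
        if not candidates:  # e lies in no remaining set; give up on it
            return chosen
        branches = [rec(chosen + [i]) for i in candidates]
        return min(branches, key=lambda c: len(uncovered(c)))

    return rec([])
```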
Min k-Coverage. Another variant of the Set Cover problem that has been studied is Min k-Coverage [150,151,152], where we would like to select k subsets that minimize the number of covered elements. We stress here that this problem is not a relaxation of Set Cover; rather, it is much more closely related to graph expansion problems (see [151]).
It is known that, when there is no restriction on the input set system, the problem is (up to a polynomial factor) as hard to approximate as the Densest k-Subgraph problem [150]. Hence, by the inapproximability of the latter discussed earlier in this survey (Theorem 8), there is also no k^{o(1)}-approximation algorithm for the problem that runs in FPT time.
Once again, the special case that has been studied in the literature is when the input set system is a graph, in which case we refer to the problem as Min k-Vertex Cover. Gupta, Lee, and Li [153,154] used the technique of Marx [8] to give a PAS for the problem with running time O*((k/ε)^{O(k)}). The running time was later improved in [147] to O*((1/ε)^{O(k)}). The algorithm there is again based on branching, but the rules are more delicate and we will not discuss them here. An interesting aspect to note is that, while Max k-Vertex Cover and Min k-Vertex Cover admit PASes with (asymptotically) the same running time, the former admits a PSAKS whereas the latter does not (assuming a variant of the Small Set Expansion Conjecture) [147].
To the best of our knowledge, Min k-Vertex Cover has not been explicitly studied on d-uniform hypergraphs before, but we suspect that the above results should carry over from graphs to hypergraphs as well.

4.3. Clustering

Clustering is a representative task in unsupervised machine learning that has been studied in many fields. In combinatorial optimization communities, it is often formulated as follows: given a set P of points and a set F of candidate centers (also known as facilities), together with a metric on X := P ∪ F given by the distance function ρ : X × X → R_+ ∪ {0}, choose k centers C ⊆ F to minimize some objective function cost := cost(P, C). To fully specify the problem, the following choices have to be made; let ρ(C, p) := min_{c∈C} ρ(c, p).
  • Objective function: Three well-studied objective functions are the following (see the sketch after this list):
    - k-Median: cost(P, C) := Σ_{p∈P} ρ(C, p),
    - k-Means: cost(P, C) := Σ_{p∈P} ρ(C, p)^2, and
    - k-Center: cost(P, C) := max_{p∈P} ρ(C, p).
  • Metric space: The ambient metric space X can be
    - a general metric space, explicitly given by the distance ρ : X × X → R_+ ∪ {0},
    - the Euclidean space R^d equipped with the ℓ_2 distance, or
    - another structured metric space, including metrics with bounded doubling dimension or bounded highway dimension.
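The three objectives differ only in how the per-point distances to the nearest chosen center are aggregated, as the following sketch (function name ours) makes explicit:

```python
def clustering_cost(points, centers, rho, objective="median"):
    """Evaluate the three objectives above: each point is charged
    according to its distance to the nearest chosen center.
    """
    nearest = [min(rho(c, p) for c in centers) for p in points]
    if objective == "median":   # k-Median: sum of distances
        return sum(nearest)
    if objective == "means":    # k-Means: sum of squared distances
        return sum(d * d for d in nearest)
    if objective == "center":   # k-Center: maximum distance
        return max(nearest)
    raise ValueError(objective)
```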
While many previous results on clustering focused on non-parameterized polynomial time, there are at least three natural parameters one can parameterize: The number of clusters k, the dimension d (if defined), and the approximation accuracy parameter ε . In general metric spaces, parameterized approximation algorithms (mainly with parameter k) were considered very recently, but in Euclidean spaces, many previous results already give parameterized approximation algorithms with parameters k , d , and ε .

4.3.1. General Metric Space

We can assume X = P ∪ F without loss of generality. Let n := |X| and note that the distance function ρ : X × X → R_+ ∪ {0} is explicitly specified by Θ(n^2) numbers. A simple exact algorithm running in time O(n^{k+1}) can be obtained by enumerating all k centers c_1, …, c_k ∈ F and assigning each point p to its closest center. In this setting, the best approximation ratios achieved by polynomial time algorithms are 2.611 + ε for k-Median [155], 9 + ε for k-Means [156], and 3 for k-Center [157,158]. On the hardness side, it is NP-hard to approximate k-Median within a factor 1 + 2/e − ε ≈ 1.73 − ε, k-Means within a factor 1 + 8/e − ε ≈ 3.94 − ε, and k-Center within a factor 3 − ε [159].
While there are some gaps between the best algorithms and the best hardness results for k-Median and k-Means, it is an interesting question to ask how parameterization by k changes the approximation ratios for both problems. Cohen-Addad et al. [143] studied this question and gave exact answers.
Theorem 22
([143]). For any ε > 0, there is a $(1 + 2/e + \varepsilon)$-approximation algorithm for k-Median, and a $(1 + 8/e + \varepsilon)$-approximation algorithm for k-Means, both running in time $(O(k \log k / \varepsilon^2))^k \cdot n^{O(1)}$.
There exists a function $g : \mathbb{R}_+ \to \mathbb{R}_+$ such that, assuming the Gap-ETH, for any ε > 0, any $(1 + 2/e - \varepsilon)$-approximation algorithm for k-Median, and any $(1 + 8/e - \varepsilon)$-approximation algorithm for k-Means, must run in time at least $n^{k^{g(\varepsilon)}}$.
These results show that if we parameterize by k, then $1 + 2/e$ (for k-Median) and $1 + 8/e$ (for k-Means) are the exact limits of approximation for parameterized approximation algorithms. Similar reductions also show that no parameterized approximation algorithm can achieve a $(3 - \varepsilon)$-approximation for k-Center for any ε > 0 (assuming only W[2] ≠ FPT), so the power of parameterized approximation is exactly revealed for all three objective functions.
Algorithm for k-Median. We briefly describe the ideas behind the algorithm for k-Median in Theorem 22. The main technical tool the algorithm uses is a coreset, which will also be used frequently for Euclidean spaces in the next subsection.
When S is a set of points with a weight function $w : S \to \mathbb{R}_+$, let us extend the definition of the objective function $\mathrm{cost}(S, C)$ such that
$$\mathrm{cost}(S, C) := \sum_{p \in S} w(p) \cdot \rho(C, p).$$
Given a clustering instance $(P, F, \rho, k)$ and ε > 0, a subset $S \subseteq P$ with a weight function $w : S \to \mathbb{R}_+$ is called a (strong) coreset if for any k centers $C = \{c_1, \dots, c_k\} \subseteq F$,
$$|\mathrm{cost}(S, C) - \mathrm{cost}(P, C)| \le \varepsilon \cdot \mathrm{cost}(P, C).$$
For a general metric space, Chen [160] gave a coreset of cardinality $\tilde{O}(k^2 \log^2 n / \varepsilon^2)$. (In this subsection, $\tilde{O}(\cdot)$ hides $\mathrm{poly}(\log\log n, \log k, \log(1/\varepsilon))$ factors.) This was improved by Feldman and Langberg [161] to $O(k \log n / \varepsilon^2)$. We present the high-level ideas of [160] below.
Given the coreset, it remains to give a good parameterized approximation algorithm for a much smaller (albeit weighted) point set of size $|P| = \mathrm{poly}(k, \log n)$. Note that |F| can still be as large as n, so naively choosing k centers from F would take $n^k$ time, and exhaustively partitioning P into k sets would take $k^{|P|} = n^{\mathrm{poly}(k)}$ time. (Indeed, solving this small case exactly would yield an EPAS, which would contradict the Gap-ETH.)
Fix an optimal solution, and let $C^* = \{c_1^*, \dots, c_k^*\}$ be the optimal centers and $P_i^* \subseteq P$ the cluster assigned to $c_i^*$. One piece of information we can guess is, for each $i \in [k]$, the point $p_i \in P_i^*$ closest to $c_i^*$, together with an approximation $r_i$ of the distance $\rho(c_i^*, p_i)$. Since $|P| = \mathrm{poly}(k, \log n)$, guessing them only takes time $(k \log n)^{O(k)}$, which can be made FPT by separately considering the case $(\log n)^k \le n$ and the case $(\log n)^k \ge n$; in the latter case, $k = \Omega(\log n / \log\log n)$ and thus $(\log n)^k = (k \log k)^{O(k)}$.
Let $F_i \subseteq F$ be the set of candidate centers that are at distance approximately $r_i$ from $p_i$, so that $c_i^* \in F_i$ for each i. The algorithm chooses k centers $C \subseteq F$ such that $|C \cap F_i| \ge 1$ for each $i \in [k]$. Let $c_i \in C \cap F_i$. For any point $p \in P$ (say $p \in P_j^*$, though the algorithm does not need to know j), we have
$$\rho(C, p) \le \rho(c_j, p) \le \rho(c_j, p_j) + \rho(p_j, c_j^*) + \rho(c_j^*, p) \le 3 \rho(c_j^*, p),$$
where the last inequality uses $\rho(c_j, p_j) \approx r_j \approx \rho(p_j, c_j^*) \le \rho(c_j^*, p)$, the final step holding by the choice of $p_j$ as the point of $P_j^*$ closest to $c_j^*$.
This immediately gives a 3-approximation algorithm in FPT time, which is still worse than the best polynomial-time approximation algorithm. To get the optimal $(1 + 2/e)$-approximation, we further reduce the task of finding $c_i \in F_i$ to maximizing a monotone submodular function under a partition matroid constraint, which is known to admit an optimal $(1 - 1/e)$-approximation algorithm [162]. Then we can ensure that for a $(1 - 1/e)$ fraction of the points, the distance to the chosen centers is no longer than in the optimal solution, while for the remaining $1/e$ fraction, the distance is at most three times the distance in the optimal solution. We refer the reader to [143] for further details.
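The factor-3 argument above is simple enough to sketch in code. In the following fragment, the guessed leaders $p_i$ and radii $r_i$ are assumed to be given (in the actual algorithm they are enumerated over all $(k \log n)^{O(k)}$ possibilities), and the tolerance factor `slack` is our own illustrative stand-in for "distance approximately $r_i$":

```python
def pick_centers(F, leaders, radii, dist, slack=2.0):
    """Given guessed leaders p_i and guessed radii r_i ~ rho(c_i*, p_i),
    form F_i = {c in F : rho(c, p_i) <= slack * r_i} and pick any c_i in F_i.
    By the triangle-inequality chain in the text, the chosen centers are
    within a factor ~3 of the guessed optimum (up to the slack factor).
    `slack` is an illustrative tolerance, not a parameter from [143]."""
    centers = []
    for p, r in zip(leaders, radii):
        F_i = [c for c in F if dist(c, p) <= slack * r]
        if not F_i:  # the guess was wrong; this branch of the enumeration dies
            return None
        centers.append(F_i[0])  # any member of F_i suffices for factor 3
    return centers
```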
Constructing a coreset. As discussed above, a coreset is a fundamental building block for optimal parameterized approximation algorithms for k-Median and k-Means in general metrics. We briefly describe the construction of Chen [160], which gives a coreset of cardinality $\tilde{O}(k^2 \log^2 n / \varepsilon^2)$ for k-Median. Similar ideas can also be used to obtain an EPAS for Euclidean spaces parameterized by k, though better constructions specific to Euclidean spaces are known.
We first partition P into $P_1, \dots, P_\ell$ such that $\ell = O(k \log n)$ and
$$\sum_{i=1}^{\ell} |P_i| \cdot \mathrm{diam}(P_i) = O(\mathrm{OPT}).$$
Such a partition can be obtained by using a known (bicriteria) constant-factor approximation algorithm for k-Median. Next, let $t = \tilde{O}(k \log n / \varepsilon^2)$, and for each $i = 1, \dots, \ell$, let $S_i = \{s_1, \dots, s_t\}$ be a random subset of t points of $P_i$, where each $s_j$ is an independent and uniform sample from $P_i$ and is given weight $|P_i| / t$. (If $|P_i| \le t$, we simply let $S_i = P_i$ with all weights 1.) The final coreset S is the union of all the $S_i$'s.
To prove that it works, we simply need to show that for any set of k centers $C \subseteq F$ with $|C| = k$,
$$\Pr\big[\,|\mathrm{cost}(S, C) - \mathrm{cost}(P, C)| > \varepsilon \cdot \mathrm{cost}(P, C)\,\big] \le o(1/n^k),$$
so that the union bound over the $n^k$ choices of C works. Indeed, we show that for each $i = 1, \dots, \ell$,
$$\Pr\big[\,|\mathrm{cost}(S_i, C) - \mathrm{cost}(P_i, C)| > \varepsilon \cdot |P_i| \cdot \mathrm{diam}(P_i)\,\big] \le o(1/(\ell \cdot n^k)), \qquad (1)$$
so that we can also union bound and sum over $i \in [\ell]$, using the fact that $\sum_i |P_i| \cdot \mathrm{diam}(P_i) = O(\mathrm{OPT}) \le O(\mathrm{cost}(P, C))$.
It is left to prove (1). Fix C and i (let $P_i = \{p_1, \dots, p_{|P_i|}\}$), and recall that
$$\mathrm{cost}(P_i, C) = \sum_{j=1}^{|P_i|} \rho(C, p_j).$$
When $|P_i| \le t$, we have $S_i = P_i$, so (1) trivially holds. Otherwise, recall that $S_i = \{s_1, \dots, s_t\}$, where each $s_j$ is an independent and uniform sample from $P_i$ with weight $w := |P_i|/t$. For $j = 1, \dots, t$, let $X_j := w \cdot \rho(C, s_j)$. Note that $\mathrm{cost}(S_i, C) = \sum_j X_j$ and $\mathrm{cost}(P_i, C) = t \cdot \mathbb{E}[X_j] = \mathbb{E}[\mathrm{cost}(S_i, C)]$. A crucial observation is that $|\rho(C, p_j) - \rho(C, p_{j'})| \le \mathrm{diam}(P_i)$ for any $j, j' \in [|P_i|]$, so that $|X_j - X_{j'}| \le w \cdot \mathrm{diam}(P_i)$ for any $j, j' \in [t]$. If we let $X_{\min} := \min_{j \in [|P_i|]} \big(w \cdot \rho(C, p_j)\big)$ and $Y_j := (X_j - X_{\min}) / (w \cdot \mathrm{diam}(P_i))$, then the $Y_j$'s are t i.i.d. random variables supported in $[0, 1]$. The standard Chernoff–Hoeffding inequality gives
$$\Pr\Big[\Big|\sum_j X_j - t \cdot \mathbb{E}[X_j]\Big| > \varepsilon |P_i| \cdot \mathrm{diam}(P_i)\Big] = \Pr\Big[\Big|\sum_j Y_j - t \cdot \mathbb{E}[Y_j]\Big| > \varepsilon t\Big] \le \exp(-O(\varepsilon^2 t)) \le o(1/(\ell \cdot n^k)),$$
proving (1) for $t = \tilde{O}(k \log n / \varepsilon^2)$ and finishing the proof. A precise version of this argument was stated by Haussler [163].
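The sampling scheme just analyzed is short enough to sketch. The code below assumes the partition $P_1, \dots, P_\ell$ from the bicriteria approximation is given, and returns the weighted coreset; in the analysis above, t would be set to $\tilde{\Theta}(k \log n / \varepsilon^2)$. This is a hedged illustration of Chen's construction, not the full algorithm:

```python
import random

def chen_coreset(parts, t):
    """Chen-style coreset: from each part P_i, draw t independent uniform
    samples, each weighted |P_i| / t (or keep P_i whole if |P_i| <= t).
    `parts` is the partition P_1, ..., P_ell with
    sum_i |P_i| * diam(P_i) = O(OPT), assumed precomputed by a bicriteria
    constant-factor approximation as described in the text."""
    coreset = []  # list of (point, weight) pairs
    for P_i in parts:
        if len(P_i) <= t:
            coreset.extend((p, 1.0) for p in P_i)
        else:
            w = len(P_i) / t
            coreset.extend((random.choice(P_i), w) for _ in range(t))
    return coreset

def weighted_cost(coreset, C, dist):
    """cost(S, C) = sum_p w(p) * rho(C, p) for a weighted point set S."""
    return sum(w * min(dist(c, p) for c in C) for p, w in coreset)
```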

4.3.2. Euclidean Space

For Euclidean spaces, we assume that $X = F = \mathbb{R}^d$ for some $d \in \mathbb{N}$, endowed with the standard $\ell_2$ metric. Let $n = |P|$ in this subsection. We now have both k and d as natural structural parameters of clustering tasks. Many previous approximation algorithms for k-Median and k-Means in Euclidean spaces, without explicit mention of parameterized complexity, are in fact parameterized approximation algorithms with parameter k or d (or both). The highlight of this subsection is that for both Euclidean k-Median and Euclidean k-Means, an EPAS exists with only one of k and d as the parameter. Without any parameterization, both Euclidean k-Median and k-Means are known to be APX-hard [164,165]. We present these results in chronological order, highlighting the important ideas.
Euclidean k-Median with parameter d. The first PTAS for k-Median in Euclidean spaces with fixed d appeared in Arora et al. [166]. The techniques extend Arora's earlier PTAS for the Travelling Salesman problem in Euclidean spaces [167]: one first proves that there exists a near-optimal solution that interacts with a quadtree (a geometric division of $\mathbb{R}^d$ into a hierarchy of square regions) only in a restricted sense, and then finds such a solution using dynamic programming. The running time is $n^{O(1/\varepsilon)}$ for d = 2 and $n^{(\log n / \varepsilon)^{d-2}}$ for d > 2. Kolliopoulos and Rao [168] improved the running time to $2^{O((\log(1/\varepsilon)/\varepsilon)^{d-1})} \cdot n \log^{d+6} n$, which constitutes an EPAS with parameter d.
Euclidean k-Median and Euclidean k-Means with parameter k.
An EPAS for Euclidean k-Means, even with both k and d as parameters, took longer to be discovered, and first appeared when Matoušek [169] gave an approximation scheme running in time $O(n \varepsilon^{-2k^2 d} \log^k n)$. After this, several improvements on the running time followed, by Bădoiu et al. [170], De La Vega et al. [171], and Har-Peled and Mazumdar [172].
Kumar et al. [173,174] gave approximation schemes for both k-Median and k-Means running in time $2^{(k/\varepsilon)^{O(1)}} \cdot dn$. This shows that an EPAS can be obtained using only k as a parameter. Using this result and improved coresets, further improvements followed [160,161,175]. The current best running times for a $(1+\varepsilon)$-approximation are $O(ndk + d \cdot \mathrm{poly}(k/\varepsilon) + 2^{\tilde{O}(k/\varepsilon)})$ for k-Means [175] and $O(ndk + 2^{\mathrm{poly}(1/\varepsilon, k)})$ for k-Median [161].
A crucial property of the Euclidean space that allows an EPAS with parameter k (which is ruled out for general metrics by Theorem 22) is the sampling property: for any set $Q \subseteq \mathbb{R}^d$ regarded as one cluster, there is an algorithm that is given only $g(1/\varepsilon)$ samples from Q and outputs $h(1/\varepsilon)$ candidate centers such that one of them is ε-close to the optimal center for the entire cluster Q, for some functions g and h. (For example, for k-Means, the mean of $O(1/\varepsilon)$ random samples ε-approximates the actual mean with constant probability.) This idea leads to a $(1+\varepsilon)$-approximation algorithm running in time $|P|^{f(\varepsilon, k)}$. Combined even with a general coreset construction of size $\mathrm{poly}(k, \log n, 1/\varepsilon)$, one already gets an EPAS with parameter k. Better coreset constructions are also known in Euclidean spaces: recent developments [176,177,178] construct coresets of size $\mathrm{poly}(k, 1/\varepsilon)$ (no dependence on n or d), which were further extended to the shortest-path metric of an excluded-minor graph [179].
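The parenthetical claim for k-Means, namely that the centroid of $O(1/\varepsilon)$ uniform samples already ε-approximates the optimal center, is easy to check empirically. The following self-contained experiment (our own toy demonstration, not code from the cited papers) compares the 1-Means cost of the true centroid against the centroid of t random samples; the cost ratio is $1 + O(1/t)$ in expectation:

```python
import random

def one_means_cost(Q, c):
    """Sum of squared Euclidean distances from the points of Q to center c."""
    return sum(sum((a - b) ** 2 for a, b in zip(q, c)) for q in Q)

def centroid(Q):
    d = len(Q[0])
    return tuple(sum(q[i] for q in Q) / len(Q) for i in range(d))

random.seed(0)
Q = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(10000)]
opt = one_means_cost(Q, centroid(Q))
for t in (2, 5, 10, 50):  # t plays the role of O(1/eps) samples
    est = centroid(random.sample(Q, t))
    print(t, one_means_cost(Q, est) / opt)  # ~ 1 + O(1/t) on average
```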
Euclidean k-Means with parameter d.
Cohen-Addad et al. [180] and Friggstad et al. [181] recently gave approximation schemes running in time $n^{f(d, \varepsilon)}$ using local search techniques. These results were improved to an EPAS in [182], and also extended to doubling metrics [183].
Other metrics and k-Center. For the k-Center problem an EPAS exists when parameterizing by both k and the doubling dimension [184]; for planar graphs there is also an EPAS for parameter k, which is implied by the EPTAS of Fox-Epstein et al. [185] (cf. [184]).
There are also parameterized approximation schemes for metric spaces with bounded highway dimension [184,186,187] and various graph width parameters [108].
Capacitated clustering and other variants. Another example where the parameterization by k helps is Capacitated k-Median, where each candidate center $c \in F$ has a capacity $u_c \in \mathbb{N}$ and can be assigned at most $u_c$ points. It is not known whether a constant-factor polynomial-time approximation algorithm exists, and the known constant-factor approximation algorithms either open $(1+\varepsilon)k$ centers [188] or violate the capacity constraints by a $(1+\varepsilon)$ factor [189]. Adamczyk et al. [190] gave a $(7+\varepsilon)$-approximation algorithm running in $f(k, \varepsilon) \cdot n^{O(1)}$ time, showing that a constant-factor parameterized approximation algorithm is possible. The approximation ratio was soon improved to $(3+\varepsilon)$ [191]. For Capacitated Euclidean k-Means, a $(69+\varepsilon)$-approximation algorithm running in $f(k, \varepsilon) \cdot n^{O(1)}$ time was also given in [192].
While the capacitated versions of clustering look much harder than their uncapacitated counterparts, there is no known theoretical separation between the capacitated and the uncapacitated version of any clustering task. Since the power of parameterized algorithms for uncapacitated clustering is well understood, it is a natural question to understand the “capacitated vs. uncapacitated” question in the FPT setting.
Open Question 10. Does Capacitated k-Median admit a $(1 + 2/e)$-approximation algorithm in FPT time with parameter k? Do Capacitated Euclidean k-Means/k-Median admit an EPAS with parameter k or d?
Since clustering is a universal task, many variants of it, like the capacitated versions, have been studied, including k-Median/k-Means with Outliers [193] and Matroid/Knapsack Median [194]. While no variant is proved to be harder than the basic versions, it would be interesting to see whether they all share the same parameterized approximability with the basic versions.

4.4. Network Design

In network design, the task is to connect some set of vertices in a metric, which is often given by the shortest-path metric of an edge-weighted graph. Two very prominent problems of this type are the Travelling Salesperson (TSP) and Steiner Tree problems. For TSP all vertices need to be connected in a closed walk (called a route), and the length of the route needs to be minimized [195]. For Steiner Tree a subset of the vertices (called terminals) is given as part of the input, and the objective is to connect all terminals by a tree of minimum weight in the metric (or graph). Both of these are fundamental problems that have been widely studied in the past, both on undirected and directed input graphs.
Undirected graphs. A well-studied parameter for Steiner Tree is the number of terminals, for which the problem has been known to be FPT since the early 1970s due to the work of Dreyfus and Wagner [196]. Their algorithm is based on dynamic programming and runs in $3^k n^{O(1)}$ time if k is the number of terminals. Faster algorithms based on the same ideas with runtime $(2+\delta)^k n^{O(1)}$ for any constant δ > 0 exist [197] (here the degree of the polynomial depends on δ). The unweighted Steiner Tree problem also admits a $2^k n^{O(1)}$ time algorithm [198] using a different technique based on subset convolution. Given any of these exact algorithms as a subroutine, a faster PAS can also be obtained [20] (cf. Section 4.7). On the other hand, no exact polynomial-sized kernel exists [130] for the Steiner Tree problem, unless NP ⊆ coNP/poly. Interestingly though, a PSAKS can be obtained [18].
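To give a feel for the Dreyfus–Wagner dynamic program, here is a compact, self-contained Python rendering of the classical $3^k$-type recurrence (a textbook sketch, not the optimized $(2+\delta)^k$ or subset-convolution variants): dp[S][v] is the minimum cost of a tree connecting the terminal subset S together with vertex v, computed by merging two subtrees at v or extending along a shortest path.

```python
def dreyfus_wagner(n, edges, terminals):
    """Optimum Steiner tree value via the Dreyfus-Wagner dynamic program.
    Vertices are 0..n-1, edges = [(u, v, weight)], terminals is a list.
    Runs in O(3^k * n + 2^k * n^2) time (plus O(n^3) for shortest paths),
    where k = len(terminals)."""
    INF = float("inf")
    # All-pairs shortest paths via Floyd-Warshall (fine for a sketch).
    d = [[INF] * n for _ in range(n)]
    for v in range(n):
        d[v][v] = 0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for m in range(n):
        for u in range(n):
            for v in range(n):
                if d[u][m] + d[m][v] < d[u][v]:
                    d[u][v] = d[u][m] + d[m][v]

    full = (1 << len(terminals)) - 1
    # dp[S][v] = min cost of a tree spanning {terminals[i] : i in S} and v.
    dp = [[INF] * n for _ in range(full + 1)]
    for i, t in enumerate(terminals):
        for v in range(n):
            dp[1 << i][v] = d[t][v]
    for S in range(1, full + 1):
        if S & (S - 1) == 0:
            continue  # singleton sets are the base case above
        for v in range(n):  # (i) merge two subtrees that meet at v
            T = (S - 1) & S
            while T:
                if T < (S ^ T):  # consider each unordered split once
                    dp[S][v] = min(dp[S][v], dp[T][v] + dp[S ^ T][v])
                T = (T - 1) & S
        for v in range(n):  # (ii) attach v via a shortest path
            dp[S][v] = min(dp[S][u] + d[u][v] for u in range(n))
    return min(dp[full])

# Star with center 1: the optimum Steiner tree uses all three edges (cost 3).
print(dreyfus_wagner(4, [(0, 1, 1), (1, 2, 1), (1, 3, 1)], [0, 2, 3]))
```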
This kernel is based on a well-known fact proved by Borchers and Du [199], which is very useful for obtaining approximation algorithms for the Steiner Tree problem. It states that any Steiner tree can be covered by smaller trees containing few terminals, such that these trees do not overlap much. More formally, a full component is a subtree of a Steiner tree whose leaves coincide with its terminals. For the optimum Steiner tree T and any ε > 0, there exist full components $C_1, \dots, C_\ell$ of T such that
  • each full component $C_i$ contains at most $2^{\lceil 1/\varepsilon \rceil}$ terminals (leaves),
  • the sum of the weights of the full components is at most $1 + \varepsilon$ times the cost of T, and
  • taking any collection of Steiner trees $T_1, \dots, T_\ell$, such that each tree $T_i$ connects the subset of terminals forming the leaves of full component $C_i$, the union $\bigcup_{i=1}^{\ell} T_i$ is a feasible solution to the input instance.
Not knowing the optimum Steiner tree, it is not possible to know the terminal subsets of the full components corresponding to the optimum. However, it is possible to compute an optimum Steiner tree for every subset of terminals of size at most $2^{\lceil 1/\varepsilon \rceil}$ using an FPT algorithm for Steiner Tree. The time to compute all these solutions is $k^{O(2^{1/\varepsilon})} n^{O(1)}$, using for instance the Dreyfus–Wagner [196] algorithm. The above three properties now guarantee that the graph given by the union of all the computed Steiner trees contains a $(1+\varepsilon)$-approximation for the input instance. In fact, the best polynomial-time approximation algorithm known to date [200] uses an iterative rounding procedure to find a ln(4)-approximation of the optimum solution in the union of these Steiner trees. To obtain a kernel, the union needs to be sparsified, since it may contain many Steiner vertices, and the edge weights might also be very large. However, Lokshtanov et al. [18] show that the number of Steiner vertices can be reduced using standard techniques, while the edge weights can be encoded so that their space requirement is bounded in the parameter and the cost of any solution is distorted by at most a $1+\varepsilon$ factor.
Theorem 23
([18,20]). For the Steiner Tree problem a $(1+\varepsilon)$-approximation can be computed in $(2+\delta)^{(1-\varepsilon/2)k} n^{O(1)}$ time for any constant δ > 0 (and in $2^{(1-\varepsilon/2)k} n^{O(1)}$ time in the unweighted case) for any ε > 0, where k is the number of terminals. Moreover, a $(1+\varepsilon)$-approximate kernel of size $(k/\varepsilon)^{O(2^{1/\varepsilon})}$ can be computed in polynomial time.
A natural alternative to the number of terminals is to consider the vertices remaining in the optimum tree after removing the terminals: a folklore result states that Steiner Tree is W[2]-hard parameterized by the number of non-terminals (called Steiner vertices) in the optimum solution. At the same time, unless P = NP there is no PTAS for the problem, as it is APX-hard [201]. However, an approximation scheme is obtainable when parameterizing by the number k of Steiner vertices in the optimum, and a PSAKS is obtainable under this parameterization as well.
To obtain both of these results, Dvořák et al. [202] devise a reduction rule based on the following observation: if the optimum tree contains few Steiner vertices but many terminals, then the tree must contain (1) a large component containing only terminals, or (2) a Steiner vertex that has many terminal neighbours. Intuitively, in case (2) we would like to identify a large star with terminal leaves and small cost in the current graph, while in case (1) we would like to find a cheap edge between two terminals; note that such a single edge is also a star with terminal leaves. The reduction rule therefore finds the star minimizing the weight per contained terminal, which can be done in polynomial time (see the sketch below). This rule is applied until the number of terminals, which decreases with each application, falls below a threshold depending on the input parameter k and the desired approximation ratio $1+\varepsilon$. Once the number of terminals is bounded by a function of k and ε, the Dreyfus–Wagner [196] algorithm can be applied to the remaining instance, or a kernel can be computed using the PSAKS of Theorem 23. It can be shown that the reduction rule does not distort the optimum solution by much as long as the threshold is large enough, which implies the following theorem.
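A possible rendering of the search step of this reduction rule is sketched below (our own simplification: stars consist of a center vertex plus a subset of its terminal neighbours, and for a fixed center the optimal leaf set is a prefix of the neighbours sorted by edge weight):

```python
def min_ratio_star(adj, terminals):
    """Return the star (a center plus a set of its terminal neighbours)
    minimizing total edge weight per contained terminal.
    adj: dict vertex -> list of (neighbour, weight).  A cheap edge between
    two terminals is the special case of a one-leaf star with a terminal
    center.  For a fixed center, the best m-leaf star uses the m cheapest
    terminal neighbours, so scanning sorted prefixes suffices."""
    best = None  # (ratio, center, leaves)
    for v, nbrs in adj.items():
        weights = sorted((w, u) for u, w in nbrs if u in terminals)
        total, leaves = 0.0, []
        for w, u in weights:
            total += w
            leaves.append(u)
            terms_in_star = len(leaves) + (1 if v in terminals else 0)
            ratio = total / terms_in_star
            if best is None or ratio < best[0]:
                best = (ratio, v, list(leaves))
    return best
```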
Theorem 24
([202]). For the Steiner Tree problem a $(1+\varepsilon)$-approximation can be computed in $2^{O(k^2/\varepsilon^4)} n^{O(1)}$ time for any ε > 0, where k is the number of non-terminals in the optimum solution. Moreover, a $(1+\varepsilon)$-approximate kernel of size $(k/\varepsilon)^{2^{O(1/\varepsilon)}}$ can be computed in polynomial time.
This theorem is also generalizable to the Steiner Forest problem, where a list of terminal pairs is given and the task is to find a minimum weight forest in the input graph connecting each pair. In this case though, the parameter has to be combined with the number of connected components of the optimum forest [202].
A variation of the Steiner Forest problem is the Shallow-Light Steiner Network (SLSN) problem. Here a graph with both edge costs and edge lengths is given, together with a set of terminal pairs and a length threshold L. The task is to compute a minimum-cost subgraph that connects each terminal pair with a path of length at most L. For this problem a dichotomy result was shown [203] in terms of the pattern given by the terminal pairs. More precisely, the terminal pairs are interpreted as edges in a graph for which the vertices are the terminals: if $\mathcal{C}$ is some class of graphs, then $\mathrm{SLSN}_{\mathcal{C}}$ is the Shallow-Light Steiner Network problem restricted to sets of terminal pairs that span some graph in $\mathcal{C}$. Let $\mathcal{C}_\star$ denote the class of all stars, and $\mathcal{C}_\lambda$ the class of graphs with at most λ edges. The $\mathrm{SLSN}_{\mathcal{C}_\star}$ problem is APX-hard [201], as it is a generalization of Steiner Tree (where $L = \infty$). At the same time, both the $\mathrm{SLSN}_{\mathcal{C}_\star}$ and $\mathrm{SLSN}_{\mathcal{C}_\lambda}$ problems parameterized by the number of terminals are paraNP-hard [204], since they are generalizations of the Restricted Shortest Path problem (where there is exactly one terminal pair). A PAS can however be obtained for both of these problems (whenever λ is a constant), but for no other class $\mathcal{C}$ of demand patterns [203].
Theorem 25
([203]). For any constant λ > 0, there is an FPTAS for the $\mathrm{SLSN}_{\mathcal{C}_\lambda}$ problem. For the $\mathrm{SLSN}_{\mathcal{C}_\star}$ problem a $(1+\varepsilon)$-approximation can be computed in $4^k (n/\varepsilon)^{O(1)}$ time for any ε > 0, where k is the number of terminal pairs. Moreover, under Gap-ETH no $(5/3 - \varepsilon)$-approximation for $\mathrm{SLSN}_{\mathcal{C}}$ can be computed in $f(k) n^{O(1)}$ time for any ε > 0 and computable function f, whenever $\mathcal{C}$ is a recursively enumerable class with $\mathcal{C} \not\subseteq \mathcal{C}_\star \cup \mathcal{C}_\lambda$ for every constant λ.
A notable special case is when all edge lengths are 1 but edge costs are arbitrary. Then $\mathrm{SLSN}_{\mathcal{C}_\lambda}$ is polynomial-time solvable for any constant λ, while $\mathrm{SLSN}_{\mathcal{C}_\star}$ is FPT parameterized by the number of terminals [203]. At the same time, the parameterized approximation lower bound of Theorem 25 is still valid in this case. It is not known, however, whether constant approximation factors can be obtained for $\mathrm{SLSN}_{\mathcal{C}}$ when $\mathcal{C}$ is a class different from $\mathcal{C}_\lambda$ and $\mathcal{C}_\star$. More generally, we may ask the following question.
Open Question 11. Given some class of graphs $\mathcal{C} \not\subseteq \mathcal{C}_\star \cup \mathcal{C}_\lambda$, which approximation factor $\alpha_{\mathcal{C}}$ can be obtained in FPT time for $\mathrm{SLSN}_{\mathcal{C}}$ parameterized by the number of terminals?
Turning to the TSP problem, a generalization of TSP introduces deadlines by which vertices need to be visited by the computed tour. A natural parameterization in this setting is the number of vertices that have deadlines. It can be shown [205] that no approximation ratio better than 2 can be obtained in FPT time for this parameter. Nevertheless, a 2.5-approximation can be computed in FPT time [205]. The algorithm guesses the order in which the vertices with deadlines are visited by the optimum solution. It then computes a 3/2-approximation for the remaining vertices using Christofides' algorithm [3]. The approximation ratio follows, since the optimum tour can be thought of as two tours, of which one visits only the deadline vertices, while the other contains all remaining vertices. The approximation algorithm incurs a cost of OPT for the former, and a cost of $\frac{3}{2} \cdot \mathrm{OPT}$ for the latter part of the optimum tour.
Theorem 26
([205]). For the DlTSP problem a 2.5-approximation can be computed in $O(k! \cdot k) + n^{O(1)}$ time, if the number of vertices with deadlines is k. Moreover, no $(2-\varepsilon)$-approximation can be computed in $f(k) n^{O(1)}$ time for any ε > 0 and computable function f, unless P = NP.
Low-dimensional metrics. Just as for clustering problems, another well-studied parameter in network design is the dimension of the underlying geometric space. A typical setting is when the input is assumed to be a set of points in some k-dimensional $\ell_p$-metric, where the distance between points x and y is given by $\mathrm{dist}(x, y) = \big(\sum_{i=1}^{k} |x_i - y_i|^p\big)^{1/p}$. Two prominent examples are Euclidean metrics (where p = 2) and Manhattan metrics (where p = 1). The dimension k of the metric space has been studied as a parameter from the parameterized approximation point of view avant la lettre for quite a while. It was shown [206,207] that both Steiner Tree and TSP are paraNP-hard for this parameter (since they are NP-hard even if k = 2), and that they are APX-hard in general metrics [201,208]. However, a PAS for Euclidean metrics, both for the Steiner Tree and the TSP problem, was shown to exist in the seminal work of Arora [167,209]. The techniques are similar to those used for clustering, and we refer to Section 4.3.2 for an overview.
Theorem 27
([167]). For the Steiner Tree and TSP problems a $(1+\varepsilon)$-approximation can be computed in $k^{(O(k/\varepsilon))^{k-1}} n^2$ time for any ε > 0, if the input consists of n points in k-dimensional Euclidean space.
This result also holds for the t-MST and t-TSP problems [167], where the cheapest tree or tour, respectively, on at least t vertices needs to be found. In this case, however, the runtime has to be multiplied by t.
A related setting is the parameterization by the doubling dimension of the underlying metric, i.e., the smallest integer k such that any ball in the metric can be covered by $2^k$ balls of half the radius. Any point set in a k-dimensional $\ell_p$-metric has doubling dimension O(k), and thus the latter parameter generalizes the former. For the TSP problem, the above theorem can be generalized [210] to a PAS parameterized by the doubling dimension.
Theorem 28
([210]). For the TSP problem a $(1+\varepsilon)$-approximation can be computed in $2^{(k/\varepsilon)^{O(k^2)}} n \log^2 n$ time for any ε > 0, if the input consists of n points with doubling dimension k.
Given that a PAS exists for Steiner Tree in the Euclidean case, it is only natural to ask whether this is also possible for low doubling metrics; only a QPTAS is known so far [211]. Moreover, a related parameter is the highway dimension, which is used to model transportation networks. As shown by Feldmann et al. [212], the techniques of Talwar [211] for low doubling metrics can be generalized to the highway dimension to obtain a QPTAS as well. Again, it is quite plausible that a PAS exists.
Open Question 12. Is there a PAS for Steiner Tree parameterized by the doubling dimension? Is there a PAS for either Steiner Tree or TSP parameterized by the highway dimension?
Directed graphs. When considering directed input graphs (asymmetric metrics), the Directed Steiner Tree problem takes as input a terminal set and a special terminal called the root. The task is to compute a directed tree of minimum weight that contains a path from each terminal to the root. In general, no f(k)-approximation can be computed in FPT time for any computable function f, when the parameter k is the number of Steiner vertices in the optimum solution [202]. A notable special case is the unweighted Directed Steiner Tree problem, which admits a PAS for this parameter. The techniques here are the same as those used to obtain Theorem 24 in the undirected case. However, in contrast to the undirected case, which admits a PSAKS, no polynomial-sized $(2-\varepsilon)$-approximate kernelization exists for Directed Steiner Tree [202], unless NP ⊆ coNP/poly. It is an intriguing question whether a 2-approximate kernel exists.
Open Question 13. Is there a polynomial-sized 2-approximate kernel for the unweighted Directed Steiner Tree problem parameterized by the number of Steiner vertices in the optimum solution?
If the parameter is the number of terminals, the (weighted) Directed Steiner Tree problem is FPT, using the same algorithms as for the undirected version [196,197]. A different variant of Steiner Tree in directed graphs is the Strongly Connected Steiner Subgraph problem, where a terminal set needs to be strongly connected in the cheapest possible way. This problem is W[1]-hard parameterized by the number of terminals [213], and no $O(\log^{2-\varepsilon} n)$-approximation can be computed in polynomial time [214], unless NP ⊆ ZTIME($n^{\mathrm{polylog}(n)}$). However, a 2-approximation can be computed in FPT time [215].
The crucial observation for this algorithm is that in any strongly connected solution, fixing some terminal as the root, every terminal can be reached from the root, while at the same time the root can be reached from each terminal. Thus the optimum solution is the union of two directed trees, of which one is directed towards the root and the other is directed away from the root, and the leaves of both trees are terminals. Hence it suffices to compute two solutions to the Directed Steiner Tree problem, which can be done in FPT time, to obtain a 2-approximation for Strongly Connected Steiner Subgraph. Interestingly, no better approximation is possible within this runtime [50].
Theorem 29
([50,215]). For the Strongly Connected Steiner Subgraph problem a 2-approximation can be computed in $(2+\delta)^k n^{O(1)}$ time for any constant δ > 0, where k is the number of terminals. Moreover, under Gap-ETH no $(2-\varepsilon)$-approximation can be computed in $f(k) n^{O(1)}$ time for any ε > 0 and computable function f.
A generalization of both Directed Steiner Tree and Strongly Connected Steiner Subgraph is the Directed Steiner Network problem [216], for which an edge-weighted directed graph is given together with a list of ordered terminal pairs. The aim is to compute the cheapest subgraph that contains a path from s to t for every terminal pair (s, t). If k is the number of terminals, then for this problem no $k^{1/4 - o(1)}$-approximation can be computed in $f(k) n^{O(1)}$ time [59] for any computable function f, under Gap-ETH. Both a PAS and a PSAKS exist [50] for the special case when the input graph is planar and bidirected, i.e., for every directed edge uv the reverse edge vu exists and has the same cost.
Similar to the PSAKS for the Steiner Tree problem, these two algorithms are based on a generalization of the theorem of Borchers and Du [199]. That is, Chitnis et al. [50] show that a planar solution in a bidirected graph can be covered by planar graphs with at most $2^{O(1/\varepsilon)}$ terminals each, such that the sum of their costs is at most $1+\varepsilon$ times the cost of the solution. These covering graphs may need to contain edges that are reverse to those in the solution but are themselves not part of the solution; for this, the underlying graph needs to be bidirected. Analogous to Steiner Tree, to obtain a kernel it then suffices to compute solutions for every possible list of ordered pairs of at most $2^{O(1/\varepsilon)}$ terminals. In contrast to Steiner Tree, however, there is no FPT algorithm for this. Instead, an XP algorithm with runtime $2^{O(k^{3/2} \log k)} n^{O(\sqrt{k})}$ needs to be used, which runs in polynomial time for $k \le 2^{O(1/\varepsilon)}$ terminals when ε is a constant. After taking the union of all computed solutions, the number of Steiner vertices and the encoding length of the edge weights can be reduced in a similar way as for the Steiner Tree problem. To obtain a PAS, the algorithm guesses how the planar optimum can be covered by solutions involving only small numbers of terminals, and then computes solutions on these subsets of at most $2^{O(1/\varepsilon)}$ terminals using the same XP algorithm.
Theorem 30
([50]). For the Directed Steiner Network problem on planar bidirected graphs a $(1+\varepsilon)$-approximation can be computed in $\max\{2^{k^{2^{O(1/\varepsilon)}}}, n^{2^{O(1/\varepsilon)}}\}$ time for any ε > 0, where k is the number of terminals. Moreover, a $(1+\varepsilon)$-approximate kernel of size $(k/\varepsilon)^{2^{O(1/\varepsilon)}}$ can be computed in polynomial time.

4.5. Cut Problems

Starting from Menger’s theorem and the corresponding algorithm for s-t Cut, graph cut problems have always been at the heart of combinatorial optimization. While many natural generalizations of s-t Cut are NP-hard, further study of these cut problems yielded beautiful techniques such as flow-cut gaps and metric embeddings in approximation algorithms [217,218], and also important separators and randomized contractions in parameterized algorithms [219,220,221,222].

4.5.1. Multicut

An instance of Undirected Multicut (resp. Directed Multicut) is an undirected (resp. directed) graph G = (V, E) with k pairs of vertices $(s_1, t_1), \dots, (s_k, t_k)$. The goal is to remove the minimum number of edges such that there is no path from $s_i$ to $t_i$ for any $i \in [k]$. Undirected Multiway Cut (resp. Directed Multiway Cut) is the special case of Undirected Multicut (resp. Directed Multicut) where k vertices are given as terminals and the goal is to ensure that there is no path between any pair of terminals. These problems have been actively studied from both the approximation and the parameterized algorithms perspectives. We survey parameterized approximation algorithms for these problems with parameters k and the solution size OPT.
Undirected Multicut. Undirected Multicut admits an O(log k)-approximation algorithm [223] in polynomial time, and is NP-hard to approximate within any constant factor assuming the Unique Games Conjecture [224]. Undirected Multiway Cut admits a 1.2965-approximation algorithm [225] in polynomial time, and is NP-hard to approximate within a factor 1.20016 [226]. Undirected Multicut (and thus Undirected Multiway Cut) admits an exact algorithm parameterized by OPT [219,220].
With k as a parameter, we cannot hope for an exact algorithm or an approximation scheme, since even Undirected Multiway Cut with 3 terminals is NP-hard to approximate within a factor $12/11 - \varepsilon$ for any ε > 0 under the Unique Games Conjecture. However, for Undirected Multicut with k pairs $(s_1, t_1), \dots, (s_k, t_k)$, one can reduce it to $k^{O(k)}$ instances of Undirected Multiway Cut with at most 2k terminals, by guessing a partition of $s_1, t_1, \dots, s_k, t_k$ according to the connected components containing them in the optimal solution (e.g., $s_i$ and $t_i$ should always be in different groups), merging the vertices in the same group into one vertex, and solving Undirected Multiway Cut with the merged vertices as terminals. This yields a 1.2965-approximation algorithm for Undirected Multicut that runs in time $k^{O(k)} n^{O(1)}$.
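The reduction just described can be sketched end-to-end. The code below enumerates group assignments of the 2k endpoints, merges each group into a super-node, and solves Undirected Multiway Cut on the merged graph; as the Multiway Cut subroutine we plug in the classical isolating-cut $(2 - 2/k)$-approximation rather than the 1.2965-approximation of [225], so this sketch only certifies a factor 2:

```python
import networkx as nx
from itertools import product

def multiway_cut_isolating(G, terminals):
    """Isolating-cut (2 - 2/|terminals|)-approximation for Multiway Cut:
    min-cut each terminal away from the others, then drop the heaviest cut."""
    cuts = []
    for t in terminals:
        H = G.copy()
        H.add_node("_sink")
        for s in terminals:
            if s != t:
                H.add_edge(s, "_sink", capacity=float("inf"))
        val, (S, _) = nx.minimum_cut(H, t, "_sink")
        crossing = {frozenset(e) for e in G.edges() if (e[0] in S) != (e[1] in S)}
        cuts.append((val, crossing))
    cuts.sort(key=lambda c: c[0])
    union = set()
    for _, crossing in cuts[:-1]:  # the heaviest isolating cut is free
        union |= crossing
    return union

def multicut_by_guessing(G, pairs):
    """Guess, for each endpoint, the component of the optimum containing it
    (s_i and t_i must land in different groups), merge each group into one
    super-node, and run Multiway Cut on the merged graph (k^O(k) guesses)."""
    endpoints = sorted({v for pair in pairs for v in pair})
    best = None
    for assignment in product(range(len(endpoints)), repeat=len(endpoints)):
        group = dict(zip(endpoints, assignment))
        if any(group[s] == group[t] for s, t in pairs):
            continue  # invalid guess: s_i and t_i must be separated
        node = lambda v: ("g", group[v]) if v in group else ("v", v)
        H = nx.Graph()
        for u, v in G.edges():
            a, b = node(u), node(v)
            if a == b:
                continue
            if H.has_edge(a, b):
                H[a][b]["capacity"] += 1  # parallel edges accumulate
            else:
                H.add_edge(a, b, capacity=1)
        terms = [("g", g) for g in set(assignment) if H.has_node(("g", g))]
        merged_cut = multiway_cut_isolating(H, terms)
        cut = [e for e in G.edges()
               if frozenset((node(e[0]), node(e[1]))) in merged_cut]
        if best is None or len(cut) < len(best):
            best = cut
    return best

G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3), (2, 4), (0, 4)], capacity=1)
print(multicut_by_guessing(G, [(0, 3), (1, 4)]))
```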
Some recent results improve or generalize this observation. For graphs with bounded genus g, Cohen-Addad et al. [227] gave an EPAS running in time $f(g, k, \varepsilon) \cdot n \log n$. Chekuri and Madan [228] considered the demand graph H, which is the graph formed by the k edges $(s_1, t_1), \dots, (s_k, t_k)$. When t is the smallest integer such that H does not contain t disjoint edges as an induced subgraph, they presented a 2-approximation algorithm that runs in time $k^{O(t)} n^{O(1)}$.
Directed Multicut. Generally, Directed Multicut is a much harder computational task than Undirected Multicut, in terms of both approximation and parameterized algorithms. Directed Multicut admits a $\min(k, \tilde{O}(n^{11/23}))$-approximation algorithm [229]. It is NP-hard to approximate within a factor $k - \varepsilon$ for any ε > 0 for fixed k [230] under the Unique Games Conjecture, and within a factor $2^{\Omega(\log^{1-\varepsilon} n)}$ for any ε > 0 [231] for general k. Directed Multiway Cut admits a 2-approximation algorithm [232], which is tight even when k = 2 [230]. Parameterizing by OPT, Directed Multicut is FPT for k = 2, but is W[1]-hard even when k = 4 [46]. Directed Multiway Cut, on the other hand, is FPT [221].
Since it is hard to improve upon the trivial k-approximation algorithm even for fixed k [230], parameterizing by k does not yield a better approximation algorithm. Chitnis and Feldmann [233] gave a k/2-approximation algorithm that runs in time $2^{O(\mathrm{OPT}^2)} n^{O(1)}$, and also proved that the problem under the same parameterization is still hard to approximate within a factor 59/58 when k = 4.
Open Question 14. What is the best approximation ratio (as a function of k) achieved by a parameterized algorithm for Directed Multicut with parameter OPT? Will it be close to O(1) or $\Omega(k)$?

4.5.2. Minimum Bisection and Balanced Separator

Given a graph G = (V, E), Minimum Edge Bisection (resp. Minimum Vertex Bisection) asks to remove the fewest edges (resp. vertices) such that the graph is partitioned into two parts A and B with $||A| - |B|| \le 1$. Balanced Edge Separator (resp. Balanced Vertex Separator) is a more relaxed version of the problem where the goal is to bound the size of the largest part by αn for some 1/2 < α < 1. These problems have been actively studied from the approximation algorithms perspective, culminating in $O(\sqrt{\log n})$-approximation algorithms for both Balanced Edge Separator and Balanced Vertex Separator [218,234], and an $O(\log n)$-approximation algorithm for Minimum Edge Bisection [235].
If we parameterize by the size k of the optimal separator, Minimum Edge Bisection admits an exact parameterized algorithm [222]. While Minimum Vertex Bisection is W[1]-hard [219], Feige and Mahdian [236] gave an algorithm that, given $2/3 \le \alpha < 1$ and ε > 0, runs in time $2^{O(k)} n^{O(1)}$ and returns an $(\alpha + \varepsilon)$-balanced separator of size at most k, provided an α-balanced separator of size at most k exists.

4.5.3. k-Cut

Given an undirected graph G = (V, E) and an integer $k \in \mathbb{N}$, the k-Cut problem asks to remove the smallest number of edges such that G is partitioned into at least k non-empty connected components. The edge contraction algorithm of Karger and Stein [237] yields a randomized exact XP algorithm running in time $O(n^{2k})$, which was made deterministic by Thorup [238]. There have been recent improvements to the running time [154,239]. There is also an exact parameterized algorithm with parameter OPT [221,240]. For general k, the problem admits a $(2 - 2/k)$-approximation algorithm [241], and is NP-hard to approximate within a factor $2 - \varepsilon$ for any ε > 0 under the Small Set Expansion Hypothesis [74].
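To illustrate the contraction approach, here is a minimal random-contraction sketch for k-Cut in the spirit of Karger and Stein: contract uniformly random edges until k super-nodes remain, and repeat; roughly $n^{2(k-1)}$ repetitions give constant success probability, matching the $O(n^{2k})$-type running time. This is the plain single-level contraction, not the recursive Karger–Stein speedup:

```python
import random

def contract_k_cut(edges, n, k, trials=None):
    """Random-contraction heuristic for k-Cut.  Each trial contracts
    uniformly random edges until k super-nodes remain; the surviving
    crossing edges form a k-cut.  Returns the best cut found."""
    trials = trials or n ** (2 * (k - 1))  # enough for constant success prob.
    best = None
    for _ in range(trials):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        comps = n
        order = list(edges)
        random.shuffle(order)  # random order == uniform random contractions
        for u, v in order:
            if comps == k:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv  # contract the edge (u, v)
                comps -= 1
        cut = [(u, v) for u, v in edges if find(u) != find(v)]
        if best is None or len(cut) < len(best):
            best = cut
    return best

# Two triangles joined by a bridge: the unique minimum 2-cut is the bridge.
E = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(contract_k_cut(E, 6, 2))
```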
A simple reduction shows that k-Cut captures $(k-1)$-Clique, so an exact FPT algorithm with parameter k is unlikely to exist. Gupta et al. [153] gave a $(2-\delta)$-approximation algorithm for a small universal constant δ > 0 that runs in time $f(k) \cdot n^{O(1)}$. The approximation ratio was improved to 1.81 in [154], and further to 1.66 in [242]. Very recently, Lokshtanov et al. [243] gave a PAS that runs in time $(k/\varepsilon)^{O(k)} n^{O(1)}$, thereby (essentially) resolving the parameterized approximability of k-Cut.

4.6. $\mathcal{F}$-Deletion Problems

Let $\mathcal{F}$ be a vertex-hereditary family of undirected graphs, which means that if $G \in \mathcal{F}$ and H is a vertex-induced subgraph of G, then $H \in \mathcal{F}$ as well. $\mathcal{F}$-Deletion is the problem where, given a graph G = (V, E), we are to find $S \subseteq V$ such that the subgraph induced by $V \setminus S$ (denoted by $G \setminus S$) belongs to $\mathcal{F}$. The goal is to minimize |S|. The natural weighted version, where there is a non-negative weight w(v) for each vertex v and the goal is to minimize the sum of the weights of the vertices in S, is called Weighted $\mathcal{F}$-Deletion.
$\mathcal{F}$-Deletion captures numerous combinatorial optimization problems, including Vertex Cover (when $\mathcal{F}$ includes all graphs with no edges), Feedback Vertex Set (when $\mathcal{F}$ is the set of all forests), and Odd Cycle Transversal (when $\mathcal{F}$ is the set of all bipartite graphs). There are many more interesting graph classes $\mathcal{F}$ studied in structural and algorithmic graph theory; famous examples include planar graphs, perfect graphs, chordal graphs, and graphs of bounded treewidth.
In addition to beautiful structural results that give multiple equivalent characterizations, these graph classes often admit very efficient algorithms for tasks that are believed to be hard on general graphs. Therefore, a systematic study of $\mathcal{F}$-Deletion for more graph classes is not only an interesting algorithmic task by itself, but also a way to obtain better algorithms for other optimization problems when the given graph G is close to a nice class $\mathcal{F}$ (i.e., deleting few vertices from G makes it belong to $\mathcal{F}$). Indeed, some algorithms for Independent Set on noisy planar/minor-free graphs discussed in Section 4.1 use an algorithm for $\mathcal{F}$-Deletion as a subroutine [89].
For the maximization version, where the goal is to maximize $|V \setminus S|$, a powerful but pessimistic characterization is known. Lund and Yannakakis [244] showed that whenever $\mathcal{F}$ is vertex-hereditary and nontrivial (i.e., there are infinitely many graphs in $\mathcal{F}$ and infinitely many outside $\mathcal{F}$), the maximization version is hard to approximate within a factor $2^{\log^{1/2-\varepsilon} n}$ for any ε > 0. So no nontrivial $\mathcal{F}$ is likely to admit even a polylogarithmic approximation algorithm. However, the situation is different for the minimization problem, since Vertex Cover admits a 2-approximation algorithm, while Odd Cycle Transversal [245] and Perfect Deletion [246] are NP-hard to approximate within any constant factor. (The first result assumes the Unique Games Conjecture.) This indicates that a characterization of the approximability of the minimization versions will be more complex and challenging.
There are two (closely related) frameworks to capture large graph classes.
  • Choose a graph width parameter (e.g., treewidth, pathwidth, cliquewidth, rankwidth, etc.) and $k \in \mathbb{N}$. Let $\mathcal{F}$ be the set of graphs G whose chosen width parameter is at most k. The parameter of $\mathcal{F}$-Deletion is k.
  • Choose a notion of subgraph containment (e.g., subgraph, induced subgraph, minor, etc.) and a finite family of forbidden graphs $\mathcal{H}$. Let $\mathcal{F}$ be the set of graphs G that do not contain any graph in $\mathcal{H}$ under the chosen notion. The parameter of $\mathcal{F}$-Deletion is $|\mathcal{H}| := \sum_{H \in \mathcal{H}} |V(H)|$.
Many interesting classes are captured by the above frameworks. For example, to express Feedback Vertex Set, we can take $\mathcal{F}$ to be the set of graphs with treewidth at most 1, or equivalently, the set of graphs that do not contain the triangle $K_3$ as a minor. In the rest of the subsection, we present the known results on $\mathcal{F}$-Deletion under the above two parameterizations. Note that under these two parameterizations, the need for approximation is inherent, since the simplest problem in both frameworks, Vertex Cover, already does not admit a polynomial-time $(2-\varepsilon)$-approximation algorithm under the Unique Games Conjecture.
Finally, we mention that the parameterization by the size of the optimal solution has been studied more actively in the parameterized complexity community, where many important problems are shown to be FPT [247,248,249].
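As a concrete (if exponential-time) illustration of the problem definition, the sketch below solves $\mathcal{F}$-Deletion by brute force when $\mathcal{F}$ is given as a membership oracle; instantiating the oracle with "is a forest" yields the Feedback Vertex Set example above:

```python
from itertools import combinations

def is_forest(vertices, edges):
    """Membership oracle for F = forests (treewidth <= 1), via union-find."""
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # this edge closes a cycle
        parent[ru] = rv
    return True

def f_deletion(vertices, edges, in_family):
    """Smallest S with G - S in the family, by trying all vertex subsets."""
    for size in range(len(vertices) + 1):
        for S in combinations(vertices, size):
            keep = set(vertices) - set(S)
            sub = [(u, v) for u, v in edges if u in keep and v in keep]
            if in_family(keep, sub):
                return set(S)

# Feedback Vertex Set instance: two triangles sharing vertex 2.
E = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
print(f_deletion(range(5), E, is_forest))  # {2} hits both cycles
```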

4.6.1. Treewidth and Planar Minor Deletion

The treewidth of a graph (see Definition 1) is arguably the most well-studied graph width parameter with numerous structural and algorithmic applications. It is one of the most important concepts in the graph minor project of Robertson and Seymour. Algorithmically, Courcelle’s theorem [250] states that every problem expressible in the monadic second-order logic of graphs can be solved in FPT time parameterized by treewidth. We refer the reader to the survey of Bodlaender [251]. Computing treewidth is NP-hard in general [252], but if we parameterize by treewidth, it can be done in FPT time [253], and there is a faster constant-factor parameterized approximation algorithm [254].
Let $k \in \mathbb{N}$ be the parameter. Treewidth k-Deletion (also known as Treewidth k-Modulator in the literature) is the special case of $\mathcal{F}$-Deletion where $\mathcal{F}$ is the set of all graphs with treewidth at most k. Note that the case k = 0 yields Vertex Cover and k = 1 yields Feedback Vertex Set.
Fomin et al. [247] gave a randomized f(k)-approximation algorithm that runs in $g(k) \cdot nm$ time for some computable functions f and g. The approximation ratio was improved by Gupta et al. [255], who gave a deterministic O(log k)-approximation algorithm that runs in $f(k) \cdot n^{O(1)}$ time for some computable f.
This result has immediate applications to minor deletion problems. Let $\mathcal{H}$ be a finite set of graphs, and consider $\mathcal{H}$-Minor Deletion, which is the special case of $\mathcal{F}$-Deletion where $\mathcal{F}$ is the set of all graphs that do not contain any graph in $\mathcal{H}$ as a minor. Its parameterized and kernelization complexity (with parameter OPT) has been actively studied for various families $\mathcal{H}$ [247,256,257].
When $\mathcal{H}$ contains a planar graph H (this case is also known as Planar $\mathcal{H}$-Deletion in the literature), by the polynomial grid-minor theorem [258], any graph $G \in \mathcal{F}$ has treewidth at most $k := \mathrm{poly}(|V(H)|)$. Therefore, in order to solve $\mathcal{H}$-Minor Deletion, one can first solve Treewidth k-Deletion to reduce the treewidth to k and then solve $\mathcal{H}$-Minor Deletion optimally using Courcelle's theorem [250]. Combined with the above algorithm for Treewidth k-Deletion [255], this strategy yields an O(log k)-approximation algorithm that runs in $f(|\mathcal{H}|) \cdot n^{O(1)}$ time.
Beyond Planar $\mathcal{H}$-Deletion, there are not many results known for $\mathcal{H}$-Minor Deletion. The case $\mathcal{H} = \{K_5, K_{3,3}\}$ is called Minimum Planarization and was recently shown to admit an $O(\log^{O(1)} n)$-approximation algorithm in $n^{O(\log n / \log\log n)}$ time [259].
While the unweighted versions of Treewidth k-Deletion and Planar $\mathcal{H}$-Deletion admit approximation algorithms whose approximation ratio depends only on k and not on n, no such algorithm is known for Weighted Treewidth k-Deletion or Weighted Planar $\mathcal{H}$-Deletion. Agrawal et al. [260] gave a randomized $O(\log^{1.5} n)$-approximation algorithm and a deterministic $O(\log^2 n)$-approximation algorithm that run in polynomial time for fixed k, i.e., the degree of the polynomial depends on k. Bansal et al. [89] gave an $O(\log n \log\log n)$-approximation algorithm for the edge deletion version. The only graphs H whose weighted minor deletion problem is known to admit a constant-factor approximation algorithm are the single edge (Weighted Vertex Cover), the triangle (Weighted Feedback Vertex Set), and the diamond [261]. For the weighted versions, no hardness beyond that of Vertex Cover is known.
Open Question 15. Does Weighted Treewidth k-Deletion admit an f(k)-approximation algorithm with parameter k for some function f? Does Treewidth k-Deletion admit a c-approximation algorithm with parameter k for some universal constant c?
Algorithms for Treewidth k-Deletion. Here we present the high-level ideas of [255,260] for Treewidth k-Deletion and Weighted Treewidth k-Deletion, respectively. These two algorithms share the following two important ingredients:
  • Graphs with bounded treewidth admit good separators.
  • There are good approximation algorithms to find such separators.
Given an undirected and vertex-weighted graph G = (V, E) and an integer $k \in \mathbb{N}$, let (Weighted) k-Vertex Separator be the problem whose goal is to remove vertices of minimum total weight so that each connected component has at most k vertices. An algorithm is called an α-bicriteria approximation algorithm if it returns a solution whose total weight is at most $\alpha \cdot \mathrm{OPT}$ and each connected component has at most 1.1k vertices [262]. The case $k = 2n/3$ is called Balanced Separator and has been actively studied in the approximation algorithms community, where the best approximation algorithm achieves an $O(\sqrt{\log n})$-bicriteria approximation [234]. When k is small, an O(log k)-bicriteria approximation is also possible [116].
Weighted Treewidth k-Deletion. Agrawal et al. [260] achieve an $O(\log^{1.5} n)$-approximation for Weighted Treewidth k-Deletion in time $n^{O(k)}$. It would be interesting to see whether the running time can be made FPT with parameter k.
The main structure of their algorithm is top-down recursion. Deleting the optimal solution $S^*$ from G reduces the treewidth to k, so from the tree decomposition of $G \setminus S^*$, there exists a set $M^* \subseteq V \setminus S^*$ with at most k + 1 vertices such that each connected component of $G \setminus (M^* \cup S^*)$ has at most 2n/3 vertices. While we do not know $S^*$, we can exhaustively try every possible $M \subseteq V$ with $|M| \le k + 1$ and use the bicriteria approximation algorithm for Balanced Separator to find M and S such that (1) $|M| \le k + 1$, (2) $w(S) \le O(\sqrt{\log n}) \cdot \mathrm{OPT}$, and (3) each connected component of $G \setminus (M \cup S)$ has at most $1.1 \cdot (2n/3) \le 3n/4$ vertices.
Let $G_1, \dots, G_t$ be the resulting connected components of $G \setminus (S \cup M)$. We solve each $G_i$ recursively to compute $S_i$ such that each $G_i \setminus S_i$ has treewidth at most k. The weight of S was already bounded in terms of OPT, but the weight of M was not, so we finally need to consider the graph induced by $M \cup V(G_1) \cup \dots \cup V(G_t)$ and delete vertices of small weight to ensure small treewidth. However, this task is easy: since each $G_i \setminus S_i$ has treewidth at most k and $|M| \le k + 1$, the considered graph (after removing the sets $S_i$) has treewidth at most 2k + 1, so we can invoke the exact algorithm for graphs of small treewidth to solve the problem optimally. Note that the total weight of the vertices removed in this recursive call is at most $(O(\sqrt{\log n}) + 1) \cdot \mathrm{OPT}$. Since $\sum_i \mathrm{OPT}(G_i) \le \mathrm{OPT}(G)$ and the recursion depth is at most $O(\log n)$, the total approximation ratio is $O(\log^{1.5} n)$.
Treewidth k-Deletion. Gupta et al. [255] give an O(log k)-approximation algorithm that runs in time $f(k) \cdot n^{O(1)}$ for the unweighted version of Treewidth k-Deletion. The main structure of this algorithm is bottom-up iterative refinement. The algorithm maintains a feasible solution $S \subseteq V$ (we can start with S = V), and iteratively uses S to obtain another feasible solution $S'$. If the new solution is not smaller (i.e., $|S'| \ge |S|$), then $|S| \le O(\log k) \cdot \mathrm{OPT}$.
Let us focus on one refinement step with the current feasible solution S. Let $S^*$ be the optimal solution, so that $G \setminus S^*$ has treewidth at most k. We use the following simple lemma, showing the existence of a good separator of G at a finer scale than before.
Lemma 2
([87,255]). Let H be a graph with treewidth at most k, let $T \subseteq V(H)$ be any subset of vertices, and let ε > 0. There exists $R \subseteq V(H)$ such that (1) $|R| \le \varepsilon |T|$ and (2) every connected component of $H \setminus R$ has at most $O(k/\varepsilon)$ vertices from T.
Plugging $H \leftarrow G \setminus S^*$ and $T \leftarrow S$ into the above lemma and letting $S' = R \cup S^*$, we can conclude that there exists $S' \subseteq V$ such that $|S'| \le |R| + |S^*| \le \varepsilon |S| + \mathrm{OPT}$ and each connected component of $G \setminus S'$ has at most $O(k/\varepsilon)$ vertices from S.
How can we find such a set $S'$ efficiently? Note that if S = V, then $S'$ is an $O(k/\varepsilon)$-vertex separator of G. Lee [116] defined a generalization of k-Vertex Separator called k-Subset Vertex Separator, where the input consists of G = (V, E), $S \subseteq V$, and $k \in \mathbb{N}$, and the goal is to remove the smallest number of vertices so that each connected component has at most k vertices from S; he also gave an O(log k)-bicriteria approximation algorithm for it.
Since the above lemma guarantees that the OPT of $O(k/\varepsilon)$-Subset Vertex Separator is at most the OPT of Treewidth k-Deletion plus $\varepsilon |S|$, applying this bicriteria approximation algorithm yields $S'$ such that $|S'| \le O(\log k)(\mathrm{OPT} + \varepsilon |S|)$ and each connected component of $G \setminus S'$ has at most $O(k/\varepsilon)$ vertices from S. Since S is a feasible solution, this implies that the treewidth of each connected component is bounded by $O(k/\varepsilon)$, so we can solve each component optimally in time $f(k/\varepsilon) \cdot n^{O(1)}$. By setting ε = 0.5, we can see that the size of the new solution strictly decreases unless $|S| = O(\log k) \cdot \mathrm{OPT}$, finishing the proof.

4.6.2. Subgraph Deletion

Let H be a fixed pattern graph with k vertices. Given a host graph G, deciding whether H is a subgraph of G (in the usual sense) is known as Subgraph Isomorphism, whose parameterized complexity with various parameters (e.g., k, tw ( H ) , genus ( G ) , etc.) was studied by Marx and Pilipczuk [263].
Guruswami and Lee [115] studied the corresponding vertex deletion problem, H-Subgraph Deletion (called H-Transversal in their paper), which is the special case of $\mathcal{F}$-Deletion where $\mathcal{F}$ is the set of graphs that do not contain H as a subgraph. Note that the problem admits a simple k-approximation algorithm that runs in time $O(n \cdot f(n, H))$, where f(n, H) denotes the time to solve Subgraph Isomorphism with the pattern graph H and a host graph with n vertices. Their main hardness result states that, assuming the Unique Games Conjecture, whenever H is 2-vertex-connected, for any ε > 0, no polynomial-time algorithm (including algorithms running in time $n^{f(k)}$ for any f) can achieve a $(k-\varepsilon)$-approximation. (Without the UGC, they still ruled out a $(k-1-\varepsilon)$-approximation.)
Among the graphs H that are not 2-vertex-connected, there is an O(log k)-approximation algorithm when H is a star (in time $n^{O(1)}$) or a path (in time $f(k) \cdot n^{O(1)}$) [115,116,264]. The algorithm for the k-path follows from the result for Treewidth k-Deletion, because any graph without a k-path has treewidth at most k. Whenever H is a tree with k vertices, detecting a copy of H in a graph G with n vertices can be done in $2^{O(k)} n^{O(1)}$ time [265], and it is open whether there is an O(log k)-approximation algorithm for H-Subgraph Deletion in time $f(k) \cdot n^{O(1)}$.

4.6.3. Other Deletion Problems

Chordal graphs. A graph is chordal if it does not have an induced cycle of length at least 4. Chordal graphs form a subclass of perfect graphs that has been actively studied. Initially motivated by efficient kernels, approximation algorithms for Chordal Deletion have been developed recently. The current best results are a poly(OPT)-approximation [140,266] and an $O(\log^2 n)$-approximation [260].
Edge versions. While this subsection focused on vertex deletion problems, there are some results on the edge deletion, edge addition, and edge modification versions. (Edge modification allows both addition and deletion.) Cao and Sandeep [267] studied Minimum Fill-In, whose goal is to add the minimum number of edges to make a graph chordal. They gave new inapproximability results implying improved time lower bounds for parameterized algorithms. Giannopoulou et al. [268] gave O(1)-approximation algorithms for Planar $\mathcal{H}$-Immersion Deletion parameterized by $\mathcal{H}$. Bliznets et al. [269] considered H-free edge modification for a forbidden induced subgraph H and gave an almost complete characterization of its approximability depending on H.
Directed graphs. There is also a large body of work on parameterized algorithms for vertex deletion problems in directed graphs. While many of the known problems (including Directed Feedback Vertex Set [270]) admit an exact FPT algorithm, Lokshtanov et al. [48] studied Directed Odd Cycle Transversal, and proved that it is W[1]-hard and is unlikely to admit a PAS under the Parameterized Inapproximability Hypothesis (or Gap-ETH). They complemented the result by showing a 2-approximation algorithm running in time $f(\mathrm{OPT}) \cdot n^{O(1)}$.

4.7. Faster Algorithms and Smaller Kernels via Approximation

The focus of this section so far has been on problems whose exact versions are intractable (i.e., W[1]-/W[2]-hard), where the goal is to obtain good approximations in FPT time. In this subsection, we shift our focus slightly by asking: does approximation allow us to find faster algorithms for problems already known to be in FPT?
To illustrate this, let us consider Vertex Cover. It is of course well-known that the exact version of the problem can be solved in FPT time, with the current best running time being $O^*(1.2738^k)$ [271]. The question here would be: if we are allowed to output a $(1+\varepsilon)$-approximate solution, instead of an exact one, can we speed up the algorithm?
To the best of our knowledge, such a question was tackled for the first time by Bourgeois et al. [272] and has been revisited quite a few times in the literature [20,273,274,275,276]. As one might have suspected, the answer to this question is YES, as stated below.
Theorem 31
([20]). Let δ > 0 be such that there exists an $O^*(\delta^k)$-time algorithm for Vertex Cover (e.g., δ = 1.2738). Then, for any ε > 0, there is a $(1+\varepsilon)$-approximation algorithm for Vertex Cover that runs in $O^*(\delta^{(1-\varepsilon)k})$ time.
The main idea of the algorithm is inspired by the “local ratio” method from the approximation algorithms literature (see e.g., [277]), and we sketch it here. The algorithm works in two stages. In the first stage, we run a greedy algorithm: as long as we have picked fewer than 2εk vertices so far and not all edges are covered, pick an uncovered edge and add both of its endpoints to our solution. In the second stage, we run the exact algorithm on the remaining part of the graph to find a vertex cover of size $(1-\varepsilon)k$. Since the first stage runs in polynomial time, the running time of the entire algorithm is dominated by the second stage, whose running time is $O^*(\delta^{(1-\varepsilon)k})$ as desired. The correctness of the algorithm follows from the fact that, for each edge selected in the first stage, the optimal solution must pick at least one endpoint. As a result, the optimal solution must pick at least εk vertices among those considered in the first stage (compared to the 2εk picked by the algorithm). Thus, when the optimal solution is of size at most k, there must be a solution of size at most $(1-\varepsilon)k$ in the second stage, meaning that the algorithm finds such a solution and outputs a vertex cover of size at most $(1+\varepsilon)k$ as claimed.
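The two-stage algorithm just described is easy to state in code. The sketch below uses a simple $O^*(2^k)$ branching algorithm as the exact stage (a stand-in for the $O^*(1.2738^k)$ algorithm of [271]); the structure (greedily buy both endpoints of up to εk uncovered edges, then solve the rest exactly with the reduced budget) is exactly the one analyzed above:

```python
def exact_vc(edges, budget):
    """Simple O*(2^budget) branching: return a vertex cover of size <= budget,
    or None.  Stands in for the faster exact algorithms such as [271]."""
    if not edges:
        return set()
    if budget <= 0:
        return None
    u, v = edges[0]
    for w in (u, v):  # some endpoint of edges[0] is in every vertex cover
        rest = [e for e in edges if w not in e]
        sub = exact_vc(rest, budget - 1)
        if sub is not None:
            return sub | {w}
    return None

def approx_vc(edges, k, eps):
    """(1 + eps)-approximation in O*(delta^((1 - eps) k)) time (Theorem 31)."""
    greedy, remaining, picked = set(), list(edges), 0
    # Stage 1: buy both endpoints of up to eps*k uncovered edges.
    while remaining and picked < eps * k:
        u, v = remaining[0]
        greedy |= {u, v}
        picked += 1
        remaining = [e for e in remaining if u not in e and v not in e]
    # Stage 2: OPT contains >= picked of the bought vertices, so the rest
    # has a cover of size <= k - picked <= (1 - eps) k; solve it exactly.
    rest = exact_vc(remaining, k - picked)
    return None if rest is None else greedy | rest  # size <= (1 + eps) k

# 5-cycle plus a pendant edge; the optimum vertex cover has size 3.
E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (2, 5)]
print(approx_vc(E, k=3, eps=0.5))
```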
The above “approximate a small fraction and brute-force the rest” approach generalizes naturally to problems beyond Vertex Cover. Fellows et al. [20] formalized the method in terms of α-fidelity kernelization and applied it to several problems, including Connected Vertex Cover, d-Hitting Set, and Steiner Tree. For these problems, the method gives a $(1+\varepsilon)$-approximation algorithm that runs in time $O^*(\delta^{(1-\Omega(\varepsilon))k})$, where δ > 0 denotes a constant for which an $O^*(\delta^k)$-time algorithm is known for the exact version of the corresponding problem. The approach, in some form or another, is also applicable both to other parameterized problems [278,279] and to non-parameterized problems (e.g., [272]); since the latter is out of scope for this survey, we will not discuss the specifics here.
An intriguing question related to this line of work is whether the running time of $(1+\varepsilon)$-approximation algorithms must be of the form $O^*(\delta^{(1-\Omega(\varepsilon))k})$. That is, can we get a $(1+o(1))$-approximation for these problems in time $O^*(\lambda^k)$, where λ is a constant strictly smaller than δ? More specifically, we may ask the following:
Open Question 16. Let $\delta > 0$ be the smallest (known) constant such that an $O^*(\delta^k)$-time exact algorithm exists for Vertex Cover. Is there an algorithm that, for any $\varepsilon > 0$, runs in time $f(1/\varepsilon)\cdot O^*(\lambda^k)$ for some constant $\lambda < \delta$?
Of course, the question applies not only to Vertex Cover but to the other problems in the list as well. The informal crux of this question is whether, in the regime of very good approximation factors (i.e., $1 + o(1)$), approximation can still be exploited in such a way that the algorithm works significantly faster than the “approximate a $o(1)$ fraction and then brute force” approach.
Turning back once again to our running example of Vertex Cover, it turns out that algorithms faster than “approximate a small fraction and then brute force” are known [273,275,276], but only in the regime of large approximation ratios. In particular, Brankovic and Fernau [273] give algorithms faster than that of Theorem 31 already for approximation ratios as small as 3/2. The algorithms in [275,276] focus on the case of “barely non-trivial” $(2-\rho)$-approximation factors. (Recall that the greedy algorithm yields a 2-approximation and that, under the Unique Games Conjecture, the problem is NP-hard to approximate to within any constant factor less than 2.) The algorithm in [275] has a running time of $O^*(2^{k/2^{\Omega(1/\rho)}})$, which was later improved in [276] to $O^*(2^{k/2^{\Omega(1/\rho^2)}})$. These running times should be contrasted with that of “approximate a small fraction and then brute force” (i.e., applying Theorem 31 directly with $\varepsilon = 1 - \rho$), which gives an algorithm with running time $O^*(2^{\rho k})$. In other words, Refs. [275,276] improve the “saving factor” in the exponent from $1/\rho$ to $2^{\Omega(1/\rho)}$ and $2^{\Omega(1/\rho^2)}$, respectively. It should be noted, however, that since the known $(2-o(1))$-factor hardness of approximation is shown via the Unique Games Conjecture, and unique games admit subexponential time algorithms [280,281], it is still entirely possible that this regime of approximating Vertex Cover admits subexponential time algorithms as well. This is perhaps the biggest open question in the “barely non-trivial” approximation range:
Open Question 17. Is there an algorithm that runs in $2^{o(k)}\cdot n^{O(1)}$ time and achieves an approximation ratio of $(2-\rho)$ for some absolute constant $\rho > 0$?
Let us now briefly discuss the techniques used in some of the aforementioned works. The algorithms in [273,275] are based on branching in conjunction with certain approximation techniques. (See also [282], where a similar technique is used for the related problem Total Vertex Cover.) A key idea in [273,275] is that (i) if the (average or maximum) degree of the graph is small, then good polynomial-time approximation algorithms are known [283], and (ii) if the degree is large, then branching algorithms are naturally already fast. The second part of [273] involves a delicate branching rule. However, for [275] it is quite simple: for some threshold d (to be specified), as long as there exists a vertex with degree at least d, either (1) with some probability, simply add the vertex to the vertex cover, or (2) branch on both possibilities of it being inside or outside the cover. After this branching finishes and we are left with low-degree graphs, we just run the known polynomial-time approximation algorithms [283] on these graphs. The point here is that the “error” incurred when option (1) is chosen will be absorbed by the approximation. By carefully selecting d and the probability, one can arrive at the desired running time and approximation guarantee. This algorithm is randomized, but can be derandomized using the sparsification lemma [284].
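The following Python sketch (again our own illustration, not the actual algorithm of [275]) shows the shape of this randomized branching. The threshold `d` and probability `p` are left as explicit inputs, since their precise settings as functions of ρ come from the analysis in [275]; the low-degree base case uses the plain greedy 2-approximation as a placeholder for the degree-sensitive algorithms of [283].

```python
import random

def greedy_2approx(edges):
    """Base-case placeholder: the textbook greedy 2-approximation.
    On low-degree graphs, better ratios are known [283]."""
    cover, rest = set(), list(edges)
    while rest:
        u, v = rest[0]
        cover |= {u, v}
        rest = [e for e in rest if u not in e and v not in e]
    return cover

def branch_vc(edges, d, p):
    """While some vertex w has degree >= d: with probability p commit w
    to the cover (option 1); otherwise branch on both possibilities of w
    being inside or outside the cover (option 2)."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    high = [w for w in degree if degree[w] >= d]
    if not high:
        return greedy_2approx(edges)  # low-degree graph reached
    w = high[0]
    minus_w = [e for e in edges if w not in e]
    if random.random() < p:
        return {w} | branch_vc(minus_w, d, p)  # option (1): just take w
    # Option (2a): w is in the cover.
    take = {w} | branch_vc(minus_w, d, p)
    # Option (2b): w is outside, so all of its neighbours must be inside.
    nbrs = {u if v == w else v for u, v in edges if w in (u, v)}
    drop = nbrs | branch_vc(
        [e for e in edges if not ({e[0], e[1]} & (nbrs | {w}))], d, p)
    return take if len(take) <= len(drop) else drop
```

The “error” of option (1), i.e., committing to a vertex that the optimum avoids, is exactly what the approximation guarantee of the base case absorbs in the analysis of [275].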
To the best of our knowledge, this “barely non-trivial approximation” regime has not been studied beyond Vertex Cover. In particular, while Bansal et al. [275] apply their techniques to several other problems, these are not parameterized problems, and we are not aware of any other parameterized study related to the regime discussed here.
Parallel to the running time questions we have discussed so far, one may ask an analogous question in the kernelization regime: does approximation allow us to find smaller kernels for problems that already admit polynomial-size kernels? As is the case with exact algorithms, parameterized approximation algorithms go hand in hand with approximate kernels. Indeed, many of the algorithmic improvements mentioned above can also be viewed as improvements in terms of kernel size. In particular, recall the proof sketch of Theorem 31 for Vertex Cover. If we stop before the brute-force second stage, then we are left with a $(1+\varepsilon)$-approximate kernel. It is also not hard to argue that, by for instance applying the standard kernelization with at most $2k$ vertices at the end, we are left with at most $2(1-\varepsilon)k$ vertices. This improves upon the best known bound of $2k - \Theta(\log k)$ for the exact kernel [285]. A similar improvement is known for d-Hitting Set as well [20].
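As an illustration, the first stage above can be phrased directly as a $(1+\varepsilon)$-approximate kernelization; the following hedged Python sketch does so, only indicating in a comment the concluding standard kernelization with at most $2k'$ vertices (e.g., the LP-based reduction of Nemhauser and Trotter [127]), which we do not implement here.

```python
import math

def approx_kernel_vc(edges, k, eps):
    """Return (reduced_edges, new_budget, committed): any vertex cover C
    of the reduced instance with |C| <= new_budget lifts to the cover
    committed | C of size <= (1+eps)*k in the original graph."""
    committed, remaining = set(), list(edges)
    # Greedy stage, as in the proof sketch of Theorem 31.
    while remaining and len(committed) < 2 * eps * k:
        u, v = remaining[0]
        committed |= {u, v}
        remaining = [e for e in remaining if u not in e and v not in e]
    new_budget = math.floor((1 - eps) * k)
    # Applying a standard 2k'-vertex kernelization (e.g., Nemhauser-Trotter)
    # to (remaining, new_budget) would leave at most 2*(1-eps)*k vertices.
    return remaining, new_budget, committed
```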

5. Future Directions

Although we have provided open questions along the way, we end this survey by zooming out and discussing some general future directions and meta-questions, which we find interesting and which could form the basis of future work.

5.1. Approximation Factors

The quality of a polynomial-time approximation algorithm is mainly measured by the obtainable approximation factor α: the smaller it is, the more feasibly solvable the problem is. Therefore, a lot of work has been invested into determining the smallest obtainable approximation factor α for all kinds of computationally hard problems. In the non-parameterized (i.e., NP-hardness) world, a whole spectrum of approximability has been discovered (cf. [3,4]): the most feasibly solvable NP-hard problems (e.g., the Knapsack problem) admit a so-called polynomial-time approximation scheme (PTAS), which is an algorithm computing a $(1+\varepsilon)$-approximation for any given constant $\varepsilon > 0$. Some problems can be shown not to admit a PTAS (under reasonable complexity assumptions), but still allow constant approximation factors (e.g., the Steiner Tree problem). Yet others can only be approximated within a polylogarithmic factor (e.g., the Set Cover problem), while some are even harder than this, as the best approximation factor obtainable is polynomial in the input size (e.g., the Clique problem).
In contrast to polynomial-time approximation algorithms, a full spectrum of obtainable approximation ratios is still missing when allowing parameterized runtimes. Instead, only some scattered basic results are known. In particular, most parameterized approximation results belong to one of the following categories:
  • A parameterized approximation scheme (PAS) exists, i.e., for any constant $\varepsilon > 0$ a $(1+\varepsilon)$-approximation can be computed in $f(k)\cdot n^{O(1)}$ time for some parameter k. These are currently the most prevalent types of results in the literature. To mention just one example, the Steiner Tree problem is APX-hard, but admits a PAS [202] when parameterized by the number of non-terminals (so-called Steiner vertices) in the optimum solution (cf. Section 4.4).
  • A lower bound excluding any non-trivial approximation factor exists. For example, under ETH the Dominating Set problem has no $g(k)$-approximation in $f(k)\cdot n^{o(k)}$ time [35] for any functions g and f, where k is the size of the smallest dominating set.
  • A polynomial-time approximation algorithm can achieve a similar approximation ratio, i.e., the parameterization is not very helpful. For instance, for the k-Center problem [286] a 2-approximation can be computed in polynomial time [287], but even when parameterizing by k no $(2-\varepsilon)$-approximation is possible [187] for any $\varepsilon > 0$, under standard complexity assumptions. A similar situation holds for Max k-Coverage, which we discussed in Section 4.2.2.
  • Constant or logarithmic approximation ratios can be shown that beat any approximation ratio obtainable in polynomial time. For instance, consider the Strongly Connected Steiner Subgraph problem: under standard complexity assumptions, no polynomial-time $O(\log^{2-\varepsilon} n)$-approximation algorithm exists for this problem [214], and there is no FPT algorithm parameterized by the number k of terminals [213]. However, it is not hard to compute a 2-approximation in $2^{O(k)}\cdot n^{O(1)}$ time [215], and no $(2-\varepsilon)$-approximation algorithm with runtime $f(k)\cdot n^{O(1)}$ exists [50] under Gap-ETH, for any function f and any $\varepsilon > 0$ (cf. Section 4.4).
For many problems discussed in this survey, including Densest k-Subgraph and Steiner Tree with bounded doubling/highway dimension, it has not been determined to which category they belong. There are also many problems in the final category for which asymptotically tight approximation ratios have not been found, including Directed Multicut and Treewidth k-Deletion (both weighted and unweighted). The parameterized approximability of H-Minor Deletion for non-planar H is also wide open, except for Minimum Planarization ($H = \{K_5, K_{3,3}\}$) [259]. It is an immediate but still interesting direction to prove tight parameterized approximation ratios for these (and more) problems.
Digressing, we remark that this survey does not include FPT approximation of counting problems, such as approximately counting the number of k-paths in a graph. The best known $(1+\varepsilon)$-multiplicative-factor algorithm [288,289] for counting k-paths runs in time $4^k f(\varepsilon)\cdot \mathrm{poly}(n)$ for some subexponential function f (cf. [290]). So a natural question is: can we count k-paths approximately in time $c^k\cdot\mathrm{poly}(n)$, where c is as close as possible to the base of the running time for deciding the existence of a k-path in a graph (the best currently known c is roughly 1.657 [291,292])?

5.2. Parameterized Running Times

The quality of FPT algorithms is mainly measured by the obtainable runtime. Given a parameter k, for some problems the optimum solution can be computed in $f(k)\cdot n^{g(k)}$ time, for some functions f and g independent of the input size n (i.e., the degree of the polynomial also depends on the parameter). If such an algorithm exists the problem is slice-wise polynomial (XP), and the algorithm is called an XP algorithm. A typical example is when a solution of size k is to be found within a data set of size n, in which case an $n^{O(k)}$-time exhaustive search algorithm often exists. However, an FPT algorithm with runtime, say, $O(2^k n)$ is a lot more efficient than an XP algorithm with runtime $n^{O(k)}$, and therefore the aim is usually to find FPT algorithms, while XP algorithms are considered prohibitively slow. The discovery of the W-hierarchy in complexity theory has paved the way to providing evidence that an FPT algorithm is unlikely to exist. Assuming ETH, it is even possible to provide lower bounds on the runtimes obtainable by any FPT or XP algorithm. Similar to approximation algorithms, this has led to the discovery of a spectrum of tractability (cf. [6]): starting from slightly sub-exponential $2^{O(\sqrt{k})}\cdot n^{O(1)}$ time, through single exponential $2^{O(k)}\cdot n^{O(1)}$ time, to double exponential $2^{2^{O(k)}}\cdot n^{O(1)}$ time for FPT algorithms with matching asymptotic lower bounds under ETH (e.g., for the Planar Vertex Cover, Vertex Cover, and Edge Clique Cover problems, respectively, each parameterized by the solution size). For XP algorithms, asymptotically tight runtime bounds of the form $n^{O(k)}$ and $n^{O(\sqrt{k})}$ can be obtained under ETH (e.g., for the Clique problem parameterized by the solution size, and the Planar Bidirected Steiner Network problem parameterized by the number of terminals [50], respectively). Finally, problems that are NP-hard when the given parameter is constant do not even allow XP algorithms unless P = NP (e.g., the Graph Colouring problem where the parameter is the number of colours).
In terms of tight runtime bounds, existing results on parameterized approximation algorithms are few and far between. In particular, most of them show that for a given parameter k one of the following cases applies.
  • An approximation is possible in $f(k)\cdot n^{O(1)}$ time for some function f. Most current results are only concerned with the existence of an algorithm with this type of runtime, i.e., they do not provide any evidence that the obtained runtime is best possible, nor try to optimize it. The only known lower bounds exclude certain types of approximation schemes when a hardness result for the parameterization by the solution size exists. For instance, it is known that if some problem does not admit a $2^{o(k)}\cdot n^{O(1)}$ time algorithm for this parameter k, then it also does not admit an EPTAS with runtime $2^{o(1/\varepsilon)}\cdot n^{O(1)}$ (cf. [5,8]).
  • A certain approximation ratio cannot be obtained in $f(k)\cdot n^{O(1)}$ time for any function f. For example, it is known that while a 2-approximation for the Strongly Connected Steiner Subgraph problem can be computed in $2^{O(k)}\cdot n^{O(1)}$ time [215], where k is the number of terminals, no $(2-\varepsilon)$-approximation can be computed in $f(k)\cdot n^{O(1)}$ time [50] for any function f, under Gap-ETH (cf. Section 4.4).
Hence, matching lower bounds on the time needed to compute an approximation are missing. For example, is the runtime of $2^{O(k)}\cdot n^{O(1)}$ best possible to compute a 2-approximation for the Strongly Connected Steiner Subgraph problem? Could there be a $2^{o(k)}\cdot n^{O(1)}$ time algorithm to compute a 2-approximation as well? For PASs the exact obtainable runtime is often elusive, even if certain types of approximation schemes can be excluded. For instance, for the Steiner Tree problem parameterized by the number k of Steiner vertices in the optimum solution, a $(1+\varepsilon)$-approximation can be computed in $2^{O(k^2/\varepsilon^4)}\cdot n^{O(1)}$ time [202]. Is this dependence on k and ε best possible? Could there be a $2^{O(k/\varepsilon^4)}\cdot n^{O(1)}$ or $2^{O(k^2/\varepsilon)}\cdot n^{O(1)}$ time algorithm as well?
We remark that, for problems for which straightforward algorithms are known to be (essentially) the best possible in FPT time, or for which an improvement over polynomial time approximation is not possible, sometimes tight running time lower bounds are known in conjunction with tight inapproximability ratios. This includes k-Dominating Set (Section 3.1.2), k-Clique (Section 3.2.1) and Max k-Coverage (Section 4.2.2).

5.3. Kernel Sizes

The development of compositionality has led to a theory from which lower bounds on the size of the smallest possible kernel of a problem can be derived (under reasonable complexity assumptions). The spectrum (cf. [6]) here reaches from polynomial-sized kernels (e.g., for any $q \geq 3$ and $\varepsilon > 0$ the q-SAT problem parameterized by the number of variables n has no $O(n^{q-\varepsilon})$-sized kernel) to exponential-sized kernels (e.g., the Steiner Tree problem parameterized by the number of terminals does not admit any polynomial-sized kernel despite being FPT).
For approximate kernels, only a small number of publications exist, and the few known results fall into two categories:
  • A polynomial-sized approximate kernelization scheme (PSAKS) exists, i.e., for any ε > 0 there is a ( 1 + ε ) -approximate kernelization algorithm that computes a ( 1 + ε ) -approximate kernel of size polynomial in the parameter k. For example, the Steiner Tree problem admits a PSAKS for both the parameterization in the number of terminals [18] and in the number of Steiner vertices in the optimum [202], even though neither of these two parameters admits a polynomial-sized (exact) kernel.
  • A lower bound excluding any approximation factor for polynomial-sized kernels exists. For example, the Longest Path problem parameterized by the maximum path length has no α -approximate polynomial-sized kernel for any α [18], despite being FPT for this parameter [6].
Hence, again the intermediate cases, for which tight constant or logarithmic approximation factors can be proved for polynomial-sized kernels, are missing. Studying approximate kernelization algorithms, however, is of undeniable importance to the field of parameterized approximation algorithms, as witnessed by the importance of exact kernelization to fixed-parameter tractability.

5.4. Completeness in Hardness of Approximation

A final direction we would like to highlight is to obtain more completeness in inapproximability results. Most of the results so far on FPT hardness of approximation either (i) rely on a gap hypothesis, or (ii) yield hardness in terms of the W-hierarchy while the exact version of the problem is known to be complete at an even higher level (e.g., Dominating Set is known to be W[1]-hard to approximate, but its exact version is W[2]-complete). We have discussed (i) extensively in Section 3.2 and some examples of (ii) in Section 3.1. There are also some examples of (ii) that are not covered here; for instance, Marx [293] showed W[t]-hardness for certain monotone/anti-monotone circuit satisfiability problems whose exact versions are known to be complete for higher levels of the W-hierarchy. The situation here is unlike that in the theory of NP-hardness of approximation, where the PCP Theorem [21,22] implies NP-completeness of optimization problems [294].
Thus, in the parameterized inapproximability arena, the main question here is whether we can prove completeness results for hardness of approximation for the aforementioned problems. The two important examples here are: is k-Clique W[1]-hard to approximate, and is k-Dominating Set W[2]-hard to approximate? As discussed in Section 3.2, the former is also closely related to resolving PIH.
Finally, we note that, while completeness results are somewhat rare in FPT hardness of approximation, some are known. We give two such examples here. First is the k-Steiner Orientation problem, discussed in Section 3.1.3; it is W[1]-complete to approximate [44]. Second is the Monotone Circuit Satisfiability problem (without depth bound), which was proved to be W[P]-complete by Marx [293]. However, it does not seem clear to us whether these techniques can be applied elsewhere, e.g., for k-Clique.

Author Contributions

Writing—original draft and writing—review and editing, A.E.F., K.C.S., E.L. and P.M. All authors have read and agreed to the published version of the manuscript.

Funding

Andreas Emil Feldmann is supported by the Czech Science Foundation GACR (grant #19-27871X), and by the Center for Foundations of Modern Computer Science (Charles Univ. project UNCE/SCI/004). Karthik C. S. is supported by ERC-CoG grant 772839, the Israel Science Foundation (grant number 552/16), and from the Len Blavatnik and the Blavatnik Family foundation. Euiwoong Lee is supported by the Simons Collaboration on Algorithms and Geometry.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cobham, A. The intrinsic computational difficulty of functions. In Proceedings of the 1964 Congress for Logic, Methodology, and the Philosophy of Science, Paris, France, 23–25 July 1964; pp. 24–30. [Google Scholar]
  2. Edmonds, J. Paths, Trees, and Flowers. Can. J. Math. 1965, 17, 449–467. [Google Scholar] [CrossRef]
  3. Vazirani, V.V. Approximation Algorithms; Springer: Berlin, Germany, 2001. [Google Scholar]
  4. Williamson, D.P.; Shmoys, D.B. The Design of Approximation Algorithms; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  5. Downey, R.G.; Fellows, M.R. Fundamentals of Parameterized Complexity; Springer: Berlin, Germany, 2013; Volume 4. [Google Scholar]
  6. Cygan, M.; Fomin, F.V.; Kowalik, L.; Lokshtanov, D.; Marx, D.; Pilipczuk, M.; Pilipczuk, M.; Saurabh, S. Parameterized Algorithms; Springer: Berlin, Germany, 2015. [Google Scholar]
  7. Cai, L.; Chen, J. On fixed-parameter tractability and approximability of NP optimization problems. J. Comput. Syst. Sci. 1997, 54, 465–474. [Google Scholar] [CrossRef] [Green Version]
  8. Marx, D. Parameterized complexity and approximation algorithms. Comput. J. 2008, 51, 60–78. [Google Scholar] [CrossRef]
  9. Flum, J.; Grohe, M. Parameterized Complexity Theory; Springer: Berlin, Germany, 2006. [Google Scholar]
  10. Rubinstein, A.; Williams, V.V. SETH vs. Approximation. SIGACT News 2019, 50, 57–76. [Google Scholar] [CrossRef]
  11. Cesati, M.; Trevisan, L. On the efficiency of polynomial time approximation schemes. Inf. Process. Lett. 1997, 64, 165–171. [Google Scholar] [CrossRef]
  12. Cygan, M.; Lokshtanov, D.; Pilipczuk, M.; Pilipczuk, M.; Saurabh, S. Lower Bounds for Approximation Schemes for Closest String. In Proceedings of the 15th Scandinavian Symposium and Workshops on Algorithm Theory, Reykjavik, Iceland, 22–24 June 2016. [Google Scholar]
  13. Cai, L.; Chen, J. On fixed-parameter tractability and approximability of NP-hard optimization problems. In Proceedings of the IEEE 2nd Israel Symposium on Theory and Computing Systems, Natanya, Israel, 7–9 June 1993; pp. 118–126. [Google Scholar]
  14. Chen, J.; Huang, X.; Kanj, I.A.; Xia, G. Polynomial time approximation schemes and parameterized complexity. Discret. Appl. Math. 2007, 155, 180–193. [Google Scholar] [CrossRef] [Green Version]
  15. Kratsch, S. Polynomial kernelizations for MIN F+Π1 and MAX NP. Algorithmica 2012, 63, 532–550. [Google Scholar] [CrossRef] [Green Version]
  16. Guo, J.; Kanj, I.; Kratsch, S. Safe approximation and its relation to kernelization. In International Symposium on Parameterized and Exact Computation; Springer: Berlin, Germany, 2011; pp. 169–180. [Google Scholar]
  17. Cai, L.; Chen, J.; Downey, R.G.; Fellows, M.R. Advice Classes of Parameterized Tractability. Ann. Pure Appl. Log. 1997, 84, 119–138. [Google Scholar] [CrossRef] [Green Version]
  18. Lokshtanov, D.; Panolan, F.; Ramanujan, M.; Saurabh, S. Lossy Kernelization. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, Montreal, QC, Canada, 19–23 June 2017; pp. 224–237. [Google Scholar]
  19. Hermelin, D.; Kratsch, S.; Soltys, K.; Wahlström, M.; Wu, X. A Completeness Theory for Polynomial (Turing) Kernelization. Algorithmica 2015, 71, 702–730. [Google Scholar] [CrossRef] [Green Version]
  20. Fellows, M.R.; Kulik, A.; Rosamond, F.; Shachnai, H. Parameterized approximation via fidelity preserving transformations. In International Colloquium on Automata, Languages, and Programming; Springer: Berlin/Heidelberg, Germany, 2012; pp. 351–362. [Google Scholar]
  21. Arora, S.; Lund, C.; Motwani, R.; Sudan, M.; Szegedy, M. Proof Verification and the Hardness of Approximation Problems. J. ACM 1998, 45, 501–555. [Google Scholar] [CrossRef]
  22. Arora, S.; Safra, S. Probabilistic Checking of Proofs; A New Characterization of NP. In Proceedings of the 33rd Annual Symposium on Foundations of Computer Science, Pittsburgh, PA, USA, 24–27 October 1992; pp. 2–13. [Google Scholar] [CrossRef]
  23. Lin, B. The Parameterized Complexity of the k-Biclique Problem. J. ACM 2018, 65, 34:1–34:23. [Google Scholar] [CrossRef]
  24. Karthik, C.S.; Manurangsi, P. On Closest Pair in Euclidean Metric: Monochromatic is as Hard as Bichromatic. In Proceedings of the 10th Innovations in Theoretical Computer Science Conference ITCS, San Diego, CA, USA, 10–12 January 2019; pp. 17:1–17:16. [Google Scholar] [CrossRef]
  25. Chen, Y.; Lin, B. The Constant Inapproximability of the Parameterized Dominating Set Problem. SIAM J. Comput. 2019, 48, 513–533. [Google Scholar] [CrossRef] [Green Version]
  26. Lin, B. A Simple Gap-Producing Reduction for the Parameterized Set Cover Problem. In Proceedings of the 46th International Colloquium on Automata, Languages, and Programming ICALP, Patras, Greece, 9–12 July 2019; pp. 81:1–81:15. [Google Scholar] [CrossRef]
  27. Kann, V. On the Approximability of NP-complete Optimization Problems. Ph.D. Thesis, Royal Institute of Technology, Stockholm, Sweden, 1992. [Google Scholar]
  28. Recall that there is a pair of polynomial-time L-reductions between the minimum dominating set problem and the set cover problem. [27]
  29. Feige, U. A threshold of ln n for approximating set cover. J. ACM (JACM) 1998, 45, 634–652. [Google Scholar] [CrossRef]
  30. Lai, W. The Inapproximability of k-Dominating Set for Parameterized AC0 Circuits. Algorithms 2019, 12, 230. [Google Scholar] [CrossRef] [Green Version]
  31. Bhattacharyya, A.; Bonnet, É.; Egri, L.; Ghoshal, S.; Karthik, C.S.; Lin, B.; Manurangsi, P.; Marx, D. Parameterized Intractability of Even Set and Shortest Vector Problem. Electron. Colloq. Comput. Complex. (ECCC) 2019, 26, 115. [Google Scholar]
  32. Downey, R.G.; Fellows, M.R.; Vardy, A.; Whittle, G. The Parametrized Complexity of Some Fundamental Problems in Coding Theory. SIAM J. Comput. 1999, 29, 545–570. [Google Scholar] [CrossRef]
  33. Van Emde-Boas, P. Another NP-Complete Partition Problem and the Complexity of Computing Short Vectors in a Lattice; Report Department of Mathematics; University of Amsterdam: Amsterdam, The Netherlands, 1981. [Google Scholar]
  34. Ajtai, M. The Shortest Vector Problem in L2 is NP-hard for Randomized Reductions (Extended Abstract). In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, Dallas, TX, USA, 23–26 May 1998; pp. 10–19. [Google Scholar] [CrossRef]
  35. Karthik, C.S.; Laekhanukit, B.; Manurangsi, P. On the parameterized complexity of approximating dominating set. J. ACM 2019, 66, 33. [Google Scholar]
  36. Goldreich, O. Computational Complexity: A Conceptual Perspective, 1st ed.; Cambridge University Press: New York, NY, USA, 2008. [Google Scholar]
  37. We could have skipped this boosting step, had we chosen a different good code with distance α but over a larger alphabet. For example, taking the Reed–Solomon code over an alphabet of size log n/(1−α) would have sufficed. We chose not to do so, to keep the proof as elementary as possible.
  38. This reduction (which employs the hypercube set system) is used in [29] for proving hardness of approximating Max k-Coverage; for Set Cover, Feige used a more efficient set system which is not needed in our context.
  39. Chalermsook, P.; Cygan, M.; Kortsarz, G.; Laekhanukit, B.; Manurangsi, P.; Nanongkai, D.; Trevisan, L. From Gap-ETH to FPT-Inapproximability: Clique, Dominating Set, and More. In Proceedings of the 58th IEEE Annual Symposium on Foundations of Computer Science (FOCS), Berkeley, CA, USA, 15–17 October 2017; pp. 743–754. [Google Scholar]
  40. Abboud, A.; Rubinstein, A.; Williams, R.R. Distributed PCP Theorems for Hardness of Approximation in P. In Proceedings of the 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS, Berkeley, CA, USA, 15–17 October 2017; pp. 25–36. [Google Scholar]
  41. Berman, P.; Schnitger, G. On the Complexity of Approximating the Independent Set Problem. Inf. Comput. 1992, 96, 77–94. [Google Scholar] [CrossRef] [Green Version]
  42. Raz, R. A Parallel Repetition Theorem. SIAM J. Comput. 1998, 27, 763–803. [Google Scholar] [CrossRef]
  43. Dinur, I. The PCP theorem by gap amplification. J. ACM 2007, 54, 12. [Google Scholar] [CrossRef]
  44. Wlodarczyk, M. Inapproximability within W[1]: The case of Steiner Orientation. arXiv 2019, arXiv:1907.06529. [Google Scholar]
  45. Cygan, M.; Kortsarz, G.; Nutov, Z. Steiner Forest Orientation Problems. SIAM J. Discret. Math. 2013, 27, 1503–1513. [Google Scholar] [CrossRef] [Green Version]
  46. Pilipczuk, M.; Wahlström, M. Directed Multicut is W[1]-hard, Even for Four Terminal Pairs. TOCT 2018, 10, 13:1–13:18. [Google Scholar] [CrossRef] [Green Version]
  47. We remark that the original conjecture in [48] says that the problem is W[1]-hard to approximate. However, we choose to state the more relaxed form here.
  48. Lokshtanov, D.; Ramanujan, M.S.; Saurabh, S.; Zehavi, M. Parameterized Complexity and Approximability of Directed Odd Cycle Transversal. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, 5–8 January 2020; pp. 2181–2200. [Google Scholar] [CrossRef]
  49. Feige, U.; Goldwasser, S.; Lovász, L.; Safra, S.; Szegedy, M. Interactive Proofs and the Hardness of Approximating Cliques. J. ACM 1996, 43, 268–292. [Google Scholar] [CrossRef] [Green Version]
  50. Chitnis, R.; Feldmann, A.E.; Manurangsi, P. Parameterized Approximation Algorithms for Bidirected Steiner Network Problems. In Proceedings of the 26th Annual European Symposium on Algorithms (ESA), Helsinki, Finland, 20–22 August 2018; pp. 20:1–20:16. [Google Scholar] [CrossRef]
  51. Papadimitriou, C.H.; Yannakakis, M. Optimization, Approximation, and Complexity Classes. J. Comput. Syst. Sci. 1991, 43, 425–440. [Google Scholar] [CrossRef] [Green Version]
  52. Dinur, I. Mildly exponential reduction from gap 3SAT to polynomial-gap label-cover. Electron. Colloq. Comput. Complex. (ECCC) 2016, 23, 128. [Google Scholar]
  53. Manurangsi, P.; Raghavendra, P. A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs. In Proceedings of the 44th International Colloquium on Automata, Languages, and Programming ICALP, Warsaw, Poland, 10–14 July 2017; pp. 78:1–78:15. [Google Scholar] [CrossRef]
  54. The version where n denotes the number of variables is equivalent to the current formulation, because we can always assume without loss of generality that m = O(n) (see [52,53]).
  55. Chen, J.; Huang, X.; Kanj, I.A.; Xia, G. Linear FPT reductions and computational lower bounds. In Proceedings of the 36th Annual ACM Symposium on Theory of Computing (STOC), Chicago, IL, USA, 13–16 June 2004; pp. 212–221. [Google Scholar] [CrossRef]
  56. Chen, J.; Huang, X.; Kanj, I.A.; Xia, G. Strong computational lower bounds via parameterized complexity. J. Comput. Syst. Sci. 2006, 72, 1346–1367. [Google Scholar] [CrossRef] [Green Version]
  57. Bellare, M.; Goldreich, O.; Sudan, M. Free Bits, PCPs, and Nonapproximability-Towards Tight Results. SIAM J. Comput. 1998, 27, 804–915. [Google Scholar] [CrossRef]
  58. Zuckerman, D. Simulating BPP Using a General Weak Random Source. Algorithmica 1996, 16, 367–391. [Google Scholar] [CrossRef]
  59. Dinur, I.; Manurangsi, P. ETH-Hardness of Approximating 2-CSPs and Directed Steiner Network. In Proceedings of the 9th Innovations in Theoretical Computer Science Conference (ITCS), Cambridge, MA, USA, 11–14 January 2018; pp. 36:1–36:20. [Google Scholar]
  60. Bellare, M.; Goldwasser, S.; Lund, C.; Russeli, A. Efficient probabilistically checkable proofs and applications to approximations. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, San Diego, CA, USA, 16–18 May 1993; pp. 294–304. [Google Scholar] [CrossRef]
  61. Moshkovitz, D. The Projection Games Conjecture and the NP-Hardness of ln n-Approximating Set-Cover. Theory Comput. 2015, 11, 221–235. [Google Scholar] [CrossRef]
  62. See also the related Projection Game Conjecture (PGC) [61].
  63. Naturally, we say that two functions fi and fj agree iff fi(x) = fj(x) for all x ∈ Si ∩ Sj.
  64. Raz, R.; Safra, S. A Sub-Constant Error-Probability Low-Degree Test, and a Sub-Constant Error-Probability PCP Characterization of NP. In Proceedings of the Twenty-Ninth Annual ACM Symposium on the Theory of Computing, El Paso, TX, USA, 4–6 May 1997; pp. 475–484. [Google Scholar] [CrossRef]
  65. Impagliazzo, R.; Kabanets, V.; Wigderson, A. New Direct-Product Testers and 2-Query PCPs. SIAM J. Comput. 2012, 41, 1722–1768. [Google Scholar] [CrossRef] [Green Version]
  66. Dinur, I.; Navon, I.L. Exponentially Small Soundness for the Direct Product Z-Test. In Proceedings of the 32nd Computational Complexity Conference, CCC, Riga, Latvia, 6–9 July 2017; pp. 29:1–29:50. [Google Scholar] [CrossRef]
  67. Arora, S.; Babai, L.; Stern, J.; Sweedyk, Z. The Hardness of Approximate Optima in Lattices, Codes, and Systems of Linear Equations. In Proceedings of the 34th Annual Symposium on Foundations of Computer Science, Palo Alto, CA, USA, 3–5 November 1993; pp. 724–733. [Google Scholar] [CrossRef]
  68. Håstad, J. Some optimal inapproximability results. J. ACM 2001, 48, 798–859. [Google Scholar] [CrossRef]
  69. Chan, S.O. Approximation Resistance from Pairwise-Independent Subgroups. J. ACM 2016, 63, 27. [Google Scholar] [CrossRef]
  70. Manurangsi, P. Tight Running Time Lower Bounds for Strong Inapproximability of Maximum k-Coverage, Unique Set Cover and Related Problems (via t-Wise Agreement Testing Theorem). In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), Salt Lake City, UT, USA, 5–8 January 2020; pp. 62–81. [Google Scholar] [CrossRef]
  71. Håstad, J. Clique is Hard to Approximate Within $n^{1-\varepsilon}$. In Proceedings of the 37th Annual Symposium on Foundations of Computer Science, FOCS ’96, Burlington, VT, USA, 14–16 October 1996; pp. 627–636. [Google Scholar] [CrossRef]
  72. Khot, S. Ruling Out PTAS for Graph Min-Bisection, Dense k-Subgraph, and Bipartite Clique. SIAM J. Comput. 2006, 36, 1025–1071. [Google Scholar] [CrossRef]
  73. Bhangale, A.; Gandhi, R.; Hajiaghayi, M.T.; Khandekar, R.; Kortsarz, G. Bi-Covering: Covering Edges with Two Small Subsets of Vertices. SIAM J. Discret. Math. 2017, 31, 2626–2646. [Google Scholar] [CrossRef]
  74. Manurangsi, P. Inapproximability of maximum biclique problems, minimum k-cut and densest at-least-k-subgraph from the small set expansion hypothesis. Algorithms 2018, 11, 10. [Google Scholar] [CrossRef] [Green Version]
  75. We note, however, that strong inapproximability of Biclique is known under stronger assumptions [72,73,74]
  76. Manurangsi, P. Almost-polynomial ratio ETH-hardness of approximating densest k-subgraph. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing STOC, Montreal, QC, Canada, 19–23 June 2017; pp. 954–961. [Google Scholar] [CrossRef] [Green Version]
  77. Raghavendra, P.; Steurer, D. Graph expansion and the unique games conjecture. In Proceedings of the ACM Forty-Second ACM Symposium on Theory of Computing, Cambridge, MA, USA, 6–8 June 2010; pp. 755–764. [Google Scholar]
  78. Alon, N.; Arora, S.; Manokaran, R.; Moshkovitz, D.; Weinstein, O. Inapproximability of Densest k-Subgraph from Average Case Hardness. Unpublished Manuscript.
  79. Again, similar to Biclique, Densest k-Subgraph is known to be hard to approximate under stronger assumptions [72,76,77,78].
  80. Kovári, T.; Sós, V.T.; Turán, P. On a problem of K. Zarankiewicz. Colloq. Math. 1954, 3, 50–57. [Google Scholar] [CrossRef]
  81. Zuckerman, D. Linear degree extractors and the inapproximability of max clique and chromatic number. In Proceedings of the ACM Thirty-Eighth Annual ACM Symposium on Theory of Computing, Seattle, WA, USA, 21–23 May 2006; pp. 681–690. [Google Scholar]
  82. Baker, B.S. Approximation algorithms for NP-complete problems on planar graphs. J. ACM (JACM) 1994, 41, 153–180. [Google Scholar] [CrossRef]
  83. Johnson, D.S.; Garey, M.R. Computers and Intractability: A Guide to the Theory of NP-Completeness; WH Freeman: San Francisco, CA, USA; New York, NY, USA, 1979; Volume 1. [Google Scholar]
  84. Demaine, E.D.; Hajiaghayi, M. Equivalence of local treewidth and linear local treewidth and its algorithmic applications. In Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 11–13 January 2004; pp. 840–849. [Google Scholar]
  85. Grohe, M.; Kawarabayashi, K.I.; Reed, B. A simple algorithm for the graph minor decomposition: Logic meets structural graph theory. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 6–8 January 2013; pp. 414–431. [Google Scholar]
  86. Demaine, E.D.; Hajiaghayi, M. The bidimensionality theory and its algorithmic applications. Comput. J. 2008, 51, 292–302. [Google Scholar] [CrossRef] [Green Version]
  87. Fomin, F.V.; Lokshtanov, D.; Raman, V.; Saurabh, S. Bidimensionality and EPTAS. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, CA, USA, 23–25 January 2011; pp. 748–759. [Google Scholar]
  88. Demaine, E.D.; Hajiaghayi, M.; Kawarabayashi, K.i. Contraction decomposition in H-minor-free graphs and algorithmic applications. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, San Jose, CA, USA, 6–8 June 2011; pp. 441–450. [Google Scholar]
  89. Bansal, N.; Reichman, D.; Umboh, S.W. LP-based robust algorithms for noisy minor-free and bounded treewidth graphs. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, Barcelona, Spain, 16–19 January 2017; pp. 1964–1979. [Google Scholar]
  90. Magen, A.; Moharrami, M. Robust algorithms for on minor-free graphs based on the Sherali-Adams hierarchy. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques; Springer: Berlin, Germany, 2009; pp. 258–271. [Google Scholar]
  91. Demaine, E.D.; Goodrich, T.D.; Kloster, K.; Lavallee, B.; Liu, Q.C.; Sullivan, B.D.; Vakilian, A.; van der Poel, A. Structural Rounding: Approximation Algorithms for Graphs Near an Algorithmically Tractable Class. In Proceedings of the 27th Annual European Symposium on Algorithms (ESA), Dagstuhl, Germany, 9–11 September 2019. [Google Scholar]
  92. Katsikarelis, I.; Lampis, M.; Paschos, V.T. Structurally Parameterized d-Scattered Set. In Proceedings of the Graph-Theoretic Concepts in Computer Science—44th International Workshop WG, Cottbus, Germany, 27–29 June 2018; pp. 292–305. [Google Scholar] [CrossRef] [Green Version]
  93. Katsikarelis, I.; Lampis, M.; Paschos, V.T. Improved (In-)Approximability Bounds for d-Scattered Set. In Proceedings of the Approximation and Online Algorithms—17th International Workshop, WAOA, Munich, Germany, 12–13 September 2019; pp. 202–216, Revised Selected Papers. [Google Scholar] [CrossRef] [Green Version]
  94. Marx, D. Efficient approximation schemes for geometric problems. In European Symposium on Algorithms; Springer: Berlin, Germany, 2005; pp. 448–459. [Google Scholar]
  95. Adamaszek, A.; Wiese, A. Approximation schemes for maximum weight independent set of rectangles. In Proceedings of the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, Berkeley, CA, USA, 27–29 October 2013; pp. 400–409. [Google Scholar]
  96. Grandoni, F.; Kratsch, S.; Wiese, A. Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack. In Proceedings of the 27th Annual European Symposium on Algorithms (ESA), Munich/Garching, Germany, 9–11 September 2019; pp. 53:1–53:16. [Google Scholar]
  97. Pilipczuk, M.; van Leeuwen, E.J.; Wiese, A. Approximation and Parameterized Algorithms for Geometric Independent Set with Shrinking. In Proceedings of the 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS), Aalborg, Denmark, 21–25 August 2017; pp. 42:1–42:13. [Google Scholar]
  98. Clark, B.N.; Colbourn, C.J.; Johnson, D.S. Unit disk graphs. Discret. Math. 1990, 86, 165–177. [Google Scholar] [CrossRef] [Green Version]
  99. Hunt, H.B., III; Marathe, M.V.; Radhakrishnan, V.; Ravi, S.S.; Rosenkrantz, D.J.; Stearns, R.E. NC-Approximation Schemes for NP- and PSPACE-Hard Problems for Geometric Graphs. J. Algorithms 1998, 26, 238–274. [Google Scholar] [CrossRef] [Green Version]
  100. Alber, J.; Fiala, J. Geometric separation and exact solutions for the parameterized independent set problem on disk graphs. J. Algorithms 2004, 52, 134–151. [Google Scholar] [CrossRef]
  101. Stockmeyer, L. Planar 3-colorability is NP-complete. ACM Sigact News 1973, 5, 19–25. [Google Scholar] [CrossRef]
  102. Demaine, E.D.; Hajiaghayi, M.T.; Kawarabayashi, K.i. Algorithmic graph minor theory: Decomposition, approximation, and coloring. In Proceedings of the IEEE 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS’05), Pittsburgh, PA, USA, 23–25 October 2005; pp. 637–646. [Google Scholar]
  103. Sometimes called Improper Coloring.
  104. Belmonte, R.; Lampis, M.; Mitsou, V. Parameterized (Approximate) Defective Coloring. In Proceedings of the 35th Symposium on Theoretical Aspects of Computer Science (STACS), Caen, France, 28 February–3 March 2018; pp. 10:1–10:15. [Google Scholar]
  105. Lampis, M. Parameterized Approximation Schemes Using Graph Widths. In Proceedings of the Automata, Languages, and Programming—41st International Colloquium (ICALP), Copenhagen, Denmark, 8–11 July 2014; pp. 775–786. [Google Scholar]
  106. Fellows, M.R.; Fomin, F.V.; Lokshtanov, D.; Rosamond, F.; Saurabh, S.; Szeider, S.; Thomassen, C. On the complexity of some colorful problems parameterized by treewidth. Inf. Comput. 2011, 209, 143–153. [Google Scholar] [CrossRef] [Green Version]
  107. Corneil, D.G.; Rotics, U. On the Relationship Between Clique-Width and Treewidth. SIAM J. Comput. 2005, 34, 825–847. [Google Scholar] [CrossRef] [Green Version]
  108. Katsikarelis, I.; Lampis, M.; Paschos, V.T. Structural parameters, tight bounds, and approximation for (k, r)-center. Discret. Appl. Math. 2019, 264, 90–117. [Google Scholar] [CrossRef] [Green Version]
  109. In [105] the runtime of these algorithms is stated as $(\log n/\varepsilon)^{O(k)}\, 2^{k\ell}\, n^{O(1)}$, which can be shown to be upper bounded by $(k/\varepsilon)^{O(k\ell)}\, n^{O(1)}$ (see e.g., ([108] Lemma 1)).
  110. Salavatipour, M.R. On sum coloring of graphs. Discret. Appl. Math. 2003, 127, 477–488. [Google Scholar] [CrossRef] [Green Version]
  111. Marx, D. Complexity results for minimum sum edge coloring. Discret. Appl. Math. 2009, 157, 1034–1045. [Google Scholar] [CrossRef] [Green Version]
  112. Giaro, K.; Kubale, M. Edge-chromatic sum of trees and bounded cyclicity graphs. Inf. Process. Lett. 2000, 75, 65–69. [Google Scholar] [CrossRef]
  113. Marx, D. Minimum sum multicoloring on the edges of planar graphs and partial k-trees. In International Workshop on Approximation and Online Algorithms; Springer: Berlin, Germany, 2004; pp. 9–22. [Google Scholar]
  114. Cygan, M. Improved approximation for 3-dimensional matching via bounded pathwidth local search. In Proceedings of the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, Berkeley, CA, USA, 27–29 October 2013; pp. 509–518. [Google Scholar]
  115. Guruswami, V.; Lee, E. Inapproximability of H-Transversal/Packing. In Proceedings of the Approximation, Randomization, and Combinatorial Optimization, Algorithms and Techniques (APPROX/RANDOM), Princeton, NJ, USA, 24–26 August 2015; pp. 284–304. [Google Scholar]
  116. Lee, E. Partitioning a graph into small pieces with applications to path transversal. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, Barcelona, Spain, 16–19 January 2017; pp. 1546–1558. [Google Scholar]
  117. Fomin, F.V.; Le, T.N.; Lokshtanov, D.; Saurabh, S.; Thomassé, S.; Zehavi, M. Subquadratic kernels for implicit 3-hitting set and 3-set packing problems. ACM Trans. Algorithms (TALG) 2019, 15, 1–44. [Google Scholar]
  118. Friggstad, Z.; Salavatipour, M.R. Approximability of packing disjoint cycles. In International Symposium on Algorithms and Computation; Springer: Berlin, Germany, 2007; pp. 304–315. [Google Scholar]
  119. Lokshtanov, D.; Mouawad, A.E.; Saurabh, S.; Zehavi, M. Packing Cycles Faster Than Erdős–Pósa. SIAM J. Discret. Math. 2019, 33, 1194–1215. [Google Scholar] [CrossRef]
  120. Bodlaender, H.L.; Thomassé, S.; Yeo, A. Kernel bounds for disjoint cycles and disjoint paths. Theor. Comput. Sci. 2011, 412, 4570–4578. [Google Scholar] [CrossRef]
  121. Batra, J.; Garg, N.; Kumar, A.; Mömke, T.; Wiese, A. New approximation schemes for unsplittable flow on a path. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, San Diego, CA, USA, 4–6 January 2015; pp. 47–58. [Google Scholar]
  122. Wiese, A. A (1 + ϵ)-approximation for Unsplittable Flow on a Path in fixed-parameter running time. In Proceedings of the 44th International Colloquium on Automata, Languages, and Programming (ICALP), Warsaw, Poland, 10–14 July 2017; pp. 67:1–67:13. [Google Scholar]
  123. Garg, N.; Kumar, A.; Muralidhara, V. Minimizing Total Flow-Time: The Unrelated Case. In International Symposium on Algorithms and Computation; Springer: Berlin/Heidelberg, Germany, 2008; pp. 424–435. [Google Scholar]
  124. Kellerer, H.; Tautenhahn, T.; Woeginger, G. Approximability and nonapproximability results for minimizing total flow time on a single machine. SIAM J. Comput. 1999, 28, 1155–1166. [Google Scholar] [CrossRef]
  125. Wiese, A. Fixed-Parameter approximation schemes for weighted flowtime. In Proceedings of the Approximation, Randomization, and Combinatorial Optimization, Algorithms and Techniques (APPROX/RANDOM 2018), Princeton, NJ, USA, 20–22 August 2018; pp. 28:1–28:19. [Google Scholar]
  126. Buss, J.F.; Goldsmith, J. Nondeterminism Within P. SIAM J. Comput. 1993, 22, 560–572. [Google Scholar] [CrossRef]
  127. Nemhauser, G.L.; Trotter, L.E., Jr. Vertex packings: Structural properties and algorithms. Math. Program. 1975, 8, 232–248. [Google Scholar] [CrossRef]
  128. Abu-Khzam, F.N. A kernelization algorithm for d-Hitting Set. J. Comput. Syst. Sci. 2010, 76, 524–531. [Google Scholar] [CrossRef]
  129. Cygan, M. Deterministic parameterized connected vertex cover. In Scandinavian Workshop on Algorithm Theory; Springer: Berlin, Germany, 2012; pp. 95–106. [Google Scholar]
  130. Dom, M.; Lokshtanov, D.; Saurabh, S. Kernelization Lower Bounds Through Colors and IDs. ACM Trans. Algorithms 2014, 11, 1–20. [Google Scholar] [CrossRef] [Green Version]
  131. Krithika, R.; Majumdar, D.; Raman, V. Revisiting connected vertex cover: FPT algorithms and lossy kernels. Theory Comput. Syst. 2018, 62, 1690–1714. [Google Scholar] [CrossRef] [Green Version]
  132. Majumdar, D.; Ramanujan, M.S.; Saurabh, S. On the Approximate Compressibility of Connected Vertex Cover. arXiv 2019, arXiv:1905.03379. [Google Scholar]
  133. Recall that a bi-kernel is similar to a kernel except that its the output need not be an instance of the original problem. Bi-PSAKS can be defined analogously to PSAKS, but with bi-kernel instead of kernel. In the case of Connected Dominating Set, the bi-kernel outputs an instance of an annotated variant of Connected Dominating Set, where some vertices are marked and do not need to be covered by the solution.
  134. Eiben, E.; Kumar, M.; Mouawad, A.E.; Panolan, F.; Siebertz, S. Lossy Kernels for Connected Dominating Set on Sparse Graphs. In Proceedings of the STACS, Caen, France, 28 February–3 March 2018; Volume 96, pp. 29:1–29:15. [Google Scholar]
  135. Angel, E.; Bampis, E.; Escoffier, B.; Lampis, M. Parameterized power vertex cover. In International Workshop on Graph-Theoretic Concepts in Computer Science; Springer: Berlin, Germany, 2016; pp. 97–108. [Google Scholar]
  136. Dom, M.; Lokshtanov, D.; Saurabh, S.; Villanger, Y. Capacitated domination and covering: A parameterized perspective. In International Workshop on Parameterized and Exact Computation; Springer: Berlin, Germany, 2008; pp. 78–90. [Google Scholar]
  137. See Definition 1 for the definition of the treewidth.
  138. Erdős, P.; Pósa, L. On Independent Circuits Contained in a Graph. Can. J. Math. 1965, 17, 347–352. [Google Scholar] [CrossRef]
  139. Raymond, J.F.; Thilikos, D.M. Recent techniques and results on the Erdős–Pósa property. Discret. Appl. Math. 2017, 231, 25–43. [Google Scholar] [CrossRef] [Green Version]
  140. Kim, E.J.; Kwon, O.j. Erdős–Pósa property of chordless cycles and its applications. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 7–10 January 2018; pp. 1665–1684. [Google Scholar]
  141. Van Batenburg, W.C.; Huynh, T.; Joret, G.; Raymond, J.F. A tight Erdős–Pósa function for planar minors. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, San Diego, CA, USA, 6–9 January 2019; pp. 1485–1500. [Google Scholar]
  142. Cornuejols, G.; Nemhauser, G.L.; Wolsey, L.A. Worst-case and probabilistic analysis of algorithms for a location problem. Oper. Res. 1980, 28, 847–858. [Google Scholar] [CrossRef]
  143. Cohen-Addad, V.; Gupta, A.; Kumar, A.; Lee, E.; Li, J. Tight FPT Approximations for k-Median and k-Means. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019); Baier, C., Chatzigiannakis, I., Flocchini, P., Leonardi, S., Eds.; Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik: Dagstuhl, Germany, 2019; Volume 132, Leibniz International Proceedings in Informatics (LIPIcs), pp. 42:1–42:14. [Google Scholar] [CrossRef]
  144. Badanidiyuru, A.; Kleinberg, R.; Lee, H. Approximating low-dimensional coverage problems. In Proceedings of the Twenty-Eighth Annual Symposium on Computational Geometry, Chapel Hill, NC, USA, 17–20 June 2012; pp. 161–170. [Google Scholar]
  145. Guo, J.; Niedermeier, R.; Wernicke, S. Parameterized complexity of vertex cover variants. Theory Comput. Syst. 2007, 41, 501–520. [Google Scholar] [CrossRef]
  146. Skowron, P.; Faliszewski, P. Chamberlin–Courant Rule with Approval Ballots: Approximating the MaxCover Problem with Bounded Frequencies in FPT Time. J. Artif. Intell. Res. 2017, 60, 687–716. [Google Scholar] [CrossRef] [Green Version]
  147. Manurangsi, P. A Note on Max k-Vertex Cover: Faster FPT-AS, Smaller Approximate Kernel and Improved Approximation. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019); Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik: Wadern, Germany, 2018. [Google Scholar]
  148. The argument of [146] was later independently rediscovered in [147] as well.
  149. Petrank, E. The hardness of approximation: Gap location. Comput. Complex. 1994, 4, 133–157. [Google Scholar] [CrossRef]
  150. Chlamtác, E.; Dinitz, M.; Konrad, C.; Kortsarz, G.; Rabanca, G. The Densest k-Subhypergraph Problem. SIAM J. Discret. Math. 2018, 32, 1458–1477. [Google Scholar] [CrossRef]
  151. Chlamtác, E.; Dinitz, M.; Makarychev, Y. Minimizing the Union: Tight Approximations for Small Set Bipartite Vertex Expansion. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms SODA, Barcelona, Spain, 16–19 January 2017; pp. 881–899. [Google Scholar] [CrossRef] [Green Version]
  152. The problem has also been referred to as Min k-Union and Small Set Bipartite Vertex Expansion in the literature [150,151].
  153. Gupta, A.; Lee, E.; Li, J. An FPT algorithm beating 2-approximation for k-cut. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 7–10 January 2018; pp. 2821–2837. [Google Scholar]
  154. Gupta, A.; Lee, E.; Li, J. Faster exact and approximate algorithms for k-cut. In Proceedings of the 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), Philadelphia, PA, USA, 18–21 October 2018; pp. 113–123. [Google Scholar]
  155. Byrka, J.; Pensyl, T.; Rybicki, B.; Srinivasan, A.; Trinh, K. An improved approximation for k-median, and positive correlation in budgeted optimization. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, Budapest, Hungary, 26–29 August 2014; pp. 737–756. [Google Scholar]
  156. Kanungo, T.; Mount, D.M.; Netanyahu, N.S.; Piatko, C.D.; Silverman, R.; Wu, A.Y. A local search approximation algorithm for k-means clustering. Comput. Geom. 2004, 28, 89–112. [Google Scholar] [CrossRef]
  157. Gonzalez, T.F. Clustering to minimize the maximum intercluster distance. Theor. Comput. Sci. 1985, 38, 293–306. [Google Scholar] [CrossRef] [Green Version]
  158. A special case that has received significant attention assumes P=F. In this case, the best approximation ratio for k-Center becomes 2.
  159. Guha, S.; Khuller, S. Greedy strikes back: Improved facility location algorithms. J. Algorithms 1999, 31, 228–248. [Google Scholar] [CrossRef]
  160. Chen, K. On k-median clustering in high dimensions. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithm, Miami, FL, USA, 22–26 January 2006; pp. 1177–1185. [Google Scholar]
  161. Feldman, D.; Langberg, M. A unified framework for approximating and clustering data. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, San Jose, CA, USA, 6–8 June 2011; pp. 569–578. [Google Scholar]
  162. Calinescu, G.; Chekuri, C.; Pál, M.; Vondrák, J. Maximizing a monotone submodular function subject to a matroid constraint. SIAM J. Comput. 2011, 40, 1740–1766. [Google Scholar] [CrossRef] [Green Version]
  163. Haussler, D. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Inf. Comput. 1992, 100, 78–150. [Google Scholar] [CrossRef] [Green Version]
  164. Lee, E.; Schmidt, M.; Wright, J. Improved and simplified inapproximability for k-means. Inf. Process. Lett. 2017, 120, 40–43. [Google Scholar] [CrossRef] [Green Version]
  165. Cohen-Addad, V.; Karthik, C.S. Inapproximability of Clustering in Lp-metrics. In Proceedings of the 2019 IEEE 60th Annual Symposium on Foundations of Computer Science, Baltimore, MD, USA, 9–12 November 2019. [Google Scholar]
  166. Arora, S.; Raghavan, P.; Rao, S. Approximation Schemes for Euclidean k-Medians and Related Problems. In Proceedings of the Thirtieth Annual ACM Symposium on the Theory of Computing, Dallas, TX, USA, 23–26 May 1998; Volume 98, pp. 106–113. [Google Scholar]
  167. Arora, S. Polynomial time approximation schemes for Euclidean traveling salesman and other geometric problems. J. ACM (JACM) 1998, 45, 753–782. [Google Scholar] [CrossRef]
  168. Kolliopoulos, S.G.; Rao, S. A nearly linear-time approximation scheme for the Euclidean k-median problem. In Proceedings of the European Symposium on Algorithms, Prague, Czech Republic, 16–18 July 1999; Springer: Berlin, Germany, 1999; pp. 378–389. [Google Scholar]
  169. Matoušek, J. On approximate geometric k-clustering. Discret. Comput. Geom. 2000, 24, 61–84. [Google Scholar] [CrossRef]
  170. Bādoiu, M.; Har-Peled, S.; Indyk, P. Approximate clustering via core-sets. In Proceedings of the Thiry-Fourth Annual ACM Symposium on Theory of Computing, Montreal, QC, Canada, 19–21 May 2002; pp. 250–257. [Google Scholar]
  171. De La Vega, W.F.; Karpinski, M.; Kenyon, C.; Rabani, Y. Approximation schemes for clustering problems. In Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing, San Diego, CA, USA, 9–11 June 2003; pp. 50–58. [Google Scholar]
  172. Har-Peled, S.; Mazumdar, S. On coresets for k-means and k-median clustering. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, Chicago, IL, USA, 13–15 June 2004; pp. 291–300. [Google Scholar]
  173. Kumar, A.; Sabharwal, Y.; Sen, S. A simple linear time (1 + ε)-approximation algorithm for k-means clustering in any dimensions. In Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, Rome, Italy, 17–19 October 2004; pp. 454–462. [Google Scholar]
  174. Kumar, A.; Sabharwal, Y.; Sen, S. Linear time algorithms for clustering problems in any dimensions. In International Colloquium on Automata, Languages, and Programming; Springer: Berlin, Germany, 2005; pp. 1374–1385. [Google Scholar]
  175. Feldman, D.; Monemizadeh, M.; Sohler, C. A PTAS for k-means clustering based on weak coresets. In Proceedings of the Twenty-Third Annual Symposium on Computational Geometry, Gyeongju, Korea, 6–8 June 2007; pp. 11–18. [Google Scholar]
  176. Sohler, C.; Woodruff, D.P. Strong coresets for k-median and subspace approximation: Goodbye dimension. In Proceedings of the 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), Paris, France, 7–9 October 2018; pp. 802–813. [Google Scholar]
  177. Becchetti, L.; Bury, M.; Cohen-Addad, V.; Grandoni, F.; Schwiegelshohn, C. Oblivious dimension reduction for k-means: Beyond subspaces and the Johnson-Lindenstrauss lemma. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, Phoenix, AZ, USA, 23–26 June 2019; pp. 1039–1050. [Google Scholar]
  178. Huang, L.; Vishnoi, N.K. Coresets for Clustering in Euclidean Spaces: Importance Sampling is Nearly Optimal. arXiv 2020, arXiv:2004.06263. [Google Scholar]
  179. Braverman, V.; Jiang, S.H.C.; Krauthgamer, R.; Wu, X. Coresets for Clustering in Excluded-minor Graphs and Beyond. arXiv 2020, arXiv:2004.07718. [Google Scholar]
  180. Cohen-Addad, V.; Klein, P.N.; Mathieu, C. Local search yields approximation schemes for k-means and k-median in euclidean and minor-free metrics. SIAM J. Comput. 2019, 48, 644–667. [Google Scholar] [CrossRef] [Green Version]
  181. Friggstad, Z.; Rezapour, M.; Salavatipour, M.R. Local search yields a PTAS for k-means in doubling metrics. SIAM J. Comput. 2019, 48, 452–480. [Google Scholar] [CrossRef] [Green Version]
182. Cohen-Addad, V. A fast approximation scheme for low-dimensional k-means. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 7–10 January 2018; pp. 430–440. [Google Scholar]
  183. Cohen-Addad, V.; Feldmann, A.E.; Saulpic, D. Near-Linear Time Approximation Schemes for Clustering in Doubling Metrics. arXiv 2019, arXiv:1812.08664. [Google Scholar]
  184. Feldmann, A.E.; Marx, D. The Parameterized Hardness of the k-Center Problem in Transportation Networks. In Proceedings of the 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT), Malmö, Sweden, 18–20 June 2018; pp. 19:1–19:13. [Google Scholar] [CrossRef]
  185. Fox-Epstein, E.; Klein, P.N.; Schild, A. Embedding Planar Graphs into Low-Treewidth Graphs with Applications to Efficient Approximation Schemes for Metric Problems. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), San Diego, CA, USA, 6–9 January 2019; pp. 1069–1088. [Google Scholar]
  186. Becker, A.; Klein, P.N.; Saulpic, D. Polynomial-time approximation schemes for k-center, k-median, and capacitated vehicle routing in bounded highway dimension. In Proceedings of the 26th Annual European Symposium on Algorithms (ESA), Helsinki, Finland, 20–22 August 2018; pp. 8:1–8:15. [Google Scholar]
  187. Feldmann, A.E. Fixed Parameter Approximations for k-Center Problems in Low Highway Dimension Graphs. In 42nd International Colloquium on Automata, Languages, and Programming (ICALP); Springer: Berlin/Heidelberg, Germany, 2015; pp. 588–600. [Google Scholar] [CrossRef] [Green Version]
  188. Li, S. Approximating capacitated k-median with (1 + ε)k open facilities. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, Arlington, VA, USA, 10–12 January 2016; pp. 786–796. [Google Scholar]
  189. Demirci, G.; Li, S. Constant Approximation for Capacitated k-Median with (1 + ε)-Capacity Violation. arXiv 2016, arXiv:1603.02324. [Google Scholar]
  190. Adamczyk, M.; Byrka, J.; Marcinkowski, J.; Meesum, S.M.; Włodarczyk, M. Constant factor FPT approximation for capacitated k-median. arXiv 2018, arXiv:1809.05791. [Google Scholar]
  191. Cohen-Addad, V.; Li, J. On the Fixed-Parameter Tractability of Capacitated Clustering. In Proceedings of the 46th International Colloquium on Automata, Languages, and Programming (ICALP), Patras, Greece, 9–12 July 2019; pp. 41:1–41:14. [Google Scholar]
  192. Xu, Y.; Zhang, Y.; Zou, Y. A constant parameterized approximation for hard-capacitated k-means. arXiv 2019, arXiv:1901.04628. [Google Scholar]
  193. Krishnaswamy, R.; Li, S.; Sandeep, S. Constant approximation for k-median and k-means with outliers via iterative rounding. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, Los Angeles, CA, USA, 25–29 June 2018; pp. 646–659. [Google Scholar]
194. Swamy, C. Improved approximation algorithms for matroid and knapsack median problems and applications. ACM Trans. Algorithms 2016, 12, 49. [Google Scholar] [CrossRef] [Green Version]
195. Sometimes the non-metric version of TSP is also considered; it is, however, much harder than the metric one. We only consider the metric version here.
  196. Dreyfus, S.E.; Wagner, R.A. The Steiner problem in graphs. Networks 1971, 1, 195–207. [Google Scholar] [CrossRef]
197. Fuchs, B.; Kern, W.; Mölle, D.; Richter, S.; Rossmanith, P.; Wang, X. Dynamic programming for minimum Steiner trees. Theory Comput. Syst. 2007, 41, 493–500. [Google Scholar] [CrossRef] [Green Version]
  198. Nederlof, J. Fast Polynomial-Space Algorithms Using Möbius Inversion: Improving on Steiner Tree and Related Problems. In Proceedings of the Automata, Languages and Programming, 36th International Colloquium, ICALP, Rhodes, Greece, 5–12 July 2009; pp. 713–725. [Google Scholar]
  199. Borchers, A.; Du, D.Z. The k-Steiner Ratio in Graphs. SIAM J. Comput. 1997, 26, 857–869. [Google Scholar] [CrossRef]
  200. Byrka, J.; Grandoni, F.; Rothvoss, T.; Sanità, L. Steiner Tree Approximation via Iterative Randomized Rounding. J. ACM 2013, 60, 1–33. [Google Scholar] [CrossRef] [Green Version]
  201. Chlebík, M.; Chlebíková, J. The Steiner tree problem on graphs: Inapproximability results. Theor. Comput. Sci. 2008, 406, 207–214. [Google Scholar] [CrossRef] [Green Version]
  202. Dvořák, P.; Feldmann, A.E.; Knop, D.; Masařík, T.; Toufar, T.; Veselý, P. Parameterized Approximation Schemes for Steiner Trees with Small Number of Steiner Vertices. In Proceedings of the 35th Symposium on Theoretical Aspects of Computer Science (STACS), Caen, France, 28 February–3 March 2018; pp. 26:1–26:15. [Google Scholar] [CrossRef]
  203. Babay, A.; Dinitz, M.; Zhang, Z. Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network. In Proceedings of the 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS), Ahmedabad, India, 11–13 December 2018; pp. 33:1–33:22. [Google Scholar]
  204. Hassin, R. Approximation schemes for the restricted shortest path problem. Math. Oper. Res. 1992, 17, 36–42. [Google Scholar] [CrossRef]
205. Böckenhauer, H.-J.; Hromkovič, J.; Kneis, J.; Kupke, J. The parameterized approximability of TSP with deadlines. Theory Comput. Syst. 2007, 41, 431–444. [Google Scholar] [CrossRef] [Green Version]
  206. Papadimitriou, C.H. The Euclidean travelling salesman problem is NP-complete. Theor. Comput. Sci. 1977, 4, 237–244. [Google Scholar] [CrossRef] [Green Version]
  207. Garey, M.R.; Graham, R.L.; Johnson, D.S. The complexity of computing Steiner minimal trees. SIAM J. Appl. Math. 1977, 32, 835–859. [Google Scholar] [CrossRef]
208. Karpinski, M.; Lampis, M.; Schmied, R. New inapproximability bounds for TSP. J. Comput. Syst. Sci. 2015, 81, 1665–1677. [Google Scholar] [CrossRef]
209. In [167] the runtime of these algorithms is stated as $O(n (\log n)^{O(k/\varepsilon)^{k-1}})$, which can be shown to be upper bounded by $k^{O(k/\varepsilon)^{k-1}} \cdot n^2$ (see, e.g., [108] (Lemma 1)).
  210. Gottlieb, L. A Light Metric Spanner. In Proceedings of the 56th Annual Symposium on Foundations of Computer Science, FOCS, Berkeley, CA, USA, 17–20 October 2015; pp. 759–772. [Google Scholar]
  211. Talwar, K. Bypassing the embedding: Algorithms for low dimensional metrics. In Proceedings of the 36th Annual ACM Symposium on Theory of Computing, Chicago, IL, USA, 13–16 June 2004; pp. 281–290. [Google Scholar]
  212. Feldmann, A.E.; Fung, W.S.; Könemann, J.; Post, I. A (1 + ε)-Embedding of Low Highway Dimension Graphs into Bounded Treewidth Graphs. SIAM J. Comput. 2018, 47, 1667–1704. [Google Scholar] [CrossRef] [Green Version]
  213. Guo, J.; Niedermeier, R.; Suchý, O. Parameterized Complexity of Arc-Weighted Directed Steiner Problems. SIAM J. Discret. Math. 2011, 25, 583–599. [Google Scholar] [CrossRef]
  214. Halperin, E.; Krauthgamer, R. Polylogarithmic inapproximability. In Proceedings of the 35th Annual ACM Symposium on Theory of Computing, San Diego, CA, USA, 9–11 June 2003; pp. 585–594. [Google Scholar]
  215. Chitnis, R.; Hajiaghayi, M.; Kortsarz, G. Fixed-Parameter and Approximation Algorithms: A New Look. In Proceedings of the Parameterized and Exact Computation—8th International Symposium, IPEC, Sophia Antipolis, France, 4–6 September 2013; pp. 110–122. [Google Scholar]
216. Sometimes also called Directed Steiner Forest; note, however, that the optimum is not necessarily a forest.
217. Leighton, T.; Rao, S. Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms. J. ACM 1999, 46, 787–832. [Google Scholar] [CrossRef] [Green Version]
218. Arora, S.; Rao, S.; Vazirani, U. Expander flows, geometric embeddings and graph partitioning. J. ACM 2009, 56, 5. [Google Scholar] [CrossRef] [Green Version]
  219. Marx, D. Parameterized graph separation problems. Theor. Comput. Sci. 2006, 351, 394–406. [Google Scholar] [CrossRef] [Green Version]
  220. Marx, D.; Razgon, I. Fixed-parameter tractability of multicut parameterized by the size of the cutset. SIAM J. Comput. 2014, 43, 355–388. [Google Scholar] [CrossRef] [Green Version]
  221. Chitnis, R.; Cygan, M.; Hajiaghayi, M.; Pilipczuk, M.; Pilipczuk, M. Designing FPT algorithms for cut problems using randomized contractions. SIAM J. Comput. 2016, 45, 1171–1229. [Google Scholar] [CrossRef] [Green Version]
  222. Cygan, M.; Lokshtanov, D.; Pilipczuk, M.; Pilipczuk, M.; Saurabh, S. Minimum Bisection is fixed-parameter tractable. SIAM J. Comput. 2019, 48, 417–450. [Google Scholar] [CrossRef] [Green Version]
  223. Garg, N.; Vazirani, V.V.; Yannakakis, M. Approximate max-flow min-(multi) cut theorems and their applications. SIAM J. Comput. 1996, 25, 235–251. [Google Scholar] [CrossRef]
  224. Chawla, S.; Krauthgamer, R.; Kumar, R.; Rabani, Y.; Sivakumar, D. On the hardness of approximating multicut and sparsest-cut. Comput. Complex. 2006, 15, 94–114. [Google Scholar] [CrossRef] [Green Version]
  225. Sharma, A.; Vondrák, J. Multiway cut, pairwise realizable distributions, and descending thresholds. arXiv 2013, arXiv:1309.2729. [Google Scholar]
  226. Bérczi, K.; Chandrasekaran, K.; Király, T.; Madan, V. Improving the Integrality Gap for Multiway Cut. In International Conference on Integer Programming and Combinatorial Optimization; Springer: Berlin, Germany, 2019; pp. 115–127. [Google Scholar]
227. Cohen-Addad, V.; Colin de Verdière, É.; de Mesmay, A. A near-linear approximation scheme for multicuts of embedded graphs with a fixed number of terminals. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 7–10 January 2018; pp. 1439–1458. [Google Scholar]
  228. Chekuri, C.; Madan, V. Approximating multicut and the demand graph. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, Barcelona, Spain, 16–19 January 2017; pp. 855–874. [Google Scholar]
  229. Agarwal, A.; Alon, N.; Charikar, M.S. Improved approximation for directed cut problems. In Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, San Diego, CA, USA, 11–13 June 2007; pp. 671–680. [Google Scholar]
230. Lee, E. Improved Hardness for Cut, Interdiction, and Firefighter Problems. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017); Schloss Dagstuhl–Leibniz-Zentrum für Informatik: Wadern, Germany, 2017. [Google Scholar]
231. Chuzhoy, J.; Khanna, S. Polynomial flow-cut gaps and hardness of directed cut problems. J. ACM 2009, 56, 6. [Google Scholar] [CrossRef]
  232. Naor, J.; Zosin, L. A 2-approximation algorithm for the directed multiway cut problem. In Proceedings of the 38th Annual Symposium on Foundations of Computer Science, Miami Beach, FL, USA, 19–22 October 1997; pp. 548–553. [Google Scholar]
  233. Chitnis, R.; Feldmann, A.E. FPT Inapproximability of Directed Cut and Connectivity Problems. arXiv 2019, arXiv:1910.01934. [Google Scholar]
  234. Feige, U.; Hajiaghayi, M.; Lee, J.R. Improved approximation algorithms for minimum weight vertex separators. SIAM J. Comput. 2008, 38, 629–657. [Google Scholar] [CrossRef] [Green Version]
235. Räcke, H. Optimal hierarchical decompositions for congestion minimization in networks. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, Victoria, BC, Canada, 17–20 May 2008; pp. 255–264. [Google Scholar]
  236. Feige, U.; Mahdian, M. Finding small balanced separators. In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, Seattle, WA, USA, 21–23 May 2006; pp. 375–384. [Google Scholar]
237. Karger, D.R.; Stein, C. A new approach to the minimum cut problem. J. ACM 1996, 43, 601–640. [Google Scholar] [CrossRef]
238. Thorup, M. Minimum k-way cuts via deterministic greedy tree packing. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, Victoria, BC, Canada, 17–20 May 2008; pp. 159–166. [Google Scholar]
  239. Gupta, A.; Lee, E.; Li, J. The number of minimum k-cuts: Improving the Karger-Stein bound. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, Phoenix, AZ, USA, 23–26 June 2019; pp. 229–240. [Google Scholar]
240. Kawarabayashi, K.I.; Thorup, M. The minimum k-way cut of bounded size is fixed-parameter tractable. In Proceedings of the 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, Palm Springs, CA, USA, 22–25 October 2011; pp. 160–169. [Google Scholar]
  241. Saran, H.; Vazirani, V.V. Finding k cuts within twice the optimal. SIAM J. Comput. 1995, 24, 101–108. [Google Scholar] [CrossRef]
  242. Kawarabayashi, K.I.; Lin, B. A nearly 5/3-approximation FPT Algorithm for Min-k-Cut. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, 5–8 January 2020; pp. 990–999. [Google Scholar]
  243. Lokshtanov, D.; Saurabh, S.; Surianarayanan, V. A Parameterized Approximation Scheme for Min k-Cut. arXiv 2020, arXiv:2005.00134. [Google Scholar]
  244. Lund, C.; Yannakakis, M. The approximation of maximum subgraph problems. In International Colloquium on Automata, Languages, and Programming; Springer: Berlin, Germany, 1993; pp. 40–51. [Google Scholar]
245. Khot, S. On the power of unique 2-prover 1-round games. In Proceedings of the Thirty-Fourth Annual ACM Symposium on Theory of Computing, Montreal, QC, Canada, 19–21 May 2002; pp. 767–775. [Google Scholar]
  246. Heggernes, P.; Van’t Hof, P.; Jansen, B.M.; Kratsch, S.; Villanger, Y. Parameterized complexity of vertex deletion into perfect graph classes. In International Symposium on Fundamentals of Computation Theory; Springer: Berlin, Germany, 2011; pp. 240–251. [Google Scholar]
247. Fomin, F.V.; Lokshtanov, D.; Misra, N.; Saurabh, S. Planar F-deletion: Approximation, kernelization and optimal FPT algorithms. In Proceedings of the 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science, New Brunswick, NJ, USA, 20–23 October 2012; pp. 470–479. [Google Scholar]
  248. Marx, D. Chordal deletion is fixed-parameter tractable. Algorithmica 2010, 57, 747–768. [Google Scholar] [CrossRef] [Green Version]
  249. Cao, Y.; Marx, D. Interval deletion is fixed-parameter tractable. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, Portland, OR, USA, 5–7 January 2014; pp. 122–141. [Google Scholar]
  250. Courcelle, B. The monadic second-order logic of graphs. I. Recognizable sets of finite graphs. Inf. Comput. 1990, 85, 12–75. [Google Scholar] [CrossRef] [Green Version]
  251. Bodlaender, H.L. Treewidth: Structure and algorithms. In International Colloquium on Structural Information and Communication Complexity; Springer: Berlin, Germany, 2007; pp. 11–25. [Google Scholar]
252. Arnborg, S.; Corneil, D.G.; Proskurowski, A. Complexity of finding embeddings in a k-tree. SIAM J. Algebr. Discret. Methods 1987, 8, 277–284. [Google Scholar] [CrossRef]
  253. Bodlaender, H.L. A linear-time algorithm for finding tree-decompositions of small treewidth. SIAM J. Comput. 1996, 25, 1305–1317. [Google Scholar] [CrossRef]
254. Bodlaender, H.L.; Drange, P.G.; Dregi, M.S.; Fomin, F.V.; Lokshtanov, D.; Pilipczuk, M. A c^k n 5-Approximation Algorithm for Treewidth. SIAM J. Comput. 2016, 45, 317–378. [Google Scholar] [CrossRef] [Green Version]
  255. Gupta, A.; Lee, E.; Li, J.; Manurangsi, P.; Włodarczyk, M. Losing treewidth by separating subsets. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, San Diego, CA, USA, 6–9 January 2019; pp. 1731–1749. [Google Scholar]
  256. Jansen, B.M.; Pieterse, A. Polynomial Kernels for Hitting Forbidden Minors under Structural Parameterizations. In Proceedings of the 26th Annual European Symposium on Algorithms (ESA), Helsinki, Finland, 20–22 August 2018; pp. 48:1–48:15. [Google Scholar]
  257. Donkers, H.; Jansen, B.M. A Turing Kernelization Dichotomy for Structural Parameterizations of F-Minor-Free Deletion. In International Workshop on Graph-Theoretic Concepts in Computer Science; Springer: Berlin, Germany, 2019; pp. 106–119. [Google Scholar]
258. Chekuri, C.; Chuzhoy, J. Polynomial bounds for the grid-minor theorem. J. ACM 2016, 63, 40. [Google Scholar] [CrossRef]
259. Kawarabayashi, K.I.; Sidiropoulos, A. Polylogarithmic approximation for minimum planarization (almost). In Proceedings of the 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), Berkeley, CA, USA, 15–17 October 2017; pp. 779–788. [Google Scholar]
  260. Agrawal, A.; Lokshtanov, D.; Misra, P.; Saurabh, S.; Zehavi, M. Polylogarithmic Approximation Algorithms for Weighted-F-Deletion Problems. In Proceedings of the Approximation, Randomization, and Combinatorial Optimization, Algorithms and Techniques (APPROX/RANDOM 2018), Princeton, NJ, USA, 20–22 August 2018; pp. 1:1–1:15. [Google Scholar]
  261. Fiorini, S.; Joret, G.; Pietropaoli, U. Hitting diamonds and growing cacti. In International Conference on Integer Programming and Combinatorial Optimization; Springer: Berlin, Germany, 2010; pp. 191–204. [Google Scholar]
262. Here 1.1 can be replaced by 1 + ε for any constant ε > 0.
  263. Marx, D.; Pilipczuk, M. Everything you always wanted to know about the parameterized complexity of Subgraph Isomorphism (but were afraid to ask). In Proceedings of the 31st International Symposium on Theoretical Aspects of Computer Science (STACS), Lyon, France, 5–8 March 2014; pp. 542–553. [Google Scholar]
  264. Ebenlendr, T.; Kolman, P.; Sgall, J. An Approximation Algorithm for Bounded Degree Deletion. Preprint 2009. [Google Scholar]
  265. Alon, N.; Yuster, R.; Zwick, U. Color-coding. J. ACM 1995, 42, 844–856. [Google Scholar] [CrossRef]
  266. Jansen, B.M.; Pilipczuk, M. Approximation and kernelization for chordal vertex deletion. SIAM J. Discret. Math. 2018, 32, 2258–2301. [Google Scholar] [CrossRef]
  267. Cao, Y.; Sandeep, R. Minimum fill-in: Inapproximability and almost tight lower bounds. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, Barcelona, Spain, 16–19 January 2017; pp. 875–880. [Google Scholar]
  268. Giannopoulou, A.C.; Pilipczuk, M.; Raymond, J.F.; Thilikos, D.M.; Wrochna, M. Linear Kernels for Edge Deletion Problems to Immersion-Closed Graph Classes. In Proceedings of the 44th International Colloquium on Automata, Languages, and Programming ICALP, Warsaw, Poland, 10–14 July 2017; pp. 57:1–57:15. [Google Scholar]
269. Bliznets, I.; Cygan, M.; Komosa, P.; Pilipczuk, M. Hardness of approximation for H-free edge modification problems. ACM Trans. Comput. Theory 2018, 10, 9. [Google Scholar] [CrossRef]
  270. Chen, J.; Liu, Y.; Lu, S. Directed feedback vertex set problem is FPT. In Proceedings of the Structure Theory and FPT Algorithmics for Graphs, Digraphs and Hypergraphs, Dagstuhl, Germany, 8–13 July 2007. [Google Scholar]
  271. Chen, J.; Kanj, I.A.; Xia, G. Improved parameterized upper bounds for vertex cover. In International Symposium on Mathematical Foundations of Computer Science; Springer: Berlin, Germany, 2006; pp. 238–249. [Google Scholar]
  272. Bourgeois, N.; Escoffier, B.; Paschos, V.T. Efficient Approximation of Combinatorial Problems by Moderately Exponential Algorithms. In Proceedings of the Algorithms and Data Structures, 11th International Symposium, WADS, Banff, AB, Canada, 21–23 August 2009; pp. 507–518. [Google Scholar] [CrossRef]
  273. Brankovic, L.; Fernau, H. Combining Two Worlds: Parameterised Approximation for Vertex Cover. In International Symposium on Algorithms and Computation; Springer: Berlin/Heidelberg, Germany, 2010; pp. 390–402. [Google Scholar]
  274. Brankovic, L.; Fernau, H. A novel parameterised approximation algorithm for minimum vertex cover. Theor. Comput. Sci. 2013, 511, 85–108. [Google Scholar] [CrossRef]
  275. Bansal, N.; Chalermsook, P.; Laekhanukit, B.; Nanongkai, D.; Nederlof, J. New Tools and Connections for Exponential-Time Approximation. Algorithmica 2019, 81, 3993–4009. [Google Scholar] [CrossRef] [Green Version]
  276. Manurangsi, P.; Trevisan, L. Mildly Exponential Time Approximation Algorithms for Vertex Cover, Balanced Separator and Uniform Sparsest Cut. In Proceedings of the Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM, Princeton, NJ, USA, 20–22 August 2018; pp. 20:1–20:17. [Google Scholar] [CrossRef]
277. Bar-Yehuda, R.; Bendel, K.; Freund, A.; Rawitz, D. Local ratio: A unified framework for approximation algorithms. ACM Comput. Surv. 2004, 36, 422–463. [Google Scholar] [CrossRef]
  278. Escoffier, B.; Monnot, J.; Paschos, V.T.; Xiao, M. New results on polynomial inapproximability and fixed parameter approximability of edge dominating set. Theory Comput. Syst. 2015, 56, 330–346. [Google Scholar] [CrossRef] [Green Version]
  279. Bonnet, É.; Paschos, V.T.; Sikora, F. Parameterized exact and approximation algorithms for maximum k-set cover and related satisfiability problems. RAIRO-Theor. Inform. Appl. 2016, 50, 227–240. [Google Scholar] [CrossRef] [Green Version]
  280. Arora, S.; Barak, B.; Steurer, D. Subexponential Algorithms for Unique Games and Related Problems. J. ACM 2015, 62, 42:1–42:25. [Google Scholar] [CrossRef]
  281. Barak, B.; Raghavendra, P.; Steurer, D. Rounding Semidefinite Programming Hierarchies via Global Correlation. In Proceedings of the IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS, Palm Springs, CA, USA, 22–25 October 2011; pp. 472–481. [Google Scholar] [CrossRef] [Green Version]
  282. Fernau, H. Saving on Phases: Parameterized Approximation for Total Vertex Cover. In Proceedings of the Combinatorial Algorithms, 23rd International Workshop, IWOCA, Tamil Nadu, India, 19–21 July 2012; pp. 20–31, Revised Selected Papers. [Google Scholar] [CrossRef]
  283. Halperin, E. Improved Approximation Algorithms for the Vertex Cover Problem in Graphs and Hypergraphs. SIAM J. Comput. 2002, 31, 1608–1623. [Google Scholar] [CrossRef]
  284. Impagliazzo, R.; Paturi, R.; Zane, F. Which Problems Have Strongly Exponential Complexity? J. Comput. Syst. Sci. 2001, 63, 512–530. [Google Scholar] [CrossRef] [Green Version]
285. Lampis, M. A kernel of order 2k − c log k for vertex cover. Inf. Process. Lett. 2011, 111, 1089–1091. [Google Scholar] [CrossRef]
  286. Here we consider the version where the set of candidate centers is not separately given.
  287. Hochbaum, D.S.; Shmoys, D.B. A unified approach to approximation algorithms for bottleneck problems. J. ACM 1986, 33, 533–550. [Google Scholar] [CrossRef]
  288. Brand, C.; Dell, H.; Husfeldt, T. Extensor-coding. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, Los Angeles, CA, USA, 25–29 June 2018; pp. 151–164. [Google Scholar] [CrossRef]
  289. Björklund, A.; Lokshtanov, D.; Saurabh, S.; Zehavi, M. Approximate Counting of k-Paths: Deterministic and in Polynomial Space. In Proceedings of the 46th International Colloquium on Automata, Languages, and Programming, ICALP, Patras, Greece, 9–12 July 2019; pp. 24:1–24:15. [Google Scholar] [CrossRef]
  290. Pratt, K. Waring Rank, Parameterized and Exact Algorithms. In Proceedings of the 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), Baltimore, MD, USA, 9–12 November 2019; pp. 806–823. [Google Scholar]
  291. Björklund, A. Determinant Sums for Undirected Hamiltonicity. SIAM J. Comput. 2014, 43, 280–299. [Google Scholar] [CrossRef] [Green Version]
  292. Björklund, A.; Husfeldt, T.; Kaski, P.; Koivisto, M. Narrow sieves for parameterized paths and packings. J. Comput. Syst. Sci. 2017, 87, 119–139. [Google Scholar] [CrossRef] [Green Version]
  293. Marx, D. Completely inapproximable monotone and antimonotone parameterized problems. J. Comput. Syst. Sci. 2013, 79, 144–151. [Google Scholar] [CrossRef] [Green Version]
294. To be more precise, these problems need to be phrased as promise problems, and the NP-hardness holds with respect to those promise problems. We will not go into the details here.
