On the Descriptive Complexity of Color Coding

Color coding is an algorithmic technique used in parameterized complexity theory to detect "small" structures inside graphs. The idea is to derandomize algorithms that first randomly color a graph and then search for an easily-detectable, small color pattern. We transfer color coding to the world of descriptive complexity theory by characterizing -- purely in terms of the syntactic structure of describing formulas -- when the powerful second-order quantifiers representing a random coloring can be replaced by equivalent, simple first-order formulas. Building on this result, we identify syntactic properties of first-order quantifiers that can be eliminated from formulas describing parameterized problems. The result applies to many packing and embedding problems, but also to the long path problem. Together with a new result on the parameterized complexity of formula families involving only a fixed number of variables, we get that many problems lie in FPT just because of the way they are commonly described using logical formulas.


Introduction
Descriptive complexity provides a powerful link between logic and complexity theory: We use a logical formula to describe a problem and can then infer the computational complexity of the problem just from the syntactic structure of the formula. As a striking example, Fagin's Theorem [10] tells us that 3-colorability lies in NP just because its describing formula ("there exist three colors such that all adjacent vertex pairs have different colors") is an existential second-order formula. In the context of fixed-parameter tractability theory, methods from descriptive complexity are also used frequently -- but commonly to show that problems are difficult. For instance, the A- and W-hierarchies are defined in logical terms [12], but their hard problems are presumably "beyond" the class FPT of fixed-parameter tractable problems.
The methods of descriptive complexity are only rarely used to show that problems are in FPT. More precisely, the syntactic structure of the natural logical descriptions of standard parameterized problems found in textbooks is not known to imply that the problems lie in FPT -- even though this is known to be the case for many of them. To appreciate the underlying difficulties, consider the following three parameterized problems: p-matching, p-triangle-packing, and p-clique. In each case, we are given an undirected graph and a number k as input, and we are then asked whether the graph contains k vertex-disjoint edges (a size-k matching), k vertex-disjoint triangles, or a clique of size k, respectively. The problems are known to have widely different complexities (maximum matchings can actually be found in polynomial time, triangle packing lies at least in FPT, while finding cliques is W[1]-complete) but very similar logical descriptions: The family (α_k)_{k∈N} of formulas expressing the existence of 2k pairwise distinct vertices that form k edges is clearly a natural "slicewise" description of the matching problem: A graph G has a size-k matching if, and only if, G |= α_k. Analogous families (β_k)_{k∈N} and (γ_k)_{k∈N} are natural parameterized descriptions of the triangle packing and the clique problems, respectively. Well-known results on the descriptive complexity of parameterized problems allow us to infer [12] from the above descriptions that all three problems lie in W[1], but offer no hint why the first two problems actually lie in the class FPT -- syntactically the clique problem arguably "looks like the easiest one" when in fact it is semantically the most difficult one. The results of this paper will remedy this: We will show that the syntactic structures of the formulas α_k and β_k imply membership of p-matching and p-triangle-packing in FPT.
The road to deriving the computational complexity of parameterized problems just from the syntactic properties of slicewise first-order descriptions involves three major steps: First, a characterization of when the color coding technique is applicable in terms of syntactic properties of second-order quantifiers. Second, an exploration of how these results on second-order formulas apply to first-order formulas, leading to the notion of strong and weak quantifiers and to an elimination theorem for weak quantifiers. Third, a new addition to the body of known results on how classes like FPT can be characterized in a slicewise fashion by logical formulas.
Our Contributions I: A Syntactic Characterization of Color Coding. The hard triangle packing problem from above becomes almost trivial when we just wish to check whether a vertex-colored graph contains a red triangle, a green triangle, a blue triangle, a yellow triangle, and so on for k different colors. The ingenious idea behind the color coding technique of Alon, Yuster, and Zwick [1] is to reduce the original problem to the much simpler colored version by simply randomly coloring the graph. Of course, even if there are k disjoint triangles, we will most likely not color them monochromatically and differently, but the probability of "getting lucky" is nonzero and depends only on the parameter k. Even better, Alon et al. point out that one can derandomize the coloring easily by using universal hash functions to color each vertex with its hash value.
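To make the randomized step concrete, here is a small illustrative sketch in Python (our own toy code, not from the paper; the graph is an adjacency matrix, and the function names and the `trials` bound are illustrative choices -- in the actual technique the number of repetitions is chosen so that the error probability depends only on k):

```python
import random

def monochromatic_triangle(adj, coloring, color):
    """Is there a triangle whose three vertices all have the given color?"""
    verts = [v for v in range(len(adj)) if coloring[v] == color]
    return any(adj[a][b] and adj[b][c] and adj[a][c]
               for i, a in enumerate(verts)
               for j, b in enumerate(verts[i + 1:], i + 1)
               for c in verts[j + 1:])

def has_k_disjoint_triangles_mc(adj, k, trials=2000, seed=0):
    """Monte Carlo color coding: color the vertices randomly with k colors
    and look for one monochromatic triangle per color.  Triangles of
    distinct colors are automatically vertex-disjoint, which is the point."""
    rng = random.Random(seed)
    for _ in range(trials):
        coloring = [rng.randrange(k) for _ in range(len(adj))]
        if all(monochromatic_triangle(adj, coloring, c) for c in range(k)):
            return True   # a witness was found, so the answer is surely "yes"
    return False          # "probably no": false negatives occur with a rate depending only on k
```

A "yes" answer is always correct; only the "no" answer is probabilistic, which is exactly the asymmetry the derandomization removes.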
Applying this idea in the setting of descriptive complexity was recently pioneered by Chen et al. [6]. Transferred to the triangle packing problem, their argument would roughly be: "Testing for each color i whether there is a monochromatic triangle of color i can be done in first-order logic using something like ⋀_{i=1}^k ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ C_i x ∧ C_i y ∧ C_i z). Next, instead of testing whether x has color i using the formula C_i x, we can test whether x gets hashed to i by a hash function. Finally, since computing appropriate universal hash functions only involves addition and multiplication, we can express the derandomized algorithm using an arithmetic first-order formula of low quantifier rank." Phrased differently, Chen et al. would argue that ⋀_{i=1}^k ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ C_i x ∧ C_i y ∧ C_i z) together with the requirement that the C_i are pairwise disjoint is (ignoring some details) equivalent to δ_k = ∃p∃q ⋀_{i=1}^k ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ hash_k(x, p, q) = i ∧ hash_k(y, p, q) = i ∧ hash_k(z, p, q) = i), where hash_k(x, p, q) = i is a formula that is true when "x is hashed to i by a member of a universal family of hash functions indexed by q and p." The family (δ_k)_{k∈N} may seem rather technical and, indeed, its importance becomes visible only in conjunction with another result by Chen et al. [6]: They show that a parameterized problem lies in para-AC⁰, one of the smallest "sensible" subclasses of FPT, if it can be described by a family (φ_k)_{k∈N} of FO[+, ×] formulas of bounded quantifier rank such that the finite models of φ_k are exactly the elements of the kth slice of the problem. Since the triangle packing problem can be described in this way via the family (δ_k)_{k∈N} of formulas, all of which have quantifier rank 5 plus the constant number of quantifiers used to express the arithmetic in the formulas hash_k(x, p, q) = i, we get p-triangle-packing ∈ para-AC⁰ ⊆ FPT.
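The derandomized search can be sketched the same way. The family h_{p,q}(x) = ((p·x + q) mod m) mod k below is the textbook arithmetic hash family computable with addition and multiplication alone; choosing m so that every yes-instance is caught by some seed requires the precise parameters of Alon et al.'s construction, so this sketch (our own, with illustrative parameters) conveys only the ∃p∃q loop structure:

```python
def hash_color(x, p, q, m, k):
    # one member of the classic family h_{p,q}(x) = ((p*x + q) mod m) mod k;
    # p and q play the role of the quantified variables in ∃p∃q ...
    return ((p * x + q) % m) % k

def monochromatic_triangle(adj, coloring, color):
    """Helper: is some triangle entirely of the given color?"""
    verts = [v for v in range(len(adj)) if coloring[v] == color]
    return any(adj[a][b] and adj[b][c] and adj[a][c]
               for i, a in enumerate(verts)
               for j, b in enumerate(verts[i + 1:], i + 1)
               for c in verts[j + 1:])

def has_k_disjoint_triangles_det(adj, k, m):
    """Deterministic loop over all seeds (p, q).  With m chosen suitably
    (a prime large enough for the 3k witness vertices, as in Alon et al.),
    some seed colors a witness correctly; no randomness is left."""
    for p in range(1, m):
        for q in range(m):
            coloring = [hash_color(v, p, q, m, k) for v in range(len(adj))]
            if all(monochromatic_triangle(adj, coloring, c) for c in range(k)):
                return True
    return False
```

The m² seed pairs correspond exactly to the two extra first-order quantifiers ∃p∃q in δ_k, which is why the quantifier rank grows only by a constant.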
Clearly, this beautiful idea cannot work in all situations: If it also worked for the formula mentioned earlier expressing 3-colorability, 3-colorability would be first-order expressible, which is known to be impossible. Our first main contribution is a syntactic characterization of when the color coding technique is applicable, that is, of why color coding works for triangle packing but not for 3-colorability: For triangle packing, the colors C_i are applied to variables only inside existential scopes ("∃x∃y∃z") while for 3-colorability the colors R, G, and B are also applied to variables inside universal scopes ("for all adjacent vertices"). In general, see Theorem 3.1 for the details, we show that a second-order quantification over an arbitrary number of disjoint colors C_i can be replaced by a fixed number of first-order quantifiers whenever none of the C_i is used in a universal scope.
Our Contributions II: New First-Order Quantifier Elimination Rules. The "purpose" of the colors C_i in the formulas ⋀_{i=1}^k ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ C_i x ∧ C_i y ∧ C_i z) is not that the three vertices of a triangle get a particular color, but just that they get a color different from the color of all other triangles. Indeed, our "real" objective in these formulas is to ensure that the vertices of a triangle are distinct from the vertices in the other triangles -- and giving vertices different colors is "just a means" of ensuring this.
In our second main contribution we explore this idea further: If the main (indeed, the only) use of colors in the context of color coding is to ensure that certain vertices are different, let us do away with colors and instead focus on the notion of distinctness. To better explain this idea, consider the following family, also describing triangle packing, where the only change is that we now require (a bit superfluously) that even the vertices inside a triangle get different colors: ⋀_{j=1}^k ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ C_{3j−2}x ∧ C_{3j−1}y ∧ C_{3j}z). Observe that each C_i is now applied to exactly one variable (x, y, or z in one of the many literals) and the only "effect" that all these applications have is to ensure that the variables are different. In particular, each formula is equivalent to one that replaces the colors by explicit distinctness requirements, and these formulas are clearly equivalent to the almost identical formulas from (2). In a sense, in (4) the many existential quantifiers ∃x_i and the many x_i ≠ x_j literals come "for free" from the color coding technique, while ∃x, ∃y, and ∃z have nothing to do with color coding. Our key observation is a syntactic property that tells us whether a quantifier comes "for free" in this way (we will call it weak) or not (we will call it strong): Definition 3.4 states (essentially) that weak quantifiers have the form ∃x(φ) such that x is not free in any universal scope of φ and x is used in at most one literal that is not of the form x ≠ y. To make weak quantifiers easier to spot, we mark their bound variables with a dot (note that this is a "syntactic hint" without semantic meaning). Formulas (4) now read ∃ẋ_1 ⋯ ∃ẋ_{3k}(⋀_{i≠j} ẋ_i ≠ ẋ_j ∧ ⋀_{j=1}^k ∃x∃y∃z(Exy ∧ Exz ∧ Eyz ∧ ẋ_{3j−2} = x ∧ ẋ_{3j−1} = y ∧ ẋ_{3j} = z)).
Observe that x, y, and z are not weak since each is used in three literals that are not inequalities.
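The weak/strong distinction is a purely syntactic test, mechanical enough to sketch in code. The following toy checker (our own devising; it assumes all bound variables are uniquely named and represents formulas as nested tuples, with 'neq' marking inequality literals) approximates the condition of Definition 3.4:

```python
def literals(phi):
    """Yield all literals of a formula given as nested tuples:
    ('exists'/'forall', var, body), ('and'/'or', sub1, sub2, ...),
    ('lit', relation, args) or ('neq', x, y)."""
    if phi[0] in ('exists', 'forall'):
        yield from literals(phi[2])
    elif phi[0] in ('and', 'or'):
        for sub in phi[1:]:
            yield from literals(sub)
    else:
        yield phi

def in_universal_scope(var, phi, under=False):
    """Does var occur in a literal lying under a universal quantifier?"""
    if phi[0] in ('exists', 'forall'):
        return in_universal_scope(var, phi[2], under or phi[0] == 'forall')
    if phi[0] in ('and', 'or'):
        return any(in_universal_scope(var, s, under) for s in phi[1:])
    args = phi[2] if phi[0] == 'lit' else phi[1:]
    return under and var in args

def is_weak(var, body):
    """Essentially Definition 3.4: the quantifier ∃var(body) is weak if var
    is not used inside a universal scope and occurs in at most one literal
    that is not an inequality (inequalities come 'for free')."""
    if in_universal_scope(var, body):
        return False
    hard = [l for l in literals(body) if l[0] == 'lit' and var in l[2]]
    return len(hard) <= 1
```

On the formulas above, each ẋ_i passes the test (one equality literal plus inequalities), while x, y, and z fail because of their three edge literals.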
We show in Theorem 3.5 that each φ is equivalent to a φ′ whose quantifier rank depends only on the strong quantifier rank of φ (meaning that we ignore the weak quantifiers) and whose number of variables depends only on the number of strong variables in φ. For instance, the formulas from (4) all have strong quantifier rank 3 and, thus, the triangle packing problem can be described by a family of constant (normal) quantifier rank. Applying Chen et al.'s characterization yields membership in para-AC⁰.
As a more complex example, let us sketch a "purely syntactic" proof of the result [3,5] that the embedding problem for graphs H of tree depth at most d lies in para-AC⁰ for each d. Once more, we construct a family (φ_H) of formulas of constant strong quantifier rank that describes the problem. For a graph H and a rooted tree T of depth d such that H is contained in T's transitive closure (this is the definition of "H has tree depth d"), let c_1 be the root of T and let children(c) be the children of c in T. Then the following formula of strong quantifier rank d describes that H can be embedded into a structure: ∃n_1(⋀_{c_2∈children(c_1)} ∃n_2(⋀_{c_3∈children(c_2)} ∃n_3(⋯ ⋀_{i,j∈{1,…,d}: (c_i,c_j)∈E(H)} E n_i n_j ⋯))).
Our Contributions III: Slicewise Descriptions and Variable Set Sizes. Our third contribution is a new result in the same vein as the already repeatedly mentioned result of Chen et al. [6]: Theorem 2.3 states that a parameterized problem can be described slicewise by a family (φ_k)_{k∈N} of arithmetic first-order formulas that all use only a bounded number of variables if, and only if, the problem lies in para-AC⁰↑ -- a class that has been encountered repeatedly in the literature [2,3,8,17], but for which no characterization was known. It contains all parameterized problems that can be decided by AC-circuits whose depth depends only on the parameter and whose size is of the form f(k) · n^c.
As an example, consider the problem of deciding whether a graph contains a path of length k (no vertex may be visited twice). It can be described (for odd k) by: ∃s∃t∃x(Esx ∧ ∃ẋ_1(ẋ_1 = x ∧ ∃y(Exy ∧ ∃ẋ_2(ẋ_2 = y ∧ ∃x(Eyx ∧ ∃ẋ_3(ẋ_3 = x ∧ ∃y(Exy ∧ ∃ẋ_4(ẋ_4 = y ∧ ⋯ ∧ ∃x(Eyx ∧ x = t ∧ ∃ẋ_k(ẋ_k = x ∧ ⋀_{i≠j} ẋ_i ≠ ẋ_j)) ⋯ ))))). Note that, now, the strong quantifier rank depends on k and, thus, is not constant. However, there are only four strong variables, namely s, t, x, and y. By Theorem 3.5 we see that the above formulas are equivalent to a family of formulas with a bounded number of variables, and by Theorem 2.3 we see that p-long-path ∈ para-AC⁰↑ ⊆ FPT. These ideas also generalize easily, and we give a purely syntactic proof of the seminal result from the original color coding paper [1] that the embedding problem for graphs of bounded tree width lies in FPT. The core observation -- which unifies the results for tree width and tree depth -- is that for each graph with a given tree decomposition, the embedding problem can be described by a formula whose strong nesting structure mirrors the tree structure and whose strong variables mirror the bag contents.
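Behind this description lies the classic colored search: the dynamic program of Alon, Yuster, and Zwick over subsets of colors. A sketch (our own; "path of length k" is read here as a path on k pairwise distinct vertices, and `trials`/`seed` are illustrative choices):

```python
import random

def colorful_path(adj, coloring, k):
    """Is there a path on k vertices whose colors are pairwise distinct?
    dp maps a set of colors S to the vertices at which some path using
    exactly the colors in S can end -- the Alon-Yuster-Zwick recurrence."""
    n = len(adj)
    dp = {}
    for v in range(n):
        dp.setdefault(frozenset([coloring[v]]), set()).add(v)
    for size in range(1, k):
        new = {}
        for S, ends in dp.items():
            if len(S) != size:
                continue
            for v in ends:
                for w in range(n):
                    if adj[v][w] and coloring[w] not in S:
                        new.setdefault(S | {coloring[w]}, set()).add(w)
        for S, ends in new.items():
            dp.setdefault(S, set()).update(ends)
    return any(len(S) == k for S in dp)

def has_k_path_mc(adj, k, trials=400, seed=0):
    """Randomized color coding wrapper around the colorful-path DP."""
    rng = random.Random(seed)
    for _ in range(trials):
        coloring = [rng.randrange(k) for _ in range(len(adj))]
        if colorful_path(adj, coloring, k):
            return True
    return False  # "probably no"; per-trial success probability is at least k!/k^k
```

The 2^k · n table of the DP is the algorithmic counterpart of the k weak quantifiers ∃ẋ_i: both cost only parameter-dependent resources, while the four strong variables correspond to the constant-size "working set" of the search.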
Related Work. Flum and Grohe [11] were the first to give characterizations of FPT and of many subclasses in terms of the syntactic properties of formulas describing their members. Unfortunately, these syntactic properties do not hold for the descriptions of parameterized problems found in the literature. For instance, they show that FPT contains exactly the problems that can be described by families of FO[lfp]-formulas of bounded quantifier rank -- but actually describing problems like p-vertex-cover in this way is more or less hopeless and yields little insight into the structure or complexity of the problem. We believe that it is no coincidence that no applications of these beautiful characterizations to concrete problems could be found in the literature -- at least prior to very recent work by Chen and Flum [7], who study slicewise descriptions of problems on structures of bounded tree depth, and the already cited article of Chen et al. [6], who do present a family of formulas that describe the vertex cover problem. This family internally uses the color coding technique and is thus closely related to our results. The crucial difference is, however, that we identify syntactic properties of logical formulas that imply that the color coding technique can be applied. It then suffices to find a family describing a given problem that meets the syntactic properties to establish the complexity of the problem: there is no need to actually construct the color-coding-based formulas -- indeed, there is not even a need to understand how color coding works in order to decide whether a quantifier is weak or strong.
Organization of this Paper. In Section 2 we first review some of the existing work on the descriptive complexity of parameterized problems. We add to this work in the form of the mentioned characterization of the class para-AC 0↑ in terms of a bounded number of variables. Our main technical results are then proved in Section 3, where we establish and prove the syntactic properties that formulas must have in order for the color coding method to be applicable. In Section 4 we then apply the findings and show how membership of different natural problems in para-AC 0 and para-AC 0↑ (and, thus, in FPT) can be derived entirely from the syntactic structure of the formulas describing them.

Describing Parameterized Problems
A happy marriage of parameterized complexity and descriptive complexity was first presented in [11]. We first review the most important definitions from [11] and then prove a new characterization, namely of the class para-AC 0↑ that contains all problems decidable by AC-circuits of parameter-dependent depth and "FPT-like" size. Since the results and notions will be useful later, but do not lie at the paper's heart, we keep this section brief.
Logical Terminology. We only consider first-order logic and use standard notations, with the perhaps only deviations being that we write relational atoms briefly as Exy instead of E(x, y) and that the literal x ≠ y is an abbreviation for ¬x = y (recall that a literal is an atom or a negated atom). Signatures, typically denoted τ, are always finite and may only contain relation symbols and constant symbols -- with one exception: The special unary function symbol succ may also be present in a signature. Let us write succ^k for the k-fold application of succ, so succ^3(x) is short for succ(succ(succ(x))). It allows us to specify any fixed non-negative integer without having to use additional variables. An alternative is to dynamically add constant symbols for numbers to signatures as done in [6], but we believe that following [11] and adding the successor function gives a leaner formal framework. Let arity(τ) be the maximum arity of relation symbols in τ.
We denote by struc[τ ] the class of all τ -structures and by |A| the universe of A. As is often the case in descriptive complexity theory, we only consider ordered structures in which the ternary predicates add and mult are available and have their natural meaning. Formally, we say τ is arithmetic if it contains all of the predicates <, add, mult, the function symbol succ, and the constant symbol 0 (it is included for convenience only). In this case, struc[τ ] contains only those A for which < A is a linear ordering of |A| and the other operations have their natural meaning relative to < A (with the successor of the maximum element of the universe being itself and with 0 being the minimum with respect to < A ). We write φ ∈ FO[+, ×] when φ is a τ -formula for an arithmetic τ .
A τ -problem is a set Q ⊆ struc[τ ] closed under isomorphisms. A τ -formula φ describes a τ -problem Q if Q = {A ∈ struc[τ ] | A |= φ} and it describes Q eventually if φ describes a set Q ′ that differs from Q only on structures of a certain maximum size.
Proof. The statement of the lemma would be quite simple if we did not require α and β to be quantifier-free: Without this requirement, all we need to do is to use α and β to fix φ on the finitely many (up to isomorphisms) structures on which φ errs by "hard-wiring" which of these structures are elements of Q and which are not. However, the natural way to do this "hard-wiring" of size-m structures is to use m quantifiers to bind all elements of the universe. This is exactly what we do not wish to do. Rather, we use the successor function to refer to the elements of the universe without using any quantifiers.
In detail, let m be a number such that for all A ∈ struc[τ] with ‖A‖ ≥ m (that is, the size ‖A‖ of the universe |A| is at least m) we have A |= φ if, and only if, A ∈ Q. We set α to universe_{≥m}, a shorthand for succ^{m−2}(0) ≠ succ^{m−1}(0), which is true exactly for universes of size at least m. We define β so that it is true exactly for all τ-structures A ∈ Q of size at most m (for simplicity we assume that the binary relation symbol E is the only relation symbol in τ); β simply spells out, as a quantifier-free formula over the terms succ^s(0), which of the finitely many such structures lie in Q.

We write qr(φ) for the quantifier rank of a formula and bound(φ) for the set of its bound variables. For instance, for φ = ∃x∃y(Exz) ∨ ∀y(Px) we have qr(φ) = 2, since the maximum nesting is caused by the two nested existential quantifiers, and bound(φ) = {x, y}.
Let us say that φ is in negation normal form if negations are applied only to atomic formulas.
Describing Parameterized Problems. When switching from classical complexity theory to descriptive complexity theory, the basic change is that "words" get replaced by "finite structures." The same idea works for parameterized complexity theory and, following Flum and Grohe [11], let us define parameterized problems as subsets Q ⊆ struc[τ] × N where Q is closed under isomorphisms. In a pair (A, k) ∈ struc[τ] × N the number k is, of course, the parameter value of the pair. Flum and Grohe now propose to describe such problems slicewise using formulas. Since this will be the only way in which we describe problems, we will drop the "slicewise" in the phrasings and just say that a computable family (φ_k)_{k∈N} of formulas describes a problem Q ⊆ struc[τ] × N if for all (A, k) ∈ struc[τ] × N we have (A, k) ∈ Q if, and only if, A |= φ_k. One can also define a purely logical notion of reductions between two problems Q and Q′, but we will need this notion only inside the proof of Theorem 4.2 and postpone the definition till then. For a class Φ of computable families (φ_k)_{k∈N}, let us write XΦ for the class of all parameterized problems that are described by the members of Φ (we chose "X" to represent a "slicewise" description, which seems to be in good keeping with the usual use of X in other classes such as XP or XL). The mentioned characterization of FPT in logical terms by Flum and Grohe, for instance, can be phrased in this notation. We remark that instead of describing parameterized problems using families, a more standard and at the same time more flexible way is to use reductions to model checking problems. Clearly, if a family (φ_k)_{k∈N} of L-formulas describes Q ⊆ struc[τ] × N, then there is a very simple parameterized reduction from Q to the model checking problem p-mc(L), where the input is a pair (A, num(φ)) and the question is whether both A |= φ and φ ∈ L hold.
(The function num encodes mathematical objects like φ or, later, tuples like (φ, δ) as unique natural numbers.) The reduction simply maps a pair (A, k) to (A, num(φ_k)). Even more interestingly, without going into any of the technical details, it is also not hard to see that as long as a reduction is sufficiently simple, the reverse implication holds, that is, we can replace a reduction to the model checking problem by a family of formulas that describe the problem. We can, thus, use whatever formalism seems more appropriate for the task at hand and -- as we hope that this paper shows -- it is sometimes quite natural to write down a family that describes a problem.
Parameterized Circuits. For our descriptive setting, we need to slightly adapt the definition of the circuit classes para-AC⁰ and para-AC⁰↑ from [2,3]: Let us say that a problem Q ⊆ struc[τ] × N is in para-AC⁰ if there is a family (C_{n,k})_{n,k∈N} of AC-circuits (Boolean circuits with unbounded fan-in) such that for all (A, k) ∈ struc[τ] × N we have, first, (A, k) ∈ Q if, and only if, C_{|x|,k}(x) = 1 where x is a binary encoding of A; second, the size of C_{n,k} is at most f(k) · n^c for some computable function f; third, the depth of C_{n,k} is bounded by a constant; and, fourth, the circuit family satisfies a dlogtime-uniformity condition. The class para-AC⁰↑ is defined the same way, but the depth may be g(k) for some computable g instead of only O(1). The following fact and theorem show how these two circuit classes are closely related to descriptions of parameterized problems using formulas.

Proof. The basic idea behind the proof is quite "old": we need to establish links between circuit depth and size and the number of variables used in a formula -- and such links are well-known, see for instance [18]: The quantifier rank of a first-order formula naturally corresponds to the depth of a circuit that solves the model checking problem for the formula. The number of variables corresponds to the exponent of the polynomial that bounds the size of the circuit (the paper [15] is actually entitled DSPACE[n^k] = VAR[k + 1]). One thing that is usually not of interest (because usually only one formula is considered) is the fact that the length of the formula is linked multiplicatively to the size of the circuit.
In detail, suppose we are given a problem Q ⊆ struc[τ] × N with Q ∈ para-AC⁰↑ via a circuit family (C_{n,k})_{n,k∈N} of depth g(k) and size f(k) · n^c. For a fixed k, we now need to construct a formula φ_k that correctly decides the kth slice. In other words, we need an FO[+, ×]-formula φ_k whose finite models are exactly those on which the family (C_{n,k})_{n∈N} (note that k no longer indexes the family) evaluates to 1 (when the models are encoded as bitstrings). It is well-known how such a formula can be constructed, see for instance [18]; we just need a closer look at how the quantifier rank and number of variables relate to the circuit depth and size.
The basic idea behind the formula φ_k is the following: The circuit has f(k) · n^c gates and we can "address" these gates using c variables (which gives us n^c possibilities) plus a number i ∈ {1, …, f(k)} (which gives us f(k) · n^c possibilities). Since for fixed k the number f(k) is also fixed, it is permissible that the formula φ_k contains f(k) copies of some subformula, where each subformula handles another value of i. The basic idea is then to start with formulas ψ⁰_i for i ∈ {1, …, f(k)}, each of which has c free variables, so that ψ⁰_i(x_1, …, x_c) is true exactly if the tuple (x_1, …, x_c, i) represents an input gate set to 1. At this point, the uniformity condition basically tells us that such formulas can be constructed and that they all have a fixed quantifier rank. Next, we construct formulas ψ¹_i(x_1, …, x_c) that are true if (x_1, …, x_c, i) addresses a gate whose input values are all already computed by the ψ⁰_j and that evaluates to 1. Next, formulas ψ²_i are constructed, but, now, we can reuse the variables used in the ψ⁰_j. In this way, we finally build formulas ψ^{g(k)}_i and apply them to the "address" of the output gate. All told, we get a formula whose quantifier rank is c · g(k) + O(1) and in which at most 2c + O(1) variables are used (note that the size of the formula depends on f(k)). Clearly, this means that the family (φ_k)_{k∈N} created in this way does, indeed, only use a bounded number of variables (namely O(c) many) and decides Q.
For the other direction, suppose (φ_k)_{k∈N} describes Q and that all φ_k contain at most v variables (since they contain no free variables, this is the same as the number of bound variables). Clearly, we may assume that the φ_k are in negation normal form. We may also assume that they are flat, by which we mean that they contain no subformulas of the form (α ∨ β) ∧ γ or α ∧ (β ∨ γ): using the distributive laws of propositional logic, any first-order formula can be turned into an equivalent flat formula with the same number of variables and the same quantifier rank. Lastly, we may assume that the succ function symbol is only used in atoms of the form x = succ^s(0) for some variable x and some number s: We can replace, for instance, E succ⁶(x) succ³(y) by the equivalent formula ∃x′∃x″∃y′∃y″(x′ = succ⁶(0) ∧ add x x′ x″ ∧ y′ = succ³(0) ∧ add y y′ y″ ∧ E x″ y″) without raising the number of variables and the quantifier rank by more than 4 (or, in general, by more than the constant 2 · arity(τ)).
As before, it is now known that for each φ_k there is a family (C_{n,k})_{n∈N} that evaluates to 1 exactly on the (encoded) models of φ_k. These circuits are constructed as follows: While φ_k has no free variables, a subformula ψ of φ_k can have up to v free variables. For each such subformula, the circuits use n^v gates to keep track of all assignments to these v variables that make the subformula true. Clearly, this is relatively easy to achieve for literals in a constant number of layers, including literals of the form x = succ^s(0) since s is a fixed number depending only on k. Next, if a formula is of the form ⋀_i α_i and for some assignment we have one gate for each α_i that tells us whether it is true, we can feed all these wires into one ∧-gate. We can take care of a formula of the form ⋁_i α_i in the same way -- and note that in a flat formula there will be at most one alternation from ⋀ to ⋁ before we encounter a quantifier. Now, for subformulas of the form ∃x φ, the correct values for the n^{v−1} gates can be obtained by a big ∨-gate attached to the outputs from the gates for φ. Similarly, ∀x φ can be handled using a big ∧-gate.
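The gate-per-assignment construction just described can be mimicked directly in software: for each subformula we materialize the set of satisfying assignments to the v variables, the analogue of the n^v gates per subformula, with the big ∨-/∧-gates becoming a disjunction or conjunction over the n reassignments of one variable slot. A toy model checker of our own devising along these lines:

```python
from itertools import product

def sat_set(phi, universe, rels, variables):
    """Return the set of assignments (tuples indexed like `variables`)
    satisfying phi.  Formulas are nested tuples in negation normal form:
    ('lit'/'nlit', R, args), ('and'/'or', sub...), ('exists'/'forall', x, body)."""
    idx = {v: i for i, v in enumerate(variables)}
    every = set(product(universe, repeat=len(variables)))
    op = phi[0]
    if op in ('lit', 'nlit'):
        _, rel, args = phi
        hit = {a for a in every if tuple(a[idx[x]] for x in args) in rels[rel]}
        return hit if op == 'lit' else every - hit
    if op == 'and':
        out = every
        for sub in phi[1:]:          # one layer of ∧-gates, one per assignment
            out &= sat_set(sub, universe, rels, variables)
        return out
    if op == 'or':
        out = set()
        for sub in phi[1:]:          # one layer of ∨-gates, one per assignment
            out |= sat_set(sub, universe, rels, variables)
        return out
    _, x, body = phi
    inner = sat_set(body, universe, rels, variables)
    quant = any if op == 'exists' else all
    # a big ∨- (or ∧-) gate over the n reassignments of x, reusing the slot
    return {a for a in every
            if quant(a[:idx[x]] + (u,) + a[idx[x] + 1:] in inner
                     for u in universe)}
```

Each recursive call touches at most n^v assignments, matching the n^v gates per subformula, and the recursion depth matches the quantifier rank -- the size/depth correspondence the proof relies on.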
Based on these observations, it is now possible to build a circuit of size |φ_k| · n^v and depth O(qr(φ_k)). In particular, the resulting overall circuit family has a depth that depends only on the parameter (since the quantifier rank can be at most |φ_k|, which depends only on k) and has a size of at most f(k) · n^v for f(k) = |φ_k|. It can also be shown that the necessary uniformity conditions are satisfied.
We remark that the above proof also implies Fact 2.2, namely with g(k) = O(1) for the first direction and with qr(φ_k) = O(1) for the second direction.

Syntactic Properties Allowing Color Coding
The color coding technique [1] is a powerful method from parameterized complexity theory for "discovering small objects" in larger structures. Recall the example from the introduction: While finding k disjoint triangles in a graph is difficult in general, it is easy when the graph is colored with k colors and the objective is to find for each color one triangle having this color. The idea behind color coding is to reduce the (hard) uncolored version to the (easy) colored version by randomly coloring the graph and then "hoping" that the coloring assigns a different color to each triangle. Since the triangles are "small objects," the probability that they do, indeed, get different colors depends only on k. Even more importantly, Alon et al. noticed that we can derandomize the coloring procedure simply by coloring each vertex by its hash value with respect to a simple family of universal hash functions that only use addition and multiplication [1]. This idea is beautiful and works surprisingly well in practice [14], but using the method inside proofs can be tricky: On the one hand, we need to "keep the set sizes under control" (they must stay roughly logarithmic in size) and we "need to actually identify the small set based just on its random coloring." Especially for more complex proofs this can lead to rather subtle arguments.
In the present section, we identify syntactic properties of formulas that guarantee that the color coding technique can be applied. The key property is that the colors (the predicates C_i in the formulas) are never in the scope of a universal quantifier (this restriction is necessary, as the example of the formula describing 3-colorability shows).
As mentioned already in the introduction, the main "job" of the colors in proofs based on color coding is to ensure that vertices of a graph are different from other vertices. This leads us to the idea of focusing entirely on the notion of distinctness in the second half of this section. This time, it is syntactic properties of existentially bound first-order variables that will allow us to apply color coding to them.

Formulas With Color Predicates
In graph theory, a coloring of a graph can either refer to an arbitrary assignment that maps each vertex to a color or to such an assignment in which vertices connected by an edge must get different colors (sometimes called proper colorings). For our purposes, colorings need not be proper and are thus partitions of the vertex set into color classes. From the logical point of view, each color class can be represented by a unary predicate. A k-coloring of a τ-structure A is a structure B over the signature τ_{k-colors} = τ ∪ {C¹_1, …, C¹_k}, where the C_i are fresh unary relation symbols, such that A is the τ-restriction of B and such that the sets C^B_1 to C^B_k form a partition of the universe |A| of A. Let us now formulate and prove the first syntactic version of color coding. An example of a possible formula φ in the theorem is the triangle packing formula ⋀_{i=1}^k ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ C_i x ∧ C_i y ∧ C_i z) from the introduction, for which the theorem tells us that there is a formula φ′ of constant quantifier rank that is true exactly when there are pairwise disjoint sets C_i that make φ true.
Theorem 3.1. Let τ be an arithmetic signature and let k be a number. For each first-order τ_{k-colors}-sentence φ in negation normal form in which no C_i is inside a universal scope, there is a τ-sentence φ′ such that: (Let us clarify that O(1) represents a global constant that is independent of τ and k.)

Proof. Let τ, k, and φ be given as stated in the theorem. If necessary, we modify φ to ensure that there is no literal of the form ¬C_i x_j, by replacing each such literal by the equivalent ⋁_{l≠i} C_l x_j. After this transformation, the C_i in φ are neither in the scope of universal quantifiers nor of negations, and this is also true for all subformulas α of φ. We will now show by structural induction that all these subformulas (and, hence, also φ) have two semantic properties, which we call the monotonicity property and the small witness property (with respect to the C_i). Afterwards, we will show that these two properties allow us to apply the color coding technique.
Establishing the Monotonicity and Small Witness Properties. Some notations will be useful: Given a τ-structure A with universe A and given sets A_i ⊆ A for i ∈ {1, …, k}, let us write A |= φ(A_1, …, A_k) to indicate that B is a model of φ, where B is the τ_{k-colors}-structure with universe A in which all symbols from τ are interpreted as in A and in which each symbol C_i is interpreted as A_i. Subformulas γ of φ may have free variables; suppose that x_1 to x_m are the free variables in γ and let a_i ∈ A for i ∈ {1, …, m}. We write A |= γ(A_1, …, A_k, a_1, …, a_m) to indicate that γ holds in the just-described structure B when each x_i is interpreted as a_i.

Definition 3.2. Let γ be a τ_{k-colors}-formula with free variables x_1 to x_m. We say that γ has the monotonicity and the small witness properties with respect to the C_i if for all τ-structures A with universe A and all values a_1, …, a_m ∈ A the following holds:

Monotonicity property: If A |= γ(A_1, …, A_k, a_1, …, a_m) and A_i ⊆ B_i ⊆ A holds for all i, then A |= γ(B_1, …, B_k, a_1, …, a_m).

Small witness property: If there are any pairwise disjoint sets B_1, …, B_k ⊆ A with A |= γ(B_1, …, B_k, a_1, …, a_m), then there are also pairwise disjoint sets A_i ⊆ B_i whose sizes depend only on γ such that A |= γ(A_1, …, A_k, a_1, …, a_m).

We now show that φ has these two properties (for m = 0). For monotonicity, just note that the C_i are not in the scope of any negation and, thus, if some A_i make φ true, so will all supersets B_i of the A_i.
To see that the small witness property holds, we argue by structural induction: If φ is any formula that does not involve any C_i, then φ is true or false independently of the B_i and, in particular, if it is true at all, it is also true for A_i = ∅. If φ = α ∧ β, then α and β have the small witness property by the induction hypothesis. Let B_1, …, B_k ⊆ A make φ hold in A. Then they also make both α and β hold in A. Let A_1^α, …, A_k^α with A_i^α ⊆ B_i be the witnesses for α and let A_1^β, …, A_k^β ⊆ A be the witnesses for β. Then by the monotonicity property, A_1^α ∪ A_1^β, …, A_k^α ∪ A_k^β makes both α and β true, that is, A |= α(A_1^α ∪ A_1^β, …, A_k^α ∪ A_k^β), and the same holds for β. Note that A_i^α ∪ A_i^β ⊆ B_i still holds and that these sets have sizes depending only on α and β and thereby on φ.
For φ = α ∨ β we can argue in exactly the same way as for the logical and. The last case for the structural induction is φ = ∃x_m(α). Consider pairwise disjoint B_1, …, B_k ⊆ A that make φ true. Then there is a value a_m ∈ A such that A |= α(B_1, …, B_k, a_1, …, a_m). Now, since α has the small witness property by the induction hypothesis, we get A_i ⊆ B_i of size depending only on α for which we also have A |= α(A_1, …, A_k, a_1, …, a_m). But then, by the definition of existential quantifiers, these A_i also witness A |= ∃x_m α(A_1, …, A_k, a_1, …, a_{m−1}). (Observe that this is the point where the argument would not work for a universal quantifier: Here, for each possible value of a_m we might have a different set of A_i's as witnesses and their union would then no longer have small size.)

Applying Color Coding. Our next step in the proof is to use color coding to produce the partition. First, let us recall the basic lemma on universal hash functions, formulated below in a way equivalent to [12, page 347]: There is an n_0 ∈ N such that for all n ≥ n_0 and all subsets X ⊆ {0, …, n−1} there exist a prime p < |X|^2 log_2 n and a number q < p such that the function h_{p,q}(m) = (q · m mod p) mod |X|^2 is injective on X.
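The lemma is constructive enough to check by brute force. The sketch below (our own illustration) searches for a prime p < |X|^2 log_2 n and a q < p making h_{p,q}(m) = (q · m mod p) mod |X|^2 injective on X; for very small n such a pair need not exist yet, which is why the lemma only speaks about n ≥ n_0:

```python
import math

def h(p: int, q: int, k2: int, m: int) -> int:
    # The universal hash function from the lemma.
    return ((q * m) % p) % k2

def find_injective_hash(X: set[int], n: int):
    """Search for a prime p < |X|^2 * log2(n) and a q < p such that
    h_{p,q} is injective on X; return (p, q) or None."""
    k2 = len(X) ** 2

    def is_prime(m: int) -> bool:
        return m > 1 and all(m % d for d in range(2, math.isqrt(m) + 1))

    bound = int(k2 * math.log2(n))
    for p in range(2, bound + 1):
        if is_prime(p):
            for q in range(1, p):
                if len({h(p, q, k2, m) for m in X}) == len(X):
                    return p, q
    return None
```

For example, `find_injective_hash({3, 17, 42, 99}, 128)` returns a pair (p, q) under which the four elements hash to four distinct values in {0, …, 15}.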
As has already been observed by Chen et al. [6], if we set k = |X|, we can easily express the computation underlying h_{p,q} : {0, …, n−1} → {0, …, k^2 − 1} using a fixed FO[+, ×]-formula ρ(k, p, q, x, y). That is, if we encode the numbers k, p, q, x, y ∈ {0, …, n−1} as corresponding elements of the universe with respect to the ordering of the universe, then ρ(k, p, q, x, y) holds if, and only if, h_{p,q}(x) = y. Note that the p and q from the lemma could exceed n for very large X (they can reach up to n^2 log_2 n ≤ n^3), but, first, this situation will not arise in the following and, second, it could be fixed by using three variables to encode p and three variables to encode q. Trivially, ρ(k, p, q, x, y) has some constant quantifier rank (the formula explicitly constructed by Chen et al. has qr(ρ) = 9, assuming k^2 < n).
Next, we will need the basic idea or "trick" of Alon et al.'s [1] color coding technique: While for appropriate p and q the function h_{p,q} will "just" be injective on X, we actually want a function that maps each element x ∈ X to a specific element ("the color of x") of {1, …, k}. Fortunately, this is easy to achieve by composing h_{p,q} with an appropriate function g : {0, …, k^2 − 1} → {1, …, k}.
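For illustration, suppose h_{p,q} with p = 101 and q = 2 is injective on X = {3, 17, 42, 99} (with k = 4, so hash values lie in {0, …, 15}). Then a suitable g can simply prescribe, for the hash value of each x ∈ X, the color that x should get; the function names below are our own:

```python
def make_coloring(p, q, k, prescribed):
    """Given that h_{p,q} is injective on the keys of `prescribed`
    (a dict x -> color in {1..k}), build g: {0..k^2-1} -> {1..k} with
    g(h_{p,q}(x)) = prescribed[x], and return the coloring g . h_{p,q}."""
    k2 = k * k
    h = lambda m: ((q * m) % p) % k2
    g = {v: 1 for v in range(k2)}    # arbitrary default color
    for x, color in prescribed.items():
        g[h(x)] = color              # well-defined by injectivity on X
    return lambda m: g[h(m)]

# Each element of X receives exactly its prescribed color, while every
# other number still gets *some* color in {1,...,k}.
wanted = {3: 1, 17: 2, 42: 3, 99: 4}
coloring = make_coloring(101, 2, 4, wanted)
assert all(coloring(x) == c for x, c in wanted.items())
```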
In detail, to construct φ′ from the claim of the theorem, we construct a family of formulas φ^g(p, q), where p and q are new free variables and the formulas are indexed by all possible functions g : {0, …, k^2 − 1} → {1, …, k}: In φ, replace every occurrence of C_i x_j by a formula π_i^g(p, q, x_j) expressing, by means of ρ, that g(h_{p,q}(x_j)) = i; here k̂ and ŷ are fresh variables that we bind to the numbers k and y (if the universe is large enough). Note that the formula C_i x_j has x_j as a free variable, while π_i^g(p, q, x_j) additionally has p and q as free variables. As an example, for the formula φ = ∃x(C_2 x ∨ ∃y C_5 y) we would have φ^g = ∃x(π_2^g(p, q, x) ∨ ∃y π_5^g(p, q, y)). Clearly, each φ^g has the property qr(φ^g) = qr(φ) + O(1). The desired formula φ′ is (almost) simply ⋁_{g : {0,…,k^2−1} → {1,…,k}} ∃p∃q(φ^g(p, q)). The "almost" is due to the fact that this formula works only for structures with a sufficiently large universe, but by Lemma 2.1 it suffices to consider only this case. Let us prove that for every τ-structure A with universe A = {0, …, n−1} and n ≥ c for some to-be-specified constant c, the following two statements are equivalent:

1. There is a partition B_1 ∪ ⋯ ∪ B_k = A with A |= φ(B_1, …, B_k).
2. A |= ⋁_g ∃p∃q(φ^g(p, q)).

Let us start with the implication of item 2 to 1. Suppose there is a function g : {0, …, k^2 − 1} → {1, …, k} together with p, q ∈ A such that A |= φ^g(p, q), and let A_i = {x ∈ A | g(h_{p,q}(x)) = i}. In other words, A_i contains all elements of A that are first hashed to an element of {0, …, k^2 − 1} that is then mapped to i by the function g. Trivially, the A_i form a partition of the universe A.
Assuming that the universe size is sufficiently large, namely that k^2 log_2 n < n, inside φ^g all uses of ρ(k, p, q, x, ŷ) will have the property that A |= ρ(k, p, q, x, ŷ) if, and only if, h_{p,q}(x) = ŷ. Clearly, there is a constant c depending only on k such that for all n > c we have k^2 log_2 n < n.
With the property established, we now see that π_i^g(p, q, x_j) holds inside the formula φ^g if, and only if, the interpretation of x_j is an element of A_i. This means that if we interpret each C_i by A_i, then we get A |= φ(A_1, …, A_k) and the A_i form a partition of the universe. In other words, we get item 1. Now assume that item 1 holds, that is, there is a partition B_1 ∪ ⋯ ∪ B_k = A with A |= φ(B_1, …, B_k). We must show that there are a g : {0, …, k^2 − 1} → {1, …, k} and p, q ∈ A such that A |= φ^g(p, q).
At this point, we use the small witness property that we established earlier for the partition. By this property there are pairwise disjoint sets A_i ⊆ B_i such that, first, |A_i| depends only on φ and, second, A |= φ(A_1, …, A_k). Let X be the set of indices of the elements of A_1 ∪ ⋯ ∪ A_k. Then |X| depends only on φ and let s_φ be a φ-dependent upper bound on this size. By the universal hashing lemma, there are now p and q such that h_{p,q} is injective on X, and we may choose g : {0, …, k^2 − 1} → {1, …, k} such that g(h_{p,q}(x̂)) = i holds for all x ∈ A_i (this is possible precisely because h_{p,q} is injective on X). With these definitions, we now define the following sets D_1 to D_k: Let D_i = {x ∈ A | g(h_{p,q}(x̂)) = i}, where x̂ is the index of x in A with respect to the ordering (that is, x̂ = |{y ∈ A | y <^A x}|; for the special case that A = {0, …, n−1} and that <^A is the natural ordering, x̂ = x). Observe that D_i ⊇ A_i holds for all i and that the D_i form a partition of the universe A. By the monotonicity property, A |= φ(D_1, …, D_k). However, by definition of the D_i and of the formulas π_i^g, for a sufficiently large universe size n (namely s_φ^2 log_2 n < n), we now also have A |= φ^g(p, q), which in turn implies A |= ⋁_g ∃p∃q(φ^g(p, q)).
In the theorem we assumed that φ is a sentence to keep the notation simple; both the theorem and later theorems still hold when φ(x_1, …, x_n) has free variables x_1 to x_n. Then there is a corresponding φ′(x_1, …, x_n) such that the first item becomes: for all A ∈ struc[τ] and all a_1, …, a_n ∈ |A| we have A |= φ′(a_1, …, a_n) if, and only if, there is a k-coloring B of A with B |= φ(a_1, …, a_n). Note that the syntactic transformations in the theorem do not add dependencies of universal quantifiers on the free variables.

Formulas With Weak Quantifiers
If one has a closer look at proofs based on color coding, one cannot help but notice that the colors are almost exclusively used to ensure that certain vertices in a structure are distinct from certain other vertices: recall the introductory example ⋀_{j=1}^k ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ C_{3j−2} x ∧ C_{3j−1} y ∧ C_{3j} z), which describes the triangle packing problem when we require that the C_i form a partition of the universe. Since the C_i are only used to ensure that the many different x, y, and z are different, we already rewrote the formula in (4) using explicit inequalities between existentially quantified variables instead of colors. While this rewriting gets rid of the colors and moves us back into the familiar territory of simple first-order formulas, the quantifier rank and the number of variables in the formula have now "exploded" (from the constant 3 to the parameter-dependent value 3k + 3), which is exactly what we need to avoid in order to apply Fact 2.2 or Theorem 2.3.
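To see why the colored version of the introductory example is easy, note that once each triangle must use its own three prescribed color classes, the k searches decouple and each can be done independently in polynomial time. A small sketch of the colored search (our own illustration):

```python
from itertools import product

def colorful_triangle_packing(edges, coloring, k):
    """Given coloring: vertex -> color in {1,...,3k}, decide whether for
    every j there is a triangle with colors 3j-2, 3j-1, 3j.  Since the
    color classes are disjoint, the k sub-searches are independent."""
    adj = set()
    for u, v in edges:
        adj.add((u, v))
        adj.add((v, u))
    classes = {}
    for v, c in coloring.items():
        classes.setdefault(c, []).append(v)
    for j in range(1, k + 1):
        triples = product(classes.get(3 * j - 2, []),
                          classes.get(3 * j - 1, []),
                          classes.get(3 * j, []))
        if not any((x, y) in adj and (y, z) in adj and (x, z) in adj
                   for x, y, z in triples):
            return False
    return True

# Two disjoint triangles, colored 1..6: a positive instance for k = 2.
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6)]
coloring = {v: v for v in range(1, 7)}
assert colorful_triangle_packing(edges, coloring, 2)
```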
We now define a syntactic property that the x i have that allows us to remove them from the formula and, thereby, to arrive at a family of formulas of constant quantifier rank. For a (sub)formula α of the form ∀d(φ) or ∃d(φ), we say that d depends on all free variables in φ (at the position of α in a larger formula). For instance, in Exy ∧ ∀b(Exb ∧ ∃z(Eyz)) ∧ ∃b(Exx), the variable b depends on x and y at its first binding (∀b) and on x at the second binding (∃b).
Definition 3.4. We call the leading quantifier in a formula ∃x(φ) in negation normal form strong if 1. some universal binding inside φ depends on x or 2. there is a subformula α ∧ β of φ such that both α and β contain x in literals that are not of the form x ≠ y for some variable y.
If neither of the above hold, we call the quantifier weak. The strong quantifier rank strong-qr(φ) is the quantifier rank of φ, where weak quantifiers are ignored; strong-bound(φ) contains all variables of φ that are bound by non-weak quantifiers.
(Later on we extend the definition to the dual notion of weak universal quantifiers, but for the moment let us only call existential quantifiers weak.) We place a dot on the variables bound by weak quantifiers to make them easier to spot. For example, in φ = ∃x∃y∃ż(Rxxżż ∧ x ≠ y ∧ y ≠ ż ∧ Px ∧ ∀w Ewyy) the quantifier ∃ż is weak, but neither ∃x (since x is used in two literals joined by a conjunction, namely in Rxxżż and Px) nor ∃y (since w depends on y in ∀w Ewyy) is. We have qr(φ) = 4, but strong-qr(φ) = 3, and bound(φ) = {x, y, ż, w}, but strong-bound(φ) = {x, y, w}.
Admittedly, the definition of weakness is a bit technical, but note that there is a rather simple sufficient condition for a variable x to be weak: If no universal binding depends on x and x is used in only one literal that is not an inequality, then x is weak. This condition almost always suffices for identifying the weak variables, although there are of course exceptions like ∃ẋ(Pẋ ∨ Qẋ).
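Definition 3.4 is mechanical enough to implement directly. In the sketch below, the tuple encoding of negation-normal-form formulas is our own: inequality literals are tagged "neq" and all other literals "lit", and is_strong decides whether ∃x would be strong over a given body.

```python
# NNF formulas as nested tuples (our own encoding):
#   ("exists", v, body) / ("forall", v, body)
#   ("and", a, b) / ("or", a, b)
#   ("lit", name, args)   -- any literal that is not an inequality
#   ("neq", u, v)         -- an inequality literal u != v

def free_in(phi, x):
    tag = phi[0]
    if tag in ("exists", "forall"):
        return phi[1] != x and free_in(phi[2], x)
    if tag in ("and", "or"):
        return free_in(phi[1], x) or free_in(phi[2], x)
    if tag == "lit":
        return x in phi[2]
    return x in phi[1:]                     # "neq"

def in_hard_literal(phi, x):
    # Does x occur in a literal that is not an inequality?
    tag = phi[0]
    if tag in ("exists", "forall"):
        return phi[1] != x and in_hard_literal(phi[2], x)
    if tag in ("and", "or"):
        return in_hard_literal(phi[1], x) or in_hard_literal(phi[2], x)
    return tag == "lit" and x in phi[2]

def is_strong(x, body):
    # Definition 3.4: Exists-x over `body` is strong iff (1) some
    # universal binding inside depends on x, or (2) some conjunction
    # uses x in non-inequality literals on both sides.
    tag = body[0]
    if tag == "forall":
        return free_in(body[2], x) or is_strong(x, body[2])
    if tag == "exists":
        return body[1] != x and is_strong(x, body[2])
    if tag == "and":
        if in_hard_literal(body[1], x) and in_hard_literal(body[2], x):
            return True
        return is_strong(x, body[1]) or is_strong(x, body[2])
    if tag == "or":
        return is_strong(x, body[1]) or is_strong(x, body[2])
    return False

# The body of the example Exists x Exists y Exists z.
#   (Rxxzz & x != y & y != z & Px & forall w. Ewyy):
body = ("and",
        ("and", ("and", ("lit", "R", ("x", "x", "z", "z")), ("neq", "x", "y")),
                ("and", ("neq", "y", "z"), ("lit", "P", ("x",)))),
        ("forall", "w", ("lit", "E", ("w", "y", "y"))))
assert is_strong("x", body) and is_strong("y", body)
assert not is_strong("z", body)       # Exists z is weak
```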
Theorem 3.5. Let τ be an arithmetic signature. Then for every τ-formula φ in negation normal form there is a τ-formula φ′ such that:

Before giving the detailed proof, we briefly sketch the overall idea: Using simple syntactic transformations, we can ensure that all weak quantifiers follow in blocks after universal quantifiers. We can also ensure that inequality literals directly follow the blocks of weak quantifiers and are joined by conjunctions. If the inequality literals following a block happen to require that all weak variables from the block are different (that is, if for all pairs ẋ_i and ẋ_j of different weak variables there is an inequality ẋ_i ≠ ẋ_j), then we can remove the weak quantifiers ∃ẋ_i and, at the (single) place where ẋ_i is used, use a color C_i instead. For instance, if ẋ_i is used in the literal ẋ_i = y, we replace the literal by C_i y. If ẋ_i is used for instance in ¬Eẋ_i y, we replace this by ∃x(C_i x ∧ ¬Exy). In this way, for each block we get an equivalent formula to which we can apply Theorem 3.1. A more complicated situation arises when the inequality literals in a block "do not require complete distinctness," but this case can also be handled by considering all possible ways in which the inequalities can be satisfied in parallel. As a result, all weak quantifiers get removed and for each block a constant number of new quantifiers are introduced. Since each block follows a different universal quantifier, the new total quantifier rank is at most the strong quantifier rank times a constant factor; and the new number of variables is only a constant above the number of original strong variables.
Proof. Let φ be given. We first apply a number of simple syntactic transformations to move the weak quantifiers directly behind universal quantifiers and to move inequality literals directly behind blocks of weak quantifiers. Then we show how sets of inequalities can be "completed" if necessary. Finally, we inductively transform the formula in such a way that Theorem 3.1 can be applied repeatedly.
As a running example, we use the (semantically not very sensible, but syntactically interesting) formula φ, and for each transformation we show how it applies to this example.
Preliminaries. It will be useful to require that all weak variables are different. Thus, as long as necessary, when a variable is bound by a weak quantifier and once more by another quantifier, replace the variable used by the weak quantifier by a fresh variable. Note that this may increase the number of distinct (weak) variables in the formula, but we will get rid of all of them later on anyway. From now on, we may assume that the weak variables are all distinct from one another and also from all other variables.
It will also be useful to assume that φ starts with a universal quantifier. If this is not the case, replace φ by the equivalent formula ∀v(φ) where v is a fresh variable. This increases the quantifier rank by at most 1.
Finally, it will also be useful to assume that the formula has been "flattened" as in the proof of Theorem 2.3: We use the distributive laws of propositional logic to repeatedly replace subformulas of the form (α ∨ β) ∧ γ by (α ∧ γ) ∨ (β ∧ γ). Note that this transformation does not change which variables are weak.
For our running example, applying the described preprocessing yields:

Syntactic Transformations I: Blocks of Weak Quantifiers. The first interesting transformation is the following: We wish to move weak quantifiers "as far up the syntax tree as possible." To achieve this, we apply the following equivalences as long as possible by always replacing the left-hand side (and also commutatively equivalent formulas) by the right-hand side: Note that β does not contain ẋ since we made all weak variables distinct and, of course, by ∃y we mean a strong quantifier. Once the transformations have been applied exhaustively, all weak quantifiers will be directly preceded in φ by either a universal quantifier or another weak quantifier. This means that all weak quantifiers are now arranged in blocks inside φ, each block being preceded by a universal quantifier.
φ ≡ ∀v∃ẋ_1∃ẋ_2∃a(Eaẋ_1 ∧ ẋ_2 ≠ ẋ_1 ∧ ∀c∃ẋ_3∃ẋ_4(Eẋ_3ẋ_4 ∨ ∃z(ẋ_3 ≠ ẋ_4 ∧ Pz ∧ Qc)))

Syntactic Transformations II: Weak and Strong Literals. In order to apply color coding later on, it will be useful to have only three kinds of literals in φ:

1. Strong literals are literals that do not contain any weak variables.
2. Weak equalities are literals of the form ẋ = y for a weak variable ẋ and an existentially bound strong variable y.
3. Weak inequalities are literals of the form ẋ ≠ ẏ for two weak variables ẋ and ẏ.

Let us call all other kinds of literals bad. This includes literals like Eẋẋ or Ezẏ that contain a relation symbol and some weak variables, but also inequalities ẋ ≠ y involving a weak and a strong variable, equalities ẋ = ẏ involving two weak variables, or an equality literal like the one in ∀y∃ẋ(ẋ = y). Finally, literals involving the successor function and weak variables are also bad. In order to get rid of the bad literals, we will replace them by equivalent formulas that do not contain any bad literals. The idea is that we bind the variable or term that we wish to get rid of using a new existential quantifier. In order to avoid introducing too many new variables, for all of the following transformations we use the set of fresh variables v_1, v_2, and so on, where we may need more than one of these variables per literal, but will need no more than O(arity(τ)) (recall that arity(τ) is the maximum arity of relation symbols in τ).
Let us first get rid of the successor functions. If a bad literal λ contains succ^k(x) (where x may be a strong or a weak variable), we replace λ by an equivalent formula that binds the value of the successor term to fresh existentially quantified variables. Here, λ[t_1 ↪ t_2] is our notation for the substitution of t_1 by t_2 in λ. The number i is chosen minimally so that λ contains neither v_i nor v_{i+1}. Clearly, if we repeatedly apply this transformation to all literals containing the successor function, we get an equivalent formula in which no bad literal contains the successor function. Note that we use at most 2 arity(τ) of the variables v_i.
Next, we get rid of the remaining bad literals, which are literals λ that contain a weak variable ẋ but are neither weak equalities nor weak inequalities. This time, we replace λ by ∃v_i(ẋ = v_i ∧ λ[ẋ ↪ v_i]), where, once more, i is chosen minimally to avoid a name clash. Since this transformation reduces the number of weak variables in λ and does not introduce a bad literal, sooner or later we will have gotten rid of all bad literals. Once more, for each literal we use at most arity(τ) new variables from the v_i.
Overall, we get that φ is equivalent to a formula without any bad literals in which we use at most 3 arity(τ) additional variables and whose quantifier rank is larger than that of φ by at most 3 arity(τ). Note that the transformation ensures that weak variables stay weak. Applied to our example formula, we get:

Syntactic Transformations III: Accumulating Weak Inequalities. We now wish to move all weak inequalities into the "vicinity" of the corresponding block of weak quantifiers. More precisely, just as we did earlier, we apply the following equivalences (interpreted once more as rules that are applied from left to right): Note that these rules do not change which variables are weak. When these rules can no longer be applied, the weak inequalities are "next to" their quantifier block, that is, each subformula starting with weak quantifiers has the form of a disjunction of conjunctions, where the α_i contain no weak inequalities while all λ_i^j are weak inequalities. For our example formula, we get: Finally, we now swap each block of weak quantifiers with the following disjunction, that is, we apply the following equivalence from left to right: If necessary, we rename weak variables to ensure once more that they are unique. For our example, the different transformations yield: ∃ẋ_5∃ẋ_6(ẋ_5 ≠ ẋ_6 ∧ ∃z(Pz ∧ Qc)).
We make the following observation at this point: Inside each ψ_i, each of the variables ẋ_{i_1} to ẋ_{i_k} is used at most once outside of weak inequalities. The reason for this is that rules (6) and (7) ensure that there are no disjunctions inside the ψ_i that involve a weak variable ẋ. Thus, the requirement "in any subformula of ψ_i of the form α ∧ β only α or β, but not both, may use ẋ in a literal that is not a weak inequality" from the definition of weak variables just boils down to "ẋ may only be used once in ψ_i in a literal that is not a weak inequality."

Syntactic Transformations IV: Completing Weak Inequalities. The last step before we can apply the color coding method is to "complete" the conjunctions of weak inequalities. After all the previous transformations have been applied, each block of weak quantifiers now has the form ∃ẋ_1 ⋯ ∃ẋ_k(⋀_i λ_i ∧ α), where the λ_i are all weak inequalities (between some or all pairs of ẋ_1 to ẋ_k) and α contains no weak inequalities involving the ẋ_i (but may, of course, contain weak equalities involving the ẋ_i). Actually, the weak variables need not be ẋ_1 to ẋ_k, but let us assume this to keep the notation simple.
The formula ⋀_i λ_i expresses that some of the variables ẋ_i must be different. If the formula encompasses all possible weak inequalities between distinct ẋ_i and ẋ_j, then it requires that all ẋ_i be distinct, which is exactly the situation in which color coding can be applied. However, some weak inequalities may be "missing," as in the formula ẋ_1 ≠ ẋ_2 ∧ ẋ_2 ≠ ẋ_3 ∧ ẋ_1 ≠ ẋ_3 ∧ ẋ_3 ≠ ẋ_4: This formula requires that ẋ_1 to ẋ_3 be distinct and that ẋ_4 be different from ẋ_3, but it would be allowed that ẋ_4 equals ẋ_1 or ẋ_2. Indeed, it might be the case that the only way to make α true is to make ẋ_1 equal to ẋ_4. This leads to a problem in the context of color coding: We want to color ẋ_1, ẋ_2, and ẋ_3 differently, using, say, red, green, and blue. In order to ensure ẋ_3 ≠ ẋ_4, we must give ẋ_4 a color different from blue. However, it would be wrong to color it red or green or to use a new color like yellow, since each choice would prematurely decide whether ẋ_4 is equal or unequal to ẋ_1 or ẋ_2, and each possibility must be considered to ensure that we miss no assignment that makes α true.
The trick at this point is to reduce the problem of missing weak inequalities to the situation where all weak inequalities are present by using a large disjunction over all possible ways to unify weak variables without violating the weak inequalities.
In detail, let us call a partition P_1 ∪ ⋯ ∪ P_l of the set {ẋ_1, …, ẋ_k} allowed by the λ_i if the following holds: For each P_j and any two different ẋ_p, ẋ_q ∈ P_j, none of the λ_i is the inequality ẋ_p ≠ ẋ_q. In other words, the λ_i do not forbid that the elements of any P_j are identical. Clearly, the partition with P_j = {ẋ_j} is always allowed by any λ_i, but in the earlier example further partitions are allowed as well. We introduce the following notation: For a partition P_1 ∪ ⋯ ∪ P_l = {ẋ_1, …, ẋ_k} we will write distinct(P_1, …, P_l) for ⋀_{1≤i<j≤l, ẋ_p∈P_i, ẋ_q∈P_j} ẋ_p ≠ ẋ_q. We claim the following:

Claim. For any weak inequalities λ_i we have ⋀_i λ_i ≡ ⋁_{P_1 ∪ ⋯ ∪ P_l is allowed by the λ_i} distinct(P_1, …, P_l).

Proof. For the implication from left to right, assume that A |= ⋀_i λ_i(a_1, …, a_k) for some (not necessarily distinct) a_1, …, a_k ∈ |A|. The elements induce a natural partition P_1 ∪ ⋯ ∪ P_l = {ẋ_1, …, ẋ_k} where two variables ẋ_p and ẋ_q are in the same set P_j if, and only if, a_p = a_q. This partition is allowed by the λ_i since none of the satisfied inequalities can relate two variables from the same set. Then, clearly, for all i and j with 1 ≤ i < j ≤ l and any ẋ_p ∈ P_i and ẋ_q ∈ P_j we have a_p ≠ a_q. Thus, all inequalities in distinct(P_1, …, P_l) are satisfied and, hence, so is the right-hand side.
For the other direction, suppose that A is a model of the right-hand side for some a_1 to a_k. Then there must be a partition P_1 ∪ ⋯ ∪ P_l that is allowed by the λ_i such that A is also a model of distinct(P_1, …, P_l). Furthermore, each λ_i is actually present in this last formula: If ẋ_p ≠ ẋ_q is one of the λ_i, then by the very definition of "P_1 ∪ ⋯ ∪ P_l is allowed by the λ_i" we must have that ẋ_p and ẋ_q lie in different P_i and P_j, which, in turn, implies that ẋ_p ≠ ẋ_q is present in distinct(P_1, …, P_l).
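The claim can be checked exhaustively for small instances. In the sketch below (our own illustration), we enumerate the set partitions of {ẋ_1, …, ẋ_4} allowed by the inequalities ẋ_1 ≠ ẋ_2 ∧ ẋ_2 ≠ ẋ_3 ∧ ẋ_1 ≠ ẋ_3 ∧ ẋ_3 ≠ ẋ_4 of the earlier example and verify the stated equivalence over all assignments into a four-element universe:

```python
from itertools import product

def partitions(items):
    # Enumerate all set partitions of a list of variables.
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):          # put head into an existing block
            yield part[:i] + [part[i] | {head}] + part[i + 1:]
        yield part + [{head}]               # or into a block of its own

def allowed(part, ineqs):
    # Allowed: no block contains both endpoints of some inequality.
    return not any(p in blk and q in blk for p, q in ineqs for blk in part)

def distinct_holds(part, a):
    # distinct(P_1,...,P_l): variables from different blocks get
    # different values under the assignment a.
    return all(a[p] != a[q]
               for i, Pi in enumerate(part) for Pj in part[i + 1:]
               for p in Pi for q in Pj)

variables = ["x1", "x2", "x3", "x4"]
ineqs = [("x1", "x2"), ("x2", "x3"), ("x1", "x3"), ("x3", "x4")]
allowed_parts = [p for p in partitions(variables) if allowed(p, ineqs)]

# The claim: the conjunction of the inequalities is equivalent to the
# disjunction of distinct(P_1,...,P_l) over all allowed partitions.
for values in product(range(4), repeat=4):
    a = dict(zip(variables, values))
    lhs = all(a[p] != a[q] for p, q in ineqs)
    rhs = any(distinct_holds(part, a) for part in allowed_parts)
    assert lhs == rhs
```

Here x4 may share a block with x1 or with x2 or stay alone, so exactly three partitions are allowed.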
As in the previous transformations, we now apply the equivalence from the claim from left to right. If we create copies of α during this process, we rename the weak variables in these copies to ensure, once more, that each weak variable is unique. In our example formula φ, there is only one place where the transformation changes anything: the middle weak quantifier block (the ∃ẋ_3∃ẋ_4 block). For the first and the last block, the literals ẋ_1 ≠ ẋ_2 and ẋ_5 ≠ ẋ_6, respectively, already rule out all partitions except for the trivial one. For the middle block, however, there are no weak inequalities at all and, hence, there are now two allowed partitions: first, P_1 = {ẋ_3}, P_2 = {ẋ_4}, but also P_1 = {ẋ_3, ẋ_4}. This means that we get a copy of the middle block where ẋ_3 and ẋ_4 are required to be different, and we renumber them to ẋ_7 and ẋ_8: ∃ẋ_5∃ẋ_6(ẋ_5 ≠ ẋ_6 ∧ ∃z(Pz ∧ Qc)).
Applying Color Coding. We are now ready to apply the color coding technique; more precisely, to repeatedly apply Theorem 3.1 to the formula φ. Before we do so, let us summarize the structure of φ: 1. All weak quantifiers come in blocks, and each such block either directly follows a universal quantifier or follows a disjunction after a universal quantifier. In particular, on any root-to-leaf path in the syntax tree of φ between any two blocks of weak quantifiers there is at least one universal quantifier.

2. All blocks of weak quantifiers have the form

∃ẋ_{i_1} ⋯ ∃ẋ_{i_k}(distinct(P_1, …, P_l) ∧ α)   (9)

for some partition P_1 ∪ ⋯ ∪ P_l = {ẋ_{i_1}, …, ẋ_{i_k}} and for some α in which the only literals that contain any ẋ_{i_j} are of the form ẋ_{i_j} = y for a strong variable y that is bound by an existential quantifier inside α. Furthermore, none of these weak equality literals is in the scope of a universal quantifier inside α. (Of course, all variables in φ are in the scope of a universal quantifier since we added one at the start, but the point is that none of the ẋ_{i_j} is in the scope of a universal quantifier that is inside α.)

In φ there may be several blocks of weak quantifiers, but at least one of them (let us call it β) must have the form (9) where α contains no weak variables other than ẋ_{i_1} to ẋ_{i_k}. (For instance, in our example formula, this is the case for the blocks starting with ∃ẋ_3∃ẋ_4, with ∃ẋ_7∃ẋ_8, and with ∃ẋ_5∃ẋ_6, but not for ∃ẋ_1∃ẋ_2 since, here, the corresponding α contains all of the rest of the formula.) In our example, we could choose β = ∃ẋ_7∃ẋ_8(ẋ_7 ≠ ẋ_8 ∧ ∃v_1(v_1 = ẋ_7 ∧ ∃v_2(v_2 = ẋ_8 ∧ Ev_1v_2))).

We build a new formula α′ from α as follows: We replace each occurrence of a weak equality ẋ_i = y in α for some weak variable ẋ_i ∈ P_j and some strong variable y by the formula C_j y. In our example, where P_1 = {ẋ_7} and P_2 = {ẋ_8}, we would get α′ = ∃v_1(C_1v_1 ∧ ∃v_2(C_2v_2 ∧ Ev_1v_2)).
An important observation at this point is that α ′ contains no weak variables any longer, while no additional variables have been added. In particular, the quantifier rank of α ′ equals the strong quantifier rank of α and the number of variables in α ′ equals the number of strong variables in α.
Note that the literals C j y and alsoẋ i = y are positive since the formulas are in negation normal form. Hence, they have the following monotonicity property: If some structure together with some assignment to the free variables is a model of α or α ′ , but a literalẋ i = y or C j y is false, the structure will still be a model if we replace the literal by a tautology.
For simplicity, in the following, we assume that ẋ_{i_1} to ẋ_{i_k} are just ẋ_1 to ẋ_k. Also for simplicity we assume that β contains no free variables when, in fact, it may have some. However, these variables cannot be any of the variables y for which we make changes and, thus, it keeps the notation simpler to ignore the additional free variables here. The following statement simply holds for all assignments to them:

Claim. Let P_1 ∪ ⋯ ∪ P_l = {ẋ_1, …, ẋ_k}. Then for each structure A, the following are equivalent:

1. A |= ∃ẋ_1 ⋯ ∃ẋ_k(distinct(P_1, …, P_l) ∧ α).
2. There are elements a_1, …, a_k ∈ |A| with A |= α(a_1, …, a_k) and such that a_p ≠ a_q whenever ẋ_p ∈ P_i, ẋ_q ∈ P_j, and i ≠ j.
3. There is an l-coloring B of A such that B |= α ′ .
Proof. For the proof of the claim, it will be useful to apply some syntactic transformations to α and α′. Just like the many transformations we encountered earlier, these transformations yield equivalent formulas and, thus, it suffices to prove the claim for them (since the claim is only about the models of α and α′). However, these transformations are needed only to prove the claim; they are not part of the "chain of transformations" that is applied to the original formula (they would increase the number of strong variables far too much).
In α there will be some occurrences of literals of the form ẋ_i = y. For each such occurrence, there will be exactly one subformula in α of the form ∃y(γ) where γ contains ẋ_i = y. We now apply two syntactic transformations: First, we replace y in ∃y(γ) by a fresh new variable y_i (that is, we replace all free occurrences of y inside γ by y_i and we replace the leading ∃y by ∃y_i). Second, we "move all ∃y_i to the front" by simply deleting all occurrences of ∃y_i from α, resulting in a formula δ, and then adding the block ∃y_1 ⋯ ∃y_k before δ. As an example, if we apply these transformations to α = ∃v_1(ẋ_7 = v_1 ∧ ∃v_2(ẋ_8 = v_2 ∧ Ev_1v_2)), the first transformation yields ∃y_7(ẋ_7 = y_7 ∧ ∃y_8(ẋ_8 = y_8 ∧ Ey_7y_8)) and the second one yields the new α = ∃y_1 ⋯ ∃y_8(ẋ_7 = y_7 ∧ ẋ_8 = y_8 ∧ Ey_7y_8).
In α′, we apply exactly the same transformations, only now the literals we look for are not ẋ_i = y but C_j y. We still apply the same renaming of y (namely to y_i and not to y_j) as in α and apply the same movement of the quantifiers. This results in a new formula α′ of the form ∃y_1 ⋯ ∃y_k(δ′). For α′ = ∃v_1(C_1v_1 ∧ ∃v_2(C_2v_2 ∧ Ev_1v_2)) we get the new α′ = ∃y_1 ⋯ ∃y_8(C_1y_7 ∧ C_2y_8 ∧ Ey_7y_8), and δ′ is now the inner part without the quantifiers.
Let us now prove the claim. The first two items are trivially equivalent by the definition of distinct(P 1 , . . . , P l ).
The second statement implies the third: To show this, for j ∈ {1, …, l} we first set C_j^B = {a_i | ẋ_i ∈ P_j} and then add |A| \ {a_1, …, a_k} to, say, C_1^B in order to create a correct partition. This setting clearly ensures that whenever ẋ_i = y holds in α, we also have C_j y holding in α′. Since α′ differs from α only on the literals of the form ẋ_i = y (which got replaced by C_j y), since we just saw that when ẋ_i = y holds in α, the replacement C_j y holds in α′, and since α′ has the monotonicity property (by which it does not matter if more literals of the form C_j y hold in α′ than the corresponding ẋ_i = y did in α), we get the third statement.
The third statement implies the second: Let an l-coloring B of A be given with B |= α′. Since α′ = ∃y_1 ⋯ ∃y_k(δ′), there must now be elements b_1, …, b_k ∈ |A| such that B |= δ′(b_1, …, b_k). We define new elements a_i ∈ |A| as follows: Let j be the index with ẋ_i ∈ P_j. If b_i ∈ C_j^B, let a_i = b_i; otherwise, let a_i be an arbitrary element of C_j^B. We show in the following that the a_i constructed in this way can be used in the second statement, that is, we claim that A |= α(a_1, …, a_k) and that the a_i have the distinctness property from the claim.
First, recall that α is of the form ∃y_1 ⋯ ∃y_k(δ) (because of the syntactic transformations we applied for the purposes of the proof of this claim) and δ contains literals of the form ẋ_i = y_i, where the ẋ_i are the free variables for which the values a_i are now plugged in. We claim that A |= δ(a_1, …, a_k, b_1, …, b_k), that is, we claim that if we plug in a_1 to a_k for the free variables ẋ_1 to ẋ_k in δ and we plug in b_1 to b_k for the (additional) free variables y_1 to y_k in δ, then δ holds in A. To see this, recall that B |= δ′(b_1, …, b_k) holds and δ′ is identical to δ except that ẋ_i = y_i got replaced by C_j y_i. In particular, by construction of the a_i, whenever C_j y_i holds in B with y_i being set to b_i (that is, whenever b_i ∈ C_j^B), we clearly also have that ẋ_i = y_i holds in A with ẋ_i being set to a_i and y_i being set to b_i (since we let a_i = b_i whenever b_i ∈ C_j^B). But, then, by the monotonicity property, we know that A |= δ(a_1, …, a_k, b_1, …, b_k) will hold. Second, we argue that the distinctness property holds, that is, a_p ≠ a_q whenever ẋ_p ∈ P_i, ẋ_q ∈ P_j, and i ≠ j. However, our construction ensured that we always have a_r ∈ C_s^B for the s with ẋ_r ∈ P_s. In particular, ẋ_p ∈ P_i and ẋ_q ∈ P_j for i ≠ j implies that a_p and a_q lie in two different color classes and are, hence, distinct.
By the claim, A |= β is equivalent to there being an l-coloring B of A such that B |= α ′ . We now apply Theorem 3.1 to α ′ (as φ), which yields a new formula α ′′ (called φ ′ in the theorem) with the property A |= α ′′ ⇐⇒ A |= β. The interesting thing about α ′′ is, of course, that its quantifier rank and its number of variables exceed those of α ′ only by a constant. Most importantly, we already pointed out earlier that α ′ does not contain any weak variables and, hence, the quantifier rank of α ′′ equals the strong quantifier rank of β and the number of variables in α ′′ equals the number of strong variables in β, in both cases up to an additive constant.
Applying this transformation to our running example φ and choosing as β once more the subformula starting with ∃ẋ 7 ∃ẋ 8 , we would get the following formula (ignoring the technical issue of how, exactly, the hashing is implemented, see the proof of Theorem 3.1 for the details): We can now repeat the transformation to replace each block β in this way. Observe that in each transformation we can reuse the variables (in particular, p and q) introduced by the color coding: ∀v g ∃p∃q ∃a∃v 1 (hash g (v 1 , p, q) = 1 ∧ Eav 1 ) ∧ ∀c g ∃p∃q∃v 1 (hash g (v 1 , p, q) = 1 ∧ ∃v 2 (hash g (v 2 , p, q) = 1 ∧ Ev 1 v 2 )) ∨ g ∃p∃q∃v 1 (hash g (v 1 , p, q) = 1 ∧ ∃v 2 (hash g (v 2 , p, q) = 2 ∧ Ev 1 v 2 )) ∨ g ∃p∃q∃z(P z ∧ Qc) .
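The hash g predicate above stands for a hash family of the kind standardly used to derandomize color coding. A minimal sketch, assuming the textbook family h p,q (v) = ((p · v + q) mod P) mod l (the function names and the brute-force search over p and q are ours, not the paper's exact construction):

```python
def hash_pq(v, p, q, prime, num_colors):
    """Map a vertex id v to a color in {1, ..., num_colors}."""
    return ((p * v + q) % prime) % num_colors + 1

def some_pq_separates(vertices, prime, num_colors):
    """Search for a pair (p, q) that assigns the given vertices pairwise
    distinct colors -- the event that color coding relies on.  For a
    good hash family, some pair in the polynomial-size index set works."""
    for p in range(1, prime):
        for q in range(prime):
            colors = {hash_pq(v, p, q, prime, num_colors) for v in vertices}
            if len(colors) == len(vertices):
                return (p, q)
    return None
```

Quantifying over p and q is exactly what the inserted ∃p∃q blocks do in the formula above.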
In conclusion, we see that we can transform the original formula φ to a new formula φ ′ with the following properties:
• We added new variables and quantifiers to φ ′ compared to φ during the first transformation steps, but the number we added depended only on the signature τ (it was three times the maximum arity of relations in τ ).
• We then removed all weak variables from φ in φ ′ .
• We added some variables to φ ′ each time we applied Theorem 3.1 to a block β. The number of variables we added is constant since Theorem 3.1 adds only a constant number of variables and since we can always reuse the same set of variables each time the theorem is applied.
• We also added some quantifiers to φ ′ each time we applied Theorem 3.1, which increases the quantifier rank of φ ′ compared to φ by more than a constant. However, the essential quantifiers we add are ∃p∃q and these are always added directly after a universal quantifier or directly after a disjunction after a universal quantifier. Since the strong quantifier rank of φ is at least the quantifier rank of φ where we only consider the universal quantifiers (the "universal quantifier rank"), the two added nested quantifiers per universal quantifier can add to the quantifier rank of φ ′ at most twice the universal quantifier rank.
We already mentioned that the notion of weak existential quantifiers begs a dual: By Theorem 3.5, for φ = ∃ẋ 1 · · · ∃ẋ k (ψ) there is an equivalent formula φ ′ with qr(φ ′ ) = O(strong-qr(φ)). Since, trivially, qr(¬φ ′ ) = qr(φ ′ ), the formula ¬φ is also equivalent to a formula of quantifier rank O(strong-qr(φ)). The normal form of ¬φ starts with ∀x 1 · · · ∀x k to which Theorem 3.5 does not apply "at all" -but the dual of the theorem applies, where we call the leading quantifier in a (sub)formula ∀x(φ) weak if no existential binding inside φ depends on x and in all subformulas of φ of the form α ∨ β at most one of α and β may contain a literal that contains x and is not of the form x = y (note that this is now an equality). More interestingly, we can even show that both kinds of weak quantifiers may be present: Theorem 3.6. Theorem 3.5 still holds when φ may contain both existential and universal weak variables, none of which count towards the strong quantifier rank nor count as strong bound variables.
Proof. Given a formula φ that contains both existential and universal weak quantifiers, we apply a syntactic preprocessing that "separates these quantifiers and moves them before their dual strong quantifiers." The key observation that makes these transformations possible in the mixed case is that weak existential and weak universal quantifiers commute: For instance, ∃ẋ(α ∧ ∀ẏ(β)) ≡ ∀ẏ(β ∧ ∃ẋ(α)) sinceẋ andẏ cannot depend on one another by the core property of weak quantifiers (α cannot containẏ and β cannot containẋ). Once we have sufficiently separated the quantifiers, we can repeatedly apply Theorem 3.5 or its dual to each block individually.
As a running example, let us use the following formula φ: which mixes existential and universal weak variables rather freely. Similar to the proof of Theorem 3.5, for technical reasons we first add the superfluous quantifiers ∃v∀v for a fresh strong variable v at the beginning of the formula.
Our main objective is to get rid of alternations of weak universal and weak existential quantifiers without a strong quantifier in between. In the example, this is the case, for instance, for ∃ẋ(. . . ∀ẏ(. . . ∃ż . . . )). We get rid of these situations by pushing all quantifiers (weak or strong) down as far as possible (later on, when we apply Theorem 3.5, we will push them up once more). Let us write x̄ to indicate that x may be either a weak or a strong variable.
If β does not contain x̄ as a free variable, we can apply the following equivalences from left to right (and, of course, commutatively equivalent ones): Note that the definition of weak variables forbids that a universally bound variable depends on an existential weak variable (and vice versa). This means that in the first two lines, if x̄ is actually the weak variable ẋ and if β starts with ∀ẏ, we can automatically apply both equivalences. Similarly, if in the last two lines β starts with ∃ẏ, we can also apply both equivalences. Furthermore, we also apply the following general equivalences as long as possible: Applied to our example, we would get: ∃v∀v ∃ẋ∃a(Eẋa ∧ ∀b(Eba ∨ Eab) ∧ ∀ẏ(Eaẏ) ∧ ∃ż(Eaż) ∧ ∃ẇ(Eẇa)).
The purpose of the transformations was to achieve the situation described in the next claim: Claim. Assume that the above transformations have been applied exhaustively to φ and assume φ contains both existential and universal weak variables. Consider the maximal subformulas α i of φ that contain no weak universal variables and the maximal subformulas β i of φ that contain no weak existential variables. Then for some i and some γ one of the following formulas is a subformula of φ: ∀x(α i ∨ γ) or ∃x(γ ∧ β i ).
Proof. Consider any α among the α i . Since α is maximal but not all of φ, there must be a β among the β i such that either α ∨ β or α ∧ β is also a subformula of φ. Let us call it δ and consider the minimal subformula η of φ that contains δ and starts with a quantifier.
This quantifier cannot be a weak quantifier: Suppose it is ∃ẋ (the case ∀ẋ is perfectly symmetric). Since we can no longer apply one of the equivalences (10) to (17), the formula η must have the form ∃ẋ(ψ 1 ∧ · · · ∧ ψ m ) (where the ψ i are not of the form ρ ∧ σ) such that all ψ i contain ẋ (otherwise (10) would be applicable) and such that none of the ψ i is of the form ρ ∨ σ (otherwise (14) would be applicable). This implies that all ψ i start with a quantifier. Since η was minimal to contain δ, we conclude that one ψ i must be α and another one must be β. But, then, β contains a weak existential variable, namely ẋ, which we ruled out.
Since η does not start with a weak quantifier, it must start with a strong quantifier. If it is ∃x, by the same argument as before we get that η must have the form ∃x(ψ 1 ∧ · · · ∧ ψ m ) with some ψ i equal to α and some other ψ j equal to β. But, then, we have found the desired subformula of φ if we set γ to the conjunction of all ψ i with i ≠ j. If the strong quantifier is ∀x, a perfectly symmetric argument shows that η must have the form ∀x(ψ 1 ∨ · · · ∨ ψ m ) with some ψ j = α, which implies the claim for γ being the disjunction of all ψ i with i ≠ j.
The importance of the claim for our argument is the following: As long as φ still contains both existential and universal weak variables, we still find a subformula α or β that contains only existential or universal weak variables such that if we go up from this subformula in the syntax tree of φ, the next quantifier we meet is a strong quantifier. This means that we can now apply Theorem 3.5 or its dual to this subformula, getting an equivalent new formula α ′ or β ′ whose quantifier rank equals the strong quantifier rank of α or β, respectively, times a constant factor. Furthermore, similar to the argument at the end of the proof of Theorem 3.5 where we processed one β after another, each time a replacement takes place, there is a strong quantifier that contributes to the strong quantifier rank of φ.

Syntactic Proofs and Natural Problems
The special allure of descriptive complexity theory lies in the possibility of proving that a problem has a certain complexity just by describing the problem in the right way. The "right way" is, of course, a logical description that has a certain syntax (such as having a bounded strong quantifier rank). In the following we present such descriptions for several natural problems and thereby bound their complexity "in a purely syntactic way." First, however, we present "syntactic tools" for describing problems more easily. These tools are built on top of the notion of strong and weak quantifiers.

Syntactic Tools: New Operators
It is common in mathematical logic to distinguish between the core syntax and additional "shorthands" built on top of the core syntax. For instance, while ¬ and ∨ are typically considered to be part of the core syntax of propositional logic, the notation a → b is often seen as a shorthand for ¬a ∨ b. In a similar way, we now consider the notions of weak variables and quantifiers introduced in the previous section as our "core syntax" and build a number of useful shorthands on top of them. Of course, just as a → b has an intended semantic meaning that the expansion ¬a ∨ b of the shorthand must reflect, the shorthands we introduce also have an intended semantic meaning, which we specify.
As a first example, consider the common notation ∃ ≥k x(φ(x)), whose intended semantics is "there are at least k different elements in the universe that make φ(x) true." While this notation is often considered as a shorthand for ∃x 1 · · · ∃x k ( ⋀ i≠j x i ≠ x j ∧ ⋀ k i=1 φ(x i )), we will consider it a shorthand for the equivalent, but slightly more complicated formula ∃ẋ 1 · · · ∃ẋ k ( ⋀ i≠j ẋ i ≠ ẋ j ∧ ⋀ k i=1 ∃x(x = ẋ i ∧ φ(x))). The difference is, of course, that the strong quantifier rank is now much lower and, hence, by Theorem 3.5 we can replace any occurrence of ∃ ≥k x(φ(x)) by a formula of quantifier rank qr(φ) + O(1). In all of the following notations, k and s are arbitrary values. The indicated strong quantifier rank for the notation is that of its expansion. The semantics describe which structures A are models of the formula.
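That the two expansions agree can be checked by brute force on a small universe. A sketch (the function names are ours; the "weak" reading mirrors the expansion in which each witness ẋ i is used only once, inside ∃x(x = ẋ i ∧ φ(x))):

```python
from itertools import combinations

def at_least_k_direct(universe, phi, k):
    """Naive semantics: at least k distinct elements satisfy phi."""
    return sum(1 for a in universe if phi(a)) >= k

def at_least_k_weak_style(universe, phi, k):
    """Weak-variable expansion: guess k pairwise distinct witnesses and,
    for each witness w, check that SOME element equal to w satisfies phi
    (the single allowed use of the weak variable)."""
    return any(
        all(any(x == w and phi(x) for x in universe) for w in witnesses)
        for witnesses in combinations(universe, k)
    )
```

Both functions compute the same predicate; only the "shape" of the check differs, just as the two formulas differ only in their strong quantifier rank.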
The next notation is useful for "binding" a set of vertices to weak or strong variables. The binding contains the allowed "single use" of the weak variables in the sense of Definition 3.4, but they can still be used in inequality literals. Let x̄ indicate that x may be weak or strong.
⋁ I⊆{1,...,k},|I|=s ⋀ i,j∈I,i≠j x̄ i ≠ x̄ j . // ensure |{x̄ 1 , . . . , x̄ k }| ≥ s
The final notation can be thought of as a "generalization of ∃ =k " where we not only ask whether there are exactly k distinct a i with a property φ, but whether these a i then also have an arbitrary special additional property. Formally, let Q ⊆ struc[τ ] be an arbitrary τ -problem. We write A[I] for the substructure of A induced on a subset I ⊆ |A|.
Strong-qr: 1 + strong-qr(φ) + arity(τ ).
Semantics: The set I = {a ∈ |A| | A |= φ(a)} has size exactly k and A[I] ∈ Q.
Expansion: Assuming for simplicity that τ contains only E 2 as non-arithmetic predicate: where π i (x) is a shorthand for φ(x) ∧ ∃ =i−1 z(z < x ∧ φ(z)), which binds x to the ith element of the universe with property φ.
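The semantics of the induced notation can be stated operationally. A sketch (ours), with Q given as a predicate on the induced substructure:

```python
def induced_in_Q(universe, E, phi, k, Q):
    """Semantics of the shorthand: the set I defined by phi has size
    exactly k, and the substructure induced on I belongs to Q."""
    I = sorted(a for a in universe if phi(a))
    if len(I) != k:
        return False
    # restrict the edge relation to I, mirroring A[I]
    sub_E = {(u, v) for (u, v) in E if u in I and v in I}
    return Q(I, sub_E)
```

For instance, with Q = "is a triangle" this checks that φ defines exactly three vertices that are pairwise adjacent.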

Bounded Strong-Rank Description of Vertex Cover
A vertex cover of a graph G = (V, E) is a subset X ⊆ V with e ∩ X ≠ ∅ for all e ∈ E. The problem p-vertex-cover asks whether a graph has a cover X with |X| ≤ k. Proof. We describe the problem using a family (φ k ) k∈N of constant strong quantifier rank that expresses the well-known Buss kernelization "using logic": Let high(x) = ∃ ≥k+1 y(Exy) express that x is a high-degree vertex. Buss observed that all high-degree vertices must be part of any vertex cover of size at most k. Thus, h ≤ k must hold for the unique h with ∃ =h x(high(x)). A remaining vertex is interesting if it is connected to at least one non-high-degree vertex: interesting(x) = ¬ high(x) ∧ ∃y(Exy ∧ ¬ high(y)). If there are more than (k − h)(k + 1) ≤ k 2 + k interesting vertices, there cannot be a vertex cover -and if there are fewer, the graph induced on the interesting vertices must have a vertex cover of size k − h. In symbols: φ k = ⋁ k h=0 ∃ =h x(high(x)) ∧ induced size≤k 2 +k {x | interesting(x)} ∈ Q k−h for Q s = {G | G has a vertex cover of size s}.
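A procedural reading of this kernelization, as a sketch (the names are ours; the rejection test counts remaining edges rather than interesting vertices, using that every non-high vertex covers at most k edges):

```python
def buss_kernel(n, edges, k):
    """Buss kernelization for vertex cover, mirroring the formula:
    vertices of degree > k ('high') must be in every cover of size <= k;
    what remains is a parameter-sized kernel (or a trivial 'no')."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    high = {v for v in range(n) if deg[v] > k}
    h = len(high)
    if h > k:
        return None  # more than k forced vertices: no size-k cover
    # edges not already covered by the forced high-degree vertices
    rest = [(u, v) for u, v in edges if u not in high and v not in high]
    # each remaining vertex has degree <= k, so a cover of size k - h
    # can cover at most (k - h) * k of the remaining edges
    if len(rest) > (k - h) * k:
        return None
    return rest, k - h  # kernel: does `rest` have a cover of size k - h?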

Bounded Strong-Rank Description of Hitting Set
Hitting sets generalize the notion of vertex covers to hypergraphs, which are pairs (V, E) where the members of E are called hyperedges and we have e ⊆ V for all e ∈ E. Hitting sets are still sets X ⊆ V with e ∩ X ≠ ∅ for all e ∈ E. The problem p k,d -hitting-set asks whether a hypergraph with max e∈E |e| ≤ d has a hitting set X with |X| ≤ k. Note that p-vertex-cover is exactly this problem restricted to d = 2. Note that we allow the universe of H to contain elements e that are neither vertices nor hyperedge-representing elements, but the corresponding sets set(e) do not contribute to E. We also allow that two different elements e, e ′ ∈ |H| represent the same set set(e) = set(e ′ ). This can be problematic in a kernelization: When we identify a kernel set E ′ of hyperedges, there could still be a large (non-parameter-dependent) number of elements in the universe that represent these hyperedges -meaning that these elements do not form a kernel themselves. Fortunately, this can be fixed: We can easily check whether two elements represent the same set using ∀x(in xe ↔ in xe ′ ) and then always consider only the first representing element with respect to the ordering < of the universe. For this reason, we will assume in the following that for any subset s ⊆ V there is at most one e ∈ |H| with s = set(e).
For a hypergraph H = (V, E), let d(H) = max e∈E |e| be the maximum size of any hyperedge; for a structure H, let d(H) = d(H(H)).
A hitting set for a hypergraph (V, E) is a set X ⊆ V with e ∩ X ≠ ∅ for all e ∈ E. The problem p k,d -hitting-set is the set of all pairs (H, num(k, d)) such that H(H) is a hypergraph with d(H) ≤ d and for which there is a hitting set of size at most k.
Proof. The idea behind the proof is a (very strong) generalization of the Buss kernel argument from the proof of Theorem 4.1. As in that proof, we will present a family (φ k,d ) k,d∈N of bounded strong quantifier rank that describes p k,d -hitting-set. First, there are two simple preliminaries: Testing whether d(H) ≤ d holds is easy to achieve using ∀e(hyperedge e → ∃ ≤d v(in ve)), so let us assume that this is the case and let us write H = (V, E) for H(H). Furthermore, let us write subset ef for ∀x(in xe → in xf ), which indicates that set(e) ⊆ set(f ).
Representing Subsets of Hyperedges. Recall that the core idea of the kernelization of the vertex cover problem is that a "high-degree vertex" must be part of a vertex cover. Rephrased in the language of hypergraphs, a graph is a hypergraph H with d(H) = 2, a vertex cover is a hitting set, and making a high-degree vertex v part of a hitting set is (in essence) the same as removing all edges containing v and then adding the singleton hyperedge {v}, which can clearly only be hit by making v part of the hitting set.
In the general case, we will also remove hyperedges from the hypergraph and replace them by smaller hyperedges (though, no longer, by singletons) and we will do so repeatedly. The problem is that adding hyperedges is difficult in our encoding since this means that we would have to add elements to the universe of the logical structure that represent the new hyperedges. Although these problems can be circumvented by complex syntactic trickery, we feel it is cleaner to do the following at this point: We reduce the original hitting set problem to a new version, where the universe already contains all the necessary elements for representing the hyperedges we might wish to add later on.
In detail, we define a subset p k,d -hitting-set ′ ⊆ p k,d -hitting-set as follows: It contains only those (H, num(k, d)) such that for every e ∈ hyperedge H and every subset s ⊆ set(e) there is an e ′ ∈ |H| with s = set(e ′ ). In other words, for every subset s of any hyperedge there must already be an element e "in store" in the universe that represents it.
We can reduce p k,d -hitting-set to p k,d -hitting-set ′ by adding for an input H, if necessary, elements to the universe that represent all these subsets. We are helped by the fact that we have an upper bound d on the size of the hyperedges, which means that the maximum blowup of the universe in this reduction is by the parameter-dependent value of 2 d . However, we have not yet defined which notion of reductions between parameterized problems we wish to use and there are many definitions in the literature. Since para-AC 0 is severely restricted computation-wise, we must use a weak one.
We postpone this question until after the proof, where we present a suitable definition for reductions (Definition 4.4) such that all considered classes are closed under them and then show in Lemma 4.7 that p k,d -hitting-set reduces to p k,d -hitting-set ′ . Thus, in the following, we may assume that for every hyperedge in the input structure for all subsets of this hyperedge we already have an element in the universe representing this subset.
Finding Sunflowers. We first show a way of kernelizing the hitting set problem, due to Chen et al. [6], that "almost works." The core idea is to detect and collapse sunflowers in the input hypergraph [9]. A sunflower of size k + 1 with core c is a set {p 1 , . . . , p k+1 } ⊆ E of distinct hyperedges, called petals, such that for all i ≠ j we have p i ∩ p j = c. In other words, all petals contain the core but are otherwise pairwise disjoint. For convenience, we also assume that all petals are proper supersets of the core. The important observation is that if a sunflower of size k + 1 has a hitting set of size k, then the core must also be hit -and when the core is hit, all petals are hit. This means that we can just replace a sunflower by its core when we are looking for size-k hitting sets.
The following formula tests whether set(c) is the core of a sunflower of size k + 1: Here, (18) guarantees that the petals are pairwise disjoint outside the core and (19) checks that the petals are supersets of c and that when we add p 1 i , . . . , p d i (which are not necessarily distinct) to c, we get a present hyperedge.
The "collapsing" of sunflowers to their cores can now be done as follows: We define a formula with e as a free variable that is true when set(e) is a core or when set(e) is not a superset of any core (otherwise, we need not include set(e) since we include the core of a sunflower that contains it, instead): core e ∨ (hyperedge e ∧ ¬∃c(core c ∧ subset ce)). (20) The importance of the above formula lies in the following fact: The number of hyperedges for which the second part of the formula is true (that is, which are not supersets of a core of a sunflower of size k + 1), is bounded by a function in k and d. This is due to the famous Sunflower Lemma [9] which states that if a hypergraph has more than k d d! hyperedges, it contains a sunflower of size k + 1 (which has a core). This means that if core e were to hold for just a few hyperedges, (20) would describe a kernel for the hitting set problem and we would be done: Just as in the proof of Theorem 4.1, we could use the induced notation to solve the hitting set problem on the vertices and hyperedges for which (20) holds. Unfortunately, it is possible to construct hypergraphs such that core e still holds for a very large number of hyperedges.
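The proof of the Sunflower Lemma is constructive, and a sketch of the resulting search (ours, unoptimized) makes the "collapse to the core" step concrete: either greedily collect k + 1 pairwise disjoint petals (empty core), or some element lies in many hyperedges and we recurse on it:

```python
def find_sunflower(sets, k):
    """Try to find a sunflower with k + 1 petals in a family of sets,
    following the greedy proof of the Sunflower Lemma.  Returns a pair
    (core, petals) or None if the search fails."""
    sets = list(dict.fromkeys(frozenset(s) for s in sets))  # dedupe
    # greedily build a maximal subfamily of pairwise disjoint sets
    disjoint = []
    for s in sets:
        if all(not (s & t) for t in disjoint):
            disjoint.append(s)
    if len(disjoint) >= k + 1:
        return frozenset(), disjoint[:k + 1]  # sunflower with empty core
    # otherwise every set meets the union of `disjoint`; try each of its
    # elements as a forced core element and recurse with it removed
    covered = set().union(*disjoint) if disjoint else set()
    for x in covered:
        containing = [s for s in sets if x in s]
        sub = find_sunflower([s - {x} for s in containing], k)
        if sub is not None:
            core, petals = sub
            return core | {x}, [p | {x} for p in petals]
    return None
```

Replacing the found petals by the returned core is exactly the collapse that formula (20) performs declaratively.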
However, we know that a core always has a smaller size than any petal in its sunflower. In particular, all cores have maximum size d − 1. Thus, if we "view core as our new hyperedge predicate," we get "cores of cores": Note that strong-qr(core 2 c) = strong-qr(core c) + 1 = 2 since we had to add a new strong quantifier (∃e ′ ) whose scope contains core e ′ , which adds its own strong quantifier (∃e).
By the same argument as earlier, we get that the number of e for which the following formula holds equals the number of cores of cores plus something that only depends on the parameters k and d: core 2 e ∨ (core e ∧ ¬∃c(core 2 c ∧ subset ce)) ∨ (hyperedge e ∧ ¬∃c(core c ∧ subset ce)).
Still, the number of cores of cores can be large, but they all have size at most d − 2. Repeating the argument a further d − 2 times, we finally get the predicate kernel e: where core 0 is of course hyperedge and core d e can only be true for the (sole) e representing the empty set (in which case, there is no hitting set). Unfortunately, the strong quantifier rank of core d is d since the definition of core i in terms of core i−1 always adds one strong quantifier nesting (through a new ∃e ′...′ ). Thus, (21) also has a strong quantifier rank of d while we need O(1).
Finding Pseudo-Sunflowers. At this point, we need a way of describing cores of cores of cores and so on using a bounded strong quantifier rank. The idea how this can be done was presented in [4], where the notions of pseudo-cores and pseudo-sunflowers are introduced. The definitions are somewhat technical, see below, but the interesting fact about these definitions is that they can be expressed very nicely in a way similar to (18) and (19).
For a level L and a number k, let T k L denote the rooted tree in which all leaves are at the same depth L and all inner nodes have exactly k + 1 children. The root of T k L will always be called r in the following. Thus, T k 1 is just a star consisting of r and its k + 1 children, while in T k 2 each of the k + 1 children of r has k + 1 new children, leading to (k + 1) 2 leaves in total. For each l ∈ leaves(T k L ) = {l | l is a leaf of T k L } there is a unique path (l 0 , l 1 , . . . , l L ) from l 0 = r to l L = l.
Here, (22) ensures, similarly to (19) for normal sunflowers, that S(l, 0)∪S(l, 1)∪· · ·∪S(l, L) is a hyperedge, item 2 of the definition. The inequalities (23) ensure that item 3 of the definition holds, while (24) ensures item 4. The important observation is that pseudocore L has a strong quantifier rank that is independent of L. Since, as shown in [4], we can use pseudocore L as a replacement for core L in (21), we get that the hitting set problem can be described by a family of formulas of constant strong quantifier rank.
In the proof we used reductions (from p k,d -hitting-set to p k,d -hitting-set ′ ) although we have not yet given a definition of a notion of reductions that is appropriate for the context of the present paper. Clearly, we need a notion of parameterized reductions that is very weak to ensure that the smallest class we study, para-AC 0 , is closed under them. Such a reduction is used in the literature [3], boringly named para-AC 0 -reduction, but both its definition as well as the definition of other kinds of parameterized reductions found in the literature do not fit well with our logical framework: The reductions are defined in terms of machines or circuits that get as input a string that explicitly or implicitly contains the parameter k and output a new problem instance that once more explicitly or implicitly contains a new parameter value k ′ .
In contrast, in our setting the inputs and outputs must be logical structures that we wish to define in terms of formulas. Furthermore, "outputting a parameter value" is difficult in our formal framework since parameter values are not elements of the universe, but indices of the formulas. All of these problems can be circumvented, see for instance [7, Definition 5.3], but we believe it gives a cleaner formalism to give a new "purely logical" definition of reductions between parameterized problems. We will not prove this, but remark that the power of this reduction is the same as that of para-AC 0 -reductions.
• each f k is a first-order query from τ -structures to τ ′ -structures and
Let us briefly explain the ingredients of this definition: Each f k maps all τ -structures A to τ ′ -structures A ′ . The fact that we have one function for each parameter value allows us to make our mapping depend on the parameter. The job of the formulas ι k,k ′ is solely to "compute" the new parameter value k ′ , based not only on the original value k, but also on A. If, as is the case in many reductions, the new parameter value k ′ just depends on k (typically, it even is k), we can just set ι k,k ′ to a trivial tautology ⊤ and all other ι k,k ′′ to the contradiction ⊥.
In the definition, we referred to first-order queries, which are a standard way of defining a logical τ ′ -structure in terms of a τ -structure. A detailed account can be found in [16], but here is the basic idea: Suppose we wish to map graphs ((E 2 )-structures) to their underlying undirected graphs ((U 2 )-structures, where U represents the underlying symmetric edge set). In this case, there is a simple formula φ U (x, y) that tells us when U xy holds in the new structure: Exy ∨ Eyx. More importantly, if we have a formula ψ that internally uses U xy to check whether there is an undirected edge in the mapped graph, we can easily turn this into a formula ψ[f ], where we replace all occurrences of U xy by φ U (x, y), that gives the same answer as ψ when fed the original graph. In other words, if a first-order query maps A to A ′ and we wish to check whether A ′ |= ψ holds, we can just as well check whether A |= ψ[f ] holds.
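A sketch of this substitution mechanism (the structures and formulas here are toy examples of ours): evaluating ψ on the image structure and evaluating ψ[f], where Uxy is replaced by φ U (x, y), on the original structure give the same answer:

```python
def phi_U(E, x, y):
    """Defining formula of the query: Uxy holds iff Exy or Eyx."""
    return (x, y) in E or (y, x) in E

def apply_query(universe, E):
    """Materialize the (U^2)-structure defined by the query."""
    return {(x, y) for x in universe for y in universe if phi_U(E, x, y)}

def psi(universe, U):
    """Example sentence over the image: every vertex has a U-neighbor."""
    return all(any((x, y) in U for y in universe) for x in universe)

def psi_substituted(universe, E):
    """psi[f]: every occurrence of Uxy replaced by phi_U(E, x, y)."""
    return all(any(phi_U(E, x, y) for y in universe) for x in universe)
```

Checking psi on apply_query(A) and psi_substituted on A itself agree on every input, which is the defining property of ψ[f].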
The just-described example of a first-order query did not change the universe, which is something we sometimes wish to do (indeed, the whole point of the reduction between the two versions of the hitting set problem was a change of the universe). This is achieved by allowing the width w of the query to be larger than 1. The effect is that the universe U gets replaced by U w and, now, elements of this new universe can be described by tuples of variables of length w. We can also reduce the size of the universe using a formula φ universe (x 1 , . . . , x w ) that is true only for the tuples we wish to keep in the new structure's universe.
By definition, we have A |= φ k if, and only if, f k (A) |= φ ′ k ′ for the unique k ′ with A |= ι k,k ′ . If we can argue that the substitutions do not increase the quantifier rank or number of variables by more than a constant, we get the claim.
Unfortunately, simple substitutions fail to preserve the quantifier rank in a single case: When a formula φ ′ k ′ contains a large number of nested applications of the successor function. Suppose, for instance, φ ′ k ′ is something like ∃x∃y(succ 1000 x = y). While this formula has quantifier rank 2 and uses only two variables, a simple substitution of each occurrence of the one thousand succ operators in φ ′ k ′ by any nontrivial formula in f k that describes the successor function will yield a quantifier rank of at least 1000.
The trick is to use color coding once more: We can easily modify any formula so that all occurrences of the successor function are of the form x = succ i 0 for some number i. This means that we "only" need a way of identifying the ith element of the new universe using a bounded quantifier rank. However, assuming for simplicity a width of 1 and assuming that φ universe (x) and φ < (x, y) describe how f k restricts the universe and possibly reorders it, respectively, the formula φ universe (x) ∧ ∃ =i−1 y(φ universe (y) ∧ φ < (y, x)) is true exactly for the ith element of the universe -and we saw already that we can express the ∃ =i−1 y quantifier using a constant quantifier rank that is independent of i.
Example 4.6. We have p k,δ -dominating-set ≤ br p k,d -hitting-set where the first problem is parameterized by both the size k of the sought dominating set and a bound δ on the maximum vertex degree. For each parameter (k, δ), the first-order query f k,δ maps the input graph to the hypergraph in which there is a hyperedge for the closed neighborhood of each vertex. This is achieved through φ vertex (x) = ⊤, φ hyperedge (x) = ⊤, and φ in (x, y) = ((x = y) ∨ Exy). The new parameter is δ + 1, which is achieved by ι num(k,δ),num(k,δ+1) = ⊤ and ι x,x ′ = ⊥ otherwise. Observe that ι * is clearly computable. By Lemma 4.5 and since p k,d -hitting-set ∈ para-AC 0 , we also have p k,δ -dominating-set ∈ para-AC 0 .
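The query of Example 4.6 can be simulated directly (a sketch with our own function names): the hyperedges are the closed neighborhoods, so X dominates the graph exactly when X hits every hyperedge, and the maximum hyperedge size is bounded by δ + 1:

```python
def dominating_to_hitting(n, edges):
    """The reduction f_{k,delta}: one hyperedge per closed neighborhood,
    mirroring phi_in(x, y) = (x = y) or Exy on an undirected graph."""
    adj = {v: {v} for v in range(n)}  # closed: each vertex is its own member
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return [frozenset(adj[v]) for v in range(n)]

def is_hitting_set(hyperedges, X):
    return all(e & X for e in hyperedges)

def is_dominating_set(n, edges, X):
    adj = {v: {v} for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return all(adj[v] & X for v in range(n))
```

On any graph, is_dominating_set and is_hitting_set applied to the image agree, which is the correctness of the reduction.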
Proof. In the reduction, we do not change the parameter, so we set ι num(k,d),num(k,d) = ⊤ and ι x,x ′ = ⊥ otherwise. For the first-order queries, we wish to map a hypergraph H to a new version H ′ in which for every subset of a hyperedge there is already an element in the universe representing this subset. This means that the size of the universe can increase from |H| to at most 2 d |H|. (If there is a hyperedge of size larger than d in the input, we can output a trivial "no" instance.) We use a first-order query of width 2, meaning that the universe size gets enlarged from |H| to |H| 2 . This will be larger than 2 d |H| for all sufficiently large universes. Since, with respect to f num(k,d) , the number 2 d is a constant, we can apply Lemma 2.1 to take care of those inputs whose universes are smaller than 2 d and directly map them to the correct instances. For the large instances, we now have a universe that is "large enough" to contain an element for each subset of a hyperedge and it is not difficult (but technical) to use the bit predicate to define the correct predicates hyperedge, vertex, and in in terms of the original structure.

Bounded Strong-Rank Description of Model Checking for First-Order Logic
An important result by Flum and Grohe [11] states that the model checking problem for first-order logic lies in FPT on structures whose Gaifman graph has bounded degree. Once more, this result can now be obtained "syntactically." For simplicity, we only consider graphs and let p ψ,δ -mc(FO) = { (G, num(ψ, δ)) | G ∈ struc[(E 2 )], ψ ∈ FO, G |= ψ, max-degree(G) ≤ δ }. Proof. We present a family (φ ψ,δ ) ψ∈FO,δ∈N with a bound on the number of strong variables that describes p ψ,δ -mc(FO). Fix ψ and δ. Recall that we fixed the signature for ψ to just τ = (E 2 ) for simplicity and, thus, τ -structures are just graphs G. In particular, there are no arithmetic predicates available to ψ (one could, of course, also consider them, but then the Gaifman graph would always be a clique and the claim of the theorem would be boring). In contrast, the φ ψ,δ are normal FO[+, ×] formulas and they have access to arithmetics.
For a graph G let us write Ḡ for the underlying undirected graph and let us write Ēxy as a shorthand for Exy ∨ Eyx. The first thing we check is that the maximum degree of the input graph Ḡ is, indeed, at most δ. This is rather easy: ∀x∃ ≤δ y(Ēxy).
For the hard part of determining whether G |= ψ, let d̄(a, b) denote the distance of two vertices in Ḡ and let N r (a) = {b ∈ |G| | d̄(a, b) ≤ r} be the ball around a of radius r in Ḡ. Let G[N r (a)] denote the subgraph of G induced on N r (a). By Gaifman's Theorem [13] we can rewrite ψ as a Boolean combination of formulas of the following form: where γ d̄(x i ,x j )>2r expresses, of course, that d̄(x i , x j ) > 2r should hold and ρ is r-local, meaning that for all a ∈ |G| we have G |= ρ(a) ⇐⇒ G[N r (a)] |= ρ(a) (the minimum r for which this is the case is called the locality rank of ρ).
We now wish to express the above formula using only a constant number of strong variables. The problem is, of course, that the x i are not (yet) weak since they are used many times. We fix this in two steps. First, let us tackle (25): Clearly, the x i will have a pairwise distance of more than 2r if the balls of radius r that surround them are pairwise disjoint. Now, because of the bounded degree of the graph, a ball of radius r can have maximum size δ r . This allows us to bind all members of each ball and testing disjointness is, of course, what weak variables are all about.
In detail, let γ_{d̄(x,y)≤r} be the standard formula with two bound variables expressing that there is a path from x to y of length at most r in Ḡ. We can then express (25) by a formula that, at its end, binds the variables ẏ_i^1, …, ẏ_i^{δ^r} exactly to the elements of the ball of radius r around ẋ_i and that, in its first part, requires that all these balls are pairwise disjoint. Note that we do not require all ẏ_i^p to be different: If the size of a ball is less than δ^r, we must allow some ẏ_i^p and ẏ_i^q to be identical.
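Since this step hinges on the equivalence between pairwise distance greater than 2r and disjointness of the radius-r balls, here is a small sanity-check sketch (illustrative only; the helpers `ball` and `dist` and the path graph are our own, not part of the proof):

```python
from collections import deque
import math

def ball(adj, a, r):
    """N_r(a): all vertices at distance at most r from a in the graph adj."""
    dist_to = {a: 0}
    q = deque([a])
    while q:
        u = q.popleft()
        if dist_to[u] == r:          # do not expand beyond radius r
            continue
        for v in adj[u]:
            if v not in dist_to:
                dist_to[v] = dist_to[u] + 1
                q.append(v)
    return set(dist_to)

def dist(adj, a, b):
    """BFS distance between a and b (math.inf if b is unreachable)."""
    seen = {a: 0}
    q = deque([a])
    while q:
        u = q.popleft()
        if u == b:
            return seen[u]
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return math.inf

# A path graph 0-1-2-...-9 of maximum degree 2.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 9] for i in range(10)}
r = 2
for x in adj:
    for y in adj:
        # d(x,y) > 2r  iff  N_r(x) and N_r(y) are disjoint
        assert (dist(adj, x, y) > 2 * r) == ball(adj, x, r).isdisjoint(ball(adj, y, r))
```

The loop checks the equivalence for every vertex pair of the example graph; the same argument (a midpoint of a short path lies in both balls) works for any graph.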
In order to express (26), we just have to check for each ẋ_i that the ball of radius r around it is a model of ρ(ẋ_i). Since the size of this ball is at most δ^r, we can use the induced notation. There is, however, a technical problem: We basically wish to check whether G[N_r(a)] ∈ {H | H |= ρ(a)} holds for a given a, but {H | H |= ρ(a)} obviously depends on a, which is not compatible with the induced notation. Fortunately, this problem can be fixed: For i ∈ N let Q_i = {H | H has at least i elements and H |= ρ(a) for the ith element a of |H| with respect to <^H}. If we know for some element a ∈ |G| that it is the ith element of N_r(a), then our problematic test can be replaced by G[N_r(a)] ∈ Q_i. Since testing whether a is the ith element of N_r(a) is possible using a formula like ι_i(a) = ∃^{=i−1} b (b < a ∧ γ_{d̄(a,b)≤r}), we get the complete formula φ_{ψ,δ}. This formula uses only a constant number of strong variables. Its strong quantifier rank would also be constant except that the formula γ_{d̄(x,y)≤r} uses r nested (strong) quantifiers (but only 2 variables). This means that the strong quantifier rank of φ_{ψ,δ} will be O(locality-rank(ψ)).

Bounded Strong-Rank Description of Embedding Graphs of Constant Tree Width or Constant Tree Depth
We assign numbers to all elements in the bags of the children of the root in this way and then recursively use the same method for the children's children, and so on. Note that not only do we never run out of numbers, but the consistency condition is also met: Once an element drops out of a bag, we will never see it again in a later bag and, hence, we cannot inadvertently assign a different number to it later on. As a running example, we will use the graph H and the tree decomposition (T, B) of it from Figure 1.

Figure 1: An example graph H together with a tree decomposition for it, consisting of the tree T and the bag function B indicated using the small gray mapping arrows. A consistent numbering p is indicated in red.
The consistent numbering indicated in Figure 1 is obtained by mapping the vertices in the root node's bag (just the vertex 3 in the example) to the index 1, so p(3) = 1. For the child node a, the bag {1, 3} contains a new vertex, namely 1, which gets the next free index, in this case p(1) = 2. In the same way, for the other child b of the root, the new vertex 5 also gets the index 2. For the leaves, in the bags of c and d we just have one additional vertex each, which gets the last free index and, thus, p(2) = 3 and p(4) = 3. For e with B(e) = {5, 6, 7}, we must reuse a number for the first time: from B(b) to B(e), the vertex 3 drops out of the bag and, thus, we can (indeed, must) reuse its index (which was 1) for one of the vertices 6 or 7. Let us set p(6) = 1 and p(7) = 3.
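The numbering procedure just described can be sketched in a few lines (an illustration, not the paper's code: `consistent_numbering` is our own helper, the bags of c and d are inferred from the formulas ψ_c and ψ_d of the running example, and new vertices greedily receive the smallest free index, which matches the choices p(6) = 1 and p(7) = 3 made above):

```python
def consistent_numbering(tree, bags, root, width):
    """Assign every vertex of H an index in {1, ..., width+1} so that the
    vertices of each bag carry pairwise distinct indices, and an index is
    reused only after its previous holder has dropped out of the bags."""
    p = {}

    def visit(node, inherited):
        used = {p[v] for v in inherited}              # indices taken by inherited vertices
        free = [i for i in range(1, width + 2) if i not in used]
        for v in sorted(bags[node] - inherited):      # new vertices of this bag
            p[v] = free.pop(0)                        # smallest free index
        for child in tree.get(node, []):
            visit(child, bags[node] & bags[child])    # vertices surviving into the child

    visit(root, set())
    return p

# The tree decomposition of the running example (Figure 1); the bags of
# c and d are inferred from the edge checks in psi_c and psi_d.
tree = {'r': ['a', 'b'], 'a': ['c', 'd'], 'b': ['e']}
bags = {'r': {3}, 'a': {1, 3}, 'b': {3, 5},
        'c': {1, 2, 3}, 'd': {1, 3, 4}, 'e': {5, 6, 7}}
p = consistent_numbering(tree, bags, 'r', width=2)
assert p == {3: 1, 1: 2, 5: 2, 2: 3, 4: 3, 6: 1, 7: 3}
```

The final assertion reproduces exactly the numbering shown in red in Figure 1, including the reuse p(6) = 1 after vertex 3 leaves the bag.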
Let us now define φ_{H,T,B}. We may assume V(H) = {1, …, |V(H)|}. The first step is to bind all vertices of H to weak variables using ∃ẋ_1 ⋯ ∃ẋ_{|V(H)|} ( ⋀_{i≠j} ẋ_i ≠ ẋ_j ∧ ψ ), where ψ must now express that the bound elements form an embedding. To achieve this, we build ψ recursively, starting at the root r of T with ψ = ψ_r.
For any node n ∈ V(T), let the elements of the set B(n) be named b_1 to b_s (these are just temporary names that have nothing to do with the consistent numbering p) and let the first t of them be new, that is, not present in the parent bag (for the root, s = t and all elements are new; if there are no new elements, t = 0). The formula ψ_n will now express the following: First, it binds the new elements using strong variables that are made equal to the weak variables representing the elements in the input structure. This makes the new strong variables distinct from one another and also from all other (images of) vertices of H. Second, we check for all edges {x, y} ∈ E(H) between elements x and y of B(n) that their images (which have been bound to strong variables) are also connected in the input structure. Third, we require that these properties also hold for all children of n. In symbols, we set:

  ψ_n = ∃v_{p(b_1)} ⋯ ∃v_{p(b_t)} ( v_{p(b_1)} = ẋ_{b_1} ∧ ⋯ ∧ v_{p(b_t)} = ẋ_{b_t} ∧ ⋀_{x,y ∈ B(n), {x,y} ∈ E(H)} E v_{p(x)} v_{p(y)} ∧ ⋀_{c ∈ children(n)} ψ_c ).
For our example, let us start with the root r. Here, we have B(r) = {3} and p(3) = 1, and there are no edges between the vertices in the bag (there is just one vertex, after all). This yields: ψ_r = ∃v_1 (v_1 = ẋ_3 ∧ ψ_a ∧ ψ_b).
For the node b the situation is very similar, but there is now an edge {3, 5} in H. This means that we must check that the variables v_1, representing 3, and v_2, representing 5, are connected in the input structure. This yields ψ_b = ∃v_2 (v_2 = ẋ_5 ∧ E v_1 v_2 ∧ ψ_e).
In a similar way, we get ψ_d = ∃v_3 (v_3 = ẋ_4 ∧ E v_3 v_1 ∧ E v_3 v_2) and observe that the only difference to ψ_c is that v_3 is made equal to ẋ_4 instead of ẋ_2; the rest is the same.
Finally, for the node e, we bind two strong variables since there are two new vertices (6 and 7), but we reuse the variable v_1 for 6 since the vertex 3 that used to have index 1 has dropped out of the bag. We get ψ_e = ∃v_1 ∃v_3 (v_1 = ẋ_6 ∧ v_3 = ẋ_7 ∧ ⋀_{x,y ∈ B(e), {x,y} ∈ E(H)} E v_{p(x)} v_{p(y)}).

Putting it all together, we get φ_{H,T,B} = ∃ẋ_1 ⋯ ∃ẋ_{|V(H)|} ( ⋀_{i≠j} ẋ_i ≠ ẋ_j ∧ ψ_r ), whose structure closely mirrors T's. It remains to argue that φ_{H,T,B} has the claimed properties. Clearly, by construction, the strong quantifier rank and the number of strong bound variables are as claimed. The semantic correctness also follows easily from the construction: If the input structure is a model of the formula, then the assignments of the ẋ_i to elements of the universe form an embedding, since for every edge {u, v} ∈ E(H) somewhere in the formula we test whether E v_{p(u)} v_{p(v)} holds, where v_{p(u)} is equal to ẋ_u and v_{p(v)} to ẋ_v. Conversely, any assignment to the ẋ_i that makes the formula true is an embedding since, first, we require that all ẋ_i are different and, second, we require E v_{p(u)} v_{p(v)} for all {u, v} ∈ E(H). This concludes the proof of the claim.
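The recursive construction of the ψ_n can be mirrored by a short formula generator for the running example (a sketch under assumptions: `build_psi` is our own helper, and the edge set of H is inferred from the edge checks appearing in ψ_b, ψ_c, and ψ_d above):

```python
def build_psi(node, tree, bags, edges, p, parent_bag):
    """Recursively build psi_n as a string, mirroring the construction above."""
    new = sorted(bags[node] - parent_bag)             # vertices first bound here
    parts = [f"v{p[v]}=x{v}" for v in new]            # equate strong and weak variables
    parts += [f"Ev{p[u]}v{p[v]}" for (u, v) in edges  # edges of H inside this bag
              if u in bags[node] and v in bags[node]]
    parts += [build_psi(c, tree, bags, edges, p, bags[node])
              for c in tree.get(node, [])]            # recurse into the children
    return "".join(f"∃v{p[v]}" for v in new) + "(" + " ∧ ".join(parts) + ")"

# Running example of Figure 1; the edges of H are assumed here, chosen to
# reproduce the checks Ev1v2 (psi_b) and Ev3v1, Ev3v2 (psi_c and psi_d).
tree = {'r': ['a', 'b'], 'a': ['c', 'd'], 'b': ['e']}
bags = {'r': {3}, 'a': {1, 3}, 'b': {3, 5},
        'c': {1, 2, 3}, 'd': {1, 3, 4}, 'e': {5, 6, 7}}
p = {3: 1, 1: 2, 5: 2, 2: 3, 4: 3, 6: 1, 7: 3}       # consistent numbering
edges = [(3, 5), (2, 3), (2, 1), (4, 3), (4, 1)]      # assumed edges of H

psi_r = build_psi('r', tree, bags, edges, p, set())
assert psi_r.startswith("∃v1(v1=x3")                  # matches psi_r above
assert "∃v3(v3=x4 ∧ Ev3v1 ∧ Ev3v2)" in psi_r          # matches psi_d above
```

Only the three strong variables v1, v2, v3 ever appear, while every (assumed) edge of H is checked in the bag that contains both endpoints.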
With the claim established, we can now easily derive the statement of the theorem. To show p-emb_{td≤c} ∈ para-AC⁰, we must present a family (φ_H)_{H∈struc[τ], td(H)≤c} that describes p-emb_{td≤c} and that has bounded strong quantifier rank. Clearly, we can just set φ_H to φ_{H,T,B}, where (T, B) is a tree-depth decomposition of H of depth c (which must exist by the assumption td(H) ≤ c). The second item of the claim immediately tells us that all φ_H will have a strong quantifier rank of at most c; and we can use the characterization of para-AC⁰ from Fact 2.2. For the second statement, p-emb_{tw≤c} ∈ para-AC⁰↑, we use a different family (ψ_H)_{H∈struc[τ], tw(H)≤c}, this time setting ψ_H to φ_{H,T,B}, where (T, B) is a tree decomposition of H of width c. Now the third item of the claim gives us the bound on the number of strong variables; and we can use the characterization of para-AC⁰↑ from Theorem 2.3.

Conclusion
In the present paper, we showed how the color coding technique can be turned into a powerful tool for parameterized descriptive complexity theory. This tool allows us to show that important results from parameterized complexity theory (like the fact that the embedding problem for graphs of bounded tree width lies in FPT) follow just from the syntactic structure of the formulas that describe the problem.
In all our syntactic characterizations it was important that variables or color predicates were not allowed to occur within a universal scope. The reason was that literals, disjunctions, conjunctions, and existential quantifiers all have what we called the small witness property, which universal quantifiers lack. However, there are other quantifiers, from more powerful logics that we did not explore, that also have the small witness property. An example is an operator that tests whether there is a path of length at most k from one vertex to another for some fixed k: if such a path exists, its vertices form a "small witness." Weak variables may be used inside such operators, leading to broader classes of problems that can be described by families of bounded strong quantifier rank. On the other hand, we cannot add the full transitive closure operator tc (for which it is well known that FO[tc] = NL) and hope that Theorems 3.1 and 3.5 still hold: If this were the case, we could turn a formula that uses two colors C_1 and C_2 to express that there are two vertex-disjoint paths between two vertices into an FO[tc] formula, thus proving the unlikely result that the NP-hard disjoint paths problem is in NL.
Another line of inquiry into the descriptive complexity of parameterized problems was already started in the repeatedly cited paper by Chen et al. [6]: They give the first syntactic properties of families of formulas describing weighted model checking problems that imply membership in para-AC⁰. We believe that it might be possible to base an alternative notion of weak quantifiers on these syntactic properties. Ideally, we would like to prove a theorem similar to Theorem 3.5 in which more quantifiers count as weak and, hence, even more families have bounded strong quantifier rank. This would allow us to prove for even more problems that they lie in FPT just because of the syntactic structure of the natural formula families that describe them.