# Practical Access to Dynamic Programming on Tree Decompositions

## 1. Introduction

Two general systems for this task are `D-Flat`, an Answer Set Programming (ASP) solver for problems on tree decompositions [7], and `Sequoia`, an MSO solver based on model checking games [8]. Both solvers can handle very general problems, and the corresponding overhead may thus be large compared to a straightforward implementation of a dynamic program for a specific problem.

#### Our Contributions

We extend `Jdrasil` with an easy-to-use interface for such programs: the user only needs to specify the update rules for the different kinds of nodes within the tree decomposition. The remaining work (computing a suitable optimized tree decomposition and performing the actual run of the dynamic program) is done by `Jdrasil`. This allows users to implement a wide range of algorithms within very few lines of code and thus makes it possible to test the practicability of these algorithms quickly. This interface is presented in Section 3.

We compare the performance of `D-Flat`, `Sequoia`, and `Jatatosk` on various problems. It turns out that `Jatatosk` is competitive with the other solvers and, furthermore, its behaviour is much more consistent (that is, it does not fluctuate greatly on similar instances). We conclude that concentrating on a small fragment of MSO gives rise to practically fast solvers that are still able to solve a large class of problems on graphs of bounded treewidth.

## 2. Preliminaries

**Example 1.**

## 3. An Interface for Dynamic Programming on Tree Decompositions

#### 3.1. The Tree Automaton Perspective

**Definition 1** (Tree Automaton)**.**

**Definition 2** (Computation of a Tree Automaton)**.**

#### Simulating Tree Automata

**Observation 1.**

**Definition 3** (Symmetric Tree Automaton)**.**

**Observation 2.**

#### 3.2. The Interface

The interface is part of `Jdrasil`, which was originally developed for the computation of tree decompositions only. The interface consists of two classes: `StateVectorFactory` and `StateVector`. The only job of the factory is to generate `StateVector` objects for the leaves of the tree decomposition, or, in the terms of the previous section, "to define the initial states of the tree automaton." The `StateVector` class is meant to model a vector of potential states in which the nondeterministic tree automaton may be at a specific node of the tree decomposition. Our interface does not define what a "state" is, or how a collection of states is managed. The only thing the interface requires a user to implement is the behaviour of the tree automaton when it reaches a node of the tree decomposition, i.e., given a `StateVector` for some node x of the tree decomposition and the information that the next node y reached by the automaton is of a certain type, the user has to compute the `StateVector` for y. To this end, the interface contains the methods shown in Listing 1.

```java
StateVector<T> introduce(Bag<T> b, T v, Map<T, Integer> idx);
StateVector<T> forget(Bag<T> b, T v, Map<T, Integer> idx);
StateVector<T> join(Bag<T> b, StateVector<T> o, Map<T, Integer> idx);
StateVector<T> edge(Bag<T> b, T v, T w, Map<T, Integer> idx);
```

The actual simulation of the automaton is handled by `Jdrasil`. In detail, given a graph and an implementation of the interface, `Jdrasil` will compute a tree decomposition (see [11] for the concrete algorithms used by `Jdrasil`), transform this decomposition into a very nice tree decomposition, potentially optimize the tree decomposition for the following dynamic program, and finally traverse the tree decomposition and simulate the tree automaton described by the implementation of the interface. The result of this procedure is the `StateVector` object assigned to the root of the tree decomposition.

#### 3.3. Example: 3-Coloring

A `State` of the automaton can be modeled as a simple integer array that stores a color (an integer) for every vertex in the bag. A `StateVector` stores a set of `State` objects, that is, essentially a set of integer arrays. Introducing a vertex v to a `StateVector` therefore means that three duplicates of each stored state have to be created and, for every duplicate, a different color has to be assigned to v. Listing 2 illustrates how this operation could be realized in Java.

**Listing 2.** The `introduce` method for 3-coloring. In the listing, the variable `states` is stored by the `StateVector` object and represents all currently possible states.

```java
StateVector<T> introduce(Bag<T> b, T v, Map<T, Integer> idx) {
    Set<State> newStates = new HashSet<>();
    for (State state : states) { // 'states' is the set of states
        for (int color = 1; color <= 3; color++) {
            State newState = new State(state); // copy the state
            newState.colors[idx.get(v)] = color;
            newStates.add(newState);
        }
    }
    states = newStates;
    return this;
}
```

In the `forget` method, we set the color of v to 0; in the `edge` method, we remove all states in which both endpoints of the edge have the same color; and, in the `join` method, we compute the intersection of the state sets of both `StateVector` objects. Note that, when we forget a vertex v, multiple states may become identical; this is handled by the implementation of the Java `Set` class, which takes care of duplicates automatically. The full implementation is available as part of `Jdrasil` [13]. Note that this implementation is only meant to illustrate the interface and that we did not make any effort to optimize it. Nevertheless, this very simple implementation (the part of the program responsible for the dynamic program contains only about 120 lines of structured Java code) performs surprisingly well.
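The remaining updates can be sketched in the same style as Listing 2. The following self-contained toy version is our own illustration, not the actual `Jdrasil` code: bag positions are plain integers instead of the `Map<T, Integer>` index, and states are lists of colors so that the `Set` can deduplicate them via `equals()`/`hashCode()`.

```java
import java.util.*;

// Hypothetical, simplified stand-in for the 3-coloring StateVector.
// A state is a list of colors; index = position of the vertex in the
// bag, color 0 = forgotten/uncolored.
class ColoringStates {
    Set<List<Integer>> states = new HashSet<>();

    // edge: keep only states in which the endpoints v and w differ in color
    void edge(int v, int w) {
        states.removeIf(s -> s.get(v).equals(s.get(w)));
    }

    // forget: reset the color of v to 0; states that become identical
    // collapse automatically inside the Set
    void forget(int v) {
        Set<List<Integer>> newStates = new HashSet<>();
        for (List<Integer> s : states) {
            List<Integer> t = new ArrayList<>(s);
            t.set(v, 0);
            newStates.add(t);
        }
        states = newStates;
    }

    // join: keep only states that occur in both children
    void join(ColoringStates other) {
        states.retainAll(other.states);
    }
}
```

The use of `List<Integer>` rather than `int[]` is deliberate: Java arrays compare by identity, so a `HashSet` of arrays would not merge duplicate states.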

## 4. A Lightweight Model Checker for an MSO-Fragment

General solvers such as `D-Flat` and `Sequoia` fall behind a concrete implementation of a tree automaton for a specific problem (see Section 5). This is not necessarily surprising, as a general solver needs to keep track of far more information. In fact, an MSO model checker on a formula $\varphi $ can probably (unless $\mathrm{P}=\mathrm{NP}$) not run in time $f(|\varphi |+\mathrm{tw})\cdot \mathrm{poly}(n)$ for any elementary function f [14]. On the other hand, it is in general not clear what the concrete running time of such a solver is for a concrete formula or problem (see, e.g., [15] for a sophisticated analysis of some running times in `Sequoia`). We seek to close this gap between (slow) general solvers and (fast) concrete algorithms. Our approach is to concentrate on a fragment of MSO that is powerful enough to express many natural problems, but restricted enough to allow model checking in time that matches, or is close to, the running time of a concrete algorithm for the problem. As a bonus, we will be able to derive upper bounds on the running time of the model checker directly from the syntax of the input formula.

On top of `Jdrasil`, we have implemented a publicly available prototype called `Jatatosk` [16]. In Section 5, we describe various experiments on different problems over multiple sets of graphs. It turns out that `Jatatosk` is competitive with the state-of-the-art solvers `D-Flat` and `Sequoia`. Arguably, these two programs solve a more general problem, and a direct comparison is not entirely fair. However, the experiments do reveal that it seems very promising to focus on smaller fragments of MSO (or perhaps any other description language) in the design of treewidth-based solvers.

#### 4.1. Description of the Used MSO-Fragment

#### 4.2. A Syntactic Extension of the Fragment

#### 4.3. Description of the Model Checker

#### 4.3.1. The Partition Quantifier

#### 4.3.2. The Connected Quantifier

- If $v\notin X$, then the corresponding bits are already cleared and nothing happens;
- if $v\in X$ and $v\in {P}_{i}$ with $|{P}_{i}|>1$, then v is simply removed and the corresponding bits are cleared;
- if $v\in X$ and $v\in {P}_{i}$ with $|{P}_{i}|=1$ and there are other vertices w in the bag with $w\in X$, then the automaton rejects the configuration, as v is the last vertex of ${P}_{i}$ and can no longer be connected to any other partition;
- if $v\in X$ is the last vertex of the bag that is contained in X, then the connected component is "done": the corresponding bits are cleared and one additional bit is set to indicate that the connected component cannot be extended anymore.
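The case analysis above can be sketched compactly. The following is a hypothetical illustration (not the actual `Jatatosk` code): we assume a state stores, for every bag position, the index of the partition ${P}_{i}$ its vertex belongs to, with 0 meaning "not in X", and we return `null` when the automaton rejects.

```java
// Hypothetical sketch of the forget-step for the connected quantifier.
class ConnectedForget {
    // partition[j] = partition index of bag position j (0 = not in X);
    // v is the bag position of the vertex being forgotten.
    static int[] forget(int[] partition, int v) {
        int p = partition[v];
        if (p == 0) return partition;        // case 1: v not in X, nothing happens
        int[] next = partition.clone();
        next[v] = 0;                         // clear the bits of v
        int remainingInP = 0, remainingInX = 0;
        for (int value : next) {
            if (value == p) remainingInP++;
            if (value > 0) remainingInX++;
        }
        if (remainingInP > 0) return next;   // case 2: |P_i| > 1, v is just removed
        if (remainingInX > 0) return null;   // case 3: v was the last vertex of P_i,
                                             // but other bag vertices are in X: reject
        // case 4: v was the last bag vertex in X; the component is "done"
        // (a real implementation would additionally set the extra "closed" bit here)
        return next;
    }
}
```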

#### 4.4. Extending the Model Checker to Optimization Problems

#### 4.5. Handling Symmetric and Non-Symmetric Joins

**Lemma 1.**

**Proof.**

## 5. Applications and Experiments

`Jdrasil` was used with Java 8.1, and both `Sequoia` and `D-Flat` were compiled with gcc 7.2. All compilations were performed with the default optimization settings. The implementation of `Jatatosk` uses hashing to realize Lemma 1, which works well in practice. We use a data set assembled from different sources, containing graphs with 18 to 956 vertices and treewidth 3 to 13. The first source is a collection of transit graphs from GTFS-transit feeds [17] that was also used for experiments in [18]; the second source is a set of real-world instances collected in [19]; and the last one is the set of instances of the PACE challenge [2] with treewidth at most 11. In each of the experiments, the left picture always shows the difference of `Jatatosk` against `D-Flat` and `Sequoia`: a positive bar means that `Jatatosk` is faster by this amount in seconds, and a negative bar means that either `D-Flat` or `Sequoia` is faster by that amount. The bars are capped at 100 seconds. On every instance, `Jatatosk` was compared against whichever solver was faster on that particular instance. The image also shows the treewidth of the input for every instance. The right image always shows a cactus plot that visualizes the number of instances each solver can solve in x seconds; that is, faster-growing functions are better. For each experiment, there is a table showing the average, standard deviation, and median of the time (in seconds) each solver needed to solve the problem. The best values are highlighted.

Overall, `Jatatosk` is faster than its competitors on many instances. However, there are also formulas, such as the one for the vertex cover problem, on which one of the other solvers performs better on some instances. For an overall picture, the cactus plot in Figure 7 sums up all cactus plots from the experiments. It reveals that, overall, `Jatatosk` in fact outperforms its competitors. However, we stress once more that the comparison is not completely fair, as both `Sequoia` and `D-Flat` are powerful enough to model check the whole of MSO (and actually also ${\mathrm{MSO}}_{2}$), while `Jatatosk` can only handle a fragment of ${\mathrm{MSO}}_{1}$.

`Jatatosk` outperforms `Sequoia` and `D-Flat` on the graph coloring problem. Its average time is more than a factor of 10 smaller than the average time of its competitors, and `Jatatosk` solves about 30% more instances. This result is not surprising, as the fragment used by `Jatatosk` is directly tailored towards coloring and, thus, `Jatatosk` has a natural advantage.

On some instances, the advantage lies with `Sequoia`, which becomes apparent if we consider the difference plot. Furthermore, the average time used by `Sequoia` is better than the time used by `Jatatosk`. However, considering the cactus plot, the difference between `Jatatosk` and `Sequoia` with respect to solved instances is small, while `D-Flat` falls behind a bit. We assume that the similarity between `Jatatosk` and `Sequoia` arises because both internally compile a similar algorithm.

For the dominating set problem, `Jatatosk` performs best with respect to all three measures: the difference plot, the cactus plot, and the average time. However, the difference to `Sequoia` is small, and there are in fact a couple of instances that are solved faster by `Sequoia`. We are surprised by this result, as the worst-case running time of `Jatatosk` for dominating set is ${O}^{*}({8}^{k})$ and, thus, far from optimal. Furthermore, `Sequoia` promises in theory a better performance, with a running time of the form ${O}^{*}({5}^{k})$ [15].

Here, the difference between `Jatatosk` and `Sequoia` is minor: while `Jatatosk` has a slightly better peak performance, there are more instances that are solved faster by `Sequoia` than the other way around.

On this problem, `Jatatosk` outperforms its competitors, which is reflected in the difference plot, the cactus plot, and the average time. We assume this is because `Jatatosk` uses the dedicated forest quantifier directly, while the other tools have to infer the algorithmic strategy from a more general formula.

## 6. Conclusions and Outlook

Our experiments show that a solver for a small fragment of MSO can compete with, and often outperform, the general solvers `D-Flat` and `Sequoia`. It therefore seems promising, from a practical perspective, to study smaller fragments of MSO.


## References

- Cygan, M.; Fomin, F.V.; Kowalik, Ł.; Lokshtanov, D.; Marx, D.; Pilipczuk, M.; Pilipczuk, M.; Saurabh, S. Parameterized Algorithms; Springer: Berlin, Germany, 2015. [Google Scholar] [CrossRef]
- Dell, H.; Husfeldt, T.; Jansen, B.M.P.; Kaski, P.; Komusiewicz, C.; Rosamond, F.A. The first parameterized algorithms and computational experiments challenge. In Proceedings of the 11th International Symposium on Parameterized and Exact Computation (IPEC), Aarhus, Denmark, 24–26 August 2016; pp. 30:1–30:9. [Google Scholar] [CrossRef]
- Dell, H.; Komusiewicz, C.; Talmon, N.; Weller, M. The PACE 2017 parameterized algorithms and computational experiments challenge: The second iteration. In Proceedings of the 12th International Symposium on Parameterized and Exact Computation (IPEC), Vienna, Austria, 6–8 September 2017; pp. 30:1–30:12. [Google Scholar] [CrossRef]
- Courcelle, B. The monadic second-order logic of graphs. I. Recognizable sets of finite graphs. Inf. Comput. **1990**, 85, 12–75. [Google Scholar] [CrossRef]
- Bodlaender, H.L. A linear-time algorithm for finding tree-decompositions of small treewidth. SIAM J. Comput. **1996**, 25, 1305–1317. [Google Scholar] [CrossRef]
- Tamaki, H. Positive-instance driven dynamic programming for treewidth. In Proceedings of the 25th Annual European Symposium on Algorithms (ESA), Vienna, Austria, 4–6 September 2017; pp. 68:1–68:13. [Google Scholar] [CrossRef]
- Abseher, M.; Bliem, B.; Charwat, G.; Dusberger, F.; Hecher, M.; Woltran, S. D-FLAT: Progress Report. Available online: https://www.dbai.tuwien.ac.at/research/report/dbai-tr-2014-86.pdf (accessed on 5 June 2019).
- Langer, A.J. Fast Algorithms for Decomposable Graphs. Ph.D. Thesis, RWTH Aachen University, Aachen, Germany, 2013. [Google Scholar]
- Diestel, R. Graph Theory, 4th ed.; Graduate Texts in Mathematics; Springer: Berlin, Germany, 2012; Volume 173. [Google Scholar]
- Flum, J.; Grohe, M. Parameterized Complexity Theory; Springer: Berlin, Germany, 2006. [Google Scholar] [CrossRef]
- Bannach, M.; Berndt, S.; Ehlers, T. Jdrasil: A modular library for computing tree decompositions. In Proceedings of the 16th International Symposium on Experimental Algorithms (SEA), London, UK, 21–23 June 2017; pp. 28:1–28:21. [Google Scholar] [CrossRef]
- Bannach, M. Jdrasil for Graph Coloring. Available online: https://github.com/maxbannach/Jdrasil-for-GraphColoring (accessed on 23 January 2019). Commit: a5e52a8.
- Bannach, M.; Berndt, S.; Ehlers, T. Jdrasil. Available online: http://www.github.com/maxbannach/jdrasil (accessed on 5 June 2019). Commit: dfa1eee.
- Frick, M.; Grohe, M. The complexity of first-order and monadic second-order logic revisited. Ann. Pure Appl. Logic **2004**, 130, 3–31. [Google Scholar] [CrossRef]
- Kneis, J.; Langer, A.; Rossmanith, P. Courcelle’s theorem—A game-theoretic approach. Discret. Optim. **2011**, 8, 568–594. [Google Scholar] [CrossRef]
- Bannach, M.; Berndt, S. Jatatosk. Available online: https://github.com/maxbannach/Jatatosk/commit/45e306cfac5a273416870ec0bd9cd2c7f39a6932 (accessed on 8 April 2019).
- Fichte, J.K. gtfs2graphs—A Transit Feed to Graph Format Converter. Available online: https://github.com/daajoe/gtfs2graphs/commit/219944893f874b365de1ed87fc265fd5d19d5972 (accessed on 20 April 2018).
- Fichte, J.K.; Lodha, N.; Szeider, S. SAT-Based local improvement for finding tree decompositions of small width. In Proceedings of the International Conference on Theory and Applications of Satisfiability Testing (SAT), Melbourne, Australia, 28 August–1 September 2017; Springer: Cham, Switzerland, 2017; pp. 401–411. [Google Scholar]
- Abseher, M.; Dusberger, F.; Musliu, N.; Woltran, S. Improving the efficiency of dynamic programming on tree decompositions via machine learning. J. Artif. Intell. Res. **2015**, 58, 275–282. [Google Scholar] [CrossRef]

**Figure 1.** The left picture shows a part of a tree decomposition of the grid graph with vertices $\{0,\dots ,8\}$ (i.e., with rows $\{i,i+1,i+2\}$ for $i\in \{0,3,6\}$ and columns $\{i,i+3,i+6\}$ for $i\in \{0,1,2\}$). The index of a bag shows the type of the bag: a positive sign means “introduce”, a negative one “forget”, a pair represents an “edge” bag, and the text is self-explanatory. Solid lines represent real edges of the decomposition, while dashed lines illustrate a path (that is, some bags are skipped). On the left branch of the decomposition, a run of a nondeterministic tree automaton with tree-index $\left(\begin{array}{ccccccccc}0& 1& 2& 3& 4& 5& 6& 7& 8\\ 2& 3& 0& 1& 2& 3& 0& 1& 0\end{array}\right)$ for 3-coloring is illustrated. To increase readability, states of the automaton are connected to the corresponding bags with gray lines, and, for some nodes, the states are omitted. In the right picture, the same automaton is simulated deterministically.

**Table 1.**The table shows the precise number of bits we reserve in the description of a state of the tree automaton for different quantifiers and formulas. The values are with respect to a tree decomposition with maximum bag size k.

| Quantifier/Formula | Number of Bits |
|---|---|
| free variables ${X}_{1},\dots ,{X}_{q}$ | $q\cdot k$ |
| ${\exists}^{\mathrm{partition}}{X}_{1},\dots ,{X}_{q}$ | $k\cdot \log q$ |
| ${\exists}^{\mathrm{connected}}X$ | $k\cdot \log k+1$ |
| ${\exists}^{\mathrm{forest}}X$ | $k\cdot \log k$ |
| $\forall x\forall y\ E(x,y)\to {\chi}_{i}$ | 0 |
| $\forall x\exists y\ E(x,y)\wedge {\chi}_{i}$ | $k$ |
| $\exists x\forall y\ E(x,y)\to {\chi}_{i}$ | $k+1$ |
| $\exists x\exists y\ E(x,y)\wedge {\chi}_{i}$ | 1 |
| $\forall x\ {\chi}_{i}$ | 0 |
| $\exists x\ {\chi}_{i}$ | 1 |

**Table 2.**The table gives an overview of formulas $\varphi $ used within this paper, together with the values $\mathrm{symmetric}(\varphi ,k)$ and $\mathrm{asymmetric}(\varphi ,k)$, as well as the precise time our algorithm will require to model check an instance for that particular formula.

| $\varphi $ | $\mathrm{symmetric}(\varphi ,k)$ | $\mathrm{asymmetric}(\varphi ,k)$ | Time |
|---|---|---|---|
| ${\varphi}_{3\mathrm{col}}$ | $k\cdot \log 3$ | 0 | ${O}^{*}({3}^{k})$ |
| ${\varphi}_{\mathrm{vc}}(S)$ | $k$ | 0 | ${O}^{*}({2}^{k})$ |
| ${\varphi}_{\mathrm{ds}}(S)$ | $k$ | $k$ | ${O}^{*}({8}^{k})$ |
| ${\varphi}_{\mathrm{triangle}\text{-}\mathrm{minor}}$ | 0 | $3k\cdot \log k+3$ | ${O}^{*}({k}^{6k})$ |
| ${\varphi}_{\mathrm{fvs}}(S)$ | $k$ | $k\cdot \log k$ | ${O}^{*}({2}^{k}{k}^{2k})$ |
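As an illustration of how the running times follow from the reserved bits (our reading of Tables 1 and 2): for ${\varphi}_{3\mathrm{col}}$, a state of the automaton uses the $k\cdot \log 3$ symmetric bits of the partition quantifier over three color classes, so a bag of size k can be in at most

```latex
\[
  2^{\,\mathrm{symmetric}(\varphi_{3\mathrm{col}},\,k)}
  \;=\; 2^{\,k\cdot\log 3}
  \;=\; 3^{k}
\]
```

distinct states, which matches the ${O}^{*}({3}^{k})$ bound stated in the table.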

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Bannach, M.; Berndt, S.
Practical Access to Dynamic Programming on Tree Decompositions. *Algorithms* **2019**, *12*, 172.
https://doi.org/10.3390/a12080172
