# Analytic Combinatorics for Computing Seeding Probabilities

Center for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Dr. Aiguader 88, Barcelona 08003, Spain

University Pompeu Fabra, Dr. Aiguader 80, Barcelona 08003, Spain

Received: 12 November 2017 / Revised: 7 January 2018 / Accepted: 8 January 2018 / Published: 10 January 2018

(This article belongs to the Special Issue Bioinformatics Algorithms and Applications)

Seeding heuristics are the most widely used strategies to speed up sequence alignment in bioinformatics. Such strategies are most successful if they are calibrated, so that the speed-versus-accuracy trade-off can be properly tuned. In the widely used case of read mapping, it has been so far impossible to predict the success rate of competing seeding strategies for lack of a theoretical framework. Here, we present an approach to estimate such quantities based on the theory of analytic combinatorics. The strategy is to specify a combinatorial construction of reads where the seeding heuristic fails, translate this specification into a generating function using formal rules, and finally extract the probabilities of interest from the singularities of the generating function. The generating function can also be used to set up a simple recurrence to compute the probabilities with greater precision. We use this approach to construct simple estimators of the success rate of the seeding heuristic under different types of sequencing errors, and we show that the estimates are accurate in practical situations. More generally, this work shows novel strategies based on analytic combinatorics to compute probabilities of interest in bioinformatics.

Bioinformatics is going through a transition driven by the ongoing developments of high throughput sequencing [1,2]. To cope with the surge of sequencing data, the bioinformatics community is under pressure to produce faster and more efficient algorithms. A common strategy to scale up analyses to large data sets is to use heuristics that are faster, but do not guarantee to return the optimal result. Good heuristics are thus based on a good understanding of the input data. With the right data model, one can calculate the risk of not returning the optimum and adjust the algorithm to achieve more precision or more speed. When the data is poorly understood, heuristics may be slow or inefficient for unknown reasons.

A particular area of bioinformatics where heuristics have been in use for a long time is the field of sequence alignment [3]. Computing the best alignment between two sequences is carried out by dynamic programming in time $O\left(mn\right)$, where m and n are the sequence lengths [4]. Heuristics are necessary when at least one of the sequences is long (e.g., a genome). The most studied heuristics for sequence alignment are called seeding methods [5]. The principle is to search short regions of the two sequences that are identical (or very similar) and use them as candidates to anchor the dynamic programming alignment. These short subsequences are called “seeds”. The benefit of the approach is that seeds can be found in short time. The risk is that they may not exist.

This strategy was most famously implemented in Basic Local Alignment Search Tool (BLAST) for the purpose of finding local homology between proteins or DNA [6]. By working out an approximate distribution of the identity score for the hits [7,8], the authors were able to calibrate the BLAST heuristic very accurately in order to gain speed. However, part of the calibration was empirical for lack of a theory to predict the probability that the hits contain seeds of different scores or sizes.

Seeding methods are heavily used in the mapping problem, where the original sequence of a read must be found in a reference genome. Seeding is used to reduce the search space and dynamic programming is used to choose the candidate sequence with the best alignment score. The discovery of indexing methods based on the Burrows–Wheeler transform [9] was instrumental to develop short read mappers such as Burrows-Wheeler Aligner (BWA) and Bowtie [10,11]. With such indexes, one can know the number of occurrences of a substring in a genome in time $O\left(m\right)$, where m is the size of the substring [9] (i.e., independent of genome size). This yields a powerful seeding strategy whereby all the substrings of the read are queried in the genome.

The heuristic should be calibrated based on the probability that a seed of given length can be found in the read. The answer depends on the length of the seed, the size of the read, and on the types and frequencies of sequencing errors. Without a proper theoretical framework, computing such seeding probabilities is not straightforward.

Here, we focus on computing seeding probabilities in the read mapping problem. We answer this question for realistic error models using the powerful theory of analytic combinatorics [12,13,14]. We show how to compute the probability that a read contains a seed of given size under different error models. Using symbolic constructions, we find the weighted generating functions of reads without seed and approximate the probabilities of interest by singularity analysis. The computational cost is equivalent to solving a polynomial equation. The approximations converge exponentially fast and are sufficiently accurate in practice. The weighted generating functions also allow us to specify recurrences in closed form, from which the probabilities can be computed at higher accuracy. Overall, the analytic combinatorics approach provides a practical solution to the problem of choosing an appropriate seed length based on the error profile of the sequencing instrument.

The work presented here borrows several theoretical developments from the related field of pattern matching on random strings. For instance, see [15] for a thorough review of finite automata, their application to pattern matching in biological sequences and the use of generating functions to compute certain probabilities of occurrence. In [16], Fu and Koutras study the distribution of runs in Bernoulli trials using Markov chain embeddings. In [17], Régnier and collaborators study the problem of matching multiple occurrences of a set of words in a random text. Their method is to compute the probability of interest from the traversals of a constructed overlap graph. In [18], Nuel introduces the notion of pattern Markov chain to find the probability of occurrence of structured motifs in biological sequences. In this case, patterns represented as finite automata are translated into Markov chains from which the probabilities of interest are computed by recurrence. In [19], Nuel and Delos show how to combine Markov chain embeddings with non-deterministic finite automata in order to improve the computation speed on patterns of high complexity.

Regarding seeding per se, Chaisson and Tesler in [20] develop a method to compute seeding probabilities in long reads. They focus on the case of uniform substitutions and use generating functions to compute this probability under the assumption that the number of errors is constant.

In this section, we present the concepts of analytic combinatorics that are necessary to expose the main result regarding seeding probabilities in the read mapping problem. The analytic combinatorics strategy is to represent objects by generating functions, use a symbolic language to construct the generating functions of complex objects and finally approximate their probability of occurrence from the singularities of their generating function. The weighted generating function can also be used to extract an exact recurrence equation, from which the probabilities of interest can be computed with higher accuracy.

The central object of analytic combinatorics is the generating function [13] (p. 92). Here, we will need the slightly more general concept of weighted generating functions, which are used in many areas of mathematics and physics, sometimes under different names and with different notations (see [21] (pp. 44–45) for an example in the context of combinatorial species and [22] (Theorem 1.3.2) for a more recent example in combinatorics).

Let $\mathcal{A}$ be a set of combinatorial objects characterized by a size and a weight that are nonnegative integer and nonnegative real numbers, respectively. The weighted generating function of $\mathcal{A}$ is

$$A\left(z\right)=\sum _{a\in \mathcal{A}}w\left(a\right){z}^{\left|a\right|},\tag{1}$$

where $\left|a\right|$ and $w\left(a\right)$ denote the size and weight of the object a (see [23] (Equation (1)) and [14] (p. 357, Equation (108))). Expression (1) also defines a sequence of nonnegative real numbers ${\left({a}_{k}\right)}_{k\ge 0}$ such that

$$A\left(z\right)=\sum _{k=0}^{\infty}{a}_{k}{z}^{k}.\tag{2}$$

By definition, ${a}_{k}={\sum}_{a\in {A}_{k}}w\left(a\right)$, where ${A}_{k}$ is the class of objects of size k in $\mathcal{A}$. The number ${a}_{k}$ is called the total weight of objects of size k. Expression (2) shows that the terms ${a}_{k}$ are the coefficients of the Taylor series expansion of the function $A\left(z\right)$.

Combinatorial operations on sets of objects translate into mathematical operations on their weighted generating functions (see [13] (p. 95) and [14] (p. 166)). If two sets $\mathcal{A}$ and $\mathcal{B}$ are disjoint and have weighted generating functions $A\left(z\right)$ and $B\left(z\right)$, respectively, the weighted generating function of $\mathcal{A}\cup \mathcal{B}$ is $A\left(z\right)+B\left(z\right)$. This follows from

$$\sum _{c\in \mathcal{A}\cup \mathcal{B}}w\left(c\right){z}^{\left|c\right|}=\sum _{a\in \mathcal{A}}w\left(a\right){z}^{\left|a\right|}+\sum _{b\in \mathcal{B}}w\left(b\right){z}^{\left|b\right|}.$$

Size and weight can be defined for pairs of objects in $\mathcal{A}\times \mathcal{B}$ as $|(a,b)|=\left|a\right|+\left|b\right|$ and $w(a,b)=w\left(a\right)w\left(b\right)$. In other words, the sizes are added and the weights are multiplied. With this convention, the weighted generating function of the Cartesian product $\mathcal{A}\times \mathcal{B}$ is $A\left(z\right)B\left(z\right)$. This simply follows from expression (1) and

$$A\left(z\right)B\left(z\right)=\sum _{a\in \mathcal{A}}w\left(a\right){z}^{\left|a\right|}\sum _{b\in \mathcal{B}}w\left(b\right){z}^{\left|b\right|}=\sum _{(a,b)\in \mathcal{A}\times \mathcal{B}}w\left(a\right)w\left(b\right){z}^{\left|a\right|+\left|b\right|}.$$

Let $\mathcal{A}=\left\{a\right\}$ and $\mathcal{B}=\left\{b\right\}$ be alphabets with a single letter of size 1. Assume $w\left(a\right)=p$ and $w\left(b\right)=q$. The weighted generating functions of $\mathcal{A}$ and $\mathcal{B}$ are then $A\left(z\right)=pz$ and $B\left(z\right)=qz$, respectively. The weighted generating function of the alphabet $\mathcal{A}\cup \mathcal{B}=\{a,b\}$ is $pz+qz=A\left(z\right)+B\left(z\right)$. The set ${(\mathcal{A}\cup \mathcal{B})}^{2}$ contains the four pairs of letters $(a,a)$, $(a,b)$, $(b,a)$ and $(b,b)$. They have size 2 and respective weight ${p}^{2}$, $pq$, $qp$, and ${q}^{2}$, so the weighted generating function of ${(\mathcal{A}\cup \mathcal{B})}^{2}$ is $({p}^{2}+2pq+{q}^{2}){z}^{2}={\left(A\left(z\right)+B\left(z\right)\right)}^{2}$.
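This small calculation can be checked mechanically with exact rational arithmetic (a minimal sketch; the values $p=3/10$ and $q=7/10$ are illustrative and not from the text):

```python
# Numeric check of Example 1 with exact rationals, assuming p = 3/10, q = 7/10.
# Polynomials are coefficient lists: poly[k] is the coefficient of z^k.
from fractions import Fraction

def poly_mul(a, b):
    """Product of two polynomials given as coefficient lists."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

p, q = Fraction(3, 10), Fraction(7, 10)
AB = [Fraction(0), p + q]        # WGF of the alphabet {a, b}: (p + q) z
AB2 = poly_mul(AB, AB)           # WGF of (A U B)^2: (p + q)^2 z^2
print(AB2)                       # [Fraction(0, 1), Fraction(0, 1), Fraction(1, 1)]
```

As expected, the total weight of objects of size 2 is ${(p+q)}^{2}=1$.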

We can further extend the definition of size and weight to any finite Cartesian product in the same way. The sizes are always added and the weights are always multiplied. The generating function of a Cartesian product then comes as the product of their generating functions. This allows us to construct the weighted generating function of finite sequences of objects using formal power series (for more details, see [14] (p. 28 and p. 731)).

Let $\mathcal{A}$ be a set with weighted generating function $A\left(z\right)={\sum}_{k\ge 0}{a}_{k}{z}^{k}$. If ${a}_{0}=0$, the weighted generating function of the set ${\mathcal{A}}^{+}={\cup}_{k=1}^{\infty}{\mathcal{A}}^{k}$ is well defined and is equal to

$$\frac{A\left(z\right)}{1-A\left(z\right)}.$$

For $k\ge 1$, the weighted generating function of ${\mathcal{A}}^{k}$ is $A{\left(z\right)}^{k}$. The sets ${\mathcal{A}}^{k}$ are mutually exclusive so the weighted generating function of their union is $A\left(z\right)+A{\left(z\right)}^{2}+A{\left(z\right)}^{3}+\dots $ This sum converges in the sense of formal power series. Indeed, since ${a}_{0}=0$, the coefficient of ${z}^{n}$ in the partial sums $A\left(z\right)+A{\left(z\right)}^{2}+\dots +A{\left(z\right)}^{k}$ is constant for $k\ge n$. The formula of the weighted generating function follows from the equality $(A\left(z\right)+A{\left(z\right)}^{2}+A{\left(z\right)}^{3}+\dots )(1-A\left(z\right))=A\left(z\right)$. ☐

Nonempty finite sequences of a or b correspond to the set ${(\mathcal{A}\cup \mathcal{B})}^{+}={\cup}_{k=1}^{\infty}{\{a,b\}}^{k}$. If, as in Example 1, the weighted generating function of $\mathcal{A}\cup \mathcal{B}$ is $(p+q)z$, the weighted generating function of ${(\mathcal{A}\cup \mathcal{B})}^{+}$ is $(p+q)z/\left(1-(p+q)z\right)$.
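Proposition 1 also yields a simple way to expand $A\left(z\right)/(1-A\left(z\right))$ numerically: since $S=A/(1-A)$ satisfies $S=A+A\cdot S$, its coefficients follow a recurrence (a sketch under the assumption ${a}_{0}=0$; function names are ours):

```python
# Expand A(z)/(1 - A(z)) as a formal power series via the fixed point
# S = A + A*S, assuming a_0 = 0 (sketch).

def seq_plus_coeffs(a, n):
    """Coefficients s_0..s_n of A(z)/(1 - A(z)), given a with a[0] == 0."""
    assert a[0] == 0
    a = a + [0] * max(0, n + 1 - len(a))   # pad with zero coefficients
    s = [0.0] * (n + 1)
    for k in range(1, n + 1):
        # s_k = a_k + sum_{i=1}^{k} a_i * s_{k-i}
        s[k] = a[k] + sum(a[i] * s[k - i] for i in range(1, k + 1))
    return s

# Example 2: with A(z) = (p + q)z and p + q = 1, every coefficient is 1
print(seq_plus_coeffs([0.0, 1.0], 5))   # [0.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```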

In many combinatorial applications, one needs to count the sequences where a pattern does not occur, or where some symbol may not follow another. A convenient way to find the weighted generating functions of such sequences is to encode this information in so-called transfer matrices [14,24]. Generalizing the notion of incidence matrix of a graph, every transfer matrix is associated with a unique transfer graph.

A transfer graph is a directed graph whose edges are labelled by weighted generating functions. In addition, a transfer graph must contain a head vertex with only outgoing edges, and a tail vertex with only incoming edges. The matrix whose entry at position $(i,j)$ is the weighted generating function labelling the edge between vertices i and j is called the transfer matrix of the graph.

When all the weighted generating functions are polynomials, transfer graphs are equivalent to weighted sized graphs defined in [14] (p. 357, Definitions V.7 and V.8), where each monomial is considered a distinct edge between the same vertices.

Following the edges of a transfer graph from the head vertex to the tail vertex describes a sequence of combinatorial objects. The associated weighted generating function is the product of the functions labelling the edges (thus, an absent edge is associated with the function 0).

By convention, the vertices are ordered in the transfer matrix so that the head vertex is first and the tail vertex is last. The first column of the transfer matrix is always 0 because the head vertex has no incoming edge, and the last row is always 0 because the tail vertex has no outgoing edge.

Say that a transfer graph with $m+2$ vertices is represented by the $(m+2)\times (m+2)$ transfer matrix ${M}_{\ast}\left(z\right)$. The “body” of the transfer graph designates the sub-graph containing the m vertices that are neither the head nor the tail. $M\left(z\right)$ denotes the $m\times m$ matrix obtained by removing from the transfer matrix the rows and columns that correspond to the head and the tail. $M\left(z\right)$ will be referred to as the “body” of the transfer matrix ${M}_{\ast}\left(z\right)$. The rationale for breaking ${M}_{\ast}\left(z\right)$ into blocks is that in general only $M\left(z\right)$ contributes to the asymptotic growth rate.

We also introduce $H\left(z\right)$, the row vector of m weighted generating functions associated with the m edges from the head vertex to the body of the graph, and $T\left(z\right)$, the column vector of m weighted generating functions associated with the edges from the body of the graph to the tail vertex. $H\left(z\right)$ and $T\left(z\right)$ are called the “head” and “tail” vectors, respectively. The weighted generating function labelling the edge from the head to the tail vertex is denoted $\psi \left(z\right)$.

The main interest of transfer graphs and transfer matrices is that they allow us to compute the weighted generating function of the sequences that correspond to paths from the head vertex to the tail vertex. The theorem below is useful for calculations, and it also shows that if all the entries of ${M}_{\ast}\left(z\right)$ are polynomials, then only $M\left(z\right)$ contributes to the asymptotic growth rate of the coefficients.

Given a transfer matrix

$${M}_{\ast}\left(z\right)=\left(\begin{array}{ccc}0& H\left(z\right)& \psi \left(z\right)\\ 0& M\left(z\right)& T\left(z\right)\\ 0& 0& 0\end{array}\right),$$

where $M\left(z\right)$ is an $m\times m$ matrix, $H\left(z\right)$ and $T\left(z\right)$ are vectors of dimension m and $\psi \left(z\right)$ has dimension 1, the weighted generating function of the sequences that correspond to all the possible paths from the head to the tail vertex of the transfer graph of ${M}_{\ast}\left(z\right)$ is

$$\psi \left(z\right)+H\left(z\right)\cdot {(I-M\left(z\right))}^{-1}\cdot T\left(z\right),$$

where we assume that $M\left(0\right)=0$ (the null matrix) and that all the eigenvalues of $M\left(z\right)$ have modulus less than or equal to 1.

Generalizing the proof of Proposition 1 shows that the weighted generating function of paths of the transfer graph from vertex i to vertex j is the entry at position $(i,j)$ of the matrix ${(I-{M}_{\ast}\left(z\right))}^{-1}$. We thus need to compute the top-right entry of this matrix, which corresponds to paths from the head to the tail vertex. Using the matrix inversion formula with the matrix of cofactors, this term is equal to ${(-1)}^{m+2}C/det(I-{M}_{\ast}\left(z\right))$, where C is the determinant

$$\left|\begin{array}{cc}-H\left(z\right)& -\psi \left(z\right)\\ I-M\left(z\right)& -T\left(z\right)\end{array}\right|.$$

Developing the determinant of $(I-{M}_{\ast}\left(z\right))$ along the first column and then along the last row, we obtain $det(I-{M}_{\ast}\left(z\right))={(-1)}^{m}det(I-M\left(z\right))$. Developing C along the first row and then along the last column, we obtain

$$C={(-1)}^{m}\psi \left(z\right)det(I-M\left(z\right))+\sum _{i=1}^{m}\sum _{j=1}^{m}{H}_{i}\left(z\right){(-1)}^{i+j}{C}_{i,j}\left(z\right){T}_{j}\left(z\right),$$

where ${C}_{i,j}$ is the cofactor of $I-M\left(z\right)$ at position $(i,j)$. Using once more the matrix inversion formula with the matrix of cofactors, we obtain

$$\frac{{(-1)}^{m+2}C}{det(I-{M}_{\ast}\left(z\right))}=\psi \left(z\right)+H\left(z\right)\cdot {(I-M\left(z\right))}^{-1}\cdot T\left(z\right),$$

which concludes the proof. ☐

Theorem 1 above will be instrumental in finding the weighted generating function of sequences defined from a transfer graph and its associated transfer matrix.

The notion of weight corresponds to the frequency or the probability of the associated objects. The point of the analytic combinatorics approach is that we can create objects of increasing complexity and find their weighted generating function using Theorem 1 or equivalent. Meanwhile, we know from expression (2) that we can recover the total weight of objects of size k from the Taylor expansion of their weighted generating function. We will see below that there exists an efficient way to approximate those coefficients.

Since we will often need to refer to ${a}_{k}$ in expressions of the form $A\left(z\right)={\sum}_{k=1}^{\infty}{a}_{k}{z}^{k}$, we define the symbol $\left[{z}^{k}\right]A\left(z\right)$, referred to as the “coefficient of ${z}^{k}$ in $A\left(z\right)$”. Theorem 2 below is a special case of a very important theorem of the field, showing how to extract the coefficients of a generating function (see more general cases in [25] (p. 498, Theorem 4) and in [12] (pp. 5–9, Theorem 1 and Corollary 3)).

If $A\left(z\right)$ can be written as the ratio of two polynomials $P\left(z\right)/Q\left(z\right)$ with P and Q coprime and $Q\left(0\right)\ne 0$, and if Q has exactly one root ${z}_{1}$ with minimum modulus and with multiplicity 1, then

$$\left[{z}^{k}\right]A\left(z\right)\sim -\frac{P\left({z}_{1}\right)}{{Q}^{\prime}\left({z}_{1}\right)}\frac{1}{{z}_{1}^{k+1}}.\tag{4}$$
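As a numeric illustration (ours, not from the text), take $P\left(z\right)=1$ and $Q\left(z\right)=1-z-{z}^{2}$, whose Taylor coefficients are the Fibonacci numbers; the estimate of Theorem 2 then converges exponentially fast to the exact values:

```python
# Theorem 2 on A(z) = 1/(1 - z - z^2), whose coefficients are Fibonacci
# numbers (illustrative sketch, not from the original text).
import math

# Exact coefficients via the recurrence a_k = a_{k-1} + a_{k-2}
a = [1, 1]
for k in range(2, 31):
    a.append(a[k - 1] + a[k - 2])

# Dominant singularity: smallest-modulus root of Q(z) = 1 - z - z^2
z1 = (math.sqrt(5) - 1) / 2
Qprime = lambda z: -1 - 2 * z                        # Q'(z)
approx = lambda k: -1 / Qprime(z1) / z1 ** (k + 1)   # -P(z1)/Q'(z1) z1^{-k-1}

# The relative error decays exponentially in k
for k in (5, 10, 30):
    print(k, a[k], approx(k), abs(approx(k) / a[k] - 1))
```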

For $n\ge 0$ and every complex number $a\ne 0$,

$$\frac{1}{{(1-z/a)}^{n+1}}=\sum _{k=0}^{\infty}\left(\genfrac{}{}{0pt}{}{k+n}{n}\right)\frac{{z}^{k}}{{a}^{k}}.\tag{5}$$

Using $A\left(z\right)=z/a$ in Proposition 1 and adding 1 to the final result, we obtain

$$\frac{1}{1-z/a}=\sum _{k=0}^{\infty}\frac{{z}^{k}}{{a}^{k}}.$$

Differentiating this equality n times, we obtain

$$\frac{n!}{{a}^{n}}\frac{1}{{(1-z/a)}^{n+1}}=\sum _{k=n}^{\infty}k(k-1)\dots (k-n+1)\frac{{z}^{k-n}}{{a}^{k}}.$$

Rearranging the terms on both sides of the equality and shifting the index of the sum yields expression (5). ☐

Assume without loss of generality that the degree of P is lower than the degree of Q. Say that Q can be factored as $(z-{z}_{1}){(z-{z}_{2})}^{{\nu}_{2}}\dots {(z-{z}_{n})}^{{\nu}_{n}}$, where $|{z}_{1}|<|{z}_{2}|\le \dots \le |{z}_{n}|$. There exist complex numbers ${\beta}_{1},{\beta}_{2,1},\dots ,{\beta}_{2,{\nu}_{2}},\dots ,{\beta}_{n,1},\dots ,{\beta}_{n,{\nu}_{n}}$ such that the partial fraction expansion of $P\left(z\right)/Q\left(z\right)$ can be written as

$$P\left(z\right)/Q\left(z\right)=\frac{{\beta}_{1}}{{z}_{1}-z}+\frac{{\beta}_{2,1}}{{z}_{2}-z}+\frac{{\beta}_{2,2}}{{({z}_{2}-z)}^{2}}+\dots +\frac{{\beta}_{2,{\nu}_{2}}}{{({z}_{2}-z)}^{{\nu}_{2}}}+\dots +\frac{{\beta}_{n,1}}{{z}_{n}-z}+\dots +\frac{{\beta}_{n,{\nu}_{n}}}{{({z}_{n}-z)}^{{\nu}_{n}}}.\tag{6}$$

From Lemma 1, we can expand the terms of the sum as

$$\frac{{\beta}_{j,m}}{{({z}_{j}-z)}^{m}}=\frac{{\beta}_{j,m}}{{z}_{j}^{m}{(1-z/{z}_{j})}^{m}}=\frac{{\beta}_{j,m}}{{z}_{j}^{m-1}}\sum _{k=0}^{\infty}\left(\genfrac{}{}{0pt}{}{k+m-1}{m-1}\right)\frac{{z}^{k}}{{z}_{j}^{k+1}}.$$

Substituting the expression above in expression (6), we obtain

$$\left[{z}^{k}\right]A\left(z\right)=\frac{{\beta}_{1}}{{z}_{1}^{k+1}}+\frac{{\alpha}_{2,k}}{{z}_{2}^{k+1}}+\dots +\frac{{\alpha}_{n,k}}{{z}_{n}^{k+1}},\tag{7}$$

where

$${\alpha}_{j,k}=\sum _{m=1}^{{\nu}_{j}}\left(\genfrac{}{}{0pt}{}{k+m-1}{m-1}\right)\frac{{\beta}_{j,m}}{{z}_{j}^{m-1}}=O\left({k}^{{\nu}_{j}-1}\right).$$

Since ${z}_{1}$ is the root with smallest modulus, the sum (7) is dominated by the term ${z}_{1}^{-k-1}$ as k increases, so the coefficient of ${z}^{k}$ in $A\left(z\right)$ is asymptotically equivalent to

$$\left[{z}^{k}\right]A\left(z\right)\sim \frac{{\beta}_{1}}{{z}_{1}^{k+1}}.$$

To find the value of ${\beta}_{1}$, we keep only the first term of the partial fraction decomposition. More specifically, there exist two polynomials ${P}_{1}$ and ${Q}_{1}$ such that

$$\frac{P\left(z\right)}{Q\left(z\right)}=\frac{P\left(z\right)}{({z}_{1}-z){Q}_{1}\left(z\right)}=\frac{{\beta}_{1}}{{z}_{1}-z}+\frac{{P}_{1}\left(z\right)}{{Q}_{1}\left(z\right)}.$$

Since $({z}_{1}-z)$ does not divide ${Q}_{1}$, we can multiply this expression through by $({z}_{1}-z)$ and set $z={z}_{1}$ to obtain $P\left({z}_{1}\right)/{Q}_{1}\left({z}_{1}\right)={\beta}_{1}$. Differentiating the expression $Q\left(z\right)=({z}_{1}-z){Q}_{1}\left(z\right)$ shows that ${Q}^{\prime}\left({z}_{1}\right)=-{Q}_{1}\left({z}_{1}\right)$, and thus that ${\beta}_{1}=-P\left({z}_{1}\right)/{Q}^{\prime}\left({z}_{1}\right)$, which concludes the proof. ☐

Theorem 2 says that the asymptotic growth of the coefficients of the series expansion of $A\left(z\right)$ is dictated by the singularity with smallest modulus, also known as the “dominant singularity”. An important observation is that the relative error in expression (4) is $O\left(\right|{z}_{1}/{z}_{2}{|}^{k})$, i.e., it decreases exponentially fast as k increases.

The hypotheses of Theorem 2 that there is only one dominant singularity and that it has multiplicity 1 are essential. Otherwise, expression (4) does not hold and other asymptotic regimes occur [25] (p. 498, Theorem 4). One can show that the conditions of Theorem 2 are satisfied for the weighted generating functions described in the next section. Importantly, one can also show that, in every case, the root with smallest modulus ${z}_{1}$ is a real number greater than 1. This has the important consequence that we can search ${z}_{1}$ in the space of real numbers greater than 1 using numerical methods such as Newton–Raphson or bisection. The proofs of these statements are outside the scope of this manuscript; the key observation is that, in all the cases, the body of the transfer graph is irreducible and aperiodic in the sense of Markov chains [14] (p. 341, Definitions V.5 and V.6). One can thus apply [14] (Theorem V.7 p. 434 and statement V.44 p. 358) to the body of the transfer graph, which by Theorem 1 is the sole contributor to the asymptotic growth of the coefficients. We refer the interested reader to [14] (pp. 336–58).

Note that expression (7) in the proof of Theorem 2 gives the exact value of the coefficients of the weighted generating function. Computing this expression requires finding all the singularities of the weighted generating function, and all the coefficients ${\beta}_{1},{\beta}_{2,1},\dots ,{\beta}_{2,{\nu}_{2}},\dots ,{\beta}_{n,1},\dots ,{\beta}_{n,{\nu}_{n}}$. When all the singularities of the weighted generating function are simple poles, this expression is particularly simple.

With the hypotheses of Theorem 2, if ${z}_{1},{z}_{2},\dots ,{z}_{n}$ (the roots of Q) all have multiplicity 1, then

$$\left[{z}^{k}\right]A\left(z\right)=-\sum _{j=1}^{n}\frac{P\left({z}_{j}\right)}{{Q}^{\prime}\left({z}_{j}\right)}\frac{1}{{z}_{j}^{k+1}}.$$

Corollary 1 can be used to find the exact value of the coefficients, but, in general, it is easier to use the weighted generating function to set up a linear recurrence to compute the coefficients (see, for instance, [13] (§3.3)).

If the weighted generating function $A\left(z\right)={\sum}_{k\ge 0}{a}_{k}{z}^{k}$ can be written as the ratio of two polynomials $P\left(z\right)/Q\left(z\right)$ with $deg\left(P\right)<deg\left(Q\right)$, then the sequence ${\left({a}_{k}\right)}_{k\ge 0}$ satisfies a linear recurrence with constant coefficients.

Say that $P\left(z\right)={p}_{0}+{p}_{1}z+\dots +{p}_{m}{z}^{m}$ and that $Q\left(z\right)={q}_{0}+{q}_{1}z+\dots +{q}_{n}{z}^{n}$. To balance the coefficient of ${z}^{0}$ on both sides of the equation $P\left(z\right)=Q\left(z\right){\sum}_{k\ge 0}{a}_{k}{z}^{k}$, we must have ${p}_{0}={q}_{0}{a}_{0}$, yielding ${a}_{0}={p}_{0}/{q}_{0}$. The coefficient of ${z}^{1}$ must also be balanced, which implies ${p}_{1}={q}_{1}{a}_{0}+{q}_{0}{a}_{1}$, yielding ${a}_{1}=({p}_{1}-{q}_{1}{a}_{0})/{q}_{0}$. The process is repeated to find the next values of ${a}_{k}$. For $k<n$, the solution depends on the previous values and on the first $k+1$ coefficients of P and Q. For $k\ge n$, we have ${a}_{k}=-({q}_{1}{a}_{k-1}+\dots +{q}_{n}{a}_{k-n})/{q}_{0}$. This is a linear recurrence of order $n=deg\left(Q\right)$ with constant coefficients, whose initial conditions depend on the coefficients of P. ☐
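The recurrence in this proof can be implemented directly (a sketch; the function name is ours):

```python
# Taylor coefficients a_k of P(z)/Q(z) from the coefficient lists of P and Q,
# using the linear recurrence derived above (sketch).

def series_coeffs(P, Q, n):
    """First n+1 Taylor coefficients of P(z)/Q(z), assuming Q[0] != 0."""
    a = []
    for k in range(n + 1):
        pk = P[k] if k < len(P) else 0.0
        # a_k = (p_k - q_1 a_{k-1} - ... - q_n a_{k-n}) / q_0
        s = sum(Q[i] * a[k - i] for i in range(1, min(k, len(Q) - 1) + 1))
        a.append((pk - s) / Q[0])
    return a

# Check against 1/(1 - z) = 1 + z + z^2 + ...
print(series_coeffs([1.0], [1.0, -1.0], 5))   # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```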

From the experimental point of view, a sequencing read is the result of an assay on some polymer of nucleic acid. The output of the assay is the decoded sequence of monomers that compose the molecule. Three types of sequencing errors can occur: substitutions, deletions and insertions. A substitution is a nucleotide that is different in the molecule and in the read, a deletion is a nucleotide that is present in the molecule but not in the read, and an insertion is a nucleotide that is absent in the molecule but present in the read. For our purpose, the focus is not the nucleotide sequence per se, but whether the nucleotides are correct. Thus, we need only four symbols to describe a read: one for each type of error, plus one for correct nucleotides. In this view, a read is a finite sequence of letters from an alphabet of four symbols. Figure 1 shows the typical structure of a read.

A read can be partitioned uniquely into maximal sequences of identical symbols referred to as “intervals”. Thus, reads can also be seen as sequences of either error-free intervals or error symbols (Figure 2). As detailed below, this will allow us to control the size of the largest error-free interval.

These concepts established, we can compute seeding probabilities in the read mapping problem. We define an “exact $\gamma $-seed”, or simply a seed, as an exact match of minimum size $\gamma $ between the read and the actual sequence of the molecule. In other words, an exact $\gamma $-seed is an error-free interval of size at least $\gamma $. Because of sequencing errors, it could be that the read contains no seed. In this case, the read cannot be mapped to the correct location if the mapping algorithm requires seeds of size $\gamma $ or greater.

Our goal is to construct estimators of the probability that a read contains an exact $\gamma $-seed based on expected sequencing errors. For this, we will construct the weighted generating functions of reads that do not contain an exact $\gamma $-seed by decomposing them as sequences of either error symbols or error-free intervals of size less than $\gamma $. We will obtain their weighted generating functions from Theorem 1 and use Theorem 2 to approximate their probability of occurrence. With the weighted generating function of all the reads $R\left(z\right)$, and that of reads without an exact $\gamma $-seed ${S}_{\gamma}\left(z\right)$, the probability that a read of size k has no exact $\gamma $-seed can be computed as $\left[{z}^{k}\right]{S}_{\gamma}\left(z\right)/\left[{z}^{k}\right]R\left(z\right)$, i.e., the total weight of reads of size k without seed divided by the total weight of reads of size k.

In the simplest model, we assume that errors can only be substitutions, and that they occur with the same probability p for every nucleotide. Despite its simplicity, this model has real applications: it describes reasonably well the error model of the Illumina platforms, where p is around $0.01$ [26].
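Under this model, a read is simply a string of independent correct/incorrect symbols, so a brute-force Monte Carlo estimate of the no-seed probability is easy to write and serves as an independent check of the analytic results (a sketch; names and parameter values are illustrative):

```python
# Monte Carlo estimate of the probability that a read of size k contains no
# error-free run of length >= gamma, under uniform substitutions with rate p
# (sketch; parameters are illustrative).
import random

def has_seed(read, gamma):
    """True if the read (list of booleans: True = correct nucleotide)
    contains an error-free interval of size >= gamma."""
    run = 0
    for correct in read:
        run = run + 1 if correct else 0
        if run >= gamma:
            return True
    return False

def mc_no_seed(k, gamma, p, trials=100_000, seed=1):
    rng = random.Random(seed)
    fails = sum(
        not has_seed([rng.random() >= p for _ in range(k)], gamma)
        for _ in range(trials)
    )
    return fails / trials

print(mc_no_seed(50, 20, 0.01))
```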

Under this error model, reads are sequences of single substitutions or error-free intervals. They can be thought of as walks on the transfer graph shown in Figure 3. The symbol ${\Delta}_{0}$ stands for an error-free interval and the symbol S stands for a single substitution. $F\left(z\right)$ and $pz$ are the weighted generating functions of error-free intervals and substitutions, respectively. The fact that an error-free interval cannot follow another error-free interval is a consequence of the definition: two consecutive intervals are automatically merged into a single one.

A substitution is a single nucleotide and thus has size 1. Because substitutions have probability p, their weighted generating function is $pz$. Conversely, the weighted generating function of correct nucleotides is $qz$, where $q=1-p$. Error-free intervals are non-empty sequences of correct nucleotides, so by Proposition 1 their weighted generating function is

$$F\left(z\right)=qz+{\left(qz\right)}^{2}+{\left(qz\right)}^{3}+\dots =\frac{qz}{1-qz}.\tag{8}$$

The transfer matrix of the graph shown in Figure 3, with rows and columns ordered as head vertex ($\circ$), ${\Delta}_{0}$, S and tail vertex ($\bullet$), is

$${M}_{\ast}\left(z\right)=\left(\begin{array}{cccc}0& F\left(z\right)& pz& 1\\ 0& 0& pz& 1\\ 0& F\left(z\right)& pz& 1\\ 0& 0& 0& 0\end{array}\right).$$

With the notations of Theorem 1, we have $H\left(z\right)=(F\left(z\right),pz)$, $T\left(z\right)={(1,1)}^{\top}$, $\psi \left(z\right)=1$ and

$$M\left(z\right)=\left(\begin{array}{cc}0& pz\\ F\left(z\right)& pz\end{array}\right).$$

Applying Theorem 1, the weighted generating function of all reads $R(z)$ is found from the formula $R(z)=\psi(z)+H(z)\cdot{(I-M(z))}^{-1}\cdot T(z)$, which, in this case, translates to

$$R(z)=1+(F(z),pz)\cdot\frac{1}{\lambda(z)}\left(\begin{array}{cc}1-pz & pz\\ F(z) & 1\end{array}\right)\cdot\left(\begin{array}{c}1\\ 1\end{array}\right),$$

where $\lambda(z)=1-pz(1+F(z))$ is the determinant of $I-M(z)$. Using equation (8), this expression simplifies to

$$R(z)=\frac{1+F(z)}{1-pz(1+F(z))}=\frac{1}{1-z}.$$

Since $1/(1-z)=1+z+{z}^{2}+\dots $, the total weight of reads of size k is equal to 1 for any $k\ge 0$. As a consequence, $\left[{z}^{k}\right]R\left(z\right)=1$ and the probability that a read of size k has no exact $\gamma $-seed is equal to $\left[{z}^{k}\right]{S}_{\gamma}\left(z\right)$. To find the weighted generating function of reads without an exact $\gamma $-seed, we limit error-free intervals to a maximum size of $\gamma -1$. To do this, we can replace $F\left(z\right)$ in expression (9) by its truncation ${F}_{\gamma}\left(z\right)=qz+{\left(qz\right)}^{2}+\dots +{\left(qz\right)}^{\gamma -1}$. We obtain

$${S}_{\gamma}\left(z\right)=\frac{1+{F}_{\gamma}\left(z\right)}{1-pz\left(1+{F}_{\gamma}\left(z\right)\right)}=\frac{1+qz+\dots +{\left(qz\right)}^{\gamma -1}}{1-pz\left(1+qz+\dots +{\left(qz\right)}^{\gamma -1}\right)}.$$

Now applying Theorem 2 to the expression of ${S}_{\gamma}\left(z\right)$ above, we obtain the following proposition.

**Proposition 2.**
The probability that a read of size k has no seed under the uniform substitutions model is asymptotically equivalent to

$$\frac{C}{z_{1}^{k+1}},$$

where $z_{1}$ is the root with smallest modulus of $1-pz(1+qz+\dots+{(qz)}^{\gamma-1})$, and where

$$C=\frac{{(1-qz_{1})}^{2}}{p^{2}z_{1}\left(1-(\gamma+1-\gamma qz_{1}){(qz_{1})}^{\gamma}\right)}.$$

Approximate the probability that a read of size $k=100$ has no seed for $\gamma =17$ and for a substitution rate $p=0.1$. To find the dominant singularity of ${S}_{17}$, we solve $1-0.1z\times (1+0.9z+\dots +{\left(0.9z\right)}^{16})=0$. We rewrite the equation as $1-0.1z\times (1-{\left(0.9z\right)}^{17})/(1-0.9z)=0$ and use numerical bisection to obtain ${z}_{1}\approx 1.0268856$. Substituting this value in equation (11) yields $C\approx 1.396145$, so the probability that a read contains no seed is approximately $1.396145/1.0268856^{101}\approx 0.095763$. For comparison, a 99% confidence interval obtained by performing 10 billion random simulations is $0.09575$–$0.09577$. The computational cost of the analytic combinatorics approach is infinitesimal compared to that of the random simulations, and the precision is much higher for $k=100$.
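The computation of this example can be sketched in a few lines of Python (a sketch, not code from the article; the function name and the bisection bracket are our own choices, and the bracket is valid because the dominant root lies in $(1, 1/q)$ for these parameter values):

```python
def no_seed_prob_subst(k, gamma, p):
    """Asymptotic P(read of size k has no exact gamma-seed),
    uniform substitution model."""
    q = 1.0 - p
    # Q(z) = 1 - p*z*(1 + qz + ... + (qz)^(gamma-1)), written in closed form.
    def Q(z):
        return 1.0 - p * z * (1.0 - (q * z) ** gamma) / (1.0 - q * z)
    # Bisection for the dominant root: Q decreases on (1, 1/q), with
    # Q(1) > 0 and Q(1/q) < 0 for these parameter values.
    lo, hi = 1.0, 1.0 / q - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    z1 = 0.5 * (lo + hi)
    # Constant C from equation (11).
    C = (1.0 - q * z1) ** 2 / (
        p ** 2 * z1 * (1.0 - (gamma + 1 - gamma * q * z1) * (q * z1) ** gamma))
    return C / z1 ** (k + 1)

print(no_seed_prob_subst(100, 17, 0.1))  # ~0.0958, matching the example
```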

Overall, the analytic combinatorics estimates are accurate. Figure 4 illustrates the precision of the estimates for different values of the error rate p and of the read size k.

One can also compute the probabilities by recurrence using Theorem 3, after replacing the term $1+qz+\dots +{\left(qz\right)}^{\gamma -1}$ by $(1-{\left(qz\right)}^{\gamma})/(1-qz)$ in expression (10). Denoting $\left[{z}^{k}\right]{S}_{\gamma}\left(z\right)$ as ${s}_{k}$, one obtains for every positive integer $\gamma $

$$s_{k}=\begin{cases}1, & \text{if } 0\le k<\gamma,\\ 1-q^{\gamma}, & \text{if } k=\gamma,\\ s_{k-1}-pq^{\gamma}\cdot s_{k-\gamma-1}, & \text{if } k>\gamma.\end{cases}$$
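The recurrence above translates directly into code (a sketch; the function name is ours):

```python
def no_seed_exact_subst(k, gamma, p):
    """Exact P(no gamma-seed) for the uniform substitution model,
    computed with the recurrence above."""
    q = 1.0 - p
    s = [1.0] * gamma                 # s_0 ... s_{gamma-1} are all 1
    s.append(1.0 - q ** gamma)        # s_gamma
    for n in range(gamma + 1, k + 1):
        s.append(s[n - 1] - p * q ** gamma * s[n - gamma - 1])
    return s[k]
```

For $k=100$, $\gamma=17$ and $p=0.1$ this returns a value inside the 99% simulation confidence interval of the example above.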

We now consider a model where errors can be deletions or substitutions, but not insertions. This case is not very realistic, but it will be useful to clarify how to construct reads with potential deletions. As in the case of uniform substitutions, we assume that every nucleotide call is false with a probability p and true with a probability $1-p=q$. Here, we also assume that between every pair of decoded nucleotides in the read, an arbitrary number of nucleotides from the original molecule are deleted with probability $\delta $. Regardless of the number of deleted nucleotides, all the deletions are equivalent when the read is viewed as a sequence of error-free intervals or error symbols (see Figure 2).

A deletion may be adjacent to a substitution, or lie between two correct nucleotides. In the first case, the deletion does not interrupt any error-free interval so it does not change the probability that the read contains a seed. For this reason, we ignore deletions next to substitutions. More precisely, we assume that they can occur, but whether they do has no importance for the problem.

Under this error model, a read can be thought of as a walk on the transfer graph shown in Figure 5. The graph is almost the same as the one shown in Figure 3; the only difference is the edge labelled $\delta F\left(z\right)$ from ${\Delta}_{0}$ to ${\Delta}_{0}$. This edge represents the fact that an error-free interval can follow another one if a deletion, with weighted generating function $\delta $, lies in between (as illustrated for instance in Figure 2).

The weighted generating function of error-free intervals $F\left(z\right)$ has a different expression from that of Section 4.2. When the size of an error-free interval is 1, the weighted generating function is just $qz$. For a size $k>1$, there are $k-1$ “spaces” between the nucleotides, so the weighted generating function is ${(1-\delta )}^{k-1}{\left(qz\right)}^{k}$. Summing for all the possible sizes, we obtain the weighted generating function of error-free intervals as

$$F\left(z\right)=qz+(1-\delta ){\left(qz\right)}^{2}+{(1-\delta )}^{2}{\left(qz\right)}^{3}+\dots =\frac{qz}{1-(1-\delta )qz}.$$

The transfer matrix of the graph shown in Figure 5 is

$$M_{\ast}(z)=\left(\begin{array}{cccc}0 & F(z) & pz & 1\\ 0 & \delta F(z) & pz & 1\\ 0 & F(z) & pz & 1\\ 0 & 0 & 0 & 0\end{array}\right),$$

with rows and columns indexed in the order $\circ$, $\Delta_{0}$, $S$, $\bullet$.

With the notations of Theorem 1, $H(z)=(F(z),pz)$, $T(z)={(1,1)}^{\top}$, $\psi(z)=1$ and

$$M(z)=\left(\begin{array}{cc}\delta F(z) & pz\\ F(z) & pz\end{array}\right),$$

with rows and columns indexed in the order $\Delta_{0}$, $S$.

From Theorem 1, the weighted generating function of all reads is

$$R(z)=1+(F(z),pz)\cdot\frac{1}{\lambda(z)}\left(\begin{array}{cc}1-pz & pz\\ F(z) & 1-\delta F(z)\end{array}\right)\cdot\left(\begin{array}{c}1\\ 1\end{array}\right),$$

where $\lambda(z)=1-pz-\left(pz(1-\delta)+\delta\right)F(z)$ is the determinant of $I-M(z)$. Using equation (13), this expression simplifies to

$$R(z)=\frac{1+(1-\delta)F(z)}{1-pz-\left(pz(1-\delta)+\delta\right)F(z)}=\frac{1}{1-z}.$$

As in Section 4.2, the result is $1/(1-z)=1+z+{z}^{2}+\dots $, which means that the probability that a read of size k contains no seed is equal to $\left[{z}^{k}\right]{S}_{\gamma}\left(z\right)$. To find the weighted generating function of reads without an exact $\gamma $-seed, we bound the size of error-free intervals to a maximum of $\gamma -1$, i.e., we replace $F\left(z\right)$ by its truncation ${F}_{\gamma}\left(z\right)=qz+(1-\delta ){\left(qz\right)}^{2}+\dots +{(1-\delta )}^{\gamma -2}{\left(qz\right)}^{\gamma -1}$. With this, the weighted generating function of reads without seed is

$${S}_{\gamma}\left(z\right)=\frac{1+(1-\delta ){F}_{\gamma}\left(z\right)}{1-pz-\left(pz(1-\delta )+\delta \right){F}_{\gamma}\left(z\right)}.$$

Applying Theorem 2 to this expression, we obtain the following proposition.

**Proposition 3.**
The probability that a read of size k has no seed under the model of uniform substitutions and deletions is asymptotically equivalent to

$$\frac{C}{z_{1}^{k+1}},$$

where $z_{1}$ is the root with smallest modulus of $1-pz-\left(pz(1-\delta)+\delta\right)\left(qz+(1-\delta){(qz)}^{2}+\dots+{(1-\delta)}^{\gamma-2}{(qz)}^{\gamma-1}\right)$, and where

$$\begin{aligned}C&=\frac{z_{1}{\left(1-(1-\delta)qz_{1}\right)}^{2}}{\left((p+q\delta)z_{1}-c_{1}{(1-\delta)}^{\gamma-1}{(qz_{1})}^{\gamma}\right)\left(\delta+(1-\delta)pz_{1}\right)},\quad\text{with}\\ c_{1}&=\gamma\delta-(1-\delta)\left((\gamma-1)\delta-p\left((\gamma-1)\delta+\gamma+1\right)\right)z_{1}-\gamma{(1-\delta)}^{2}pqz_{1}^{2}.\end{aligned}$$

If ${z}_{1}=1/\left((1-\delta )q\right)$, expression (16) is undefined and the constant C should be computed as $-P\left({z}_{1}\right)/{Q}^{\prime}\left({z}_{1}\right)$, where $P\left(z\right)$ and $Q\left(z\right)$ are the respective numerator and denominator of ${S}_{\gamma}\left(z\right)$ in expression (15).

Approximate the probability that a read of size $k=100$ has no seed for $\gamma =17$, $p=0.05$ and $\delta =0.15$. To find the dominant singularity of ${S}_{17}$, we solve $1-0.05z-\left(0.0425z+0.15\right)\left(0.95z+0.85{\left(0.95z\right)}^{2}+\dots +{0.85}^{15}{\left(0.95z\right)}^{16}\right)=0$. We write it as $1-0.05z-(0.0425z+0.15)(0.95z-{0.85}^{16}{\left(0.95z\right)}^{17})/(1-0.8075z)=0$ and use numerical bisection to obtain ${z}_{1}\approx 1.006705$. Now, substituting the obtained value in Equation (16) gives $C\approx 1.096177$, so the probability is approximately $1.096177/1.006705^{101}\approx 0.558141$. For comparison, a 99% confidence interval obtained by performing 10 billion random simulations is $0.55813$–$0.55816$.

Once again, the analytic combinatorics estimates are accurate. Figure 6 illustrates the precision of the estimates for different values of the deletion rate $\delta $ and of the read size k.

The probabilities can also be computed by recurrence using Theorem 3, after replacing ${F}_{\gamma}\left(z\right)$ by $qz(1-{\left((1-\delta )qz\right)}^{\gamma -1})/(1-(1-\delta )qz)$ in expression (15). Denoting $\left[{z}^{k}\right]{S}_{\gamma}\left(z\right)$ as ${s}_{k}$, one obtains for every integer $\gamma >1$

$$s_{k}=\begin{cases}1, & \text{if } 0\le k<\gamma,\\ 1-{(1-\delta)}^{\gamma-1}q^{\gamma}, & \text{if } k=\gamma,\\ s_{k-1}-\delta q^{\gamma}{(1-\delta)}^{\gamma-1}\cdot s_{k-\gamma}-pq^{\gamma}{(1-\delta)}^{\gamma}\cdot s_{k-\gamma-1}, & \text{if } k>\gamma.\end{cases}$$
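This recurrence is just as easy to implement (a sketch; the function name is ours, and setting $\delta = 0$ recovers the uniform-substitution recurrence):

```python
def no_seed_exact_subdel(k, gamma, p, delta):
    """Exact P(no gamma-seed) under substitutions and deletions,
    computed with the recurrence above."""
    q = 1.0 - p
    a = delta * q ** gamma * (1.0 - delta) ** (gamma - 1)  # weight of s_{k-gamma}
    b = p * q ** gamma * (1.0 - delta) ** gamma            # weight of s_{k-gamma-1}
    s = [1.0] * gamma                                      # s_0 ... s_{gamma-1}
    s.append(1.0 - (1.0 - delta) ** (gamma - 1) * q ** gamma)  # s_gamma
    for n in range(gamma + 1, k + 1):
        s.append(s[n - 1] - a * s[n - gamma] - b * s[n - gamma - 1])
    return s[k]
```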

Here, we consider a model where all types of errors are allowed (also referred to as the “full error model”). Introducing insertions brings two additional difficulties. The first is that a substitution is indistinguishable from an insertion followed by a deletion (or a deletion followed by an insertion). By convention, we will count all these cases as substitutions. As a consequence, a deletion can never be found next to an insertion. The second difficulty is that insertions usually come in bursts. This is also the case for deletions, but there we could neglect it because a burst of deletions is equivalent to a single deletion (all deletions have size 0).

To model insertion bursts, we need to assign a probability r to the first insertion, and a probability $\tilde{r}>r$ to all subsequent insertions of the burst. We will still denote the probability of a substitution p and that of a correct nucleotide q, but here $p+q+r=1$. We will also assume that an insertion burst stops with probability $1-\tilde{r}$ at each position of the burst.

Under this error model, reads can be thought of as walks on the transfer graph shown in Figure 7. To avoid overloading the figure, the body of the transfer graph is represented on the left, and the head and tail vertices on the right. The symbols ${\Delta}_{0}$, S and I stand for error-free intervals, single substitutions and single insertions, respectively. The terms $F\left(z\right)$, $pz$ and $\delta F\left(z\right)$ are the same as in Section 4.3. The terms $rz$ and $\tilde{r}z$ are the weighted generating functions of the first inserted nucleotide and of all subsequent nucleotides of the insertion burst, respectively. The burst terminates with probability $1-\tilde{r}$ and is followed by an error-free interval or by a substitution. The total weight of these two cases is $p+q<1$, so their weighted generating functions must be renormalized by a factor $p+q=1-r$, which yields the entries $\frac{1-\tilde{r}}{1-r}F(z)$ and $\frac{1-\tilde{r}}{1-r}pz$.

The expression of the weighted generating function of error-free intervals $F\left(z\right)$ is the same as in Section 4.3, namely
$$F\left(z\right)=qz+(1-\delta ){\left(qz\right)}^{2}+{(1-\delta )}^{2}{\left(qz\right)}^{3}+\dots =\frac{qz}{1-(1-\delta )qz}.$$

The transfer matrix of the graph shown in Figure 7 is

$$M_{\ast}(z)=\left(\begin{array}{ccccc}0 & F(z) & pz & rz & 1\\ 0 & \delta F(z) & pz & rz & 1\\ 0 & F(z) & pz & rz & 1\\ 0 & \frac{1-\tilde{r}}{1-r}F(z) & \frac{1-\tilde{r}}{1-r}pz & \tilde{r}z & 1\\ 0 & 0 & 0 & 0 & 0\end{array}\right),$$

with rows and columns indexed in the order $\circ$, $\Delta_{0}$, $S$, $I$, $\bullet$.

With the notations of Theorem 1, $H(z)=(F(z),pz,rz)$, $T(z)={(1,1,1)}^{\top}$, $\psi(z)=1$ and

$$M(z)=\left(\begin{array}{ccc}\delta F(z) & pz & rz\\ F(z) & pz & rz\\ \frac{1-\tilde{r}}{1-r}F(z) & \frac{1-\tilde{r}}{1-r}pz & \tilde{r}z\end{array}\right),$$

with rows and columns indexed in the order $\Delta_{0}$, $S$, $I$.

From Theorem 1, the weighted generating function of all reads is $\psi(z)+H(z)\cdot{(I-M(z))}^{-1}\cdot T(z)$, which is equal to

$$R(z)=\frac{(1-r)\left(1-(\tilde{r}-r)z\right)\left(1+(1-\delta)F(z)\right)}{1-a(z)-b(z)F(z)},$$

where $a(z)$ and $b(z)$ are second degree polynomials defined as

$$\begin{aligned}a(z)&=r+(1-r)(p+\tilde{r})z-p(\tilde{r}-r)z^{2},\quad\text{and}\\ b(z)&=\delta(1-r)+\left((1-r)\left(p-\delta(p+\tilde{r})\right)+(1-\tilde{r})r\right)z-p(1-\delta)(\tilde{r}-r)z^{2}.\end{aligned}$$

Substituting in (18) the expressions of $F\left(z\right)$, $a\left(z\right)$ and $b\left(z\right)$, we find

$$R\left(z\right)=\frac{1}{1-z}.$$

Again, we obtain the simple expression $1/(1-z)=1+z+{z}^{2}+\dots $ and the probability that a read of size k contains no seed is $\left[{z}^{k}\right]{S}_{\gamma}\left(z\right)$. To find the weighted generating function of reads without an exact $\gamma $-seed, we replace $F\left(z\right)$ in expression (18) by its truncated version

$${F}_{\gamma}\left(z\right)=qz+(1-\delta ){\left(qz\right)}^{2}+{(1-\delta )}^{2}{\left(qz\right)}^{3}+\dots +{(1-\delta )}^{\gamma -2}{\left(qz\right)}^{\gamma -1}.$$

We obtain the following expression

$$S_{\gamma}(z)=\frac{(1-r)\left(1-(\tilde{r}-r)z\right)\left(1+(1-\delta)F_{\gamma}(z)\right)}{1-a(z)-b(z)F_{\gamma}(z)},$$

where $a(z)$ and $b(z)$ are defined as in expression (19).

Note that when $r=\tilde{r}=0$, we have $a(z)=pz$ and $b(z)=pz(1-\delta)+\delta$, so expression (21) becomes

$$S_{\gamma}(z)=\frac{1+(1-\delta)F_{\gamma}(z)}{1-pz-\left(pz(1-\delta)+\delta\right)F_{\gamma}(z)}.$$

This is expression (15), i.e., the model described in Section 4.3. When we also have $\delta =0$, this expression further simplifies to

$${S}_{\gamma}\left(z\right)=\frac{1+{F}_{\gamma}\left(z\right)}{1-pz(1+{F}_{\gamma}\left(z\right))}.$$

This is expression (10), i.e., the model described in Section 4.2. In other words, the error models described previously are special cases of this error model.

As in the previous sections, we can use Theorem 2 to obtain asymptotic approximations for the probability that the reads contain no seed.

**Proposition 4.**
The probability that a read of size k has no seed under the error model with substitutions, deletions and insertions is asymptotically equivalent to

$$\frac{C}{z_{1}^{k+1}},$$

where $z_{1}$ is the root with smallest modulus of the polynomial $1-a(z)-b(z)F_{\gamma}(z)$ and

$$C=\frac{(1-r)\left(1-(\tilde{r}-r)z_{1}\right)\left(1+(1-\delta)F_{\gamma}(z_{1})\right)}{a^{\prime}(z_{1})+b^{\prime}(z_{1})F_{\gamma}(z_{1})+b(z_{1})F_{\gamma}^{\prime}(z_{1})}.$$

If ${z}_{1}=1/\left((1-\delta )q\right)$, then ${F}_{\gamma}\left({z}_{1}\right)=(\gamma -1)/(1-\delta )$ and ${F}_{\gamma}^{\prime}\left({z}_{1}\right)=q\gamma (\gamma -1)/2$. Otherwise,

$$\begin{array}{c}{F}_{\gamma}\left({z}_{1}\right)=q{z}_{1}\frac{1-{\left((1-\delta )q{z}_{1}\right)}^{\gamma -1}}{1-(1-\delta )q{z}_{1}},\phantom{\rule{4.pt}{0ex}}and\\ {F}_{\gamma}^{\prime}\left({z}_{1}\right)=q\frac{1+\left((1-\delta )(\gamma -1)q{z}_{1}-\gamma \right){\left((1-\delta )q{z}_{1}\right)}^{\gamma -1}}{{\left(1-(1-\delta )q{z}_{1}\right)}^{2}}.\end{array}$$

If ${z}_{1}=1/(\tilde{r}-r)$, then $1-(\tilde{r}-r)z$ divides the numerator and the denominator, which should be simplified to remain coprime. In this case, Theorem 2 should be applied to the simplified rational function.

Approximate the probability that a read of size $k=100$ has no seed for $\gamma =17$, $p=0.05$, $\delta =0.15$, $r=0.05$ and $\tilde{r}=0.45$. With these values, $a\left(z\right)=0.05+0.475z-0.02{z}^{2}$ and $b\left(z\right)=0.1425+0.00375z-0.017{z}^{2}$. We need to solve $0.95-0.475z+0.02{z}^{2}-(0.1425+0.00375z-0.017{z}^{2})(0.9z+0.85{\left(0.9z\right)}^{2}+\dots +{0.85}^{15}{\left(0.9z\right)}^{16})=0$. We rewrite the equation as $0.95-0.475z+0.02{z}^{2}-(0.1425+0.00375z-0.017{z}^{2})(0.9z-{0.85}^{16}{\left(0.9z\right)}^{17})/(1-0.765z)=0$ and use bisection to solve it numerically, yielding ${z}_{1}\approx 1.00295617$. From expression (22), we obtain $C\approx 1.042504$, so the probability that a read contains no seed is approximately $1.042504/1.00295617^{101}\approx 0.773749$. For comparison, a 99% confidence interval obtained by performing 10 billion random simulations is $0.77373$–$0.77376$.
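This example can be reproduced numerically with a short script (a sketch; the function and helper names are ours, and the bisection bracket is chosen for these parameter values). It solves $1-a(z)-b(z)F_{\gamma}(z)=0$ by bisection and then applies expression (22):

```python
def no_seed_prob_full(k, gamma, p, delta, r, rt):
    """Asymptotic P(no gamma-seed): substitutions (p), deletions (delta),
    insertion bursts (first insertion r, subsequent insertions rt)."""
    q = 1.0 - p - r                     # p + q + r = 1
    def F(z):                           # truncated wgf of error-free intervals
        return sum((1.0 - delta) ** (j - 1) * (q * z) ** j
                   for j in range(1, gamma))
    def dF(z):                          # its derivative
        return sum(j * (1.0 - delta) ** (j - 1) * q ** j * z ** (j - 1)
                   for j in range(1, gamma))
    def a(z):
        return r + (1.0 - r) * (p + rt) * z - p * (rt - r) * z ** 2
    def da(z):
        return (1.0 - r) * (p + rt) - 2.0 * p * (rt - r) * z
    b1 = (1.0 - r) * (p - delta * (p + rt)) + (1.0 - rt) * r
    def b(z):
        return delta * (1.0 - r) + b1 * z - p * (1.0 - delta) * (rt - r) * z ** 2
    def db(z):
        return b1 - 2.0 * p * (1.0 - delta) * (rt - r) * z
    def D(z):                           # denominator of S_gamma
        return 1.0 - a(z) - b(z) * F(z)
    lo, hi = 1.0, 1.05                  # bracket valid for the example's values
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if D(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    z1 = 0.5 * (lo + hi)
    # Constant C from expression (22).
    C = ((1.0 - r) * (1.0 - (rt - r) * z1) * (1.0 + (1.0 - delta) * F(z1))
         / (da(z1) + db(z1) * F(z1) + b(z1) * dF(z1)))
    return C / z1 ** (k + 1)
```

With $r=\tilde{r}=0$ and $\delta=0$, the function reduces to the uniform-substitution estimate of Section 4.2, which is a convenient sanity check.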

Once again, the analytic combinatorics estimates are accurate. Figure 8 illustrates the precision of the estimates for different values of the insertion rate r and of the read size k.

The probabilities can also be computed by recurrence using Theorem 3, after replacing ${F}_{\gamma}\left(z\right)$ by $qz(1-{\left((1-\delta )qz\right)}^{\gamma -1})/(1-(1-\delta )qz)$ in expression (21). Denoting $\left[{z}^{k}\right]{S}_{\gamma}\left(z\right)$ as ${s}_{k}$, one obtains for every integer $\gamma >2$

$$s_{k}=\begin{cases}1, & \text{if } 0\le k<\gamma,\\ 1-{(1-\delta)}^{\gamma-1}q^{\gamma}, & \text{if } k=\gamma,\\ 1-{(1-\delta)}^{\gamma-1}q^{\gamma}\left(\frac{1-r\tilde{r}}{1-r}+p+q\delta\right), & \text{if } k=\gamma+1,\\ \begin{array}[t]{l}(1+\tilde{r}-r)\cdot s_{k-1}-(\tilde{r}-r)\cdot s_{k-2}-\delta q^{\gamma}{(1-\delta)}^{\gamma-1}\cdot s_{k-\gamma}\\ \quad{}+q^{\gamma}{(1-\delta)}^{\gamma-1}\left(\delta(p+\tilde{r})-p-r\frac{1-\tilde{r}}{1-r}\right)\cdot s_{k-\gamma-1}\\ \quad{}+pq^{\gamma}{(1-\delta)}^{\gamma}\frac{\tilde{r}-r}{1-r}\cdot s_{k-\gamma-2},\end{array} & \text{if } k>\gamma+1.\end{cases}$$
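The full-model recurrence can be implemented as follows (a sketch; the function name is ours, and setting $r=\tilde{r}=0$ recovers the recurrence of Section 4.3):

```python
def no_seed_exact_full(k, gamma, p, delta, r, rt):
    """Exact P(no gamma-seed) under the full error model, computed
    with the recurrence above."""
    q = 1.0 - p - r                     # p + q + r = 1
    K = q ** gamma * (1.0 - delta) ** (gamma - 1)
    s = [1.0] * gamma                   # s_0 ... s_{gamma-1} are all 1
    s.append(1.0 - K)                   # s_gamma
    s.append(1.0 - K * ((1.0 - r * rt) / (1.0 - r) + p + q * delta))  # s_{gamma+1}
    for n in range(gamma + 2, k + 1):
        s.append((1.0 + rt - r) * s[n - 1] - (rt - r) * s[n - 2]
                 - delta * K * s[n - gamma]
                 + K * (delta * (p + rt) - p - r * (1.0 - rt) / (1.0 - r))
                     * s[n - gamma - 1]
                 + p * q ** gamma * (1.0 - delta) ** gamma
                     * (rt - r) / (1.0 - r) * s[n - gamma - 2])
    return s[k]
```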

So far, all the examples showed that the analytic combinatorics approximations are accurate. Indeed, the main motivation for our approach is to find estimates that converge exponentially fast to the target value. To find out whether we can use the approximations in place of the true values, we need to describe the behavior of the estimates in the worst conditions. The approximations become more accurate as the size of the sequence increases, i.e., as the reads become longer. This is somewhat inconvenient: the read size is usually fixed by the technology or by the problem at hand, so the user does not have easy ways to improve the accuracy. Overall, the approximations described above tend to be less accurate for short reads.

Another aspect is convergence speed. The proof of Theorem 2 shows that the rate of convergence is fastest when the dominant singularity has a significantly smaller modulus than the other singularities. Conversely, convergence is slowest when at least one other singularity is almost as close to 0. The worst case for the approximation is thus when the reads are small and when the parameters are such that singularities have relatively close moduli. It can be shown that, for the error model of uniform substitutions, this corresponds to small values of the error rate p (see Appendix A).

In practical terms, the situation above describes the specifications of the Illumina technology, where errors are almost always substitutions, occurring at a frequency around 1% on current instruments. Since the reads are often around 50 nucleotides, the analytic combinatorics estimates of the seeding probabilities are typically less accurate than suggested in the previous sections.

Figure 9 shows the accuracy of the estimates in one of the worst cases. The analytic combinatorics estimates are clearly distinct from the simulation estimates at the chosen scale, but the absolute difference is never higher than approximately $0.015$ (and lower for read sizes above 40). Whether this error is acceptable depends on the problem. Often p itself must be estimated, which is a more serious limitation on the precision than the convergence speed of the estimates. In most practical applications, the approximation error of Theorem 2 can be tolerated even in the worst case, but it is important to bear in mind that it may not be negligible for reads of size 50 or lower. If this level of precision is insufficient, the best option is to compute the coefficients by recurrence.

For long reads, the approximations rapidly gain in accuracy. Importantly, the calculations are also numerically stable, even for very long reads and for very high values of the error rate. To explore the behavior of the estimates, they were computed over a wide range of conditions, and compared to the values obtained by computing recurrence (12).

The value of ${s}_{k}$ is the exact probability that a read of size k does not contain a seed in the uniform substitution error model, when the probability of substitution is equal to p. However, because the numbers are represented with finite precision, the computed value of ${s}_{k}$ is also inexact in practice. Figure 10 shows the relative error of the estimates given by Proposition 2, as compared to the value of ${s}_{k}$ computed through equation (12) in double precision arithmetic. For all the tested values of p, the relative error first decreases to approximately ${10}^{-15}$ and then slowly rises. The reason is that the theoretical accuracy of the estimates increases with the read size k, as justified by Theorem 2, but the errors in numerical approximations also increase and they finally dominate the error.

In spite of this artefact, it is clear that the relative error of the estimates remains low for very large read sizes. This means that the estimates are sufficiently close to the exact value, and that the calculations are numerically stable. Figure 10 also confirms that the value of p has a large influence on the convergence speed of the approximation. For high values of p, the relative error drops faster than for low values of p, as argued above.

The main reason for the numerical stability of the estimates is that the polynomial equations to solve can be properly represented with double precision numbers. For instance, in the extreme case of seeds of size 30 with a substitution rate $p=0.5$, the leading term of $Q\left(z\right)$ in Proposition 2 is $-p{(1-p)}^{29}{z}^{30}\approx -{10}^{-9}{z}^{30}$. This is several orders of magnitude above the machine epsilon (approximately equal to $2.2\times {10}^{-16}$ on most computers), sufficient to guarantee that this term will not underflow during the calculations, and thus that the dominant singularity will be computed with an adequate precision. The same applies for the other error models, as long as the rate of sequencing errors remains below $0.5$, which is the case in the vast majority of practical applications.
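This magnitude check is easy to reproduce (values as in the paragraph above):

```python
import sys

p, gamma = 0.5, 30
# Magnitude of the leading coefficient of Q(z), i.e. p*(1-p)^(gamma-1).
leading = p * (1.0 - p) ** (gamma - 1)
print(leading)                 # 9.313225746154785e-10
print(sys.float_info.epsilon)  # 2.220446049250313e-16
```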

In summary, the estimates presented above are accurate and numerically stable. In the case of short reads with low error rate, the precision may be limiting for some applications, but the approximations can be replaced by exact solutions. The approach presented here is thus a practical solution for computing seeding probabilities.

In this article, we exposed the analytic combinatorics approach to compute seeding probabilities in the read mapping problem. The general strategy of analytic combinatorics is to define combinatorial “atoms” with simple weighted generating functions (e.g., nucleotide symbols), combine these atoms into objects of increasing complexity (e.g., error-free intervals or reads without seed), construct their weighted generating functions from simple rules (e.g., through Theorem 1), and finally analyze the singularities of the weighted generating functions to approximate the quantities of interest (e.g., through Theorem 2). We can also use the generating function to set up an exact recurrence to find those quantities.

The seeding probabilities derived here are robust and relatively straightforward to compute, as they only entail solving a polynomial equation in real space. For short reads, where the precision of the approximations may be an issue, the solution is better computed by recurrence. Mapping high throughput sequencing reads generates a sufficient amount of data to estimate the parameters of the error model. One can thus envision auto-tuning the seeding heuristic of read mapping during the run. This can give tight and automatic control over the seeding probability. Alternatively, the theory developed above could be used to help users choose the parameter values of the mapping algorithm.

Seeding is not only used in mapping, but also in other alignment problems. In this regard, the work presented above can be applied to different contexts. That said, mapping high throughput sequencing reads is a “sweet spot” for analytic combinatorics because the sequences are usually long enough for the approximations to be accurate.

In summary, analytic combinatorics is a powerful strategy that comes with a rich toolbox that has many applications in modern bioinformatics. More applications will see the light when this theory is more widely known in the bioinformatics community.

I would like to thank the anonymous reviewers for their important contributions to this work. I would also like to thank Eduard Valera Zorita, Patrick Berger and Roman Cheplyaka for their comments on this work. I acknowledge the financial support of the Spanish Ministry of Economy and Competitiveness (‘Centro de Excelencia Severo Ochoa 2013-2017’, Plan Nacional BFU2012-37168), of the CERCA (Centres de Recerca de Catalunya) Programme / Generalitat de Catalunya, and of the European Research Council (Synergy Grant 609989).

The author declares no conflict of interest.

Here, we show that in the error model of Section 4.2, the moduli of the singularities of ${S}_{\gamma}\left(z\right)$ expressed in (10) get closer to each other as p decreases. More specifically, $|{z}_{j}|\sim |{z}_{m}|\phantom{\rule{0.277778em}{0ex}}(p\downarrow 0)$ for any two singularities ${z}_{j}$ and ${z}_{m}$.

Recall that the singularities of ${S}_{\gamma}\left(z\right)$ are the roots of $Q\left(z\right)=1-pz\left(1+qz+\dots +{\left(qz\right)}^{\gamma -1}\right)$, where $q=1-p$. Let ${z}_{j}$ be a root of Q and rearrange the terms of the equation $Q\left({z}_{j}\right)=0$ to obtain ${z}_{j}\left(1+q{z}_{j}+\dots +{\left(q{z}_{j}\right)}^{\gamma -1}\right)=1/p$. As $p\downarrow 0$, the right-hand side tends to $+\infty $ so the left-hand side must also tend to $+\infty $, imposing ${lim}_{p\downarrow 0}\left|{z}_{j}\right|=+\infty $.

Multiply $Q\left(z\right)=0$ by $(1-qz)$ and use $1+qz+\dots +{\left(qz\right)}^{\gamma -1}=\left(1-{\left(qz\right)}^{\gamma}\right)/(1-qz)$, where $z\ne 1/q$ to see that every singularity ${z}_{j}$ solves the equation $(1-qz)Q\left(z\right)=1-z+p{q}^{\gamma}{z}^{\gamma +1}=0$ or equivalently

$$1-1/{z}_{j}=p{q}^{\gamma}{z}_{j}^{\gamma},\phantom{\rule{1.em}{0ex}}{z}_{j}\ne \frac{1}{q}.$$

Since ${lim}_{p\downarrow 0}\left|{z}_{j}\right|=+\infty $, taking the limit of the equation above yields

$$\underset{p\downarrow 0}{lim}{\left(q{p}^{1/\gamma}{z}_{j}\right)}^{\gamma}=1,\phantom{\rule{1.em}{0ex}}\mathrm{i}.\mathrm{e}.,\phantom{\rule{1.em}{0ex}}\left|{z}_{j}\right|\sim \frac{1}{q{p}^{1/\gamma}}\phantom{\rule{1.em}{0ex}}(p\downarrow 0).$$

This is sufficient to prove that $|{z}_{j}|\sim |{z}_{m}|\phantom{\rule{0.277778em}{0ex}}(p\downarrow 0)$ for any two singularities ${z}_{j}$ and ${z}_{m}$, but we can further show that

$${z}_{j}\sim \frac{{e}^{2i(j-1)\pi /\gamma}}{q{p}^{1/\gamma}}\phantom{\rule{1.em}{0ex}}(p\downarrow 0),\phantom{\rule{1.em}{0ex}}j=1,2,\dots ,\gamma .$$

From (A1), we see that the terms $q{p}^{1/\gamma}{z}_{j}$ tend to $\gamma $-th roots of unity, which have $\gamma $ possible values. If we prove that Q has $\gamma $ distinct roots, then each of them must correspond to a different $\gamma $-th root of unity and (A2) will follow. Since Q is a polynomial of degree $\gamma $, we must prove that all its roots have multiplicity 1, i.e., that they do not solve ${Q}^{\prime}\left(z\right)=0$.

Let $V\left(z\right)=(1-qz)Q\left(z\right)=1-z+p{q}^{\gamma}{z}^{\gamma +1}$, so ${V}^{\prime}\left(z\right)=-1+(\gamma +1)p{q}^{\gamma}{z}^{\gamma}$, and compute the greatest common divisor of $V\left(z\right)$ and ${V}^{\prime}\left(z\right)$:

$$gcd\left(V\left(z\right),{V}^{\prime}\left(z\right)\right)=gcd(V\left(z\right)-\frac{z}{\gamma +1}{V}^{\prime}\left(z\right),{V}^{\prime}\left(z\right))=gcd(1-\frac{\gamma z}{\gamma +1},{V}^{\prime}\left(z\right)).$$

Up to a constant factor independent of z, the greatest common divisor is either 1 or $1-\gamma z/(\gamma +1)$. If $z=1+1/\gamma $ is not a root of ${V}^{\prime}\left(z\right)$, then $V\left(z\right)$ and ${V}^{\prime}\left(z\right)$ are relatively prime, so $V\left(z\right)$ does not have any double roots and neither does $Q\left(z\right)$.

If $z=1+1/\gamma $ is a root of ${V}^{\prime}\left(z\right)$, then the greatest common divisor is $1-\gamma z/(\gamma +1)$ so $z=1+1/\gamma $ is a root of $V\left(z\right)$ with multiplicity 2. This case arises when $p=1/(\gamma +1)$, which implies that $z=1+1/\gamma =1/q$. In $V\left(z\right)=(1-qz)Q\left(z\right)$, the factor $1-qz$ contributes one occurrence of the root, so $Q\left(z\right)$ contributes the other occurrence and $z=1+1/\gamma $ is thus a single root of $Q\left(z\right)$.

- Reuter, J.A.; Spacek, D.V.; Snyder, M.P. High-throughput sequencing technologies. Mol. Cell **2015**, 58, 586–597.
- Quilez, J.; Vidal, E.; Dily, F.L.; Serra, F.; Cuartero, Y.; Stadhouders, R.; Graf, T.; Marti-Renom, M.A.; Beato, M.; Filion, G. Parallel sequencing lives, or what makes large sequencing projects successful. Gigascience **2017**, 6, 1–6.
- Li, H.; Homer, N. A survey of sequence alignment algorithms for next-generation sequencing. Brief. Bioinform. **2010**, 11, 473–483.
- Durbin, R.; Eddy, S.R.; Krogh, A.; Mitchison, G. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids; Cambridge University Press: Cambridge, UK, 1998.
- Sun, Y.; Buhler, J. Choosing the best heuristic for seeded alignment of DNA sequences. BMC Bioinform. **2006**, 7, 133.
- Altschul, S.F.; Gish, W.; Miller, W.; Myers, E.W.; Lipman, D.J. Basic local alignment search tool. J. Mol. Biol. **1990**, 215, 403–410.
- Karlin, S.; Altschul, S.F. Applications and statistics for multiple high-scoring segments in molecular sequences. Proc. Natl. Acad. Sci. USA **1993**, 90, 5873–5877.
- Karlin, S.; Altschul, S.F. Methods for assessing the statistical significance of molecular sequence features by using general scoring schemes. Proc. Natl. Acad. Sci. USA **1990**, 87, 2264–2268.
- Ferragina, P.; Manzini, G. Opportunistic Data Structures with Applications. In Proceedings of the 41st Annual Symposium on Foundations of Computer Science, Redondo Beach, CA, USA, 12–14 November 2000; pp. 390–398.
- Li, H.; Durbin, R. Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics **2009**, 25, 1754–1760.
- Langmead, B.; Trapnell, C.; Pop, M.; Salzberg, S.L. Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. **2009**, 10, R25.
- Flajolet, P.; Odlyzko, A. Singularity analysis of generating functions. SIAM J. Discrete Math. **1990**, 3, 216–240.
- Flajolet, P.; Sedgewick, R. An Introduction to the Analysis of Algorithms, 2nd ed.; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1996.
- Flajolet, P.; Sedgewick, R. Analytic Combinatorics, 1st ed.; Cambridge University Press: New York, NY, USA, 2009.
- Lladser, M.E.; Betterton, M.D.; Knight, R. Multiple pattern matching: A Markov chain approach. J. Math. Biol. **2008**, 56, 51–92.
- Fu, J.C.; Koutras, M.V. Distribution Theory of Runs: A Markov Chain Approach. J. Am. Stat. Assoc. **1994**, 89, 1050–1058.
- Regnier, M.; Kirakossian, Z.; Furletova, E.; Roytberg, M. A word counting graph. In London Algorithmics 2008: Theory and Practice (Texts in Algorithmics); Chan, J., Daykin, J.W., Sohel, M., Eds.; Rahman London College Publications: London, UK, 2009; p. 31.
- Nuel, G. Pattern Markov Chains: Optimal Markov Chain Embedding Through Deterministic Finite Automata. J. Appl. Prob. **2008**, 45, 226–243.
- Nuel, G.; Delos, V. Counting Regular Expressions in Degenerated Sequences Through Lazy Markov Chain Embedding. In Forging Connections between Computational Mathematics and Computational Geometry: Papers from the 3rd International Conference on Computational Mathematics and Computational Geometry; Chen, K., Ravindran, A., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 235–246.
- Chaisson, M.J.; Tesler, G. Mapping single molecule sequencing reads using basic local alignment with successive refinement (BLASR): Application and theory. BMC Bioinform. **2012**, 13, 238.
- Joyal, A. Une théorie combinatoire des séries formelles. Adv. Math. **1981**, 42, 1–82.
- Bona, M. Handbook of Enumerative Combinatorics; CRC Press: Boca Raton, FL, USA, 2015.
- Flajolet, P.; Gardy, D.; Thimonier, L. Birthday Paradox, Coupon Collectors, Caching Algorithms and Self-organizing Search. Discrete Appl. Math. **1992**, 39, 207–229.
- Pemantle, R.; Wilson, M.C. Analytic Combinatorics in Several Variables; Cambridge University Press: New York, NY, USA, 2013.
- Bender, E.A. Asymptotic Methods in Enumeration. SIAM Rev. **1974**, 16, 485–515.
- Nakamura, K.; Oshima, T.; Morimoto, T.; Ikeda, S.; Yoshikawa, H.; Shiwa, Y.; Ishikawa, S.; Linak, M.C.; Hirai, A.; Takahashi, H.; et al. Sequence-specific error profile of Illumina sequencers. Nucleic Acids Res. **2011**, 39, e90.
- R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2015.

© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).