Article

Asymptotic Analysis of the kth Subword Complexity

1 Department of Mathematics, Purdue University, West Lafayette, IN 47907, USA
2 Department of Statistics, Purdue University, West Lafayette, IN 47907, USA
* Author to whom correspondence should be addressed.
Current address: Department of Mathematics, 5500 University Parkway, San Bernardino, CA 92407, USA.
Entropy 2020, 22(2), 207; https://doi.org/10.3390/e22020207
Submission received: 25 December 2019 / Revised: 28 January 2020 / Accepted: 4 February 2020 / Published: 12 February 2020
(This article belongs to the Special Issue Information Theory and Language)

Abstract: Patterns within strings enable us to extract vital information regarding a string's randomness. Whether a string is random (showing little to no repetition of patterns) or periodic (showing heavy repetition of patterns) is captured by a value called the kth Subword Complexity of the character string. By definition, the kth Subword Complexity is the number of distinct substrings of length k that appear in a given string. In this paper, we evaluate the expected value and the second factorial moment (followed by a corollary on the second moment) of the kth Subword Complexity for binary strings over a memoryless source. We first take a combinatorial approach to derive a probability generating function for the number of occurrences of patterns in strings of finite length. This enables us to obtain exact expressions for the two moments in terms of the patterns' autocorrelation and correlation polynomials. We then investigate the asymptotic behavior for values of k = Θ(log n). In the proof, we compare the distribution of the kth Subword Complexity of binary strings to the distribution of distinct prefixes of independent strings stored in a trie. The methodology that we use involves complex analysis, analytic poissonization and depoissonization, the Mellin transform, and saddle point analysis.

1. Introduction

Analyzing and understanding occurrences of patterns in a character string is helpful for extracting useful information regarding the nature of the string. We classify strings as low-complexity or high-complexity, according to their level of randomness. For instance, consider the binary string X = 10101010..., which is constructed by repetitions of the pattern w = 10. This string is periodic, and therefore has low randomness. Such periodic strings are classified as low-complexity strings, whereas strings that do not show periodicity are considered to have high complexity. An effective way of measuring a string's randomness is to count all distinct patterns that appear as contiguous subwords in the string. This value is called the Subword Complexity. The name was given by Ehrenfeucht, Lee, and Rozenberg [1], and the notion was initially introduced by Morse and Hedlund in 1938 [2]. The higher the Subword Complexity, the more complex the string is considered to be.
Assessing information about the distribution of the Subword Complexity enables us to better characterize strings, and to determine atypically random or periodic strings that have complexities far from the average complexity [3]. This type of string classification has applications in fields such as data compression [4], genome analysis (see [5,6,7,8,9]), and plagiarism detection [10]. For example, in data compression, a data set is considered compressible if it has low complexity, as it consists of repeated subwords. In computational genomics, Subword Complexity (where subwords of length k are known as k-mers) is used in the detection of repeated sequences and in DNA barcoding [11,12]. k-mers are composed of the A, T, G, and C nucleotides. For instance, the number of distinct 7-mers in the DNA sequence GTAGAGCTGT is four, meaning that there are four distinct substrings of length 7 in the given DNA sequence. Counting k-mers becomes challenging for longer DNA sequences. Our results can be easily extended to the alphabet {A, T, G, C} and directly applied in the theoretical analysis of genomic k-mer distributions under the Bernoulli probabilistic model, particularly when the length n of the sequence approaches infinity.
There are two variations for the definition of the Subword Complexity: the one that counts all distinct subwords of a given string (also known as Complexity Index and Sequence Complexity [13]), and the one that only counts the subwords of the same length, say k, that appear in the string. In our work, we analyze the latter, and we call it the kth Subword Complexity to avoid any confusion.
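For concreteness, the kth Subword Complexity of a given string can be computed directly. A minimal sketch (function name ours) counts the distinct contiguous substrings of length k:

```python
def subword_complexity(s, k):
    """Count the distinct contiguous substrings (subwords) of length k in s."""
    return len({s[i:i + k] for i in range(len(s) - k + 1)})

# The DNA example above: GTAGAGCTGT has exactly four distinct 7-mers.
print(subword_complexity("GTAGAGCTGT", 7))   # -> 4
# The periodic string 10101010 has low complexity: only 2 of the 8 possible 3-subwords occur.
print(subword_complexity("10101010", 3))     # -> 2
```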
Throughout this work, we consider the kth Subword Complexity of a random binary string of length n over a memoryless source, and we denote it by X_{n,k}. We analyze the first and second factorial moments of X_{n,k} (see (1)) for the range k = Θ(log n), as n → ∞. More precisely, we will divide the analysis into three ranges as follows.
i. (1/log q^{−1}) log n < k < (2/(log q^{−1} + log p^{−1})) log n,
ii. (2/(log q^{−1} + log p^{−1})) log n < k < (1/(q log q^{−1} + p log p^{−1})) log n, and
iii. (1/(q log q^{−1} + p log p^{−1})) log n < k < (1/log p^{−1}) log n.
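These boundaries are simple functions of p alone. A small sketch (the value p = 0.6 is illustrative) evaluates them and confirms that they are increasing, so the three ranges are contiguous:

```python
import math

def range_boundaries(p):
    """Return the four boundary values of k / log n delimiting ranges i-iii."""
    q = 1 - p
    b1 = 1 / math.log(1 / q)                                  # left end of range i
    b2 = 2 / (math.log(1 / q) + math.log(1 / p))              # boundary i / ii
    b3 = 1 / (q * math.log(1 / q) + p * math.log(1 / p))      # boundary ii / iii (reciprocal entropy)
    b4 = 1 / math.log(1 / p)                                  # right end of range iii
    return b1, b2, b3, b4

b1, b2, b3, b4 = range_boundaries(0.6)
print(b1, b2, b3, b4)   # roughly 1.091, 1.401, 1.486, 1.958
assert b1 < b2 < b3 < b4
```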
Our approach involves two major steps. First, we choose a suitable model for the asymptotic analysis, and afterwards we provide proofs for the derivation of the asymptotic expansion of the first two factorial moments.

1.1. Part I

This part of the analysis is inspired by the earlier work of Jacquet and Szpankowski [14] on the analysis of suffix trees by comparing them to independent tries. A trie, first introduced by René de la Briandais in 1959 (see [15]), is a search tree that stores n strings according to their prefixes. A suffix tree, introduced by Weiner in 1973 (see [16]), is a trie where the strings are suffixes of a given string. An example of these data structures is given in Figure 1.
A direct asymptotic analysis of the moments is a difficult task, as patterns in a string are not independent from each other. However, we note that each pattern in a string can be regarded as a prefix of a suffix of the string. Therefore, the number of distinct patterns of length k in a string is actually the number of nodes of the suffix tree at level k and lower. It was shown by I. Gheorghiciuc and M. D. Ward [17] that the expected value of the kth Subword Complexity of a Bernoulli string of length n is asymptotically comparable to the expected value of the number of nodes at level k of a trie built over n independent strings generated by a memoryless source.
We extend this analysis to the desired range for k, and we prove that the result holds when k grows logarithmically with n. Additionally, we show that, asymptotically, the second factorial moment of the kth Subword Complexity can also be estimated by admitting the same independent model generated by a memoryless source. The proof of this theorem heavily relies on the characterization of the overlaps of the patterns with themselves and with one another. Autocorrelation and correlation polynomials explicitly describe these overlaps. The analytic properties of these polynomials are key to understanding repetitions of patterns in large Bernoulli strings. This, in conjunction with Cauchy's integral formula (used to compare the generating functions in the two models) and the residue theorem, provides solid verification that the second factorial moment of the Subword Complexity behaves the same as in the independent model.
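The comparison between the two models can also be illustrated empirically. The sketch below (all parameter values illustrative) estimates, by simulation, the mean number of distinct k-subwords of one random string of length n against the mean number of distinct k-prefixes of n independent strings; for k in the logarithmic range the two estimates are close:

```python
import random

def estimate_means(n, k, p, trials, seed=1):
    """Monte Carlo estimates of E[X_{n,k}] (subwords) and E[X-hat_{n,k}] (prefixes)."""
    rng = random.Random(seed)
    bit = lambda: '1' if rng.random() < p else '0'
    sub = pre = 0.0
    for _ in range(trials):
        x = ''.join(bit() for _ in range(n))                   # one Bernoulli string
        sub += len({x[i:i + k] for i in range(n - k + 1)})     # distinct k-subwords
        pre += len({''.join(bit() for _ in range(k))           # distinct k-prefixes
                    for _ in range(n)})                        # of n independent strings
    return sub / trials, pre / trials

s, t = estimate_means(n=200, k=4, p=0.6, trials=300)
print(s, t)   # both close to 2**4 = 16 here, and close to each other
```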
To make this comparison, we derive the generating functions of the first two factorial moments in both settings. In a paper published by F. Bassino, J. Clément, and P. Nicodème in 2012 [18], the authors provide a multivariate probability generating function f ( z , x ) for the number of occurrences of patterns in a finite Bernoulli string. That is, given a pattern w, the coefficient of the term z n x m in f ( z , x ) is the probability in the Bernoulli model that a random string of size n has exactly m occurrences of the pattern w. Following their technique, we derive the exact expression for the generating functions of the first two factorial moments of the kth Subword Complexity. In the independent model, the generating functions are obtained by basic probability concepts.

1.2. Part II

This part of the proof is analogous to the analysis of the profiles of tries [19]. To capture the asymptotic behavior, the expressions for the first two factorial moments in the independent trie are further improved by means of a Poisson process. The poissonized version yields generating functions in the form of harmonic sums for each of the moments. The Mellin transform and the inverse Mellin transform of these harmonic sums establish a connection between the asymptotic expansion and the singularities of the transformed function. This methodology is sufficient when the length k of the patterns is fixed. However, allowing k to grow with n makes the analysis more challenging. This is because, for large k, the dominant term of the poissonized generating function may come from the term involving k, and singularities may not be significant compared to the growth of k. This issue is treated by combining the singularity analysis with a saddle point method [20]. The outcome of the analysis is a precise first-order asymptotic expansion of the moments in the poissonized model. Depoissonization theorems are then applied to obtain the desired result in the Bernoulli model.

2. Results

For a binary string X = X_1 X_2 ... X_n, where the X_i (i = 1, ..., n) are independent and identically distributed random variables, we assume that P(X_i = 1) = p, P(X_i = 0) = q = 1 − p, and p > q. We define the kth Subword Complexity, X_{n,k}, to be the number of distinct substrings of length k that appear in a random string X with the above assumptions. In this work, we obtain the first-order asymptotics for the average and the second factorial moment of X_{n,k}. The analysis is done in the range k = Θ(log n). We rewrite this range as k = a log n, and by performing a saddle point analysis, we will show that
1/log q^{−1} < a < 1/log p^{−1}.  (1)
In the first step, we compare the kth Subword Complexity to an independent model constructed in the following way: we store a set of n strings, independently generated by a memoryless source, in a trie. This means that each string is a sequence of independent and identically distributed Bernoulli random variables over the binary alphabet A = {0, 1}, with P(1) = p and P(0) = q = 1 − p. We denote the number of distinct prefixes of length k in the trie by X̂_{n,k}, and we call it the kth Prefix Complexity. Before proceeding any further, we recall that the factorial moments of a random variable are defined as follows.
Definition 1.
The jth factorial moment of a random variable X is defined as
E[(X)_j] = E[X(X − 1)(X − 2)...(X − j + 1)],
where j = 1, 2, .... We will show that the first and second factorial moments of X_{n,k} are asymptotically comparable to those of X̂_{n,k} when k = Θ(log n). We have the following theorems.
Theorem 1.
For large values of n, and for k = Θ ( log n ) , there exists M > 0 such that
E[X_{n,k}] − E[X̂_{n,k}] = O(n^{−M}).
We also prove a similar result for the second factorial moments of the kth Subword Complexity and the kth Prefix Complexity:
Theorem 2.
For large values of n, and for k = Θ ( log n ) , there exists ϵ > 0 such that
E[(X_{n,k})_2] − E[(X̂_{n,k})_2] = O(n^{−ϵ}).
In the second part of our analysis, we derive the first order asymptotics of the kth Prefix Complexity. The methodology used here is analogous to the analysis of profile of tries [19]. The rate of the asymptotic growth depends on the location of the value a as seen in (1). For instance, for the average kth Subword Complexity, E [ X n , k ] , we have the following observations.
i. For the range I_1: 1/log q^{−1} < a < 2/(log q^{−1} + log p^{−1}), the growth rate is of order O(2^k),
ii. in the range I_2: 2/(log q^{−1} + log p^{−1}) < a < 1/(q log q^{−1} + p log p^{−1}), we observe some oscillations with n, and
iii. in the range I_3: 1/(q log q^{−1} + p log p^{−1}) < a < 1/log p^{−1}, the average has a linear growth O(n).
The above observations will be discussed in depth in the proofs of the following theorems.
Theorem 3.
The average of the kth Prefix Complexity has the following asymptotic expansion
i. For a ∈ I_1,
E[X̂_{n,k}] = 2^k − Φ_1((1 + log p) log_{p/q} n) · (n^ν/√(log n)) · (1 + O(1/log n)),
where ν = −r_0 + a log(p^{−r_0} + q^{−r_0}) for the saddle point r_0, and
Φ_1(x) = (((p/q)^{r_0/2} + (p/q)^{−r_0/2}) / (√(2π) log(p/q))) Σ_{j ∈ Z} Γ(r_0 + i t_j) e^{2πijx},  with t_j = 2πj/log(p/q),
is a bounded periodic function.
ii. 
For a ∈ I_2,
E[X̂_{n,k}] = Φ_1((1 + log p) log_{p/q} n) · (n^ν/√(log n)) · (1 + O(1/log n)).
iii. 
For a ∈ I_3,
E[X̂_{n,k}] = n + O(n^{ν_0})
for some ν_0 < 1.
Theorem 4.
The second factorial moment of the kth Prefix Complexity has the following asymptotic expansion.
i. 
For a ∈ I_1,
E[(X̂_{n,k})_2] = ( 2^k − Φ_1((1 + log p) log_{p/q} n) · (n^ν/√(log n)) · (1 + O(1/log n)) )^2.
ii. 
For a ∈ I_2,
E[(X̂_{n,k})_2] = Φ_1^2((1 + log p) log_{p/q} n) · (n^{2ν}/log n) · (1 + O(1/log n)).
iii. 
For a ∈ I_3,
E[(X̂_{n,k})_2] = n^2 + O(n^{2ν_0}).
The periodic function Φ_1(x) in Theorems 3 and 4 is shown in Figure 2.
The results in Theorem 4 carry over to the second moment of the kth Subword Complexity, as the analysis can be easily extended from the second factorial moment to the second moment. The variance of the kth Prefix Complexity, however, as seen in Figure 3, does not show the same asymptotic behavior as the variance of the kth Subword Complexity.

3. Proofs and Methods

3.1. Groundwork

We first introduce some terminology and a few lemmas regarding overlaps of patterns and their numbers of occurrences in texts. Some of the notation we use in this work is borrowed from [18] and [21].
Definition 2.
For a binary word w = w_1...w_k of length k, the autocorrelation set S_w of the word w is defined in the following way:
S_w = { w_{i+1}...w_k | w_1...w_i = w_{k−i+1}...w_k }.
The autocorrelation index set is
P(w) = { i | w_1...w_i = w_{k−i+1}...w_k },
and the autocorrelation polynomial is
S_w(z) = Σ_{i ∈ P(w)} P(w_{i+1}...w_k) z^{k−i}.
Definition 3.
For two distinct binary words w = w_1...w_k and w′ = w′_1...w′_k, the correlation set S_{w,w′} of the words w and w′ is
S_{w,w′} = { w′_{i+1}...w′_k | w′_1...w′_i = w_{k−i+1}...w_k }.
The correlation index set is
P(w,w′) = { i | w′_1...w′_i = w_{k−i+1}...w_k },
and the correlation polynomial is
S_{w,w′}(z) = Σ_{i ∈ P(w,w′)} P(w′_{i+1}...w′_k) z^{k−i}.
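These definitions translate directly into code. A short sketch (helper names ours; the unbiased source p = 1/2 is chosen only for illustration) evaluates autocorrelation and correlation polynomials exactly:

```python
from fractions import Fraction

p = Fraction(1, 2)      # illustrative source bias; q = 1 - p
q = 1 - p

def word_prob(w):
    """P(w) under the memoryless source."""
    out = Fraction(1)
    for ch in w:
        out *= p if ch == '1' else q
    return out

def autocorrelation(w, z):
    """S_w(z): sum over overlaps i with w_1...w_i = w_{k-i+1}...w_k."""
    k = len(w)
    return sum(word_prob(w[i:]) * z ** (k - i)
               for i in range(1, k + 1) if w[:i] == w[k - i:])

def correlation(w, wp, z):
    """S_{w,w'}(z): sum over overlaps i with w'_1...w'_i = w_{k-i+1}...w_k."""
    k = len(w)
    return sum(word_prob(wp[i:]) * z ** (k - i)
               for i in range(1, k + 1) if wp[:i] == w[k - i:])

# w = 101 overlaps itself at i = 1 and i = 3, so S_w(z) = 1 + P(01) z^2.
print(autocorrelation("101", 1))        # -> 5/4
# For w' = w, the correlation reduces to the autocorrelation.
assert correlation("101", "101", 2) == autocorrelation("101", 2)
```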
The following two lemmas present the probability generating functions for the number of occurrences of a single pattern and of a pair of distinct patterns, respectively, in a random text of length n. For a detailed derivation of such generating functions, refer to [18].
Lemma 1.
The occurrence probability generating function for a single pattern w in a binary text over a memoryless source is given by F_w(z, x − 1), where
F_w(z, t) = 1 / ( 1 − A(z) − (t P(w) z^k) / (1 − t(S_w(z) − 1)) ).  (10)
The coefficient [z^n x^m] F_w(z, x − 1) is the probability that a random binary string of length n has m occurrences of the pattern w.
Lemma 2.
The occurrence PGF for two distinct patterns w and w′ of length k in a Bernoulli random text is given by F_{w,w′}(z, x_1 − 1, x_2 − 1), where
F_{w,w′}(z, t_1, t_2) = 1 / ( 1 − A(z) − M(z, t_1, t_2) ),  (11)
and
M(z, t_1, t_2) = ( P(w) z^k t_1 , P(w′) z^k t_2 ) [ I − ( (S_w(z) − 1) t_1 , S_{w,w′}(z) t_2 ; S_{w′,w}(z) t_1 , (S_{w′}(z) − 1) t_2 ) ]^{−1} ( 1 ; 1 ),
where the 2 × 2 matrix is written row by row and (1 ; 1) is a column vector. The coefficient [z^n x_1^{m_1} x_2^{m_2}] F_{w,w′}(z, x_1 − 1, x_2 − 1) is the probability that there are m_1 occurrences of w and m_2 occurrences of w′ in a random string of length n.
The above results will be used to find the generating functions for the first two factorial moments of the kth Subword Complexity in the following section.

3.2. Derivation of Generating Functions

Lemma 3.
For the generating functions H_k(z) = Σ_{n≥0} E[X_{n,k}] z^n and G_k(z) = Σ_{n≥0} E[(X_{n,k})_2] z^n, we have
i.
H_k(z) = Σ_{w ∈ A^k} ( 1/(1 − z) − S_w(z)/D_w(z) ),
where D_w(z) = P(w) z^k + (1 − z) S_w(z), and
ii.
G_k(z) = Σ_{w,w′ ∈ A^k, w ≠ w′} ( 1/(1 − z) − S_w(z)/D_w(z) − S_{w′}(z)/D_{w′}(z) + ( S_w(z) S_{w′}(z) − S_{w,w′}(z) S_{w′,w}(z) ) / D_{w,w′}(z) ),
where
D_{w,w′}(z) = (1 − z)( S_w(z) S_{w′}(z) − S_{w,w′}(z) S_{w′,w}(z) ) + z^k ( P(w)(S_{w′}(z) − S_{w,w′}(z)) + P(w′)(S_w(z) − S_{w′,w}(z)) ).
Proof. 
i. We define
X_{n,k}(w) = 1 if w appears at least once in the string X, and 0 otherwise.
This yields
E[X_{n,k}(w)] = P(X_{n,k}(w) = 1) = 1 − P(X_{n,k}(w) = 0) = 1 − [z^n x^0] F_w(z, x − 1).
We observe that [z^n x^0] F_w(z, x − 1) = [z^n] F_w(z, −1). By defining f_w(z) = F_w(z, −1) and from (10), we obtain
f_w(z) = S_w(z) / ( P(w) z^k + (1 − z) S_w(z) ).
Having the above function, we derive the following result:
H(z) = Σ_{n≥0} E[X_{n,k}] z^n = Σ_{n≥0} Σ_{w ∈ A^k} (1 − [z^n] f_w(z)) z^n = Σ_{w ∈ A^k} ( 1/(1 − z) − f_w(z) ) = Σ_{w ∈ A^k} ( 1/(1 − z) − S_w(z)/D_w(z) ).
ii. For this part, we first note that
E[(X_{n,k})_2] = E[X_{n,k}^2] − E[X_{n,k}]
= E[( X_{n,k}(w) + ... + X_{n,k}(w^{(r)}) )^2] − E[X_{n,k}(w) + ... + X_{n,k}(w^{(r)})]
= Σ_{w ∈ A^k} E[(X_{n,k}(w))^2] + Σ_{w,w′ ∈ A^k, w ≠ w′} E[X_{n,k}(w) X_{n,k}(w′)] − Σ_{w ∈ A^k} E[X_{n,k}(w)],
where r = 2^k. Due to the properties of indicator random variables, (X_{n,k}(w))^2 = X_{n,k}(w), so the first and last sums cancel and the second factorial moment has only one term:
E[(X_{n,k})_2] = Σ_{w,w′ ∈ A^k, w ≠ w′} E[X_{n,k}(w) X_{n,k}(w′)].
We proceed by noting that the product is itself an indicator variable:
X_{n,k}(w) X_{n,k}(w′) = 1 if X_{n,k}(w) = X_{n,k}(w′) = 1, and 0 otherwise.
This gives
E[X_{n,k}(w) X_{n,k}(w′)] = P( X_{n,k}(w) = 1, X_{n,k}(w′) = 1 )
= 1 − P( X_{n,k}(w) = 0 or X_{n,k}(w′) = 0 )
= 1 − ( P(X_{n,k}(w) = 0) + P(X_{n,k}(w′) = 0) − P(X_{n,k}(w) = 0, X_{n,k}(w′) = 0) ).
Finally, we are able to express E[(X_{n,k})_2] in the following form:
E[(X_{n,k})_2] = Σ_{w,w′ ∈ A^k, w ≠ w′} ( 1 − [z^n] f_w(z) − [z^n] f_{w′}(z) + [z^n] f_{w,w′}(z) ),
where f_{w,w′}(z) = F_{w,w′}(z, −1, −1) and [z^n] F_{w,w′}(z, −1, −1) = [z^n x_1^0 x_2^0] F_{w,w′}(z, x_1 − 1, x_2 − 1). By (11), we have
f_{w,w′}(z) = ( S_w(z) S_{w′}(z) − S_{w,w′}(z) S_{w′,w}(z) ) / D_{w,w′}(z).
Having the above expression, we finally obtain
G_k(z) = Σ_{n≥0} E[(X_{n,k})_2] z^n
= Σ_{w,w′ ∈ A^k, w ≠ w′} Σ_{n≥0} ( 1 − [z^n] f_w(z) − [z^n] f_{w′}(z) + [z^n] f_{w,w′}(z) ) z^n
= Σ_{w,w′ ∈ A^k, w ≠ w′} ( 1/(1 − z) − f_w(z) − f_{w′}(z) + f_{w,w′}(z) )
= Σ_{w,w′ ∈ A^k, w ≠ w′} ( 1/(1 − z) − S_w(z)/D_w(z) − S_{w′}(z)/D_{w′}(z) + ( S_w(z) S_{w′}(z) − S_{w,w′}(z) S_{w′,w}(z) ) / D_{w,w′}(z) ). □
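Lemma 3(i) can be sanity-checked for small n by exact computation: extract [z^n] of S_w(z)/D_w(z) with a power-series recurrence and compare the resulting mean with a brute-force enumeration over all binary strings. A sketch under illustrative assumptions (p = 3/5; helper names ours):

```python
from fractions import Fraction
from itertools import product

p = Fraction(3, 5)      # illustrative bias; q = 1 - p
q = 1 - p

def word_prob(w):
    out = Fraction(1)
    for ch in w:
        out *= p if ch == '1' else q
    return out

def autocorr_coeffs(w):
    """Coefficients {degree: weight} of S_w(z)."""
    k = len(w)
    S = {}
    for i in range(1, k + 1):
        if w[:i] == w[k - i:]:
            S[k - i] = S.get(k - i, Fraction(0)) + word_prob(w[i:])
    return S

def series_coeff(num, den, n):
    """[z^n] of num(z)/den(z) via c_m = num_m - sum_j den_j c_{m-j} (den_0 = 1)."""
    c = []
    for m in range(n + 1):
        val = num.get(m, Fraction(0))
        for j in range(1, m + 1):
            val -= den.get(j, Fraction(0)) * c[m - j]
        c.append(val)
    return c[n]

def mean_via_lemma(n, k):
    """E[X_{n,k}] = sum over w of (1 - [z^n] S_w(z)/D_w(z))."""
    total = Fraction(0)
    for bits in product('01', repeat=k):
        w = ''.join(bits)
        S = autocorr_coeffs(w)
        D = dict(S)                      # D_w(z) = P(w) z^k + (1 - z) S_w(z)
        for d, cf in S.items():
            D[d + 1] = D.get(d + 1, Fraction(0)) - cf
        D[k] = D.get(k, Fraction(0)) + word_prob(w)
        total += 1 - series_coeff(S, D, n)
    return total

def mean_brute_force(n, k):
    """E[X_{n,k}] by enumerating all 2^n strings of length n."""
    total = Fraction(0)
    for bits in product('01', repeat=n):
        x = ''.join(bits)
        total += word_prob(x) * len({x[i:i + k] for i in range(n - k + 1)})
    return total

assert mean_via_lemma(5, 2) == mean_brute_force(5, 2)
```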
In the following lemma, we present the generating functions for the first two factorial moments for the kth Prefix Complexity in the independent model.
Lemma 4.
For Ĥ_k(z) = Σ_{n≥0} E[X̂_{n,k}] z^n and Ĝ_k(z) = Σ_{n≥0} E[(X̂_{n,k})_2] z^n, which are the generating functions for E[X̂_{n,k}] and E[(X̂_{n,k})_2] respectively, we have
i.
Ĥ_k(z) = Σ_{w ∈ A^k} ( 1/(1 − z) − 1/(1 − (1 − P(w))z) ).
ii.
Ĝ_k(z) = Σ_{w,w′ ∈ A^k, w ≠ w′} ( 1/(1 − z) − 1/(1 − (1 − P(w))z) − 1/(1 − (1 − P(w′))z) + 1/(1 − (1 − P(w) − P(w′))z) ).
Proof. 
i. We define the indicator variable X̂_{n,k}(w) as follows:
X̂_{n,k}(w) = 1 if w is a prefix of at least one of the n strings, and 0 otherwise.
For each X̂_{n,k}(w), we have
E[X̂_{n,k}(w)] = P(X̂_{n,k}(w) = 1) = 1 − P(X̂_{n,k}(w) = 0) = 1 − (1 − P(w))^n.
Summing over all words w of length k determines the generating function Ĥ(z):
Ĥ(z) = Σ_{n≥0} E[X̂_{n,k}] z^n = Σ_{w ∈ A^k} ( 1/(1 − z) − 1/(1 − (1 − P(w))z) ).
ii. Similarly to (18) and (20), we obtain
E[(X̂_{n,k})_2] = Σ_{w,w′ ∈ A^k, w ≠ w′} E[X̂_{n,k}(w) X̂_{n,k}(w′)] = Σ_{w,w′ ∈ A^k, w ≠ w′} ( 1 − (1 − P(w))^n − (1 − P(w′))^n + (1 − P(w) − P(w′))^n ).
Subsequently, we obtain the generating function below:
Ĝ(z) = Σ_{n≥0} E[(X̂_{n,k})_2] z^n
= Σ_{w,w′ ∈ A^k, w ≠ w′} Σ_{n≥0} ( 1 − (1 − P(w))^n − (1 − P(w′))^n + (1 − P(w) − P(w′))^n ) z^n
= Σ_{w,w′ ∈ A^k, w ≠ w′} ( 1/(1 − z) − 1/(1 − (1 − P(w))z) − 1/(1 − (1 − P(w′))z) + 1/(1 − (1 − P(w) − P(w′))z) ). □
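Both closed forms in this proof are finite-n identities, so they can be verified exactly for tiny parameters by enumerating all n-tuples of length-k words (p = 3/5, n = 3, and k = 2 are illustrative choices; helper names ours):

```python
from fractions import Fraction
from itertools import product

p = Fraction(3, 5)      # illustrative bias; q = 1 - p
q = 1 - p

def word_prob(w):
    out = Fraction(1)
    for ch in w:
        out *= p if ch == '1' else q
    return out

def closed_forms(n, k):
    """E[X-hat_{n,k}] and E[(X-hat_{n,k})_2] from the formulas in the proof."""
    words = [''.join(t) for t in product('01', repeat=k)]
    mean = sum(1 - (1 - word_prob(w)) ** n for w in words)
    fact2 = sum(1 - (1 - word_prob(w)) ** n - (1 - word_prob(v)) ** n
                + (1 - word_prob(w) - word_prob(v)) ** n
                for w in words for v in words if w != v)
    return mean, fact2

def brute_force(n, k):
    """Same moments by enumerating all n-tuples of i.i.d. length-k prefixes."""
    words = [''.join(t) for t in product('01', repeat=k)]
    mean = fact2 = Fraction(0)
    for tup in product(words, repeat=n):
        pr = Fraction(1)
        for w in tup:
            pr *= word_prob(w)
        d = len(set(tup))            # number of distinct prefixes
        mean += pr * d
        fact2 += pr * d * (d - 1)
    return mean, fact2

assert closed_forms(3, 2) == brute_force(3, 2)
```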
Our first goal is to compare the coefficients of the generating functions in the two models. The coefficients are expected to be asymptotically equivalent in the desired range for k. To compare the coefficients, we need more information on the analytic properties of these generating functions. This will be discussed in Section 3.3.

3.3. Analytic Properties of the Generating Functions

Here, we turn our attention to the smallest singularities of the two generating functions given in Lemma 3. It has been shown by Jacquet and Szpankowski [21] that D_w(z) has exactly one root in the disk |z| ≤ ρ. Following the notation in [21], we denote the root of D_w(z) within the disk |z| ≤ ρ by A_w, and by bootstrapping we obtain
A_w = 1 + (1/S_w(1)) P(w) + O(P(w)^2).  (29)
We also denote the derivative of D_w(z) at the root A_w by B_w, and we obtain
B_w = −S_w(1) + ( k − 2 S′_w(1)/S_w(1) ) P(w) + O(P(w)^2).  (30)
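The bootstrap expansion for A_w can be checked numerically. The sketch below (illustrative choices p = 0.3 and w = 111) locates the root of D_w by bisection and compares it with the first-order term 1 + P(w)/S_w(1):

```python
# Numerical check of the bootstrap expansion for A_w (illustrative: p = 0.3, w = 111).
p = 0.3
Pw = p ** 3                                   # P(111)
S = lambda z: 1 + p * z + p ** 2 * z ** 2     # S_w(z) for w = 111 (overlaps i = 1, 2, 3)
D = lambda z: Pw * z ** 3 + (1 - z) * S(z)    # D_w(z) = P(w) z^k + (1 - z) S_w(z)

# D(1) = P(w) > 0 and D(1.1) < 0, so bisection brackets the root A_w.
lo, hi = 1.0, 1.1
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if D(mid) > 0 else (lo, mid)
root = (lo + hi) / 2

approx = 1 + Pw / S(1)                        # first-order bootstrap term
print(root, approx)                           # agree to roughly O(P(w)^2)
```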
In this paper, we will prove a similar result for the polynomial D_{w,w′}(z) through the following lemmas.
Lemma 5.
If w and w′ are two distinct binary words of length k and δ = √p, there exists ρ > 1 such that ρδ < 1 and
Σ_{w ∈ A^k} [[ |S_{w,w′}(ρ)| ≤ (ρδ)^k θ ]] P(w) ≥ 1 − θ δ^k,
where θ = (1 − p)^{−1}.
Proof. 
If the minimal degree of S_{w,w′}(z) is greater than k/2, then
|S_{w,w′}(ρ)| ≤ (ρδ)^k θ
for θ = (1 − p)^{−1}. For a fixed w′, we have
Σ_{w ∈ A^k} [[ S_{w,w′}(z) has minimal degree ≤ k/2 ]] P(w)
= Σ_{i=1}^{⌊k/2⌋} Σ_{w ∈ A^k} [[ S_{w,w′}(z) has minimal degree = i ]] P(w)
= Σ_{i=1}^{⌊k/2⌋} Σ_{w_1...w_i ∈ A^i} P(w_1...w_i) Σ_{w_{i+1}...w_k ∈ A^{k−i}} [[ S_{w,w′}(z) has minimal degree = i ]] P(w_{i+1}...w_k)
≤ Σ_{i=1}^{⌊k/2⌋} Σ_{w_1...w_i ∈ A^i} P(w_1...w_i) p^{k−i}
= Σ_{i=1}^{⌊k/2⌋} p^{k−i} ≤ p^{⌈k/2⌉} / (1 − p).
This leads to the following:
Σ_{w ∈ A^k} [[ every term of S_{w,w′}(z) is of degree > k/2 ]] P(w) = 1 − Σ_{w ∈ A^k} [[ S_{w,w′}(z) has a term of degree ≤ k/2 ]] P(w) ≥ 1 − p^{⌈k/2⌉}/(1 − p) ≥ 1 − θ δ^k. □
Lemma 6.
There exist K > 0 and ρ > 1 such that pρ < 1, and such that, for every pair of distinct words w and w′ of length k ≥ K and for |z| ≤ ρ, we have
|S_w(z) S_{w′}(z) − S_{w,w′}(z) S_{w′,w}(z)| > 0.
In other words, S_w(z) S_{w′}(z) − S_{w,w′}(z) S_{w′,w}(z) does not have any roots in |z| ≤ ρ.
Proof. 
There are three cases to consider.
Case i. When either S_w(z) = 1 or S_{w′}(z) = 1, every term of S_{w,w′}(z) S_{w′,w}(z) has degree k or larger, and therefore
|S_{w,w′}(z) S_{w′,w}(z)| ≤ k (pρ)^k / (1 − pρ).
Since k (pρ)^k / (1 − pρ) → 0 as k → ∞, there exists K_1 > 0 such that, for k > K_1,
|S_w(z) S_{w′}(z) − S_{w,w′}(z) S_{w′,w}(z)| ≥ |S_w(z) S_{w′}(z)| − |S_{w,w′}(z) S_{w′,w}(z)| ≥ 1 − k (pρ)^k / (1 − pρ) > 0.
Case ii. If the minimal degree of S_w(z) − 1 or S_{w′}(z) − 1 is greater than k/2, then every term of S_{w,w′}(z) S_{w′,w}(z) has degree at least k/2. We also note that, by Lemma 9, |S_w(z) S_{w′}(z)| > 0. Therefore, there exists K_2 > 0 such that
|S_w(z) S_{w′}(z) − S_{w,w′}(z) S_{w′,w}(z)| ≥ |S_w(z) S_{w′}(z)| − |S_{w,w′}(z) S_{w′,w}(z)| > 0 for k > K_2.
Case iii. The only remaining case is where the minimal degrees of S_w(z) − 1 and S_{w′}(z) − 1 are both at most k/2. If w = w_1...w_k, then w′ = u w_1...w_{k−m}, where u is a word of length m ≥ 1. Then we have
S_{w′,w}(z) = P(w_{k−m+1}...w_k) z^m S_w(z) + O((pz)^{k−m}).
There exists K_3 > 0 such that
|S_{w′,w}(z)| ≤ (pρ)^m |S_w(z)| + O((pρ)^{k−m}) = (pρ)^m |S_w(z)| + O((pρ)^k) < |S_w(z)| for k > K_3.
Similarly, we can show that there exists K_3′ such that |S_{w,w′}(z)| < |S_{w′}(z)| for k > K_3′. Therefore, for k > max{K_3, K_3′} we have
|S_w(z) S_{w′}(z) − S_{w,w′}(z) S_{w′,w}(z)| ≥ |S_w(z)||S_{w′}(z)| − |S_{w,w′}(z)||S_{w′,w}(z)| > |S_w(z)||S_{w′}(z)| − |S_w(z)||S_{w′}(z)| = 0.
We complete the proof by setting K = max{K_1, K_2, K_3, K_3′}. □
Lemma 7.
There exist K_{w,w′} > 0 and ρ > 1 such that pρ < 1, and for every pair of distinct words w and w′ of length k ≥ K_{w,w′}, the polynomial
D_{w,w′}(z) = (1 − z)( S_w(z) S_{w′}(z) − S_{w,w′}(z) S_{w′,w}(z) ) + z^k ( P(w)(S_{w′}(z) − S_{w,w′}(z)) + P(w′)(S_w(z) − S_{w′,w}(z)) )
has exactly one root in the disk |z| ≤ ρ.
Proof. 
First note that
|S_{w′}(z) − S_{w′,w}(z)| ≤ |S_{w′}(z)| + |S_{w′,w}(z)| ≤ 1/(1 − pρ) + pρ/(1 − pρ) = (1 + pρ)/(1 − pρ).
This yields
| z^k ( P(w)(S_{w′}(z) − S_{w,w′}(z)) + P(w′)(S_w(z) − S_{w′,w}(z)) ) | ≤ (pρ)^k ( |S_{w′}(z) − S_{w,w′}(z)| + |S_w(z) − S_{w′,w}(z)| ) ≤ (pρ)^k · 2(1 + pρ)/(1 − pρ).
There exist K′ and K″ large enough such that, for k > K′, we have
|S_w(z) S_{w′}(z) − S_{w,w′}(z) S_{w′,w}(z)| ≥ β > 0,
and for k > K″,
(pρ)^k · 2(1 + pρ)/(1 − pρ) < (ρ − 1)β.
If we define K_{w,w′} = max{K′, K″}, then we have, for k ≥ K_{w,w′},
(pρ)^k · 2(1 + pρ)/(1 − pρ) < (ρ − 1)β ≤ |(1 − z)( S_w(z) S_{w′}(z) − S_{w,w′}(z) S_{w′,w}(z) )|.
By Rouché's theorem, since (1 − z)( S_w(z) S_{w′}(z) − S_{w,w′}(z) S_{w′,w}(z) ) has only one root in |z| ≤ ρ, the polynomial D_{w,w′}(z) also has exactly one root in |z| ≤ ρ. □
We denote the root of D_{w,w′}(z) within the disk |z| ≤ ρ by α_{w,w′}, and by bootstrapping we obtain
α_{w,w′} = 1 + ( (S_{w′}(1) − S_{w,w′}(1)) / (S_w(1) S_{w′}(1) − S_{w,w′}(1) S_{w′,w}(1)) ) P(w) + ( (S_w(1) − S_{w′,w}(1)) / (S_w(1) S_{w′}(1) − S_{w,w′}(1) S_{w′,w}(1)) ) P(w′) + O(p^{2k}).
We also denote the derivative of D_{w,w′}(z) at the root α_{w,w′} by β_{w,w′}, and we obtain
β_{w,w′} = S_{w,w′}(1) S_{w′,w}(1) − S_w(1) S_{w′}(1) + O(k p^k).
We will refer to these expressions in the residue analysis that we present in the next section.

3.4. Asymptotic Difference

We begin this section with the following lemmas on the autocorrelation polynomials.
Lemma 8
(Jacquet and Szpankowski, 1994). For most words w, the autocorrelation polynomial S_w(z) is very close to 1, with high probability. More precisely, if w is a binary word of length k and δ = √p, there exists ρ > 1 such that ρδ < 1 and
Σ_{w ∈ A^k} [[ |S_w(ρ) − 1| ≤ (ρδ)^k θ ]] P(w) ≥ 1 − θ δ^k,
where θ = (1 − p)^{−1}. We use the Iverson notation
[[A]] = 1 if A holds, and 0 otherwise.
Lemma 9
(Jacquet and Szpankowski, 1994). There exist K > 0 and ρ > 1 such that pρ < 1, and for every binary word w of length k ≥ K and |z| ≤ ρ, we have
|S_w(z)| > 0.
In other words, S_w(z) does not have any roots in |z| ≤ ρ.
Lemma 10.
With high probability, for most distinct pairs {w, w′}, the correlation polynomial S_{w,w′}(z) is very close to 0. More precisely, if w and w′ are two distinct binary words of length k and δ = √p, there exists ρ > 1 such that ρδ < 1 and
Σ_{w ∈ A^k} [[ |S_{w,w′}(ρ)| ≤ (ρδ)^k θ ]] P(w) ≥ 1 − θ δ^k.
We will use the above results to prove that the expected values in the Bernoulli model and the model built over a trie are asymptotically equivalent. We now prove Theorem 1 below.
Proof of Theorem 1.
From Lemmas 3 and 4, we have
H(z) = Σ_{w ∈ A^k} ( 1/(1 − z) − S_w(z)/D_w(z) )
and
Ĥ(z) = Σ_{w ∈ A^k} ( 1/(1 − z) − 1/(1 − (1 − P(w))z) ).
Subtracting the two generating functions, we obtain
H(z) − Ĥ(z) = Σ_{w ∈ A^k} ( 1/(1 − (1 − P(w))z) − S_w(z)/D_w(z) ).
We define
Δ_w(z) = 1/(1 − (1 − P(w))z) − S_w(z)/D_w(z).
Therefore, by the Cauchy integral formula (see [20]), we have
[z^n] Δ_w(z) = (1/2πi) ∮ Δ_w(z) dz/z^{n+1} = Res_{z=0} [ Δ_w(z)/z^{n+1} ],
where the path of integration is a small circle about zero with counterclockwise orientation. We note that the integrand above has poles at z = 0, z = 1/(1 − P(w)), and z = A_w (refer to expression (29)). Therefore, we define
I_w(ρ) := (1/2πi) ∮_{|z|=ρ} Δ_w(z) dz/z^{n+1},
where the circle of radius ρ contains all of the above poles. By the residue theorem, we have
I_w(ρ) = Res_{z=0}[ Δ_w(z)/z^{n+1} ] + Res_{z=A_w}[ Δ_w(z)/z^{n+1} ] + Res_{z=1/(1−P(w))}[ Δ_w(z)/z^{n+1} ],
where the first residue is [z^n] Δ_w(z). We observe that
Res_{z=A_w}[ Δ_w(z)/z^{n+1} ] = −S_w(A_w)/(B_w A_w^{n+1}), where B_w is as in (30), and
Res_{z=1/(1−P(w))}[ Δ_w(z)/z^{n+1} ] = −(1 − P(w))^n.
Then we obtain
[z^n] Δ_w(z) = I_w(ρ) + S_w(A_w)/(B_w A_w^{n+1}) + (1 − P(w))^n,
and finally, we have
[z^n]( H(z) − Ĥ(z) ) = Σ_{w ∈ A^k} [z^n] Δ_w(z) = Σ_{w ∈ A^k} I_{n,w}(ρ) + Σ_{w ∈ A^k} ( S_w(A_w)/(B_w A_w^{n+1}) + (1 − P(w))^n ).
First, we show that, for sufficiently large n, the sum Σ_{w ∈ A^k} ( S_w(A_w)/(B_w A_w^{n+1}) + (1 − P(w))^n ) approaches zero. □
Lemma 11.
For large enough n, and for k = Θ ( log n ) , there exists M > 0 such that
Σ_{w ∈ A^k} ( S_w(A_w)/(B_w A_w^{n+1}) + (1 − P(w))^n ) = O(n^{−M}).
Proof. 
We let
r_w(z) = (1 − P(w))^z + ( S_w(A_w)/B_w ) A_w^{−z}.
The Mellin transform of the above function is
r_w*(s) = Γ(s) log^{−s}( 1/(1 − P(w)) ) + ( S_w(A_w)/B_w ) Γ(s) log^{−s}(A_w).
We define
C_w = S_w(A_w)/B_w = −S_w(A_w)/S_w(1) + O(k P(w)),
which is negative and uniformly bounded for all w. Also, for a fixed s, we have
log^{−s}( 1/(1 − P(w)) ) = ( P(w) + O(P(w)^2) )^{−s} = P(w)^{−s} ( 1 + O(P(w)) )^{−s} = P(w)^{−s} ( 1 + O(P(w) s) ) = P(w)^{−s} ( 1 + O(P(w)) ),
log^{−s}(A_w) = ( P(w)/S_w(1) + O(P(w)^2) )^{−s} = ( P(w)/S_w(1) )^{−s} ( 1 + O(P(w)) ),
and therefore, we obtain
r_w*(s) = Γ(s) P(w)^{−s} ( 1 − S_w(1)^s ) O(1).
From this expression, and noticing that the function has a removable singularity at s = 0, we can see that the Mellin transform r_w*(s) exists in the region ℜ(s) > 1. We still need to investigate the Mellin strip for the sum Σ_{w ∈ A^k} r_w*(s); in other words, we need to examine whether summing r_w*(s) over all words of length k (where k grows with n) has any effect on the analyticity of the function. We observe that
Σ_{w ∈ A^k} |r_w*(s)| = Σ_{w ∈ A^k} | Γ(s) P(w)^{−s} ( 1 − S_w(1)^s ) O(1) | ≤ |Γ(s)| Σ_{w ∈ A^k} P(w)^{−ℜ(s)} | 1 − S_w(1)^{ℜ(s)} | O(1) ≤ (q^{−k})^{ℜ(s)+1} |Γ(s)| Σ_{w ∈ A^k} P(w) | 1 − S_w(1)^{ℜ(s)} | O(1).
Lemma 8 allows us to split the last sum between the words for which S_w(1) ≤ 1 + O(δ^k) and the words that have S_w(1) > 1 + O(δ^k).
Such a split yields the following:
Σ_{w ∈ A^k} |r_w*(s)| = (q^{−k})^{ℜ(s)+1} |Γ(s)| O(δ^k).
This shows that Σ_{w ∈ A^k} r_w*(s) is bounded above for ℜ(s) > 1 and, therefore, it is analytic there. This argument holds for k = Θ(log n) as well, as (q^{−k})^{ℜ(s)+1} would still be bounded above by a constant M_{s,k} that depends on s and k.
We would like to approximate Σ_{w ∈ A^k} r_w(z) as z → ∞. By the inverse Mellin transform, we have
Σ_{w ∈ A^k} r_w(z) = (1/2πi) ∫_{c−i∞}^{c+i∞} Σ_{w ∈ A^k} r_w*(s) z^{−s} ds.
We choose c ∈ (1, M) for a fixed M > 0. Then, by the direct mapping theorem [22], we obtain
Σ_{w ∈ A^k} r_w(z) = O(z^{−M}),
and subsequently, we get
Σ_{w ∈ A^k} ( S_w(A_w)/(B_w A_w^{n+1}) + (1 − P(w))^n ) = O(n^{−M}). □
We next prove the asymptotic smallness of I_{n,w}(ρ) in (54).
Lemma 12.
Let
I_{n,w}(ρ) = (1/2πi) ∮_{|z|=ρ} ( 1/(1 − (1 − P(w))z) − S_w(z)/D_w(z) ) dz/z^{n+1}.
For large n and k = Θ(log n), we have
Σ_{w ∈ A^k} I_{n,w}(ρ) = O( ρ^{−n} (ρδ)^k ).
Proof. 
We observe that
| I n w ( ρ ) | 1 2 π | z | = ρ P ( w ) z z k 1 S w ( z ) D w ( z ) ( 1 ( 1 P ( w ) ) z ) 1 z n + 1 d z .
For | z | = ρ , we show that the denominator in (71) is bounded away from zero.
| D w ( z ) | = | ( 1 − z ) S w ( z ) + P ( w ) z k | ≥ | 1 − z | | S w ( z ) | − P ( w ) | z k | ≥ ( ρ − 1 ) α − ( p ρ ) k , where α > 0 by Lemma 9. This lower bound is positive, since we assume k is large enough that ( p ρ ) k < α ( ρ − 1 ) .
To find a lower bound for | 1 ( 1 P ( w ) ) z | , we can choose K w large enough such that
| 1 ( 1 P ( w ) ) z | 1 ( 1 P ( w ) ) | z | | 1 ρ ( 1 p K w ) | > 0 .
We now move on to finding an upper bound for the numerator in (71), for | z | = ρ .
| z k 1 S w ( z ) | | S w ( z ) 1 | + | 1 z k 1 | ( S w ( ρ ) 1 ) + ( 1 + ρ k 1 ) = ( S w ( ρ ) 1 ) + O ( ρ k ) .
Therefore, there exists a constant μ > 0 such that
| I n w | μ ρ P ( w ) ( S w ( ρ ) 1 ) + O ( ρ k ) 1 ρ n + 1 = O ( ρ n ) P ( w ) ( S w ( ρ ) 1 ) + P ( w ) O ( ρ k ) .
Summing over all patterns w, and applying Lemma 8, we obtain
w A k | I n w ( ρ ) | = O ( ρ n ) w A k P ( w ) ( S w ( ρ ) 1 ) + O ( ρ n + k ) w A k P ( w ) = O ( ρ n ) θ ( ρ δ ) k + p ρ 1 p ρ θ δ k + O ( ρ n + k ) = O ( ρ n ( ρ δ ) k ) ,
which approaches zero as n → ∞ and k = Θ ( log n ) . This completes the proof of Theorem 1. □
Similar to Theorem 1, we show that the second factorial moments of the kth Subword Complexity and the kth Prefix Complexity have the same first-order asymptotic behavior. We are now ready to state the proof of Theorem 2.
Proof of Theorem 2.
As discussed in Lemmas 3 and 4, the generating functions representing E [ ( X n , k ) 2 ] and E [ ( X ^ n , k ) 2 ] respectively, are
G ( z ) = w , w A k w w 1 1 z S w ( z ) D w ( z ) S w ( z ) D w ( z ) + S w ( z ) S w ( z ) S w , w ( z ) S w , w ( z ) D w , w ( z ) ,
and
G ^ ( z ) = w , w A k w w 1 1 z 1 1 ( 1 P ( w ) ) z 1 1 ( 1 P ( w ) ) z + w , w A k w w 1 1 ( 1 P ( w ) P ( w ) ) z .
Note that
G ( z ) G ^ ( z ) = w A k w w w A k 1 1 ( 1 P ( w ) ) z S w ( z ) D w ( z )
+ w A k w w w A k 1 1 ( 1 P ( w ) ) z S w ( z ) D w ( z )
+ w , w A k w w 1 1 ( 1 P ( w ) P ( w ) ) z S w ( z ) S w ( z ) S w , w ( z ) S w , w ( z ) D w , w ( z )
In Theorem 1, we proved that for every M > 0 (which does not depend on n or k), we have
H ( z ) H ^ ( z ) = w A k 1 1 ( 1 P ( w ) ) z S w ( z ) D w ( z ) = O ( n M ) .
Therefore, both (77) and (78) are of order ( 2 k 1 ) O ( n M ) = O ( n M + a log 2 ) for k = a log n . Thus, to show the asymptotic smallness, it is enough to choose M = a log 2 + ϵ , where ϵ is a small positive value. Now, it only remains to show (79) is asymptotically negligible as well. We define
Δ w , w ( z ) = 1 1 ( 1 P ( w ) P ( w ) ) z S w ( z ) S w ( z ) S w , w ( z ) S w , w ( z ) D w , w ( z ) .
Next, we extract the coefficient of z n
[ z n ] Δ w , w ( z ) = 1 2 π i Δ w , w ( z ) d z z n + 1 ,
where the path of integration is a circle about the origin with counterclockwise orientation. We define
I n w , w ( ρ ) = 1 2 π i | z | = ρ Δ w , w ( z ) d z z n + 1 ,
The above integrand has poles at z = 0 , z = α w , w (as in (46)), and z = 1 1 P ( w ) P ( w ) . We have chosen ρ such that the poles are all inside the circle | z | = ρ . It follows that
I n w , w ( ρ ) = Res z = 0 Δ w , w ( z ) z n + 1 + Res z = α w , w Δ w , w ( z ) z n + 1 + Res z = 1 1 P ( w ) P ( w ) Δ w ( z ) z n + 1 ,
and the residues give us the following.
Res z = 1 1 P ( w ) P ( w ) 1 1 ( 1 P ( w ) P ( w ) ) z ) z n + 1 = ( 1 P ( w ) P ( w ) ) n + 1 ,
and
Res z = α w , w S w ( z ) S w ( z ) S w , w ( z ) S w , w ( z ) D w , w ( z ) = S w ( α w , w ) S w ( α w , w ) S w , w ( α w , w ) S w , w ( α w , w ) β w , w α w , w n + 1 ,
where β w , w is as in (47). Therefore, we get
w , w A k w w [ z n ] Δ w , w ( z ) = w , w A k w w I n w , w ( ρ ) w , w A k w w ( S w ( α w , w ) S w ( α w , w ) S w , w ( α w , w ) S w , w ( α w , w ) β w , w α w , w n + 1 + ( 1 P ( w ) P ( w ) ) n + 1 ) .
We now show that the above two terms are asymptotically small. □
Lemma 13.
There exists ϵ > 0 where the sum
w , w A k w w S w ( α w , w ) S w ( α w , w ) S w , w ( α w , w ) S w , w ( α w , w ) β w , w α w , w n + 1 + ( 1 P ( w ) P ( w ) ) n + 1
is of order O( n ϵ ).
Proof. 
We define
r w , w ( z ) = S w ( α w , w ) S w ( α w , w ) S w , w ( α w , w ) S w , w ( α w , w ) β w , w α w , w z + ( 1 P ( w ) P ( w ) ) z .
The Mellin transform of the above function is
r w , w * ( s ) = Γ ( s ) log s 1 1 P ( w ) p ( w ) + C w , w Γ ( s ) log s ( α w , w ) ,
where C w , w = S w ( α w , w ) S w ( α w , w ) S w , w ( α w , w ) S w , w ( α w , w ) β w , w . We note that C w , w is negative and uniformly bounded from above for all w , w ∈ A k . For a fixed s, we also have
ln s 1 1 P ( w ) P ( w ) = ln s 1 + P ( w ) + P ( w ) + O p 2 k = P ( w ) + P ( w ) + O p 2 k s = ( P ( w ) + P ( w ) ) s 1 + O p k s = ( P ( w ) + P ( w ) ) s 1 + O p k ,
and
ln s ( α w , w ) = ( S w ( 1 ) S w , w ( 1 ) S w ( 1 ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) P ( w ) + S w ( 1 ) S w , w ( 1 ) S w ( 1 ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) P ( w ) + O ( p 2 k ) ) s = ( S w ( 1 ) S w , w ( 1 ) S w ( 1 ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) P ( w ) + S w ( 1 ) S w , w ( 1 ) S w ( 1 ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) P ( w ) ) s 1 + O ( p k ) .
Therefore, we have
r w , w * ( s ) = Γ ( s ) P ( w ) + P ( w ) s ( 1 + O ( p k ) ) Γ ( s ) ( S w ( 1 ) S w , w ( 1 ) S w ( 1 ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) P ( w ) + S w ( 1 ) S w , w ( 1 ) S w ( 1 ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) P ( w ) ) s 1 + O ( p k ) O ( 1 ) .
To find the Mellin strip for the sum w A k r w , w * ( s ) , we first note that
( x + y ) a x a + y a , for any real x , y > 0 and a 1 .
Since ( s ) < 1 , we have
P ( w ) + P ( w ) ( s ) P ( w ) ( s ) + P ( w ) ( s ) ,
and
S w ( 1 ) S w , w ( 1 ) S w ( 1 ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) P ( w ) + S w ( 1 ) S w , w ( 1 ) S w ( 1 ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) P ( w ) ( s ) S w ( 1 ) S w , w ( 1 ) S w ( 1 ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) P ( w ) ( s ) + S w ( 1 ) S w , w ( 1 ) S w ( 1 ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) P ( w ) ( s ) .
Therefore, we get
w , w A k w w | r w , w * ( s ) | | Γ ( s ) | O ( 1 ) ( w , w A k w w P ( w ) ( s ) 1 S w ( 1 ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) S w ( 1 ) S w , w ( 1 ) ( s ) + w , w A k w w P ( w ) ( s ) 1 S w ( 1 ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) S w ( 1 ) S w , w ( 1 ) ( s ) ) ( q k ) ( s ) 1 | Γ ( s ) | O ( 1 )
( w A k w w w A k P ( w ) 1 ( S w ( 1 ) ) ( s ) 1 S w , w ( 1 ) S w ( 1 ) ( s )
+ w A k w w w A k P ( w ) S w , w ( 1 ) ( s ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) ( s )
+ w A k w w w A k P ( w ) 1 ( S w ( 1 ) ) ( s ) 1 S w , w ( 1 ) S w ( 1 ) ( s )
+ w A k w w w A k P ( w ) S w , w ( 1 ) ( s ) S w ( 1 ) S w , w ( 1 ) S w , w ( 1 ) ( s ) ) .
By Lemma 10, with high probability, a randomly selected w has the property S w , w ( 1 ) = O ( δ k ) , and thus
1 S w , w ( 1 ) S w ( 1 ) ( s ) = 1 + O ( δ k ) .
With that and by Lemma 8, for most words w,
1 S w ( 1 ) ( s ) ( 1 + O ( δ k ) ) = O ( δ k ) .
Therefore, both sums (91) and (93) are of the form ( 2 k 1 ) O ( δ k ) . The sums (92) and (94) are also of order ( 2 k 1 ) O ( δ k ) by Lemma 10. Combining all these terms, we obtain
w , w A k w w | r w , w * ( s ) | ( 2 k 1 ) ( q k ) ( s ) 1 | Γ ( s ) | O ( δ k ) O ( 1 ) .
By the inverse Mellin transform, for k = a log n , M = a log 2 + ϵ and c ( 1 , M ) , we have
w , w A k w w r w , w ( z ) = 1 2 π i c i c + i w , w A k w w r w , w * ( s ) z s d s = O ( z M ) O ( 2 k ) = O ( z ϵ ) .
In the following lemma we show that the first term in (85) is asymptotically small.
Lemma 14.
Recall that
I n w , w ( ρ ) = 1 2 π i | z | = ρ Δ w , w ( z ) d z z n + 1 .
We have
w , w A k w w I n w , w ( ρ ) = O ρ n + 2 k δ k .
Proof. 
First note that
Δ w , w ( z ) = 1 1 ( 1 P ( w ) P ( w ) ) z S w ( z ) S w ( z ) S w , w ( z ) S w , w ( z ) D w , w ( z ) = z P ( w ) S w , w ( z ) S w , w ( z ) S w ( z ) S w ( z ) + z k 1 S w ( z ) z k 1 S w , w ( z ) 1 ( 1 P ( w ) P ( w ) ) z D w , w ( z ) + z P ( w ) S w , w ( z ) S w , w ( z ) S w ( z ) S w ( z ) + z k 1 S w ( z ) z k 1 S w , w ( z ) 1 ( 1 P ( w ) P ( w ) ) z D w , w ( z ) .
We saw in (73) that | 1 ( 1 P ( w ) ) z | c 2 , and therefore, it follows that
| 1 ( 1 P ( w ) P ( w ) ) z | c 1
For | z | = ρ , | D w , w ( z ) | is also bounded below, as follows.
| D w , w ( z ) | = | ( 1 z ) ( S w ( z ) S w ( z ) S w , w ( z ) S w , w ( z ) ) + z k P ( w ) ( S w ( z ) S w , w ( z ) ) + P ( w ) ( S w ( z ) S w , w ( z ) ) | | ( 1 z ) ( S w ( z ) S w ( z ) S w , w ( z ) S w , w ( z ) ) | z k P ( w ) ( S w ( z ) S w , w ( z ) ) + P ( w ) ( S w ( z ) S w , w ( z ) ) ( ρ 1 ) β ( p ρ ) k 2 ( 1 + p ρ ) 1 p ρ ,
which is bounded away from zero by the assumption of Lemma 7. Additionally, we show that the numerator in (98) is bounded above, as follows
| S w , w ( z ) S w , w ( z ) S w ( z ) S w ( z ) + z k 1 S w ( z ) z k 1 S w , w ( z ) | | S w ( z ) ( z k 1 S w ( z ) ) | + | S w , w ( z ) ( S w , w ( z ) z k 1 ) | S w ( ρ ) ( S w ( ρ ) 1 ) + O ( ρ k ) + S w , w ( ρ ) S w , w ( ρ ) + O ( ρ k ) .
This yields
w , w A k w w | I n w , w | O ( ρ n ) w A k w w S w ( ρ ) w A k P ( w ) ( S w ( ρ ) 1 ) + O ( ρ k ) + O ( ρ n ) w A k w w w A k P ( w ) S w , w ( ρ ) S w , w ( ρ ) + O ( ρ k ) .
By (75), the first term above is of order ( 2 k 1 ) O ( ρ n + k ) and by Lemma 10 and an analysis similar to (75), the second term yields ( 2 k 1 ) O ( ρ n + k ) as well. Finally, we have
w , w A k w w | I n w , w | O ( ρ n + 2 k δ k ) .
This goes to zero asymptotically for k = Θ ( log n ) . □
This lemma completes our proof of Theorem 2.

3.5. Asymptotic Analysis of the kth Prefix Complexity

We finally proceed to analyzing the asymptotic moments of the kth Prefix Complexity. The results obtained hold true for the moments of the kth Subword Complexity. Our methodology involves poissonization, saddle point analysis (the complex version of Laplace’s method [23]), and depoissonization.
Lemma 15
(Jacquet and Szpankowski, 1998). Let G ˜ ( z ) be the Poisson transform of a sequence g n . If G ˜ ( z ) is analytic in a linear cone S θ with θ < π / 2 , and if the following two conditions hold:
(I) For z S θ and real values B, r > 0 , ν
| z | > r | G ˜ ( z ) | B | z ν | Ψ ( | z | ) ,
where Ψ ( x ) is such that, for fixed t, lim x Ψ ( t x ) Ψ ( x ) = 1 ;
(II) For z S θ and A , α < 1
| z | > r | G ˜ ( z ) e z | A e α | z | .
Then, for every non-negative integer n, we have
g n = G ˜ ( n ) + O ( n ν 1 Ψ ( n ) ) .
On the Expected Value: To transform the sequence of interest, ( E [ X ^ n , k ] ) n 0 , into a Poisson model, we recall that in (25) we found
E [ X ^ n , k ] = w A k 1 1 P ( w ) n .
Thus, the Poisson transform is
E ˜ k ( z ) = n = 0 E [ X ^ n , k ] z n n ! e z = n = 0 w A k 1 ( 1 P ( w ) ) n z n n ! e z = w A k 1 e z P ( w ) .
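As a side check (ours, not part of the original derivation), the exact expectation and its Poisson transform above can be compared numerically for a memoryless source with P ( w ) = p^{# of 1s in w} q^{# of 0s in w}; the parameter values below are assumed for illustration.

```python
import itertools, math

p, q = 0.6, 0.4                      # assumed source probabilities, p + q = 1
n, k = 500, 3

def prob(w):
    # probability of the binary word w under the memoryless source
    return p ** w.count('1') * q ** w.count('0')

words = [''.join(t) for t in itertools.product('01', repeat=k)]

# exact expected kth Prefix Complexity: sum over words of 1 - (1 - P(w))^n
exact = sum(1 - (1 - prob(w)) ** n for w in words)

# Poisson transform evaluated at z = n: sum over words of 1 - e^{-P(w) n}
poissonized = sum(1 - math.exp(-prob(w) * n) for w in words)

print(exact, poissonized)
```

For these values both sums are already very close to 2 k = 8, and they agree up to the depoissonization error discussed below.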
To asymptotically evaluate this harmonic sum, we turn our attention to the Mellin Transform once more. The Mellin transform of E ˜ k ( z ) is
E ˜ k * ( s ) = Γ ( s ) w A k P ( w ) s = Γ ( s ) ( p s + q s ) k ,
which has the fundamental strip s 1 , 0 . For c ( 1 , 0 ) , the inverse Mellin integral is the following
E ˜ k ( z ) = 1 2 π i c i c + i E ˜ k * ( s ) · z s d s = 1 2 π i c i c + i z s Γ ( s ) ( p s + q s ) k d s = 1 2 π i c i c + i Γ ( s ) e k ( s log z k log ( p s + q s ) ) d s = 1 2 π i c i c + i Γ ( s ) e k h ( s ) d s ,
where we define h ( s ) = s / a − log ( p^{−s} + q^{−s} ) , with k = a log z . We emphasize that the above integral involves k, and k grows with n. We evaluate the integral through saddle point analysis, and therefore we choose the line of integration to cross the saddle point r 0 . To find the saddle point r 0 , we set h ′ ( r 0 ) = 0 , and we obtain
p / q r 0 = a log p 1 1 1 a log q 1 ,
and therefore,
r 0 = 1 log p / q log a log q 1 1 1 a log p 1 ,
where 1 log q 1 < a < 1 log p 1 .
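As a numerical illustration (ours, not part of the proof), reading h ( s ) = s / a − log ( p^{−s} + q^{−s} ) from (107) with the minus signs restored, the saddle point r 0 can be located as the root of h ′ by bisection, since log ( p^{−s} + q^{−s} ) is convex and h ′ is therefore decreasing. The parameter values below are assumed, and the closed form is our reading of (109).

```python
import math

p, q, a = 0.7, 0.3, 1.5            # assumed; note 1/log(1/q) < a < 1/log(1/p)

def hprime(s):
    # derivative of h(s) = s/a - log(p^{-s} + q^{-s})
    ps, qs = p ** (-s), q ** (-s)
    return 1.0 / a - (ps * math.log(1 / p) + qs * math.log(1 / q)) / (ps + qs)

# h' is decreasing, so bisection on a sign change locates the saddle point
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if hprime(mid) > 0:
        lo = mid
    else:
        hi = mid
r0 = (lo + hi) / 2

# closed form for the saddle point, as we read (109)
r0_closed = math.log((1 - a * math.log(1 / p)) / (a * math.log(1 / q) - 1)) / math.log(p / q)
print(r0, r0_closed)
```

For these values both approaches give the same r 0 , confirming that the stated closed form annihilates h ′ .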
By (108) and the fact that p / q i t j = 1 for t j = 2 π j log p / q and j Z , we can see that there are actually infinitely many saddle points z j of the form r 0 + i t j on the line of integration.
We remark that the location of r 0 depends on the value of a: r 0 → ∞ as a → 1 log q 1 , and r 0 → − ∞ as a → 1 log p 1 . We divide the analysis into three parts, for the three ranges r 0 ∈ ( 0 , ∞ ) , r 0 ∈ ( − 1 , 0 ) , and r 0 ∈ ( − ∞ , − 1 ) .
In the first range, which corresponds to
1 log q 1 < a < 2 log q 1 + log p 1 ,
we perform a residue analysis, taking into account the dominant pole at s = 0 . In the second range, we have
2 log q 1 + log p 1 < a < 1 q log q 1 + p log p 1 ,
and we get the asymptotic result through the saddle point method. The last range corresponds to
1 q log q 1 + p log p 1 < a < 1 log p 1 ,
and we approach it with a combination of residue analysis at s = − 1 and the saddle point method. We now proceed by stating the proof of Theorem 3.
Proof of Theorem 3.
We begin by proving part (ii), which requires a saddle point analysis. We rewrite the inverse Mellin transform with the line of integration at ( s ) = r 0 as
E ˜ k ( z ) = 1 2 π z ( r 0 + i t ) Γ ( r 0 + i t ) ( p ( r 0 + i t ) + q ( r 0 + i t ) ) k d t = 1 2 π Γ ( r 0 + i t ) e k ( ( r 0 + i t ) log z k log ( p ( r 0 + i t ) + q ( r 0 + i t ) ) ) d t .
Step one: Saddle points’ contribution to the integral estimation
First, we show that the saddle points with | t j | > log n do not have a significant asymptotic contribution to the integral. To show this, we let
T k ( z ) = | t | > log n z r 0 i t Γ ( r 0 + i t ) ( p r 0 i t + q r 0 i t ) k d t .
Since | Γ ( r 0 + i t ) | = O ( | t | r 0 1 2 e π | t | 2 ) as | t | ± , we observe that
T k ( z ) = O z r 0 ( p r 0 + q r 0 ) k log n t r 0 / 2 1 / 2 e π t / 2 d t = O z r 0 ( p r 0 + q r 0 ) k ( log n ) r 0 / 4 1 / 4 log n e π t / 2 d t = O z r 0 ( p r 0 + q r 0 ) k ( log n ) r 0 / 4 1 / 4 e π log n / 2 = O ( log n ) r 0 / 4 1 / 4 e π log n / 2 ,
which is very small for large n. Note that for t ( log n , ) , t r 0 / 2 1 / 2 is decreasing, and bounded above by ( log n ) r 0 / 4 1 / 4 .
Step two: Partitioning the integral
There are now only finitely many saddle points to work with. We split the integration range into sub-intervals, each of which contains exactly one saddle point. This way, each integral has a contour traversing a single saddle point, and we will be able to estimate the dominant contribution in each integral from a small neighborhood around the saddle point. Assuming that j * is the largest j for which 2 π j log p / q log n , we split the integral E ˜ k ( z ) as follows
E ˜ k ( z ) = 1 2 π | j | < j * | t t j | π log p / q z r 0 + i t Γ ( r 0 + i t ) ( p r 0 i t + q r 0 i t ) k d t 1 2 π π log p / q | t j * | < log n Γ ( r + i t ) z r 0 + i t ( p r 0 i t + q r 0 i t ) k d t .
By the same argument as in (115), the second term in (116) is also asymptotically negligible. Therefore, we are only left with
E ˜ k ( z ) = | j | < j * S j ( z ) ,
where S j ( z ) = 1 2 π | t t j | π log p / q z r 0 + i t Γ ( r 0 + i t ) ( p r 0 i t + q r 0 i t ) k d t ) .
Step three: Splitting the saddle contour
For each integral S j , we write the expansion of h ( t ) about t j , as follows
h ( t ) = h ( t j ) + 1 2 h ( t j ) ( t t j ) 2 + O ( ( t t j ) 3 ) .
The main contribution to the integral estimate should come from a small integration path that reduces k h ( t ) to its quadratic expansion about t j . In other words, we want the integration path to be such that
k ( t t j ) 2 , and k ( t t j ) 3 0 .
The above conditions are true when | t t j | k 1 / 2 and | t t j | k 1 / 3 . Thus, we choose the integration path to be | t t j | k 2 / 5 . Therefore, we have
S j ( z ) = 1 2 π | t t j | k 2 / 5 z r 0 + i t Γ ( r 0 + i t ) ( p r 0 i t + q r 0 i t ) k d t 1 2 π k 2 / 5 < | t t j | < π log p / q z r 0 + i t Γ ( r 0 + i t ) ( p r 0 i t + q r 0 i t ) k d t .
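The window width k^{−2/5} is one convenient choice strictly between k^{−1/2} and k^{−1/3}: with w = k^{−2/5} we get k w 2 = k^{1/5} → ∞ while k w 3 = k^{−1/5} → 0, which is exactly what the two conditions above require. A tiny numeric illustration (ours):

```python
# window width w = k^(-2/5): k*w^2 = k^(1/5) grows, while k*w^3 = k^(-1/5) vanishes
for k in (10, 10 ** 3, 10 ** 5, 10 ** 7):
    w = k ** (-0.4)
    print(k, k * w ** 2, k * w ** 3)
```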
Saddle Tails Pruning.
We show that the integral is small for k 2 / 5 < | t t j | < π log p / q . We define
S j ( 1 ) ( z ) = 1 2 π k 2 / 5 < | t t j | < π log p / q z r 0 + i t Γ ( r 0 + i t ) ( p r 0 i t + q r 0 i t ) k d t .
Note that for | t t j | π log p / q , we have
| p r 0 i t + q r 0 i t | = ( p r 0 + q r 0 ) 1 2 p r 0 q r 0 ( p r 0 + q r 0 ) 2 ( 1 cos t log p / q ) ( p r 0 + q r 0 ) 1 p r 0 q r 0 ( p r 0 + q r 0 ) 2 ( 1 cos t t j ) log p / q since 1 x 1 x 2 for x [ 0 , 1 ] ( p r 0 + q r 0 ) 1 2 p r 0 q r 0 π 2 ( p r 0 + q r 0 ) 2 ( ( t t j ) log p / q ) 2 since 1 cos x 2 x 2 π 2 for | x | π ( p r 0 + q r 0 ) e γ ( t t j ) 2 ,
where γ = 2 p r 0 q r 0 log 2 p / q π 2 ( p r 0 + q r 0 ) 2 . Thus,
S j ( 1 ) ( z ) = O z r 0 | Γ ( r 0 + i t ) | k 2 / 5 < | t t j | < π log p / q | p r 0 i t + q r 0 i t | d t = O z r 0 ( p r 0 + q r 0 ) k k 2 / 5 e γ k u 2 d u = O z r 0 ( p r 0 + q r 0 ) k k 3 / 5 e γ k 1 / 5 , since erf ( x ) = O e x 2 / x .
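The exponential bound used above can be spot-checked numerically around the central saddle point t j = 0: for | t − t j | ≤ π / log ( p / q ), the modulus | p^{−r0−it} + q^{−r0−it} | stays below ( p^{−r0} + q^{−r0} ) e^{−γ ( t − t j )²}. The values of p and r 0 below are assumed for illustration; this is a sanity check, not part of the proof.

```python
import cmath, math

p, q, r0 = 0.7, 0.3, -0.6                    # assumed illustration values
L = math.log(p / q)
S = p ** (-r0) + q ** (-r0)
# the constant gamma from the display above
gamma = 2 * p ** (-r0) * q ** (-r0) * L * L / (math.pi ** 2 * S ** 2)

ok = True
for i in range(-400, 401):
    t = (math.pi / L) * i / 400.0            # grid over |t| <= pi / log(p/q)
    lhs = abs(p ** (-r0) * cmath.exp(-1j * t * math.log(p))
              + q ** (-r0) * cmath.exp(-1j * t * math.log(q)))
    rhs = S * math.exp(-gamma * t * t)
    ok = ok and lhs <= rhs + 1e-12
print(ok)
```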
Central Approximation.
Over the main path, the integrals are of the form
S j ( 0 ) ( z ) = 1 2 π | t t j | k 2 / 5 Γ ( r 0 + i t ) z r 0 + i t ( p r 0 i t + q r 0 i t ) k d t = 1 2 π | t t j | k 2 / 5 Γ ( r 0 + i t ) e k h ( t ) d t .
We have
h ( t j ) = log 2 p / q ( ( p / q ) r 0 / 2 + ( p / q ) r 0 / 2 ) 2 ,
and
p r 0 i t j + q r 0 i t j = p i t j ( p r 0 + q r 0 ) .
Therefore, by Laplace’s theorem (refer to [22]) we obtain
S j ( 0 ) ( z ) = 1 2 π k h ( t j ) Γ ( r 0 + i t j ) e k h ( t j ) ( 1 + O ( k 1 / 2 ) ) = ( p / q ) r 0 / 2 + ( p / q ) r 0 / 2 2 π log p / q × z r 0 ( p r 0 + q r 0 ) k Γ ( r 0 + i t j ) z i t j p i k t j k 1 / 2 1 + O 1 k .
We finally sum over all j ( | j | < j * ) , and we get
E ˜ k ( z ) = ( p / q ) r 0 / 2 + ( p / q ) r 0 / 2 2 π log p / q × | j | < j * z r 0 ( p r 0 + q r 0 ) k Γ ( r 0 + i t j ) z i t j p i k t j k 1 / 2 1 + O 1 k .
We can rewrite E ˜ k ( z ) as
E ˜ k ( z ) = Φ 1 ( ( 1 + a log p ) log p / q n ) z ν log n 1 + O 1 log n ,
where ν = r 0 + a log ( p r 0 + q r 0 ) , and
Φ 1 ( x ) = ( p / q ) r 0 / 2 + ( p / q ) r 0 / 2 2 a π log p / q | j | < j * Γ ( r 0 + i t j ) e 2 π i j x .
For part (i), we move the line of integration to r 0 ( 0 , ) . Note that in this range, we must consider the contribution of the pole at s = 0 . We have
E ˜ k ( z ) = Res s = 0 E ˜ k * ( s ) z s + r 0 i r 0 + i E ˜ k * ( z ) z s d s .
Computing the residue at s = 0 , and following the same analysis as in part i for the above integral, we arrive at
E ˜ k ( z ) = 2 k Φ 1 ( ( 1 + a log p ) log p / q n ) z ν log n 1 + O 1 log n .
For part (iii) of Theorem 3, we shift the line of integration to c 0 ( 2 , 1 ) ; then we have
E ˜ k ( z ) = Res s = 1 E ˜ k * ( s ) z s + c i c + i E ˜ k * ( z ) z s d s = z + O z c 0 ( p c 0 + q c 0 ) k = z a log 2 + O ( z ν 0 ) ,
where ν 0 = c 0 + a log ( p c 0 + q c 0 ) < 1 .
Step four: Asymptotic depoissonization
To show that both conditions of Lemma 15 hold for E ˜ k ( z ) , we extend the real values z to complex values z = n e i θ , where | θ | < π / 2 . To prove (103), we note that
| e i θ ( r 0 + i t ) Γ ( r 0 + i t ) | = O ( | t | r 0 1 / 2 e t θ π | t | / 2 ) ,
and therefore
E ˜ k ( n e i θ ) = 1 2 π e i θ ( r 0 + i t ) n r 0 i t Γ ( r 0 + i t ) ( p r 0 i t + q r 0 i t ) k d t
is absolutely convergent for | θ | < π / 2 . The same saddle point analysis applies here and we obtain
| E ˜ k ( z ) | B | z ν | log n ,
where B = | Φ 1 ( ( 1 + a log p ) log p / q n ) | , and ν is as in (128). Condition (103) is therefore satisfied. To prove condition (104), we see that for a fixed k,
| E ˜ k ( z ) e z | w A k | e z e z ( 1 P ( w ) ) | 2 k + 1 e | z | cos ( θ ) .
Therefore, we have
E [ X ^ n , k ] = E ˜ ( n ) + O n ν 1 log n .
This completes the proof of Theorem 3. □
On the Second Factorial Moment: We poissonize the sequence ( E [ ( X ^ n , k ) 2 ] ) n 0 as well. By the analysis in (27),
E [ ( X ^ n , k ) 2 ] = w , w A k w w 1 ( 1 P ( w ) ) n ( 1 P ( w ) ) n + ( 1 P ( w ) P ( w ) ) n ,
which gives the following poissonized form
G ˜ ( z ) = n 0 E [ ( X ^ n , k ) 2 ] z n n ! e z = w , w A k w w 1 e P ( w ) z e P ( w ) z + e ( P ( w ) + P ( w ) ) z = w , w A k w w 1 e P ( w ) z 1 e P ( w ) z = w A k 1 e P ( w ) z 2 w A k 1 e P ( w ) z 2 = ( E ˜ k ( z ) ) 2 w A k 1 e P ( w ) z 2 = ( E ˜ k ( z ) ) 2 w A k 1 2 e P ( w ) z + e 2 P ( w ) z .
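The rearrangement in (138) is just the elementary identity ( Σ_w x_w )² = Σ_{w ≠ w′} x_w x_{w′} + Σ_w x_w², applied to x_w = 1 − e^{−P(w)z}. A quick numerical sketch (the parameter values are assumed):

```python
import itertools, math

p, q, k, z = 0.6, 0.4, 3, 40.0               # assumed parameters for the check

words = [''.join(t) for t in itertools.product('01', repeat=k)]
x = {w: 1 - math.exp(-(p ** w.count('1') * q ** w.count('0')) * z) for w in words}

# double sum over distinct pairs w != w'
pair_sum = sum(x[w] * x[v] for w in words for v in words if w != v)

# square of the single sum minus the diagonal, as in (138)
closed = sum(x.values()) ** 2 - sum(t * t for t in x.values())

print(pair_sum, closed)
```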
We show that in all ranges of a the leftover sum in (138) has a lower order contribution to G ˜ k ( z ) compared to ( E ˜ k ( z ) ) 2 . We define
L ˜ k ( z ) = w A k 1 2 e P ( w ) z + e 2 P ( w ) z .
In the first range for k, we take the Mellin transform of L ˜ k ( z ) , which is
L ˜ k * ( s ) = 2 Γ ( s ) w A k P ( w ) s + Γ ( s ) w A k ( 2 P ( w ) ) s = 2 Γ ( s ) ( p s + q s ) k + Γ ( s ) 2 s ( p s + q s ) k = Γ ( s ) ( p s + q s ) k ( 2 s 1 1 ) ,
and we note that the fundamental strip for this Mellin transform is 2 , 0 as well. The inverse Mellin transform for c ( 2 , 0 ) is
L ˜ k ( z ) = 1 2 π i c i c + i L ˜ k * ( s ) z s d s = 1 π i c i c + i Γ ( s ) ( p s + q s ) k ( 2 s 1 1 ) z s d s
We note that this range of r 0 corresponds to
2 log q 1 + log p 1 < a < p 2 + q 2 q 2 log q 1 + p 2 log p 1 .
The integrand in (141) is quite similar to the one seen in (107). The only difference is the extra term 2 s 1 1 . However, we notice that 2 s 1 1 is analytic and bounded. Thus, we obtain the same saddle points with the real part as in (109) and the same imaginary parts of the form 2 π i j log p / q , j ∈ Z . Thus, the same saddle point analysis for the integral in (107) applies to L ˜ k ( z ) as well. We avoid repeating similar steps and skip to the central approximation, where by Laplace’s theorem (ref. [22]), we get
L ˜ k ( z ) = ( p / q ) r 0 / 2 + ( p / q ) r 0 / 2 2 π log p / q × | j | < j * z r 0 ( p r 0 + q r 0 ) k ( 2 r 0 1 i t j 1 ) × Γ ( r 0 + i t j ) z i t j p i k t j k 1 / 2 1 + O 1 k ,
which can be represented as
L ˜ k ( z ) = Φ 2 ( ( 1 + a log p ) log p / q n ) z ν log n 1 + O 1 log n ,
where
Φ 2 ( x ) = ( p / q ) r 0 / 2 + ( p / q ) r 0 / 2 2 a π log p / q | j | < j * ( 2 r 0 1 i t j 1 ) Γ ( r 0 + i t j ) e 2 π i j x .
This shows that L ˜ k ( z ) = O z ν log n , when
2 log q 1 + log p 1 < a < p 2 + q 2 q 2 log q 1 + p 2 log p 1 .
Subsequently, for 1 log q 1 < a < 2 log q 1 + log p 1 , we get
L ˜ k ( z ) = 2 k Φ 2 ( ( 1 + a log p ) log p / q n ) z ν log n 1 + O 1 log n ,
and for p 2 + q 2 q 2 log q 1 + p 2 log p 1 < a < 1 log p 1 , we get
L ˜ k ( z ) = O ( n 2 ) .
It is not difficult to see that for each range of a as stated above, L ˜ k ( z ) has a lower-order contribution to the asymptotic expansion of G ˜ k ( z ) compared to ( E ˜ k ( z ) ) 2 . This leads us to Theorem 4, which is proved below.
Proof of Theorem 4. 
It only remains to show that the two depoissonization conditions hold. For condition (103) in Lemma 15, from (135) we have
| G ˜ k ( z ) | B 2 | z 2 ν | log n ,
and for condition (104), we have, for fixed k,
| G ˜ k ( z ) e z | w , w A k w w e z e ( 1 P ( w ) ) z e ( 1 P ( w ) ) z + e ( 1 ( P ( w ) + P ( w ) ) ) z 4 k e | z | cos θ .
Therefore both depoissonization conditions are satisfied and the desired result follows. □
Corollary. A Remark on the Second Moment and the Variance
For the second moment we have
E ( X ^ n , k ) 2 = w , w A k w w E X ^ n , k ( w ) X ^ n , k ( w ) + w A k E [ X ^ n , k ( w ) ] = w , w A k w w 1 ( 1 P ( w ) ) n ( 1 P ( w ) ) n + ( 1 P ( w ) P ( w ) ) n + w A k 1 1 P ( w ) n .
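For very small n and k, the displayed expression for the second moment can be confirmed by brute-force enumeration over all assignments of length-k prefixes to the n independent strings. A sketch (the value of p is assumed):

```python
import itertools

p, q, n, k = 0.6, 0.4, 3, 2                  # small enough for exact enumeration

words = [''.join(t) for t in itertools.product('01', repeat=k)]
P = {w: p ** w.count('1') * q ** w.count('0') for w in words}

# brute force: average X^2 over every assignment of prefixes to the n strings
second_moment = 0.0
for prefixes in itertools.product(words, repeat=n):
    weight = 1.0
    for w in prefixes:
        weight *= P[w]
    x = len(set(prefixes))                   # X-hat: number of distinct prefixes
    second_moment += weight * x * x

# closed form: pair-appearance probabilities plus the first moment
formula = sum(1 - (1 - P[w]) ** n - (1 - P[v]) ** n + (1 - P[w] - P[v]) ** n
              for w in words for v in words if w != v)
formula += sum(1 - (1 - P[w]) ** n for w in words)

print(second_moment, formula)
```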
Therefore, by (105) and (138) the Poisson transform of the second moment, which we denote by G ˜ k ( 2 ) ( z ) is
G ˜ k ( 2 ) ( z ) = ( E ˜ k ( z ) ) 2 + E ˜ k ( z ) w A k 1 2 e P ( w ) z + e 2 P ( w ) z ,
which results in the same first-order asymptotics as the second factorial moment. It is also not difficult to extend the proof of Theorem 2 to show that the second moments of the two models are asymptotically the same. For the variance, we have
Var [ X ^ n , k ] = E ( X ^ n , k ) 2 E X ^ n , k 2 = w , w A k w w 1 ( 1 P ( w ) ) n ( 1 P ( w ) ) n + ( 1 P ( w ) P ( w ) ) n + w A k 1 1 P ( w ) n w , w A k w w 1 ( 1 P ( w ) ) n ( 1 P ( w ) ) n + ( 1 P ( w ) P ( w ) ) n w A k 1 1 P ( w ) n 1 P ( w ) n + 1 P ( w ) 2 n = w A k 1 P ( w ) n 1 P ( w ) 2 n .
Therefore the Poisson transform, which we denote by G ˜ k var ( z ) is
G ˜ k var ( z ) = w A k e P ( w ) z e ( 2 P ( w ) − ( P ( w ) ) 2 ) z .
The Mellin transform of the above function has the following form
G ˜ k var * ( s ) = Γ ( s ) ( p s + q s ) k ( 1 + O ( P ( w ) ) ) .
This is quite similar to what we saw in (106), which indicates that the variance has the same asymptotic growth as the expected value. However, the variances of the two models do not behave in the same way (cf. Figure 3).

4. Summary and Conclusions

We studied the first-order asymptotic growth of the first two (factorial) moments of the kth Subword Complexity. We recall that the kth Subword Complexity of a string of length n is denoted by X n , k , and is defined as the number of distinct subwords of length k, that appear in the string. We are interested in the asymptotic analysis for when k grows as a function of the string’s length. More specifically, we conduct the analysis for k = Θ ( log n ) , and as n .
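In code, the kth Subword Complexity is simply the number of distinct length-k windows of the string; a minimal illustration (ours, not from the paper):

```python
def subword_complexity(x: str, k: int) -> int:
    """Number of distinct substrings (subwords) of length k appearing in x."""
    return len({x[i:i + k] for i in range(len(x) - k + 1)})

# 101110 has the length-2 windows 10, 01, 11, 11, 10 -> three distinct subwords
print(subword_complexity("101110", 2))   # -> 3
```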
The analysis is inspired by the earlier work of Jacquet and Szpankowski on the analysis of suffix trees, where they are compared to independent tries (cf. [14]). In our work, we compare the first two moments of the kth Subword Complexity to the kth Prefix Complexity over a random trie built over n independently generated binary strings. We recall that we define the kth Prefix Complexity as the number of distinct prefixes that appear in the trie at level k and lower.
We obtain the generating functions representing the expected value and the second factorial moments as their coefficients, in both settings. We prove that the first two moments have the same asymptotic growth in both models. For deriving the asymptotic behavior, we split the range for k into three intervals. We analyze each range using the saddle point method, in combination with residue analysis. We close our work with some remarks regarding the comparison of the second moment and the variance to the kth Prefix Complexity.

5. Future Challenges

The intervals’ endpoints for a in Theorems 3 and 4 are not investigated in this work. The asymptotic analysis at the endpoints can be studied using the van der Waerden saddle point method [24].
Analogous results are not yet known in the case where the underlying probability source has Markovian dependence, or in the case of dynamical sources.

Author Contributions

This paper is based on a Ph.D. dissertation written by L.A. under the supervision of M.D.W. All authors have read and agreed to the published version of the manuscript.

Funding

M.D. Ward’s research is supported by FFAR Grant 534662, by the USDA NIFA Food and Agriculture Cyberinformatics and Tools (FACT) initiative, by NSF Grant DMS-1246818, by the NSF Science & Technology Center for Science of Information Grant CCF-0939370, and by the Society of Actuaries.

Acknowledgments

The authors thank Wojciech Szpankowski and Mireille Régnier for insightful conversations on this topic.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PGF: Probability Generating Function
P : Probability
E : Expected value
Var: Variance
E [ ( X n , k ) 2 ] : The second factorial moment of X n , k

References

  1. Ehrenfeucht, A.; Lee, K.; Rozenberg, G. Subword complexities of various classes of deterministic developmental languages without interactions. Theor. Comput. Sci. 1975, 1, 59–75.
  2. Morse, M.; Hedlund, G.A. Symbolic Dynamics. Am. J. Math. 1938, 60, 815–866.
  3. Jacquet, P.; Szpankowski, W. Analytic Pattern Matching: From DNA to Twitter; Cambridge University Press: Cambridge, UK, 2015.
  4. Bell, T.C.; Cleary, J.G.; Witten, I.H. Text Compression; Prentice-Hall: Upper Saddle River, NJ, USA, 1990.
  5. Burge, C.; Campbell, A.M.; Karlin, S. Over- and under-representation of short oligonucleotides in DNA sequences. Proc. Natl. Acad. Sci. USA 1992, 89, 1358–1362.
  6. Fickett, J.W.; Torney, D.C.; Wolf, D.R. Base compositional structure of genomes. Genomics 1992, 13, 1056–1064.
  7. Karlin, S.; Burge, C.; Campbell, A.M. Statistical analyses of counts and distributions of restriction sites in DNA sequences. Nucleic Acids Res. 1992, 20, 1363–1370.
  8. Karlin, S.; Mrázek, J.; Campbell, A.M. Frequent Oligonucleotides and Peptides of the Haemophilus Influenzae Genome. Nucleic Acids Res. 1996, 24, 4263–4272.
  9. Pevzner, P.A.; Borodovsky, M.Y.; Mironov, A.A. Linguistics of Nucleotide Sequences II: Stationary Words in Genetic Texts and the Zonal Structure of DNA. J. Biomol. Struct. Dyn. 1989, 6, 1027–1038.
  10. Chen, X.; Francia, B.; Li, M.; Mckinnon, B.; Seker, A. Shared information and program plagiarism detection. IEEE Trans. Inf. Theory 2004, 50, 1545–1551.
  11. Chor, B.; Horn, D.; Goldman, N.; Levy, Y.; Massingham, T. Genomic DNA k-mer spectra: models and modalities. Genome Biol. 2009, 10, R108.
  12. Price, A.L.; Jones, N.C.; Pevzner, P.A. De novo identification of repeat families in large genomes. Bioinformatics 2005, 21, i351–i358.
  13. Janson, S.; Lonardi, S.; Szpankowski, W. On the Average Sequence Complexity. In Annual Symposium on Combinatorial Pattern Matching; Springer: Berlin/Heidelberg, Germany, 2004; pp. 74–88.
  14. Jacquet, P.; Szpankowski, W. Autocorrelation on words and its applications: Analysis of suffix trees by string-ruler approach. J. Comb. Theory Ser. A 1994, 66, 237–269.
  15. Liang, F.M. Word Hy-phen-a-tion by Com-put-er; Technical Report; Stanford University: Stanford, CA, USA, 1983.
  16. Weiner, P. Linear pattern matching algorithms. In Proceedings of the 14th Annual Symposium on Switching and Automata Theory (SWAT 1973), Iowa City, IA, USA, 15–17 October 1973; pp. 1–11.
  17. Gheorghiciuc, I.; Ward, M.D. On Correlation Polynomials and Subword Complexity. Discrete Math. Theor. Comput. Sci. 2007, 7, 1–18.
  18. Bassino, F.; Clément, J.; Nicodème, P. Counting occurrences for a finite set of words: Combinatorial methods. ACM Trans. Algorithms 2012, 8, 31.
  19. Park, G.; Hwang, H.K.; Nicodème, P.; Szpankowski, W. Profile of Tries. In Latin American Symposium on Theoretical Informatics; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–11.
  20. Flajolet, P.; Sedgewick, R. Analytic Combinatorics; Cambridge University Press: Cambridge, UK, 2009.
  21. Lothaire, M. Applied Combinatorics on Words; Cambridge University Press: Cambridge, UK, 2005; Volume 105.
  22. Szpankowski, W. Average Case Analysis of Algorithms on Sequences; John Wiley & Sons: Chichester, UK, 2011; Volume 50.
  23. Widder, D.V. The Laplace Transform (PMS-6); Princeton University Press: Princeton, NJ, USA, 2015.
  24. van der Waerden, B.L. On the method of saddle points. Appl. Sci. Res. 1952, 2, 33–45.
Figure 1. The suffix tree in (a) is built over the first four suffixes of the string X = 101110 . . . , and the trie in (b) is built over the strings X 1 = 111 . . . , X 2 = 101 . . . , X 3 = 100 , and X 4 = 010 . . . .
Figure 2. Left: Φ 1 ( x ) at p = 0.90 and various levels of r 0 . The amplitude increases as r 0 increases. Right: Φ 1 ( x ) at r 0 = 1 and various levels of p. The amplitude tends to zero as p → 1 / 2 + .
Figure 3. Approximated second moments (left), and variances (right) of the kth Subword Complexity (red), and the kth Prefix Complexity (blue), for n = 4000 , at different probability levels, averaged over 10,000 iterations.

Ahmadi, L.; Ward, M.D. Asymptotic Analysis of the kth Subword Complexity. Entropy 2020, 22, 207. https://doi.org/10.3390/e22020207
