Article

Branching Densities of Cube-Free and Square-Free Words

Department of Algebra and Fundamental Informatics, Ural Federal University, 620075 Yekaterinburg, Russia
*
Author to whom correspondence should be addressed.
Algorithms 2021, 14(4), 126; https://doi.org/10.3390/a14040126
Submission received: 25 March 2021 / Revised: 17 April 2021 / Accepted: 18 April 2021 / Published: 20 April 2021
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)

Abstract:
Binary cube-free language and ternary square-free language are two “canonical” representatives of a wide class of languages defined by avoidance properties. Each of these two languages can be viewed as an infinite binary tree reflecting the prefix order of its elements. We study how “homogeneous” these trees are, analysing the following parameter: the density of branching nodes along infinite paths. We present combinatorial results and an efficient search algorithm, which together allowed us to obtain the following numerical results for the cube-free language: the minimal density of branching points is between 3509/9120 ≈ 0.38476 and 13/29 ≈ 0.44828, and the maximal density is between 0.72 and 67/93 ≈ 0.72043. We also prove the lower bound 223/868 ≈ 0.25691 on the density of branching points in the tree of the ternary square-free language.
MSC:
68R15; 68Q45

1. Introduction

A formal language, which is a subset of the set of all finite words over some (usually finite) alphabet, is one of the most common objects in discrete mathematics and computer science. Languages are often defined by properties of their elements, and many “good” properties are hereditary—all factors (=contiguous subwords) of a word with such a property also possess this property. Typical hereditary properties are “to be a factor of a certain infinite word” or “to contain no factors from a given set”. A factorial language forms posets under some natural order relations; the relation “to be a prefix of” is probably the simplest relation of this sort. The diagram of this relation is called a prefix tree; the structure of this tree reflects the properties of the language. For example, the prefix tree of a language L can be viewed as a deterministic (finite or infinite) automaton accepting L: each edge has the form (w, wa) and is labeled by the letter a, the root is the initial state, and all nodes are final states.
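For a concrete (toy) illustration — our own sketch, not from the paper — the prefix tree of a finite language can be built and run as a deterministic automaton whose states are the prefixes; with all states final, it accepts exactly the prefixes of words of L (which, for a factorial language, is L itself):

```python
def prefix_tree(language):
    """Nodes are all prefixes of words of L; the edge (u, uc) is labelled c."""
    nodes = {w[:i] for w in language for i in range(len(w) + 1)}
    edges = {(u[:-1], u[-1]): u for u in nodes if u}  # (parent, letter) -> child
    return nodes, edges

def accepts(nodes, edges, word):
    """Run the automaton from the root (the empty word); every node is final."""
    state = ""
    for c in word:
        if (state, c) not in edges:
            return False
        state = edges[(state, c)]
    return True

L = {"aba", "abb", "ba"}
nodes, edges = prefix_tree(L)
assert accepts(nodes, edges, "ab")      # a prefix of a word of L is a node
assert not accepts(nodes, edges, "bb")  # not a prefix of any word of L
```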
An important class of factorial languages is constituted by power-free languages. Any language of this class contains no factors from the set of α-powers for a certain integer or rational α; the α-power of a nonempty word u is the prefix of the infinite word uuu⋯ of length α|u|, where |u| stands for the length of u. Power-free languages have been studied in hundreds of papers starting with the seminal work by Thue [1], but the topic still contains a number of challenging open problems. One group of problems concerns the structure of prefix trees of infinite power-free languages. Let us briefly recall the related known results. In the following text, a subtree of a prefix tree means the tree consisting of some node w and all its descendants.
For infinite power-free languages, there is a natural partition into “small” and “big” [2,3,4,5,6,7,8]: in binary languages avoiding small powers (up to 7/3), the number of words grows only polynomially with length, while all other infinite power-free languages are conjectured to have exponential growth. This conjecture has been proved [4,5,6,7,8] for almost all power-free languages (up to a finite number of cases). Polynomial-size binary power-free languages possess several distinctive properties (see, e.g., [9] Section 2.2); all of these properties stem from a close relation of all words of these languages to a single infinite word, called the Thue–Morse word [10]. Among these languages, the overlap-free language, avoiding all α-powers with α > 2, attracted the most attention; as a result, it is studied very well. For example, the asymptotic order of growth of this language is computed exactly [11,12]. Further, it is decidable whether the subtree of the prefix tree rooted at a given word w is finite or infinite [13]. Moreover, the results of [14] imply that the depth of a finite subtree can be computed in time linear in |w|, and the isomorphism of two given subtrees can also be decided in linear time. Most of the results about the overlap-free language can be extended, with additional and sometimes tedious technicalities, to all small binary power-free languages (see, e.g., [4]).
The knowledge about “big” power-free languages is rather limited. For all these languages, any subtree has at least one leaf [15]. Further, for any fixed alphabet and fixed integer α, it is decidable whether a given word generates a finite or infinite subtree, and every infinite subtree branches infinitely often [16,17]. All other results concern two particular languages: the binary 3-free (cube-free) language CF and the ternary 2-free (square-free) language SF. These two languages are the most interesting “test cases”, the analysis of which was initiated by Thue [1]. Note that the prefix tree of SF is binary in spite of the ternary alphabet, because a square-free word has no factors of the form aa. For the prefix tree of SF, it is known that (a) finite subtrees of arbitrary depth exist and can be built efficiently [18], (b) in any infinite path, the fraction of nodes with two children is at least 2/9 [19], and (c) if a node of depth n has a single descendant of depth n + m, then m = O(log n) [19]. If we take the tree consisting of all infinite branches of the original prefix tree, then the analog of (c) with the bound m = O(n^{2/3}) is known [20,21]. For the prefix tree of CF, the property (a) was proved in [22]. The properties (b) (with the constant 23/78) and (c) were proved in [23].
In this paper, we study the branching of prefix trees, continuing the line of research related to the property (b). Most of our results are about the language CF . By branching point, we mean a node of the prefix tree with two children. Branching density of an infinite path w is the limit of the ratios between the numbers of branching points in prefixes of w and lengths of these prefixes; if no limit exists, we consider lower/upper density as the corresponding lim inf / lim sup . Speaking about lower/upper bounds for density, we mean lower bounds for lower density and upper bounds for upper density. Our contribution is as follows:
  • We establish the lower bound 3509/9120 ≈ 0.38476 on the branching density in the prefix tree of CF (Theorem 3) and the lower bound 223/868 ≈ 0.25691 on the branching density in the prefix tree of SF (Theorem 4), significantly improving the bounds from [19,23];
  • We construct infinite paths in the prefix tree of CF with the branching density as small as 13/29 ≈ 0.44828 (Theorem 5);
  • We establish the upper bound 67/93 ≈ 0.72043 on the branching density in the prefix tree of CF and construct infinite paths with the branching density as big as 18/25 = 0.72 (Theorem 6).
Let us comment on the results. The proof of each of the lower bounds consists of two parts: one is purely combinatorial, while the other requires a computer search. For the cube-free language, we significantly improve the combinatorial part (Theorem 1) over the paper [23], correcting, on the way, an error in a technical statement of [23] (Theorem 7); as to the search part, we present an efficient (quadratic) algorithm replacing an exponential algorithm of [23]. There is a chance that the new bound can be slightly improved if more computational resources are used. We also use the same search algorithm to improve the bound for the square-free language; the combinatorial part, presented in [19], is much simpler than in the cube-free case, and we see no way to improve it.
As a byproduct of the search algorithm, we find “building blocks” to construct an infinite path with small branching density. We call it small because it is smaller than the fraction of branching points at the nth level of the tree for each n that is big enough. (See Section 4.2 for the details.) Finally, a separate combinatorial argument allows us to obtain an upper bound on the branching density for the cube-free case and present an example which is very close to this bound.
After preliminaries, we state and prove the results in Section 3, Section 4 and Section 5. In Section 3, we prove Theorem 1, which constitutes the combinatorial part of Theorem 3. The tools for the search part are described in Section 4.1. Section 4.2 presents the results of the search, Theorems 3 and 4, and a short discussion. Section 4.3 is devoted to Theorem 5. Finally, Section 5 contains Theorem 6 and its proof.

2. Preliminaries

We study words and languages over the binary alphabet {a, b} (apart from Section 4.2, where a result over the ternary alphabet is also presented), writing λ for the empty word and |w| for the length of a finite word w. If w = xyz for some words x, y, and z (any of which can be empty), then x, y, z are, respectively, a prefix, a factor, and a suffix of w. We write y ≼ w to indicate that y is a factor of w (and y ⋠ w otherwise). The set of all finite (nonempty finite, infinite) words over an alphabet Σ is denoted by Σ* (resp., Σ+, Σ^ω). Elements of Σ+ (Σ^ω) are treated as functions w : {1, …, n} → Σ (resp., w : ℕ → Σ). We write [i..j] for the range i, i+1, …, j of positive integers; the notation w[i..j] stands for the factor of the word w occupying this range, as well as for the particular occurrence of this factor in w; w[i..i] = w[i] is just the ith letter of w. A factor w[i..j] is internal if i > 1 and j < |w|. By the position of a factor, we mean its starting (=leftmost) position. The distance between factors of a word is the difference of their positions; for example, the distance between the occurrences of aa in aabaa is 3. A cyclic shift of a finite word w is any word w[i..|w|]·w[1..i−1]. The complement of a finite or infinite word w is the image of w under the map which replaces all a’s by b’s and all b’s by a’s.
A word w has period p < |w| if w[1..|w|−p] = w[p+1..|w|]. We use two basic properties of periodic words (see, e.g., [24]).
Lemma 1
(Lyndon, Schützenberger; [25]). If uv = vw and u ≠ λ, then there are words x, y and an integer n ≥ 0 such that u = xy, w = yx, and v = (xy)^n x.
Lemma 2
(Fine, Wilf; [26]). If a word u has periods p and q and |u| ≥ p + q − gcd(p, q), then u has period gcd(p, q).
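Both lemmas are easy to check mechanically on small words. A minimal sketch (the function name has_period and the word abaaba are our own, chosen to illustrate that the length bound in Lemma 2 cannot be dropped):

```python
from math import gcd

def has_period(w, p):
    """w has period p (0 < p < |w|) iff w[1..|w|-p] = w[p+1..|w|]."""
    return 0 < p < len(w) and w[:-p] == w[p:]

# Fine-Wilf: periods 4 and 6 with |w| = 10 >= 4 + 6 - gcd(4, 6) force period 2.
w = "ab" * 5
assert has_period(w, 4) and has_period(w, 6) and has_period(w, gcd(4, 6))

# The length bound matters: "abaaba" has periods 3 and 5, but its length
# 6 < 3 + 5 - gcd(3, 5) = 7, and it has no period 1.
u = "abaaba"
assert has_period(u, 3) and has_period(u, 5) and not has_period(u, 1)
```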
The prefix tree of a language L is a directed tree whose set of nodes is the set of all prefixes of words from L, and whose set of edges consists of all pairs (u, uc) such that c is a letter. Edges are labelled by the last letter of the destination node: the edge (u, uc) has label c. The only node having no incoming edges, and thus the root of the tree, is λ. A prefix tree is (in)finite whenever L is (in)finite. A finite prefix tree is often considered as a finite automaton and called a trie.
A cube is a nonempty word of the form uuu. A word is cube-free if it has no cubes as factors; a cube is minimal if it contains no other cubes as factors. A p-cube is a minimal cube with the minimal period p (i.e., |u| = p). Other important repetitions include squares (words of the form uu) and overlaps (words having a period strictly smaller than half of their length).
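A brute-force cube-freeness test follows directly from the definition; the code below is our own illustrative sketch (not the paper’s search algorithm), sufficient for small words:

```python
def is_cube_free(w):
    """Check that no factor of w is a cube uuu (u nonempty)."""
    n = len(w)
    for i in range(n):
        for p in range(1, (n - i) // 3 + 1):
            if w[i:i+p] == w[i+p:i+2*p] == w[i+2*p:i+3*p]:
                return False
    return True

assert is_cube_free("abab")          # squares are allowed over {a, b}
assert is_cube_free("abbabaab")      # a prefix of the Thue-Morse word
assert not is_cube_free("aabababb")  # contains the 2-cube (ab)^3
```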
The language CF of binary cube-free words is infinite and can be represented by its prefix tree T , in which the nodes are precisely all cube-free words. The label of every path from the root coincides, as a word, with the terminal node of this path. A node in T is either a leaf (infinite paths contain no leaves), or has a single child (fixed node; its outgoing edge, the letter labeling this edge, and the position of this letter in the label of the path are also called fixed), or has two children (branching point; the outgoing edges, and their labels and positions are called free). A fragment of T is shown in Figure 1.
To estimate the number of branching nodes in a path, we obtain bounds on the number of fixed positions/letters in the corresponding word. Assume that some position i in a cube-free word w is fixed; w.l.o.g., w[i] = a. Then the word w[1..i−1]b ends with a (unique) p-cube; in this case, we say that i (or w[i]) is fixed by a p-cube. We assume that some constant h is chosen (we will choose it later) and partition the fixed positions in words into two groups: those fixed by “small” p-cubes with p < h and those fixed by “big” p-cubes with p ≥ h. To get the lower bound on the branching density, we establish separate upper bounds on the numbers of positions fixed by small and by big cubes. All other results involve small cubes only.
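The definition of a fixed position can be tested directly: position i is fixed exactly when flipping the letter w[i] creates a cube ending at i. A naive sketch (the function names are ours):

```python
def ends_with_cube(w):
    """True iff some suffix of w is a cube uuu."""
    n = len(w)
    return any(w[n-3*p:n-2*p] == w[n-2*p:n-p] == w[n-p:]
               for p in range(1, n // 3 + 1))

def fixed_positions(w):
    """1-based positions i of a cube-free word w whose letter is fixed:
    flipping w[i] would create a cube ending at position i."""
    flip = {"a": "b", "b": "a"}
    return [i for i in range(1, len(w) + 1)
            if ends_with_cube(w[:i-1] + flip[w[i-1]])]

# In the prefix tree, the node "aa" has the single child "aab":
# position 3 of "aab" is fixed by the 1-cube aaa.
assert fixed_positions("aab") == [3]
```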

3. Positions Fixed by Big Cubes

The aim of this section is to prove the following upper bound on the density of positions fixed by big cubes in a cube-free word.
Theorem 1.
For any integer h ≥ 2 and any infinite cube-free word w, the density of positions fixed by cubes with periods ≥ h in w is at most 6/(5h).
Theorem 1 is based on the following result, describing the restrictions on the cubes of similar length fixing closely located letters.
Theorem 2.
Suppose that t, l ≥ 1 and p, q ≥ 2 are integers, w is a word of length t + l such that w[1..t+l−1] is cube-free, the position t is fixed by a p-cube, and w ends with a q-cube. Then q is outside the red zone in Figure 2.
Remark 1.
Theorem 2 and Figure 2 improve and correct their earlier analogs, Theorem 7 and Figure 1 of [23]. The improvement can be seen as a few additional red patches in Figure 2 w.r.t. ([23] Figure 1), and the correction is that the triangle with vertices (2p, p), (4p, 2p), and (5p, 2p) in ([23] Figure 1) should have been painted green. This error does not affect the proofs of the main results of [23]: in those proofs, only a part of the red area is used. This part is drawn in ([23] Figure 8) and lies strictly inside the red area in our Figure 2. Thus, in [23], only Theorem 7 and Remark 8 are (partially) incorrect.
Remark 2.
We believe that the boundary of the red area in Figure 2 is exact for p/2 ≤ q ≤ 2p, and so the result of Theorem 2 is optimal. We do not prove this claim, because it is not important for the aims of this paper. To substantiate the claim, we provide Table 1 with examples of the words w corresponding to green points in the corners of the boundary in Figure 2.
Proof of Theorem 2. 
Let P = w[t−3p+1..t−2p] and P′ = w[t−p+1..t]. Then w[t−3p+1..t] = PPP′. W.l.o.g., P ends with a; so w[t−2p] = w[t−p] = a and w[t] = P′[p] = b. We write QQQ for the q-cube which is a suffix of w. We begin with a few observations.
O1. If q = p/2, the condition w[t−p] ≠ w[t] implies that the suffix QQQ of w does not contain the position t−p. Hence, l ≥ 3q − p = p/2, giving us the red segment of the line q = p/2 in Figure 2. To get the red parts of the lines q = p and q = 2p, note that for q = p the same argument gives l ≥ 2p, and for q = 2p the condition w[t−2p] ≠ w[t] implies l ≥ 4p. From now on, we assume p/2 < q < 2p and q ≠ p.
O2. Let i be the bigger of the positions of PPP′ and QQQ in w and consider the factor w[i..t−1], having both periods p and q. If its length t − i is big enough to apply Lemma 2, the words P, Q are integer powers of shorter words, contradicting the condition that w[1..t+l−1] is cube-free. Thus, Lemma 2 must be inapplicable, giving us t − i < p + q − gcd(p, q) ≤ 3p − 2 (recall that q < 2p). Hence, the position of QQQ in w is bigger than the position of PPP′, implying QQQ = w[i..t+l], so that 3q = t + l − i + 1 < l + 1 + p + q − gcd(p, q). From this, l > 2q − p, meaning that all green points with q < 2p are strictly below the line q = (l + p)/2 shown in Figure 2.
O3. If the factor w[i..t−1] considered in O2 is shorter than max{p, q}, then we are unable to restrict q: the p-periodic factor PPP′ and the q-periodic suffix QQQ have too short an overlap to “interact” inside w. Recall that t − i = 3q − l − 1, so all red points are strictly above the lines q = (l + p)/3 and q = l/2 in Figure 2.
Thus, to prove the theorem, it remains to justify the colouring of the stripe between the line q = (l + p)/2 and the broken line {q = (l + p)/3; q = l/2} in Figure 2. We split this stripe into zones I–IV by the lines q = l, q = p, and q = (l + 2p)/3. The arguments for all zones are very similar, so we provide maximum details for zone I and more concise proofs for zones II–IV.
Zone I: q > (l + 2p)/3. Together with q < 2p (O1) and 2q < l + p (O2), this gives the mutual location of the suffix QQQ and the factor PPP′ of w depicted in Figure 3. Equal letters denote equal factors; note that x ≠ λ since 2q < l + p, and z ≠ λ since q < 2p.
The words y, zy, and yaxz are prefixes of Q (Figure 3). By the length argument, y is a prefix of zy, which is a prefix of yaxz. Then zyaxz ≼ PP (Figure 3) implies zzy ≼ PP. Since PP is cube-free, zzz ⋠ PP, and thus z is not a prefix of y. Since z and y are both prefixes of Q, we have z = yz′ with z′ ≠ λ. Further, yaxyaxz ≼ QQ (Q begins with yaxz and ends with yax), but (yax)³ ⋠ QQ, because QQ is cube-free. Then the fact that z and yax are both prefixes of Q implies that z = yz′ is a proper prefix of yax, so ax = z′x′ with x′ ≠ λ. Now compare zy = yz′y against yaxz = yz′x′yz′. We see that y is a proper prefix of x′y by the length argument. By Lemma 1, we can write x′ = fg, y = (fg)^n f for some words f, g; note that n ≤ 1, since x′y is cube-free. If n = 1, we have
Q = yaxzyax = yz′x′yz′yz′x′ = fgfz′fgfgfz′fgfz′fg, and then (fgfz′fg)³ ≼ QQ = fgfz′fg·fgfz′·(fgfz′fg)³·fz′fg.
However, QQ is cube-free, so n = 0, implying y = f and x′ = yg. Finally, we can write Q = yz′ygyz′yz′yg. Note that g ≠ λ, since otherwise (yz′y)³ ≼ QQ. From this representation of Q, we can express q, p, and l in terms of |y|, |z′|, and |g|; from Figure 3 we know that l = 2q − |yz′yb|. Thus, we have
p = 3|y| + 2|z′| + |g|,  q = 5|y| + 3|z′| + 2|g|,  l = 8|y| + 5|z′| + 4|g| − 1.   (1)
Recall that |y| ≥ 0 and |z′|, |g| ≥ 1. From (1) we get q = l − p − |g| + 1 ≤ l − p, q = (4p + l − |z′| + 1)/4 ≤ (4p + l)/4, and also q > (l + 2p)/3 (the border of Zone I). This gives us exactly the green triangle inside Zone I with the vertices (5p/2, 3p/2), (8p/3, 5p/3), (4p, 2p).
Zone II: q ≤ (l + 2p)/3 and q > p. Together with 2q > l (O3), this gives the mutual location of the suffix QQQ and the factor PPP′ of w depicted in Figure 4 (y ≠ λ since q > p; z or x can be empty).
Since xb and yx are prefixes of Q and y is a suffix of Q (Figure 4), one has yyx ≼ QQ. As QQ is cube-free, y is not a prefix of x. Comparing the prefixes xb and yx of Q, we get y = xby′ for some (possibly empty) y′. Then Q = xby′xazxby′ and P = zxby′xa. We express p, q, and l = 2q − |xb| in terms of |x|, |y′|, |z|:
p = 2(|x| + 1) + |y′| + |z|,  q = 3(|x| + 1) + 2|y′| + |z|,  l = 5(|x| + 1) + 4|y′| + 2|z|.   (2)
From (2), we get q = l − p − |y′| ≤ l − p; together with the boundaries of Zone II, the line q = l − p forms the green triangle inside Zone II with the vertices (2p, p), (5p/2, 3p/2), (4p, 2p) (Figure 2).
Zone III: q ≤ l and q < p. Together with q > (l + p)/3 (O3), this gives the mutual location of the suffix QQQ and the factor PPP′ of w depicted in Figure 5 (v, z ≠ λ since q < p; x, y can be empty).
Since xa and vx are prefixes of Q and vQ ≼ PP (Figure 5), one has vvx ≼ PP, so v is not a prefix of x, and thus v = xav′ for some (possibly empty) v′. Then z = v′xby, Q = xav′xby, and P = v′xbyxav′xa. We express p, q, and l = q + |y| in terms of |x|, |v′|, |y|:
p = 3(|x| + 1) + 2|v′| + |y|,  q = 2(|x| + 1) + |v′| + |y|,  l = 2(|x| + 1) + |v′| + 2|y|.   (3)
From (3), we get q = (l + 2p − |v′|)/4 ≤ (l + 2p)/4; together with the boundaries of Zone III, this line forms the green triangle in Zone III with the vertices (p/2, p/2), (2p/3, 2p/3), (2p, p) (Figure 2).
Zone IV: q > l. One has q > p/2 (O1) and 2q < l + p (O2), and so q < p. Then the mutual location of the suffix QQQ and the factor PPP′ of w is as in Figure 6 (x ≠ λ since 2q > p; v ≠ λ since q < p; z ≠ λ since 2q < l + p; y can be empty).
Since x and vx are prefixes of Q and vQ ≼ PP (Figure 6), one has vvx ≼ PP; hence v is not a prefix of x, and thus v = xv′ for some v′ ≠ λ. Taking y and xy, which are another pair of prefixes of Q, we get x = ybx′ (x′ is possibly empty), because xxy ⋠ QQ. Note that if v′ is a prefix of x, then (xv′)³ ≼ xv′xv′xxv′ ≼ vQQ ≼ PP, which is impossible. Thus, v′ is not a prefix of x and hence not of y. Since xv′ and xya are prefixes of Q, we get v′ = yag for some (possibly empty) g. Thus, Q = ybx′yagybx′ and P = gybx′ybx′yagybx′ya. We express p, q, and l = q − |yb| in terms of |x′|, |g|, |y|:
p = 5(|y| + 1) + 3|x′| + 2|g|,  q = 3(|y| + 1) + 2|x′| + |g|,  l = 2(|y| + 1) + 2|x′| + |g|.   (4)
From (4), we get q = (l + 2p − |g|)/4 ≤ (l + 2p)/4 and q = p − l + |x′| ≥ p − l; together with the boundary q = l, the obtained two lines form the green triangle in Zone IV with the vertices (p/2, p/2), (2p/5, 3p/5), (2p/3, 2p/3) (Figure 2).
Thus, we have identified all “red” and “green” parts of the areas I–IV, obtaining the full picture of Figure 2. Theorem 2 is proved. □
The second crucial step in the proof of Theorem 1 is the following lemma on the density of positions fixed by cubes with similar periods.
Lemma 3.
Suppose that l ≥ 1 and p ≥ 2 are integers, and w is a cube-free word such that |w| > l. Among any l consecutive letters of w, fewer than 8/5 + 3l/(5p) are fixed by cubes with periods in the range [p..2p−1].
Proof. 
Let us consider an inverse problem:
(⋆)
Let l_0 < l_1 < ⋯ < l_s (s ≥ 1) be positions in w containing letters fixed by cubes with periods q_0, …, q_s, respectively, where q_i ∈ [p..2p−1] for all i; find a lower bound for l_s − l_0 (as a function of s and p) which applies to every sequence q_0, …, q_s.
The distance between any two consecutive positions l_i and l_{i+1} is lower-bounded by Theorem 2. More precisely, we use Theorem 2 to make conclusions of the form
  • l_{i+1} − l_i ≥ A, where the point (A, q_{i+1}) is on the border of the red polygon in Figure 2 built for p = q_i (so that l and q in the figure play the roles of l_{i+1} − l_i and q_{i+1}, respectively).
For example, since the point (25, 15) is on the segment q = l − p of the border of such a polygon built for p = 10, we conclude that the conditions q_i = 10 and q_{i+1} = 15 imply l_{i+1} − l_i ≥ 25. Let β = q_i, α = q_{i+1}, and l = l_{i+1} − l_i. Then Theorem 2 implies the following inequalities related to the boundaries of the polygon in Figure 2 (β and α play the roles of p and q, respectively):
l ≥ 4α − 4β, if α ≥ (5/3)β;   (5a)
l ≥ α + β, if β ≤ α ≤ (5/3)β;   (5b)
l ≥ 4α − 2β, if (3/5)β ≤ α ≤ β;   (5c)
l ≥ β − α, if α ≤ (3/5)β.   (5d)
Assume that q = (q_0, …, q_s) is a sequence of positive rational numbers such that
max_{i∈[0..s]} q_i < 2·min_{i∈[0..s]} q_i.
We define its span span(q) as the lower bound for the difference l_s − l_0 for the sequence q of periods. Precisely, span(q) is the minimum number such that there exists a sequence 0 = l_0 < l_1 < ⋯ < l_s = span(q) satisfying, for each i, the property “the point (l_{i+1} − l_i, q_{i+1}) is on the border of the red polygon in Figure 2, in which q_i is substituted for p”. Thus, min span(q), where the minimum is taken over all sequences of length s + 1 in the given range [p..2p−1], is the lower bound sought in (⋆).
We write span(q_i, …, q_j) for the span of the corresponding subsequence of q. Note that spans are additive: span(q_i, …, q_j) + span(q_j, …, q_m) = span(q_i, …, q_m). For the simplest case of a two-element sequence, (5a)–(5d) imply
span(β, α) = 4α − 4β, if α ≥ (5/3)β;  α + β, if β ≤ α ≤ (5/3)β;  4α − 2β, if (3/5)β ≤ α ≤ β;  β − α, if α ≤ (3/5)β.   (6)
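The two-element span (6) is easy to implement and check against the example above; the function name span2 is ours, and exact rational arithmetic avoids rounding issues:

```python
from fractions import Fraction as F

def span2(beta, alpha):
    """Two-element span from (6); beta and alpha play the roles of
    q_i and q_{i+1} (exact rational arithmetic via Fraction)."""
    if alpha >= F(5, 3) * beta:
        return 4 * alpha - 4 * beta
    if alpha >= beta:
        return alpha + beta
    if alpha >= F(3, 5) * beta:
        return 4 * alpha - 2 * beta
    return beta - alpha

assert span2(10, 15) == 25                                 # the example above
assert span2(1, F(5, 3)) + span2(F(5, 3), 1) == F(10, 3)   # span(1, 5/3, 1)
assert span2(1, 1) + span2(1, 1) == 4                      # span(1, 1, 1)
```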
From (6), we immediately have
(*)
for any fixed β, the function span(β, α) monotonically increases for α ∈ [(3/5)β, 2β).
Since all borders in Figure 2 are line segments, the equality span(C·q) = C·span(q) holds for any C > 0 (if a sequence (l_0, …, l_s) works for q, then (C·l_0, …, C·l_s) works for C·q). Thus, we simplify the subsequent argument by considering a particular range for the sequence q: q_i belongs to the semiclosed interval [1, 2) for all i.
Given q, we iteratively modify it from right to left. Each modification results in a sequence of the same length, in the same range, and with the same or a smaller span; the result of the last modification is one of the “good” sequences, the span of which can be easily computed. The smallest span of a “good” sequence is the lower bound for the span of any sequence q in the given range. Precise definitions are as follows. We call a sequence (r_0, …, r_t) canonical if it contains only the numbers 1 and 5/3, no two 5/3’s are consecutive, and, in addition, r_0 = r_t = 1. A sequence q = (q_0, …, q_s) is good if it has a nonempty canonical suffix beginning at q_0, q_1, or q_2.
We transform an arbitrary sequence into a good one with local transformations changing either one element or two consecutive elements of a sequence. Note that if we change, say, q_i and q_{i+1}, this affects span(q_{i−1}, q_i, q_{i+1}, q_{i+2}) but preserves span(q_0, …, q_{i−1}) and span(q_{i+2}, …, q_s). By (*) and (6), one has
for β ≤ 5/3:  span(β, α) ≥ span(β, 1) = 4 − 2β;   for β ≥ 5/3:  span(β, α) ≥ span(β, (3/5)β) = (2/5)β.
These inequalities justify the first transformation rule:
T1:
given a sequence (q_0, …, β, α), replace α by 1 if β ≤ 5/3 and by (3/5)β otherwise.
Next, we consider the span of a triple (γ, β, (3/5)β) as a function of β. Here β ≥ 5/3, so span(γ, β) ≥ span(γ, 5/3) by (*). Since span(β, (3/5)β) = (2/5)β ≥ 2/3 = span(5/3, 1), we have
span(γ, β, (3/5)β) = span(γ, β) + span(β, (3/5)β) ≥ span(γ, 5/3, 1).
Further, compare span(γ, 5/3, 1) to span(γ, 1, 1). For γ ≥ 5/3 we obtain, by (6),
span(γ, 5/3, 1) = span(γ, 5/3) + span(5/3, 1) = (4·(5/3) − 2γ) + 2/3 > 3 > (γ − 1) + 2 = span(γ, 1, 1).
For γ ≤ 5/3, (6) gives us span(γ, 5/3, 1) = γ + 7/3 and span(γ, 1, 1) = 6 − 2γ. The first number is bigger (resp., smaller) if γ is bigger (resp., smaller) than 11/9. Therefore, we have justified the second transformation rule:
T2:
given a sequence (q_0, …, γ, β, (3/5)β), replace (β, (3/5)β) by (1, 1) if γ ≥ 11/9 and by (5/3, 1) otherwise.
Rules T1 and/or T2 replace the last number of the processed sequence q by 1 and serve as the base case in transforming q into a good sequence. Now we describe the general case, assuming that q has a nonempty canonical suffix (q_i, …, q_s). The subsequent transformations preserve the numbers q_i, …, q_s and aim at extending the canonical suffix.
Consider the span of a triple (γ, β, 1) as a function of β. By (6), for γ ≥ 5/3 we have
span(γ, β) = γ + β, if β ≥ γ;  4β − 2γ, if (3/5)γ ≤ β ≤ γ;  γ − β, if β ≤ (3/5)γ;
span(β, 1) = β − 1, if β ≥ 5/3;  4 − 2β, if β ≤ 5/3;
span(γ, β, 1) = span(γ, β) + span(β, 1) = 2β + γ − 1, if β ≥ γ;  5β − 2γ − 1, if 5/3 ≤ β ≤ γ;  2β − 2γ + 4, if (3/5)γ ≤ β ≤ 5/3;  −3β + γ + 4, if β ≤ (3/5)γ.
Thus, span(γ, β, 1) has a unique minimum at β = (3/5)γ. Similarly, for γ ≤ 5/3 we have
span(γ, β) = 4β − 4γ, if β ≥ (5/3)γ;  γ + β, if γ ≤ β ≤ (5/3)γ;  4β − 2γ, if β ≤ γ;
span(γ, β, 1) = 5β − 4γ − 1, if β ≥ (5/3)γ;  2β + γ − 1, if 5/3 ≤ β ≤ (5/3)γ;  −β + γ + 4, if γ ≤ β ≤ 5/3;  2β − 2γ + 4, if β ≤ γ.
Here, span(γ, β, 1) has two local minima: γ + 7/3 at β = 5/3 and 6 − 2γ at β = 1. As we learned above, the first number is bigger (resp., smaller) if γ is bigger (resp., smaller) than 11/9. Now, the third transformation rule replaces β in a triple (γ, β, 1) with a value minimizing span(γ, β, 1) (recall that canonical sequences begin with 1):
T3:
given a sequence (q_0, …, γ, β, q_i, …, q_s) with a canonical suffix (q_i, …, q_s), replace β by (3/5)γ if γ ≥ 5/3; by 5/3 if γ ≤ 11/9; and by 1 otherwise.
If the rule T3 replaces β by 1, the canonical suffix is extended. In the two remaining cases, we need additional rules. Consider the span of a triple (δ, γ, 5/3) as a function of γ, where γ ≤ 11/9. By (6), we have span(γ, 5/3) = γ + 5/3 and
span(δ, γ) = δ + γ, if γ ≥ δ;  4γ − 2δ, if (3/5)δ ≤ γ ≤ δ;  δ − γ, if γ ≤ (3/5)δ;
span(δ, γ, 5/3) = 2γ + δ + 5/3, if γ ≥ δ;  5γ − 2δ + 5/3, if (3/5)δ ≤ γ ≤ δ;  δ + 5/3, if γ ≤ (3/5)δ.
Hence, the minimum is attained at γ = 1. Thus, the next transformation is correct:
T4:
given a sequence (q_0, …, δ, γ, 5/3, q_i, …, q_s) with a canonical suffix (q_i, …, q_s) and γ ≤ 11/9, replace γ by 1.
The application of T4 extends the canonical suffix of a sequence by two elements. Finally, consider a quadruple (δ, γ, (3/5)γ, 1). By (6), span(δ, 1, 5/3, 1) = δ + 7/3 if δ ≥ 5/3, and 22/3 − 2δ if δ ≤ 5/3, while span(γ, (3/5)γ, 1) = 4 − (4/5)γ. Then we study span(δ, γ, (3/5)γ, 1) depending on δ:
δ ≥ γ:  span(δ, γ, (3/5)γ, 1) = (16/5)γ − 2δ + 4 > 16/3 > δ + 7/3;
5/3 ≤ δ ≤ γ:  span(δ, γ, (3/5)γ, 1) = (1/5)γ + δ + 4 > δ + 7/3;
(3/5)γ ≤ δ ≤ 5/3:  span(δ, γ, (3/5)γ, 1) = (1/5)γ + δ + 4 ≥ 2γ − 2δ + 4 ≥ 22/3 − 2δ;
δ ≤ (3/5)γ:  span(δ, γ, (3/5)γ, 1) = (16/5)γ − 4δ + 4 ≥ 2γ − 2δ + 4 ≥ 22/3 − 2δ.
In all cases, span(δ, γ, (3/5)γ, 1) ≥ span(δ, 1, 5/3, 1). Thus, we have one more correct transformation rule, which extends the canonical suffix:
T5:
given a sequence (q_0, …, δ, γ, (3/5)γ, q_i, …, q_s) with a canonical suffix (q_i, …, q_s), replace (γ, (3/5)γ) by (1, 5/3).
Starting with an arbitrary sequence q, we apply T1 and/or T2 to get a sequence with a nonempty canonical suffix. For any sequence with such a suffix preceded by three or more numbers, at least one of the transformations T3–T5 is applicable. Note that T3 either increases the length of the canonical suffix or makes one of T4, T5 applicable, while each of T4 and T5 increases this length. Thus, we eventually arrive at the situation where the canonical suffix of the current sequence r = (r_0, …, r_s) is preceded by 0, 1, or 2 numbers, so that no other transformations are possible. If this suffix begins with r_2, then T3 and/or T4 was already applied, and then either r_1 = 5/3 or r_1 = (3/5)r_0. In particular, r is a good sequence.
To find a good sequence of minimum span, we note that span(1, 1, 1) = 4 > 10/3 = span(1, 5/3, 1). Hence, the unique canonical sequence of odd length and minimum span is (1, 5/3, 1, 5/3, …, 1), and one of the canonical sequences of even length and minimum span is (1, 1, 5/3, 1, 5/3, …, 1). Now it is easy to find, using (6), good sequences of minimum span. Namely, for even (resp., odd) s, we have r = (β, (3/5)β, 1, 5/3, 1, 5/3, …, 1) (resp., r = (β, (3/5)β, 1, 1, 5/3, 1, 5/3, …, 1)), where β = 2 − ε for ε as small as possible. Thus, in the case of even (resp., odd) s, one has span(r) = (5/3)s − 14/15 + (4/5)ε (resp., span(r) = (5/3)s − 3/5 + (4/5)ε). Thus, span(q) > (5/3)s − 1 for any sequence q = (q_0, …, q_s) such that q_i ∈ [1, 2) for all i.
Returning to the problem (⋆) we are solving, recall that span(p·q) = p·span(q). Thus, we have the lower bound l_s − l_0 > ((5/3)s − 1)p. This means that there are at most s + 1 letters fixed by cubes with periods in [p..2p−1] among any l = (5/3)sp − p + 1 consecutive positions of w. Now, easy arithmetic gives s + 1 < 8/5 + 3l/(5p), as required. □
Proof of Theorem 1. 
We split the range from h to infinity into disjoint finite ranges such that the kth range is [2^(k−1)h .. 2^k h − 1]. Consider the density of positions in a cube-free word w fixed by p-cubes with p in the kth range. By Lemma 3 and the definition of density, it is upper-bounded by 3/(5·2^(k−1)h). Summing up the geometric series of all these upper bounds, we get the required bound 6/(5h). □
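Explicitly, the geometric series sums as

```latex
\sum_{k=1}^{\infty} \frac{3}{5\cdot 2^{k-1}h}
  = \frac{3}{5h}\sum_{k=1}^{\infty}\frac{1}{2^{k-1}}
  = \frac{3}{5h}\cdot 2
  = \frac{6}{5h}.
```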

4. Positions Fixed by Small Cubes

4.1. Regular Approximations and Aho–Corasick Automata

To estimate the number of letters in a cube-free word that are fixed by small cubes, we analyze finite automata recognizing some approximations of the language CF. Let L_i be the language of all binary words containing no cubes of period at most i. Then L_i contains CF and is regular (as a language defined by a finite set of forbidden factors); L_i is referred to as the ith regular approximation of CF. The study of regular approximations is a standard approach to power-free languages (see, e.g., [9] (Section 3)).
A regular language given by a finite set of forbidden factors can be represented by a partial deterministic finite automaton (dfa) built by a variation of the classical Aho-Corasick algorithm, as described in [27]. Let us provide some necessary details for regular approximations of CF , following a more general scheme from [28].
To get the dfa A i accepting the language L i , one proceeds in three steps.
  • List all p-cubes with periods p ≤ i and build the prefix tree P_i of these words; then, the leaves of P_i are exactly the p-cubes, and all internal nodes are cube-free words;
  • Consider P_i as a partial dfa with the initial state λ and complete this dfa by adding transitions according to the Aho–Corasick rule: if there is no transition from a state u by a letter c, add the transition from u by c to v, where v is the longest suffix of uc present in P_i;
  • Delete all leaves of P i from the obtained automaton.
The resulting partial dfa is A_i; it accepts by any state and rejects if it cannot read the word. For details see, for example, [27]. Let us fix some i ≥ 1 and analyze the properties of A_i.
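The three-step construction can be sketched in code. The following is an illustrative reimplementation, not the authors' software: the function names are ours, cubes are enumerated by brute force, and cubes containing a shorter cube are discarded so that the internal nodes of the prefix tree are cube-free.

```python
from itertools import product

def build_approx_dfa(i):
    """Partial DFA A_i accepting the binary words with no cubes of period <= i."""
    # Step 1: the forbidden words are the p-cubes uuu with p <= i;
    # drop cubes containing a shorter cube, so that every internal
    # node of the prefix tree P_i is cube-free.
    all_cubes = {''.join(u) * 3 for p in range(1, i + 1)
                 for u in product('ab', repeat=p)}
    cubes = {c for c in all_cubes
             if not any(d != c and d in c for d in all_cubes)}
    states = {c[:k] for c in cubes for k in range(len(c) + 1)}
    # Step 2: complete the prefix tree by the Aho-Corasick rule:
    # from state u by letter c, go to the longest suffix of uc in P_i.
    delta = {}
    for u in states - cubes:          # leaves get no outgoing transitions
        for c in 'ab':
            v = u + c
            while v not in states:
                v = v[1:]
            if v not in cubes:        # Step 3: transitions into leaves
                delta[(u, c)] = v     # disappear together with the leaves
    return delta

def accepts(delta, w):
    """A_i accepts by any state and rejects when it cannot read a letter."""
    state = ''
    for c in w:
        if (state, c) not in delta:
            return False
        state = delta[(state, c)]
    return True
```

For instance, in A_2 the state aa keeps a single outgoing transition (by b), since the transition by a would lead to the deleted leaf aaa; such states are exactly the "fixed" states discussed below.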
We write u . v for the state of A i reached from the state u by the path labelled by v. The following lemma connects A i and fixed letters in cube-free words.
Lemma 4.
A letter w[j] of a cube-free word w is fixed by a p-cube with p ≤ i if and only if the state λ.(w[1..j−1]) of the dfa A_i has a single outgoing transition.
Proof. 
W.l.o.g., w[j] = a; let b denote the other letter. Since w is cube-free, the states λ.(w[1..j−1]) and λ.(w[1..j]) exist and are connected by an edge labelled by a. Let w[j] be fixed by a p-cube; this means that w[1..j−1]b ends with some p-cube uuu; since p ≤ i, P_i has the leaf uuu. By the Aho–Corasick rule, the transition from λ.(w[1..j−1]) by b to uuu was added to P_i (step 2 above) and then deleted together with the leaf uuu (step 3). Thus, the state λ.(w[1..j−1]) has a single outgoing transition. For the other direction, note that if λ.(w[1..j−1]) has only the transition to λ.(w[1..j]), then the state λ.(w[1..j−1]b) was deleted at step 3. Hence, this state is some p-cube uuu with p ≤ i. Since the Aho–Corasick rule implies that the state λ.v is always a suffix of v, w[1..j−1]b has the suffix uuu, whence the result. □
In accordance with the other notation, we call fixed the states of A i with a single outgoing transition and the edges representing these transitions. The next lemma shows how to get an upper bound on the number of letters in a word, fixed by short cubes.
Lemma 5.
Let d_i and c_i be minimal numbers such that in the automaton A_i (a) for any m, every simple cycle of length m contains at most d_i·m fixed states, and (b) for any n, every simple path of length n contains at most d_i·n + c_i fixed states. Then every cube-free word w contains at most d_i·|w| + c_i positions fixed by p-cubes with p ≤ i.
Proof. 
In A_i, consider the walk W from λ to λ.w labelled by w. By Lemma 4, the number of fixed positions we need to estimate equals the number of occurrences of fixed states in W, excluding the terminal occurrence of λ.w. Note that W, as well as any walk in A_i, can be obtained as follows: take a simple path between the initial and the terminal states of the walk and repeatedly insert simple cycles into the walk built so far. The simple path (say, of length n) contains at most d_i·n + c_i fixed states, and the inserted cycles contain at most d_i·(|w| − n) fixed states in total, whence the result. □
Corollary 1.
In an infinite cube-free word, the density of the set of letters fixed by cubes with periods at most i is upper-bounded by d_i.
The numbers d_i and c_i can be computed from A_i in polynomial time using dynamic programming (due to Corollary 1, we need only d_i). A straightforward way to do this is to compute, for each pair of states u, v in the order of increasing k, the maximum fraction d[u, v, k] of fixed states in a (u, v)-walk of length at most k; then d_i = max_u d[u, u, N_i], where N_i is the number of states of A_i. This algorithm has cubic complexity, but we can do significantly better. We note that every automaton A_i has a unique nontrivial strongly connected component; this quite nontrivial fact follows from the main result of [29].
Proposition 1.
Let N_i and n_i be the numbers of nodes in A_i and in its nontrivial strongly connected component, respectively. Then there exists an algorithm computing d_i from A_i in time O(n_i² + N_i).
Proof. 
Recall that the mean cost of a walk in a weighted digraph is the ratio of its cost to its length. We reduce the problem of computing d_i to the problem of finding a cycle of minimum mean cost. Considering A_i as a digraph, we assign cost 0 to fixed edges and cost 1 to free edges. Then we replace A_i by its unique nontrivial strongly connected component A′_i, preserving the edge costs. This component contains all cycles of A_i. Now d_i = 1 − μ, where μ is the minimum mean cost of a cycle in the weighted digraph A′_i.
The mean cost problem can be solved for an arbitrary strongly connected digraph with n nodes and m edges in O(nm) time and space using Karp's algorithm [30]. Noting that in our case m = O(n), and that the strongly connected component can be found in linear time by a textbook algorithm, we obtain the required time bound.
For the sake of completeness, let us describe Karp's algorithm for our case. Fix an arbitrary state q and define C(j, v) to be the minimum cost of a length-j walk from q to v, or ∞ if no such walk exists. The (n_i + 1) × n_i table with the values of C(j, v) for j = 0, …, n_i and all states v of A′_i is filled using the following dynamic programming rule:
C(j + 1, v) = min over edges z → v of (C(j, z) + cost(z, v)),   (7)
C(0, v) = 0 if v = q, and C(0, v) = ∞ otherwise.
According to [30] (Theorem 1), μ = min over the states v of A′_i of max over 0 ≤ j < n_i of (C(n_i, v) − C(j, v))/(n_i − j). □
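Karp's recurrence and the final minimization are short to implement. The sketch below is our own illustration on an arbitrary strongly connected weighted digraph, not the authors' code; in the setting above one assigns cost 0 to fixed edges and cost 1 to free edges, and then d_i = 1 − μ.

```python
import math

def min_cycle_mean(n, edges, source=0):
    """Karp's algorithm: minimum mean cost over all cycles of a strongly
    connected digraph on nodes 0..n-1; edges is a list of (z, v, cost)."""
    INF = math.inf
    # C[j][v] = minimum cost of a length-j walk from source to v (INF if none)
    C = [[INF] * n for _ in range(n + 1)]
    C[0][source] = 0.0
    for j in range(n):
        for z, v, cost in edges:
            if C[j][z] + cost < C[j + 1][v]:
                C[j + 1][v] = C[j][z] + cost
    # mu = min over v of max over j < n of (C(n,v) - C(j,v)) / (n - j)
    mu = INF
    for v in range(n):
        if C[n][v] == INF:
            continue
        ratio = max((C[n][v] - C[j][v]) / (n - j)
                    for j in range(n) if C[j][v] < INF)
        mu = min(mu, ratio)
    return mu
```

On the digraph with edges (0,1,0), (1,0,1), (1,2,1), (2,0,0), the two cycles have mean costs 1/2 and 1/3, and the function returns 1/3.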
Remark 3.
Karp's algorithm also allows one to retrieve a cycle of minimum mean cost. To do this, one stores the node z = P(j, v) which gives the minimum in the computation (7) of C(j, v) (here j = 1, …, n_i, and P(j, v) is undefined if C(j, v) = ∞). The n_i × n_i table P(j, v) is then used as follows. If u is a node on which the value of μ is reached, then we build the length-n_i walk q = u_0 → u_1 → ⋯ → u_{n_i} = u such that P(j + 1, u_{j+1}) = u_j for all j, and output any simple cycle from this walk. We will need the cycles of minimum mean cost in Section 4.3.

4.2. Lower Bounds on Branching Density

We implemented the above algorithm and ran it for all i ≤ 18; for i = 18, the memory required for the table C(j, v) is over 1 GB. The results are as follows.
Lemma 6.
One has d_1 = d_2 = 1/2, d_3 = ⋯ = d_11 = 7/13, d_12 = d_13 = d_14 = 13/24, d_15 = d_16 = d_17 = d_18 = 53/96.
Now we are ready to prove our first main result.
Theorem 3.
The branching density of an infinite binary cube-free word is at least 3509/9120 ≈ 0.38476.
Proof of Theorem 3. 
Let us fix some integer h ≥ 2. Theorem 1 and Corollary 1 together imply that the density of fixed positions in an infinite cube-free word is upper-bounded by d_{h−1} + 6/(5h). Trying all values of d_i from Lemma 6, we find that this bound is minimized at h = 19:
d_18 + 6/(5·19) = 53/96 + 6/95 = 5611/9120.
The statement of the theorem now follows. □
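The optimization over h behind this choice is a one-line check with exact rationals (the values of d_i are those of Lemma 6):

```python
from fractions import Fraction

# d_i from Lemma 6, keyed by the largest i of each constant stretch
d = {2: Fraction(1, 2), 11: Fraction(7, 13),
     14: Fraction(13, 24), 18: Fraction(53, 96)}

def density_bound(h, d_prev):
    """Upper bound d_{h-1} + 6/(5h) on the density of fixed positions."""
    return d_prev + Fraction(6, 5 * h)

best = density_bound(19, d[18])
assert best == Fraction(5611, 9120)
assert 1 - best == Fraction(3509, 9120)   # the branching density bound
assert best < density_bound(15, d[14])    # h = 19 beats, e.g., h = 15
```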
In the same way, we can get the lower bound for the ternary square-free language SF. From [19] (Lemma 5), we have the upper bound 2/h for the density of positions fixed by squares of periods ≥ h. Lemmas 4 and 5 and Corollary 1 have direct analogs for ternary square-free words; Proposition 1 and the algorithm inside remain valid for any automaton having at most two outgoing edges for each state. Running the algorithm for the regular approximations of SF up to i = 30, we obtained the corresponding numbers d_i. Taking h = 31 and adding 2/h to d_30 = 19/28, we arrive at the following bound.
Theorem 4.
The branching density of an infinite ternary square-free word is at least 223/868 ≈ 0.25691.
Recall that the growth rate of a factorial language L over the alphabet Σ is the limit lim_{n→∞} |L ∩ Σ^n|^{1/n}. The growth rate β of CF is known with quite high precision [9]: 1.4575732 ≤ β ≤ 1.4575773. In terms of the prefix tree, this means that for big n, the number of nodes at the (n+1)th level is approximately β times the number of nodes at the nth level. This fact makes β − 1 a lower bound on the fraction of branching nodes at the nth level (because this level also contains nodes having no children). In Theorem 5 below, we use Proposition 1 and Remark 3 to prove that there exist infinite cube-free words with branching density strictly smaller than β − 1.
The above considerations can also be applied to the growth rate γ of SF, 1.3017597 ≤ γ ≤ 1.3017619 [9]. However, it is still open whether an infinite square-free word can have branching density smaller than γ − 1. The method of Theorem 5 would not work for SF because the obtained values of d_i are too small.

4.3. Cube-Free Words with Small Branching Density

Theorem 5.
The minimum branching density of an infinite cube-free word is less than or equal to 13/29 ≈ 0.44828.
Proof. 
The result of Lemma 6 gives us an idea for constructing an infinite cube-free word with branching density less than β − 1. We see that 1 − d_15 ≈ 0.44792 < β − 1 (while 1 − d_14 ≈ 0.45833 > β − 1). Our aim is to construct an infinite cube-free word whose density of fixed positions is very close to d_15.
Using the table P(j, v) of Karp's algorithm (see Remark 3), we find that the automaton A_15 contains exactly four cycles C_1, C_2, C̄_1, and C̄_2, each of length 96, reaching the minimum mean cost (1 − d_15). All four cycles are disjoint; the labels of C̄_1 and C̄_2 are the complements of the labels of C_1 and C_2, respectively. We note that C_1 and C_2 are connected to each other by many edges. Let us consider a subgraph of A_15 consisting of C_1, C_2, and two edges connecting them, as in Figure 7.
Since in the Aho–Corasick automaton all edges entering a common node carry the same label (provided this node is distinct from λ), the paths from u to v and from u′ to v in Figure 7 are labeled by the same word x_1, while the paths from v to u′ and from v′ to u′ are labeled by the same word x_2. Denote the labels of the paths from v to u and from u′ to v′ by y_1 and y_2, respectively. Then the label of C_1 is x_1y_1 (starting from u), the label of C_2 is x_2y_2 (starting from v′), and there is an "outer" cycle C_3 with the label x_1x_2 (starting from u′). We also note that x_1 and y_2 (resp., x_2 and y_1) begin with different letters.
Analyzing the subgraph of A_15 generated by C_1 and C_2, we find the outer cycle C_3 with the smallest mean cost: it has length 156 and contains 86 fixed states. The corresponding values of x_1, x_2, y_1, y_2 are as follows:
x_1 = aabaabbaabaababaabaabbaabaababaabbaabaababaabaabbaabaababaabaabbaabaababbabbaab, |x_1| = 79;
x_2 = babbababbabbaabbabbababbabbaabbababbabbaabbabbababbabbaabbabbababbabbaabaabab, |x_2| = 77;
y_1 = aababbabbaabaabab, |y_1| = 17;
y_2 = babbaabaababbabbaab, |y_2| = 19.
Recall that the Thue–Morse word t is the fixed point of the morphism a ↦ ab, b ↦ ba:
t = t [ 1 . . ] = a b b a b a a b b a a b a b b a b a a b a b b a a b b a b a a b b a a b a b b a a b b a b a a b a b b a b a a b .
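A prefix of t, together with a brute-force check for overlaps, is easy to produce (helper names are ours):

```python
def thue_morse(n):
    """Length-n prefix of the fixed point of a -> ab, b -> ba."""
    t = 'a'
    while len(t) < n:
        t = ''.join('ab' if c == 'a' else 'ba' for c in t)
    return t[:n]

def has_overlap(w):
    """Check for a factor cucuc, i.e., a period-p factor of length 2p + 1."""
    return any(w[i:i + p + 1] == w[i + p:i + 2 * p + 1]
               for p in range(1, len(w) // 2 + 1)
               for i in range(len(w) - 2 * p))
```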
This word is overlap-free [10]; that is, it has no factors of the form cucuc, where c is a letter and u is a word. We map t to an infinite binary word by the mapping ϕ defined by two rules:
  • ϕ ( t [ 1 ] ) = x 1 ;
  • ϕ(t[i]) = x_1 if t[i] = a ≠ t[i−1];  y_1x_1 if t[i] = a = t[i−1];  x_2 if t[i] = b ≠ t[i−1];  y_2x_2 if t[i] = b = t[i−1].
In other terms, to get ϕ ( t ) , we replace each a (resp., b) in t with x 1 (resp., x 2 ), and then insert y i in the middle of each factor x i x i of the obtained word:
t = a · bb · ab · aa · bb · aa · ba · bb · a ⋯
ϕ(t) = x_1 · x_2y_2x_2 · x_1x_2 · x_1y_1x_1 · x_2y_2x_2 · x_1y_1x_1 · x_2x_1 · x_2y_2x_2 · x_1 ⋯
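The mapping ϕ can be implemented directly from the two rules. The block strings below are transcribed from the display above, and the cube check is brute force; this is an illustrative sketch, not the authors' code:

```python
X1 = ("aabaabbaabaababaabaabbaabaababaabbaabaababaabaabbaabaababaabaabbaa"
      "baababbabbaab")
X2 = ("babbababbabbaabbabbababbabbaabbababbabbaabbabbababbabbaabbabbababb"
      "abbaabaabab")
Y1 = "aababbabbaabaabab"
Y2 = "babbaabaababbabbaab"

def phi(t):
    """a -> x1, b -> x2, with y_i inserted inside every factor x_i x_i."""
    blocks = [X1]                      # rule 1: phi(t[1]) = x1
    for prev, cur in zip(t, t[1:]):    # rule 2
        if cur == 'a':
            blocks.append(Y1 + X1 if prev == 'a' else X1)
        else:
            blocks.append(Y2 + X2 if prev == 'b' else X2)
    return ''.join(blocks)

def has_cube(w):
    """Brute-force check for a factor xxx."""
    return any(w[i:i + p] == w[i + p:i + 2 * p] == w[i + 2 * p:i + 3 * p]
               for p in range(1, len(w) // 3 + 1)
               for i in range(len(w) - 3 * p + 1))
```

Applying phi to a Thue–Morse prefix and running has_cube on the result gives an empirical confirmation of the cube-freeness proved below.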
Thus, we naturally have a partition of ϕ ( t ) into factors which we call blocks, distinguishing between x-blocks x 1 , x 2 and y-blocks y 1 , y 2 . Note that
(∗)
x 1 and x 2 have no occurrences in ϕ ( t ) other than x-blocks.
Indeed, the factor x_1[4..9] = aabbaa does not occur in x_2, y_1, y_2, or on the border of any two blocks; the symmetric property holds for x_2[3..9] = bbababb.
Let us prove that ϕ(t) is cube-free. Assume, to the contrary, that ϕ(t) contains a cube; let XXX be the leftmost of the minimal cubes in ϕ(t). A direct check shows that all words x_iy_ix_i and x_ix_jx_i are cube-free. Therefore, XXX contains a whole x-block inside; below, x_i denotes the leftmost such block, and x_j denotes the x-block distinct from x_i. The period |X| of the cube cannot be smaller than the minimal period of x_i. It is easy to check that the only nontrivial period of x_i is |x_i| − 3. Thus, |X| ≥ |x_i| − 3 ≥ 74.
First, assume that x_i is the only x-block in XXX. Since the word x_jx_ix_j is cube-free, XXX is an internal factor of either x_iy_ix_ix_j or x_jx_iy_ix_i. Since |x_iy_ix_i|, |y_ix_ix_j| ≤ 175 and |XXX| ≥ 74·3 = 222, the length argument shows that y_ix_i in the first case, and x_iy_i in the second case, is a factor of XXX. Thus, |X| is not smaller than the minimum of the periods of the words x_iy_i and y_ix_i. This minimum equals 84, which is a period of both y_1x_1 and y_2x_2. Hence, |XXX| ≥ 252 = 2|x_i| + |x_j| + |y_i|. However, an internal factor of a word must be shorter than the word itself by at least two symbols; this contradiction shows that XXX contains more than one x-block. Therefore, XXX contains either x_iy_ix_i or x_ix_j.
Let XXX = v_1·x_iy_ix_i·v_2. By the choice of x_i, v_1 is a proper suffix of x_j, and v_2, if nonempty, begins with the first letter of x_j, which differs from the first letter of y_i. Thus, if |X| equals the minimum period of x_iy_ix_i, which is |x_iy_i|, then v_2 = λ and |XXX| < 3|X|, which is impossible. Hence, |X| > |x_iy_i|. Therefore, for each of the two blocks x_i, we see that XXX contains another factor x_i at distance |X|. By (∗), these factors are x-blocks; both are inside v_2 because v_1 is a suffix of an x-block. Then X contains at least one occurrence of the block x_i. As a result, XXX contains a factor x_iwx_iwx_i for some word w which is a product of blocks; here, x_iw is a cyclic shift of X. Taking the ϕ-pre-image of x_iwx_iwx_i, we obtain a factor of the form auaua or bubub in t, in contradiction with the overlap-freeness property.
Finally, let XXX = v_1·x_ix_j·v_2. The word x_ix_j has no periods smaller than |x_ix_j| − 1. Hence, |X| ≥ |x_i| + |x_j| − 1, and then X contains at least one of x_i, x_j. Since v_1 contains no x-blocks by the choice of x_i, v_1x_i has no factor x_j by (∗). Then X must contain x_i, and we arrive at a contradiction as in the previous paragraph. Thus, we have finally proved that ϕ(t) is cube-free.
The word ϕ(t) corresponds to an infinite walk from the node u in the subgraph of A_15 depicted in Figure 7. The walk reads x_1 (rule 1 in the definition of ϕ) and then respects rule 2; the details are as follows. Let x_i be just read; if the current letter of t coincides with the previous one, the walk returns to the "start" node of the same cycle C_i by reading y_i and reads x_i again; otherwise, the walk reads x_j, where j ≠ i. We know the fractions of fixed states in the cycles C_1, C_2, and C_3; to calculate the density of positions fixed by short cubes in ϕ(t), we use the folklore fact that the density of the set of positions i such that t[i..i+1] = aa (resp., ab, ba, bb) equals 1/6 (resp., 1/3, 1/3, 1/6). Then, in the partition of ϕ(t) into blocks, the densities of the blocks x_1 and x_2 are equal and twice the densities of the blocks y_1 and y_2. We group the blocks into labels of the cycles C_1, C_2, C_3:
ϕ(t) = [x_1x_2][y_2x_2][x_1x_2][x_1y_1][x_1x_2][y_2x_2][x_1y_1][x_1x_2][x_1x_2][y_2x_2][x_1 ⋯
Since x-blocks appear in the labels of two cycles while y-blocks appear in the label of one cycle, all cycles appear with the same density. Thus, to get the density of fixed letters, we take the total number of fixed states in C_1, C_2, and C_3 and divide it by the sum of the lengths of the cycles: (86 + 53 + 53)/(156 + 96 + 96) = 16/29. Then the branching density of ϕ(t) is at most 13/29. The theorem is proved. □
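A quick sanity check of this final count (each of C_1 and C_2, having length 96 and mean cost 1 − d_15, contains 96·(53/96) = 53 fixed states):

```python
from fractions import Fraction

fixed_density = Fraction(86 + 53 + 53, 156 + 96 + 96)
assert fixed_density == Fraction(16, 29)
assert 1 - fixed_density == Fraction(13, 29)   # branching density of phi(t)
```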
Remark 4.
In fact, the branching density of ϕ ( t ) is exactly 13 / 29 : refining the analysis of cube-freeness of ϕ ( t ) , it is possible to show that this word does not contain letters fixed by long cubes.

5. The Bounds on Maximum Branching Density

The branching density of infinite cube-free words can be much bigger than β − 1 ≈ 0.45758. The aim of this section is to prove the following theorem.
Theorem 6.
(1) The maximum branching density of an infinite binary cube-free word is less than 67/93 ≈ 0.72043.
(2) There exists an infinite binary cube-free word with branching density 18 / 25 = 0.72 .
Example 1.
The branching density of the Thue–Morse word t is 2/3. Indeed, t is overlap-free, and thus all fixed letters in it are fixed by 1-cubes. Hence, the fixed letters are exactly the letters a (resp., b) preceded by the 1-square bb (resp., by aa); in each case, the density of such positions is 1/6, as mentioned in Section 4.3. Thus, the density of fixed positions is 1/3.
The proof of Theorem 6 is based on the analysis of positions fixed by 1-cubes. The distance between two successive occurrences of squares of single letters in a cube-free word is 2 (aabb/bbaa), 3 (aabaa/bbabb), 4 (aababb/bbabaa), or 5 (aababaa/bbababb); it cannot be 1 (aaa/bbb) or 6 and more (aababab/bbababa) because of cube-freeness. Hence, if we know a prefix w[1..i] of a cube-free word, this prefix ends with a 1-square, and the distance d to the next 1-square is known, then we can uniquely reconstruct w[1..i+d]. We consider an auxiliary alphabet Δ = {2, 3, 4, 5} and refer to its elements as digits and to the words over it as codes. For every infinite cube-free word w, we define its distance code dist(w), an infinite word over Δ, as follows: dist(w)[i] is the distance between the ith and (i+1)th 1-squares in w (counting from the left). For example, one has
t = abbabaabbaababbabaababbaabbaba ⋯   dist(t) = 422444224 ⋯
Note that dist(w) determines w up to the complement and the few letters preceding the first 1-square; in particular, it determines the branching density of w if w is cube-free. Thus, instead of infinite cube-free words, here we study their distance codes. We extend the definition of a distance code to finite words in the obvious way; for example, dist(bbabaabb) = 42. Here, dist(w) determines w up to the complement, the letters preceding the first 1-square, and the letters following the last 1-square. We define the inverse of the map dist: for a code X ∈ Δ⁺, w = word(X) is the unique word which begins with aa, ends with a 1-square, and satisfies dist(w) = X. Clearly, word(X) has length [X] + 2, where [X] denotes the sum of digits in X, and has |X| letters fixed by 1-cubes. For example, the cube-free word word(2345) = aabbabbabaababaa has length 16 and 4 letters fixed by 1-cubes (the letter following each of the first four 1-squares). The same definition of word, with the condition on the end of the word omitted, applies to infinite codes.
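Both maps are easy to implement, since after each 1-square the letters must alternate until the next 1-square (function names are ours):

```python
def dist(w):
    """Distances between starting positions of successive 1-squares in w."""
    sq = [i for i in range(len(w) - 1) if w[i] == w[i + 1]]
    return [b - a for a, b in zip(sq, sq[1:])]

def word(X):
    """The word that begins with aa, ends with a 1-square, and has code X."""
    w = list('aa')
    for d in X:
        for _ in range(d - 1):                # alternate for d - 1 letters
            w.append('a' if w[-1] == 'b' else 'b')
        w.append(w[-1])                       # close the next 1-square
    return ''.join(w)
```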
Remark 5.
The word word(33) = aabaabaa is not a proper factor of any cube-free word; word(434) = aababbabbabaa contains (bab)³, as do word(435), word(534), and word(535). In the following list of cube-free words, each word word(X) has |X| letters fixed by 1-cubes; the additional letters fixed by 2- and 3-cubes are indicated:
word(2) = aabb; word(3) = aabaa; word(4) = aababb; word(5) = aababaa (the last letter is fixed by a 2-cube); word(234) = aabbabbabaa (the penultimate letter is fixed by a 3-cube); word(235) = aabbabbababb (the 10th letter is fixed by a 3-cube, the last one by a 2-cube); word(432) = aababbabbaa (the last letter is fixed by a 3-cube); word(532) = aababaabaabb (the 7th letter is fixed by a 2-cube, the last one by a 3-cube).
Proof of Theorem 6. 
Let A = (4⁴2)⁴4²2 and B = (4⁴2)⁴4⁴5. Let X be the image of the Thue–Morse word t under the substitution a ↦ A, b ↦ B. We prove Statement 2 by showing that u = word(X) is cube-free and has branching density 18/25 = 0.72. We first count the positions in u fixed by 1-cubes and 2-cubes. We have one position fixed by a 1-cube per digit of X, plus one position fixed by a 2-cube per digit 5 in X (see Remark 5). Each block A adds [A] = 82 letters to u, of which |A| = 23 are fixed by 1-cubes; each block B adds [B] = 93 letters, of which |B| = 25 are fixed by 1-cubes and one more is fixed by a 2-cube. Since A and B appear in X with the same density, the density of positions fixed by 1- and 2-cubes in u equals (|A| + |B| + 1)/([A] + [B]) = 49/175 = 7/25. Thus, to prove Statement 2 of the theorem, it is necessary and sufficient to show that no position in u is fixed by a k-cube with k > 2; that is, u contains no almost-cubes of the form xxx[1..|x|−1] with |x| > 2. Aiming at a contradiction, consider an almost-cube of x in u.
It is easy to check that word(24), word(42), word(44), and word(454) contain no almost-cubes and have no periods smaller than 6. Thus, |x| ≥ 6, and hence x contains a 1-square. Then |x| is the distance between two squares of the same letter. Each of word(2) and word(4) begins and ends with 1-squares of different letters, while word(5) begins and ends with 1-squares of the same letter. Hence, |x| = [Y] for some factor Y of X such that (i) the total number of 2's and 4's in Y is even, and (ii) YY is a factor of X. If Y contains no 5's, there are just two cases to check.
Case 1: Y = 44, |x| = 8. Long factors with period 8 are located in u, up to a complement, within the factors
word(244442) = a(abbabaab)²(abba)a (the period-8 factor has length 20);
word(244445) = a(abbabaab)²(abbaba)bb (length 22);
word(544442) = aa(babaabab)²(babaab)b (length 22);
their lengths are less than 8·3 − 1 = 23, the length required for an almost-cube.
Case 2: Y = (4⁴2)², |x| = 36. The code YY occurs in X only as a prefix of A or B and thus is preceded and followed by one of the factors 4²2 and 4⁴5. The longest factor of u with period 36, up to a complement, can be found in
word(2·4²2·(4⁴2)⁴·4⁴5) = a(abbabaababbaababbabaababbabaabbabaab)²(abbabaababbaababbabaababbabaab)abaa,
which is again too short for an almost-cube: its longest factor with period 36 has 102 letters, while 36·3 − 1 = 107 are required.
Therefore, 5 must occur in Y. Since YY is a factor of X, |Y| is the distance between two occurrences of 5 in X. Since 5 occurs in X only as a suffix of the block B, Y is a cyclic shift of some product of blocks. As above, we will show that the longest factor of u with period |x| = [Y] is shorter than 3|x| − 1. Let C = (4⁴2)⁴4²; then A = C2 and B = C445. Since t is overlap-free, the maximal factor of X with period |Y| looks like Y′YC, where Y′ is a product of blocks and a cyclic shift of Y. Then the longest factor of u with period |x| looks like v_1·word(Y′YC)·v_2. Here, |word(Y′YC)| = 2[Y] + [C] + 2 = 2[Y] + 82; note that [Y] ≥ [B] = 93. Further, v_1 is the common suffix obtained when decoding different digits (2 and 5), and v_2 is the common prefix obtained when decoding different digits (2 and 4). Hence, |v_1| = |v_2| = 1. In total, the length of the |x|-periodic factor is strictly smaller than 3[Y] − 1. Therefore, no almost-cubes are present in u. This proves Statement 2 of the theorem.
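Assuming the reading A = (4⁴2)⁴4²2 and B = (4⁴2)⁴4⁴5 of the block codes, the counting above and the absence of almost-cubes can be spot-checked on a prefix of u; the decoder and a Thue–Morse prefix are inlined to keep this sketch self-contained:

```python
from fractions import Fraction

A = [4, 4, 4, 4, 2] * 4 + [4, 4, 2]            # [A] = 82, |A| = 23
B = [4, 4, 4, 4, 2] * 4 + [4, 4, 4, 4, 5]      # [B] = 93, |B| = 25
assert (sum(A), len(A)) == (82, 23) and (sum(B), len(B)) == (93, 25)
assert Fraction(len(A) + len(B) + 1, sum(A) + sum(B)) == Fraction(7, 25)

def word(X):
    """Decode a distance code into the binary word starting with aa."""
    w = list('aa')
    for d in X:
        for _ in range(d - 1):
            w.append('a' if w[-1] == 'b' else 'b')
        w.append(w[-1])
    return ''.join(w)

def has_almost_cube(u, p_min=3):
    """Is there a factor of length 3p - 1 with period p >= p_min?"""
    for p in range(p_min, len(u) // 3 + 1):
        run = 0
        for k in range(len(u) - p):
            run = run + 1 if u[k] == u[k + p] else 0
            if run >= 2 * p - 1:
                return True
    return False

# prefix of X from the Thue-Morse prefix abbabaab
X = [d for c in 'abbabaab' for d in (A if c == 'a' else B)]
assert not has_almost_cube(word(X))
```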
For Statement 1, we take a cube-free word w of maximum branching density and consider its code dist(w). By Remark 5, each digit in dist(w) corresponds to a letter fixed by a 1-cube, and a digit 5 also corresponds to a letter fixed by a 2-cube. Since w has maximum branching density, its density of fixed positions is minimal and thus is upper-bounded by 7/25, the density attained by u. Since 7/25 is closer to 1/4 than to 1/3, the majority of digits in dist(w) are 4's. Since w has the same branching density as each of its suffixes, we assume w.l.o.g. that dist(w) begins with 4 and represent it as a sequence of blocks: each block consists of one or more 4's in the beginning and one or more other digits in the end. Note that the words word(54⁴5) and word(c4⁵d), for any digits c, d, contain an 8-cube (cf. Case 1 above). Then, a short search reveals all blocks providing a density of fixed positions not greater than 0.3:
4⁴2: 5/18 ≈ 0.27778;  4³5: 5/17 ≈ 0.29412;  4⁴5: 6/21 ≈ 0.28571;  4²2: 3/10 = 0.3;  4³2: 4/14 ≈ 0.28571.
(We recall that blocks containing 3's are restricted, as shown in Remark 5.) We note that word((4⁴2)⁵4⁴) and word((4⁴2)⁵4³5) contain 36-cubes, while word(4³24³24³) contains a 14-cube. As a result, the density of fixed letters in w cannot be smaller than such density in word(((4⁴2)⁴4⁴5)^∞), which is 26/93. This gives us the upper bound 67/93 on the branching density of w, as required. □
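The densities in this list, and the density 26/93 for the extremal pattern, can be recomputed mechanically: each digit contributes one letter fixed by a 1-cube, and each 5 contributes one more letter fixed by a 2-cube.

```python
from fractions import Fraction

def fixed_density(block):
    """Fixed positions per letter contributed by a block of digits."""
    return Fraction(len(block) + block.count(5), sum(block))

assert fixed_density((4, 4, 4, 4, 2)) == Fraction(5, 18)
assert fixed_density((4, 4, 4, 5)) == Fraction(5, 17)
assert fixed_density((4, 4, 4, 4, 5)) == Fraction(6, 21)
assert fixed_density((4, 4, 2)) == Fraction(3, 10)
assert fixed_density((4, 4, 4, 2)) == Fraction(4, 14)

B = (4, 4, 4, 4, 2) * 4 + (4, 4, 4, 4, 5)
assert fixed_density(B) == Fraction(26, 93)
assert 1 - fixed_density(B) == Fraction(67, 93)
```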

6. Discussion and Future Work

As we have seen in this paper, the branching density of particular infinite words in a typical power-free language of exponential growth can vary significantly. Thus, a natural question is to determine the average density. The first problem is to define what is “average”; we suggest that this should be the expected density of a word randomly chosen from all infinite binary cube-free words according to the distribution which is “uniform” in some sense. One possible way to choose a random infinite word is a random walk down the prefix tree (with all finite subtrees trimmed).
Another possible next step is to check whether the ternary square-free language SF, which is another "typical" power-free language of exponential growth, demonstrates the same patterns as CF. Currently, we do not know whether some infinite square-free words have branching density strictly less than γ − 1, where γ is the growth rate of SF. We also know no reasonable bound for the maximum branching density in SF.

Author Contributions

Methods and algorithms, A.M.S.; software and experiments, E.A.P.; theorems and proofs, A.M.S. and E.A.P.; writing—original draft preparation, A.M.S.; writing—review and editing, A.M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Higher Education of the Russian Federation (Ural Mathematical Center, project No. 075-02-2020-1537/1).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Thue, A. Über unendliche Zeichenreihen. Nor. Vid. Selsk. Skr. Mat. Nat. Kl. 1906, 7, 1–22.
  2. Restivo, A.; Salemi, S. Overlap free words on two symbols. In Automata on Infinite Words, Proceedings of the Ecole de Printemps d'Informatique Theorique, Le Mont Dore, France, 14–18 May 1984; Nivat, M., Perrin, D., Eds.; Springer: Berlin/Heidelberg, Germany, 1985; Volume 192, pp. 198–206.
  3. Shur, A.M.; Gorbunova, I.A. On the growth rates of complexity of threshold languages. RAIRO Inform. Théor. App. 2010, 44, 175–192.
  4. Karhumäki, J.; Shallit, J. Polynomial versus exponential growth in repetition-free binary words. J. Combin. Theory Ser. A 2004, 104, 335–347.
  5. Ochem, P. A generator of morphisms for infinite words. RAIRO Inform. Théor. App. 2006, 40, 427–441.
  6. Kolpakov, R.; Rao, M. On the number of Dejean words over alphabets of 5, 6, 7, 8, 9 and 10 letters. Theoret. Comput. Sci. 2011, 412, 6507–6516.
  7. Tunev, I.N.; Shur, A.M. On two stronger versions of Dejean's conjecture. In Proceedings of the 37th International Symposium on Mathematical Foundations of Computer Science (MFCS 2012), Bratislava, Slovakia, 27–31 August 2012; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7464, pp. 801–813.
  8. Currie, J.D.; Mol, L.; Rampersad, N. The number of threshold words on n letters grows exponentially for every n ≥ 27. J. Integer Seq. 2020, 23, 1–12.
  9. Shur, A.M. Growth properties of power-free languages. Comput. Sci. Rev. 2012, 6, 187–208.
  10. Thue, A. Über die gegenseitige Lage gleicher Teile gewisser Zeichenreihen. Nor. Vid. Selsk. Skr. Mat. Nat. Kl. 1912, 1, 1–67.
  11. Jungers, R.M.; Protasov, V.Y.; Blondel, V.D. Overlap-free words and spectra of matrices. Theoret. Comput. Sci. 2009, 410, 3670–3684.
  12. Guglielmi, N.; Protasov, V. Exact computation of joint spectral characteristics of linear operators. Found. Comput. Math. 2013, 13, 37–97.
  13. Carpi, A. On the centers of the set of weakly square-free words on a two-letter alphabet. Inform. Process. Lett. 1984, 19, 187–190.
  14. Shur, A.M. Deciding context equivalence of binary overlap-free words in linear time. Semigroup Forum 2012, 84, 447–471.
  15. Bean, D.A.; Ehrenfeucht, A.; McNulty, G. Avoidable patterns in strings of symbols. Pac. J. Math. 1979, 85, 261–294.
  16. Currie, J.D. On the structure and extendibility of k-power free words. Eur. J. Comb. 1995, 16, 111–124.
  17. Currie, J.D.; Shelton, R.O. The set of k-power free words over Σ is empty or perfect. Eur. J. Comb. 2003, 24, 573–580.
  18. Petrova, E.A.; Shur, A.M. Constructing premaximal ternary square-free words of any level. In Proceedings of the 37th International Symposium on Mathematical Foundations of Computer Science (MFCS 2012), Bratislava, Slovakia, 27–31 August 2012; Volume 7464, pp. 752–763.
  19. Petrova, E.A.; Shur, A.M. On the tree of ternary square-free words. In Combinatorics on Words, Proceedings of the 10th International Conference (WORDS 2015), Kiel, Germany, 14–17 September 2015; Springer: Cham, Switzerland, 2015; Volume 9304, pp. 223–236.
  20. Shelton, R. Aperiodic words on three symbols. II. J. Reine Angew. Math. 1981, 327, 1–11.
  21. Shelton, R.O.; Soni, R.P. Aperiodic words on three symbols. III. J. Reine Angew. Math. 1982, 330, 44–52.
  22. Petrova, E.A.; Shur, A.M. Constructing premaximal binary cube-free words of any level. Internat. J. Found. Comp. Sci. 2012, 23, 1595–1609.
  23. Petrova, E.A.; Shur, A.M. On the tree of binary cube-free words. In Developments in Language Theory, Proceedings of the 21st International Conference (DLT 2017), Liège, Belgium, 7–11 August 2017; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; Volume 10396, pp. 296–307.
  24. Lothaire, M. Combinatorics on Words; Encyclopedia of Mathematics and Its Applications, Volume 17; Addison-Wesley: Reading, MA, USA, 1983.
  25. Lyndon, R.C.; Schützenberger, M.P. The equation a^M = b^N c^P in a free group. Mich. Math. J. 1962, 9, 289–298.
  26. Fine, N.J.; Wilf, H.S. Uniqueness theorems for periodic functions. Proc. Am. Math. Soc. 1965, 16, 109–114.
  27. Crochemore, M.; Mignosi, F.; Restivo, A. Automata and forbidden words. Inform. Process. Lett. 1998, 67, 111–117.
  28. Shur, A.M. Growth rates of complexity of power-free languages. Theoret. Comput. Sci. 2010, 411, 3209–3223.
  29. Petrova, E.A.; Shur, A.M. Transition property for cube-free words. In Computer Science—Theory and Applications, Proceedings of the 14th International Computer Science Symposium in Russia (CSR 2019), Novosibirsk, Russia, 1–5 July 2019; Lecture Notes in Computer Science; Springer: Berlin, Germany, 2019; Volume 11532, pp. 311–324.
  30. Karp, R.M. A characterization of the minimum cycle mean in a digraph. Discret. Math. 1978, 23, 309–311.
Figure 1. A fragment of the prefix tree of the binary cube-free language CF. Branching nodes and free edges are green, while fixed nodes and fixed edges are red. Nodes can be restored from the labels of paths.
Figure 2. The restrictions on fixed positions in a cube-free word. If t is fixed by a p-cube and (t + l) is fixed by a q-cube, then q (as a function of l with parameter p) must lie outside the red polygon, including the red border lines. The cases q > 2p and q < p/2 are not considered.
Figure 3. Location of factors of w for Zone I: q > (l + 2p)/3, q < 2p, 2q < l + p. The leftmost Q consists of a suffix of P, followed by P and a prefix of P; P = xzya is partitioned accordingly.
Figure 4. Location of factors of w for Zone II: q ≤ (l + 2p)/3, q > p, 2q > l. The leftmost Q consists of a suffix of P, followed by a longer prefix of P; P = zyxa is partitioned accordingly.
Figure 5. Location of factors of w for Zone III: q < p, q ≤ l, 3q > l + p. The leftmost Q consists of a suffix of P, followed by a shorter prefix of P; the middle Q ends with some suffix y outside P, possibly empty; P = zvxa is partitioned accordingly.
Figure 6. Location of factors of w for Zone IV: q > p/2, q > l, 2q < l + p. The leftmost Q consists of a suffix of P, followed by a shorter prefix of P; the middle Q is a proper factor of P; P = zvxa is partitioned accordingly.
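The zone conditions in the captions of Figures 3–6 can be tested exactly by clearing denominators and comparing integers. The following helper is a sketch for illustration only; the function name and zone labels are ours, not part of the paper's code.

```python
def zones(l, q, p):
    """Return the set of zones (per the captions of Figures 3-6)
    whose conditions the point (l, q) satisfies, for a given p.
    Fractions are cleared so only integer arithmetic is used."""
    z = set()
    # Zone I:   q > (l + 2p)/3,  q < 2p,  2q < l + p
    if 3*q > l + 2*p and q < 2*p and 2*q < l + p:
        z.add("I")
    # Zone II:  q <= (l + 2p)/3,  q > p,  2q > l
    if 3*q <= l + 2*p and q > p and 2*q > l:
        z.add("II")
    # Zone III: q < p,  q <= l,  3q > l + p
    if q < p and q <= l and 3*q > l + p:
        z.add("III")
    # Zone IV:  q > p/2,  q > l,  2q < l + p
    if 2*q > p and q > l and 2*q < l + p:
        z.add("IV")
    return z
```

For example, with p = 12 the point (l, q) = (36, 21) satisfies only the Zone I conditions, while (3, 7) falls only in Zone IV.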
Figure 7. Building the infinite cube-free word of small branching density using the cycles of low mean cost in A_15.
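Cycles of minimum mean cost, as used in the construction of Figure 7, can be found with Karp's classical algorithm. A minimal self-contained sketch (the edge-list representation is our own choice, not the authors' implementation):

```python
def min_mean_cycle(n, edges):
    """Minimum mean cost of a cycle in a digraph (Karp's algorithm).

    n     -- number of vertices, labelled 0..n-1
    edges -- list of (u, v, cost) arcs
    Returns float('inf') if the graph is acyclic.
    Initialising d[0][v] = 0 for every v (a virtual zero-cost source)
    is the standard variant that needs no fixed start vertex."""
    INF = float('inf')
    # d[k][v] = minimum cost of a k-edge walk ending at v
    d = [[INF] * n for _ in range(n + 1)]
    d[0] = [0] * n
    for k in range(1, n + 1):
        for u, v, c in edges:
            if d[k-1][u] + c < d[k][v]:
                d[k][v] = d[k-1][u] + c
    best = INF
    for v in range(n):
        if d[n][v] < INF:
            best = min(best,
                       max((d[n][v] - d[k][v]) / (n - k)
                           for k in range(n) if d[k][v] < INF))
    return best
```

Applied to the transition graph of an automaton such as A_15 with suitable 0/1 edge costs (e.g., cost 1 on edges entering branching states, an assumption on our part), the returned value is the smallest achievable density along a cycle.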
Table 1. Example words w with the pair (l, q) in the green corner of the red zone boundary in Figure 2. One can take X = abbaab or a longer overlap-free word of similar structure. In the word column, the occurrence of PPP is set in bold and QQQ is underlined in the original typesetting.

Point                Word w
l = p/2,  q = p/2    bXbXaXbXaXbXbXb
l = 2p/5, q = 3p/5   bXbXbXaXbXaXbXbXaXbXaXbXbXaXbXbXaXb
l = 2p,   q = p      bXaXaXbXbXb
l = 8p/3, q = 5p/3   bXbXaXaXbXaXaXbXaXbXaXaXbXaXbXaXaXb
l = 4p,   q = 2p     bXaXaXbXaXbXaXb
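The cube structure of the words in Table 1 can be checked mechanically by brute force. The helper below is ours, for illustration; it scans a word for every period p admitting a cube u·u·u with |u| = p. With X = abbaab (|X| = 6), the word for the row l = 2p, q = p ends with the cube (Xb)(Xb)(Xb) of period 7.

```python
def cube_periods(w):
    """Return all periods p such that w contains a cube uuu with |u| = p."""
    found = set()
    n = len(w)
    for p in range(1, n // 3 + 1):
        for i in range(n - 3 * p + 1):
            # w[i : i+3p] is a p-cube iff its first 2p letters
            # coincide with its last 2p letters
            if w[i:i + 2*p] == w[i + p:i + 3*p]:
                found.add(p)
    return found

X = "abbaab"                      # overlap-free, hence cube-free
w3 = "b" + X + "a" + X + "a" + X + "b" + X + "b" + X + "b"  # row l = 2p, q = p
```

Here `cube_periods(X)` is empty, while `cube_periods(w3)` contains 7, confirming the q-cube of period |X| + 1.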
Petrova, E.A.; Shur, A.M. Branching Densities of Cube-Free and Square-Free Words. Algorithms 2021, 14, 126. https://doi.org/10.3390/a14040126