The root extraction problem for generic braids

We show that, generically, finding the $k$-th root of a braid is very fast. More precisely, we provide an algorithm which, given a braid $x$ on $n$ strands with canonical length $l$, and an integer $k>1$, computes a $k$-th root of $x$, if it exists, or guarantees that such a root does not exist. The generic-case complexity of this algorithm is $O(l(l+n)n^3\log n)$. The non-generic cases are treated using a previously known algorithm by Sang-Jin Lee.


Introduction
There are several computational problems in braid groups that have been proposed for their potential applications to cryptography [9]. Initially, the conjugacy problem in the braid group B_n was proposed as a non-commutative alternative to the discrete logarithm problem [1,18]. Later, some other problems were proposed, including the k-th root extraction problem: given x ∈ B_n and an integer k > 1, find a ∈ B_n such that a^k = x.
The interest of braid groups for cryptography has decreased considerably, mainly due to the appearance of algorithms which solve the conjugacy problem extremely fast in the generic case [13,14,15]. The main problem with the proposed cryptographic protocols turns out to be the key generation. Public and secret keys are chosen 'at random', and this implies that the protocols are insecure against algorithms which have a fast generic-case complexity.
While the future of braid-cryptography depends on finding a good key-generation procedure, there are some other problems in braid groups whose generic-case complexity is still to be studied. This is the case of the k-th root (extraction) problem.
A priori, the study of the generic case for the k-th root problem might be thought pointless as, generically, the k-th root of a braid x does not exist. But we should think of the braid x as the k-th power of a generic braid: in protocols based on this problem, a secret braid a is chosen at random, and the braid x = a^k is made public. Hence we are dealing with braids for which a k-th root is known to exist. In any case, the algorithm in this paper not only shows that root extraction in braid groups is generically very fast, but can also be used by those mathematicians needing a simple algorithm for finding a k-th root of a braid (or proving that it does not exist), which works in most cases.
There are already known algorithms to solve the k-th root problem in braid groups and, more generally, in Garside groups [20,19]. But these algorithms can be simplified a lot in the generic case, as we will show in this paper.
The plan of this paper is as follows. In Section 2 we provide the necessary tools to describe the situation and attack the problem. Then in Section 3, we prove the theoretical results needed for our proposed algorithm, which is given in Section 4, together with the study of its generic-case complexity.
This generic-case complexity turns out to be quadratic in the canonical length l of the braid, if the number n of strands is fixed. More precisely, the generic-case complexity is O(l(l + n)n^3 log n) (Theorem 22).

Preliminaries
2.1 Garside structure of B_n

A group G is said to be a Garside group [10] if it admits a submonoid P (whose elements are called positive) such that P ∩ P^{-1} = {1}, and a special element ∆ ∈ P, called Garside element, satisfying the following properties:

• The partial order ≼ in G defined by a ≼ b if a^{-1}b ∈ P is a lattice order. If a ≼ b we say that a is a prefix of b. The lattice structure implies that for all a, b ∈ G there exists a unique meet a ∧ b and a unique join a ∨ b with respect to ≼. Notice that this partial order is invariant under left-multiplication.
• The set of simple elements S := {s ∈ G | 1 ≼ s ≼ ∆} is finite and generates G.
• P is atomic: the atoms are the indivisible elements of P (elements a ∈ P for which there is no decomposition a = bc with non-trivial elements b, c ∈ P). Then, for every x ∈ P there is an upper bound on the number of atoms in a decomposition of the form x = a_1 a_2 ⋯ a_m, where each a_i is an atom.
One of the main examples of Garside groups is the braid group on n strands, denoted by B_n. This group has a standard presentation due to Artin [2]:

B_n = ⟨ σ_1, …, σ_{n−1} | σ_i σ_j = σ_j σ_i if |i − j| > 1;  σ_i σ_{i+1} σ_i = σ_{i+1} σ_i σ_{i+1} for all i ⟩.

Attending to the above presentation, a braid is said to be positive if it can be written as a product of positive powers of the generators σ_1, …, σ_{n−1}. The set of positive braids forms the monoid P corresponding to the classical Garside structure of B_n. We will denote this monoid by B_n^+.
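As a quick sanity check, the Artin relations can be verified in the symmetric group S_n, which is a quotient of B_n (sending σ_i to the transposition of i and i+1). The helper names below (`compose`, `transposition`) are ad hoc for this illustration, not part of any braid library:

```python
# Check the Artin relations in S_n, a quotient of B_n.
# Permutations are tuples p with p[i] the image of position i (0-indexed).

def compose(f, g):
    """Permutation applying f first, then g."""
    return tuple(g[f[i]] for i in range(len(f)))

def transposition(n, i):
    """Image of the Artin generator sigma_i (1-indexed) in S_n."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

n = 5
s = {i: transposition(n, i) for i in range(1, n)}

# Braid relation: sigma_i sigma_{i+1} sigma_i = sigma_{i+1} sigma_i sigma_{i+1}
for i in range(1, n - 1):
    lhs = compose(compose(s[i], s[i + 1]), s[i])
    rhs = compose(compose(s[i + 1], s[i]), s[i + 1])
    assert lhs == rhs

# Far commutation: sigma_i sigma_j = sigma_j sigma_i for |i - j| > 1
for i in range(1, n):
    for j in range(i + 2, n):
        assert compose(s[i], s[j]) == compose(s[j], s[i])
```

Of course the quotient only detects relations, not their absence; it is merely a consistency check on the presentation.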
The usual Garside element in B_n^+, which we denote ∆_n, is defined recursively by setting ∆_2 = σ_1 and ∆_n = ∆_{n−1} σ_{n−1} σ_{n−2} ⋯ σ_1 for all n > 2. We will often write ∆ and omit the subindex n when there is no ambiguity.
Consider now the inner automorphism τ : B_n → B_n determined by ∆. That is, τ(x) = ∆^{-1} x ∆. One can easily show from the presentation of B_n that τ(σ_i) = σ_{n−i} for 1 ≤ i ≤ n − 1. Hence τ has order 2 and ∆^2 is central. In fact, the center of B_n is cyclic, generated by ∆^2 [8].
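The recursion for ∆_n and the index-flipping behaviour of τ can both be checked in the permutation quotient used above: ∆_n maps to the order-reversing permutation. A minimal sketch, with ad hoc helper names:

```python
# Delta_2 = sigma_1, Delta_n = Delta_{n-1} sigma_{n-1} ... sigma_1,
# checked in the permutation quotient of B_n.

def compose(f, g):
    """Apply f first, then g (permutations as tuples)."""
    return tuple(g[f[i]] for i in range(len(f)))

def transposition(n, i):
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def delta_word(n):
    """Delta_n as a list of generator indices."""
    if n == 2:
        return [1]
    return delta_word(n - 1) + list(range(n - 1, 0, -1))

def word_to_perm(n, word):
    p = tuple(range(n))
    for i in word:
        p = compose(p, transposition(n, i))
    return p

n = 6
w = delta_word(n)
assert len(w) == n * (n - 1) // 2          # length of Delta in atoms
w0 = word_to_perm(n, w)
assert w0 == tuple(reversed(range(n)))     # the half twist reverses the strands

# tau(sigma_i) = sigma_{n-i}: conjugating by Delta flips the generator index
for i in range(1, n):
    tau_si = compose(compose(w0, transposition(n, i)), w0)
    assert tau_si == transposition(n, n - i)
```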
The set S of simple elements and the automorphism τ will be very important in the sequel.

Normal forms, cyclings and decyclings
It is well-known that Garside groups have solvable word problem, as one can compute a normal form for each element.
Let us first define the right complement of a simple element s ∈ S as ∂(s) = s^{-1}∆. That is, ∂(s) is the only element t ∈ P such that st = ∆. Let us see that ∂(s) = t is also a simple element. Recall that the simple elements are the positive prefixes of ∆. Since τ preserves P (by definition of Garside group), we have that τ(s) is positive. Now stτ(s) = ∆τ(s) = s∆, hence tτ(s) = ∆, which implies that t is a positive prefix of ∆, that is, t ∈ S. It follows that we have a map ∂ : S → S. Notice that, by definition, ∂^2 ≡ τ.
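In the classical Garside structure, simple braids correspond bijectively to permutations, so ∂ and the identity ∂^2 ≡ τ can be tested exhaustively at the permutation level. A sketch under that identification (all function names are ad hoc):

```python
# Simple braids <-> permutations; Delta corresponds to the reversal w0.
# The complement d(s) is determined by s * d(s) = Delta.

import itertools

def compose(f, g):
    """Apply f first, then g (permutations as tuples)."""
    return tuple(g[f[i]] for i in range(len(f)))

def inverse(f):
    inv = [0] * len(f)
    for i, fi in enumerate(f):
        inv[fi] = i
    return tuple(inv)

n = 4
w0 = tuple(reversed(range(n)))              # permutation induced by Delta

def complement(s):
    """Permutation of d(s): s followed by d(s) gives Delta."""
    return compose(inverse(s), w0)

def tau(s):
    """tau(s) = Delta^{-1} s Delta at the permutation level."""
    return compose(compose(w0, s), w0)

for s in itertools.permutations(range(n)):
    assert compose(s, complement(s)) == w0          # s * d(s) = Delta
    assert complement(complement(s)) == tau(s)      # d^2 = tau
```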
Given two simple elements s, t ∈ S, we say that the decomposition st is left weighted if s is the biggest possible simple element (with respect to ≼) in any decomposition of the element st as a product of two simple elements. This condition can be restated as ∂(s) ∧ t = 1, i.e., ∂(s) and t have no non-trivial prefixes in common.
Definition 1. Every element x of a Garside group admits a unique decomposition x = ∆^p x_1 ⋯ x_l, called its left normal form, where p ∈ Z, l ≥ 0, each x_i ∈ S \ {1, ∆}, and x_i x_{i+1} is left weighted for i = 1, …, l − 1.

Given such a decomposition, we define the infimum, supremum and canonical length of x as inf(x) = p, sup(x) = p + l and ℓ(x) = l, respectively. Equivalently, the infimum and supremum of x can be defined as the maximum and minimum integers p and s so that ∆^p ≼ x ≼ ∆^s (see [11]).
It is important to notice that conjugation by ∆ preserves the Garside structure of B_n. Hence, if the left normal form of a braid x is ∆^p x_1 ⋯ x_l, then the left normal form of τ(x) is ∆^p τ(x_1) ⋯ τ(x_l). We will make use of this property later.
Garside groups also have solvable conjugacy problem. One of the main tools to solve problems related to conjugacy in braid groups are the summit sets, which are subsets of the conjugacy class of a braid. Throughout this article we are going to use two of them: the super summit set [11] and the ultra summit set [13]. Let us first introduce some concepts:

Definition 2. Let x = ∆^p x_1 ⋯ x_l be in left normal form, with l > 0. Notice that we can write x = τ^{-p}(x_1) ∆^p x_2 ⋯ x_l. We define the initial factor of x as ι(x) = τ^{-p}(x_1), and the final factor of x as ϕ(x) = x_l. We can then write:

x = ι(x) ∆^p x_2 ⋯ x_l = ∆^p x_1 ⋯ x_{l−1} ϕ(x).

If l = 0, we set ι(x) = 1 and ϕ(x) = ∆.
Notice that, as τ^2 is the identity, we actually have either ι(x) = x_1 if p is even, or ι(x) = τ(x_1) if p is odd. This happens in braid groups, but not in other Garside groups, in which the order of τ may be bigger.

Definition 3.
[11] Let x = ∆^p x_1 ⋯ x_l be in left normal form, with l > 0. The cycling and decycling of x are the conjugates of x defined, respectively, as

c(x) = ι(x)^{-1} x ι(x) = ∆^p x_2 ⋯ x_l ι(x),    d(x) = ϕ(x) x ϕ(x)^{-1} = ∆^p τ^p(x_l) x_1 ⋯ x_{l−1}.

Thus c(x) is the conjugate of x by ι(x), and d(x) is the conjugate of x by ϕ(x)^{-1}.
Cyclings and decyclings were defined in [11] in order to try to simplify the braid x by conjugations. Usually, if l ≥ 2, the decomposition ∆^p x_2 ⋯ x_l ι(x) is not the left normal form of c(x). So c(x) could a priori have a shorter normal form (with fewer factors). A similar situation happens for d(x).
If ∆^p x_2 ⋯ x_l ι(x) is actually the left normal form of c(x) (when l ≥ 2), we say that the braid x is rigid. This happens if and only if x_l ι(x) (that is, ϕ(x)ι(x)) is a left weighted decomposition. We can extend this definition to every case, when l ≥ 0: x is rigid if and only if ϕ(x)ι(x) is a left weighted decomposition. If x is rigid, neither cycling nor decycling can simplify its normal form x = ∆^p x_1 ⋯ x_l. Actually, the normal forms of the iterated cyclings of x are, if p is even:

c(x) = ∆^p x_2 ⋯ x_l x_1,    c^2(x) = ∆^p x_3 ⋯ x_l x_1 x_2,    …

so c^l(x) = x in this case. In the case when p is odd we have:

c(x) = ∆^p x_2 ⋯ x_l τ(x_1),    c^2(x) = ∆^p x_3 ⋯ x_l τ(x_1) τ(x_2),    …

so c^l(x) = τ(x) and c^{2l}(x) = x. In the same way, if x is rigid we have, for p even:

d(x) = ∆^p x_l x_1 ⋯ x_{l−1},    so d^l(x) = x.

If p is odd we get:

d(x) = ∆^p τ(x_l) x_1 ⋯ x_{l−1},    so d^l(x) = τ(x) and d^{2l}(x) = x.
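For rigid braids this bookkeeping is easy to simulate with a toy model: a pair (p, factors), where the factors are opaque labels and τ is an arbitrary involution on labels (here, hypothetically, swapping case). This illustrates only the combinatorics of cycling and decycling, not actual braid arithmetic:

```python
# Toy model of a rigid braid: (p, factors), with tau an assumed involution
# on factor labels (swapcase is a stand-in, not real braid data).

def tau_factor(f):
    return f.swapcase()   # hypothetical involution on labels

def cycling(p, factors):
    """Cycling of a rigid braid: move tau^p(first factor) to the end."""
    moved = tau_factor(factors[0]) if p % 2 else factors[0]
    return p, factors[1:] + [moved]

def decycling(p, factors):
    """Decycling of a rigid braid: move tau^p(last factor) to the front."""
    moved = tau_factor(factors[-1]) if p % 2 else factors[-1]
    return p, [moved] + factors[:-1]

# p even: l cyclings return the braid itself
p, fs = 4, ['a', 'b', 'c']
q, gs = p, list(fs)
for _ in range(len(fs)):
    q, gs = cycling(q, gs)
assert (q, gs) == (p, fs)

# p odd: l cyclings give tau of the braid (every factor conjugated by Delta)
p, fs = 3, ['a', 'b', 'c']
q, gs = p, list(fs)
for _ in range(len(fs)):
    q, gs = cycling(q, gs)
assert gs == [tau_factor(f) for f in fs]
```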

Summit sets
Let now x ∈ B_n be an arbitrary braid (not necessarily rigid). Consider the conjugacy class of x, denoted x^{B_n}, and write inf_s(x) (resp. sup_s(x)) for the maximal infimum (resp. the minimal supremum) of an element in x^{B_n}. These numbers are known to exist [11], and are called the summit infimum and the summit supremum of x, respectively. Set ℓ_s(x) = sup_s(x) − inf_s(x), the summit length of x. It is shown in [11] that the elements in x^{B_n} having the shortest possible normal form are those whose canonical length is precisely ℓ_s(x), and they coincide with the elements whose infimum and supremum are equal to inf_s(x) and sup_s(x), respectively. The set formed by these elements is called the super summit set of the braid x:

SSS(x) = {y ∈ x^{B_n} | inf(y) = inf_s(x) and sup(y) = sup_s(x)}.

Starting from x, it is possible to obtain an element in SSS(x) by applying cyclings and decyclings iteratively. It is known [11] that if inf(x) < inf_s(x) then the infimum of x can be increased by iterated cycling. Actually, in this case inf(x) < inf(c^k(x)) for some k < n(n−1)/2 (see [5]). Hence, every n(n−1)/2 iterations either the infimum has increased, or one is sure to have an element whose infimum is the summit infimum.
In the same way, if sup(x) > sup_s(x), then the supremum of x can be decreased by iterated decycling [11], and in that case sup(x) > sup(d^k(x)) for some k < n(n−1)/2 [5]. Hence, every n(n−1)/2 iterations either the supremum has decreased, or we are sure to have an element whose supremum is the summit supremum.
Since decycling can never decrease the infimum of an element, it follows that starting with any x ∈ B n and applying iterated cycling (until summit infimum is obtained) followed by iterated decycling (until summit supremum is obtained) yields an element y ∈ SSS(x).
The super summit set SSS(x) is a finite set, but it is usually huge, so smaller subsets of the conjugacy class of x were defined in order to solve the conjugacy problem of x more efficiently. Namely, the ultra summit set of x, denoted by USS(x), is a subset of SSS(x) defined as follows [13]:

USS(x) = {y ∈ SSS(x) | c^m(y) = y for some m > 0}.

Since SSS(x) is finite, the subset USS(x) is also finite. It is then clear that one obtains an element in USS(x) by iterated application of cycling, starting from an element in SSS(x), when a repeated element is obtained. Actually, the whole orbit under cycling of an element in USS(x) belongs to USS(x). So USS(x) is a finite set of orbits under cycling.
Notice that every rigid braid belongs to its ultra summit set, as cyclings and decyclings are basically cyclic permutations of its factors. It is shown in [3] that, if x is conjugate to a rigid braid and ℓ_s(x) > 1, then USS(x) coincides with the set of rigid conjugates of x.
There is actually a simpler way, in the general case, to obtain an element in USS(x) starting from x. Instead of using cyclings and decyclings, one can use a single type of conjugation, called cyclic sliding [14]: the cyclic sliding of x is the conjugate s(x) = p(x)^{-1} x p(x), where p(x) = ι(x) ∧ ∂(ϕ(x)) is called the preferred prefix of x. It is shown in [14] that there are integers k ≥ 0 and m > 0 such that s^{k+m}(x) = s^k(x). For every such pair of integers, one has s^k(x) ∈ USS(x).
By the above result, one can obtain an element in USS(x) by iterated cyclic sliding starting from x. Furthermore, if x is conjugate to a rigid element (this will be the generic situation, as we will see in Subsection 2.4), iterated cyclic sliding yields the shortest positive conjugating element from x to a rigid element.

Theorem 7. [14] Let x ∈ B_n and suppose that x is conjugate to a rigid braid. Then there is an integer k > 0 such that s^k(x) is rigid. Moreover, the conjugating element α from x to s^k(x), that is, the product of the preferred prefixes

α = p(x) p(s(x)) ⋯ p(s^{k−1}(x)),

is the smallest positive element (with respect to ≼) conjugating x to a rigid element, meaning that for every positive element β such that β^{-1}xβ is rigid, one has α ≼ β.
After obtaining one element in USS(x), it is possible to compute all elements in USS(x) together with conjugating elements connecting them. In this way, one solves the conjugacy problem in B_n, as two elements x and y are conjugate if and only if USS(x) = USS(y) or, equivalently, if USS(x) ∩ USS(y) ≠ ∅. Then, in order to check whether x and y are conjugate, one can compute the whole set USS(x), and one element ỹ ∈ USS(y). Then, x and y are conjugate if and only if ỹ ∈ USS(x). By construction, one can even compute a conjugating element from x to y.
In order to understand the forthcoming proofs in this paper, we will need to describe some conjugating elements connecting the elements of U SS(x).

Definition 8.
[13] Let x ∈ B_n and y ∈ USS(x). A simple non-trivial element s ∈ S is said to be a minimal simple element for y if y^s ∈ USS(x) and y^t ∉ USS(x) for every 1 ≺ t ≺ s.
In [13], Gebhardt showed that for any two elements y, z ∈ USS(x) there exists a sequence

y = y_1 → y_2 → ⋯ → y_{t+1} = z,

where c_i is a minimal simple element for y_i, and y_{i+1} = c_i^{-1} y_i c_i, for i = 1, …, t. Moreover, he introduced an algorithm to compute all minimal simple elements for a given y ∈ USS(x). This allows one to construct a directed graph Γ_x, whose vertices correspond to elements of USS(x), and whose arrows correspond to minimal simple elements, in such a way that for every minimal simple element s for y, there is an edge with label s from y to y^s = s^{-1}ys. By the above discussion, it follows that Γ_x is a connected graph, and this is why USS(x) can be computed starting with a single vertex, iteratively computing the minimal simple elements corresponding to each known vertex, until all vertices are obtained.
We will later see that, generically, ultra summit sets are really small. Actually, they usually have a very simple structure, that we explain now.

Lemma 9.
[4] Let y ∈ USS(x) with ℓ(y) > 0, and let s be a minimal simple element for y. Then s is a prefix of either ι(y) or ∂(ϕ(y)), or both.
The above lemma allows us to classify the arrows in Γ_x into two groups: a directed edge labelled by s starting at y ∈ USS(x) is black (resp. grey) if s is a prefix of ι(y) (resp. of ∂(ϕ(y))). In principle, an edge could be of both colors at the same time (a bi-colored arrow, whose label is a prefix of both ι(y) and ∂(ϕ(y))), but not in the case of rigid braids, as ι(y) ∧ ∂(ϕ(y)) = 1 if y is rigid. Actually, this is a necessary and sufficient condition:

Lemma 10. A braid y ∈ USS(x) with ℓ(y) > 0 is rigid if and only if none of the edges starting at y is bi-colored.
Definition 11. Given a braid x ∈ B_n, its associated USS(x) is minimal if ℓ_s(x) > 1 and, for every vertex y in the graph Γ_x, there are exactly two directed edges starting at y: a black one labeled ι(y) and a grey one labeled ∂(ϕ(y)).
Notice that, as a consequence of Lemma 10, if USS(x) is minimal then all elements in USS(x) are rigid. Moreover, the arrow labeled ι(y) corresponds to a cycling of y, and the arrow labeled ∂(ϕ(y)) corresponds to a twisted decycling of y, meaning a decycling followed by the automorphism τ. This implies that, if USS(x) is minimal, the elements of USS(x) are obtained from y by applying c and τ ∘ d in every possible way. Since y is rigid, cyclings and decyclings basically correspond to cyclic permutations of the factors. Therefore, if USS(x) is minimal, it consists of either two orbits under cycling (conjugate to each other by ∆), or one orbit under cycling (conjugate to itself by ∆). If the infimum of y is even, the orbit of y has at most ℓ(y) = ℓ_s(x) ≤ ℓ(x) elements, so the size of USS(x) is at most 2ℓ(x). If the infimum of y is odd, the orbit of y has at most 2ℓ(y) ≤ 2ℓ(x) elements, and it is conjugate to itself by ∆, so it is the only orbit. Therefore, in any case, if USS(x) is minimal it has at most 2ℓ(x) elements.
Remark 12. In order to see whether USS(x) is minimal, one should a priori check the condition in Definition 11 for every element in USS(x). But it is actually shown in [17, Theorem 4.6] that, given y ∈ USS(x), the set USS(x) is minimal if and only if ℓ(y) > 1 and the minimal simple elements for y are precisely ι(y) and ∂(ϕ(y)). Hence, one just needs to compute the minimal simple elements for a single arbitrary element y ∈ USS(x).
Let us see that this case, in which USS(x) is so small and has such a simple structure, is generic.

Generic braids
Since B_n is an infinite set, it is necessary to explain what we mean by 'picking a random braid' or by saying that a braid is 'generic'. Even if we fix the subset of braids of a given length, we must specify whether we choose braids from the subset with a uniform distribution, or pick braids by performing a random walk in the Cayley graph, which are the two usual situations.
We will consider the Cayley graph of the braid group B_n, taking as generators the simple braids, and assume that each edge of the Cayley graph has length 1, so it becomes a metric space. Let us point out that left normal forms of braids are closely related to geodesics in this Cayley graph [7]. Now let B(r) denote the ball of radius r centered at the trivial braid 1. As the number of simple braids is finite, the set B(r) is a finite subset of B_n. We will consider the uniform distribution within this set. It turns out that 'most' elements in B(r) have a very simple ultra summit set:

Theorem 13. [17] The proportion of braids in B(r) whose ultra summit set is minimal tends to 1 exponentially fast, as r tends to infinity.

This is why we can say that the ultra summit set of a 'generic braid' is minimal. Moreover, the above result was obtained by refining the following theorem, which gives some important information concerning the elements in B(r). We have simplified the statement to adapt it to our situation:

Theorem 14. [17] The proportion of braids x in B(r) which are conjugate to a rigid braid y = α^{-1}xα, in such a way that α is a positive braid with ℓ(α) < ℓ(x), tends to 1 exponentially fast, as r tends to infinity.

Therefore, not only do generic braids have minimal ultra summit sets (made of rigid braids), but one can also obtain a rigid conjugate of a generic braid x very fast, applying iterated cyclic sliding to x. By Theorem 7, the obtained conjugating element will be the smallest possible positive conjugator, so its canonical length will be smaller than ℓ(x). Once a rigid conjugate y (which belongs to USS(x)) is obtained, one can compute the whole USS(x) very fast, as it consists of at most 2ℓ(x) elements, connected by cyclings and twisted decyclings. This is why solving the conjugacy problem in braid groups is generically very fast.
We will also be interested in the centralizer Z(x) of a braid x. Notice that if y = α^{-1}xα, then Z(y) = α^{-1}Z(x)α. Therefore, knowing Z(y) is equivalent to knowing Z(x), via α. We will then be interested in Z(y) for y ∈ USS(x).
Definition 15. Let x ∈ B_n and y ∈ USS(x), and let t be the smallest positive integer such that c^t(y) = y. Denote by p_i := ι(c^{i−1}(y)) the positive element conjugating c^{i−1}(y) to c^i(y), for i = 1, …, t. Then the preferred cycling conjugator of y is defined as PC(y) = p_1 p_2 ⋯ p_t.
In other words, PC(y) corresponds to the conjugating element along the whole cycling orbit of y. By construction, PC(y) commutes with y.
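In the rigid case with p even, ι(c^{i−1}(y)) is just the first factor of the rotated factor list, so PC(y) can be read off by rotating until the orbit closes. A sketch in the toy model of opaque factor labels (p even; an illustration of the bookkeeping, not real braid arithmetic):

```python
# PC(y) = p_1 p_2 ... p_t along the cycling orbit of a rigid braid, toy model.

def cycling(p, factors):
    """Cycling of a rigid braid with p even: rotate the factors."""
    return p, factors[1:] + factors[:1]

def preferred_cycling_conjugator(p, factors):
    assert p % 2 == 0
    pcs = []
    q, gs = p, list(factors)
    while True:
        pcs.append(gs[0])            # p_i = iota(c^{i-1}(y)) = first factor
        q, gs = cycling(q, gs)
        if gs == list(factors):      # orbit closed after t cyclings
            break
    return pcs

# y = Delta^2 (a b)(a b): orbit period t = 2 and PC(y) = a b
p, fs = 2, ['a', 'b', 'a', 'b']
pc = preferred_cycling_conjugator(p, fs)
assert pc == ['a', 'b']
assert len(fs) % len(pc) == 0
assert fs == pc * (len(fs) // len(pc))   # y = Delta^p (PC(y))^(l/t)
```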
In the generic case (when USS(x) is minimal), it turns out that Z(x) is isomorphic to Z^2, and one can describe the generators of Z(y) for any y ∈ USS(x) (and thus of Z(x)) in a very explicit way:

Theorem 16. [17] Let x ∈ B_n and y ∈ USS(x). Let PC(y) = p_1 ⋯ p_t as above. If USS(x) is minimal, then all elements in USS(x) are rigid, Z(x) ≃ Z(y) ≃ Z^2, and one of the following conditions holds:

(i) USS(x) has two orbits under cycling, conjugate to each other by ∆, and Z(y) = ⟨∆^2, PC(y)⟩.

(ii) USS(x) has one orbit under cycling, conjugate to itself by ∆, τ(y) = y, and Z(y) = ⟨∆, PC(y)⟩.

(iii) USS(x) has one orbit under cycling, conjugate to itself by ∆, τ(y) ≠ y, t is even, and Z(y) = ⟨∆^2, p_1 ⋯ p_{t/2} ∆^{-1}⟩.

k-th root problem
Now we come to the central problem in this paper: given x ∈ B n and an integer k > 1, find a k-th root of x.
In other words, we want to either find a ∈ B n such that a k = x, or show that such a braid does not exist.
Notice that if a^k = x then a belongs to Z(x), the centralizer of x. It is interesting to know that finding a single solution a to the k-th root equation is basically the same as finding all possible solutions, as the complete set of solutions coincides with the conjugacy class of a in Z(x):

Proposition 17. Let a, x ∈ B_n be such that a^k = x for some integer k > 1. Then the set k√x of k-th roots of x is precisely

k√x = a^{Z(x)} = {u^{-1}au | u ∈ Z(x)}.

Proof. In [16], the second author proved that the k-th root of a braid is unique up to conjugacy. That is, if a, b ∈ B_n satisfy a^k = b^k = x, then a = u^{-1}bu for some u ∈ B_n. Then one has x = a^k = u^{-1}b^k u = u^{-1}xu, and hence u ∈ Z(x). This proves that k√x ⊂ a^{Z(x)}.
On the other hand, if b ∈ a^{Z(x)} and we write b = u^{-1}au for some u ∈ Z(x), then b^k = u^{-1}a^k u = u^{-1}xu = x, so b ∈ k√x.

Observe that a^k = x if and only if (α^{-1}aα)^k = α^{-1}xα for any α ∈ B_n. Hence, given x, it suffices to solve the k-th root problem for any conjugate of x, for instance for some y ∈ USS(x).
We will focus our attention on the generic case in which USS(x) is minimal. Recall from Theorem 16 that in this case Z(x) ≃ Z(y) ≃ Z^2. If we express the centralizer of y as Z(y) = ⟨v, w⟩, where v and w commute, we know that y has the form y = v^c w^d for some c, d ∈ Z (and that this expression is unique, as any other expression would yield a different element of Z(y)). If we are able to express y in this way, then the k-th root problem is trivially solved:

Proposition 18. Let y be as above, with Z(y) = ⟨v, w⟩ ≃ Z^2 and y = v^c w^d. Then y has a k-th root if and only if c and d are multiples of k. In that case, the k-th root of y is unique, namely v^{c/k} w^{d/k}.

Proof. We know from Theorem 16 that Z(y) ≃ Z^2, so it is abelian. Hence, by Proposition 17, if a k-th root a of y exists then k√y = a^{Z(y)} = {a}. Therefore, if a k-th root exists, it is unique.
Suppose that the k-th root problem for y has a solution a ∈ B_n. Then a ∈ Z(y), and hence a = v^r w^s for some r, s ∈ Z. But since v and w commute, we have:

v^c w^d = y = a^k = (v^r w^s)^k = v^{rk} w^{sk}.

This implies that c and d are multiples of k, and that a = v^r w^s = v^{c/k} w^{d/k}.
Conversely, if c and d are multiples of k, we write c = rk and d = sk for some integers r, s, and we consider the element a = v^r w^s. Since v and w commute, it follows that a^k = v^{rk} w^{sk} = v^c w^d = y.
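Proposition 18 reduces root extraction to a divisibility check on the two exponents. A direct sketch of that final step (the function name is ours):

```python
# With y = v^c w^d and Z(y) = <v, w> free abelian of rank 2, the k-th root
# exists iff k divides both exponents, and then it is v^(c/k) w^(d/k).

def kth_root_exponents(c, d, k):
    """Return (c/k, d/k) if the k-th root of v^c w^d exists, else None."""
    if c % k == 0 and d % k == 0:
        return c // k, d // k
    return None

assert kth_root_exponents(6, 4, 2) == (3, 2)   # (v^3 w^2)^2 = v^6 w^4
assert kth_root_exponents(6, 4, 3) is None     # 3 does not divide 4
```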
By the above result, it follows that the only difficulty in solving the k-th root problem, in the generic case in which USS(x) is minimal, is to express some y ∈ USS(x) in terms of the generators of Z(y). We know from Theorem 16 that there are three possible cases, depending on whether USS(x) has two orbits under cycling, or has one orbit with τ(y) = y, or has one orbit with τ(y) ≠ y. The three following results address each case:

Proposition 19. Let x ∈ B_n, and let y = ∆^p y_1 ⋯ y_l ∈ USS(x), written in left normal form. Suppose that USS(x) is minimal. Suppose also that USS(x) has two orbits under cycling, conjugate to each other by ∆. Let v = ∆^2 and w = PC(y) = p_1 ⋯ p_t, so:

Z(y) = ⟨v, w⟩ = ⟨∆^2, PC(y)⟩.

If we write c = p/2 and d = l/t, then c and d are integers and we have y = v^c w^d.

Proof. We know that, since USS(x) is minimal, it consists of rigid elements. Hence iterated cycling corresponds to a cyclic permutation of the factors in the normal form of y (with possible conjugations by ∆, if p is odd).
Let v = ∆ 2 and w = P C(y) = p 1 · · · p t , so: If we write c = p/2 and d = l/t, then c and d are integers and we have: Proof. We know that, since U SS(x) is minimal, it consists of rigid elements. Hence iterated cycling corresponds to a cyclic permutation of the factors in the normal form of y (with possible conjugations by ∆, if p is odd).
Suppose that p is odd. Then c^l(y) is obtained from y by cyclically permuting all its l factors, conjugating all of them by ∆. Hence c^l(y) = τ(y). This implies that τ(y) = ∆^{-1}y∆ is in the same orbit as y under cycling, but this contradicts the hypotheses, as USS(x) has two distinct orbits (the one containing y and the one containing τ(y)). Therefore p is even.
Since p is even, iterated cyclings of y correspond exactly to cyclic permutations of the factors of y. By definition, t is the smallest positive integer such that c^t(y) = y, and it is then clear that c^m(y) = y for some positive integer m if and only if m is a multiple of t. Since c^l(y) = y, we finally obtain that l is a multiple of t. Then the normal form of y is as follows:

y = ∆^p y_1 ⋯ y_l = ∆^p (y_1 ⋯ y_t)(y_1 ⋯ y_t) ⋯ (y_1 ⋯ y_t),

where PC(y) = y_1 ⋯ y_t, and there are l/t parenthesized factors.
Now, if we write c = p/2 and d = l/t, these numbers are integers and we have:

y = (∆^2)^{p/2} (y_1 ⋯ y_t)^{l/t} = v^c w^d.

Proposition 20. Let x ∈ B_n, and let y = ∆^p y_1 ⋯ y_l ∈ USS(x), written in left normal form. Suppose that USS(x) is minimal. Suppose also that USS(x) has one orbit under cycling, conjugate to itself by ∆, and that τ(y) = y. Let v = ∆ and w = PC(y) = p_1 ⋯ p_t, so:

Z(y) = ⟨v, w⟩ = ⟨∆, PC(y)⟩.

If we write c = p and d = l/t, then c and d are integers and we have y = v^c w^d.

Proof. We know that the left normal form of τ(y) is ∆^p τ(y_1) ⋯ τ(y_l). Since τ(y) = y, the normal forms of y and τ(y) must coincide, hence τ(y_i) = y_i for i = 1, …, l.
This implies that iterated cyclings correspond to cyclic permutations of the factors of y. We do not care about the parity of p, as every factor of y is invariant under τ. It then follows that PC(y) = y_1 ⋯ y_t, that t divides l, and that the normal form of y is:

y = ∆^p y_1 ⋯ y_l = ∆^p (y_1 ⋯ y_t)(y_1 ⋯ y_t) ⋯ (y_1 ⋯ y_t),

where there are l/t parenthesized factors. Now, if we write c = p and d = l/t, these numbers are integers and we have:

y = ∆^p (y_1 ⋯ y_t)^{l/t} = v^c w^d.

Proposition 21. Let x ∈ B_n, and let y = ∆^p y_1 ⋯ y_l ∈ USS(x), written in left normal form. Suppose that USS(x) is minimal. Suppose also that USS(x) has one orbit under cycling, conjugate to itself by ∆, and that τ(y) ≠ y. Let v = ∆^2, PC(y) = p_1 ⋯ p_t and w = p_1 ⋯ p_{t/2} ∆^{-1} (recall from Theorem 16 that t is even), so:

Z(y) = ⟨v, w⟩ = ⟨∆^2, p_1 ⋯ p_{t/2} ∆^{-1}⟩.

If we write c = (pt + 2l)/(2t) and d = 2l/t, then c and d are integers and we have y = v^c w^d.

Proof. We know from Theorem 16 that t is even, but let us see why this holds. We know that there exists some m > 0 so that τ(y) = c^m(y); we take m as small as possible. Now, it follows from their own definitions that τ and c commute, and therefore y = τ^2(y) = τ(c^m(y)) = c^m(τ(y)) = c^{2m}(y). This implies that the length of the cycling orbit of y is a divisor of 2m. It cannot be m (as c^m(y) = τ(y) ≠ y), and it cannot be smaller than m (if c^r(y) = y with 0 < r < m, then τ(y) = c^{m−r}(y), contradicting the minimality of m). Therefore, the length of the orbit is precisely t = 2m. The generators of Z(y) are then v = ∆^2 and w = p_1 ⋯ p_m ∆^{-1}.
Suppose first that p is even. Then c^l(y) = y, so t = 2m divides l. Then l = 2rm for some positive integer r, and a direct computation shows that c = (pt + 2l)/(2t) = p/2 + r and d = 2l/t = 2r are integers with y = v^c w^d.

Suppose now that p is odd. Then c^l(y) = τ(y) = c^m(y), hence c^{l−m}(y) = y and t = 2m divides l − m. Then l = (2r + 1)m for some positive integer r, and a direct computation shows that c = (pt + 2l)/(2t) = (p + 2r + 1)/2 and d = 2l/t = 2r + 1 are integers with y = v^c w^d.
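The decomposition in Proposition 19 can be illustrated in the same toy model (p even, factors as opaque labels): t is the smallest period of the factor list, and the exponents c = p/2 and d = l/t recover y. A sketch of the arithmetic only, not real braid computation:

```python
# Decompose a rigid braid Delta^p y_1 ... y_l (p even, two-orbit case)
# as (Delta^2)^c (PC(y))^d, with c = p/2 and d = l/t.

def smallest_period(factors):
    """Smallest t such that the factor list is (first t factors) repeated."""
    l = len(factors)
    for t in range(1, l + 1):
        if l % t == 0 and factors == factors[:t] * (l // t):
            return t
    return l

def decompose(p, factors):
    """Return (c, d, w) with y = (Delta^2)^c w^d and w = PC(y)."""
    assert p % 2 == 0
    t = smallest_period(factors)
    return p // 2, len(factors) // t, factors[:t]

p, fs = 4, ['a', 'b', 'a', 'b', 'a', 'b']
c, d, w = decompose(p, fs)
assert (c, d, w) == (2, 3, ['a', 'b'])
assert 2 * c == p and w * d == fs       # reconstructs Delta^p y_1 ... y_l
```

Combined with the divisibility check of Proposition 18, this is essentially the final step of the algorithm of the next section.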
4 An algorithm to find the k-th root of a braid

We end this paper by providing a detailed algorithm that summarizes the results from the previous section, together with a study of its complexity.
The results of the previous section are valid when USS(x) is minimal (which is the generic case). In order to have an algorithm which always succeeds in finding the k-th root of a braid x, we need to include instructions on what to do if USS(x) is not minimal. In those cases, one can use the algorithm in [19], which finds the k-th root of x in any case, considering the Garside group G = Z ⋉ (B_n)^k, where Z = ⟨δ⟩ acts on (B_n)^k by cyclic permutation of the coordinates. S. J. Lee shows that the braid x has a k-th root if and only if the ultra summit set of δ(x, 1, …, 1) in G has an element of the form δ(h, …, h). Hence, computing an ultra summit set in such a group also solves the root extraction problem in B_n. It is not clear to us how big these ultra summit sets are in generic cases, while the algorithm presented in this paper is very simple, and generically very fast.
If one is not interested in programming the algorithm in [19], one could make our algorithm return 'fail' when USS(x) is not minimal, obtaining an algorithm which succeeds only in the generic case. In any case, we present now the main result:

Theorem 22. There is an algorithm that takes as input a braid x = ∆^p x_1 ⋯ x_l ∈ B_n written in left normal form and an integer k > 1, and finds a braid a ∈ B_n such that a^k = x, or guarantees that such a braid does not exist, whose generic-case complexity is O(l(l + n)n^3 log n).
Proof. Algorithm 1, which uses the results from the previous section, constitutes a proof of the theorem. Let us describe it in detail.
The input is a braid x = ∆^p x_1 ⋯ x_l ∈ B_n in left normal form and an integer k > 1. First (lines 2-5), the algorithm applies iterated cyclic sliding to x, checking at each iteration whether the resulting braid y is rigid. As we will now see, if the algorithm applies cyclic sliding l(n(n−1)/2 − 1) times and no rigid braid is obtained, then we are not in the generic case stated in Theorem 14, hence the algorithm in [19] is applied.
The number l(n(n−1)/2 − 1) is precisely l times the length of ∆ minus one. Recall from Theorem 14 that in the generic case there is a positive element α conjugating x to a rigid braid, such that ℓ(α) < ℓ(x) = l. If α is the smallest possible one, there is no ∆ in its normal form. Hence, the length of α in terms of atoms (the σ_i's) is at most l(n(n−1)/2 − 1). Now, from Theorem 7 we know that the smallest positive conjugator to a rigid braid is obtained by iterated cyclic sliding. Since at every iteration the conjugating element gets bigger, if we are in the generic case we must obtain a rigid element in at most l(n(n−1)/2 − 1) iterations, as we claimed.
If the braid y obtained after the loop in lines 2-5 is rigid, as the algorithm stores the conjugating elements for cyclic sliding at each iteration, we will have a braid α such that α^{-1}xα = y. Now the algorithm checks whether USS(y) is minimal (the generic case we are interested in), as explained in Remark 12: it checks whether the minimal simple elements for y are precisely ι(y) and ∂(ϕ(y)).
In general, it is not known how fast it is to compute the minimal simple elements for an arbitrary braid y. But if y is rigid, one can easily find the minimal simple elements for y. We know that every such element must be a prefix of either ι(y) or ∂(ϕ(y)). For every generator σ_i, one can consider σ_i^{-1} y σ_i and apply iterated cyclic sliding to it, until it becomes rigid. The obtained conjugating element is the smallest conjugating element from y to a rigid braid having σ_i as a prefix. We do this for all σ_i which are prefixes of ι(y), and either we find a conjugating element which is a proper prefix of ι(y) (in which case ι(y) is not minimal), or we have shown that ι(y) is minimal. Then we do the same for all generators which are prefixes of ∂(ϕ(y)). The number of iterations in each case is bounded by the length of ι(y) (resp. ∂(ϕ(y))), which are simple elements, while the total number of generators is n − 1. So the total number of cyclic slidings used to check whether ι(y) and ∂(ϕ(y)) are minimal (and hence whether USS(y) is minimal) is O(n^3).
If USS(y) is not minimal, we are not in the generic case stated in Theorem 14, hence the algorithm in [19] is applied. Otherwise, we are in one of the situations described in Propositions 19, 20 and 21. The rest of the algorithm just applies these propositions together with Proposition 18: after decomposing y in the form y = v^c w^d, it checks whether both c and d are multiples of k. If this is the case, then v^{c/k} w^{d/k} is the (unique) k-th root of y, and since x = αyα^{-1}, it follows that α v^{c/k} w^{d/k} α^{-1} is the desired k-th root of x; otherwise, the algorithm returns the sentence "A k-th root does not exist".
We study now the complexity of our algorithm, assuming that we are in the generic case in which USS(x) is minimal and we can quickly conjugate x to a rigid braid. Computing the complement or applying τ to a simple element is O(n), and computing s ∧ t for two simple elements s and t is O(n log n) [12, Proposition 9.5.1]. Starting with an element y in left normal form, computing s(y) consists of computing a complement (∂(ϕ(y))), a meet (ι(y) ∧ ∂(ϕ(y))) and the normal form of the conjugate of y, of canonical length at most l, by a simple element (which is O(ln log n)). Hence the total complexity of applying a cyclic sliding is O(ln log n).
The first loop (lines 2-5) is repeated O(ln^2) times, checking the condition takes O(n log n), and the body of the loop takes O(ln log n). Hence the total complexity of the loop in lines 2-5 is O(l^2 n^3 log n).
The "If" statement in lines 6-7 is negligible compared with the previous "while" loop.
Next, in lines 8-9 the algorithm checks whether ι(y) and ∂(ϕ(y)) are minimal, for the rigid element y. By the arguments above, this applies O(n^3) cyclic slidings, hence the total complexity of this step is O(ln^4 log n).
In line 11 and in the loop in lines 12-15, some cyclings are applied. Since the involved braids are rigid of canonical length at most l, and cycling is just a cyclic permutation of the factors with a possible application of τ to a simple element, this final part of the algorithm is negligible with respect to the previous one. Therefore, the generic-case complexity of Algorithm 1 is O(l(l + n)n^3 log n).
Remark 23. Although the integers p and k are part of the input, the computed complexity does not involve them, as handling these integers is usually negligible, in reasonable examples, with respect to the calculated complexity. If p is really big, one should take into account the factor log p. The case of k is somewhat different: one can have a positive answer only if k divides the integers c and d (with d ≠ 0), which are O(p + l), so it makes no sense to ask for a k-th root of x, in the generic case, if k is too big compared with p and l.