A Survey on Complexity Measures for Pseudo-Random Sequences

Since the introduction of the Kolmogorov complexity of binary sequences in the 1960s, there have been significant advancements in the topic of complexity measures for randomness assessment, which are of fundamental importance in theoretical computer science and of practical interest in cryptography. This survey reviews notable research from the past four decades on the linear, quadratic and maximum-order complexities of pseudo-random sequences and their relations with Lempel-Ziv complexity, expansion complexity, 2-adic complexity, and correlation measures.


Introduction
Cryptography and security applications make extensive use of random bits, such as keys and initialization vectors in encryption and nonces in security protocols. The generation of random bits should be designed with the security goal of indistinguishability from an ideal randomness source, which generates identically distributed and independent uniform random bits with full entropy (i.e., one bit of entropy per bit). However, this security goal is challenging to achieve. The list of real-world failures, where a random bit generator (RBG) is broken and the security of the reliant application crumbles with it, continues to grow [1,2,3,4]. There are two fundamentally different strategies for designing RBGs. One strategy is to produce bits non-deterministically, where every bit of output is based on a physical process that is unpredictable; the other is to compute bits deterministically using an algorithm from an initial value that contains sufficient entropy to provide an assurance of randomness. This class of RBGs is known as pseudo-random bit generators (PRBGs) or deterministic random bit generators (DRBGs) [5]. Due to their deterministic nature, PRBGs produce sequences of pseudo-random (rather than random) bits. Real-world PRBGs usually employ cryptographic primitives, such as stream ciphers, block ciphers, hash functions and elliptic curves, as their basic building blocks. For instance, the revised NIST SP 800-90A standard [5] recommends Hash-DRBG, HMAC-DRBG and CTR-DRBG based on approved hash functions and block ciphers. It is worth mentioning that, as pointed out in [2], the NIST SP 800-90A standard is not free from controversy: the disgraced algorithm DualEC-DRBG in the standard was reported to contain a back door [3]; the recommended DRBGs, when supporting a variety of optional inputs and parameters, do not fit cleanly into the usual security models of PRBGs; there was no formal competition in the standardisation process; and there is only a limited amount of formal
analysis of those recommended DRBGs. Although not included in the NIST standard, PRBGs that use stream ciphers based on feedback shift registers (FSRs) as building blocks have several advantages, including a simple structure built from addition and multiplication operations, fast and easy hardware implementation in almost all computing devices, and good statistical characteristics (long period, balance, run properties, etc.) in their output sequences [6,7]. In addition, the output sequence of any other PRBG ultimately becomes periodic and can therefore be produced by certain FSRs. This provides another perspective for investigating the security strength of output sequences from PRBGs.
Consider a pseudo-random sequence $s = s_0s_1s_2\ldots$ generated by a PRBG. A natural question arises: how should one evaluate the randomness of the sequence s? Research on this problem can be traced back to the early 20th century. As early as 1919, von Mises [8] initiated the notion of random sequences based on his frequency theory of probability. Later, Church [9] pointed out the vagueness of the second condition on randomness in [8] and proposed a less restricted but more precise definition of random sequences: an infinite binary sequence s is a random sequence if it satisfies the following two conditions: 1) if $f(r, s)$ denotes the number of 1's among the first r terms of s, then the sequence $f(r, s)/r$ approaches a limit p as r approaches infinity; 2) for the sequence b with $b_1 = 1$, $b_{n+1} = 2b_n + s_n$ and any effectively calculable function $\varphi$, if the integers n such that $\varphi(b_n) = 1$ form an infinite sequence $n_1, n_2, \ldots$, then the sequence $f(r, c)/r$ for $c = s_{n_1}s_{n_2}s_{n_3}\ldots$ approaches the same limit p as r approaches infinity. Despite the emphasis on the calculability of $\varphi$ in Condition 2), one can see that it is hardly (if at all) possible to test the randomness of a sequence by this definition. It could be used in randomness testing through negative outcomes, namely, a binary sequence s is not random if one can find a certain effectively calculable function $\varphi$ for which Condition 2) fails. In order to justify a proposed definition of randomness, one has to show that the sequences that are random in the stated sense possess the various properties of stochasticity with which we are acquainted in probability theory. Inspired by the properties obeyed by maximum-length sequences, Golomb proposed the randomness postulates of binary sequences [10]: balancedness, the run property and ideal autocorrelation. Kolmogorov [11] proposed the notion of complexity to test the randomness of a sequence s, defined as $\min_{A(x)=s} \mathrm{len}(x)$, the length of the shortest input x that produces s under a universal Turing machine program A. This notion is later known as the Kolmogorov complexity in the literature. Along this line, further developments were made in [12,13] and summarized in Knuth's famous book series The Art of Computer Programming [14].
Rather than considering an abstract Turing machine program that generates a given sequence, the model of using FSRs to generate a given sequence has attracted considerable attention. This model has two major advantages: firstly, all sequences that are generated by a finite-state machine in a deterministic manner are ultimately periodic and, as such, can be produced by certain feedback shift registers; secondly, it is comparatively easier and more efficient to identify the shortest FSR producing a given sequence, either of finite length or of infinite length with a certain period. Due to its tractability, the linear complexity of a specific sequence s, which takes linear FSRs (LFSRs) as the algorithm A in the Kolmogorov complexity, is particularly appealing and has been intensively studied. The linear complexity of a given sequence s can be efficiently calculated by the Berlekamp-Massey algorithm [15,16], and the stochastic behavior of the linear complexity of finite-length sequences and periodic sequences can be considered fairly well understood [17,18,19]. This good understanding of the linear complexity of sequences is an important factor in its adoption in the NIST randomness test suite [20]. Note that extremely long sequences with large linear complexity can often be generated by much shorter FSRs with certain nonlinear feedback functions. Consequently, more figures of merit for judging the randomness of sequences, such as maximum-order complexity [21], 2-adic complexity [22], quadratic complexity [23], expansion complexity [24] and their variants, have also been explored in the literature. This paper will survey the development of complexity measures used to assess the randomness of sequences. It is important to note that this paper does not intend to offer a comprehensive survey of this broad topic. Instead it aims to provide a preliminary overview of the topic with technical discussions to a certain extent, focusing mainly on complexities within the domain of FSRs that
align with the author's research interests. To this end, I delved into significant findings on the study of FSR-based complexity measures, particularly on algorithmic and algebraic methods of computation, statistical behavior, theoretical constructions, and relations among those complexity measures. Research papers on this topic presented at flagship conferences in cryptography, such as ASIACRYPT, CRYPTO and EUROCRYPT, published in prestigious journals with IEEE, Springer, Science Direct, etc., as well as some well-recognized books since the 1970s, constitute the major content of this survey. Readers may refer to relevant surveys on this topic with different focuses by Meidl and Niederreiter [25,19] and by Topuzoğlu and Winterhof [26].
The remainder of this paper is organized as follows: Section 2 recalls the basics of feedback shift registers, which lays the foundation for the discussions in subsequent sections. Section 3 reviews important developments in the theory of the linear complexity profile of random sequences; Section 4 discusses a mathematical tool used to efficiently calculate the quadratic complexity of sequences; and Section 5 surveys computational methods, the statistical behavior of maximum-order complexity, and theoretical constructions of periodic sequences with the largest maximum-order complexity. In Section 6, known relations among the complexity measures of sequences are briefly summarized. Finally, concluding remarks and discussion are given in Section 7.

Feedback Shift Registers
Figure 1: An m-stage FSR with feedback function f

Let $\mathbb{F}_q$ be the finite field of q elements, where q is an arbitrary prime power. For a positive integer m, an m-stage feedback shift register (FSR) is a clock-controlled circuit consisting of m consecutive storage units and a feedback function f, as displayed in Figure 1. Starting with an initial state $s_0s_1\ldots s_{m-1}$ over $\mathbb{F}_q$, the states in the FSR are updated by a clock-controlled transformation as follows:
$$F: (s_t, s_{t+1}, \ldots, s_{t+m-1}) \mapsto (s_{t+1}, \ldots, s_{t+m-1}, s_{t+m}), \quad t \ge 0,$$
where $s_{t+m} = f(s_t, s_{t+1}, \ldots, s_{t+m-1})$, and the leftmost symbol of each state is output. In this way an FSR produces a sequence $s = s_0s_1s_2\ldots$ based on each initial state $s_0s_1\ldots s_{m-1}$ and its feedback function f. The output sequence of an FSR, known as an FSR sequence, can be equivalently expressed as the sequence of states $S_0, S_1, S_2, \ldots$, where $S_t = s_ts_{t+1}\ldots s_{t+m-1}$. When $S_p = F^p(S_0) = S_0$ for the least integer $p \ge 1$, we obtain a cycle of states $S_0, \ldots, S_{p-1}$, or equivalently a sequence $s_0\ldots s_{p-1}\ldots$ of least period p. In his influential book [10], Golomb intensively studied the properties of FSR sequences with both linear and nonlinear feedback functions. Readers can refer to [10] for a comprehensive understanding of FSR sequences.
For an m-stage FSR with a feedback function f and an initial state $s_0s_1\ldots s_{m-1}$, if we know n consecutive terms $s_is_{i+1}\ldots s_{i+n-1}$ of the output s, then we obtain $n - m$ equations
$$f(s_{i+j}, s_{i+j+1}, \ldots, s_{i+j+m-1}) = s_{i+j+m}, \quad 0 \le j < n - m. \qquad (2)$$
When the feedback function f is linear, namely, $f(x_0, \ldots, x_{m-1}) = a_1x_{m-1} + a_2x_{m-2} + \cdots + a_mx_0$, and $n \ge 2m$, one can uniquely determine its m unknown coefficients $a_i$ from the above $n - m$ linear equations. When the feedback function f is nonlinear, the number of terms in f increases significantly. For instance, for q = 2, a binary feedback function f of degree r in m variables has $\sum_{d=0}^{r}\binom{m}{d}$ possible terms. The equations in Eq. (2) then become more difficult to analyze: they are not necessarily linearly independent and many more variables are involved. Consequently, more observations and techniques are required in the analysis of these equations. In the following we will review some results on FSR sequences when the feedback function is linear, quadratic or arbitrarily nonlinear, and their relations with other complexity measures.
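The state-update recurrence above is easy to sketch in code. The following minimal illustration is our own (not taken from the surveyed literature); the feedback function and initial state are arbitrary choices for demonstration.

```python
def fsr_sequence(feedback, initial_state, length):
    """Run an FSR: at each clock, output the leftmost symbol and append
    f(current state) on the right, per f(s_t, ..., s_{t+m-1}) = s_{t+m}."""
    state = list(initial_state)
    out = []
    for _ in range(length):
        out.append(state[0])                    # leftmost symbol is output
        state = state[1:] + [feedback(state)]   # shift left and feed back
    return out

# A 3-stage binary FSR with the (hypothetical) nonlinear feedback
# f(x0, x1, x2) = x0 + x1*x2 over F_2, started from state 100.
seq = fsr_sequence(lambda st: (st[0] + st[1] * st[2]) % 2, [1, 0, 0], 12)
```

Here the state 100 returns to itself after three clocks, so the output is the periodic sequence 100100..., illustrating that every FSR output is ultimately periodic.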
We will consider both finite-length sequences and infinite sequences with a certain finite period. Throughout what follows, we use s to denote a generic sequence with terms from a certain alphabet, $s_n = s_0s_1\ldots s_{n-1}$ to denote a sequence of length n that is not a repetition of any shorter sequence, $s^k_n$ to denote the concatenation of k repetitions of $s_n$, and $s^\infty_n$ to denote infinite repetition of $s_n$, indicating that $s^\infty_n$ has least period n.

Linear Complexity
Given an FSR, when its feedback function f is a linear function of the input state, namely, the output is governed by the linear recurrence
$$s_{t+m} = a_1s_{t+m-1} + a_2s_{t+m-2} + \cdots + a_ms_t, \quad t \ge 0,$$
where $a_1, \ldots, a_m$ are taken from $\mathbb{F}_q$, it is termed a linear FSR (LFSR) and the output sequence s is called an LFSR sequence of order m. The polynomial
$$f(x) = a_0x^m - a_1x^{m-1} - \cdots - a_m,$$
with $a_0 = 1$, is called the feedback or characteristic polynomial of s. The zero sequence $00\ldots$ is viewed as an LFSR sequence of order 0.
Definition 1. Let s be an arbitrary sequence of elements of $\mathbb{F}_q$ and let n be a non-negative integer. Then the linear complexity $L_n(s)$ is defined as the length of the shortest LFSR that can generate $s_n = s_0s_1\ldots s_{n-1}$. The sequence $L_0(s), L_1(s), L_2(s), \ldots$ is called the linear complexity profile of the sequence s.
It is clear that $L_n(s) \le L_{n+1}(s)$ and $0 \le L_n(s) \le n$ for any integer n and sequence s. Hence the linear complexity profile of s is a nondecreasing sequence of nonnegative integers. The two extreme cases for $L_n(s)$ correspond to highly non-random prefixes: $L_n(s) = 0$ exactly when the first n elements $s_0s_1\ldots s_{n-1}$ are all zero, and $L_n(s) = n$ exactly when $s_0 = \cdots = s_{n-2} = 0$ and $s_{n-1} \ne 0$. The linear complexity of a sequence s can be efficiently calculated by the well-known Berlekamp-Massey algorithm [15,16], which also returns the linear feedback function generating s. In the Berlekamp-Massey algorithm, at each step $n - 1$, with the current linear recurrence for $s_0s_1\ldots s_{n-2}$, a discrepancy is calculated to assess whether the linear recurrence holds for the extended sequence $s_0s_1\ldots s_{n-2}s_{n-1}$: when the discrepancy is zero, indicating that the current linear recurrence indeed holds for $s_0s_1\ldots s_{n-2}s_{n-1}$, the algorithm moves on to step n; when the discrepancy is nonzero, indicating that a new (and possibly longer) linear recurrence is needed, the linear recurrence is updated and the linear complexity is updated as
$$L_n(s) = \max\{L_{n-1}(s),\; n - L_{n-1}(s)\}.$$
Gustavson [27] showed that the above process requires $O(n^2)$ multiplication/addition operations for calculating $L_n(s)$. Dornstetter [28] pointed out the equivalence between this procedure and the Euclidean algorithm.
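For concreteness, a compact rendering of the Berlekamp-Massey algorithm over $\mathbb{F}_2$ is sketched below; this is the textbook form of the algorithm, not code from the cited papers.

```python
def linear_complexity(s):
    """Berlekamp-Massey over F_2: return L_n for the binary list s."""
    n = len(s)
    c = [1] + [0] * n        # current connection polynomial C(x)
    b = [1] + [0] * n        # connection polynomial before the last length change
    L, m = 0, -1             # current complexity; step of the last length change
    for i in range(n):
        # discrepancy: does the current recurrence predict s[i]?
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:   # the recurrence must lengthen: L_n = n - L_{n-1}
                L, b, m = i + 1 - L, t, i
    return L
```

For example, two periods of the m-sequence 0010111 (from the recurrence $s_{i+3} = s_{i+1} + s_i$) give linear complexity 3, while the extreme prefix 0001 gives complexity 4.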
Rueppel [17] investigated the varying behavior of the discrepancy $\Delta_t$, $1 \le t < n$, for a binary sequence of length n, and characterized the linear complexity profile of random sequences in the following propositions.
Proposition 1. The number of binary sequences of length n with linear complexity l, denoted by $N_n(l)$, satisfies a recursive relation which, starting from $N_1(0) = N_1(1) = 1$, yields
$$N_n(0) = 1 \quad \text{and} \quad N_n(l) = 2^{\min\{2n-2l,\,2l-1\}} \ \text{for } 1 \le l \le n.$$

Proposition 2. The expected linear complexity $E[L_n]$ of binary random sequences of length n drawn from a uniform distribution is given by
$$E[L_n] = \frac{n}{2} + \frac{4 + R_2(n)}{18} - 2^{-n}\left(\frac{n}{3} + \frac{2}{9}\right),$$
and the variance of the linear complexity is given by
$$\mathrm{Var}[L_n] = \frac{86}{81} - 2^{-n}\left(\frac{14 - R_2(n)}{27}\,n + \frac{82 - 2R_2(n)}{81}\right) - 2^{-2n}\left(\frac{1}{9}\,n^2 + \frac{4}{27}\,n + \frac{4}{81}\right),$$
where $R_2(n)$ denotes n modulo 2.
From the above proposition, we readily see that $E[L_n] = \frac{n}{2} + c_n + O(n2^{-n})$, where $c_n = (4 + R_2(n))/18$. Beyond Rueppel's characterization of the general stochastic behavior of binary random sequences, a similar and more important question arose: for a randomly chosen and then fixed sequence s over $\mathbb{F}_q$, what is the behavior of $L_n(s)$? To settle this question, Niederreiter [18] developed a probabilistic theory for the linear complexity and linear complexity profile of a given sequence s over any finite field $\mathbb{F}_q$. More specifically, he applied techniques from probability theory to dynamical systems and continued fractions, and deduced probabilistic limit theorems for linear complexity by exploiting the connection between continued fractions and the linear complexity of a sequence s. Below we first recall the connection discussed in [18]. Given a sequence $s = s_0s_1s_2\ldots$, the generating function $G(x) = \sum_{j \ge 0} s_jx^{-j-1}$ admits a continued fraction expansion
$$G(x) = \cfrac{1}{A_1(x) + \cfrac{1}{A_2(x) + \cdots}},$$
where the partial quotients $A_1, A_2, \ldots$ are polynomials over $\mathbb{F}_q$ with positive degrees. The n-th linear complexity of s can be expressed as $L_n(s) = W_j$, where $W_j = \deg(A_1) + \cdots + \deg(A_j)$, for the integer j satisfying $W_{j-1} + W_j \le n < W_j + W_{j+1}$. By studying this continued fraction expansion, Niederreiter obtained the following result on $L_n(s)$.

Theorem 1. Suppose f is a nonnegative nondecreasing function on the positive integers with $\sum_{n=1}^{\infty} q^{-f(n)} < \infty$. Then, for almost all sequences s over $\mathbb{F}_q$, $L_n(s) = \frac{n}{2} + O(f(n))$.

By taking $f(n) = (1 + \epsilon)\log n/\log q$ for arbitrary $\epsilon > 0$, a more explicit law of the logarithm for the n-th linear complexity of a random sequence s can be obtained as follows.
Consequently, for almost all sequences s we have $L_n(s) = \frac{n}{2} + O(\log n)$, and in particular $\lim_{n\to\infty} L_n(s)/n = 1/2$. As deviations of order $\log n$ occur for infinitely many n, there is a strong interest in sequences s whose n-th linear complexity has small deviations from n/2. A sequence s is called d-perfect, for an integer $d \ge 1$, if $|2L_n(s) - n| \le d$ for all $n \ge 1$.
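Rueppel's expected value can be checked empirically by exhausting all binary sequences of a modest length; the sketch below (our own, for illustration only) reuses a compact Berlekamp-Massey routine and compares the exact average against the closed form of Proposition 2.

```python
from itertools import product

def linear_complexity(s):
    # compact Berlekamp-Massey over F_2
    n = len(s); c = [1] + [0] * n; b = [1] + [0] * n; L, m = 0, -1
    for i in range(n):
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]; sh = i - m
            for j in range(n + 1 - sh):
                c[j + sh] ^= b[j]
            if 2 * L <= i:
                L, b, m = i + 1 - L, t, i
    return L

n = 10
avg = sum(linear_complexity(list(t)) for t in product([0, 1], repeat=n)) / 2 ** n
# Proposition 2, with R_2(n) = n mod 2:
expected = n / 2 + (4 + n % 2) / 18 - 2 ** -n * (n / 3 + 2 / 9)
```

For n = 10 the exhaustive average is 5.21875, matching the closed form and already within 0.01 of n/2 + 2/9.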
Constructing d-perfect sequences is of significant interest. Sophisticated methods based on algebraic curves were introduced in [29], which yielded sequences with an almost perfect linear complexity profile. For cryptographic applications, especially as a keystream, a sequence s should have a linear complexity profile close to that of a random sequence. Moreover, this condition should hold for any starting point of the sequence. That is to say, for a sequence $s = s_0s_1s_2\ldots$ and any $r \ge 0$, the r-shifted sequence $s_rs_{r+1}s_{r+2}\ldots$ should have an acceptable linear complexity profile close to that of random sequences as well. Niederreiter and Vielhaber [30] attacked this problem with the help of continued fractions. An important fact is that the jumps in the linear complexity profile of s are exactly the degrees of the partial quotients $A_1, A_2, \ldots$. With this connection, they proposed an algorithm to determine the linear complexity profiles of shifted sequences $s_rs_{r+1}s_{r+2}\ldots$ by investigating the corresponding continued fractions [31]. To be more concrete, they designed a method to calculate the continued fraction expansions of $G_n(x), \ldots, G_1(x)$, where $G_r(x) = \sum_{j=r}^{n-1} s_jx^{-(j+1)}$ for $r \ge 0$. Later, in his survey paper [19], Niederreiter proposed the following open problem on d-perfect sequences.
Problem 1. Construct a sequence $s = s_0s_1s_2\ldots$ over $\mathbb{F}_q$ such that the r-shifted sequences $s_rs_{r+1}s_{r+2}\ldots$ for all $r \ge 0$ are d-perfect for some integer $d \ge 1$.
The preceding discussion is mainly concerned with generic infinite-length sequences. Note that sequences from an m-stage LFSR over $\mathbb{F}_q$ are periodic and their maximum period is $q^m - 1$, which is achieved when the characteristic polynomial f(x) is a primitive polynomial in $\mathbb{F}_q[x]$. LFSR sequences of order m with period $q^m - 1$ are thus known as maximum-length sequences, or m-sequences for short. For a periodic sequence $s = s^\infty_n$, the values in its linear complexity profile remain unchanged from a certain point on. This reveals the apparently different linear complexity profiles of an infinite sequence with a certain period and a random infinite sequence, which can essentially be considered to have arbitrarily long period.
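As a small illustration (with an arbitrarily chosen primitive polynomial), the recurrence $s_{i+4} = s_{i+1} + s_i$ over $\mathbb{F}_2$ corresponds to $x^4 + x + 1$, which is primitive, so the LFSR output is an m-sequence of period $2^4 - 1 = 15$:

```python
def lfsr(feedback, state, length):
    """Generic shift-register driver: output leftmost symbol, feed back on the right."""
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[0])
        state = state[1:] + [feedback(state)]
    return out

# x^4 + x + 1 primitive over F_2, i.e. s_{i+4} = s_{i+1} + s_i.
seq = lfsr(lambda st: (st[0] + st[1]) % 2, [0, 0, 0, 1], 30)

def least_period(seq):
    for p in range(1, len(seq) + 1):
        if all(seq[i] == seq[i % p] for i in range(len(seq))):
            return p

# Every nonzero 4-bit state appears exactly once per period of length 15.
states = {tuple(seq[i:i + 4]) for i in range(15)}
```

The least period is 15, and the 15 window states are exactly the nonzero 4-tuples, as expected for an m-sequence.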
An important tool for the analysis of the linear complexity of n-periodic sequences over $\mathbb{F}_q$ is the discrete Fourier transform. Assume $\gcd(n, q) = 1$, which means that there exists an n-th primitive root $\beta$ of unity in some finite extension of $\mathbb{F}_q$. The discrete Fourier transform of a time-domain n-tuple $(a_0, \ldots, a_{n-1})$ is the frequency-domain n-tuple
$$\mathrm{DFT}((a_0, \ldots, a_{n-1})) = (A_0, A_1, \ldots, A_{n-1}), \quad \text{where } A_k = \sum_{i=0}^{n-1} a_i\beta^{ik}.$$
In this case, the linear complexity of an n-periodic sequence can be determined via the discrete Fourier transform [32]: it equals the Hamming weight of the DFT of one period (Blahut's theorem).
For the general case, a generalized discrete Fourier transform (GDFT) is employed: writing $n = p^vw$ with $\gcd(w, p) = 1$, the GDFT of $(s_0, s_1, \ldots, s_{n-1})$ arranges evaluations of the (Hasse) derivatives of the associated polynomial at the powers of a primitive root of unity $\beta$ over $\mathbb{F}_q$ into a matrix; see [33,34] for the precise definition. It is clear that when $\gcd(n, q) = 1$, the GDFT of $(s_0, s_1, \ldots, s_{n-1})$ reduces to the discrete Fourier transform of $(s_0, s_1, \ldots, s_{n-1})$. As pointed out in [34,33], the linear complexity of $s = s^\infty_n$ can be calculated in terms of the Günther weight of $\mathrm{GDFT}((s_0, s_1, \ldots, s_{n-1}))$. This result is referred to as the Günther-Blahut theorem and was used implicitly by Blahut in [35].
Theorem (Günther-Blahut). Let $s = s^\infty_n$ be an n-periodic sequence over $\mathbb{F}_q$ of characteristic p, where $n = p^vw$ with $\gcd(w, p) = 1$. Then the linear complexity of s is equal to the Günther weight of the GDFT of the n-tuple $(s_0, s_1, \ldots, s_{n-1})$, where the Günther weight of a matrix M is the number of its entries that are nonzero.
Algebraically, the linear complexity of $s^\infty_n$ can also be obtained via the feedback polynomial of the LFSR generating s [36]. Consider the generating function $s(x) = \sum_{i \ge 0} s_ix^i$. Since s has least period n,
$$s(x) = \frac{s_n(x)}{1 - x^n}, \quad \text{where } s_n(x) = s_0 + s_1x + \cdots + s_{n-1}x^{n-1}.$$
Writing $s(x) = \varphi_0(x)/f_0(x)$ in lowest terms, where $\gcd(\varphi_0, f_0) = 1$, the polynomial $f_0$ is the minimal polynomial of s and $L(s) = \deg f_0$. This relation implies the following equality [36]:
$$f_0(x) = \frac{x^n - 1}{\gcd(x^n - 1,\ s_n(x))}.$$
Therefore, we have the following result.
Proposition 5. Let $s = s^\infty_n$ be an n-periodic sequence over $\mathbb{F}_q$ and let $s_n(x) = s_0 + s_1x + \cdots + s_{n-1}x^{n-1}$ be the polynomial associated with one period of s. Then the minimal polynomial of s is given by $(x^n - 1)/\gcd(x^n - 1, s_n(x))$, and consequently
$$L(s) = n - \deg \gcd(x^n - 1,\ s_n(x)).$$
With this neat expression of L(s) for periodic sequences $s^\infty_n$, several families of sequences with nice algebraic structure were investigated, such as Legendre sequences [37], sequences from the discrete logarithm function [38], and Lempel-Cohn-Eastman sequences [39]. For more information about the linear complexity of sequences with special algebraic structures, readers are referred to Shparlinski's book [40].
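Proposition 5 turns the computation of L(s) for a periodic sequence into a polynomial gcd. A minimal sketch over $\mathbb{F}_2$, with polynomials stored as integer bitmasks (bit i holding the coefficient of $x^i$), could look as follows; the helper names are our own.

```python
def pdeg(a):
    """Degree of an F_2[x] polynomial stored as a bitmask (-1 for the zero polynomial)."""
    return a.bit_length() - 1

def pmod(a, b):
    """Remainder of a modulo b in F_2[x]."""
    while a and pdeg(a) >= pdeg(b):
        a ^= b << (pdeg(a) - pdeg(b))
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

def periodic_linear_complexity(period):
    """L(s) = n - deg gcd(x^n - 1, s_n(x)) for the binary sequence s = period^infinity."""
    n = len(period)
    sx = sum(bit << i for i, bit in enumerate(period))  # s_n(x)
    if sx == 0:
        return 0
    return n - pdeg(pgcd((1 << n) | 1, sx))             # x^n + 1 = x^n - 1 over F_2
```

The all-ones sequence of period 7 has L = 1, while the periodic sequence $(0000001)^\infty$ attains the maximum L = 7.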
The above discussion is concerned with the explicit calculation of the linear complexity of an n-periodic sequence. The statistical behavior of a random n-periodic sequence over $\mathbb{F}_q$ is of fundamental interest, particularly from the cryptographic perspective. Rueppel in [41] considered this problem: letting $s = s^\infty_n$ run through all n-periodic sequences over $\mathbb{F}_q$, what is the expected linear complexity $E_{q,n}$? For the case q = 2, Rueppel showed that if n is a power of 2, then $E_{2,n} = n - 1 + 2^{-n}$. Based on observations from numerical results, he conjectured that $E_{2,n}$ is always close to n. There was little progress on this conjecture until the work of Meidl and Niederreiter [42], in which they studied $E_{q,n}$ for an arbitrary prime power q with the help of the above Günther-Blahut theorem and an analysis of cyclotomic cosets. For an integer $0 \le j < w$, the cyclotomic coset of j modulo w is defined as $C_j = \{jq^t \pmod{w} : t \ge 0\}$.
Theorem 2. Let $n = p^vw$, where p is the characteristic of $\mathbb{F}_q$, $v \ge 0$, and $\gcd(p, w) = 1$. Let $C_1, \ldots, C_h$ be the distinct cyclotomic cosets modulo w and let $b_i = |C_i|$ for $1 \le i \le h$. Then $E_{q,n}$ admits an exact expression in terms of $p^v$ and the coset sizes $b_1, \ldots, b_h$; we refer to [42] for the precise formula.

From Theorem 2, a routine inequality scaling yields lower bounds on $E_{q,n}$ that are close to n; in the particular case v = 0, i.e., $\gcd(n, q) = 1$, the bound takes an especially simple form.
For small q, the above lower bounds can be further improved. For instance, for q = 2 and odd n, the only singleton cyclotomic coset modulo n is $C_0 = \{0\}$, which leads to a sharper lower bound on $E_{2,n}$ for odd n.
Recall that the expected linear complexity of a random binary sequence $s_0s_1\ldots s_{n-1}$ is around $\frac{n}{2}$. For a periodic sequence $s = s^\infty_n$ built from a sequence $s_n$ with linear complexity l, the calculation of the linear complexity of $s^\infty_n$ needs to consider the sequence $s_0s_1\ldots s_{n-1}s_0\ldots s_{l-1}$. This expanded sequence, when considered as a random sequence, would have expected complexity around $(n + l)/2$. This is somewhat consistent with $E_{2,n} \approx (3n - 1)/4$, obtained by taking $l = (n - 1)/2$ for odd n. Further improved results for the case $\gcd(q, n) = 1$ were obtained by more detailed analysis and can be found in [42].
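Rueppel's value $E_{2,n} = n - 1 + 2^{-n}$ for n a power of 2 can be verified exhaustively for small n with the gcd formula of Proposition 5; the self-contained sketch below (our own, with polynomials as bitmasks) averages over all $2^4$ binary sequences of period dividing 4.

```python
from itertools import product

def pdeg(a):
    return a.bit_length() - 1

def pmod(a, b):
    while a and pdeg(a) >= pdeg(b):
        a ^= b << (pdeg(a) - pdeg(b))
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

n = 4  # a power of 2, so Rueppel predicts E_{2,n} = n - 1 + 2^{-n} = 3.0625
total = 0
for bits in product([0, 1], repeat=n):
    sx = sum(b << i for i, b in enumerate(bits))            # s_n(x)
    total += 0 if sx == 0 else n - pdeg(pgcd((1 << n) | 1, sx))
avg = total / 2 ** n
```

The exhaustive average is exactly 3.0625, as predicted.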

Quadratic Complexity
Quadratic feedback functions are the simplest nonlinear case and have been discussed by several researchers [43,44,45]. Chan and Games in [43] studied the quadratic complexity of binary DeBruijn sequences of order m (in which each binary m-tuple $s_0s_1\ldots s_{m-1}$ appears exactly once per period); Youssef and Gong [44] characterized a jump property of the quadratic complexity profile of random sequences; and Rizomiliotis et al. in [45] proposed an efficient method to calculate the quadratic complexity of binary sequences in general. We first recall the definition of the quadratic complexity of a sequence.

Definition 3. Given a binary sequence $s = s_0s_1s_2\ldots$, its quadratic complexity is the length of the shortest FSR with a quadratic feedback function that can generate s. The quadratic complexity profile of the sequence s is defined analogously as $Q_0(s), Q_1(s), Q_2(s), \ldots$, where $Q_n(s)$ is the quadratic complexity of the first n terms of s.
Suppose the first n terms $s_0s_1\ldots s_{n-1}$ of s can be generated by an m-stage quadratic FSR. Following the recurrence as in Eq. (2), we obtain a system of $n - m$ equations. More concretely, denote by M(n, m) the $(n - m) \times \left(1 + \frac{m(m+1)}{2}\right)$ matrix whose row indexed by i, for $0 \le i < n - m$, is
$$\left(1,\ s_i,\ s_{i+1},\ \ldots,\ s_{i+m-1},\ s_is_{i+1},\ s_is_{i+2},\ \ldots,\ s_{i+m-2}s_{i+m-1}\right),$$
by F(m) the column vector of the $1 + \frac{m(m+1)}{2}$ unknown coefficients of a quadratic feedback function in m variables, and by $E(n, m) = (s_m, s_{m+1}, \ldots, s_{n-1})^T$. Therefore, finding a quadratic feedback function in m variables that outputs the sequence $s_n = s_0s_1\ldots s_{n-1}$ is equivalent to solving the system of linear equations $M(n, m)F(m) = E(n, m)$ in the $1 + \frac{m(m+1)}{2}$ variables of F(m). Notice that the ordering of linear and quadratic terms in M(n, m) has the feature that a quadratic term $s_is_j$ occurs in M(n, m) only after the terms $s_i$ and $s_j$ occur. This feature facilitates the calculation of $Q_n(s)$ term by term, as in the Berlekamp-Massey algorithm.
We first give a toy example to illustrate the above linear equations. Let n = 9 and m = 3. Then we have
$$M(9, 3) = \begin{pmatrix}
1 & s_0 & s_1 & s_2 & s_0s_1 & s_0s_2 & s_1s_2 \\
1 & s_1 & s_2 & s_3 & s_1s_2 & s_1s_3 & s_2s_3 \\
1 & s_2 & s_3 & s_4 & s_2s_3 & s_2s_4 & s_3s_4 \\
1 & s_3 & s_4 & s_5 & s_3s_4 & s_3s_5 & s_4s_5 \\
1 & s_4 & s_5 & s_6 & s_4s_5 & s_4s_6 & s_5s_6 \\
1 & s_5 & s_6 & s_7 & s_5s_6 & s_5s_7 & s_6s_7
\end{pmatrix}, \qquad E(9, 3) = (s_3, s_4, s_5, s_6, s_7, s_8)^T.$$
From the relation $M(9, 3)F(3) = E(9, 3)$, we obtain 6 equations with 7 variables in F(3).
It is straightforward to see that the system is solvable if and only if
$$\mathrm{rank}(M(n, m)) = \mathrm{rank}\big([M(n, m)\ |\ E(n, m)]\big),$$
i.e., the coefficient matrix and the augmented matrix have the same rank. Chan and Games in [43] made the following observation.
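The solvability criterion suggests a direct (if naive) way to compute $Q_n(s)$: for increasing m, build the system $M(n, m)F(m) = E(n, m)$ and compare the ranks of the coefficient and augmented matrices over $\mathbb{F}_2$. The brute-force sketch below is our own, not the algorithm of [43] or [45]; rows are packed into integer bitmasks.

```python
from itertools import combinations

def rank_gf2(rows):
    """Rank over F_2 of a matrix whose rows are integer bitmasks (xor-basis)."""
    basis = {}
    for r in rows:
        while r:
            hb = r.bit_length() - 1
            if hb not in basis:
                basis[hb] = r
                break
            r ^= basis[hb]
    return len(basis)

def quadratic_complexity(s):
    """Smallest m such that M(n, m)F(m) = E(n, m) is solvable."""
    n = len(s)
    for m in range(1, n):
        A, Ab = [], []
        for i in range(n - m):
            w = s[i:i + m]
            ent = [1] + w + [w[j] * w[k] for j, k in combinations(range(m), 2)]
            row = sum(b << t for t, b in enumerate(ent))
            A.append(row)
            Ab.append((row << 1) | s[i + m])   # augmented with the right-hand side
        if rank_gf2(A) == rank_gf2(Ab):        # solvable iff the ranks agree
            return m
    return n - 1
```

For instance, the sequence 0001 has linear complexity 4 but quadratic complexity only 3, showing that allowing quadratic terms can shorten the required register.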
Theorem 3. Suppose an m-stage quadratic FSR can generate $s_{n-1}$ but not $s_n$. Then there is no m-stage quadratic FSR that can generate $s_n$ if and only if $\mathrm{rank}(M(n, m)) = \mathrm{rank}(M(n-1, m))$.

By this theorem, at each step, when $s_0s_1\ldots s_{n-2}$ has quadratic complexity m and $\mathrm{rank}(M(n, m)) \ne \mathrm{rank}(M(n-1, m))$, the extension $s_0s_1\ldots s_{n-2}s_{n-1}$ for any choice of $s_{n-1}$ also has quadratic complexity m. Based on this observation, they proposed a term-by-term algorithm, similar to the Berlekamp-Massey algorithm, to calculate the quadratic complexity of a sequence $s_n$ [23]. Their algorithm depends heavily on Gaussian elimination for the computation of $\mathrm{rank}(M(n, m))$ for each new term. The special structure of the matrix M(n, m) was not taken into consideration in their algorithm, which also did not reveal the precise increment of the quadratic complexity of $s_n$. By looking into the structure of the matrix M(n, m), Youssef and Gong [44] showed that if the quadratic complexity of the first n terms of a sequence s is larger than n/2, then the first n + 1 terms of s have the same quadratic complexity. Rizomiliotis, Kolokotronis and Kalouptsidis in [45] observed a nesting structure of the matrix M(n, m), whereby smaller instances of the matrix appear as submatrices of larger ones, so that rank computations can be updated incrementally and the size of each jump of the quadratic complexity can be characterized (Theorem 4 in [45]).

Figure 2: Recursive calculation of the quadratic complexity profile of s

Theorems 3 and 4 laid the foundation for an algorithmic method to calculate the quadratic complexity of a sequence. More specifically, for each new term, Theorem 3 determines when the quadratic complexity will increase, and Theorem 4 characterizes how large the increment is. Combining the different cases for each new term, Figure 2 provides a preliminary algorithm to recursively assess the quadratic complexity of a sequence s. It can be seen from Figure 2 that the assessment heavily depends on the calculation of the ranks of the involved matrices, which becomes slower as n and m grow. With a detailed analysis of the nesting structure of M(n, m), the authors in [45] proposed a more efficient algorithm (as in [45, Fig. 3]) to calculate the quadratic complexity profile of $s_n$. In addition, when $n < \frac{m(m+3)}{2} + 1$, the system $M(n, m)F(m) = E(n, m)$ is under-determined; and when $n \ge \frac{m(m+3)}{2} + 1$, a necessary and sufficient condition for a unique quadratic feedback function generating the sequence $s_n$ can be given [45].
Theorem 5. The quadratic feedback function of an m-stage FSR that generates the sequence $s_n$ is unique if and only if $\mathrm{rank}(M(n, m)) = 1 + \frac{m(m+1)}{2}$, i.e., M(n, m) has full column rank. Otherwise, there are $2^{1 + \frac{m(m+1)}{2} - \mathrm{rank}(M(n, m))}$ quadratic feedback functions of m-stage FSRs that generate $s_n$.
This theorem illustrates that a binary sequence s with small quadratic complexity m should not be used for cryptographic applications. Otherwise, when on the order of $\frac{m(m+3)}{2}$ consecutive components $s_i$ of s are known, the quadratic feedback function could be (uniquely) determined, thereby producing the whole sequence and violating the requirement of unpredictability on sequences used in cryptography.
In the previous section, we saw a good theoretical understanding of the statistical behavior of the linear complexity and linear complexity profile of random sequences. However, to the best of my knowledge, there is no published theoretical result on the statistical behavior of the quadratic complexity and quadratic complexity profile of random sequences $s_n$ and n-periodic sequences $s^\infty_n$, except for the following two conjectures by Youssef and Gong [44], which are strongly supported by numerical results.
Conjecture 1. Let $N_n(q_c)$ be the number of binary sequences $s_n$ with quadratic complexity $q_c = Q(s) > n/2$. Then $N_n(q_c) = N_{n+i}(q_c + i)$ for any $i \ge 1$, indicating that $N_n(q_c)$ is a function of $n - q_c$.

Conjecture 2. For moderately large n, the expected value of the quadratic complexity of a random binary sequence of length n is given by an explicit closed-form expression in n; we refer to [44] for the conjectured formula.

Maximum-order Complexity
As an additional figure of merit for randomness testing, Jansen and Boekee in 1989 proposed the notion of maximum-order complexity of sequences [46], later also known as nonlinear complexity. We adopt the term maximum-order complexity in this survey for better distinction from the notion of quadratic complexity.
Definition 4. The maximum-order complexity of a sequence $s = s_0s_1s_2\ldots$, denoted by M(s), is the length of the shortest FSR that can generate the sequence s. Similarly, the maximum-order complexity profile of s is the sequence $M_0(s), M_1(s), M_2(s), \ldots$, where $M_n(s)$ for each $n \ge 0$ is the length of the shortest FSR that can generate the first n terms of s.
As pointed out in [46], the significance of the maximum-order complexity is that it tells exactly how many terms of s must be observed, at minimum, in order to be able to generate the entire sequence by means of an FSR of length M(s). Therefore, it has been considered an important measure for judging the randomness of sequences. Below we recall some basic properties of the maximum-order complexity of sequences [46,47].
Lemma 1. Let $s = s_0s_1s_2\cdots$ be a sequence over $\mathbb{F}_q$. (i) Let l be the length of the longest subsequence of s that occurs at least twice with different successors. Then the sequence s has maximum-order complexity $M(s) = l + 1$. (ii) The maximum-order complexity of any sequence of length n is at most $n - 1$; moreover, equality holds if and only if $s_n$ has the form $(\alpha, \alpha, \ldots, \alpha, \beta)$ with $\beta \ne \alpha$.

Lemma 2. Let $s = s^\infty_n$ be a sequence of period n over $\mathbb{F}_q$. Then the maximum-order complexity of s satisfies $\lceil \log_q n \rceil \le M(s) \le n - 1$.

From Lemmas 1 and 2, the difference between the maximum-order complexities of finite-length sequences and periodic sequences is apparent. One typical difference is that the maximum-order complexity of a periodic sequence remains unchanged under cyclic shift operations, while that of a finite-length sequence can change dramatically. For instance, for the sequence $0\ldots01$ of length n, we have $M((0\ldots01)^\infty) = M(0\ldots01) = n - 1$, while after a right cyclic shift, $M((10\ldots0)^\infty) = n - 1$ but $M(10\ldots0) = 1$.
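Lemma 1 (i) gives a direct way to compute M(s) for short sequences: scan all window lengths l and look for a subsequence that reappears with a different successor. A simple quadratic-time sketch (our own, for illustration only):

```python
def maximum_order_complexity(s):
    """M(s) = l + 1, where l is the length of the longest subsequence of s
    occurring at least twice with different successors (Lemma 1(i))."""
    s = list(s)
    n = len(s)
    best = -1
    for l in range(n):
        successors = {}
        for i in range(n - l):
            w = tuple(s[i:i + l])
            successors.setdefault(w, set()).add(s[i + l])
        if any(len(v) > 1 for v in successors.values()):
            best = l   # some length-l word has two different successors
    return best + 1
```

For instance, the sequence 1000 has M = 1, while 0001 attains the maximum M = n - 1 = 3, consistent with Lemma 1 (ii) and with the shift example above.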
Recall that the feedback function of the shortest LFSR generating a periodic sequence is unique, and that the quadratic feedback function is also unique when the matrix M(n, m) with $n = \frac{m(m+3)}{2} + 1$ is nonsingular; the situation for maximum-order complexity is significantly different.

Proposition 6. Let $\Phi_s$ denote the class of feedback functions of m-stage FSRs that can generate a periodic sequence $s = s^\infty_n$ with maximum-order complexity m over $\mathbb{F}_q$. Then the number of functions in $\Phi_s$ is equal to $|\Phi_s| = q^{q^m - n}$.
By the above proposition, the class $\Phi_s$ generally contains more than one feedback function that can generate the periodic sequence; the situation is similar for non-periodic sequences. One can therefore search for functions exhibiting certain properties, such as the function with the least number of terms. One method that minimizes the number of terms, and their orders, in the inclusive-or sum of products of variables or their complements is to use the disjunctive normal form (DNF) representation of the feedback function: for the binary case, f is written as an OR of AND-clauses in the variables $x_0, \ldots, x_{m-1}$ and their complements, with one clause for each state on which f takes the value 1. It is also interesting to note that a unique feedback function occurs for DeBruijn sequences of order m, in which every m-tuple over $\mathbb{F}_q$ appears exactly once in one period of length $q^m$ (so that $|\Phi_s| = q^{q^m - q^m} = 1$). For binary DeBruijn sequences of order m, Chan and Games in [23] showed that the quadratic complexity is upper bounded by $2^m - 1 - \binom{m}{2}$, and the bound is attained by those DeBruijn sequences derived from m-sequences by inserting a 0 into the all-zero subsequence of length m - 1 of the m-sequence.
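The insertion construction mentioned above is easy to carry out. The sketch below (our own code) builds the span-3 m-sequence 0010111 from $s_{i+3} = s_{i+1} + s_i$, inserts an extra 0 into its unique cyclic run of $m - 1 = 2$ zeros, and checks that the result is a DeBruijn cycle of order 3:

```python
def msequence_span3():
    """One period of the m-sequence from s_{i+3} = s_{i+1} + s_i (x^3 + x + 1)."""
    st, out = [0, 0, 1], []
    for _ in range(7):
        out.append(st[0])
        st = st[1:] + [(st[0] + st[1]) % 2]
    return out

def de_bruijn_from_msequence(mseq, m):
    """Insert a 0 into the (cyclically unique) run of m-1 zeros of an m-sequence."""
    n = len(mseq)
    doubled = mseq + mseq
    for i in range(n):
        run = doubled[i:i + m - 1]
        # the run of m-1 zeros in an m-sequence is bounded by 1s on both sides
        if run == [0] * (m - 1) and doubled[i + m - 1] == 1 and doubled[i - 1] == 1:
            return mseq[:i] + [0] + mseq[i:]

mseq = msequence_span3()                   # one period of length 2^3 - 1 = 7
db = de_bruijn_from_msequence(mseq, 3)     # a DeBruijn cycle of length 2^3 = 8
windows = {tuple((db + db)[i:i + 3]) for i in range(len(db))}
```

All $2^3 = 8$ binary 3-tuples occur exactly once among the cyclic windows of the resulting cycle.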

Computation and Statistical Behavior
In [46], Jansen associated M(s) with the maximum depth of a directed acyclic word graph (DAWG) built from s. Below we recall a toy example from [46] of generating a DAWG from a binary string, which helps readers better understand the relevant notions.
Example 1. The sequence s = 110100 has the following set of all subsequences SUB(s) = {λ, 1, 0, 11, 10, 01, 00, 110, 101, 010, 100, 1101, 1010, 0100, 11010, 10100, 110100}, where λ represents the empty sequence. For the sequence s, the endpoints are given as

  s:         1  1  0  1  0  0
  endpoints: 0  1  2  3  4  5  6

With the notions illustrated in Example 1, Jansen in [21] established the following connection between the maximum-order complexity of a sequence s and the depth of the DAWG derived from s, and proposed to calculate the maximum-order complexity of s from its DAWG.

Proposition 7. Given a sequence s, the maximum-order complexity of s and the depth d(s) of its DAWG satisfy M(s) = d(s) + 1.

Blumer's algorithm [48] was identified as an important tool for building the DAWG of a sequence s, thereby calculating its maximum-order complexity profile in time and memory linear in the sequence length. The method works particularly well for periodic sequences. With this algorithm, Jansen exhausted all 2^l binary sequences of length l and all l-periodic sequences for l ranging from 1 up to 24 [21, Tables 3.1-3.4], which exhibited the typical statistical behavior of maximum-order complexity profiles of random sequences: the expected maximum-order complexity of a random sequence s of sufficiently large length n over F_q is approximately 2 log_q(n). Following the successful approach in [45], Rizomiliotis and Kalouptsidis [49] considered the calculation of the maximum-order complexity of a sequence s in a similar way. From the recursive equations f(s_i, s_{i+1}, ..., s_{i+m−1}) = s_{i+m} for 0 ≤ i < n − m, one obtains a system of linear equations whose coefficient matrix M(n, m) is formed from all binary terms ∏_{i=0}^{m−1} x_i^{j_i}, where (j_0, j_1, ..., j_{m−1}) runs through F_2^m in a special ordering according to the degree of the terms. For instance, when n = 10 and m = 3, the system of linear equations is M(10, 3)A(10, 3) = E(10, 3), with the unknown coefficient vector A(10, 3) = (a, a_0, a_1, a_2, a_0a_1, a_0a_2, a_1a_2, a_0a_1a_2)^T and E(10, 3) = (s_3, s_4, s_5, s_6, s_7, s_8, s_9)^T.
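As an illustration of this linear-algebra view (our sketch, not the algorithm of [49]; the monomials are ordered by total degree as above), the following Python code builds the augmented matrix of the system f(s_i, ..., s_{i+m−1}) = s_{i+m} over F_2 and finds the smallest order m for which the system is solvable, which is exactly the maximum-order complexity.

```python
from itertools import product

def monomials(m):
    """Exponent vectors (j_0,...,j_{m-1}) of all 2^m monomials
    prod_i x_i^{j_i}, ordered by total degree."""
    return sorted(product((0, 1), repeat=m), key=lambda j: (sum(j), j))

def feedback_system(seq, m):
    """Augmented matrix over F_2 of the equations f(s_i..s_{i+m-1}) = s_{i+m}."""
    rows = []
    for i in range(len(seq) - m):
        w = seq[i:i + m]
        row = [int(all(w[t] == 1 for t in range(m) if j[t]))
               for j in monomials(m)]
        rows.append(row + [seq[i + m]])
    return rows

def consistent_gf2(aug):
    """Gaussian elimination over F_2; True iff the system has a solution."""
    aug = [r[:] for r in aug]
    ncols = len(aug[0]) - 1
    r = 0
    for c in range(ncols):
        piv = next((k for k in range(r, len(aug)) if aug[k][c]), None)
        if piv is None:
            continue
        aug[r], aug[piv] = aug[piv], aug[r]
        for k in range(len(aug)):
            if k != r and aug[k][c]:
                aug[k] = [a ^ b for a, b in zip(aug[k], aug[r])]
        r += 1
    # consistent iff no row reads 0 = 1
    return all(any(row[:-1]) or not row[-1] for row in aug)

def max_order_complexity(seq):
    """Smallest m for which some order-m feedback function fits seq."""
    if all(x == seq[0] for x in seq):
        return 0
    return next(m for m in range(1, len(seq))
                if consistent_gf2(feedback_system(seq, m)))
```

For example, the finite sequence 1110010 (one period of an order-3 m-sequence) has maximum-order complexity 3, while 0001 attains the extreme value n − 1 = 3.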
In [49] the authors investigated the properties of the m-tuples T_j(n, m) = (s_j, s_{j+1}, ..., s_{j+m−1}) as j runs through the indices 0, 1, ..., n − m, and proposed a term-by-term algorithm that computes the maximum-order complexity and outputs a feedback function for a given sequence s^n. They also pointed out that the dominant multiplication cost of their algorithm is of order O(n).
The above approach, based on analyzing the structure of M(n, m), is not entirely satisfactory. Later, in [50], the authors made further observations that improved the performance of calculating the maximum-order complexity of sequences. The first observation is a natural extension of the property of maximum-order complexity in Lemma 1 (i). The following observation instead characterizes the jump of the maximum-order complexity profile at each term.

Proposition 9. For a binary sequence s, suppose M_{n−1}(s) = m < M_n(s), and let i ≤ n − m − 1 be the least integer such that s_j . . .

In addition, they observed that the maximum-order complexity profile of a sequence has a close connection to its eigenvalue profile. More concretely, the eigenwords of a sequence s_0 s_1 ... s_{r−1} are those subsequences of s_0 s_1 ... s_{r−1} that are not contained in any proper prefix s_0 ... s_{t−1} of s_0 s_1 ... s_{r−1}. They showed the following interesting connection.

Theorem 6. For a binary sequence s, suppose M_{n−1}(s) = m and the minimal FSR of s^{n−1} does not generate s^n. Then M_n(s) = max{M_{n−1}(s), n − Eigenvalue(s^{n−1})}, where Eigenvalue(s^{n−1}) is the number of eigenwords in the sequence s^{n−1}.
From the observations on how the feedback function is updated when the minimal FSR of s^{n−1} does not generate s^n, they derived the following procedure to generate a minimal feedback function and the maximum-order complexity of a sequence s. Suppose M_{n−1}(s) = m and the minimal feedback function of s^{n−1} is h_{n−1}(x_0, ..., x_{m−1}). If the minimal FSR of s^{n−1} also generates s^n, then s^n and s^{n−1} share the minimal feedback function h_{n−1} and the same maximum-order complexity m. Consequently, they proposed an efficient algorithm for the maximum-order complexity, which works very similarly to the Berlekamp-Massey algorithm. For ease of comparison with the Berlekamp-Massey algorithm, we recall it in Algorithm 1. The computational complexity of Algorithm 1 is O(n^3) in the worst case and O(n^2 log_2(n)) in the average case, since the expected maximum-order complexity is E[M_n] ≈ 2 log_2(n). While it has complexity similar to that of the DAWG method [21], its recursive nature is an important advantage, since it eliminates the need to know the entire sequence in advance. This resembles the well-known Berlekamp-Massey algorithm in the linear case: if a discrepancy occurs at a certain step, a well-determined corrective function is added to the current minimal feedback function of s^{n−1} to provide an updated minimal feedback function of s^n. In addition, the update of the maximum-order complexity M_n(s) = max{M_{n−1}(s), n − Eigenvalue(s^{n−1})} at each step parallels the update L_n(s) = max{L_{n−1}(s), n − L_{n−1}(s)} for the linear complexity of the sequence s in the recursive procedure.
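The defining property behind these updates (equal m-windows must always be followed by equal symbols) also yields a direct, if naive, way to compute the whole maximum-order complexity profile. The following Python sketch (ours, for illustration only, and far slower than Algorithm 1 or the DAWG method) exploits the fact that the profile is non-decreasing.

```python
def is_consistent(s, m):
    """True iff equal m-windows of s are always followed by equal symbols,
    i.e. some feedback function of order m generates s."""
    seen = {}
    for i in range(len(s) - m):
        w = tuple(s[i:i + m])
        if seen.setdefault(w, s[i + m]) != s[i + m]:
            return False
    return True

def moc_profile(seq):
    """Maximum-order complexity profile M_1, ..., M_n, computed directly
    from the defining property; since M_t never decreases with t, the
    candidate order m is only ever increased (naive O(n^2 m) method)."""
    profile, m = [], 0
    for t in range(1, len(seq) + 1):
        while not is_consistent(seq[:t], m):
            m += 1
        profile.append(m)
    return profile
```

For the sequence 0001 the profile is 0, 0, 0, 3: a single jump at the step where the discrepancy occurs, as described above.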

Approximate statistical behavior
Conducting a rigorous theoretical analysis of the behavior of maximum-order complexity profiles, as has been done for linear complexity, appears intractable. On the other hand, the maximum-order complexity shares many similarities with the linear complexity. In order to facilitate randomness tests based on the maximum-order complexity, Erdmann and Murphy proposed a method to approximate the distribution of the maximum-order complexity [51]. Inspired by the properties of the maximum-order complexity in Lemmas 1 and 2, they investigated a function that approximates P(m, n), the probability that the first n m-tuples of a sequence are all different.
Proposition 10. Let R(m, n − 1) be the probability that the n-th m-tuple is unique given that the previous n − 1 m-tuples were unique, so that P(m, n) = R(m, n − 1)P(m, n − 1). Simulations on random sequences for m from 4 to 24 [51, Table 2] indicated that the proposed approximation is accurate, particularly when m ≥ 17.
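The quality of such approximations is easy to probe empirically. The following self-contained Python sketch (ours; the fixed seed and parameters are arbitrary choices) estimates the mean maximum-order complexity of random binary sequences and compares it with the heuristic value 2 log_2(n) quoted above.

```python
import math
import random

def moc(s):
    """Maximum-order complexity via the window-consistency property:
    the least m such that equal m-windows have equal successors."""
    m = 0
    while True:
        seen, ok = {}, True
        for i in range(len(s) - m):
            w = tuple(s[i:i + m])
            if seen.setdefault(w, s[i + m]) != s[i + m]:
                ok = False
                break
        if ok:
            return m
        m += 1

random.seed(2024)                      # arbitrary fixed seed
n, trials = 256, 200                   # arbitrary experiment size
mean = sum(moc([random.getrandbits(1) for _ in range(n)])
           for _ in range(trials)) / trials
# mean should be close to 2 * log2(n) = 16 for n = 256
print(mean, 2 * math.log2(n))
```

On runs of this size the empirical mean indeed concentrates within a few bits of 2 log_2(n).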
Recall that a purely periodic sequence has maximum-order complexity m if the first n m-tuples are unique but at least one of the first n (m − 1)-tuples is repeated. Thus, for a periodic sequence s = s^∞_n, it suffices to look at the first n + m − 1 bits to see whether the n m-tuples are unique, and at the first n + m − 2 bits to see whether the n (m − 1)-tuples are not. Denote by Q(m, n) the probability that the first n m-tuples are unique while the n (m − 1)-tuples are not. This probability function can be well approximated by a function q(m, n). Based on this approximation (whose accuracy was shown in [51, Table 3]), the expected maximum-order complexity was approximated as follows.

Theorem 7. Let M_n be the maximum-order complexity of a random periodic binary sequence s^∞_n. Then its expectation can be approximated via Q(m, n) ≈ q(m, n), and is close to 2 log_2(n) for large n.

Similarly, for random binary sequences of length n, the expected maximum-order complexity was approximated as below.

Theorem 8. Let N(m, n) be the number of binary sequences of length n with maximum-order complexity m. Then N(m, n) can be approximated analogously, and the expected maximum-order complexity M_n of a random binary sequence of length n follows from the approximated N(m, n).

Motivated by promoting the nonlinear complexity as a metric of sequences in statistical tests, as in [51], Petrides and Mykkeltveit [52,53] considered the classification of binary periodic sequences in terms of their nonlinear complexities. More concretely, they defined a certain type of recursion of periodic binary sequences: a binary sequence s^∞_n is said to satisfy the recursion R_v(n, i) determined by a relating vector v = v_0 ... v_{n−1}, which contains an even number of 1's and satisfies v_0 ... v_{n−γ−1} = 0...01 for a certain γ and v_{n−1} = 1. They established the following connection between the above recursion and the number N(n, m) of periodic binary sequences s^∞_n of nonlinear complexity m.

Theorem 9. Let V_{n,γ} be the set of 2^{γ−2} possible relating vectors for given integers n and γ, and let S(R_v(n, i), m) be the set of cyclically inequivalent sequences of nonlinear complexity m which satisfy the recursion R_v(n, i). Then one has a corresponding expression for N(n, m); in particular, an explicit count follows when v has the form v = 0 . . .

Sequences with high maximum-order complexity
Various complexity measures have been proposed in the literature to evaluate the randomness of sequences. A sequence with low complexity, be it linear, quadratic or maximum-order, allows a short FSR to generate the whole period of the sequence. That is to say, all remaining unknown terms of the sequence can be efficiently recovered once the feedback function and the initial state of the short FSR are determined. Clearly, such sequences must be avoided in any cryptographic application. On the other hand, the relation between high complexity of a sequence and good randomness of a sequence is not yet well understood. For the aforementioned complexity measures, the sequences of the form (0, ..., 0, 1) have the largest complexity but clearly poor randomness. According to the exhaustive search results on the maximum-order complexity in Tables 1-3 in [21], while the sequences (0, ..., 0, 1) are the only n-length sequences that attain the highest possible complexity n − 1, there are multiple n-periodic binary sequences attaining the highest maximum-order complexity n − 1.
Researchers have long been interested in sequences with high maximum-order complexity [54,55], particularly in sequences with the highest possible maximum-order complexity [56,57]. For the latter case, Rizomiliotis [49] in 2005 proposed the following necessary and sufficient condition for an n-periodic binary sequence to have maximum-order complexity equal to its linear complexity.
Proposition 11. For a periodic binary sequence s, M(s) = L(s) if and only if there exist integers 0 ≤ p_1 < p_2 ≤ n − 1 satisfying the condition stated in [49].

Based on the above condition, different families of n-periodic binary sequences that have the same linear complexity and maximum-order complexity were proposed in [49]. Moreover, an algorithm based on Lagrange interpolation and an algorithm based on the relative shift of the component sequences were proposed to generate binary sequences with the highest possible maximum-order complexity. These two ideas were further developed for constructing such periodic sequences with period of the form 2^m − 1 [56]. Roughly ten years later, n-periodic sequences with the highest maximum-order complexity were revisited in [57], which provided a complete picture of such sequences. The authors of [57] gave the necessary and sufficient condition for n-periodic sequences over any alphabet to have maximum-order complexity n − 1.
Theorem 10. Let s = s^∞_n be an n-periodic sequence over any field F. Then M(s) = n − 1 if and only if there exists an integer p with 1 ≤ p < n satisfying the condition given in [57]. Furthermore, such a sequence can, up to shift equivalence, be represented as one of the forms (1), where the integers m_i, r_i are as specified in [57].

With the full characterization in Theorem 10, the distribution of random n-periodic binary sequences having maximum-order complexity n − 1 can be derived as follows.
Proposition 12. The probability that a randomly generated sequence s = s^∞_n attains the highest maximum-order complexity M(s) = n − 1 is given by an explicit expression in terms of φ(·), Euler's totient function, and µ(·), the Möbius function, which satisfies µ(n) = Σ_{k∈Z*_n} e^{2πik/n}. In particular, an explicit value of this probability follows for q = 2.
Interestingly, the sequences s characterized in Theorem 10 exhibit a strong recursive structure, which can be derived by applying the Euclidean algorithm to n and the integer p, a smaller period of a subsequence of s^n. This strong recursive structure, on the other hand, implies that s is far from being random. As discussed in [57], the balancedness, stability and scalability of such sequences are poor, indicating that they should be avoided in cryptographic applications. Very recently, binary sequences of period n were further studied in [58], where the authors investigated the structure of s^∞_n in more depth and proposed an algorithmic method to determine all n-periodic binary sequences with maximum-order complexity at least n/2.
In addition, Liang et al. recently investigated the structure of n-length binary sequences with high maximum-order complexity, at least n/2, and proposed an algorithm that completely generates all such binary sequences [55]. Based on this completeness, they managed to provide an explicit expression for the number of n-length binary sequences of any given maximum-order complexity between n/2 and n.
Proposition 13. The number of n-length binary sequences with maximum-order complexity m, where n/2 ≤ m ≤ n − 1, is given by an explicit formula in [55].

Relations between Complexity Measures
Besides the complexity measures in the context of FSRs treated in the previous sections, several other measures have been discussed by researchers, such as the Lempel-Ziv complexity [59], the 2-adic complexity for FSRs with carry operation [22], the k-error complexity, the expansion complexity [60], and correlation measures [61,62]. This section briefly reviews the relations among these measures. We start with the relations between the complexity measures of the previous sections, focusing mainly on extreme cases.

Relations between FSR-based complexities
From their definitions, it is easily seen that M(s) ≤ Q(s) ≤ L(s) for any sequence s. We know that the special sequences of the form s^n = α...αβ, where α ≠ β, have the largest linear complexity n, and quadratic and maximum-order complexity n − 1. As for n-periodic sequences s = s^∞_n over F_q, the largest linear complexity is also n, which can be seen from the expression L(s) = n − deg(gcd(x^n − 1, s^n(x))) recalled in Proposition 5. This expression can be used to characterize the periodic sequences with the largest linear complexity. Assume n = p^v w with gcd(p, w) = 1, where p is the characteristic of F_q. Let C_1 = {0}, C_2, ..., C_h be the cyclotomic cosets modulo w relative to powers of q, and let β be a primitive w-th root of unity in an extension of F_q. Then we have the factorization x^n − 1 = (f_1(x) f_2(x) ... f_h(x))^{p^v} over F_q, where f_j(x) = ∏_{i∈C_j}(x − β^i), which indicates that an n-periodic sequence over F_q has the largest linear complexity n if and only if gcd(s^n(x), f_j) = 1 for j = 1, 2, ..., h.
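The gcd expression above is directly computable. The following Python sketch (our illustration, representing binary polynomials as integer bitmasks with bit i holding the coefficient of x^i) evaluates L(s) = n − deg(gcd(x^n − 1, s^n(x))) for a binary n-periodic sequence.

```python
def gf2_deg(p):
    """Degree of a GF(2)[x] polynomial stored as an int bitmask (deg(0) = -1)."""
    return p.bit_length() - 1

def gf2_mod(a, b):
    """Remainder of a divided by b over GF(2)[x]."""
    while a and gf2_deg(a) >= gf2_deg(b):
        a ^= b << (gf2_deg(a) - gf2_deg(b))
    return a

def gf2_gcd(a, b):
    """Euclidean algorithm over GF(2)[x]."""
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def linear_complexity_periodic(period):
    """L(s) = n - deg(gcd(x^n - 1, s_n(x))) for a binary n-periodic
    sequence, with s_n(x) = s_0 + s_1 x + ... + s_{n-1} x^{n-1}."""
    n = len(period)
    s_poly = sum(bit << i for i, bit in enumerate(period))
    if s_poly == 0:
        return 0
    return n - gf2_deg(gf2_gcd((1 << n) | 1, s_poly))   # x^n + 1 over F_2
```

For example, the period-7 m-sequence 1110010 gives linear complexity 3, and the all-ones sequence of any period gives 1.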
Niederreiter in [63] proved the existence of n-periodic sequences that have both the largest linear complexity n and a large k-error linear complexity, the latter being defined as the smallest linear complexity obtainable by changing at most k terms of the sequence in one period. This result was obtained by finding integers n = p^v w satisfying a condition derived from the following observation [63]: let l = min_{2≤i≤h} |C_i| and suppose k is an integer such that Σ_{j=0}^{k} (n choose j)(q − 1)^j < q^l; then there exists an n-periodic sequence s whose k-error linear complexity remains equal to n. As shown by the examples in [63], there are infinitely many primes n allowing for the existence of binary n-periodic sequences of the maximum possible linear complexity n.
Finding n-periodic sequences with the largest maximum-order complexity seems more challenging, especially when the k-error complexity is also taken into account. As recalled in Section 5, Sun et al. [57] characterized the n-periodic sequences with the largest maximum-order complexity n − 1, which have a surprisingly strong recursive structure. Beyond this, few results about the k-error maximum-order complexity and the quadratic complexity have been reported. Here we propose two open problems for interested readers.
Problem 2. Characterize all the n-periodic sequences over F_q with the largest quadratic complexity n − 1.
Problem 3. Establish lower bounds on the k-error maximum-order complexity of n-periodic sequences with maximum-order complexity n − 1.
The other extreme case occurs for sequences with the least possible complexity values. For finite-length sequences, the linear, quadratic and maximum-order complexity can be as small as zero, which is not interesting to explore. When s is a periodic sequence over F_q, m-sequences of period n = q^m − 1 have the least complexities M(s) = Q(s) = L(s) = m, and de Bruijn sequences of period n = q^m have the least maximum-order complexity M(s) = m. Beyond these observations, what else do we know about the relations between these complexity measures?
Due to the limited understanding of de Bruijn sequences, there have been only partial results on the above question. It is well known [64] that the linear complexity of a binary de Bruijn sequence of order m lies between 2^{m−1} + m and 2^m − 1, and that both the lower and the upper bound are attainable.
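Since binary de Bruijn sequences of order m have period 2^m, their linear complexity can be computed in time linear in the period with the Games-Chan algorithm for sequences whose period is a power of two. The following Python sketch (ours) implements it.

```python
def games_chan(s):
    """Linear complexity of a binary sequence of period 2^m,
    via the Games-Chan halving recursion."""
    assert len(s) & (len(s) - 1) == 0, "period must be a power of two"
    s, c = list(s), 0
    while len(s) > 1:
        half = len(s) // 2
        left, right = s[:half], s[half:]
        diff = [l ^ r for l, r in zip(left, right)]
        if any(diff):
            c += half      # the complexity gains the half-length
            s = diff       # recurse on the difference of the halves
        else:
            s = left       # halves agree: recurse on one half
    return c + s[0]
```

For the order-3 de Bruijn sequence 00010111 it returns 7 = 2^2 + 3, attaining the lower bound quoted above.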
However, there exist no binary de Bruijn sequences with linear complexity 2^{m−1} + m + 1 [65]. Based on the evidence from existing results, Etzion made a further conjecture on the attainable linear complexities in his survey. There is also a close connection between m-sequences of order m over F_q and de Bruijn sequences of order m: several works have studied the linear complexity of modified de Bruijn sequences, which are obtained by removing one zero from the longest run of zeros in a de Bruijn sequence. Mayhew and Golomb [66] showed that the minimal polynomial of a binary modified de Bruijn sequence of order m is a product of distinct irreducible polynomials with degrees not equal to 1 and dividing m. Further results on the restrictions of the degrees were developed in [67,68,69].
Note that for a binary sequence s with low quadratic complexity m, any subsequence of length slightly larger than m(m + 3)/2 + 1 is sufficient to recover the whole sequence s. Nevertheless, the topic of quadratic complexity has been significantly less explored: little research progress has been made on the quadratic complexity, although it was already studied in 1989. To the best of my knowledge, the only results on the quadratic complexity of de Bruijn sequences were reported by Chan and Games [23].
Proposition 14. The quadratic complexity of a binary de Bruijn sequence s of order m ≥ 3 is upper bounded by 2^m − 1 − m^2. In particular, this upper bound is attained by the de Bruijn sequence derived from an m-sequence of order m by including the all-zero m-tuple.
In addition, numerical results for m = 3, 4, 5, 6 led to a conjecture in [23]: the quadratic complexity of a binary de Bruijn sequence of order m ≥ 3 is lower bounded by m + 2. Three years later, this conjecture was confirmed by Khachatrian in [70].

FSR-based complexities, Lempel-Ziv complexity, expansion complexity, and autocorrelation
This subsection briefly reviews recent works on the relations among the linear complexity, the maximum-order complexity, and other significant metrics for assessing pseudo-random sequences.

2-adic complexity
The 2-adic complexity, introduced by Goresky and Klapper [71,22], is closely related to the length of the shortest feedback with carry shift register (FCSR) that generates the sequence. The theory of the 2-adic complexity of periodic sequences has been well developed. Given a sequence s = s^∞_n of period n, one has Σ_{i≥0} s_i 2^i = −A/q, where 0 ≤ A ≤ q, gcd(A, q) = 1 and q = (2^n − 1)/gcd(2^n − 1, Σ_{i=0}^{n−1} s_i 2^i). The 2-adic complexity of s is defined as Φ(s) = log_2(q). In the aperiodic case, the n-th 2-adic complexity, denoted by Φ_{2,n}(s), is the binary logarithm of the smallest max(|f|, |g|) over all pairs of integers (f, g) representing the first n terms of s in the above 2-adic sense. There has been very little work on the relation between the linear, quadratic and maximum-order complexities of a binary sequence and its 2-adic complexity. Only recently was it shown [72] that the maximum-order complexity of a binary sequence s is upper bounded by its 2-adic complexity.
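In the periodic case the definition above is a one-line computation. The following Python sketch (ours) evaluates Φ(s) = log_2((2^n − 1)/gcd(S(2), 2^n − 1)) with S(2) = Σ_{i=0}^{n−1} s_i 2^i, which follows from Σ_{i≥0} s_i 2^i = −S(2)/(2^n − 1) for an n-periodic sequence.

```python
import math

def two_adic_complexity(period):
    """2-adic complexity of a binary n-periodic sequence:
    log2((2^n - 1) / gcd(S(2), 2^n - 1)), S(2) = sum_i s_i 2^i."""
    n = len(period)
    S = sum(bit << i for i, bit in enumerate(period))
    q = (2 ** n - 1) // math.gcd(S, 2 ** n - 1)
    return math.log2(q)
```

For the period-7 m-sequence 1110010 the result is log_2(127), the maximum possible value, while the all-ones sequence has 2-adic complexity 0.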

Lempel-Ziv complexity
The Lempel-Ziv complexity of a sequence is a classical complexity measure based on pattern counting, introduced by Lempel and Ziv in 1976 [59] and later named after them. The parsing procedure used in the calculation of the Lempel-Ziv complexity laid the basis for the prominent Lempel-Ziv compression algorithms [73,74]. The Lempel-Ziv complexity reflects the compressibility of a sequence and is thus of significant cryptographic interest: it measures the rate at which new patterns emerge as one moves along a given sequence. For a binary sequence s^n = s_0 s_1 ... s_{n−1}, we split s^n into adjacent blocks. By definition, the first block consists of s_0. If s_0 ... s_{m−1} is a union of blocks, or in other words if s_{m−1} is the last bit of a block, then the next block s_m ... s_{m+k−1} is uniquely determined by two properties: i) the sequence s_m ... s_{m+k−2} occurs as a subsequence of s_0 ... s_{m+k−3}; ii) the sequence s_m ... s_{m+k−1} does not occur as a subsequence of s_0 ... s_{m+k−2}. The Lempel-Ziv complexity is then the number of blocks into which s^n is split by this procedure.
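The parsing just described can be implemented greedily: each new block is the shortest prefix of the remainder that does not occur (as a contiguous subsequence, overlap allowed up to one symbol before the block's end) in what precedes it. A minimal Python sketch of ours:

```python
def lempel_ziv_complexity(bits):
    """Number of blocks in the LZ76 parsing of a binary sequence."""
    s = "".join(str(b) for b in bits)
    i, blocks = 0, 0
    while i < len(s):
        k = 1
        # grow the block while its all-but-last part still occurs earlier
        while i + k <= len(s) and s[i:i + k] in s[:i + k - 1]:
            k += 1
        blocks += 1
        i += k
    return blocks
```

For the sequence 0001101001000101 the procedure produces the parsing 0 | 001 | 10 | 100 | 1000 | 101, i.e. Lempel-Ziv complexity 6.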
Motivated by Niederreiter's statement [31], Limniotis, Kolokotronis and Kalouptsidis explored the relation between the maximum-order complexity and the Lempel-Ziv complexity of a sequence, both of which are closely connected to the eigenvalue profile of the sequence. They pointed out that although the exhaustive history (which uniquely determines the Lempel-Ziv complexity) cannot fully determine the maximum-order complexity profile, and vice versa, there still exists a relationship between them, in the sense that the eigenvalue profile of a sequence of given Lempel-Ziv complexity, which determines the maximum-order complexity profile, is restricted rather than arbitrary. They also established a lower bound on the compression ratio of sequences with a prescribed maximum-order complexity.

Expansion complexity
In 2012, Diem introduced the notion of the expansion complexity of sequences [24], showing that the sequences over finite fields with optimal linear complexity proposed by Xing and Lam via function expansion [29] can be efficiently computed from a relatively short subsequence.
Let s = s_0 s_1 s_2 ... be a sequence over F_q with generating function G(x) = Σ_{i≥0} s_i x^i. For a positive integer n, the n-th expansion complexity E_n(s) is 0 if s_0 = ... = s_{n−1} = 0, and otherwise the least total degree of a nonzero polynomial h(x, y) = Σ_{i,j} h_{ij} x^i y^j ∈ F_q[x, y] with h(x, G(x)) ≡ 0 (mod x^n), where the total degree of h(x, y) is given by deg(h(x, y)) = max{i + j : x^i y^j is a monomial of h(x, y) with h_{ij} ≠ 0}.
The expansion complexity of the sequence s is defined as E(s) = sup_{n≥1} E_n(s). It can be verified that E_n(s) ≤ E_{n+1}(s) ≤ E_n(s) + 1. If h(x, y) is restricted to be irreducible, the n-th irreducible expansion complexity and the irreducible expansion complexity of s can be defined accordingly. The relations between the linear complexity, the maximum-order complexity and the expansion complexity were discussed recently in [60,75].

Proposition 16. For any infinite sequence s over F_q, if M_n(s) = n − k with 1 ≤ k < √(2n − 2) and n > 8, then its n-th expansion complexity satisfies E_n(s) ≤ k + 2.
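For binary sequences, E_n(s) can be computed by linear algebra: h(x, G(x)) ≡ 0 (mod x^n) for some nonzero h of total degree at most d exactly when the truncated polynomials x^i G(x)^j mod x^n, for i + j ≤ d, are linearly dependent over F_2. The following Python sketch (ours, a brute-force illustration rather than an efficient method) searches for the least such d.

```python
def _mulmod(a, b, n):
    """Carry-less product of GF(2)[x] polynomials (int bitmasks), mod x^n."""
    r = 0
    while a:
        if a & 1:
            r ^= b
        a >>= 1
        b <<= 1
    return r & ((1 << n) - 1)

def _gf2_dependent(vecs):
    """True iff the given bit-vectors are linearly dependent over F_2."""
    basis = {}
    for v in vecs:
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
        if v == 0:
            return True
    return False

def expansion_complexity(seq):
    """n-th expansion complexity of a binary sequence of length n."""
    n = len(seq)
    if not any(seq):
        return 0
    G = sum(bit << i for i, bit in enumerate(seq))
    for d in range(1, n + 1):
        vecs = []
        for i in range(d + 1):
            for j in range(d + 1 - i):
                v = (1 << i) if i < n else 0    # x^i mod x^n
                for _ in range(j):
                    v = _mulmod(v, G, n)        # multiply by G(x), j times
                vecs.append(v)
        if _gf2_dependent(vecs):
            return d
    return n
```

For example, the all-ones sequence of length 8 has E_8 = 2, witnessed by h(x, y) = 1 + y + xy, since (1 + x)G(x) ≡ 1 (mod x^8).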

k-order correlation measure
Let k be a positive integer. The n-th correlation measure of order k of a binary sequence s = s_0 s_1 s_2 ... is given by [76]

C_n(s, k) = max_{M,D} | Σ_{i=0}^{M−1} (−1)^{s_{i+d_1} + ... + s_{i+d_k}} |,

where the maximum is taken over all tuples D = (d_1, ..., d_k) of integers with 0 ≤ d_1 < ... < d_k and all M with M + d_k ≤ n. The relations between the linear complexity, the maximum-order complexity and the correlation measure of order k were explored in [76,61,62], and the bounds obtained there were recently improved in [62].
Proposition 18. Given a binary sequence s = s_0 s_1 s_2 ..., if the n-th linear complexity of s is small relative to n for some integer t, then one obtains a lower bound on C_n(s, k) for some k with 1 < k ≤ 2t. In addition, if the n-th maximum-order complexity of s satisfies M_n(s) ≤ M, then one has C_n(s, 2) ≥ n − 2^M + 1.
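The definition of C_n(s, k) can be evaluated verbatim for small parameters; the maximum over all window lengths M is obtained by tracking the running partial sums. A brute-force Python sketch of ours (exponential in k, so only suitable for short sequences):

```python
from itertools import combinations

def correlation_measure(seq, k):
    """C_n(s, k): maximum over all lag tuples d_1 < ... < d_k and window
    lengths M of |sum_{i=0}^{M-1} (-1)^(s_{i+d_1}+...+s_{i+d_k})|."""
    n = len(seq)
    best = 0
    for lags in combinations(range(n), k):
        total = 0
        for i in range(n - lags[-1]):   # partial sums cover every M
            total += (-1) ** sum(seq[i + d] for d in lags)
            best = max(best, abs(total))
    return best
```

For example, the alternating sequence 0101 has C_4(s, 2) = 3, driven by the lag pair (0, 1).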

Conclusion
This survey primarily focused on the complexity measures of sequences within the domain of feedback shift registers, which are efficiently computable and of significant interest in cryptographic applications. Notable works on the computation, the stochastic behavior, and theoretical results on these complexities were examined to a certain technical extent. Several conjectures were revisited and new open problems were proposed. To offer a sound overview of the research development on complexity measures, the survey also briefly reviewed the established relations between these complexities, as well as their connections with other important metrics for pseudo-random sequences, including the well-known 2-adic complexity, Lempel-Ziv complexity, expansion complexity and k-order correlation measure. The survey indicates that although the study of complexity measures of randomness traces back to the 1960s, only the linear complexity and the maximum-order complexity can be considered relatively well explored. Other complexity measures and their interrelations appear more intractable and require new tools, techniques, and theoretical studies in future research.
is the starting row of M(n, m), and M′(n, m) contains all the columns of M(n, m + 1) with indices of the form j = b(b + 1)/2 + k, with 0 ≤ k ≤ b < m, including the starting all-one column 1^T_{n−(m+1)} of M(n, m + 1). This nesting structure played an important role in their derivation of the following results.

Proposition 8.
For a binary sequence s, suppose M_{n−1}(s) = m and the minimal FSR of s_0 s_1 ... s_{n−2} does not generate s_0 s_1 ... s_{n−2} s_{n−1}. Then M_n(s) = m if and only if the subsequence s_{n−m−1} s_{n−m} ... s_{n−2} has not appeared earlier within s^{n−1}.
then the sequence s^n || s_n ... s_{n+k−1}, for an arbitrary extension s_n ... s_{n+k−1}, has the same maximum-order complexity m + k.