Building Test Batteries Based on Analyzing Random Number Generator Tests within the Framework of Algorithmic Information Theory

The problem of testing random number generators is considered and a new method for comparing the power of different statistical tests is proposed. It is based on the definition of a random sequence developed in the framework of algorithmic information theory and makes it possible to compare the power of different tests in some cases where the available methods of mathematical statistics do not distinguish between them. In particular, it is shown that tests based on data compression methods using dictionaries should be included in test batteries.


Introduction
Random numbers play an important role in cryptography, gambling, Monte Carlo methods and many other applications. Nowadays, random numbers are generated using so-called random number generators (RNGs), and the "quality" of the generated numbers is evaluated using special statistical tests [1]. This problem is so important for applications that there are special standards for RNGs and for so-called test batteries, that is, sets of tests. The current practice for using an RNG is to verify the sequences it generates with tests from some battery (such as those recommended by [2,3] or other standards).
A natural question is: how do we compare different tests and, in particular, create a suitable battery of tests? Currently, this question is mostly addressed experimentally: possible candidate tests are applied to a set of known RNGs, and the tests that reject more ("bad") RNGs are suitable candidates for the battery. In addition, researchers try to choose independent tests (i.e., those that reject different RNGs) and take into account other natural properties (e.g., testing speed) [1-4]. Obviously, such an approach depends significantly on the set of selected tests and the RNGs pre-selected for consideration. It is worth noting that at present there are dozens of RNGs and tests, and their number is growing fast, so the recommended batteries of tests are rather unstable (see [4]).
It is clear that increasing the number of tests in a battery increases the total testing time or, conversely, if testing time is limited, increasing the number of tests forces the length of the binary sequence examined by each test to decrease, and therefore the power of every test in the battery is reduced. It is thus highly desirable to include in the battery powerful tests designed for different deviations from randomness.
The goal of this paper is to develop a theoretical framework for test comparison and illustrate it by comparing some popular tests. The main idea of the proposed approach is based on the definition of randomness developed in algorithmic information theory (AIT). Apparently, it is natural to use this theory, since it is the only mathematically rigorous theory that formally defines what a random binary sequence is, and by definition any RNG should generate such sequences. Following AIT, we extend the notion of "random sequence" to any statistical test T, and then compare the "size" of the sets of random sequences corresponding to different tests. More precisely, let R_{T_1} and R_{T_2} be the sets of sequences that are random according to T_1 and T_2, respectively. Then, if dim(R_{T_1} \ R_{T_2}) > 0, T_1 accepts a large set of sequences as random, whereas T_2 rejects these sequences as non-random. So, in this sense, the test T_1 cannot replace T_2 in a battery of tests (here dim is the Hausdorff dimension).
Based on this approach, we give some practical recommendations for building test batteries. In particular, we recommend including in test batteries tests based on dictionary data compressors, such as the Lempel-Ziv codes [6], grammar-based codes [7] and some others.
The rest of this paper is organized as follows. The next part contains definitions and preliminary information; the third part compares the performance of tests on Markov processes with different memories and on general stationary processes; the fourth part investigates tests based on Lempel-Ziv data compressors. The fifth part is a brief conclusion; some of the concepts used in this paper are described in Appendix A.

Hypothesis Testing
In hypothesis testing, there is a main hypothesis H_0 = {the sequence x is random} and an alternative hypothesis H_1 = ¬H_0. (In the probabilistic approach, H_0 is that the sequence is generated by a Bernoulli source with equal probabilities of 0 and 1.) A test is an algorithm whose input is the prefix x_1 . . . x_n (of the infinite sequence x_1, . . ., x_n, . . .) and whose output is one of two possible words: random or non-random (meaning that the sequence is considered random or non-random, respectively).
Let there be a hypothesis H_0 and some alternative H_1, let T be a test and τ be a statistic, that is, a function on {0, 1}^n which is applied to a binary sequence x = x_1 . . . x_n. Here and below, {0, 1}^n is the set of all n-bit binary words and {0, 1}^∞ is the set of all infinite words x_1 x_2 . . . , x_i ∈ {0, 1}.
By definition, a Type I error occurs if H_0 is true and H_0 is rejected; the significance level is defined as the probability of a Type I error. Denote the critical region of the test T for the significance level α by C_T(α, n) and let C̄_T(α, n) = {0, 1}^n \ C_T(α, n). Recall that, by definition, H_0 is rejected if and only if x ∈ C_T(α, n) and, hence, |C_T(α, n)| ≤ α 2^n, (1) see [8]. We also apply another natural limitation: we consider only tests T such that C_T(α_1, n) ⊂ C_T(α_2, n) for all n and α_1 < α_2. (Here and below, |X| is the number of elements of X if X is a set, and the length of X if X is a word.) A finite sequence x_1 . . . x_n is considered random for a given test T and significance level α if it belongs to C̄_T(α, n).

Batteries of Tests
Let us consider a situation where randomness testing is performed by conducting a battery of statistical tests for randomness. Suppose that the battery T contains a finite or countable set of tests T_1, T_2, . . . and α_i is the significance level of the i-th test, i = 1, 2, . . . . If the battery is applied in such a way that the hypothesis H_0 is rejected when at least one test in the battery rejects it, then the significance level α of this battery satisfies the following inequality: α ≤ ∑_{i=1}^∞ α_i, (2) because P(A ∪ B) ≤ P(A) + P(B) for any events A and B (this inequality is a simple extension of the so-called Bonferroni correction, see [9]).
It will be convenient to formulate this inequality in a different way. Suppose there is some α ∈ (0, 1) and a sequence ω of non-negative weights ω_i such that ∑_{i=1}^∞ ω_i ≤ 1 (for example, the sequence ω_i = 2^{-i} satisfies this). If the significance level of T_i equals αω_i, then the significance level of the battery T is not greater than α (indeed, from (2) we obtain ∑_{i=1}^∞ αω_i ≤ α). Note that this simple observation makes it possible to treat a test battery as a single test.
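The battery-as-a-single-test rule above can be written out directly. The following is a minimal sketch; the weight sequence ω_i = 2^{-i} and the test-function interface are illustrative assumptions, not part of the paper.

```python
def battery_reject(tests, x, alpha):
    """Treat a battery as a single test: reject H0 if any test T_i
    rejects at its reduced significance level alpha * w_i.

    `tests` is a list of functions t(x, level) -> bool (True = reject H0).
    With weights w_i = 2**-i we have sum(w_i) <= 1, so by the
    Bonferroni-style inequality the overall Type I error is <= alpha.
    """
    for i, t in enumerate(tests, start=1):
        w_i = 2.0 ** -i                 # example weight sequence, sum <= 1
        if t(x, alpha * w_i):
            return True                 # at least one test rejects H0
    return False
```

For instance, with two tests the first runs at level α/2 and the second at level α/4, so the significance level of the whole battery stays below α.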

Random and Non-Random Infinite Sequences
Kolmogorov complexity is one of the central notions of algorithmic information theory (AIT), see [10-18]. We will consider the so-called prefix-free Kolmogorov complexity K(u), which is defined on finite binary words u and is closely related to the notion of randomness. More precisely, an infinite binary sequence x = x_1 x_2 . . . is random if there exists a constant C such that n − K(x_1 . . . x_n) ≤ C (4) for all n, see [19]. Conversely, the sequence x is non-random if n − K(x_1 . . . x_n) is unbounded as n grows. In some sense, Kolmogorov complexity is the length of the shortest lossless prefix-free code: for any (algorithmically realisable) lossless prefix-free code f there exists a constant c_f for which K(u) ≤ |f(u)| + c_f for all words u. Let f be a lossless prefix-free code defined for all finite words. Similarly to (4), we call a sequence x random with respect to f if there is a constant C_f such that n − |f(x_1 . . . x_n)| ≤ C_f (5) for all n. We call this difference the statistic corresponding to f and define τ_f(x_1 . . . x_n) = n − |f(x_1 . . . x_n)|. (6) Similarly, the sequence x is non-random with respect to f if lim sup_{n→∞} τ_f(x_1 . . . x_n) = ∞. (7) Informally, x is random with respect to f if the statistic τ_f is bounded by some constant on all prefixes x_1 . . . x_n and, conversely, x is non-random if τ_f is unbounded as the prefix length grows.
Based on these definitions, we can reformulate the concepts of randomness and non-randomness in a manner similar to what is customary in mathematical statistics. Namely, for any α ∈ (0, 1) we define the set {y = y_1 . . . y_n : τ_f(y) ≥ − log α}. It is easy to see that (1) is valid for this set (since f is prefix-free, the number of words y with |f(y)| ≤ n − log(1/α) does not exceed α 2^n) and, therefore, this set can serve as the critical region C_T(α, n) of the test T that rejects H_0 on x_1 . . . x_n if and only if τ_f(x_1 . . . x_n) ≥ − log α.
Based on these considerations, (6) and the definitions of randomness (4) and (5), we give the following definition of randomness and non-randomness for the statistic τ_f and the corresponding test T_f. An infinite sequence x = x_1 x_2 . . . is random according to the test T_f if there exists α > 0 such that for every integer n the word x_1 . . . x_n is random according to the test T_f at this α. Otherwise, the sequence x is non-random.
Note that we can use the statistic τ_f(x_1 . . . x_n) = n − |f(x_1 . . . x_n)| for any lossless data compressor f [20,21]. So, there is no need to use a distribution density formula, which greatly simplifies the use of the test and makes it possible to apply it with any data compressor f.
It is important to note that there are tests developed within the AIT that can be used to test RNG [22,23].

Test Performance Comparison
For a test T, let us define the set R_T of all infinite sequences that are random for T. We use this definition to compare the "effectiveness" of different tests as follows: the test T_1 is more efficient than T_2 if the size of the difference R_{T_2} \ R_{T_1} is not zero, where the size is measured by the Hausdorff dimension.
Informally, the "smallest" set of random sequences corresponds to a test based on Kolmogorov complexity (4) (the corresponding set R_K contains the "truly" random sequences). For a given test T_1 we cannot compute the difference R_{T_1} \ R_K because the statistic in (4) is non-computable, but in the case of two tests T_1 and T_2 with dim(R_{T_2} \ R_{T_1}) > 0, we can say that the set of sequences random according to T_2 contains clearly non-random sequences. So, in some sense, T_1 is more efficient than T_2. (Recall that we only consider computable tests.) The definition of the Hausdorff dimension is given in Appendix A, but here we briefly note how we use it: for any binary sequence x_1 x_2 . . . we define the real number σ(x) = 0.x_1 x_2 . . . , and for any set S of infinite binary sequences we denote the Hausdorff dimension of σ(S) by dim S. So, a test T_1 is more efficient than T_2 (formally, T_2 ⪯ T_1) if dim(R_{T_2} \ R_{T_1}) > 0. Obviously, information about a test's effectiveness can be useful to developers of test batteries. Also note that the Hausdorff dimension is widely used in information theory. Perhaps the first such use was due to Eggleston [24] (see also [25,26]), and later the Hausdorff dimension found numerous applications in AIT [27-29].

Shannon Entropy
In RNG testing, one of the popular alternative hypotheses (H_1) is that the considered sequence is generated by a Markov process of memory (or connectivity) m, m > 0 (S_m), with unknown transition probabilities. (S_0, i.e., m = 0, corresponds to the Bernoulli process.) Another popular and perhaps the most general H_1 is that the sequence is generated by a stationary ergodic process (S_∞), excluding H_0.
Let us consider the Bernoulli process µ ∈ S_0 for which µ(0) = p, µ(1) = q (p + q = 1). By definition, the Shannon entropy h(µ) of this process is h(µ) = −(p log p + q log q) [30]. For any stationary ergodic process ν ∈ S_∞ the entropy of order k is defined as follows: h_k(ν) = −E_ν( ∑_{z∈{0,1}} ν(z|u) log ν(z|u) ), where the expectation E_ν is taken over k-words u according to ν, and ν(z|u) is the conditional probability ν(x_{i+1} = z | x_{i−k+1} . . . x_i = u), which does not depend on i due to stationarity [30].
It is known in information theory that for stationary ergodic processes (including S_∞ and S_m, m ≥ 0), h_k ≥ h_{k+1} for k ≥ 0 and there exists the limit Shannon entropy h_∞(ν) = lim_{k→∞} h_k(ν). Besides, for ν ∈ S_m, h_∞ = h_m [30].
Shannon entropy plays an important role in data compression because for any lossless prefix-free code the average codeword length (per letter) is at least as large as the entropy, and this bound can be approached. More precisely, let ϕ be a lossless prefix-free code defined on {0, 1}^n, n > 0, let ν ∈ S_∞, and denote by E_n(ϕ, ν) the average codeword length per letter. Then, for any ϕ and ν, E_n(ϕ, ν) ≥ h(ν). In addition, there are codes ϕ_1, ϕ_2, . . . such that lim_{n→∞} E_n(ϕ_n, ν) = h(ν) [30].
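The order-k entropy can be estimated from a sample by replacing the expectation with observed word frequencies. A small illustrative sketch (an empirical estimate computed from one sequence, not the exact h_k of the process):

```python
import math
from collections import Counter

def empirical_h_k(x: str, k: int) -> float:
    """Empirical conditional entropy of order k (base 2) of a binary
    string x: an estimate of h_k = -E[sum_z p(z|u) log p(z|u)] based
    on the observed frequencies of k-words u and (k+1)-words uz."""
    ctx = Counter(x[i:i + k] for i in range(len(x) - k))      # contexts u
    ext = Counter(x[i:i + k + 1] for i in range(len(x) - k))  # words uz
    total = sum(ctx.values())
    h = 0.0
    for w, c in ext.items():
        p_joint = c / total          # empirical probability of uz
        p_cond = c / ctx[w[:k]]      # empirical probability of z given u
        h -= p_joint * math.log2(p_cond)
    return h
```

For the purely periodic string "0101. . .", the order-0 estimate equals 1 (letters are balanced) while the order-1 estimate equals 0 (each letter is determined by its predecessor), matching h_0 = 1, h_1 = 0 for the corresponding deterministic process.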

Typical Sequences and Universal Codes
The sequence x_1 x_2 . . . is typical for the measure µ ∈ S_∞ if for any word y_1 . . . y_r, lim_{t→∞} N_{x_1...x_t}(y_1 . . . y_r)/t = µ(y_1 . . . y_r), where N_{x_1...x_t}(y_1 . . . y_r) is the number of occurrences of the word y_1 . . . y_r in the word x_1 . . . x_t.
Let us denote the set of all typical sequences by T_µ and note that µ(T_µ) = 1 [30]. This notion is deeply related to information theory. Thus, Eggleston proved the equality dim T_µ = h(µ) for Bernoulli processes (µ ∈ S_0) [24], and later this was generalized to µ ∈ S_∞ [26,28].
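The empirical frequencies in the definition of typicality are straightforward to compute; the following is a small sketch (occurrences are counted with overlaps, as in N_{x_1...x_t}):

```python
from collections import Counter

def word_frequencies(x: str, r: int) -> dict:
    """Return the empirical frequency of every r-letter word y
    occurring in x, i.e., the number of (overlapping) occurrences
    of y divided by the number of r-word positions in x."""
    t = len(x) - r + 1
    counts = Counter(x[i:i + r] for i in range(t))
    return {w: c / t for w, c in counts.items()}
```

For a sequence typical for the fair Bernoulli measure, the frequency of every r-word approaches 2^{-r} as the length grows.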
By definition, a code ϕ is universal for a set of processes S if for any µ ∈ S, with probability one, lim_{t→∞} |ϕ(x_1 . . . x_t)|/t = h_∞(µ). In 1968, R. Krichevsky [31] proposed a code κ^t_m(x_1 . . . x_t), m ≥ 0, t an integer, whose redundancy, i.e., the average difference between the code length per letter and the Shannon entropy, is asymptotically minimal. This code and its generalisations are described in Appendix A, but here we note its main property: it attains the above limit for any stationary ergodic process µ, see [32]. Currently there are many universal codes based on different ideas and approaches, among which we note the PPM universal code [33], the arithmetic code [34], and the Burrows-Wheeler transform [35], which is used along with the book-stack (or MTF) code [36-38], and some others [39-41].
The most interesting for us is the class of grammar-based codes suggested by Kieffer and Yang [7,42], which includes the Lempel-Ziv (LZ) codes [6] (note that perhaps the first grammar-based code was described in [43]).
The point is that all of them are universal codes and hence they "compress" stationary processes asymptotically to the entropy, and therefore they cannot be distinguished on S_∞. On the other hand, we show that grammar-based codes can recognize "large" sets of sequences beyond S_∞ as non-random.

Two-Faced Processes
The so-called two-faced processes are described in [20,21] and their definitions are given in Appendix A. Here, we note some of their properties: the set of two-faced processes Λ_s(p) of order s, s ≥ 1, and probability p, p ∈ (0, 1), contains measures λ from S_s such that h_k(λ) = 1 for k = 0, . . . , s − 1, whereas h_s(λ) = −(p log p + (1 − p) log(1 − p)). (11) Note that they are called two-faced because they appear to be truly random if we look at frequencies of words whose length is less than s, but are "completely" non-random if the word length is equal to or greater than s (and p is far from 1/2).
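The exact construction is given in Appendix A, but a simple order-1 analogue conveys the idea. The chain below is my illustration, consistent with the stated properties rather than taken from the paper: a binary Markov chain that repeats its previous bit with probability p has uniform single-letter statistics (h_0 = 1) yet biased pair statistics (h_1 = −(p log p + (1 − p) log(1 − p))).

```python
import random

def two_faced_order1(n: int, p: float, seed: int = 0) -> str:
    """Sample n bits from a binary Markov chain with
    P(x_{t+1} = x_t) = p. For p != 1/2, single-letter frequencies
    are still about 1/2 (the chain looks random at word length 1),
    but pair frequencies are biased: a simple order-1 analogue of
    the two-faced processes (illustrative stand-in, not the
    Appendix A definition)."""
    rng = random.Random(seed)
    bit = rng.randint(0, 1)
    out = []
    for _ in range(n):
        out.append(str(bit))
        if rng.random() >= p:      # with probability 1 - p, flip the bit
            bit ^= 1
    return "".join(out)
```

For p = 0.9, roughly half the bits are zeros, yet about 90% of adjacent pairs repeat the previous bit, so a test that only counts single letters sees nothing suspicious.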

Comparison of the Efficiency of Tests for Markov Processes with Different Memories and General Stationary Processes
We now describe the statistical tests for Markov processes and stationary ergodic processes. By (6), the corresponding statistics are as follows: τ_{κ^t_m}(x_1 . . . x_n) = n − |κ^t_m(x_1 . . . x_n)| and τ_{ρ_t}(x_1 . . . x_n) = n − |ρ_t(x_1 . . . x_n)|, where κ^t_m and ρ_t are universal codes for S_m and S_∞ defined in Appendix A, see (A4) and (A5). We also denote the corresponding tests by T^t_{K_m} and T^t_R. The following statement compares the performance of these tests.
Theorem 1. For any integers m, s and t = ms, T^t_{K_m} ⪯ T^t_{K_{m+1}} ⪯ T^t_R.
Proof. First, let us say a few words about the scheme of the proof. If we apply the T^t_{K_m} test to typical sequences of a two-faced process λ ∈ Λ_{m+1}(p), p ≠ 1/2, they appear random, since h_m(λ) = 1. So the set R_{T^t_{K_m}} (i.e., the sequences random according to the T^t_{K_m} test) contains the set T_{Λ_{m+1}(p)} of typical sequences, and dim(T_{Λ_{m+1}(p)}) equals the limit Shannon entropy −(p log p + (1 − p) log(1 − p)) > 0. On the other hand, typical sequences of the two-faced process λ ∈ Λ_{m+1}(p), p ≠ 1/2, are not random according to the test T^t_{K_{m+1}}, which "compresses" them to the Shannon entropy h_{m+1}(λ) = −(p log p + (1 − p) log(1 − p)) < 1, where the first equality is due to typicality and the second to the property (11) of two-faced processes.
From here and (A1), (A4) we obtain that, for typical sequences of λ, lim_{n→∞} |κ^t_m(x_1 . . . x_n)|/n = 1. From this and the definition of randomness (5), we can see that typical sequences from T_{Λ_{m+1}(p)} are random according to κ^t_m(x_1 . . . x_n), i.e., according to T^t_{K_m}, whereas they are non-random according to T^t_{K_{m+1}}; hence T^t_{K_m} ⪯ T^t_{K_{m+1}}. From this and (A6), we obtain T^t_{K_{m+1}} ⪯ T^t_R.
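The mechanism of the proof can be observed empirically. The sketch below uses a simple order-1 stand-in for a two-faced process (an assumption for illustration: a chain that repeats its previous bit with probability p): the empirical order-0 entropy of a sample stays near 1, so a memory-0 test sees nothing, while the empirical order-1 entropy drops to −(p log p + (1 − p) log(1 − p)).

```python
import math
import random
from collections import Counter

# Sample an order-1 "two-faced" stand-in: repeat the previous bit
# with probability p (so single letters are uniform, pairs are not).
rng = random.Random(7)
p, n = 0.8, 200000
bit, x = 0, []
for _ in range(n):
    x.append(bit)
    if rng.random() >= p:            # flip with probability 1 - p
        bit ^= 1

def entropy(counts):
    """Shannon entropy (base 2) of a frequency table."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

h0 = entropy(Counter(x))                          # close to 1.0
h1 = entropy(Counter(zip(x, x[1:]))) - h0         # H(pairs) - H(letters)
# h1 approximates -(p log p + (1-p) log(1-p)), which is about 0.722
# for p = 0.8, i.e., well below 1: an order-1 test exposes the bias.
```

So a frequency test of order 0 accepts these sequences while an order-1 test rejects them, which is exactly the gap the theorem quantifies via the Hausdorff dimension.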

Effectiveness of Tests Based on Lempel-Ziv Data Compressors
In this part we will describe a test that is more effective than T^t_R and T^t_{K_m} for any m. First, we briefly describe the LZ77 code, following [44]. Suppose a binary string σ* is encoded using the code LZ77. This string is represented by a list of pairs (p_1; l_1) . . . (p_s; l_s). Each pair (p_i; l_i) represents a string, and the concatenation of these strings is σ*. In particular, if p_i = 0, then the pair represents the string l_i, which is a single terminal. If p_i ≠ 0, then the pair represents a portion of the prefix of σ* that is represented by the preceding i − 1 pairs, namely, the l_i terminals beginning at position p_i in σ*; see ([44], part 3.1). The length of the codeword depends on the encoding of the sub-words p_i, l_i, which are integers. For this purpose we use a prefix code C for integers, for which |C(m)| = log m + 2 log log(m + 1) + O(1) for any integer m. Such codes are known in information theory; see, for example, ([30], part 7.2). Note that C is a prefix code and, hence, for any r ≥ 1 the codeword C(p_1)C(l_1) . . . C(p_r)C(l_r) can be decoded back to (p_1; l_1) . . . (p_r; l_r). There is the following upper bound on the length of the LZ77 code [30,44]: for any word w_1 w_2 . . . w_m, |LZ(w_1 w_2 . . . w_m)| ≤ m(1 + o(1)).
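A didactic sketch of the parse into pairs described above (greedy longest match, with matches restricted to the already-produced prefix; this is my simplification for illustration, not the exact coder of [44]):

```python
def lz77_parse(s: str):
    """Greedy LZ77 parse into pairs (p_i, l_i): p_i = 0 means l_i is
    a single terminal; otherwise the pair copies l_i symbols starting
    at (1-based) position p_i of the prefix produced so far."""
    pairs, prefix = [], ""
    i, n = 0, len(s)
    while i < n:
        best_p, best_l = 0, 0
        for p in range(len(prefix)):             # try every start position
            l = 0
            while i + l < n and p + l < len(prefix) and prefix[p + l] == s[i + l]:
                l += 1
            if l > best_l:
                best_p, best_l = p + 1, l
        if best_l >= 1:
            pairs.append((best_p, best_l))       # copy from the prefix
            i += best_l
        else:
            pairs.append((0, s[i]))              # single terminal
            i += 1
        prefix = s[:i]
    return pairs

def lz77_unparse(pairs):
    """Inverse of lz77_parse: concatenate the strings the pairs represent."""
    out = ""
    for p, l in pairs:
        out += out[p - 1:p - 1 + l] if p else l
    return out
```

For example, "abab" parses into (0; a), (0; b), (1; 2): the third pair copies the two terminals starting at position 1, i.e., "ab".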
We will now describe sequences that, on the one hand, are not typical for any stationary ergodic measure and, on the other hand, are recognized as non-random by tests based on LZ77. To do this, we take any random sequence x = x_1 x_2 . . . (that is, one for which (4) is valid) and define a new sequence y(x) = y_1 y_2 . . . by splitting x into consecutive blocks u_0, u_1, u_2, . . . whose lengths grow rapidly with k and writing every block twice: y(x) = u_0 u_0 u_1 u_1 u_2 u_2 . . . .
For example, y(x) = x_1 x_1 x_2 x_3 x_2 x_3 x_4 . . . x_14 x_4 . . . x_14 x_15 . . . x_254 x_15 . . . x_254 . . . . The idea behind this sequence is quite clear. Firstly, the word y cannot be typical for a stationary ergodic source and, secondly, when u_0 u_0 u_1 u_1 . . . u_k u_k is encoded, the second copy of each subword u_k is encoded by a very short word (about O(log |u_k|)), since it coincides with the preceding subword. So, for large k, the length of the encoded word LZ(u_0 u_0 u_1 u_1 . . . u_k u_k) is about |u_0 u_0 u_1 u_1 . . . u_k u_k| (1/2 + o(1)), and hence lim inf_{n→∞} |LZ(y_1 y_2 . . . y_n)|/n = 1/2. It follows that dim({y(x) : x is random}) ≥ 1/2; here, we took into account that x is random and dim{x : x is random} = 1, see [28]. So, having taken into account the definitions of non-randomness (6) and (7), we can see that y(x) is non-random according to the statistic τ = n − |LZ(y_1 . . . y_n)|; denote the corresponding test by T_LZ. Now consider the test T^t_{K_m}, where m, t are integers. Having taken into account that the sequence x is random, we can see that lim_{t→∞} |κ^t_m(x_i x_{i+1} . . . x_{i+t})|/t = 1; so, from (A4), |κ^t_m(y_1 . . . y_n)|/n = 1 + o(1) and y(x) is random according to T^t_{K_m}. The same reasoning is true for the code ρ_t and, likewise, for the T^t_R test. From this we obtain the following
Theorem 2. For any random (according to (4)) sequence x, the sequence y(x) is non-random for the test T_LZ, whereas this sequence is random for the tests T^t_R and T^t_{K_m}. Moreover, T^t_R ⪯ T_LZ and T^t_{K_m} ⪯ T_LZ for any m, t.
Comment. The sequence y(x) is constructed by duplicating parts of x. This construction can be slightly modified as follows: instead of the duplication u_i u_i, we can use u_i u_i^γ, where u_i^γ consists of the first γ|u_i| letters of u_i, γ < 1/2. In this case, dim(R_{T^t_{K_m}} \ R_{T_LZ}) ≥ 1 − γ and, therefore, sup_{γ∈(0,1/2)} dim(R_{T^t_{K_m}} \ R_{T_LZ}) = 1.

Conclusions
Here, we describe some recommendations for the practical testing of RNGs, based on the proposed method of comparing the power of different statistical tests. Based on Theorem 1, we can recommend using several tests T^t_{K_s} based on the analysis of occurrence frequencies of words of different lengths s. In addition, we recommend using tests for which s depends on the length n of the sequence under consideration, for example, s_1 = O(log log n), s_2 = O(log n), etc. They can be included in the test battery directly or as the "mixture" T_R with several non-zero β coefficients, see (A2) in Appendix A.
Theorem 2 shows that it is useful to include tests based on dictionary data compressors, such as the Lempel-Ziv codes. In such a case we can use the statistic τ_LZ = n − |LZ(y_1 . . . y_n)|, rejecting H_0 when |LZ(y_1 . . . y_n)| ≤ t_α with the critical value t_α = n − log(1/α) − 1, α ∈ (0, 1), see [20,21]. Note that in this case there is no need to use a distribution density formula, which greatly simplifies the use of the test and makes it possible to use a similar test for any grammar-based data compressor.
