
Numerical and Non-Asymptotic Analysis of Elias’s and Peres’s Extractors with Finite Input Sequences

1 Graduate School of Environment and Information Sciences, Yokohama National University, Yokohama 240-8501, Japan
2 Department of Applied Mathematics, Faculty of Engineering, Yokohama National University, Yokohama 240-8501, Japan
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in the 51st Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 22–24 March 2017.
Entropy 2018, 20(10), 729; https://doi.org/10.3390/e20100729
Received: 29 July 2018 / Revised: 13 September 2018 / Accepted: 19 September 2018 / Published: 23 September 2018
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

Many cryptographic systems require random numbers, and the use of weak random numbers leads to insecure systems. In the modern world, there are several techniques for generating random numbers, of which the most fundamental and important are the deterministic extractors proposed by von Neumann, Elias, and Peres. Elias’s extractor achieves the optimal rate (i.e., the information-theoretic upper bound) h(p) as the block size tends to infinity, where h(·) is the binary entropy function and p is the probability that each input bit equals one. Peres’s extractor achieves the optimal rate h(p) as the input length and the number of iterations tend to infinity. Previous research on both extractors has made no reference to practical aspects, including running time and memory size with finite input sequences. In this paper, based on some heuristics, we derive a lower bound on the maximum redundancy of Peres’s extractor, and we show that Elias’s extractor is better than Peres’s extractor in terms of maximum redundancy (or rate) if we pay no attention to time or space complexity. In addition, we perform a numerical and non-asymptotic analysis of both extractors on finite input sequences with any biased probability under the same environments. To do so, we implemented both extractors on a general PC in a simple environment. Our empirical results show that Peres’s extractor is much better than Elias’s extractor for given finite input sequences under very similar running times. As a consequence, Peres’s extractor would be more suitable for generating uniformly random sequences in practice, in applications such as cryptographic systems.
Keywords: true random number generation; von Neumann’s extractor; Peres’s extractor; Elias’s extractor

1. Introduction

Many cryptographic systems require random numbers, and the use of weak random numbers leads to insecure systems. In fact, many past security problems were due to the use of weak random numbers [1,2,3,4]. This tells us that random number generation is very important in cryptography, in particular to ensure that secret keys are random and unpredictable. In the modern world, there are several techniques for generating random numbers. A natural source such as physical phenomena, the stock market, or Bitcoin [5] can produce unpredictable random sequences, although such sequences are not uniformly random at the source (i.e., they are biased). However, there is a solution to this problem, namely, to use deterministic extractors. A deterministic extractor is a function which takes a non-uniformly random sequence as input and outputs a uniformly random sequence. Deterministic extractors have been studied in mathematics, information theory, and cryptography. In information theory, these extractors can also be treated as solutions to the intrinsic randomness problem (i.e., the problem of generating truly random numbers). Furthermore, as an application in cryptography, the output sequences of these extractors can be used as secret keys in information-theoretic or symmetric-key cryptography. The extractors by von Neumann [6], Elias [7], and Peres [8] are fundamental and important ones. In particular, Elias’s and Peres’s extractors are interesting, since they can achieve the optimal rate (or redundancy) if the input size tends to infinity (i.e., from an asymptotic viewpoint). However, it is not easy to conclude which one is better, since they are constructed by completely different approaches. The main purpose of this paper is to investigate them with finite inputs (i.e., from a non-asymptotic viewpoint) by numerical analysis, to make it clear which is better for practical use.

1.1. Related Work

Several works have proposed methods for extracting uniform random sequences from non-uniform random sequences. The most famous among them is von Neumann’s extractor [6] proposed in 1951. He demonstrated a simple procedure for extracting independent unbiased bits from a sequence of independent, identically distributed (i.i.d.) and biased bits. The technique by von Neumann guarantees that the output sequences are independent and uniform if the input sequence is independent and constantly biased, while this cannot be guaranteed if the bias is not constant (e.g., see [9]).
An improved algorithm of von Neumann’s extractor was proposed by Elias [7] in 1971. Elias’s extractor utilizes a block coding technique to improve the rate (or redundancy) of von Neumann’s extractor; however, the straightforward implementation of this extractor requires exponential time and exponential memory size with respect to N, where N is the block size, to store all 2^N input sequences with their assigned output sequences. In 2000, Ryabko and Matchikina [10] proposed an extension of Elias’s extractor that improved time complexity and space complexity by using the enumerative encoding technique from [11] and the Schönhage–Strassen algorithm [12] for fast integer multiplication in order to compute the assignment of output sequences. In this paper, we call this improved method the RM method.
Peres’s extractor is another extended algorithm of von Neumann’s extractor. In 1992, Peres [8] proposed a procedure which improved upon von Neumann’s extractor. The basic idea of Peres’s extractor is to reuse the discarded bits in von Neumann’s extractor by iterating similar procedures in von Neumann’s extractor.
The extractors by von Neumann, Elias, and Peres are the most fundamental and important ones using a single source. In particular, Elias’s and Peres’s extractors are interesting, since they can achieve the optimal rate (i.e., the information-theoretic upper bound) h(p) if the input size tends to infinity (i.e., in the asymptotic case), where each bit of the input sequence from a single source is one with probability p ∈ (0, 1) and h(·) is the binary entropy function. In this paper, we are interested in the non-asymptotic case, namely, the achievable rate for finite input sizes. The rate of Elias’s extractor for finite input sizes can be observed in the work [7], but the rate of Peres’s extractor for finite input sizes is not explicitly known. As work related to Peres’s extractor, Pae [13] reported a recursion formula to compute the rate for finite input sizes, but it is difficult to give a closed-form rate function for finite input sizes since the recursion formula is complicated. Pae also computed the rate by the recursion formula in the case p = 1/3, compared the rates of Peres’s and Elias’s extractors, and concluded via numerical analysis that the rate of Peres’s extractor increases much more slowly than that of Elias’s extractor. However, it is not explicitly known which extractor is better to use in practice if we take into account the running time, implementation cost, and memory size required by the extractors, as mentioned in [13].
There are several works on constructing extractors using multiple sources (i.e., not a single source). Bourgain [14] provided a 2-source extractor under the condition that the two sources are independent and each source has min-entropy 0.499n, where n is the bit-length of the output of the sources. Raz [15] proposed an improvement in terms of total min-entropy, and constructed 2-source extractors under the condition that one source has min-entropy more than n/2 and the other source requires min-entropy O(log n). In 2015, Cohen [16] constructed a 3-source extractor, where one source has min-entropy δn, the second source has min-entropy O(log n), and the third source has min-entropy O(log log n). In 2016, Chattopadhyay and Zuckerman [17] proposed a general 2-source extractor, where each source has polylogarithmic min-entropy. They combined two weak random sequences into a single sequence by using K-Ramsey graphs and resilient functions. Their extractor achieves negligible error, but it has only a one-bit output and higher complexity than Peres’s or Elias’s extractor.
Furthermore, there are various reports on extracting random bits in the real world. In particular, in 2009, Bouda et al. [18] used mobile phones or pocket computers to generate random data that is close to truly random data. They took 12 pictures per second, used their function to obtain four random bits from each picture, and then applied Carter–Wegman universal_2 hash functions. Halprin and Naor [19] presented the idea of using human game-play as a randomness source in 2009. They constructed a Hide and Seek game that produced approximately 17 bits of raw data per click, and then generated, with a pairwise independent hash function, a 128-bit string which is 2^{−64}-close to a uniformly random one in less than two minutes. In 2011, Voris et al. [20] investigated the use of accelerometers on RFID tags as a source. They implemented a two-stage extractor on the RFID tags. It can produce 128 random bits in 1.5 s by storing a Toeplitz matrix on the RFID tags and performing matrix multiplications.

1.2. Our Contribution

In this paper, we revisit the extractors by von Neumann, Elias, and Peres, since they are fundamental and only require a single source. In studies of these extractors, it is usual in the literature to asymptotically analyze the rate or redundancy, where the rate is the average bit-length of output per bit of input (see Section 2 for details). Specifically, the rate of von Neumann’s extractor is p(1−p), which is far from the optimal rate (i.e., the information-theoretic upper bound) h(p). Meanwhile, the rate of Elias’s extractor converges to h(p) if the block size tends to infinity; that is, Elias’s extractor outputs a uniformly random sequence at a high rate when the block size is taken as large as the input length. However, it has a trade-off between the rate and computational resources such as time complexity and memory size. On the other hand, Peres’s extractor achieves the optimal rate h(p) if the input length and the number of iterations tend to infinity, and it requires less time and memory. However, it is hard to explicitly derive its exact rate for finite input sequences. Thus, it is not easy to conclude which extractor is more suitable for practical use in general. Among related work, only Pae [13] compared both extractors, as mentioned in Section 1.1, but it does not completely answer the question, since it analyzed the performance of both extractors only for restricted parameters, in particular the case where each input bit is one with probability p = 1/3, and did not consider the running time. In this paper, we perform a non-asymptotic analysis of Elias’s and Peres’s extractors for a wide range of parameters, to answer the following question: which is more suitable for practical use in real-world applications?
To do this, we evaluate the numerical performance of Peres’s extractor and Elias’s extractor with the RM method in terms of practical aspects including achievable rates (or redundancy) and running time with finite input sequences. Specifically, the contribution of this paper is as follows:
(i)
Based on some heuristics, we derive a lower bound on the maximum redundancy of Peres’s extractor in Section 3. This result shows that the maximum redundancy of Elias’s extractor is superior to that of Peres’s extractor in general, if we focus only on redundancy (or rate) and pay no attention to time or space complexity.
(ii)
By numerical analysis, we design our experiments to compare both extractors, under the same environments and in terms of practical aspects, on finite input sequences whose bits are one with an arbitrary biased probability p ∈ (0, 1). Both extractors are implemented on a general PC and do not require any special resources, libraries, or frameworks for computation. Our implementation and results are explained in Section 4. We calibrate our implementation by comparing the theoretical and experimental redundancy of both extractors. Afterwards, we analyze the time complexity of both extractors with respect to input bit-lengths from 100 to 5000. We compare the redundancy of both extractors, and our implementation shows that Peres’s extractor is much better than Elias’s extractor under very similar running times. As a result, Peres’s extractor would be more suitable for generating uniformly random sequences for practical use in applications.
The preliminary version of this paper appeared in CISS2017 [21], and this paper is an extended and full version of it. The differences between the preliminary version [21] and this paper are as follows: this paper contains the above result (i) in Section 3, and reports more detailed implementation results for (ii) in Section 4. In particular, we implemented and confirmed the results (ii) at a larger scale (see Section 4 for details), in addition to obtaining new figures in Section 4.1.

2. Preliminaries

Throughout this paper, we assume that log(·) := log_2(·) and ln(·) := log_e(·), and we define 0 log 0 := 0. h(·) is the binary entropy function defined by h(p) = −p log p − (1−p) log(1−p) for p ∈ [0, 1]. Let C(n, k) denote the binomial coefficient defined by C(n, k) := n(n−1)(n−2)⋯(n−k+1) / (k(k−1)(k−2)⋯1) for nonnegative integers n and k, and C(n, 0) := 1 for any n ≥ 0 (see [22] for an extension of the traditional definition of binomial coefficients). Note that C(n, k) > 0 if k ≤ n, and C(n, k) = 0 if k > n.
The first deterministic extractor was constructed by von Neumann [6] in 1951, and improved ones were later proposed by Elias [7] in 1971 and by Peres [8] in 1992. The prior work [6,7,8] considered a Bernoulli source Bern(p) from which input sequences are generated; namely, Bern(p) outputs i.i.d. bits (x_1, x_2, …, x_n) ∈ {0, 1}^n according to Pr(x_i = 1) = p and Pr(x_i = 0) = q = 1 − p for some unknown p ∈ (0, 1).
A deterministic extractor A takes (x_1, x_2, …, x_n) ∈ {0, 1}^n as input and outputs (y_1, y_2, …, y_ℓ) ∈ {0, 1}^ℓ. Its average output bit-length is denoted by ℓ̄(n), which is a function of n, and its rate function is defined by r_A(p) := lim_{n→∞} ℓ̄(n)/n. Additionally, for a deterministic extractor A, we define the redundancy function by f_A(p) := h(p) − r_A(p), and the maximum redundancy by Γ := sup_{p ∈ (0,1)} f_A(p). Note that the above definition of redundancy functions is meaningful, since h(p) is shown to be the information-theoretic upper bound for the extractors in [7,8]. Furthermore, in this paper we define a non-asymptotic rate function r_A(p, n) := ℓ̄(n)/n, a non-asymptotic redundancy function f_A(p, n) := h(p) − r_A(p, n), and the non-asymptotic maximum redundancy Γ(n) := sup_{p ∈ (0,1)} f_A(p, n), which will be used in our non-asymptotic analysis.

2.1. Von Neumann’s Extractor

Von Neumann’s extractor is a simple algorithm for extracting independent unbiased bits from biased bits. The algorithm divides the input sequence (x_1, x_2, x_3, x_4, …, x_n) into the pairs ((x_1 x_2), (x_3 x_4), …) (if n is odd, the last bit is discarded) and maps each pair as follows:
00 → ∧, 01 → 0, 10 → 1, 11 → ∧, (1)
where ∧ means no output was generated. After that, it concatenates all resulting outputs of (1). To facilitate understanding, we give an example as follows.
Example 1.
Suppose that an input sequence is (x_1, x_2, x_3, …, x_8) = (1, 0, 0, 1, 0, 0, 1, 1). Firstly, divide it into the pairs ((1, 0), (0, 1), (0, 0), (1, 1)). Next, map each pair with the mapping (1). Finally, the extractor outputs (y_1, y_2) = (1, 0).
Complexity: Von Neumann’s extractor is efficient in the sense that both its time complexity and its space complexity are small: the time complexity is O(n), and the space complexity is O(1).
Redundancy: Von Neumann’s extractor is not desirable, since its maximum redundancy is far from zero. In fact, the rate function r_vN(p) of von Neumann’s extractor is evaluated as r_vN(p) = lim_{n→∞} np(1−p)/n = p(1−p), which is 1/4 at p = 1/2 and less elsewhere. In addition, the (non-asymptotic) rate function, (non-asymptotic) redundancy function, and (non-asymptotic) maximum redundancy are evaluated as follows: f_vN(p, n) = f_vN(p) = h(p) − p(1−p), and Γ_vN(n) = Γ_vN = 3/4.
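As an illustration, the mapping (1) can be implemented in a few lines of Python (a minimal sketch for this paper’s setting, not code from the original work):

```python
def von_neumann(bits):
    """Von Neumann's extractor: split the input into pairs, output the first
    bit of each unequal pair (01 -> 0, 10 -> 1), and discard 00 and 11.
    An odd-length input has its last bit discarded."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# Example 1: (1,0,0,1,0,0,1,1) -> pairs (10)(01)(00)(11) -> output (1,0)
print(von_neumann([1, 0, 0, 1, 0, 0, 1, 1]))  # [1, 0]
```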

2.2. Elias’s Extractor

Elias [7] improved von Neumann’s extractor by using a block coding technique in 1971. Let N ∈ ℕ (N ≥ 2) be the block size in Elias’s extractor. Partition all binary sequences of bit-length N into N + 1 sets S_k (k = 0, 1, 2, …, N), where S_k consists of all C(N, k) sequences of length N which have k ones and N − k zeros. Here, we note that the sequences of S_k are equiprobable, i.e., each occurs with probability p^k q^{N−k}.
We consider the binary representation of the nonnegative integer |S_k| = C(N, k) as follows: C(N, k) = α_m 2^m + α_{m−1} 2^{m−1} + ⋯ + α_0 2^0, where m = ⌊log C(N, k)⌋, α_j ∈ {0, 1}, and α_m = 1. In this case, we briefly write |S_k| = C(N, k) = (α_m, α_{m−1}, …, α_0). For each j (1 ≤ j ≤ m) such that α_j = 1, we assign 2^j distinct output sequences of length j to 2^j distinct sequences of S_k which have not already been assigned. If α_0 = 1, one sequence of S_k is assigned to ∧. In particular, since |S_0| = |S_N| = 1, the two sequences (0, 0, …, 0) and (1, 1, …, 1) are assigned to ∧. We illustrate the procedure of Elias’s extractor in Example 2.
Example 2.
Suppose that the given input sequence x = (1, 0, 0, 1, 0, 0, 1, 1) with block size N = 4 is the same as in Example 1. Firstly, we partition the set {0, 1}^4 of possible input blocks into the following subsets:
S_0 = {(0,0,0,0)},
S_1 = {(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)},
S_2 = {(0,0,1,1), (0,1,0,1), (0,1,1,0), (1,1,0,0), (1,0,1,0), (1,0,0,1)},
S_3 = {(1,1,1,0), (1,0,1,1), (1,1,0,1), (0,1,1,1)},
S_4 = {(1,1,1,1)}.
Then, we have |S_0| = |S_4| = 1 = (1), |S_1| = |S_3| = 4 = (1, 0, 0), and |S_2| = 6 = (1, 1, 0). We consider the following assignment of output sequences:
(0,0,0,0) → ∧, (1,1,1,1) → ∧,
(1,0,0,0) → (0,0), (1,1,1,0) → (0,0),
(0,1,0,0) → (0,1), (1,0,1,1) → (1,0),
(0,0,1,0) → (1,0), (1,1,0,1) → (1,1),
(0,0,0,1) → (1,1), (0,1,1,1) → (0,1),
(0,0,1,1) → (0,1), (1,0,1,0) → (1,0),
(0,1,1,0) → (0,0), (1,0,0,1) → (1,1),
(0,1,0,1) → (0), (1,1,0,0) → (1).
Suppose that the input sequence x = (1, 0, 0, 1, 0, 0, 1, 1) is given. Since the block size is N = 4, the sequence is divided as x = ((1, 0, 0, 1), (0, 0, 1, 1)). By the above assignment of output sequences, the output sequence is y = ((1, 1), (0, 1)) = (1, 1, 0, 1). Furthermore, there are several ways to assign output sequences to the equiprobable sequences within each S_k, and the choice affects the output sequence y. Thus, the output sequence of 10010011 need not be 1101 if we use another assignment. Note that Elias’s extractor with block size N = 2 is equivalent to von Neumann’s extractor, or equivalently the mapping (1). In this sense, Elias’s extractor is an extension of von Neumann’s extractor.
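The assignment in Example 2 can be reproduced mechanically. The following Python sketch (our illustration; the concrete codes depend on the arbitrary ordering chosen within each S_k, here lexicographic) builds one valid table for a given block size N:

```python
from itertools import product

def elias_table(N):
    """Build an output-assignment table for Elias's extractor with block size N.

    For each weight class S_k, write |S_k| in binary; for each set bit 2^j
    (highest first) assign 2^j distinct j-bit outputs to the next 2^j
    unassigned sequences. A 2^0 term assigns the empty output (no bits)."""
    table = {}
    for k in range(N + 1):
        S_k = [s for s in product((0, 1), repeat=N) if sum(s) == k]
        idx = 0
        for j in range(len(S_k).bit_length() - 1, -1, -1):
            if (len(S_k) >> j) & 1:
                for m in range(2 ** j):
                    # the j low-order bits of m, empty tuple when j == 0
                    code = tuple((m >> (j - 1 - t)) & 1 for t in range(j))
                    table[S_k[idx]] = code
                    idx += 1
    return table
```

For N = 4 this reproduces the structure of Example 2: the all-zeros and all-ones blocks map to the empty output, four weight-2 blocks receive 2-bit codes, and two receive 1-bit codes.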
Complexity: As illustrated by Example 2, the straightforward implementation of Elias’s extractor requires considerable time and space to make a table of the assignment of output sequences. Specifically, it requires exponential time and exponential memory size with respect to N to store all 2^N binary sequences with their assigned output sequences. To reduce the time and space complexity of Elias’s extractor, Ryabko and Matchikina [10] proposed a method extending Elias’s extractor, which we call the RM method in this paper. The RM method utilizes the enumerative encoding technique from [11] and the Schönhage–Strassen algorithm [12] for fast integer multiplication in order to compute the assignment of output sequences without building the large table. The procedure of the RM method is described as follows.
Firstly, suppose that a binary input sequence x^N = (x_1, x_2, …, x_N) contains k ones and N − k zeros. Let Num(x^N) ≥ 0 be the number assigned to x^N by the lexicographical order of S_k. Namely, if x^N has k ones, then the number Num(x^N) ≥ 0 is defined by
Num(x^N) = Σ_{t=1}^{N} x_t · C(N − t, k − Σ_{i=1}^{t−1} x_i), (2)
where the summation is effectively taken over all 1 ≤ t ≤ N such that x_t = 1, and Num(0^N) := 0. Then, we calculate a binary codeword code(x^N) of x^N, which is the output sequence assigned to x^N, as follows:
(i)
Compute Num(x^N) in the set S_k, if x^N contains k ones.
(ii)
Let |S_k| = C(N, k) = 2^{j_0} + 2^{j_1} + ⋯ + 2^{j_m} for 0 ≤ j_0 < j_1 < ⋯ < j_m.
(iii)
If j_0 = 0 and Num(x^N) = 0, then code(x^N) = ∧.
(iv)
If 0 ≤ Num(x^N) < 2^{j_0}, then code(x^N) is defined to be the j_0 low-order bits of the binary representation of Num(x^N).
(v)
If Σ_{s=0}^{t} 2^{j_s} ≤ Num(x^N) < Σ_{s=0}^{t} 2^{j_s} + 2^{j_{t+1}} for some 0 ≤ t < m, then code(x^N) is defined to be the j_{t+1} low-order bits of the binary representation of Num(x^N).
Example 3.
Suppose that the block size is N = 4 and the given input sequence is x = (1, 0, 0, 1, 0, 0, 1, 1), the same as in all previous examples. Firstly, the sequence is divided as x = ((1, 0, 0, 1), (0, 0, 1, 1)). Next, compute Num(x^N) following Equation (2):
Num((1, 0, 0, 1)) = C(4−1, 2) + C(4−4, 2−1) = 3 + 0 = 3, Num((0, 0, 1, 1)) = C(4−3, 2) + C(4−4, 2−1) = 0 + 0 = 0.
Then, the RM method computes code((1, 0, 0, 1)) = (1, 1) and code((0, 0, 1, 1)) = (0). Finally, it outputs y = (1, 1, 0) by concatenating code((1, 0, 0, 1)) and code((0, 0, 1, 1)).
The time and space complexities of Elias’s extractor with the RM method are O(N log^3 N log log N) and O(N log^2 N), respectively (see [10] for details).
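The computation in Example 3 can be checked with a direct Python transcription of Equation (2) and steps (i)–(v) (a sketch using exact integer arithmetic, rather than the fast multiplication of [12]):

```python
from math import comb  # comb(n, k) returns 0 when k > n, matching our convention

def num(x):
    """Num(x^N): lexicographic index of x within S_k, following Equation (2)."""
    N, k = len(x), sum(x)
    total, ones_before = 0, 0
    for t in range(1, N + 1):
        if x[t - 1] == 1:
            total += comb(N - t, k - ones_before)
            ones_before += 1
    return total

def code(x):
    """code(x^N) from steps (i)-(v): locate Num(x) in the binary expansion
    C(N, k) = 2^{j_0} + 2^{j_1} + ... and emit the matching low-order bits."""
    N, k = len(x), sum(x)
    n_idx, size = num(x), comb(N, k)
    offset = 0
    for j in range(size.bit_length()):       # j_0 < j_1 < ... (set bits of |S_k|)
        if (size >> j) & 1:
            if n_idx < offset + 2 ** j:
                # the j low-order bits of Num(x); empty output when j == 0
                return tuple((n_idx >> (j - 1 - t)) & 1 for t in range(j))
            offset += 2 ** j

print(num((1, 0, 0, 1)), num((0, 0, 1, 1)))      # 3 0
print(code((1, 0, 0, 1)) + code((0, 0, 1, 1)))   # (1, 1, 0)
```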
Redundancy: Generally, the rate and redundancy functions of Elias’s extractor depend on the block size N. For a given n-bit input sequence, the rate (or redundancy) achieves its best value if we take the block size equal to the input length, N := n. For simplicity, we assume N = n in the following. Then, the rate function r_E(p, n) is evaluated by
r_E(p, n) ≈ (1/n) Σ_{k=0}^{n} C(n, k) p^k (1−p)^{n−k} log C(n, k). (3)
Elias’s extractor takes i.i.d. non-uniformly distributed bits as input and outputs i.i.d. uniformly distributed bits at the rate given by Equation (3). Elias [7] showed that the rate function r_E(p, n) converges to h(p) as n → ∞, or equivalently, that the redundancy function f_E(p, n) := h(p) − r_E(p, n) converges to zero as n → ∞. More precisely, it was shown that f_E(p, n) = O(1/n) for any fixed p. Therefore, for a given n-bit input sequence, if we set the block size to be the input size, the non-asymptotic maximum redundancy Γ_E(n) converges to zero not more slowly than 1/n.
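Equation (3) is easy to evaluate numerically. The short Python sketch below (our illustration) compares r_E(p, n) with the entropy bound h(p) for a 180-bit block:

```python
from math import comb, log2

def h(p):
    """Binary entropy function."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def elias_rate(p, n):
    """Approximate rate of Elias's extractor with block size n (Equation (3))."""
    r = 0.0
    for k in range(n + 1):
        c = comb(n, k)
        if c > 1:  # log C(n, k) = 0 contributes nothing when C(n, k) = 1
            r += c * p ** k * (1 - p) ** (n - k) * log2(c)
    return r / n

# The rate stays below h(p) and approaches it as n grows.
print(elias_rate(0.3, 180), h(0.3))
```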

2.3. Peres’s Extractor

Peres’s extractor is another method that improves the rate (or redundancy) of von Neumann’s extractor. The basic idea behind Peres’s extractor is to reuse the bits discarded by the mapping (1). In the following, we denote von Neumann’s extractor by Ψ_1. For an n-bit sequence (x_1, x_2, …, x_n), we describe von Neumann’s extractor by Ψ_1(x_1, x_2, …, x_n) = (y_1, y_2, …, y_ℓ), where y_i = x_{2m_i−1} and m_1 < m_2 < ⋯ < m_ℓ are all the indices satisfying x_{2m_i−1} ≠ x_{2m_i} with m_i ≤ n/2. In Peres’s extractor, Ψ_ν (ν ≥ 2) is defined inductively as follows: for an even n,
Ψ_ν(x_1, x_2, …, x_n) = Ψ_1(x_1, x_2, …, x_n) ∗ Ψ_{ν−1}(u_1, u_2, …, u_{n/2}) ∗ Ψ_{ν−1}(v_1, v_2, …, v_m), (4)
where ∗ denotes concatenation; u_j = x_{2j−1} ⊕ x_{2j} for 1 ≤ j ≤ n/2; v_s = x_{2i_s−1}, and i_1 < i_2 < ⋯ < i_m are all the indices satisfying x_{2i_s−1} = x_{2i_s} with i_s ≤ n/2. For an odd input size n, Ψ_ν(x_1, x_2, …, x_n) := Ψ_ν(x_1, x_2, …, x_{n−1}), i.e., the last bit is discarded and the above case of an even n is applied.
Note that the number of iterations ν is at most log n, since Ψ_ν for every ν ≥ 2 is defined via Ψ_{ν−1} applied to input sequences whose bit-lengths are at most n/2, i.e., the bit-lengths of both (u_1, u_2, …, u_{n/2}) and (v_1, v_2, …, v_m) in Equation (4) are at most n/2. Obviously, Peres’s extractor with ν = 1 is the same as von Neumann’s extractor. In addition, Peres’s extractor with a large ν can be considered an elegantly improved version of von Neumann’s extractor that utilizes a recursion mechanism.
Example 4.
Suppose that the input sequence x = (1, 0, 0, 1, 0, 0, 1, 1) is given, the same as in all previous examples. The number of iterations satisfies ν ≤ log 8 = 3. Then, Peres’s extractor is executed as follows:
Ψ_1(x) = (1, 0),
Ψ_2(x) = Ψ_1(x) ∗ Ψ_1(1,1,0,0) ∗ Ψ_1(0,1) = (1, 0, 0),
Ψ_3(x) = Ψ_1(x) ∗ Ψ_2(1,1,0,0) ∗ Ψ_2(0,1) = Ψ_1(x) ∗ (Ψ_1(1,1,0,0) ∗ Ψ_1(0,0) ∗ Ψ_1(1,0)) ∗ (Ψ_1(0,1) ∗ Ψ_1(1)) = (1, 0, 1, 0).
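Example 4 can be reproduced by a direct recursive implementation of Equation (4) (a minimal Python sketch, not the paper’s Java implementation):

```python
def psi(nu, bits):
    """Peres's extractor Psi_nu. Psi_1 is von Neumann's extractor; for nu >= 2
    the XOR sequence u and the kept bits v of discarded pairs are reused."""
    if len(bits) % 2 == 1:
        bits = bits[:-1]          # odd input size: discard the last bit
    y, u, v = [], [], []
    for i in range(0, len(bits), 2):
        a, b = bits[i], bits[i + 1]
        u.append(a ^ b)           # u_j = x_{2j-1} XOR x_{2j}
        if a != b:
            y.append(a)           # von Neumann output
        else:
            v.append(a)           # first bit of each discarded equal pair
    if nu == 1 or not bits:
        return y
    return y + psi(nu - 1, u) + psi(nu - 1, v)

print(psi(1, [1, 0, 0, 1, 0, 0, 1, 1]))  # [1, 0]
print(psi(3, [1, 0, 0, 1, 0, 0, 1, 1]))  # [1, 0, 1, 0]
```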
Complexity: We denote the time complexity of Ψ_ν by T_ν(n). By Equation (4), we have
T_ν(n) = T_1(n) + n/2 + T_{ν−1}(n/2) + T_{ν−1}(n/2), (5)
and T_1(n) = O(n) (see Section 2.1 for the time complexity of von Neumann’s extractor). From the recurrence (5), we obtain T_ν(n) = O(νn) for Ψ_ν with 1 ≤ ν ≤ log n. In particular, the time complexity of Peres’s extractor with the maximum number of iterations ν = log n is evaluated as T_ν(n) = O(n log n), and the space complexity is O(1).
Redundancy: The rate function r_ν^P(p) of Peres’s extractor can be computed inductively by the equation
r_ν^P(p) = pq + (1/2) r_{ν−1}^P(p^2 + q^2) + (1/2)(p^2 + q^2) r_{ν−1}^P(p^2/(p^2 + q^2)) (6)
for ν ≥ 2, and r_1^P(p) = pq. Note that r_1^P(p) is the rate of von Neumann’s extractor. Peres’s extractor takes i.i.d. non-uniformly distributed bits as input, and it outputs i.i.d. uniformly distributed bits at the rate given by Equation (6) as n → ∞. It is shown in [8] that r_ν^P(p) ≤ r_{ν+1}^P(p) for all ν ∈ ℕ and p ∈ (0, 1), and that lim_{ν→∞} r_ν^P(p) = h(p) uniformly in p ∈ (0, 1).
In other words, the above result is described in terms of redundancy as follows:
f_ν^P(p) = h(p) − r_ν^P(p) = (1/2) f_{ν−1}^P(p^2 + q^2) + (1/2)(p^2 + q^2) f_{ν−1}^P(p^2/(p^2 + q^2)) (7)
for ν ≥ 2 and f_1^P(p) = h(p) − p(1−p), where Equation (7) follows from Equation (6) together with the identity h(p) = pq + (1/2)h(p^2 + q^2) + (1/2)(p^2 + q^2)h(p^2/(p^2 + q^2)). Furthermore, it holds that f_ν^P(p) ≥ f_{ν+1}^P(p) for all ν ∈ ℕ and p ∈ (0, 1), and lim_{ν→∞} f_ν^P(p) = 0 uniformly in p ∈ (0, 1). Suppose that we take the maximum ν = log n and let n → ∞; then, we have Γ_P(n) = o(1).
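The recursion (6) is straightforward to evaluate numerically; the sketch below (our illustration) also checks the closed form r_ν^P(1/2) = 1 − (3/4)^ν derived in Section 3:

```python
def peres_rate(nu, p):
    """Asymptotic rate r_nu^P(p) of Peres's extractor via the recursion (6)."""
    q = 1 - p
    if nu == 1:
        return p * q              # von Neumann's rate
    s = p * p + q * q             # p^2 + q^2
    return p * q + 0.5 * peres_rate(nu - 1, s) + 0.5 * s * peres_rate(nu - 1, p * p / s)

# At p = 1/2 the recursion solves to 1 - (3/4)^nu (cf. Section 3).
for nu in (1, 4, 8):
    print(nu, peres_rate(nu, 0.5), 1 - 0.75 ** nu)
```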
In Table 1, we summarize the redundancy, time complexity and space complexity (memory size) for von Neumann’s, Elias’s, and Peres’s extractors.

3. Lower Bound on Redundancy of Peres’s Extractor

Although it is shown that Γ_P(n) = o(1) for Peres’s extractor (i.e., Γ_P(n) converges to zero as n → ∞), it is not known whether Γ_P(n) converges to zero rapidly or slowly. To investigate this, we analyze the non-asymptotic redundancy function f_ν^P(p, n) and the non-asymptotic maximum redundancy Γ_P(n). In particular, we derive a lower bound on Γ_P(n) based on some heuristics.
Let f_ν^P(p) = h(p) − r_ν^P(p) be the redundancy function of Peres’s extractor with ν iterations. We first show that f_ν^P(p) is not concave in p ∈ (0, 1) for ν ≥ 5, as follows. The proof is given in Appendix A.
Proposition 1.
The redundancy function f_ν^P(p) of Peres’s extractor with ν iterations is not concave in p ∈ (0, 1) if ν ≥ 5. More generally, for Peres’s extractor with ν iterations, the redundancy function f_ν^P(p) satisfies
(d^2 f_ν^P/dp^2)(1/2) = 8 − 4/ln 2 − 6(3/4)^{ν−1}.
In particular, (d^2 f_ν^P/dp^2)(1/2) < 0 for 1 ≤ ν ≤ 4 and (d^2 f_ν^P/dp^2)(1/2) > 0 for ν ≥ 5.
Here, we assume that the following proposition, Proposition 2, holds true. Although it is not easy to prove, it appears to be true from our experimental results provided in Appendix B. In Figure A1 in Appendix B, we depict the difference values f_ν^P(p, n) − f_ν^P(p) for input bit-lengths n = 80, 100, …, 200 and iterations 1 ≤ ν ≤ log n. We note that Proposition 2 states that f_{log n}^P(p, n) − f_{log n}^P(p) ≥ 0 for p ∈ (0, 1), and we can observe that this holds for input bit-lengths n = 80, 100, …, 200 in our experimental results given in Figure A1.
Proposition 2 (Heuristics).
Suppose ν = log n. Then, we have f_ν^P(p, n) ≥ f_ν^P(p), or equivalently r_ν^P(p, n) ≤ r_ν^P(p), for sufficiently large n and any p ∈ (0, 1).
The following theorem shows a lower bound on Γ P ( n ) that is derived based on Proposition 2.
Theorem 1.
Suppose that Proposition 2 holds true. Then, for Peres’s extractor with the maximum number of iterations ν = log n, we have Γ_P(n) > 1/n^{2−log 3}. In particular, Γ_P(n) = ω(1/n).
Proof. 
Let n be a large natural number. For a natural number ν ∈ ℕ with 1 ≤ ν ≤ log n, we define a_ν := r_ν^P(1/2). Then, by Equation (6) we have
a_1 = 1/4, a_ν = 1/4 + (3/4)a_{ν−1} for ν ≥ 2. (8)
By solving the above recursion, we have
a_ν = 1 − (3/4)^ν for ν ≥ 1. (9)
Thus, for ν = log n , we obtain
f_ν^P(1/2, n) ≥ f_ν^P(1/2) (10)
= (3/4)^ν = (3/4)^{log n} = 1/n^{2−log 3}, (11)
where the inequality (10) follows from Proposition 2, and the equality (11) follows from Equation (9) and f_ν^P(1/2) = h(1/2) − a_ν = 1 − a_ν.
Therefore, we have
Γ_P(n) = sup_{p ∈ (0,1)} f_{log n}^P(p, n) > f_{log n}^P(1/2, n) ≥ 1/n^{2−log 3}, (12)
where the strict inequality in (12) follows from Proposition 1. ☐
Theorem 1 shows that the non-asymptotic maximum redundancy Γ_P(n) converges to zero more slowly than 1/n. This means that Peres’s extractor is worse than Elias’s extractor in terms of maximum redundancy, since Γ_E(n) = O(1/n) if the block size is set to n. However, this result does not immediately mean that Peres’s extractor is worse than Elias’s extractor overall, since the time complexity and space complexity of Peres’s extractor are better than those of Elias’s extractor, as shown in Table 1. In this sense, it is not easy to conclude which extractor is superior. In the next section, from the viewpoint of practicality, including running time, we compare both extractors and show by numerical analysis with various parameters that Peres’s extractor is better than Elias’s extractor.
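To see the gap concretely, note that (3/4)^{log n} = n^{log 3 − 2} ≈ n^{−0.415}. The following short Python check (our numeric illustration) confirms this identity and contrasts the bound with the O(1/n) behavior of Γ_E(n):

```python
from math import log2

alpha = 2 - log2(3)                 # exponent in Theorem 1, approximately 0.415
for n in (10 ** 3, 10 ** 6):
    peres_lb = n ** (-alpha)        # lower bound on Gamma_P(n), = (3/4)^{log n}
    elias_ord = 1.0 / n             # order of Gamma_E(n)
    print(n, peres_lb, elias_ord)   # Peres's bound shrinks far more slowly
```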

4. Implementation and Numerical Analysis

In this section, we describe our experimental results for Peres’s extractor and Elias’s extractor with the RM method. We used Java (version 1.8) to implement both extractors and evaluated their performance on a desktop PC with an Intel Core i3 at 3.70 GHz and 4 GB of RAM. Our experiments can be performed on a general PC and do not require any special resources, libraries, or frameworks for computation. In fact, other languages could be used instead of Java, but Java runs on every platform without additional supporting software, which is why we chose it. To compare Peres’s extractor and Elias’s extractor with the RM method on finite input sequences from a non-asymptotic viewpoint, we consider the following four questions.
(1) Is the theoretical redundancy the same as the experimental redundancy for both extractors?
(2) Is the experimental redundancy of Elias's extractor with the RM method better than that of Peres's extractor?
(3) What is the exact running time of both extractors?
(4) Which extractor achieves a better redundancy (or rate) under very similar running times?
To answer the above questions, we design our experiments as follows.
To answer Questions (1) and (2), we evaluate the theoretical and experimental redundancy of Peres's extractor and of Elias's extractor with the RM method, using the pseudorandom number generator rand() in MATLAB [23] to obtain biased input sequences with a controlled probability (see Section 4.1 and Section 4.2). We used rand() to generate the input sequences because it allows us to control the probability p of each input bit; we vary p = 0.1, 0.2, …, 0.9. We show the results for finite input sequences of 180 bits, a length that could be used in various cryptographic algorithms. In fact, we also ran the experiments with input lengths n = 80, 100, …, 200 bits and obtained results very similar to the 180-bit case (in the primary version [21], we implemented only the 180-bit case; in this paper, we further investigated n = 80, 100, …, 200 bits). Hence, we describe only the 180-bit input length and omit the other lengths. In addition, to investigate the efficiency of Elias's extractor, the input size should be divisible by a reasonable block size; the 180-bit length is also suitable in this respect, since it is divisible by the simple block sizes 10, 20, 30, 60, 90, and 180. To compute the binomial coefficients $\binom{N}{k}$ in Elias's extractor with the RM method, we consider the following:
  • The Schönhage–Strassen multiplication algorithm requires O(N^{1+ε}) time, which is asymptotically faster than ordinary multiplication, which requires O(N²);
  • To avoid multiplication altogether, we use only the addition operation, since addition is simple and keeps the basic operations light enough for various applications and environments.
Additionally, we use the recursive formula (Pascal's rule) $\binom{N}{k} = \binom{N-1}{k-1} + \binom{N-1}{k}$ for $10 \le N \le 180$, so that $\binom{N}{k}$ is computed only by additions, via dynamic programming. To compute the experimental redundancy with finite input sequences, we use 180-bit inputs and generate them 100 times for each probability p. Since rand() produces a different sequence each time under the same probability, we repeat the generation 100 times and calculate the average experimental redundancy. In fact, we repeated the generation 100, 1000, and 2000 times, but the averages were almost identical in all cases, and hence we focus on 100 repetitions only (in the primary version [21], we repeated the generation only 100 times; in this paper, we additionally investigated 1000 and 2000 repetitions). Next, we note that the number of iterations satisfies ν ≤ ⌊log 180⌋ = 7 for Peres's extractor in Section 4.1, and we take block sizes N = 10, 20, 30, 60, 90, 180 for Elias's extractor with the RM method in Section 4.2. Then, for each probability p, we calculate the average of the redundancy function f_ν^P(p) of Peres's extractor using (7), and of the redundancy function f^E(p, N) = h(p) − r^E(p, N) of Elias's extractor with the RM method using (3).
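The addition-only computation of the binomial coefficients is a direct transcription of Pascal's rule into a dynamic program. A minimal sketch (illustrative Python; our experiments used Java):

```python
def binomial_table(n_max):
    """Build C[N][k] for 0 <= k <= N <= n_max via Pascal's rule,
    using additions only -- no multiplication or division."""
    C = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    for N in range(n_max + 1):
        C[N][0] = 1                              # base case: C(N, 0) = 1
        for k in range(1, N + 1):
            C[N][k] = C[N - 1][k - 1] + C[N - 1][k]  # Pascal's rule
    return C
```

Filling the table up to n_max = 180 takes O(n_max²) additions, after which every coefficient needed by Elias's extractor is available by lookup.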
To answer Question (3), we investigate the running time needed to extract uniformly random sequences with both extractors (see Section 4.3). The time complexity depends on the length of the input sequence, so the probability is not a parameter in this investigation. We therefore switch the random number generator for the input sequences to RANDOM.ORG [24], which produces sequences very close to truly random, with unknown probability p, by exploiting the randomness of atmospheric noise; it can deliver 131,072 random bits at a time. This experiment takes n = 100, 200, 400, 600, 800, 1000, 2000, 3000, 4000, 5000 as the input bit-lengths (in the primary version [21], we took only n = 100, 200, 400, 600, 800, 1000; in this paper, we further investigated the longer lengths 2000, 3000, 4000, 5000). For reliability, we repeated the extraction 100 times for each n and calculated the average running time.
By analyzing all the results of the experiments above, we can answer Question (4); that is, we compare the redundancy of both extractors under very similar running times (see Section 4.4).

4.1. Analysis of the Redundancy of Peres’s Extractor

In Figure 1a, we show the theoretical redundancy of Peres's extractor; that is, we calculated the redundancy f_ν^P(p) using (7) for iterations ν = 1, 2, …, 7 and probabilities p = 0.1, 0.2, …, 0.9. In the graphs of the redundancy f_ν^P(p), the x-axis is the probability p and the y-axis is the redundancy. It is easily seen that the redundancy becomes smaller as the number of iterations grows, for all p ∈ (0, 1). Furthermore, Figure 1b shows the experimental redundancy of Peres's extractor for 180-bit input sequences. The theoretical redundancy in Figure 1a is almost the same as the experimental redundancy in Figure 1b.
Figure 2 depicts the theoretical redundancy f_ν^P(p) for ν = 5, 6 around p = 1/2, namely for 0.450 ≤ p ≤ 0.550. Both graphs support Proposition 1 from a geometric viewpoint. In addition, our computation shows that f_5^P(p) attains its maximum of approximately 0.2373467 at p ≈ 0.476 and p ≈ 0.524, and f_6^P(p) attains its maximum of approximately 0.1781326 at p ≈ 0.459 and p ≈ 0.541.
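The theoretical curves above can be reproduced from the rate recursion of Peres [8]. The sketch below is illustrative Python (not our Java implementation); it assumes the standard recursion r_ν(p) = p(1−p) + ½ r_{ν−1}(p̃) + ½ p̃ r_{ν−1}(p̂) with p̃ = p² + (1−p)² and p̂ = p²/p̃, which reproduces f_ν^P(1/2) = (3/4)^ν:

```python
from math import log2

def h(p):
    """Binary entropy function h(p)."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def rate(p, nu):
    """Asymptotic output rate r_nu(p) of Peres's extractor (recursion from [8])."""
    if nu == 0:
        return 0.0
    pt = p * p + (1 - p) ** 2      # p-tilde
    ph = p * p / pt                # p-hat
    return p * (1 - p) + 0.5 * rate(pt, nu - 1) + 0.5 * pt * rate(ph, nu - 1)

def redundancy(p, nu):
    """Redundancy f_nu(p) = h(p) - r_nu(p)."""
    return h(p) - rate(p, nu)
```

For example, redundancy(0.5, 5) returns (3/4)^5 = 0.2373046875, while values of p slightly away from 1/2 give a larger redundancy, consistent with the non-concavity of f_5^P stated in Proposition 1.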
Figure 3 is provided to observe the difference or similarity between f_ν^P(p) and f_ν^P(p, n) (ν ≥ 5) for a large fixed n. It shows the experimental redundancy for 0.450 ≤ p ≤ 0.550 on the x-axis, as in Figure 2. On a rough scale, the graphs of f_ν^P(p) and f_ν^P(p, n) are very similar, as shown in Figure 1; on a fine scale with n = 180 and ν = 5, 6, however, a difference emerges. Indeed, the shapes of f_5^P(p, 180) and f_6^P(p, 180) are quite different from those of f_5^P(p) and f_6^P(p), respectively, as shown in Figure 2 and Figure 3, although Figure 3 contains fluctuations arising from our experiments. This implies that the non-asymptotic function f_ν^P(p, n) deserves further theoretical analysis in the future, which is also important for validating the assumption made in Proposition 2.

4.2. Analysis of the Redundancy of Elias’s Extractor with the RM Method

In Figure 4a, we show the theoretical redundancy of Elias's extractor with the RM method; that is, we calculated f^E(p, N) = h(p) − r^E(p, N) using (3) for probabilities p = 0.1, 0.2, …, 0.9 and block sizes N = 10, 20, 30, 60, 90, 180. It can be seen that the redundancy becomes smaller as the block size becomes larger, for all p ∈ (0, 1). Although there is a slight difference between the theoretical redundancy in Figure 4a and the experimental redundancy in Figure 4b, they are largely similar.
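For reference, the curves in Figure 4a can be reproduced from the standard description of Elias's extractor: an N-bit block with k ones lands in a type class of C(N, k) equiprobable strings, and a class of M strings yields Σ_i β_i 2^{β_i}/M output bits on average, where M = Σ_i 2^{β_i} is the binary expansion of M. The Python sketch below is illustrative; we believe it matches r^E(p, N) from (3), and at N = 2 it reduces to the von Neumann rate p(1−p):

```python
from math import comb

def avg_len(M):
    """Average output length when a type class of M equiprobable strings
    is assigned codewords via the binary expansion of M."""
    if M <= 1:
        return 0.0
    total, b = 0, 0
    while M >> b:
        if (M >> b) & 1:
            total += b * (1 << b)   # 2^b strings receive b-bit codewords
        b += 1
    return total / M

def elias_rate(p, N):
    """Output bits per input bit of Elias's extractor with block size N."""
    return sum(comb(N, k) * p ** k * (1 - p) ** (N - k) * avg_len(comb(N, k))
               for k in range(N + 1)) / N
```

For instance, elias_rate(0.5, 2) = 0.25, and the rate grows toward h(p) = 1 as the block size increases, matching the shrinking redundancy in Figure 4a.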
As a result, the redundancy of Elias’s extractor with large block size is better than that of Peres’s extractor, which is an answer to our second question. Moreover, we can observe that the theoretical redundancy is almost the same as the experimental redundancy in both extractors, which is an answer to our first question. Therefore, we can rely on our implementation, and we will use this implementation for analyzing the running time in the next section.

4.3. Analysis of the Time Complexity of Both Extractors

This section answers the third question. In Figure 5a, we show the running time of Peres's extractor with iterations ν = 1, 2, …, 7 and input bit-lengths n = 100, 200, 400, 600, 800, 1000, 2000, 3000, 4000, 5000. In the graphs, the x-axis is the input bit-length and the y-axis is the running time. It is clearly seen that increasing the number of iterations increases the running time. The running time grows almost linearly in n, with a slope depending on the iterations ν, as supported by the theoretical time complexity O(νn). Even the largest case, ν = 7 with n = 5000, takes only 1.425 milliseconds, which means that the extractor can be used in real-world applications.
In Figure 5b, we show the running time of Elias's extractor with the RM method with block sizes N = 2, 4, 6, 8, 10, 12, 16, 20. It can be seen that increasing the block size increases the running time. The running time grows linearly in n, with a slope depending on the block size N, as supported by the theoretical time complexity O(N log³N log log N). The largest case, N = 20 with n = 5000, takes 33.155 milliseconds, which is much larger than that of Peres's extractor.
Comparing the two extractors, Peres's extractor runs faster than Elias's extractor with the RM method for the same input bit-length, and the difference becomes clearer as the input grows longer. On the other hand, the results in Section 4.1 and Section 4.2 show that the redundancy of Elias's extractor with the RM method is better than that of Peres's extractor. We therefore compare the redundancy (or rate) of the two extractors under very similar running times in the next section.

4.4. Comparison under the Very Similar Running Time

In all previous experiments, we have observed that the redundancy of Elias’s extractor with the RM method is better than that of Peres’s extractor; however, the time complexity of Peres’s extractor is better than that of Elias’s extractor with the RM method. Therefore, we will answer the fourth question by comparing the running time in Figure 6a and redundancy under the very similar running time in Figure 6b.
In Figure 6a, we compare the running time of Peres's extractor with iterations ν = 4, 5, 6 against that of Elias's extractor with the RM method with block sizes N = 2, 10, 20. The running time of Peres's extractor with ν = 6 (the yellow line) is almost the same as that of Elias's extractor with block size N = 2 (the black dashed line). We can therefore compare the experimental redundancy of the two extractors under very similar running times, namely f_6^P(p, 180) and f^E(p, 2), in Figure 6b. Clearly, f_6^P(p, 180) (the yellow line) is much better than f^E(p, 2) (the black dashed line) and is close to f^E(p, 20) (the green dashed line); however, the running time of Elias's extractor with block size N = 20 is much larger than that of Peres's extractor with ν = 6, as seen in Figure 6a. Similarly, the redundancy f_4^P(p, 180) of Peres's extractor with ν = 4 (the red line) is close to the redundancy f^E(p, 10) of Elias's extractor with block size N = 10 (the blue dashed line), but the running time of Elias's extractor with N = 10 is approximately 16 times larger than that of Peres's extractor with ν = 4, as seen in Figure 6a (the blue dashed line and the red line). As a result, we conclude that Peres's extractor achieves a better rate (or redundancy) than Elias's extractor with the RM method under very similar running times.

5. Conclusions

It is known that Elias's extractor achieves the optimal rate if the block size tends to infinity. We considered the improved version of Elias's extractor by Ryabko and Matchikina [10], which reduces both the time complexity and the space complexity. Peres's extractor achieves the optimal rate if the length of the input and the number of iterations tend to infinity. These are results of asymptotic analysis, but it is important and interesting to analyze and compare both extractors non-asymptotically for finite input sequences, since the resulting information is useful in practical applications (e.g., cryptography).
In this paper, we evaluated the numerical performance of Peres's extractor and of Elias's extractor with the RM method from practical aspects. First, based on some heuristics, we derived a lower bound on the maximum redundancy of Peres's extractor, and we showed that the maximum redundancy of Elias's extractor (with the RM method) is superior to that of Peres's extractor if no attention is paid to time or space complexity. We also found that f_ν^P(p) is not concave on p ∈ (0, 1) for every ν ≥ 5. We then evaluated the numerical performance of both extractors for finite input sequences. Our implementation runs on a general PC and requires no special resources, libraries, or frameworks. Our empirical results showed that Peres's extractor is much better than Elias's extractor for given finite input sequences under very similar running times. Consequently, Peres's extractor would be more suitable for generating uniformly random sequences in practical applications such as cryptographic systems.

Author Contributions

Conceptualization, A.P. and J.S.; Data curation, A.P. and J.S.; Formal analysis, A.P., N.K., and J.S.; Investigation, A.P., N.K., and J.S.; Methodology, A.P., N.K., and J.S.; Resources, A.P.; Software, A.P.; Supervision, J.S.; Validation, A.P. and J.S.; Visualization, A.P.; Writing—Original Draft, A.P., N.K., and J.S.; Writing—Review & Editing, A.P., N.K., and J.S.

Funding

This work was supported by JSPS KAKENHI under Grant Numbers JP18H03238 and JP17H01752.

Acknowledgments

The authors would like to thank anonymous referees for valuable and helpful comments.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Proposition 1

First, we note that, for ν ≥ 1,
$$f_\nu^P(1/2) = h(1/2) - r_\nu^P(1/2) = \left(\frac{3}{4}\right)^\nu, \tag{A1}$$
where the last equality follows from (9).
For $p \in (0,1)$, we define $\tilde{p} := p^2 + (1-p)^2$ and $\hat{p} := p^2/\tilde{p}$. Then, it holds that
$$\frac{d\tilde{p}}{dp} = 2(2p-1), \qquad \frac{d\hat{p}}{dp} = \frac{2p(1-p)}{\tilde{p}^2}. \tag{A2}$$
Next, for the first-order derivative of $f_\nu^P(p)$, we have
$$\frac{df_1^P(p)}{dp} = \frac{1}{\ln 2}\ln\frac{1-p}{p} + 2p - 1, \tag{A3}$$
$$\frac{df_\nu^P(p)}{dp} = (2p-1)\left(f_{\nu-1}^P(\hat{p}) + \frac{df_{\nu-1}^P(\tilde{p})}{dp}\right) + \frac{p(1-p)}{\tilde{p}}\,\frac{df_{\nu-1}^P(\hat{p})}{dp} \quad \text{for } \nu \ge 2. \tag{A4}$$
Then, by setting $p = 1/2$ in (A4), for ν ≥ 2, we have
$$\frac{df_\nu^P(1/2)}{dp} = \frac{1}{2}\,\frac{df_{\nu-1}^P(1/2)}{dp} = \frac{1}{2^{\nu-1}}\,\frac{df_1^P(1/2)}{dp} \tag{A5}$$
$$= 0, \tag{A6}$$
where (A5) follows from (A4), and (A6) follows from (A3).
Moreover, for the second-order derivative of $f_\nu^P(p)$, we obtain
$$\frac{d^2 f_1^P(p)}{dp^2} = -\frac{1}{\ln 2}\,\frac{1}{p(1-p)} + 2, \tag{A7}$$
$$\frac{d^2 f_\nu^P(p)}{dp^2} = 2 f_{\nu-1}^P(\hat{p}) + 2\,\frac{df_{\nu-1}^P(\tilde{p})}{dp} + \frac{1-2p}{\tilde{p}}\,\frac{df_{\nu-1}^P(\hat{p})}{dp} \tag{A8}$$
$$\qquad + 2(2p-1)^2\,\frac{d^2 f_{\nu-1}^P(\tilde{p})}{dp^2} + \frac{2p^2(1-p)^2}{\tilde{p}^3}\,\frac{d^2 f_{\nu-1}^P(\hat{p})}{dp^2} \quad \text{for } \nu \ge 2. \tag{A9}$$
Furthermore, by setting $p = 1/2$ in (A9), for ν ≥ 2, we have
$$\frac{d^2 f_\nu^P(1/2)}{dp^2} = 2 f_{\nu-1}^P(1/2) + 2\,\frac{df_{\nu-1}^P(1/2)}{dp} + \frac{d^2 f_{\nu-1}^P(1/2)}{dp^2} = 2\left(\frac{3}{4}\right)^{\nu-1} + \frac{d^2 f_{\nu-1}^P(1/2)}{dp^2}, \tag{A10}$$
where the first equality follows from (A9), and the second equality follows from (A1) and (A6). Then, by solving this recurrence ($\nu \ge 2$) with $\frac{d^2 f_1^P(1/2)}{dp^2} = 2 - \frac{4}{\ln 2}$, we obtain
$$\frac{d^2 f_\nu^P(1/2)}{dp^2} = \frac{d^2 f_1^P(1/2)}{dp^2} + 2\sum_{k=1}^{\nu-1}\left(\frac{3}{4}\right)^k = 2 - \frac{4}{\ln 2} + 6\left(1 - \left(\frac{3}{4}\right)^{\nu-1}\right) = 8 - \frac{4}{\ln 2} - 6\left(\frac{3}{4}\right)^{\nu-1}. \tag{A11}$$
From Equation (A11), it follows that
$$\frac{d^2 f_\nu^P(1/2)}{dp^2} < 0 \ \text{ for } 1 \le \nu \le 4, \qquad \frac{d^2 f_\nu^P(1/2)}{dp^2} > 0 \ \text{ for } \nu \ge 5.$$
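The sign pattern just derived can also be checked numerically. The sketch below is illustrative Python (not the paper's implementation); it assumes the standard Peres rate recursion, under which f_ν^P(1/2) = (3/4)^ν as in (A1), and compares a central finite difference of f_ν^P at p = 1/2 with the closed form (A11):

```python
from math import log, log2

def h(p):
    """Binary entropy function h(p) for 0 < p < 1."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def rate(p, nu):
    """Asymptotic rate r_nu(p) of Peres's extractor (standard recursion)."""
    if nu == 0:
        return 0.0
    pt = p * p + (1 - p) ** 2              # p-tilde
    return (p * (1 - p) + 0.5 * rate(pt, nu - 1)
            + 0.5 * pt * rate(p * p / pt, nu - 1))

def f(p, nu):
    """Redundancy f_nu(p) = h(p) - r_nu(p)."""
    return h(p) - rate(p, nu)

def second_deriv_at_half(nu, eps=1e-4):
    """Central finite-difference estimate of d^2 f_nu / dp^2 at p = 1/2."""
    return (f(0.5 + eps, nu) - 2 * f(0.5, nu) + f(0.5 - eps, nu)) / eps ** 2

def closed_form(nu):
    """Closed form (A11): 8 - 4/ln2 - 6*(3/4)^(nu-1)."""
    return 8 - 4 / log(2) - 6 * 0.75 ** (nu - 1)
```

The finite difference agrees with (A11) to high accuracy and confirms the sign change between ν = 4 and ν = 5.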

Appendix B. Experimental Results for Proposition 2

In this appendix, we show experimental results that support Proposition 2. In Figure A1, we plot the difference f_ν^P(p, n) − f_ν^P(p) for input bit-lengths n = 80, 100, …, 200 and iterations 1 ≤ ν ≤ ⌊log n⌋. The x-axis is the probability p = 0.1, 0.2, …, 0.9 and the y-axis is the difference f_ν^P(p, n) − f_ν^P(p).
Proposition 2 states that f_{⌊log n⌋}^P(p, n) ≥ f_{⌊log n⌋}^P(p) for p ∈ (0, 1), and our experimental results in Figure A1 show that this indeed holds for input bit-lengths n = 80, 100, …, 200.
Figure A1. Difference values f_ν^P(p, n) − f_ν^P(p) with n = 80, 100, …, 200 and 1 ≤ ν ≤ ⌊log n⌋.

References

  1. Heninger, N.; Durumeric, Z.; Wustrow, E.; Halderman, J.A. Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices. In Proceedings of the 21st USENIX Security Symposium, Bellevue, WA, USA, 8–10 August 2012.
  2. Lenstra, A.K.; Hughes, J.P.; Augier, M.; Bos, J.W.; Kleinjung, T.; Wachter, C. Public Keys. In Advances in Cryptology—CRYPTO 2012; Safavi-Naini, R., Canetti, R., Eds.; Number 7417 in Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; pp. 626–642.
  3. Bendel, M. Hackers Describe PS3 Security As Epic Fail, Gain Unrestricted Access. Available online: Exophase.com (accessed on 20 September 2018).
  4. Dorrendorf, L.; Gutterman, Z.; Pinkas, B. Cryptanalysis of the Random Number Generator of the Windows Operating System. ACM Trans. Inf. Syst. Secur. 2009, 13, 10.
  5. Bonneau, J.; Clark, J.; Goldfeder, S. On Bitcoin as a public randomness source. IACR Cryptol. ePrint Arch. 2015, 2015, 1015.
  6. Von Neumann, J. Various Techniques Used in Connection with Random Digits. J. Res. Nat. Bur. Stand. Appl. Math. Ser. 1951, 12, 36–38.
  7. Elias, P. The Efficient Construction of an Unbiased Random Sequence. Ann. Math. Stat. 1972, 43, 865–870.
  8. Peres, Y. Iterating Von Neumann’s Procedure for Extracting Random Bits. Ann. Stat. 1992, 20, 590–597.
  9. Abbott, A.A.; Calude, C.S. Von Neumann Normalisation and Symptoms of Randomness: An Application to Sequences of Quantum Random Bits. In Unconventional Computation; Calude, C.S., Kari, J., Petre, I., Rozenberg, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 40–51.
  10. Ryabko, B.; Matchikina, E. Fast and efficient construction of an unbiased random sequence. IEEE Trans. Inf. Theory 2000, 46, 1090–1093.
  11. Cover, T. Enumerative source encoding. IEEE Trans. Inf. Theory 1973, 19, 73–77.
  12. Schönhage, A.; Strassen, V. Schnelle Multiplikation großer Zahlen. Computing 1971, 7, 281–292. (In German)
  13. Pae, S.I. Exact output rate of Peres’s algorithm for random number generation. Inf. Process. Lett. 2013, 113, 160–164.
  14. Bourgain, J. More on the sum-product phenomenon in prime fields and its applications. Int. J. Number Theory 2005, 1, 1–32.
  15. Raz, R. Extractors with Weak Random Seeds. In Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing, Hunt Valley, MD, USA, 22–24 May 2005; ACM: New York, NY, USA, 2005; pp. 11–20.
  16. Cohen, G. Local Correlation Breakers and Applications to Three-Source Extractors and Mergers. In Proceedings of the 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, Berkeley, CA, USA, 17–20 October 2015; pp. 845–862.
  17. Chattopadhyay, E.; Zuckerman, D. Explicit Two-Source Extractors and Resilient Functions. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, Cambridge, MA, USA, 18–21 June 2016; ACM: New York, NY, USA, 2016; pp. 670–683.
  18. Bouda, J.; Krhovjak, J.; Matyas, V.; Svenda, P. Towards True Random Number Generation in Mobile Environments. In Identity and Privacy in the Internet Age; Jøsang, A., Maseng, T., Knapskog, S.J., Eds.; Number 5838 in Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; pp. 179–189.
  19. Halprin, R.; Naor, M. Games for Extracting Randomness. In Proceedings of the 5th Symposium on Usable Privacy and Security, Mountain View, CA, USA, 15–17 July 2009; ACM: New York, NY, USA, 2009; p. 12.
  20. Voris, J.; Saxena, N.; Halevi, T. Accelerometers and Randomness: Perfect Together. In Proceedings of the Fourth ACM Conference on Wireless Network Security, Hamburg, Germany, 14–17 June 2011; ACM: New York, NY, USA, 2011; pp. 115–126.
  21. Prasitsupparote, A.; Konno, N.; Shikata, J. Numerical Analysis of Elias’s and Peres’s Deterministic Extractors. In Proceedings of the 51st Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 22–24 March 2017.
  22. Graham, R.L.; Knuth, D.E.; Patashnik, O. Concrete Mathematics, 2nd ed.; Addison-Wesley: Boston, MA, USA, 1994; pp. 153–256.
  23. The MathWorks, Inc. Uniformly Distributed Random Numbers—MATLAB Rand. Available online: Mathworks.com/help/matlab/ref/rand.html (accessed on 20 September 2018).
  24. RANDOM.ORG. RANDOM.ORG—Random Byte Generator. Available online: Random.org/bytes (accessed on 20 September 2018).
Figure 1. Redundancy of Peres’s extractor. (a) Asymptotic and theoretical estimate of redundancy by Equation (7); (b) Non-asymptotic and experimental estimate of redundancy with 180-bit input sequences.
Figure 2. Asymptotic and theoretical estimate of the redundancy of Peres’s extractor with ν = 5, 6 and 0.450 ≤ p ≤ 0.550. (a) Graph of f_5^P(p); (b) Graph of f_6^P(p).
Figure 3. Non-asymptotic and experimental estimates of the redundancy of Peres’s extractor for 180-bit input sequences with ν = 5, 6 and 0.450 ≤ p ≤ 0.550. (a) Graph of f_5^P(p, 180); (b) Graph of f_6^P(p, 180).
Figure 4. Redundancy of Elias’s extractor with the RM method. (a) Asymptotic and theoretical estimate of redundancy by Equation (3) and f^E(p, n) := h(p) − r^E(p, n); (b) Non-asymptotic and experimental estimate of redundancy with 180-bit input sequences.
Figure 5. Running time. (a) Peres’s extractor; (b) Elias’s extractor with the RM method.
Figure 6. Comparison of Peres’s and Elias’s extractors. (a) Comparison of running time; (b) Comparison of redundancy for 180-bit inputs.
Table 1. Comparison of extractors.

                                               Redundancy Γ(n)     Time Complexity                  Space Complexity
von Neumann’s extractor                        3/4                 O(n)                             O(1)
Elias’s extractor (with maximum block-size)    O(1/n) (by [7])     O(n log³n log log n) (by [10])   O(n log²n) (by [10])
Peres’s extractor (with maximum iterations)    O(1) (by [8])       O(n log n) (by [8])              O(1) (by [8])