Proceeding Paper

What Is Randomness? The Interplay between Alpha Entropies, Total Variation and Guessing †

Olivier Rioul

LTCI, Télécom Paris, Institut Polytechnique de Paris, 91120 Palaiseau, France

† Presented at the 41st International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Paris, France, 18–22 July 2022.
Phys. Sci. Forum 2022, 5(1), 30; https://doi.org/10.3390/psf2022005030
Published: 13 December 2022

Abstract

In many areas of computer science, it is of primary importance to assess the randomness of a certain variable X. Many different criteria can be used to evaluate randomness, possibly after observing some disclosed data. A “sufficiently random” X is often described as “entropic”. Indeed, Shannon’s entropy is known to provide a resistance criterion against modeling attacks. More generally, one may consider the Rényi $\alpha$-entropy, where Shannon’s entropy, collision entropy and min-entropy are recovered as the particular cases $\alpha = 1$, $2$ and $+\infty$, respectively. Guesswork, or guessing entropy, is also of great interest in relation to $\alpha$-entropy. On the other hand, many applications rely instead on the “statistical distance”, also known as “total variation” distance, to the uniform distribution. This criterion is particularly important because a very small distance ensures that no statistical test can effectively distinguish between the actual distribution and the uniform distribution. In this paper, we establish optimal lower and upper bounds between $\alpha$-entropy and guessing entropy on one hand, and error probability and total variation distance to the uniform distribution on the other hand. In this context, it turns out that the best known “Pinsker inequality” and recent “reverse Pinsker inequalities” are not necessarily optimal. We recover or improve previous Fano-type and Pinsker-type inequalities used for several applications.

1. Some Well-Known “Randomness” Measures

It is of primary importance to assess the “randomness” of a certain random variable X, which represents some identifier, cryptographic key, signature or any type of intended secret. Applications include pseudo-random bit generators [1], general cipher security [2], randomness extractors [3], hash functions ([4], Chapter 8), physically unclonable functions [5], and true random number generators [6], to list but a few. In all of these examples, X takes finitely many values $x \in \{x_1, x_2, \ldots, x_M\}$ with probabilities $p_X(x) = P(X = x)$. In this paper, it will be convenient to denote by
$$p_{(1)} \geq p_{(2)} \geq \cdots \geq p_{(M)} \tag{1}$$
any rearrangement of the probabilities $p_X(x)$ in descending order (where ties can be resolved arbitrarily): $p_{(1)} = \max_x p_X(x)$ is the maximum probability, $p_{(2)}$ the second maximum, etc. In addition, we need to define the cumulative sums
$$P_{(k)} \triangleq p_{(1)} + \cdots + p_{(k)} \qquad (k = 1, 2, \ldots, M) \tag{2}$$
where, in particular, $P_{(M)} = 1$.
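As a concrete illustration (a minimal sketch with helper names of my own choosing, not from the paper), the sorted probabilities $p_{(i)}$ and the cumulative sums $P_{(k)}$ of (1) and (2) can be computed as follows.

```python
# Sorted probabilities p_(1) >= ... >= p_(M) and cumulative sums P_(k).
import numpy as np

def sorted_probs(p):
    """Return the probabilities rearranged in descending order (ties arbitrary)."""
    return np.sort(np.asarray(p, dtype=float))[::-1]

def cumulative_sums(p):
    """Return P_(k) = p_(1) + ... + p_(k) for k = 1, ..., M; P_(M) = 1."""
    return np.cumsum(sorted_probs(p))

p = [0.1, 0.5, 0.15, 0.25]
print(sorted_probs(p))      # [0.5  0.25 0.15 0.1 ]
print(cumulative_sums(p))   # [0.5  0.75 0.9  1.  ]
```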
Many different criteria can be used to evaluate the randomness of X or its distribution $p_X$, depending on the type of attack that can be carried out to recover the whole or part of the secret, possibly after observing disclosed data Y. The observed random variable Y can be any random variable and is not necessarily discrete. The conditional probability distribution of X having observed $Y = y$ is denoted by $p_{X|y}$ to distinguish it from the unconditional distribution $p_X$. To simplify the notation, we write
$$p(x) \triangleq p_X(x) = P(X = x) \tag{3}$$
$$p(x|y) \triangleq p_{X|y}(x) = P(X = x \mid Y = y). \tag{4}$$
A “sufficiently random” secret is often described as “entropic” in the literature. Indeed, Shannon’s entropy
$$H(X) = H(p) \triangleq \sum_x p(x) \log \frac{1}{p(x)} = \mathbb{E} \log \frac{1}{p(X)} \tag{5}$$
(with the convention $0 \log \frac{1}{0} = 0$) is known to provide a resistance criterion against modeling attacks. It was introduced by Shannon as a measure of uncertainty of X. The average entropy after having observed Y is the usual conditional entropy
$$H(X|Y) \triangleq \mathbb{E}_y\, H(p_{X|y}) = \mathbb{E} \log \frac{1}{p(X|Y)}. \tag{6}$$
A well-known generalization of Shannon’s entropy is the Rényi entropy of order $\alpha > 0$, or $\alpha$-entropy,
$$H_\alpha(X) = H_\alpha(p) \triangleq \frac{1}{1-\alpha} \log \sum_x p(x)^\alpha = \frac{\alpha}{1-\alpha} \log \|p_X\|_\alpha \tag{7}$$
where, by continuity as $\alpha \to 1$, the 1-entropy $H_1(X) = H(X)$ is Shannon’s entropy. One may consider many different definitions of conditional $\alpha$-entropy [7], but for many applications the preferred choice is Arimoto’s definition [8,9,10]
$$H_\alpha(X|Y) \triangleq \frac{\alpha}{1-\alpha} \log \mathbb{E}_y \|p_{X|y}\|_\alpha \tag{8}$$
where the expectation over Y is taken on the “$\alpha$-norm” $\|\cdot\|_\alpha = \bigl(\sum_x (\cdot)^\alpha\bigr)^{1/\alpha}$ inside the logarithm. (Strictly speaking, $\|\cdot\|_\alpha$ is not a norm when $\alpha < 1$.)
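As an illustration, here is a small sketch (not from the paper; the function names and the joint-pmf input format are my own assumptions) of the $\alpha$-entropy (7) and of Arimoto's conditional version (8), with the limit cases $\alpha \to 1$ and $\alpha = +\infty$ handled explicitly.

```python
# Renyi alpha-entropy and Arimoto's conditional alpha-entropy (in nats).
import numpy as np

def renyi_entropy(p, alpha):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if alpha == 1:                      # Shannon entropy (limit alpha -> 1)
        return -np.sum(p * np.log(p))
    if alpha == np.inf:                 # min-entropy H_inf = -log max p
        return -np.log(p.max())
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def arimoto_conditional(p_xy, alpha):
    """Arimoto conditional entropy from a joint pmf p_xy[x, y] (alpha != 1,
    all marginals p(y) assumed positive)."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_y = p_xy.sum(axis=0)
    if alpha == np.inf:                 # H_inf(X|Y) = -log sum_y max_x p(x, y)
        return -np.log(np.sum(p_xy.max(axis=0)))
    # E_y ||p_{X|y}||_alpha = sum_y p(y) * (sum_x p(x|y)^alpha)^(1/alpha)
    norms = np.sum((p_xy / p_y) ** alpha, axis=0) ** (1.0 / alpha)
    return (alpha / (1.0 - alpha)) * np.log(np.sum(p_y * norms))

p_xy = np.array([[0.3, 0.1],
                 [0.1, 0.5]])
# Conditioning reduces alpha-entropy: H_2(X|Y) <= H_2(X).
print(arimoto_conditional(p_xy, 2), renyi_entropy(p_xy.sum(axis=1), 2))
```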
For $\alpha = 2$, the collision entropy
$$H_2(X) = H_2(p) = \log \frac{1}{P(X = X')}, \tag{9}$$
where $X'$ is an independent copy of X, is often used to ensure security against collision attacks. Perhaps one of the most popular criteria is the min-entropy, defined when $\alpha \to +\infty$ as
$$H_\infty(X) = H_\infty(p) = \log \frac{1}{p_{(1)}} = \log \frac{1}{1 - P_e(X)}, \tag{10}$$
whose maximization is equivalent to a probability criterion to ensure a worst-case security level. Arimoto’s conditional $\infty$-entropy takes the form
$$H_\infty(X|Y) = \log \frac{1}{1 - P_e(X|Y)} \tag{11}$$
where we have noted
$$P_e(X) = P_e(p) \triangleq 1 - p_{(1)} = 1 - P_{(1)} \tag{12}$$
$$P_e(X|Y) \triangleq \mathbb{E}_y\, P_e(X|y). \tag{13}$$
The latter quantities correspond to the minimum probability of decision error using a MAP (maximum a posteriori probability) rule (see, e.g., [11]).
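For concreteness, a quick numerical illustration (not from the paper; the example distribution is arbitrary) of the collision probability behind (9), the relation (10) between min-entropy and the MAP error, and the well-known fact that $H_\alpha$ is non-increasing in $\alpha$:

```python
# Collision entropy, min-entropy and MAP error probability (in nats).
import numpy as np

p = np.array([0.5, 0.25, 0.15, 0.10])

H1 = -np.sum(p * np.log(p))            # Shannon entropy
H2 = -np.log(np.sum(p ** 2))           # collision entropy: P(X = X') = sum p^2
P_e = 1.0 - p.max()                    # MAP detection error 1 - p_(1)
H_inf = -np.log(1.0 - P_e)             # min-entropy -log p_(1)

assert H_inf <= H2 <= H1               # alpha-entropy is non-increasing in alpha
print(H1, H2, H_inf)                   # ~1.21, ~1.06, ~0.69 nats
```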
Guesswork or guessing entropy [2,12]
$$G(X) = G(p_X) \triangleq \sum_{i=1}^{M} i \cdot p_{(i)} \tag{14}$$
and, more generally, guessing moments of order $\rho > 0$, or $\rho$-guessing entropy,
$$G_\rho(X) = G_\rho(p_X) \triangleq \sum_{i=1}^{M} i^\rho \cdot p_{(i)} \tag{15}$$
are also of great interest in relation to $\alpha$-entropy [10,13,14]. The conditional versions given observation Y are the expectations
$$G_\rho(X|Y) \triangleq \mathbb{E}_y\, G_\rho(X|y). \tag{16}$$
When ρ = 1 , this represents the average number of guesses that an attacker has to make to guess the secret X correctly after having observed Y [13].
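A sketch of these definitions (illustrative only; the helper names are mine), computing $G_\rho$ by guessing values in decreasing order of probability and obtaining the conditional version by averaging over the observation:

```python
# rho-guessing entropy and its conditional version.
import numpy as np

def guessing_entropy(p, rho=1.0):
    """G_rho(p) = sum_i i^rho * p_(i), with p_(i) sorted in descending order."""
    p_sorted = np.sort(np.asarray(p, dtype=float))[::-1]
    ranks = np.arange(1, len(p_sorted) + 1)
    return np.sum(ranks ** rho * p_sorted)

def conditional_guessing_entropy(p_xy, rho=1.0):
    """G_rho(X|Y) = E_y G_rho(p_{X|y}) from a joint pmf p_xy[x, y]."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_y = p_xy.sum(axis=0)
    return sum(p_y[j] * guessing_entropy(p_xy[:, j] / p_y[j], rho)
               for j in range(p_xy.shape[1]) if p_y[j] > 0)

p = [0.5, 0.25, 0.15, 0.10]
print(guessing_entropy(p))   # 1*0.5 + 2*0.25 + 3*0.15 + 4*0.10 = 1.85 guesses
```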

2. Statistical (Total Variation) Distance to the Uniform Distribution

As shown in the sequel, all quantities introduced in the preceding section (H, $H_\alpha$, $P_e$, G, $G_\rho$) have many properties in common. In particular, each of these quantities attains
  • its minimum value for a delta (Dirac) distribution $p = \delta$, that is, a deterministic random variable X with $p_{(1)} = 1$ and all other probabilities equal to 0;
  • its maximum value for the uniform distribution $p = u$, that is, a uniformly distributed random variable X with $p(x) = \frac{1}{M}$ for all x.
Indeed, it can be easily checked that
$$0 \leq H_\alpha(X) \leq \log M \tag{17}$$
$$1 \leq G(X) \leq \frac{M+1}{2} \tag{18}$$
$$0 \leq P_e(X) \leq 1 - \frac{1}{M} \tag{19}$$
where the lower (resp. upper) bounds are attained for a delta (resp. uniform) distribution: the uniform distribution is the “most entropic” ($H_\alpha$), the “hardest to guess” (G), and the “hardest to detect” ($P_e$).
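A quick numerical check of the bounds (17)–(19) at the two extremes (illustrative; the alphabet size M = 8 is arbitrary):

```python
# Delta distribution attains the lower bounds, uniform the upper bounds.
import numpy as np

M = 8
delta = np.zeros(M); delta[0] = 1.0        # deterministic X
uniform = np.full(M, 1.0 / M)              # uniformly distributed X

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def guesswork(p):
    q = np.sort(p)[::-1]
    return np.sum(np.arange(1, len(q) + 1) * q)

print(shannon(delta), shannon(uniform), np.log(M))        # 0.0, log M, log M
print(guesswork(delta), guesswork(uniform), (M + 1) / 2)  # 1.0, 4.5, 4.5
print(1 - delta.max(), 1 - uniform.max(), 1 - 1 / M)      # 0.0, 0.875, 0.875
```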
The maximum entropy property is related to the minimization of divergence [15]
$$D(p \| u) = \log M - H(p) \tag{20}$$
where $D(p \| q) = \sum_x p(x) \log \frac{p(x)}{q(x)} \geq 0$ denotes the Kullback-Leibler divergence, which vanishes if and only if $p = q$. Therefore, entropy appears as the complementary value of the divergence to the uniform distribution. Similarly, for $\alpha$-entropy,
$$D_\alpha(p \| u) = \log M - H_\alpha(p) \tag{21}$$
where $D_\alpha(p \| q) = \frac{1}{\alpha-1} \log \sum_x p(x)^\alpha q(x)^{1-\alpha}$ denotes the Rényi $\alpha$-divergence [16] (the Bhattacharyya distance for $\alpha = \frac{1}{2}$).
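The identities (20) and (21) are easy to check numerically; the following sketch (illustrative, with an arbitrary example distribution) does so for the Kullback-Leibler divergence and for $\alpha = 1/2$.

```python
# Divergence to the uniform distribution as the complement of entropy.
import numpy as np

def kl_divergence(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def renyi_divergence(p, q, alpha):
    return np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0)

def renyi_entropy(p, alpha):
    p = p[p > 0]
    if alpha == 1:
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

M = 4
p = np.array([0.5, 0.25, 0.15, 0.10])
u = np.full(M, 1.0 / M)

print(kl_divergence(p, u), np.log(M) - renyi_entropy(p, 1))            # equal
print(renyi_divergence(p, u, 0.5), np.log(M) - renyi_entropy(p, 0.5))  # equal
```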
Instead of the divergence to the uniform distribution, it is often desirable to rely on the statistical distance, also known as the total variation distance, to the uniform distribution. The general expression of the total variation distance is
$$\Delta(p, q) = \frac{1}{2} \sum_x |p(x) - q(x)| \tag{22}$$
where the $1/2$ factor is there to ensure that $0 \leq \Delta(p,q) \leq 1$. Equivalently,
$$\Delta(p, q) = \max_T |P(T) - Q(T)| \tag{23}$$
where the maximum is over any event T and P, Q denote the respective probabilities w.r.t. p and q. As is well known, the maximum
$$\Delta(p, q) = P(T^+) - Q(T^+) \tag{24}$$
is attained when $T = T^+ = \{x \mid p(x) \geq q(x)\}$.
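A small sketch (illustrative; the names are mine) checking that the half-$L_1$ expression (22) and the maximizing event $T^+$ of (24) give the same value:

```python
# Two equivalent expressions of the total variation distance.
import numpy as np

def tv_l1(p, q):
    return 0.5 * np.sum(np.abs(p - q))

def tv_event(p, q):
    t_plus = p >= q                      # maximizing event T+
    return np.sum(p[t_plus]) - np.sum(q[t_plus])

p = np.array([0.5, 0.25, 0.15, 0.10])
q = np.full(4, 0.25)

print(tv_l1(p, q), tv_event(p, q))       # both 0.25
```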
The total variation criterion is particularly important because a very small distance $\Delta(p,q)$ ensures that no statistical test can effectively distinguish between p and q. In fact, given some observation X following either p (null hypothesis $H_0$) or q (alternative hypothesis $H_1$), such a statistical test takes the form “is $X \in T$?” (then accept $H_0$, otherwise reject $H_0$). If $|P(X \in T) - Q(X \in T)| \leq \Delta(p,q)$ is small enough, the type-I and type-II errors have total probability $P(X \notin T) + Q(X \in T) \geq 1 - \Delta(p,q) \approx 1$. Thus, in this sense, the two hypotheses p and q are indistinguishable (statistically equivalent).
By analogy with (20) and (21) we can then define “statistical randomness” $R(X) = R(p) \geq 0$ as the complementary value of the statistical distance to the uniform distribution, i.e., such that
$$\Delta(p, u) = 1 - R(p) \tag{25}$$
holds. With this definition,
$$R(X) = R(p) \triangleq 1 - \frac{1}{2} \sum_x \Big|p(x) - \frac{1}{M}\Big| \tag{26}$$
is maximum (equal to 1) when $\Delta(p,u) = 0$, i.e., $p = u$. Thus the uniform distribution u is the “most random”. What is fundamental is that $R(X) \approx 1$ ensures that no statistical test can effectively distinguish the actual distribution from the uniform distribution.
Again the “least random” distribution corresponds to the deterministic case. In fact, from (24) we have
$$\Delta(p, u) = P(T^+) - \frac{K}{M} = P_{(K)} - \frac{K}{M} \tag{27}$$
where $T^+ = \{x \mid p(x) \geq \frac{1}{M}\}$ has cardinality $K = |T^+|$, and $P(T^+) = P_{(K)}$ by definition (2). It is easily seen that $\Delta(p,u)$ attains its maximum value $1 - \frac{1}{M}$ if and only if $p = \delta$ is a delta distribution. In summary,
$$\frac{1}{M} \leq R(X) \leq 1 \tag{28}$$
where the lower (resp. upper) bound is attained for a delta (resp. uniform) distribution. The conditional version is again taken by averaging over the observation:
$$R(X|Y) \triangleq \mathbb{E}_y\, R(X|y). \tag{29}$$
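An illustrative check (with an arbitrary example distribution) of the closed form (27) against the definition (26):

```python
# Statistical randomness R(p) and the closed form Delta(p, u) = P_(K) - K/M.
import numpy as np

def randomness(p):
    M = len(p)
    return 1.0 - 0.5 * np.sum(np.abs(p - 1.0 / M))

p = np.array([0.5, 0.25, 0.15, 0.10])
M = len(p)

K = np.sum(p >= 1.0 / M)                       # cardinality of T+
P_K = np.sort(p)[::-1][:K].sum()               # cumulative sum P_(K)
print(P_K - K / M, 1.0 - randomness(p))        # both equal Delta(p, u) = 0.25
print(randomness(p))                           # R(p) = 0.75, between 1/M and 1
```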

3. F-Concavity: Knowledge Reduces Randomness and Data Processing

Knowledge of the observed data Y (on average) reduces uncertainty, improves detection or guessing, and reduces randomness in the sense that:
$$H_\alpha(X|Y) \leq H_\alpha(X) \tag{30}$$
$$G(X|Y) \leq G(X) \tag{31}$$
$$P_e(X|Y) \leq P_e(X) \tag{32}$$
$$R(X|Y) \leq R(X). \tag{33}$$
When $\alpha = 1$, the property $H(X|Y) \leq H(X)$ is well known (“conditioning reduces entropy” [15]): the difference $H(X) - H(X|Y) = I(X;Y)$ is the mutual information, which is nonnegative. Property (30) for $\alpha \neq 1$ is also well known, see [7,8]. In view of (10) and (11), the case $\alpha = +\infty$ in (30) is equivalent to (32), which is obvious in the sense that any observation can only improve MAP detection. This, as well as (31), is also easily proved directly (see, e.g., [17]).
For all quantities H, $P_e$, G, R, the conditional quantity is obtained by averaging over the observation as in (6), (13), (16) and (29). Since $p(x) = \mathbb{E}_y\, p(x|y)$, the fact that knowledge of Y reduces H, $P_e$, G or R amounts to saying that these are concave functions of the distribution p of X. Note that concavity of $R(X) = R(p)$ in p is clear from the definition (26), which shows (33).
For entropy H, this has also been given some physical interpretation: “mixing” distributions (taking convex combinations of probability distributions) can only increase the entropy on average. For example, given any two distributions p and q, $H(\lambda p + \bar\lambda q) \geq \lambda H(p) + \bar\lambda H(q)$ where $0 \leq \lambda = 1 - \bar\lambda \leq 1$. Similarly, such mixing of distributions increases the average probability of error $P_e$, guessing entropy G, and statistical randomness R.
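A small numerical illustration of this concavity under mixing (not from the paper; the two distributions and the mixing weight are arbitrary): the mixture scores at least the convex combination of the individual scores for all four quantities.

```python
# Concavity in the distribution: mixing increases H, G, P_e and R on average.
import numpy as np

def shannon(p):
    p = p[p > 0]; return -np.sum(p * np.log(p))
def guesswork(p):
    q = np.sort(p)[::-1]; return np.sum(np.arange(1, len(q) + 1) * q)
def p_error(p):
    return 1.0 - p.max()
def randomness(p):
    return 1.0 - 0.5 * np.sum(np.abs(p - 1.0 / len(p)))

p = np.array([0.7, 0.2, 0.1, 0.0])
q = np.array([0.1, 0.2, 0.3, 0.4])
lam = 0.3
mix = lam * p + (1 - lam) * q

for f in (shannon, guesswork, p_error, randomness):
    assert f(mix) >= lam * f(p) + (1 - lam) * f(q)   # concavity: "mixing" helps
```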
For the conditional $\alpha$-entropy $H_\alpha(X|Y)$ with $\alpha \neq 1$, averaging over Y in the definition (8) is carried out on the $\alpha$-norm of the distribution $p_{X|y}$, which is known to be convex for $\alpha > 1$ (by Minkowski’s inequality) and concave for $0 < \alpha < 1$ (by the reverse Minkowski inequality). Hence the fact that knowledge reduces $\alpha$-entropy (inequality (30)) is equivalent to the fact that $H_\alpha(p)$ in (7) is an F-concave function, that is, an increasing function F of a concave function of p, where $F(x) = \frac{\alpha}{1-\alpha} \log(\operatorname{sgn}(1-\alpha)\, x)$. The average over Y in $H_\alpha(X|Y)$ is taken on the quantity $F^{-1}(H_\alpha)$ instead of $H_\alpha$ itself. Thus, for example, $H_{1/2}(p)$ is a log-concave function of p.
A straightforward generalization of (30)–(33) is the data processing inequality: for any Markov chain $X - Y - Z$, i.e., such that $p(x|y,z) = p(x|y)$,
$$H_\alpha(X|Y) \leq H_\alpha(X|Z) \tag{34}$$
$$G(X|Y) \leq G(X|Z) \tag{35}$$
$$P_e(X|Y) \leq P_e(X|Z) \tag{36}$$
$$R(X|Y) \leq R(X|Z). \tag{37}$$
When $\alpha = 1$, the property $H(X|Y) \leq H(X|Z)$ amounts to $I(X;Z) \leq I(X;Y)$, i.e., (post-)processing can never increase information. Inequalities (34)–(37) can be deduced from (30)–(33) by considering a fixed $Z = z$, averaging over Z to show that $H(X|Y,Z) \leq H(X|Z)$, etc. (additional knowledge reduces randomness), and then noting that $p(x|y,z) = p(x|y)$ by the Markov property (see, e.g., [7,18] for $H_\alpha$ and [17] for G). Conversely, (30)–(33) can be re-obtained from (34)–(37) as the particular case $Z = 0$ (any deterministic variable representing zero information).
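To make the data processing inequality concrete, here is a small simulation sketch (my own construction, not from the paper): with a joint pmf for (X, Y) and a deterministic post-processing $Z = g(Y)$, the chain $X - Y - Z$ holds automatically, and the conditional Shannon entropy and MAP error can only increase when Y is replaced by Z.

```python
# Data processing: post-processing the observation degrades it for the attacker.
import numpy as np

p_xy = np.array([[0.30, 0.05, 0.05],     # rows: values of x, columns: values of y
                 [0.05, 0.25, 0.05],
                 [0.05, 0.05, 0.15]])

def cond_shannon(p_xy):
    """H(X|Y) = -sum_{x,y} p(x,y) log p(x|y) for a joint pmf with p(y) > 0."""
    p_y = p_xy.sum(axis=0)
    return -np.sum(p_xy * np.log(p_xy / p_y))

def cond_perror(p_xy):
    """P_e(X|Y) = 1 - sum_y max_x p(x, y) (MAP decision for each y)."""
    return 1.0 - np.sum(p_xy.max(axis=0))

# Z = g(Y) merges the last two values of Y (a deterministic post-processing),
# so X - Y - Z is a Markov chain.
p_xz = np.stack([p_xy[:, 0], p_xy[:, 1] + p_xy[:, 2]], axis=1)

p_x = p_xy.sum(axis=1)
H_x = -np.sum(p_x * np.log(p_x))

assert cond_shannon(p_xy) <= cond_shannon(p_xz) <= H_x
assert cond_perror(p_xy) <= cond_perror(p_xz) <= 1.0 - p_x.max()
```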

4. S-Concavity: Mixing Increases Randomness and Data Processing

Another type of mixing (different from the one described in the preceding section) is also useful in certain physical science considerations. It can be described as a sequence of elementary mixing operations as follows. Suppose that one only modifies two probability values $p_i = p(x_i)$ and $p_j = p(x_j)$ for $i \neq j$. Since the result should again be a probability distribution, the sum $p_i + p_j$ should be kept constant. Then there are two possibilities:
  • $|p_i - p_j|$ decreases; the resulting distribution is “smoother”, “more spread out”, “more disordered”; the resulting operation can be written as $(p_i, p_j) \to (\lambda p_i + \bar\lambda p_j, \lambda p_j + \bar\lambda p_i)$ where $0 \leq \lambda = 1 - \bar\lambda \leq 1$, also known as a “transfer” operation. We call it an elementary mixing operation, or M-transformation in short.
  • $|p_i - p_j|$ increases; this is the reverse operation, an elementary unmixing operation, or U-transformation in short.
We say that a quantity is s-concave if it increases by any M-transformation (equivalently, decreases by any U-transformation). Note that any increasing function F of an s-concave function is again s-concave.
This notion coincides with that of Schur-concavity from majorization theory [19]. In fact, we say that p is majorized by q, and write $p \preceq q$, if p is obtained from q by a (finite) sequence of elementary M-transformations or, what amounts to the same, that q majorizes p, that is, q is obtained from p by a (finite) sequence of elementary U-transformations. A well-known result ([19], p. 34) states that $p \preceq q$ if and only if
$$P_{(k)} \leq Q_{(k)} \qquad (0 < k < M) \tag{38}$$
(see definition (2)), where always $P_{(M)} = Q_{(M)} = 1$.
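A sketch of the cumulative-sum test (38) (illustrative; the helper name and numerical tolerance are mine), checked on the delta, uniform, and an intermediate distribution:

```python
# Majorization test via cumulative sums of the sorted distributions.
import numpy as np

def majorized_by(p, q):
    """Return True if p is majorized by q (p obtainable from q by mixing)."""
    Pk = np.cumsum(np.sort(p)[::-1])
    Qk = np.cumsum(np.sort(q)[::-1])
    return bool(np.all(Pk <= Qk + 1e-12))    # P_(M) = Q_(M) = 1 automatically

uniform = np.full(4, 0.25)
p = np.array([0.5, 0.25, 0.15, 0.10])
delta = np.array([1.0, 0.0, 0.0, 0.0])

print(majorized_by(uniform, p), majorized_by(p, delta))   # True True
print(majorized_by(delta, p))                             # False
```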
From the above definitions it is immediate to see that all previously considered quantities H, $H_\alpha$, G, $G_\rho$, $P_e$, R are s-concave: mixing increases uncertainty, guessing, error, and randomness, that is, $p \preceq q$ implies
$$H_\alpha(p) \geq H_\alpha(q) \tag{39}$$
$$G_\rho(p) \geq G_\rho(q) \tag{40}$$
$$P_e(p) \geq P_e(q) \tag{41}$$
$$R(p) \geq R(q). \tag{42}$$
For $H_\alpha$ and R this can be easily seen from the fact that these quantities can be written as (an increasing function of) a quantity of the form $\sum_x \phi(p(x))$ where $\phi$ is concave. Then the effect of an M-transformation $(p_i, p_j) \to (\lambda p_i + \bar\lambda p_j, \lambda p_j + \bar\lambda p_i)$ gives $\phi(\lambda p_i + \bar\lambda p_j) + \phi(\lambda p_j + \bar\lambda p_i) \geq \lambda \phi(p_i) + \bar\lambda \phi(p_j) + \lambda \phi(p_j) + \bar\lambda \phi(p_i) = \phi(p_i) + \phi(p_j)$. For $P_e$ it is obvious, and for G and $G_\rho$ it is also easily proved using characterization (38) and summation by parts [17].
Another kind of (functional or deterministic) data processing inequality can be obtained from (39)–(42) as a particular case. For any deterministic function f,
$$H_\alpha(f(X)) \leq H_\alpha(X) \tag{43}$$
$$G(f(X)) \leq G(X) \tag{44}$$
$$P_e(f(X)) \leq P_e(X) \tag{45}$$
$$R(f(X)) \leq R(X). \tag{46}$$
Thus deterministic processing (by f) decreases (cannot increase) uncertainty, can only make guessing or detection easier, and decreases randomness. For $\alpha = 1$ the inequality $H(f(X)) \leq H(X)$ can also be seen from the data processing inequality of the preceding section by noting that $H(f(X)) = I(f(X); f(X)) \leq I(X; f(X)) \leq H(X)$ (since X, $f(X)$, $f(X)$ trivially form a Markov chain).
To prove (43)–(46) in general, consider preimages by f of values of $y = f(x)$; it is enough to show that each of the quantities $H_\alpha$, $P_e$, G, or R decreases by the elementary operation consisting of merging two distinct values $x_i$, $x_j$ of x into the same preimage of y. However, for probability distributions, this operation amounts to the U-transformation $(p_i, p_j) \to (p_i + p_j, 0)$ and the result follows by s-concavity.
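A quick numerical illustration of this argument (not from the paper; the distribution and the merging map are arbitrary): merging two values of X, i.e., applying the extreme U-transformation $(p_i, p_j) \to (p_i + p_j, 0)$, cannot increase H, G, R or $P_e$.

```python
# Deterministic processing f(X): merging two values decreases all criteria.
import numpy as np

def shannon(p):
    p = p[p > 0]; return -np.sum(p * np.log(p))
def guesswork(p):
    q = np.sort(p)[::-1]; return np.sum(np.arange(1, len(q) + 1) * q)
def randomness(p):
    return 1.0 - 0.5 * np.sum(np.abs(p - 1.0 / len(p)))

p = np.array([0.4, 0.3, 0.2, 0.1])
# f maps x3 and x4 to the same output value; the pmf of f(X) is padded with a 0
# so that all quantities are computed on the same alphabet size M = 4.
p_f = np.array([0.4, 0.3, 0.3, 0.0])

assert shannon(p_f) <= shannon(p)
assert guesswork(p_f) <= guesswork(p)
assert (1 - p_f.max()) <= (1 - p.max())
assert randomness(p_f) <= randomness(p)
```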
An equivalent property of (43)–(46) is the fact that any additional random variable Y increases uncertainty, probability of error, guessing, and randomness in the sense that
$$H_\alpha(X) \leq H_\alpha(X, Y) \tag{47}$$
$$G(X) \leq G(X, Y) \tag{48}$$
$$P_e(X) \leq P_e(X, Y) \tag{49}$$
$$R(X) \leq R(X, Y). \tag{50}$$
This is a particular case of (43)–(46) applied to the joint pair (X, Y) and the first projection $f(x, y) = x$. Conversely, (43)–(46) follow from (47)–(50) by applying the latter to $(f(X), X)$ in place of (X, Y) and noting that the distribution of $(f(X), X)$ is essentially that of X.

5. Optimal Fano-Type and Pinsker-Type Bounds

We have seen that informational quantities such as the entropies H, $H_\alpha$ and guessing entropies G, $G_\rho$ on one hand, and statistical quantities such as the probability of error of MAP detection $P_e$ and the statistical randomness R on the other hand, satisfy many common properties: they decrease with knowledge and data processing, increase by mixing, etc. For this reason, it is desirable to establish the best possible bounds between one informational quantity (such as $H_\alpha$ or $G_\rho$) and one statistical quantity ($P_e$ or $R = 1 - \Delta(p, u)$).
To achieve this, we remark that for any distribution p, we have the following majorizations. For fixed $P_e = 1 - P_s$:
$$\Big(P_s, \frac{P_e}{M-1}, \ldots, \frac{P_e}{M-1}\Big) \preceq p \preceq \big(\underbrace{P_s, \ldots, P_s}_{K \text{ times}}, 1 - K P_s, 0, \ldots, 0\big) \tag{51}$$
where (necessarily) $K = \lfloor 1/P_s \rfloor$, and for fixed $R = 1 - \Delta$:
$$\Big(\underbrace{\tfrac{1}{M} + \tfrac{\Delta}{K}, \ldots, \tfrac{1}{M} + \tfrac{\Delta}{K}}_{K \text{ times}}, \underbrace{\tfrac{1}{M} - \tfrac{\Delta}{M-K}, \ldots, \tfrac{1}{M} - \tfrac{\Delta}{M-K}}_{M-K \text{ times}}\Big) \preceq p \preceq \big(\Delta + \tfrac{1}{M}, \underbrace{\tfrac{1}{M}, \ldots, \tfrac{1}{M}}_{L-1 \text{ times}}, R - \tfrac{L}{M}, 0, \ldots, 0\big) \tag{52}$$
where $K = |\{x \mid p(x) \geq \frac{1}{M}\}|$ as in (27) and (necessarily) $L = \lfloor M R \rfloor$ (K can possibly be any integer between 1 and L). These majorizations are easily established using characterizations (12), (27) and (38).
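The following sketch (my own construction; the variable names and the randomly drawn example are assumptions) builds the four extremal distributions of (51) and (52) for a given p and verifies the two majorization sandwiches with the cumulative-sum test (38).

```python
# Extremal distributions for fixed P_e and for fixed total variation Delta.
import numpy as np

def majorized_by(p, q):
    return np.all(np.cumsum(np.sort(p)[::-1]) <= np.cumsum(np.sort(q)[::-1]) + 1e-9)

rng = np.random.default_rng(0)
M = 6
p = rng.dirichlet(np.ones(M))            # a.s. neither uniform nor deterministic

# --- fixed error probability P_e = 1 - P_s (majorization (51)) ---
P_s = p.max()
P_e = 1.0 - P_s
low_pe = np.r_[P_s, np.full(M - 1, P_e / (M - 1))]
K = int(np.floor(1.0 / P_s))
high_pe = np.r_[np.full(K, P_s), 1.0 - K * P_s, np.zeros(M - K - 1)]
assert majorized_by(low_pe, p) and majorized_by(p, high_pe)

# --- fixed total variation Delta to the uniform (majorization (52)) ---
Delta = 0.5 * np.sum(np.abs(p - 1.0 / M))
R = 1.0 - Delta
K = int(np.sum(p >= 1.0 / M))
low_tv = np.r_[np.full(K, 1/M + Delta/K), np.full(M - K, 1/M - Delta/(M - K))]
L = int(np.floor(M * R))
high_tv = np.r_[Delta + 1/M, np.full(L - 1, 1/M), R - L/M, np.zeros(M - L - 1)]
assert majorized_by(low_tv, p) and majorized_by(p, high_tv)
```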
Applying s-concavity of the entropies $H_\alpha$ or $G_\rho$ to (51) gives closed-form upper bounds of entropies as a function of $P_e$, known as Fano inequalities; and closed-form lower bounds, known as reverse Fano inequalities. Figure 1 shows some optimal regions.
The original Fano inequality was an upper bound on the conditional entropy $H(X|Y)$ as a function of $P_e(X|Y)$. It can be shown that the upper bounds are unchanged in the conditional case. Lower bounds on conditional entropies or $\alpha$-entropies, however, have to be slightly modified, due to the averaging operation inside the function F (see Section 3 above), by taking the (piecewise linear) convex envelope of the lower curve on $F^{-1}(H_\alpha)$. In this way, one easily recovers the results of [20] for H, [11] for $H_\alpha$, and [14,17] for G and $G_\rho$.
Likewise, applying s-concavity of the entropies $H_\alpha$ or $G_\rho$ to (52) gives closed-form upper bounds of entropies as a function of R, similar to Pinsker inequalities; and closed-form lower bounds, similar to reverse Pinsker inequalities. Figure 2 shows some optimal regions.
The various Pinsker and reverse Pinsker inequalities that can be found in the literature give bounds between $\Delta(p, q)$ and $D(p \| q)$ for general q. Such inequalities find application in quantum physics [21] and in deriving lower bounds on the minimax risk in nonparametric estimation [22]. As they are of more general applicability, they turn out not to be optimal here, since we have optimized the bounds in the particular case $q = u$. Using our method, one again easily recovers previous results of [23] (and [24], Theorem 26) for H, and improves previous inequalities used for several applications [3,4,6].

6. Conclusions

Using a simple method based on “mixing” or majorization, we have established optimal (Fano-type and Pinsker-type) bounds between entropic quantities ($H_\alpha$, $G_\rho$) and statistical quantities ($P_e$, R) in an interplay between information theory and statistics. As a perspective, a similar methodology could be developed for the statistical distance to an arbitrary (not necessarily uniform) distribution.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Maurer, U.M. A Universal Statistical Test for Random Bit Generators. J. Cryptol. 1992, 5, 89–105.
  2. Pliam, J.O. Guesswork and Variation Distance as Measures of Cipher Security. In SAC 1999: Selected Areas in Cryptography, Proceedings of the International Workshop on Selected Areas in Cryptography, Kingston, ON, Canada, 9–10 August 1999; Heys, H., Adams, C., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1999; Volume 1758, pp. 62–77.
  3. Chevalier, C.; Fouque, P.A.; Pointcheval, D.; Zimmer, S. Optimal Randomness Extraction from a Diffie-Hellman Element. In Advances in Cryptology—EUROCRYPT 2009, Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques, Cologne, Germany, 26–30 April 2009; Joux, A., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5479, pp. 572–589.
  4. Shoup, V. A Computational Introduction to Number Theory and Algebra, 2nd ed.; Cambridge University Press: Cambridge, UK, 2009.
  5. Schaub, A.; Boutros, J.J.; Rioul, O. Entropy Estimation of Physically Unclonable Functions via Chow Parameters. In Proceedings of the 57th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 24–27 September 2019.
  6. Killmann, W.; Schindler, W. A Proposal for Functionality Classes for Random Number Generators. Ver. 2.0, Anwendungshinweise und Interpretationen zum Schema (AIS) 31 of the Bundesamt für Sicherheit in der Informationstechnik. 2011. Available online: https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Certification/Interpretations/AIS_31_Functionality_classes_for_random_number_generators_e.pdf?__blob=publicationFile&v=4 (accessed on 11 March 2021).
  7. Fehr, S.; Berens, S. On the conditional Rényi entropy. IEEE Trans. Inf. Theory 2014, 60, 6801–6810.
  8. Arimoto, S. Information measures and capacity of order α for discrete memoryless channels. In Topics in Information Theory; Csiszár, I., Elias, P., Eds.; Colloquium Mathematica Societatis János Bolyai, 2nd ed.; North Holland: Amsterdam, The Netherlands, 1977; Volume 16, pp. 41–52.
  9. Liu, Y.; Cheng, W.; Guilley, S.; Rioul, O. On conditional alpha-information and its application in side-channel analysis. In Proceedings of the 2021 IEEE Information Theory Workshop (ITW2021), Online, 17–21 October 2021.
  10. Rioul, O. Variations on a theme by Massey. IEEE Trans. Inf. Theory 2022, 68, 2813–2828.
  11. Sason, I.; Verdú, S. Arimoto–Rényi Conditional Entropy and Bayesian M-Ary Hypothesis Testing. IEEE Trans. Inf. Theory 2018, 64, 4–25.
  12. Massey, J.L. Guessing and entropy. In Proceedings of the IEEE International Symposium on Information Theory, Trondheim, Norway, 27 June–1 July 1994; p. 204.
  13. Arikan, E. An inequality on guessing and its application to sequential decoding. IEEE Trans. Inf. Theory 1996, 42, 99–105.
  14. Sason, I.; Verdú, S. Improved Bounds on Lossless Source Coding and Guessing Moments via Rényi Measures. IEEE Trans. Inf. Theory 2018, 64, 4323–4346.
  15. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006.
  16. van Erven, T.; Harremoës, P. Rényi divergence and Kullback-Leibler divergence. IEEE Trans. Inf. Theory 2014, 60, 3797–3820.
  17. Béguinot, J.; Cheng, W.; Guilley, S.; Rioul, O. Be my guess: Guessing entropy vs. success rate for evaluating side-channel attacks of secure chips. In Proceedings of the 25th Euromicro Conference on Digital System Design (DSD 2022), Maspalomas, Gran Canaria, Spain, 31 August–2 September 2022.
  18. Rioul, O. A primer on alpha-information theory with application to leakage in secrecy systems. In Geometric Science of Information, Proceedings of the 5th Conference on Geometric Science of Information (GSI’21), Paris, France, 21–23 July 2021; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2021; Volume 12829, pp. 459–467.
  19. Marshall, A.W.; Olkin, I.; Arnold, B.C. Inequalities: Theory of Majorization and Its Applications, 2nd ed.; Springer Series in Statistics; Springer: Berlin/Heidelberg, Germany, 2011.
  20. Ho, S.W.; Verdú, S. On the Interplay Between Conditional Entropy and Error Probability. IEEE Trans. Inf. Theory 2010, 56, 5930–5942.
  21. Audenaert, K.M.R.; Eisert, J. Continuity Bounds on the Quantum Relative Entropy—II. J. Math. Phys. 2011, 52, 7.
  22. Tsybakov, A.B. Introduction to Nonparametric Estimation; Springer Series in Statistics; Springer: Berlin/Heidelberg, Germany, 2009.
  23. Ho, S.W.; Yeung, R.W. The Interplay Between Entropy and Variational Distance. IEEE Trans. Inf. Theory 2010, 56, 5906–5929.
  24. Sason, I.; Verdú, S. f-Divergence Inequalities. IEEE Trans. Inf. Theory 2016, 62, 5973–6006.
Figure 1. Optimal regions: Entropies (in bits) vs. error probability. Top row M = 4; bottom row M = 32.
Figure 2. Optimal regions: Entropies (in bits) vs. randomness R. Top row M = 4; bottom row M = 32.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
