Article

Comparing Security Notions of Secret Sharing Schemes

Department of Electronic Engineering, Xiamen University, Xiamen 361005, China
*
Author to whom correspondence should be addressed.
Entropy 2015, 17(3), 1135-1145; https://doi.org/10.3390/e17031135
Submission received: 15 January 2015 / Revised: 25 February 2015 / Accepted: 3 March 2015 / Published: 10 March 2015

Abstract
Different security notions of secret sharing schemes have been proposed based on different information measures. Entropies, such as Shannon entropy and min-entropy, are frequently used to formulate security notions for secret sharing schemes. Unlike the entropies, Kolmogorov complexity has also been defined and used to study the security of individual instances of secret sharing schemes. This paper is concerned with security notions for secret sharing schemes defined by various measures, including Shannon entropy, guessing probability, min-entropy and Kolmogorov complexity.

1. Introduction

A secret sharing scheme [1,2] is a protocol to share a secret among participants such that only specified subsets of participants can recover the secret. In considering the security notions of secret sharing schemes, some authors have introduced concepts of security for secret sharing schemes based on different information measures [3–7]. These include four very important information measures: Shannon entropy, min-entropy, Rényi entropy and Kolmogorov complexity. Shannon entropy is the most widely used information measure, and has been used to prove bounds on the share size and on the information rate in secret sharing schemes [3–5]. Recently, min-entropy and Rényi entropy have also been used in the study of the security of secret sharing schemes [6,7].
Kolmogorov complexity K(x) [8–10], known as algorithmic information theory [11,12], measures the quantity of information in a single string x by the size of the smallest program that generates it. It is well known that Kolmogorov complexity and entropy are different but related measures [13–15]. Measuring security by Kolmogorov complexity offers some new security criteria. Antunes et al. [16] gave a notion of individual security for cryptographic systems using Kolmogorov complexity. Kaced [17] defined a normalized version of individual security for secret sharing schemes.
However, these information measures are different. This means a scheme may be secure under one information measure but not under another [18]. Recently, several relations between security notions in cryptography have been studied. Iwamoto et al. [6] and Jiang [18] studied relations between security notions for symmetric-key cryptography. In this paper, we are interested in relations between security notions for secret sharing schemes. Antunes et al. [16] and Kaced [17] also studied relations between security notions for secret sharing schemes; however, their studies concern only the notions based on Shannon entropy and Kolmogorov complexity. We study relations between different security notions for secret sharing schemes under various information measures, including Shannon entropy, guessing probability, min-entropy and Kolmogorov complexity.
This paper is organized as follows: In Section 2, we review some definitions of entropy measures, Kolmogorov complexity and secret sharing schemes. In Section 3, we propose several entropy-based security notions and study their relations. In Section 4, security notions for secret sharing schemes are given using Kolmogorov complexity; these are then compared with the entropy-based notions in Section 5. Conclusions are presented in Section 6.

2. Preliminaries

In this paper, a string means a finite binary string in Σ* := {0, 1}*. |x| denotes the length of a string x; for the cardinality of a set A, we also write |A|. The function log denotes the base-2 logarithm log2, and ln(·) denotes the logarithm with natural base e = 2.71828….
Let [n] := {1, 2, …, n} be a finite set of IDs of n users. For every i ∈ [n], let V_i be a finite set of shares of user i. Similarly, let S be a finite set of secret information. In the following, for any subset U := {i_1, i_2, …, i_u} ⊂ [n], we use the notation v_U := {v_{i_1}, v_{i_2}, …, v_{i_u}} and V_U := {V_{i_1}, V_{i_2}, …, V_{i_u}}.

2.1. Entropy

Let X and Y be two finite sets, and let X and Y be two random variables over X and Y, respectively. The probability that X takes the value x from a finite or countably infinite set X is denoted by p_X(x); the joint probability that both x and y occur by p_{XY}(x, y); and the conditional probability that x occurs given that y has occurred by p_{X|Y}(x|y). For convenience, p_X(x), p_{XY}(x, y) and p_{X|Y}(x|y) are denoted by p(x), p(x, y) and p(x|y), respectively. Two random variables X and Y are independent if and only if p(x, y) = p(x)p(y) for all x ∈ X and y ∈ Y.
The Shannon entropy [19] of a random variable X, defined by H(X) = −∑_{x∈X} p(x) log p(x), is a measure of its average uncertainty. The conditional Shannon entropy of X given Y is defined as
H(X|Y) = ∑_{y∈Y} p(y) H(X|Y = y).
The mutual information between X and Y is
I(X; Y) = H(X) − H(X|Y).
The guessing probability [20] of X, defined by G(X) = max_{x∈X} p(x), is the success probability of correctly guessing the value of a realization of X under the best guessing strategy (guessing the most probable value of the range). The conditional guessing probability of X given Y is defined as
G(X|Y) = ∑_{y∈Y} p(y) max_{x∈X} p(x|y).
Min-entropy [6,18,20] measures the chance of successfully guessing X, i.e.,
H_∞(X) = −log G(X) = −log max_{x∈X} p(x).
It can also be viewed as the worst-case entropy, whereas Shannon entropy is an average entropy. The conditional min-entropy of X given Y is defined as
H_∞(X|Y) = −log G(X|Y) = −log(∑_{y∈Y} p(y) max_{x∈X} p(x|y)).
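To make these definitions concrete, the following is a small computational sketch of the three measures for a finite distribution (the example distribution is an arbitrary choice, not from the paper):

```python
import math

def shannon_entropy(p):
    """H(X) = -sum_x p(x) log2 p(x): average uncertainty in bits."""
    return -sum(px * math.log2(px) for px in p.values() if px > 0)

def guessing_probability(p):
    """G(X) = max_x p(x): success probability of the best single guess."""
    return max(p.values())

def min_entropy(p):
    """H_inf(X) = -log2 G(X): worst-case entropy."""
    return -math.log2(guessing_probability(p))

# A biased 2-bit source: Shannon entropy is an average, min-entropy is worst case.
p = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
print(shannon_entropy(p))       # 1.75
print(guessing_probability(p))  # 0.5
print(min_entropy(p))           # 1.0
```

Note that H(X) ≥ H_∞(X) here (1.75 ≥ 1.0), consistent with min-entropy being the worst-case measure.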

2.2. Kolmogorov Complexity

In this subsection, some definitions and basic properties of Kolmogorov complexity are recalled. We use the prefix-free definition of Kolmogorov complexity. A set of strings A is prefix-free if there are no two strings x and y in A such that x is a proper prefix of y. For more details and attributions, we refer to [11,12].
The conditional Kolmogorov complexity K(y|x) of y given x, with respect to a universal prefix-free Turing machine U, is defined by
K_U(y|x) = min{|p| : U(p, x) = y}.
Let U be a universal prefix-free computer. Then, for any other computer F,
K_U(y|x) ≤ K_F(y|x) + c_F
for all x, y, where c_F depends on F but not on x, y. The (unconditional) Kolmogorov complexity K_U(y) of y is defined as K_U(y|Λ), where Λ is the empty string. For convenience, K_U(y|x) and K_U(y) are denoted by K(y|x) and K(y), respectively.
The mutual algorithmic information between x and y is the quantity
I(x : y) = K(x) − K(x|y).
We consider x and y to be algorithmically independent whenever I(x : y) is zero.
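K(x) itself is uncomputable, but the compressed length of x is a computable upper bound in the same spirit and is often used as a practical stand-in. A minimal sketch, with zlib as the (arbitrary) compressor and illustrative inputs:

```python
import os
import zlib

def k_proxy(x: bytes) -> int:
    """Computable upper-bound proxy for K(x): compressed length in bits.
    This is only a stand-in; true Kolmogorov complexity is uncomputable."""
    return 8 * len(zlib.compress(x, 9))

structured = b"01" * 4096       # highly regular: a short program describes it
random_like = os.urandom(8192)  # incompressible with overwhelming probability

# The regular string has a far smaller proxy complexity than the random one.
assert k_proxy(structured) < k_proxy(random_like)
```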

2.3. Secret Sharing Schemes

Next, we recall secret sharing schemes for general access structures. For more details, refer to [1,3,7,21,22].
Each set of shares is classified as either a qualified set or a forbidden set. A qualified set is a set of shares that can recover the secret. Let Q ⊆ 2^[n] and F ⊆ 2^[n] be the families of qualified and forbidden sets, respectively. Then Γ := (Q, F) is called an access structure. An access structure is monotone if, for all Q ∈ Q, every Q′ with Q ⊆ Q′ satisfies Q′ ∈ Q; and, for all F ∈ F, every F′ ⊆ F satisfies F′ ∈ F.
In particular, Γ is called a (t, n)-threshold access structure if Q := {Q : |Q| ≥ t} and F := {F : |F| ≤ t − 1}. In this paper, the access structure is a partition of 2^[n], namely, Q ∪ F = 2^[n] and Q ∩ F = ∅.
Let ∏ = (S, V_[n], ∏share, ∏comb) be a secret sharing scheme for an access structure Γ, defined as follows:
  • S is the set of secret information;
  • V_[n] is the set of shares for all users;
  • ∏share is an algorithm for generating shares for all users. It takes a secret s ∈ S as input and outputs (v_1, v_2, …, v_n) ∈ V_[n];
  • ∏comb is an algorithm for recovering a secret. It takes a set of shares v_Q, Q ∈ Q, as input and outputs a secret s ∈ S.
In this paper, we assume that ∏ satisfies perfect correctness: for any secret s ∈ S and all shares ∏share(s) = (v_1, v_2, …, v_n), it holds that ∏comb(v_Q) = s for any subset Q ∈ Q.
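As an illustration of ∏share and ∏comb, here is a minimal sketch of the (n, n)-threshold case, where the only qualified set is all n participants (the XOR construction; function and variable names are ours, not the paper's):

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def pi_share(s: bytes, n: int) -> list[bytes]:
    """Pi_share: v_1..v_{n-1} uniformly random, v_n = s ^ v_1 ^ ... ^ v_{n-1}."""
    shares = [secrets.token_bytes(len(s)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, s))
    return shares

def pi_comb(shares: list[bytes]) -> bytes:
    """Pi_comb for the qualified set [n]: XOR all shares to recover s."""
    return reduce(xor, shares)

s = b"top secret"
vs = pi_share(s, 5)
assert pi_comb(vs) == s  # perfect correctness
```

Any n − 1 shares are jointly uniform and independent of s, so every forbidden set learns nothing about the secret.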

3. Information Theoretic Security of Secret Sharing Schemes

In this section, we first give the security notions of information theoretic security for secret sharing schemes based on Shannon entropy, guessing probability and min entropy, respectively, and then we discuss the relations between these security notions.
Definition 1. Let ∏ be a secret sharing scheme for an access structure Γ. We say ∏ is
  • ε-Shannon secure, if I(S; V_F) ≤ ε;
  • ε-guess secure, if G(S|V_F) − G(S) ≤ ε;
  • ε-min secure, if H_∞(S) − H_∞(S|V_F) ≤ ε
for any forbidden set F ∈ F.
Now, we discuss the relations between the above three security notions for secret sharing schemes. The following relations are important for the present paper.
Lemma 1. [11,18,20] Let X and Y be two random variables over X and Y, respectively. Then
  • G(X|Y) ≥ G(X).
  • H(X|Y) ≤ H(X).
  • H_∞(X|Y) ≤ H_∞(X).
  • I(X; Y) ≥ (2/ln 2)[G(X|Y) − G(X)]².
  • |H_∞(X) − H_∞(X|Y)| ≥ (1/ln 2)|G(X) − G(X|Y)|.
  • H_∞(X) − H_∞(X|Y) ≥ I(X; Y), where X is uniformly random over X.
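These inequalities can be sanity-checked numerically; the sketch below evaluates them for one small joint distribution (the 2 × 2 distribution is an arbitrary choice):

```python
import math

# Joint distribution p(x, y) on {0,1} x {0,1}.
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

px = {x: p[(x, 0)] + p[(x, 1)] for x in (0, 1)}
py = {y: p[(0, y)] + p[(1, y)] for y in (0, 1)}

def H(q):  # Shannon entropy in bits
    return -sum(v * math.log2(v) for v in q.values() if v > 0)

HX = H(px)
HXgY = sum(py[y] * H({x: p[(x, y)] / py[y] for x in (0, 1)}) for y in (0, 1))
I = HX - HXgY                   # I(X; Y)
GX = max(px.values())           # G(X)
GXgY = sum(max(p[(x, y)] for x in (0, 1)) for y in (0, 1))  # G(X|Y)

assert GXgY >= GX                                         # G(X|Y) >= G(X)
assert I >= (2 / math.log(2)) * (GXgY - GX) ** 2          # Pinsker-type bound
assert math.log2(GXgY / GX) >= (GXgY - GX) / math.log(2)  # min-entropy gap bound
```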
From the above lemma, several relations between security notions for symmetric-key cryptography were derived in [18]. Similarly, from the above lemma, we obtain the following.
Theorem 1. Let ∏ be a secret sharing scheme for an access structure Γ.
  • If ∏ is ε-Shannon secure, then it is √((1/2)ε ln 2)-guess secure.
  • If ∏ is ε-min secure, then it is (ε ln 2)-guess secure.
  • If ∏ is ε-min secure and S is uniformly random over S, then ∏ is ε-Shannon secure.
From this result, we see that, for a secret sharing scheme, ε-Shannon security and ε-min security are both stronger than ε-guess security. If S is uniformly random, then, for a secret sharing scheme, ε-min security is also stronger than ε-Shannon security.
In the following, using a modified threshold secret sharing scheme, we show that ε-guess security does not imply ε-Shannon security.
Example 1. Let s and v_1, v_2, …, v_n be binary strings of the same length k. Assume that s, v_1, …, v_{n−1} are independent and that v_1, …, v_{n−1} are uniformly random. We generate v_n by v_n = s ⊕ v_1 ⊕ v_2 ⊕ … ⊕ v_{n−1}, where ⊕ denotes the exclusive-OR operation. This scheme is an (n, n)-threshold secret sharing scheme, called the Karnin–Greene–Hellman scheme [4].
We modify it for n = 3 as follows. Let S = {0, 1}^k, V_1 = {0, 1}^{k−1}, V_2 = {0, 1}^k and V_3 = {0, 1}^{k−1}. S is uniformly random over S, and (V_1, V_2) is uniformly random over {0, 1}^{k−1} × {0, 1}^k. To share s = s′|s″, with s′ ∈ {0, 1}^{k−1} and s″ ∈ {0, 1}, write v_2 = v_2′|v_2″, where v_2′ ∈ {0, 1}^{k−1}, v_2″ ∈ {0, 1}, and s, v_1, v_2′ are independent. Let v_2″ = s″ and v_3 = s′ ⊕ v_1 ⊕ v_2′. The recovery algorithm outputs s = s′|s″, where s′ = v_3 ⊕ v_1 ⊕ v_2′ and s″ = v_2″. This is a (3, 3)-threshold secret sharing scheme. It is easy to see that G(S|V_2, V_3) = G(S|V_1, V_2) = 2^{−(k−1)} and G(S|V_1, V_3) = 2^{−k}, and hence |G(S|V_i, V_j) − G(S)| ≤ 2^{−k} for 1 ≤ i < j ≤ 3. However, I(S; (V_2, V_3)) = H(S) − H(S|V_2, V_3) = k − (k − 1) = 1.
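The gap in Example 1 can be verified by brute-force enumeration for a small k; the sketch below builds the joint distribution of (s, (v_2, v_3)) for the modified (3, 3) scheme and computes both measures (k = 3 is an arbitrary small choice):

```python
import math
from collections import defaultdict
from itertools import product

k = 3  # secret length; s = s'|s'' with s' in {0,1}^(k-1), s'' in {0,1}

joint = defaultdict(float)  # p(s, (v2, v3))
w = (2 ** -k) * (2 ** -(k - 1)) * (2 ** -(k - 1))  # p(s) p(v1) p(v2')
for s_hi, s_lo, v1, v2_hi in product(range(2 ** (k - 1)), range(2),
                                     range(2 ** (k - 1)), range(2 ** (k - 1))):
    v2 = (v2_hi, s_lo)      # v2'' = s'': the second share leaks the last secret bit
    v3 = s_hi ^ v1 ^ v2_hi  # v3 = s' xor v1 xor v2'
    joint[((s_hi, s_lo), (v2, v3))] += w

p_v = defaultdict(float)
for (s, v), pr in joint.items():
    p_v[v] += pr

# Guessing probability G(S | V2, V3) = sum_v max_s p(s, v): stays small.
best = defaultdict(float)
for (s, v), pr in joint.items():
    best[v] = max(best[v], pr)
G_cond = sum(best.values())

# Shannon leakage I(S; (V2, V3)) = H(S) - H(S | V2, V3): a full bit.
H_cond = -sum(pr * math.log2(pr / p_v[v]) for (s, v), pr in joint.items())
I = k - H_cond

print(G_cond)  # 2^-(k-1) = 0.25: the guess-security gap is only 2^-k
print(I)       # 1.0: one bit of Shannon leakage
```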
Next, we discuss the relationship between these security notions when ε = 0.
Theorem 2. If a secret sharing scheme is 0-Shannon secure, then it is 0-min secure. Moreover, if S is uniformly random over S, then, for a secret sharing scheme, 0-min security, 0-guess security and 0-Shannon security are all equivalent.
However, 0-min security does not imply 0-Shannon security, as the following example shows.
Example 2. [18] Let S = V_1 = V_2 = {0, …, k − 1} for k ≥ 4, with p_{V_1}(1) = … = p_{V_1}(k − 1) = 1/(k + 1) and p_{V_1}(0) = 2/(k + 1). Let p_S(1) = … = p_S(k − 1) = 1/(2k − 2) and p_S(0) = 1/2, with s and v_1 independent. We generate v_2 by v_2 = v_1 + s (mod k). This is a (2, 2)-threshold secret sharing scheme. Since max_{s∈S} p_S(s) = 1/2, we have H_∞(S) = 1. By p_{S|V_2}(s|v_2) = p_S(s) p_{V_2|S}(v_2|s)/p_{V_2}(v_2), we have p_{S|V_2}(0|v_2) ≥ 1/((2k + 2) p_{V_2}(v_2)), while p_{S|V_2}(s|v_2) ≤ 1/((k² − 1) p_{V_2}(v_2)) for s ≠ 0. As k ≥ 4, for any v_2 we have p_{S|V_2}(0|v_2) > p_{S|V_2}(s|v_2) for s ≠ 0, so H_∞(S|V_2) = 1. Since s and v_1 are independent, H_∞(S|V_1) = H_∞(S) = 1. So this scheme is 0-min secure, but it is not 0-Shannon secure.
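Example 2 can likewise be checked by direct enumeration; the sketch below computes the min-entropy gap and the Shannon leakage for k = 4:

```python
import math
from collections import defaultdict

k = 4
p_v1 = {0: 2 / (k + 1), **{i: 1 / (k + 1) for i in range(1, k)}}
p_s = {0: 0.5, **{i: 1 / (2 * k - 2) for i in range(1, k)}}

joint = defaultdict(float)  # p(s, v2) with v2 = v1 + s mod k
for s, ps in p_s.items():
    for v1, pv in p_v1.items():
        joint[(s, (v1 + s) % k)] += ps * pv

p_v2 = defaultdict(float)
for (s, v2), pr in joint.items():
    p_v2[v2] += pr

# Min-entropy gap H_inf(S) - H_inf(S|V2): zero, since s = 0 remains the
# most likely secret for every observed v2.
G_cond = sum(max(joint[(s, v2)] for s in range(k)) for v2 in range(k))
min_gap = -math.log2(max(p_s.values())) + math.log2(G_cond)

# Shannon leakage I(S; V2): strictly positive, so not 0-Shannon secure.
I = sum(pr * math.log2(pr / (p_s[s] * p_v2[v2])) for (s, v2), pr in joint.items())

assert abs(min_gap) < 1e-9  # 0-min secure
assert I > 0.01             # but Shannon leakage is strictly positive
```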
Some implications do not hold in general, but do hold when S is uniformly distributed. From the above results, if S is uniformly random over S, then, for a secret sharing scheme, ε-min security is stronger than ε-Shannon security, ε-Shannon security is stronger than ε-guess security, and these three security notions coincide when ε = 0.

4. Individual Security of Secret Sharing Schemes

In this section, we first give notions of individual security for secret sharing schemes based on Kolmogorov complexity, and then consider the size of the shares under these new security notions.
Definition 2. Let ∏ be a secret sharing scheme for an access structure Γ. An instance (s, v_1, v_2, …, v_n) is
  • Kolmogorov ε-secure, if for any forbidden set F ∈ F it satisfies
    I(s; v_F) ≤ ε, i.e., K(s) − K(s|v_F) ≤ ε;
  • normalized Kolmogorov ε-secure, if for any forbidden set F ∈ F it satisfies
    I(s; v_F) ≤ εK(s), i.e., K(s) − K(s|v_F) ≤ εK(s).
In the notion of Kolmogorov ε-security, the security parameter ε of an instance is the amount of information leakage: the maximal value of I(s; v_F) over all forbidden sets F ∈ F. However, 50 leaked bits is large for a 100-bit secret but small for a 1000-bit secret. For this reason we also give the notion of normalized Kolmogorov ε-security, in which the parameter ε is the information leak ratio: the maximal value of I(s; v_F) over all forbidden sets F ∈ F, divided by K(s).
The notion of normalized Kolmogorov ε-security can thus be understood as a normalized version of individual security.
In fact, for the same instance (s, v_1, v_2, …, v_n), I(s; v_F) may be small for one forbidden set F but much larger for another forbidden set F′. It is worth noting that, in Definition 2, for Kolmogorov ε-security, ε is the maximum value of {I(s; v_F) : F ∈ F}; more precisely, ε = sup_{F∈F} I(s; v_F). For normalized Kolmogorov ε-security, ε is the maximum value of {I(s; v_F)/K(s) : F ∈ F}.
Now we discuss some consequences of Kolmogorov ε-security.
We have I(s; v_F) ≤ I(s; v_{F′}) + O(1) whenever F ⊆ F′ (since K(x|y, z) ≤ K(x|y) + O(1)). Hence, up to a constant, the mutual algorithmic information between s and any single share v_i is smaller than ε, because, for any i ∈ F,
I(s; v_i) = K(s) − K(s|v_i) ≤ K(s) − K(s|v_F) + O(1) ≤ ε + O(1).
Moreover, if the access structure Γ is a (t, n)-threshold access structure, then in Definition 2(i), up to a constant, ε is the maximum value of {I(s; v_F) : |F| = t − 1}, or equivalently, ε = sup_{|F|=t−1} I(s; v_F).
We now show some lower bounds on the share sizes of secret sharing schemes.
Theorem 3. Let ∏ be a secret sharing scheme for an access structure Γ.
  • If an instance (s, v_1, v_2, …, v_n) is Kolmogorov ε-secure, then
    |v_i| ≥ K(s) − ε − O(1)
    for every i ∈ [n].
  • If an instance (s, v_1, v_2, …, v_n) is normalized Kolmogorov ε-secure, then
    |v_i| ≥ (1 − ε)K(s) − O(1)
    for every i ∈ [n].
Proof. For any i ∈ [n], there exists a forbidden set F such that i ∉ F and F ∪ {i} ∈ Q. Let p be a shortest binary program that computes s from v_F. Since ∏comb(v_F, v_i) = s, we have |p| ≤ |∏comb| + |v_i| + O(1), and hence K(s|v_F) ≤ |∏comb| + |v_i| + O(1).
  • If the instance is Kolmogorov ε-secure, then K(s) − K(s|v_F) ≤ ε, so
    K(s) − ε ≤ K(s|v_F) ≤ |∏comb| + |v_i| + O(1).
    Thus |v_i| ≥ K(s) − ε − O(1).
  • If the instance is normalized Kolmogorov ε-secure, then K(s) − K(s|v_F) ≤ εK(s), so
    (1 − ε)K(s) ≤ K(s|v_F) ≤ |∏comb| + |v_i| + O(1).
    Thus |v_i| ≥ (1 − ε)K(s) − O(1). □
From the above theorem, we know that a string with high Kolmogorov complexity, i.e., a nearly Kolmogorov-random string, cannot be split among participants with both small share sizes and a high level of security.

5. Information Theoretic Security Versus Individual Security

In this section, we establish some relations between information theoretic security and individual security for secret sharing schemes.
First, note that in a secret sharing scheme, the security parameter ε may be small for some instances but large for others. This means that it is difficult for every instance to be (normalized) Kolmogorov ε-secure with a small ε. So we consider secret sharing schemes in which an instance has a low security parameter with high probability, i.e., most instances are (normalized) Kolmogorov ε-secure for a small ε.
Definition 3. Let ∏ be a secret sharing scheme for an access structure Γ. ∏ is
  • Kolmogorov (ε, δ)-secure, if for any forbidden set F ∈ F, it satisfies
    Pr_{(s, v_F) ∈ S × V_F}[I(s; v_F|u) ≤ ε] ≥ δ;
  • normalized Kolmogorov (ε, δ)-secure, if for any forbidden set F ∈ F, it satisfies
    Pr_{(s, v_F) ∈ S × V_F}[I(s; v_F|u) ≤ εK(s)] ≥ δ,
where u is a distribution over S × V_F.
The following relations between Kolmogorov complexity, entropy and mutual information are important for the present paper.
Lemma 2. [11,16] Let X, Y be random variables over X, Y. For any computable probability distribution u(x, y) over X × Y,
  • 0 ≤ ∑_{x,y} u(x, y)K(x|y) − H(X|Y) ≤ K(u) + O(1).
  • I(X; Y) − K(u) ≤ ∑_{x,y} u(x, y)I(x : y) ≤ I(X; Y) + 2K(u). When u is given, I(X; Y) = ∑_{x,y} u(x, y)I(x : y|u) + O(1).
Here we give the following relations between information theoretic security and the individual security of Definition 3(i).
Theorem 4. Let ∏ be a (t, n)-threshold scheme, where S is the set of secrets and V_[n] the set of all shares, and let S, V_[n] be random variables over S, V_[n] with joint distribution u. If ∏ is Kolmogorov (ε, δ)-secure, then, up to a constant, it is (ε + (1 − δ) log |S|)-Shannon secure and √((1/2)[ε + (1 − δ) log |S|] ln 2)-guess secure.
Proof. For any forbidden set F ∈ F, let Q be the set of Kolmogorov ε-secure instances, i.e., Q = {(s, v_[n]) : I(s; v_F|u) ≤ ε}. Then, by Lemma 2, up to a constant,
I(S; V_F) ≤ ∑_{(s,v_[n])∈Q} u(s, v_F) I(s : v_F|u) + ∑_{(s,v_[n])∉Q} u(s, v_F) I(s : v_F|u) ≤ ε ∑_{(s,v_[n])∈Q} u(s, v_F) + ∑_{(s,v_[n])∉Q} u(s, v_F) [K(s|u) − K(s|v_F, u)] ≤ ε + (1 − δ) log |S|.
Then, by Theorem 1, up to a constant, we have G(S|V_F) − G(S) ≤ √((1/2)[ε + (1 − δ) log |S|] ln 2). □
Next, we establish some relations between information theoretic security and the normalized individual security of Definition 3(ii).
Theorem 5. Let ∏ be a (t, n)-threshold scheme, where S is the set of secrets and V_[n] the set of all shares, and let S, V_[n] be random variables over S, V_[n] with joint distribution u. If ∏ is normalized Kolmogorov (ε, δ)-secure, then, up to a constant, it is ((1 + ε − δ) log |S|)-Shannon secure and √((1/2)(1 + ε − δ) log(|S|) ln 2)-guess secure.
Proof. Since ∏ is normalized Kolmogorov (ε, δ)-secure, the probability that an instance is normalized Kolmogorov ε-secure is at least δ, i.e., for any forbidden set F ∈ F,
Pr_{(s, v_F) ∈ S × V_F}[I(s; v_F|u) ≤ εK(s)] ≥ δ.
For any forbidden set F ∈ F, let Q be the set of normalized Kolmogorov ε-secure instances, i.e., Q = {(s, v_[n]) : I(s; v_F|u) ≤ εK(s)}. Then, by Lemma 2, up to a constant,
I(S; V_F) ≤ ∑_{(s,v_[n])∈Q} u(s, v_F) I(s : v_F|u) + ∑_{(s,v_[n])∉Q} u(s, v_F) I(s : v_F|u) ≤ ε ∑_{(s,v_[n])∈Q} u(s, v_F) K(s|u) + ∑_{(s,v_[n])∉Q} u(s, v_F) [K(s|u) − K(s|v_F, u)] ≤ ε log |S| + (1 − δ) log |S| = (1 + ε − δ) log |S|.
Then, by Theorem 1, up to a constant, we have G(S|V_F) − G(S) ≤ √((1/2)(1 + ε − δ) log(|S|) ln 2). □
Comparing Theorem 4 with Theorem 5, we see that the two versions of individual security relate differently to the entropy-based security notions for secret sharing schemes.

6. Conclusions

Kolmogorov complexity and entropy measures are fundamentally different, but both are used to measure the security of secret sharing schemes. In this paper, we studied relations between several security notions for secret sharing schemes. First, we considered three notions of information theoretic security: ε-Shannon security and ε-min security are both stronger than ε-guess security, and ε-min security is stronger than ε-Shannon security when S is uniformly random. Moreover, 0-min security, 0-guess security and 0-Shannon security coincide when S is uniformly random. Then, after formulating notions of individual security for secret sharing schemes in the framework of Kolmogorov complexity, we established relations between information theoretic security and the two versions of individual security, respectively.
In this paper, we only considered relations between security notions for secret sharing schemes. A more detailed discussion of connections with security notions in other fields of cryptography, such as those based on conditional Rényi entropies [6,7], would be both natural and interesting.

Acknowledgments

The authors are grateful for financial support provided in part by the NSF Project (No. 61274133) of China.

Author Contributions

Both authors have contributed to the study and preparation of the article. Both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blakley, G.R. Safeguarding cryptographic keys, Proceedings of the 1979 AFIPS National Computer Conference, New York, NY, USA, 4–7 June 1979; 48, pp. 313–317.
  2. Shamir, A. How to share a secret. Commun. ACM. 1979, 22, 612–613. [Google Scholar]
  3. Blundo, C.; De Santis, A.; Vaccaro, U. On secret sharing schemes. Inf. Process. Lett. 1998, 65, 25–32. [Google Scholar]
  4. Karnin, E.D.; Greene, J.W.; Hellman, M.E. On secret sharing systems. IEEE Trans. Inf. Theory. 1983, 29, 35–41. [Google Scholar]
  5. Iwamoto, M.; Ohta, K. Security notions for information theoretically secure encryptions, Proceedings of 2011 IEEE International Symposium on Information Theory (ISIT), St. Petersburg, Russia, 31 July–5 August 2011; pp. 1777–1781.
  6. Iwamoto, M.; Shikata, J. Information theoretic security for encryption based on conditional Rényi entropies, Proceedings of the 7th International Conference on Information Theoretic Security (ICITS 2013), Singapore, Singapore, 28–30 November 2013; pp. 103–121.
  7. Iwamoto, M.; Shikata, J. Secret sharing schemes based on min-entropies, 2014; arXiv:1401.5896.
  8. Chaitin, G. On the length of programs for computing finite binary sequences. J. ACM. 1966, 13, 547–569. [Google Scholar]
  9. Kolmogorov, A. Three approaches to the quantitative definition of information. Probl. Inf. Transm. 1965, 1, 1–7. [Google Scholar]
  10. Solomonoff, R. A formal theory of inductive inference, part I. Inf. Control. 1964, 7, 1–22. [Google Scholar]
  11. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: Hoboken, NJ, USA, 2006. [Google Scholar]
  12. Li, M.; Vitányi, P.M.B. An Introduction to Kolmogorov Complexity and Its Applications, 3rd ed; Springer: New York, NY, USA, 2008. [Google Scholar]
  13. Grünwald, P.; Vitányi, P. Shannon information and Kolmogorov complexity, 2008; arXiv:cs/0410002v1.
  14. Pinto, A. Comparing notions of computational entropy. Theory Comput. Syst. 2009, 45, 944–962. [Google Scholar]
  15. Teixeira, A.; Matos, A.; Souto, A.; Antunes, L. Entropy measures vs. Kolmogorov complexity. Entropy 2011, 13, 595–611. [Google Scholar]
  16. Antunes, L.; Laplante, S.; Pinto, A; Salvador, L. Cryptographic security of individual instances. In Information Theoretic Security; Springer: Berlin/Heidelberg, Germany, 2009; pp. 195–210. [Google Scholar]
  17. Kaced, T. Almost-perfect secret sharing, Proceedings of 2011 IEEE International Symposium on Information Theory (ISIT), St. Petersburg, Russia, 31 July–5 August 2011; pp. 1603–1607.
  18. Jiang, S. On Unconditional ε-Security of Private Key Encryption. Comput. J. 2013. [Google Scholar] [CrossRef]
  19. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar]
  20. Alimomeni, M.; Safavi-Naini, R. Guessing secrecy, Proceedings of 6th International Conference on Information Theoretic Security (ICITS 2012), Montreal, QC, Canada, 15–17 August 2012; pp. 1–13.
  21. Capocelli, R. M.; De Santis, A.; Gargano, L.; Vaccaro, U. On the size of shares for secret sharing schemes. J. Cryptol. 1993, 6, 157–167. [Google Scholar]
  22. Stinson, D.R. Decomposition constructions for secret sharing Schemes. IEEE Trans. Inf. Theory. 1994, 40, 118–125. [Google Scholar]

Dai, S.; Guo, D. Comparing Security Notions of Secret Sharing Schemes. Entropy 2015, 17, 1135-1145. https://doi.org/10.3390/e17031135