Article

Reduction Theorem for Secrecy over Linear Network Code for Active Attacks †

1 Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
2 Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
3 Shenzhen Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
4 Graduate School of Mathematics, Nagoya University, Nagoya 464-8602, Japan
5 Department of Computer Science, Faculty of Informatics, Shizuoka University, Shizuoka 422-8529, Japan
6 NTT Communication Science Laboratories, NTT Corporation, Tokyo 100-8116, Japan
7 The School of Information Science and Technology, ShanghaiTech University, Middle Huaxia Road no. 393, Pudong, Shanghai 201210, China
* Author to whom correspondence should be addressed.
† Parts of this paper were presented at the 2017 IEEE International Symposium on Information Theory (ISIT 2017), Aachen, Germany, 25–30 June 2017.
‡ These authors contributed equally to this work.
Entropy 2020, 22(9), 1053; https://doi.org/10.3390/e22091053
Submission received: 27 August 2020 / Revised: 16 September 2020 / Accepted: 16 September 2020 / Published: 21 September 2020
(This article belongs to the Special Issue Multiuser Information Theory III)

Abstract: We discuss the effect of sequential error injection on information leakage under a network code. We formulate a network code for the single transmission setting and the multiple transmission setting. Under this formulation, we show that the eavesdropper cannot increase the power of eavesdropping by sequential error injection when the operations in the network are linear. We demonstrate the usefulness of this reduction theorem by applying it to a concrete example of a network.

1. Introduction

Secure network coding offers a method for transmitting information securely from an authorized sender to an authorized receiver. Cai and Yeung [1] discussed the secrecy when a malicious adversary, Eve, wiretaps a subset E_E of the set E of all channels in the network. Using the universal hashing lemma [2,3,4], the papers [5,6] showed the existence of a secrecy code that works universally for any type of eavesdropper when the cardinality of E_E is bounded. In addition, the paper [7] discussed the construction of such a code. As another type of attack on information transmission via a network, a malicious adversary may contaminate the communication by changing the information on a subset E_A of E. Using an error correcting code, the papers [8,9,10,11] proposed methods to protect the message from contamination. That is, we require that the authorized receiver correctly recovers the message, which is called robustness.
As another possibility, we consider the case when the malicious adversary combines eavesdropping and contamination. That is, by contaminating a part of the channels in the network, the malicious adversary might increase her ability of eavesdropping, whereas a parallel network offers no such possibility [12,13,14]. In fact, in the arbitrarily varying channel model, noise injection is allowed after Eve's eavesdropping, but Eve does not eavesdrop on the channel after her noise injection [15,16,17,18,19]. The paper [20] also discusses secrecy in the same setting while it addresses the network model. The studies [7,14] discussed the secrecy when Eve eavesdrops on the information transmitted on the channels in E_E after noises are injected in E_A, but they assume that Eve does not know the injected noise.
In contrast, this paper focuses on a network and discusses the secrecy when Eve adds artificial information to the information transmitted on the channels in E_A, eavesdrops on the information transmitted on the channels in E_E, and estimates the original message from the eavesdropped information and the injected noise. We call this type of attack an active attack and an attack without contamination a passive attack. We call each of Eve's active operations a strategy. When E_A ⊂ E_E and any active attack is available to Eve, she is allowed to arbitrarily modify the information on the channels in E_A sequentially, based on the obtained information.
This paper aims to show a reduction theorem for an active attack, i.e., the fact that no strategy can increase Eve's information when every operation in the network is linear and Eve's contamination satisfies a natural causal condition. When the network is not well synchronized, Eve can make an attack across several channels. This reduction theorem holds even under this kind of attack. In fact, there is an example with a non-linear node operation in which Eve can improve her ability to extract information by eavesdropping on an edge outgoing from an intermediate node while adding artificial information to an edge incoming to that node [21]. (The paper [21] also discusses a linear code, but only a code on a one-hop relay network. Our results can be applied to general networks.) This example shows the necessity of linearity for this reduction theorem. Although our discussion can be extended to the multicast and multiple-unicast cases, for simplicity, we consider the unicast setting in the following discussion.
Further, we apply our general result to the analysis of a concrete example of a network. In this network, we demonstrate that no active attack can increase the performance of eavesdropping. However, in the single transmission case over the finite field F_2, error correction and error detection are impossible under this contamination. To resolve this problem, this paper addresses the multiple transmission case in addition to the single transmission case. In the multiple transmission case, the sender uses the same network multiple times, and the topology and dynamics of the network do not change during these transmissions. While several papers discussed this model, many of them discussed the multiple transmission case only with contamination [22,23,24] or only with eavesdropping [5,6,25,26,27]. Only the paper [20] addressed it with both contamination and eavesdropping, and our distinction from the paper [20] is summarized as follows. The paper [20] assumes that all injections are done after eavesdropping, while this paper allows Eve to inject the artificial information before a part of the eavesdropping.
We formulate the multiple transmission case when each transmission has no correlation with the previous transmission while the injected noise might have such a correlation. Then, we show the above type of reduction theorem for an active attack even in the multiple transmission case. We apply this result to multiple transmission over the above example of a network, in which error correction and error detection are possible under this contamination. Hence, both secrecy and correctness hold in this case.
The remaining part of this paper is organized as follows. Section 2 discusses the single transmission setting, which has only a single transmission, and Section 3 discusses the multiple transmission setting, which has n transmissions. Two types of multiple transmission setting are formulated. Then, we state our reduction theorem in both settings. In Section 4, we state the conclusion.

2. Single Transmission Setting

2.1. Generic Model

In this subsection, we give a generic model and discuss its relation to a concrete network model in the later subsections. We consider the unicast setting of network coding on a network. Assume that the authorized sender, Alice, intends to send information to the authorized receiver, Bob, via the network. Although the network is composed of m_1 edges and m_2 vertices, as shown later, the model can be simplified as follows when the node operations are linear. We assume that Alice inputs the variable X ∈ F_q^{m_3} and Bob receives the output variable Y_B ∈ F_q^{m_4}, where F_q is a finite field whose order q is a power of a prime p. We also assume that the malicious adversary, Eve, wiretaps the information Y_E ∈ F_q^{m_6}. This network has m_7 = m_1 − m_3 edges that are not directly linked to the source node. The parameters are summarized in Table 1. (In this paper, we denote a vector on F_q by a bold letter, but we use a non-bold letter to describe a scalar and a matrix.)
Then, we adopt the model with matrices K_B = [K_{B;j,i}] ∈ F_q^{m_4×m_3} and K_E = [K_{E;j,i}] ∈ F_q^{m_6×m_3}, in which the variables X, Y_B, and Y_E satisfy the relations
Y_B = K_B X,  Y_E = K_E X.  (1)
This attack is a conventional wiretap model and is called a passive attack to distinguish it from an active attack, which will be introduced later. Section 2.3 will explain how this model is derived from a directed graph with E E and linear operations on nodes.
In this paper, we address a stronger attack, in which Eve injects noise Z ∈ F_q^{m_5}. Hence, using matrices H_B = [H_{B;j,i}] ∈ F_q^{m_4×m_5} and H_E = [H_{E;j,i}] ∈ F_q^{m_6×m_5}, we rewrite the relations (1) as
Y_B = K_B X + H_B Z,  Y_E = K_E X + H_E Z,  (2)
which is called a wiretap and addition model. The i-th injected noise Z_i (the i-th component of Z) is decided by a function α_i of Y_E. Although a part of Y_E is a function of α_i, this point does not cause a problem for causality, as explained in Section 2.5. In this paper, when a vector has x_j as its j-th component, the vector is written as [x_j]_{1≤j≤a}, where the subscript 1≤j≤a expresses the range of the index j. Thus, the set α = [α_i]_{1≤i≤m_5} of the functions can be regarded as Eve's strategy, and we call this attack an active attack with a strategy α. That is, an active attack is identified by a pair of a strategy α and a wiretap and addition model decided by K, H. Here, we treat K_B, K_E, H_B, and H_E as deterministic values, and denote the pairs (K_B, K_E) and (H_B, H_E) by K and H, respectively. Hence, our model is written as the triplet (K, H, α). As shown in the later subsections, under the linearity assumption on the node operations, the triplet (K, H, α) is decided from the network topology (a directed graph with E_A and E_E) and the dynamics of the network. Here, we should remark that the relation (2) is based on the linearity assumption for node operations. Since this assumption is a restriction on the protocol, it does not restrict the eavesdropper's strategy.
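To make the model concrete, the following minimal sketch simulates the wiretap and addition model (2) over F_2. All matrices and dimensions here (m_3 = 2, m_4 = 1, m_5 = 1, m_6 = 2) are our own illustrative assumptions, not values from the paper.

```python
import numpy as np

q = 2  # work over F_2; all arithmetic is taken mod q

# Hypothetical matrices for illustration only.
K_B = np.array([[1, 1]])            # Bob's transfer matrix   (m4 x m3)
K_E = np.array([[1, 0],
                [1, 1]])            # Eve's transfer matrix   (m6 x m3)
H_B = np.array([[1]])               # effect of Z on Bob      (m4 x m5)
H_E = np.array([[0],
                [1]])               # effect of Z on Eve      (m6 x m5)

def transmit(x, z):
    """Wiretap and addition model (2): returns (Y_B, Y_E) over F_q."""
    y_b = (K_B @ x + H_B @ z) % q
    y_e = (K_E @ x + H_E @ z) % q
    return y_b, y_e

x = np.array([1, 0])   # Alice's input X
z = np.array([1])      # Eve's injected noise Z
print(transmit(x, z))
```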
We impose several types of regularity conditions on Eve's strategy α, which are demanded by causality. Notice that α_i is a function of the vector [Y_{E,j}]_{1≤j≤m_6}. Now, we take the causality with respect to α into account. Here, we assume that the assigned index i for 1≤i≤m_5 expresses the time-ordering of injection. That is, we assign the index i for 1≤i≤m_5 according to the order of injections. Hence, we assume that α_i is decided by a part of Eve's observed variables. We say that subsets w_i ⊂ {1, …, m_6} for i ∈ {1, …, m_5} are the domain index subsets for α when the function α_i is given as a function of the vector [Y_{E,j}]_{j∈w_i}. Here, the notation j ∈ w_i means that the j-th eavesdropping is done before the i-th injection; i.e., w_i expresses the set of indexes corresponding to the symbols that affect the i-th injection. Hence, the eavesdropped symbol Y_{E,j} does not depend on the injected symbol Z_i for j ∈ w_i. Since the decision of the injected noise does not depend on the consequences of the decision, we introduce the following causal condition.
Definition 1.
We say that the domain index subsets {w_i}_{i=1,…,m_5} satisfy the causal condition when the following two conditions hold:
(A1) 
The relation H_{E;j,i} = 0 holds for j ∈ w_i.
(A2) 
The relation w_1 ⊂ w_2 ⊂ ⋯ ⊂ w_{m_5} holds.
As a necessary condition of the causal condition, we introduce the following uniqueness condition for the function α_i, which is given as a function of the vector [Y_{E,j}]_{1≤j≤m_6}.
Definition 2.
For any value of x, there uniquely exists y ∈ F_q^{m_6} such that
y = K_E x + H_E α(y).  (3)
This condition is called the uniqueness condition for α.
When the uniqueness condition does not hold, for an input x, there exist two distinct vectors y and y′ satisfying (3). It means that both outputs y and y′ may happen, even though all the operations are deterministic. This situation is unlikely in the real world. Examples of a network with w_i and [H_{E;j,i}]_{i,j} will be given in Section 2.6. Then, we have the following lemma, which shows that the uniqueness condition always holds under a realistic situation.
Lemma 1.
When a strategy α has domain index subsets satisfying the causal condition, the strategy α satisfies the uniqueness condition.
Proof. 
When the causal condition holds, we show the fact that y_j is given as a function of K_E x for any j ∈ w_i by induction with respect to the index i = 1, …, m_5, which expresses the order of the injected information. This fact yields the uniqueness condition.
For j ∈ w_1, we have y_j = (K_E x)_j because (H_E α(y))_j is zero. Hence, the statement with i = 1 holds. We choose j ∈ w_{i+1} \ w_i. Let z_i be the i-th injected information. Due to conditions (A1) and (A2), y_j − (K_E x)_j = (H_E z)_j is a function of z_1 = α(y)_1, …, z_i = α(y)_i. Since the assumption of the induction guarantees that z_1, …, z_i are functions of [y_j]_{j∈w_i}, z_1, …, z_i are functions of K_E x. Then, we find that y_j = (K_E x)_j + (H_E z)_j is given as a function of K_E x for any j ∈ w_{i+1} \ w_i. That is, the strategy α satisfies the uniqueness condition. □
Now, we have the following reduction theorem.
Theorem 1
(Reduction theorem). When the strategy α satisfies the uniqueness condition, Eve's information Y_E(α) with strategy α can be calculated from Eve's information Y_E(0) with strategy 0 (the passive attack), and Y_E(0) can also be calculated from Y_E(α). Hence, we have the equation
I(X; Y_E)[0] = I(X; Y_E)[α],  (4)
where I(X; Y_E)[α] expresses the mutual information between X and Y_E under the strategy α.
Proof. 
This proof can be done by showing that Eve's information with a strategy α can be simulated by Eve's information with the strategy 0 as follows.
Since Y_E(0) = K_E X and Y_E(α) = K_E X + H_E Z, due to the uniqueness condition for the strategy α, we can uniquely evaluate Y_E(α) from Y_E(0) = K_E X and α. Therefore, we have I(X; Y_E)[0] ≥ I(X; Y_E)[α]. Conversely, since Y_E(0) is given as a function (Y_E(α) − H_E Z) of Y_E(α), Z, and H_E, we have the opposite inequality. □
This theorem shows that the information leakage of the active attack with the strategy α is the same as the information leakage of the passive attack. Hence, to guarantee secrecy under an arbitrary active attack, it is sufficient to show secrecy under the passive attack. However, there is an example of a non-linear network in which this kind of reduction does not hold [21]. In fact, even when the network is not synchronized, so that the information transmission on an edge starts before the end of the information transmission on the previous edge, the above reduction theorem holds under the uniqueness condition.
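For a sanity check, the equality (4) can be verified numerically on a toy instance by enumerating a uniform X. The matrices below are our own hypothetical example; the strategy reads only the noise-free coordinate Y_E,1, so the causal condition holds.

```python
import itertools, math
import numpy as np

q = 2
K_E = np.array([[1, 0], [1, 1]])   # hypothetical Eve matrix
H_E = np.array([[0], [1]])         # row 1 is zero: Y_E,1 is injection-free

def mutual_information(strategy):
    """I(X; Y_E) in bits for uniform X and Z_1 = strategy(Y_E,1)."""
    joint = {}
    for x in itertools.product(range(q), repeat=2):
        y0 = (K_E @ np.array(x)) % q            # noise-free observation
        z = np.array([strategy(int(y0[0]))])    # Y_E,1 is final (H_E row is 0)
        y = tuple(((K_E @ np.array(x) + H_E @ z) % q).tolist())
        joint[(x, y)] = joint.get((x, y), 0) + 1 / q**2
    p_x, p_y = {}, {}
    for (x, y), p in joint.items():
        p_x[x] = p_x.get(x, 0) + p
        p_y[y] = p_y.get(y, 0) + p
    return sum(p * math.log2(p / (p_x[x] * p_y[y])) for (x, y), p in joint.items())

print(mutual_information(lambda y1: 0))    # passive attack: 2.0 bits
print(mutual_information(lambda y1: y1))   # active strategy: also 2.0 bits
```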

2.2. Recovery and Information Leakage with Linear Code

Next, we consider the recovery and the information leakage when a linear code is used. Assume that a message M ∈ F_q^ℓ is generated subject to the uniform distribution and is sent via an encoding map, i.e., a linear map f_1 from F_q^ℓ to F_q^{m_3}. Additionally, Alice independently generates a scramble variable L ∈ F_q^k subject to the uniform distribution and sends it via another linear map f_2 from F_q^k to F_q^{m_3}. In this case, Alice transmits f_1(M) + f_2(L) ∈ F_q^{m_3}, as is implicitly stated in many papers ([22] Section V; [23] Section V; [20] Section IV).
Proposition 1.
Bob is assumed to know the forms of K_B, H_B and f_1, f_2. Bob can correctly recover the message M with probability 1 if and only if dim Im K_B f_1 = ℓ and dim(Im K_B f_1 ∩ (Im K_B f_2 + Im H_B)) = 0.
Proof. 
To recover the message M, the dimension dim Im K_B f_1 needs to be ℓ. When dim(Im K_B f_1 ∩ (Im K_B f_2 + Im H_B)) > 0, there exist vectors m_0 (≠ 0) ∈ F_q^ℓ, l_0 ∈ F_q^k, and z_0 ∈ F_q^{m_5} such that K_B(f_1(m_0) + f_2(l_0)) + H_B(z_0) = 0. Thus, Bob may receive 0 when the message M is 0 or m_0. This fact means the impossibility of Bob's perfect recovery.
When dim Im K_B f_1 = ℓ and dim(Im K_B f_1 ∩ (Im K_B f_2 + Im H_B)) = 0, there exists a linear map P from F_q^{m_4} to Im K_B f_1 such that P u = u for u ∈ Im K_B f_1 and P u = 0 for u ∈ Im K_B f_2 + Im H_B. By applying the map P, Bob recovers the message M. □
Assume that Bob knows K_B but does not know the form of H_B; i.e., there are several possible forms H_{B,1}, …, H_{B,d} as candidates for H_B. Additionally, we assume that all possible forms H_{B,1}, …, H_{B,d} satisfy the condition of Proposition 1. In general, the map P used in the proof depends on the form of H_B. When the map P can be chosen commonly for H_{B,1}, …, H_{B,d}, Bob can recover the message M. Otherwise, Bob cannot recover it.
However, when the condition dim(Im K_B f_2 ∩ Im H_{B,i}) = 0 holds in addition to the condition of Proposition 1 for i = 1, …, d, Bob can detect the existence of contamination as follows. In this case, when Y_B does not belong to Im K_B f_1 + Im K_B f_2, Bob considers that a contamination exists. In other words, when we choose a linear function f_3 such that {y_B ∈ F_q^{m_4} | f_3(y_B) = 0} = Im K_B f_1 + Im K_B f_2, the existence of a contamination can be detected by checking the condition f_3(Y_B) = 0.
When the strategy α satisfies the uniqueness condition, Eve's recovery can be reduced to the case with Z = 0 due to Theorem 1. Therefore, Eve can correctly recover the message M if and only if dim Im K_E f_1 = ℓ and dim(Im K_E f_1 ∩ Im K_E f_2) = 0.
For the amount of information leakage, the papers [28] (Theorem 2) and [29] (Corollary 3.3 and (25)) stated the following relation in a slightly different way.
Proposition 2.
Information leakage to Eve can be evaluated as I(M; Y_E)[0] = (log q)(dim Im K_E f_1 − dim(Im K_E f_1 ∩ Im K_E f_2)). In particular, I(M; Y_E)[0] = 0 if and only if dim Im K_E f_1 = dim(Im K_E f_1 ∩ Im K_E f_2).
Proof. 
Consider the case with α = 0. We have H(Y_E) = (log q) dim(Im K_E f_1 + Im K_E f_2) and H(Y_E | M) = (log q) dim Im K_E f_2. Hence, I(M; Y_E)[0] = (log q) dim(Im K_E f_1 + Im K_E f_2) − (log q) dim Im K_E f_2 = (log q)(dim Im K_E f_1 − dim(Im K_E f_1 ∩ Im K_E f_2)). □
Therefore, using Proposition 2, we can evaluate the amount of leaked information even for a general strategy α.
To check the condition dim Im K_E f_1 = dim(Im K_E f_1 ∩ Im K_E f_2), we introduce two matrices A_1 ∈ F_q^{m_6×ℓ} and A_2 ∈ F_q^{m_6×k} by K_E f_1(m) = A_1 m for m ∈ F_q^ℓ and K_E f_2(l) = A_2 l for l ∈ F_q^k. Then, we define the m_6 row vectors v_i for i = 1, …, m_6 of the matrix A := (A_1 A_2). Considering an equivalent condition to dim Im K_E f_1 = dim(Im K_E f_1 ∩ Im K_E f_2), we have the following corollary.
Corollary 1.
I(M; Y_E)[0] = 0 if and only if there does not exist a vector (b_1, …, b_{m_6}) ∈ F_q^{m_6} such that Σ_{i=1}^{m_6} b_i v_i has the form (m, 0, …, 0) whose last k components are zero, with m (≠ 0) ∈ F_q^ℓ.
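In matrix terms, Proposition 2 reads I(M; Y_E)[0] = (log q)(rank(A_1 A_2) − rank A_2), since dim(Im K_E f_1 + Im K_E f_2) = rank(A_1 A_2). The following is a minimal sketch of that rank computation; it assumes q is prime (Gaussian elimination over a prime field), and the function names are ours.

```python
import math
import numpy as np

def rank_gfq(mat, q):
    """Rank over the prime field F_q by Gaussian elimination (assumes q prime)."""
    m = np.array(mat, dtype=int) % q
    rank = 0
    for c in range(m.shape[1]):
        piv = next((r for r in range(rank, m.shape[0]) if m[r, c]), None)
        if piv is None:
            continue
        m[[rank, piv]] = m[[piv, rank]]                         # move pivot row up
        m[rank] = (m[rank] * pow(int(m[rank, c]), -1, q)) % q   # normalize pivot
        for r in range(m.shape[0]):
            if r != rank and m[r, c]:
                m[r] = (m[r] - m[r, c] * m[rank]) % q           # eliminate column c
        rank += 1
    return rank

def leakage_bits(A1, A2, q):
    """I(M; Y_E)[0] = (log2 q) * (rank(A1|A2) - rank(A2)) by Proposition 2."""
    return math.log2(q) * (rank_gfq(np.hstack([A1, A2]), q) - rank_gfq(A2, q))
```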

2.3. Construction of K_B and K_E from a Concrete Network Model

Next, we discuss how we can obtain the generic passive attack model (1) from a concretely structured network code, i.e., communications identified by directed edges and linear operations by parties identified by nodes. We consider the unicast setting of network coding on a network, which is given as a directed graph (V, E), where the set V := {v(1), …, v(m_2)} of vertices expresses the set of nodes and the set E := {e(1), …, e(m_1)} of edges expresses the set of communication channels. Here, a communication channel corresponds to a packet in network engineering; i.e., a single communication channel can transmit a single character in F_q. In the following, we identify the set E with {1, …, m_1}; i.e., we identify the index of an edge with the edge itself. Here, the directed graph (V, E) is not necessarily acyclic. When a channel transmits information from a node v(i) ∈ V to another node v(i′) ∈ V, it is written as (v(i), v(i′)) ∈ E.
In the single transmission, the source node has several elements of F_q and sends each of them via its outgoing edges in the order of the numbers assigned to the edges. Each intermediate node keeps the information received via its incoming edges. Then, for each outgoing edge, the intermediate node calculates one element of F_q from previously received information and sends it via the outgoing edge. That is, every outgoing piece of information from a node v(i) via a channel e(j) depends only on the information coming into the node v(i) via channels e(j′) such that j′ < j. The operations on all nodes are assumed to be linear on the finite field F_q with prime power q. Bob receives the information Y_B ∈ F_q^{m_4} on the edges of a subset E_B := {e(ζ_B(1)), …, e(ζ_B(m_4))} ⊂ E, where ζ_B is a strictly increasing function from {1, …, m_4} to {1, …, m_1}. Let X̃_j be the information on the edge e(j). In the following, we describe the information on the m_7 = m_1 − m_3 edges that are not directly linked to the source node, where m_3 expresses the number of Alice's input symbols. When the edge e(j) is an outgoing edge of the node v(i), the information X̃_j is given as a linear combination of the information on the edges coming into the node v(i). We choose an m_1 × m_1 matrix θ = (θ_{j,j′}) such that X̃_j = Σ_{j′} θ_{j,j′} X̃_{j′}, where θ_{j,j′} is zero unless e(j′) is an edge incoming to v(i). The matrix θ is the coefficient matrix of this network.
Now, from causality, we can assume that each node makes the transmissions on the outgoing edges in the order of the numbers assigned to the edges. At the first stage, all m_3 pieces of information generated at the source node are directly transmitted via e(1), …, e(m_3), respectively. Then, at time j, the information transmission on the edge e(j + m_3) is done. Hence, naturally, we impose the condition
θ_{j,j′} = 0 for j ≤ j′,  (5)
which is called the partial time-ordered condition for θ. Then, to describe the information on the m_7 edges that are not directly linked to the source node, we define m_7 matrices M_1, …, M_{m_7} of size m_1 × m_1. The j-th matrix M_j gives the information on the edge e(j + m_3) as a function of the information on the edges {e(j′)}_{1≤j′≤m_1} at time j. The (j + m_3)-th row vector of the matrix M_j is defined by [θ_{j+m_3,j′}]_{1≤j′≤m_1}. The remaining part of M_j, i.e., the i-th row vector for i ≠ j + m_3, is defined by [δ_{i,j′}]_{1≤j′≤m_1}, where δ_{i,j′} is the Kronecker delta. Since Σ_{i=1}^{m_3} (M_j ⋯ M_1)_{j,i} X_i expresses the information on the edge e(j) at time j, we have
Y_{B,j} = Σ_{i=1}^{m_3} (M_{m_7} ⋯ M_1)_{ζ_B(j),i} X_i.  (6)
While the output of the matrix M_{m_7} ⋯ M_1 takes values in F_q^{m_1}, we focus on the projection P_B to the subspace F_q^{m_4} that corresponds to the m_4 components observed by Bob. That is, P_B is an m_4 × m_1 matrix satisfying P_{B;i,j} = δ_{ζ_B(i),j}. Similarly, we use the projection P_A (an m_1 × m_3 matrix) given as P_{A;i,j} = δ_{i,j}. Due to (6), the matrix K_B := P_B M_{m_7} ⋯ M_1 P_A satisfies the first equation in (1).
The malicious adversary, Eve, wiretaps the information Y_E ∈ F_q^{m_6} on the edges of a subset E_E := {e(ζ_E(1)), …, e(ζ_E(m_6))} ⊂ E, where ζ_E is a strictly increasing function from {1, …, m_6} to {1, …, m_1}. Similar to (6), we have
Y_{E,j} = Σ_{i=1}^{m_3} (M_{m_7} ⋯ M_1)_{ζ_E(j),i} X_i.  (7)
We employ the projection P_E (an m_6 × m_1 matrix) to the subspace F_q^{m_6} that corresponds to the m_6 components eavesdropped by Eve. That is, P_{E;i,j} = δ_{ζ_E(i),j}. Due to (7), the matrix K_E := P_E M_{m_7} ⋯ M_1 P_A satisfies the second equation in (1).
In summary, the topology and dynamics (the operations on the intermediate nodes) of the network, including the places of the attacked edges, decide the graph (V, E), the coefficients θ_{i,j}, and the functions ζ_B, ζ_E, which uniquely give the two matrices K_B and K_E. Section 2.6 gives an example of this model. Here, we emphasize that we do not assume the acyclic condition for the graph (V, E). We can use this relaxed condition because we have only one transmission in the current discussion. That is, due to the partial time-ordered condition for θ, we can uniquely define our matrices K_B and K_E, in a similar way to [30] (Section V-B; Λ of Ahlswede–Cai–Li–Yeung corresponds to the number of edges that are not connected to the source node in our paper). However, when the graph has a cycle and we have n transmissions, there is a possibility of a correlation with the delayed information that depends on the time ordering. As a result, it is difficult to analyze the secrecy of cyclic network coding.
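The construction of K_B and K_E in this subsection is directly implementable. The following sketch builds the matrices M_j from a given coefficient matrix θ and multiplies them out; it assumes 0-indexed edges and a θ satisfying the partial time-ordered condition (5), and the variable names are ours.

```python
import numpy as np

def transfer_matrices(theta, m3, zeta_B, zeta_E, q):
    """Builds K_B = P_B M_{m7}...M_1 P_A and K_E = P_E M_{m7}...M_1 P_A of (1).

    theta    : m1 x m1 coefficient matrix (edges 0-indexed here)
    zeta_B/E : lists of 0-indexed edges observed by Bob / Eve
    """
    m1 = theta.shape[0]
    prod = np.eye(m1, dtype=int)
    for j in range(m1 - m3):             # time j: transmission on edge j + m3
        M = np.eye(m1, dtype=int)
        M[j + m3, :] = theta[j + m3, :]  # row j + m3 is the node operation
        prod = (M @ prod) % q
    K = prod[:, :m3]                     # restrict to the source edges (P_A)
    return K[zeta_B, :], K[zeta_E, :]    # project onto Bob's / Eve's edges
```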

2.4. Construction of H_B and H_E from a Concrete Network Model

We now identify the wiretap and addition model from a concrete network structure. We assume that Eve injects noise into a part of the edges E_A ⊂ E and eavesdrops on the edges E_E.
The elements of the subset E_A are expressed as E_A = {e(η(1)), …, e(η(m_5))} by using a function η from {1, …, m_5} to {1, …, m_1}, where the function η is not necessarily a monotonically increasing function. To give the matrices H_B and H_E, modifying the matrix M_j, we define the new matrix M′_j as follows. The (j + m_3)-th row vector of the new matrix M′_j is defined by [θ_{j+m_3,j′} + δ_{j+m_3,j′}]_{1≤j′≤m_1}. The remaining part of M′_j, i.e., the i-th row vector for i ≠ j + m_3, is defined by [δ_{i,j′}]_{1≤j′≤m_1}. Since Σ_{i=1}^{m_3} (M′_j ⋯ M′_1)_{j,i} X_i + Σ_{i=1}^{m_5} (M′_j ⋯ M′_1)_{j,η(i)} Z_i expresses the information on the edge e(j) at time j, we have
Y_{B,j} = Σ_{i=1}^{m_3} (M′_{m_7} ⋯ M′_1)_{ζ_B(j),i} X_i + Σ_{i=1}^{m_5} (M′_{m_7} ⋯ M′_1)_{ζ_B(j),η(i)} Z_i,  (8)
Y_{E,j} = Σ_{i=1}^{m_3} (M′_{m_7} ⋯ M′_1)_{ζ_E(j),i} X_i + Σ_{i=1}^{m_5} (M′_{m_7} ⋯ M′_1 − I)_{ζ_E(j),η(i)} Z_i.  (9)
When Eve eavesdrops on the edges E_E ∩ E_A, she obtains the information on E_E ∩ E_A before her noise injection. Hence, to express her obtained information on E_E ∩ E_A, we need to subtract her injected information on E_E ∩ E_A; this is why we need −I in the second term of (9). We introduce the projection P_{E,A} (an m_1 × m_5 matrix) given as P_{E,A;i,j} = δ_{i,η(j)}. Due to (8) and (9), the matrices H_B := P_B M′_{m_7} ⋯ M′_1 P_{E,A} and H_E := P_E (M′_{m_7} ⋯ M′_1 − I) P_{E,A} satisfy the conditions (2) with the matrices K_B and K_E, respectively. This model (K_B, K_E, H_B, H_E) giving (2) is called the wiretap and addition model determined by (V, E) and (E_E, E_A, θ), which express the topology and dynamics.

2.5. Strategy and Order of Communication

To discuss the active attack, we see how the causal condition for the subsets {w_i}_{1≤i≤m_5} follows from the network topology in the wiretap and addition model. We choose the domain index subsets {w_i}_{1≤i≤m_5} for α; i.e., Eve chooses the added error Z_i on the edge e(η(i)) ∈ E_A as a function α_i of the vector [Y_{E,j}]_{j∈w_i}. Since the order of Eve's attack is characterized by the function η from {1, …, m_5} to E_A ⊂ {1, …, m_1}, we discuss what condition on the pair (η, {w_i}_i) guarantees the causal condition for the subsets {w_i}_i.
First, one may assume that the tail node of the edge e(j) sends the information to the edge e(j) after the head node of the edge e(j − 1) receives the information on the edge e(j − 1). Since this condition determines the order of Eve's attack, the function η must be a strictly increasing function from {1, …, m_5} to {1, …, m_1}. Additionally, due to this time ordering, the subset w_i needs to be {j | ζ_E(j) < η(i)} or its subset. We call these two conditions the full time-ordered condition for the function η and the subsets {w_i}_i. Since the function η is strictly increasing, condition (A2) of the causal condition holds. Since the relation (5) implies that M_{m_7} ⋯ M_1 − I is a lower triangular matrix with zero diagonal elements, the strictly increasing property of η yields that
H_{E;j,i} = 0 when η(i) ≥ ζ_E(j),  (10)
which implies condition (A1) of the causal condition. In this way, the full time-ordered condition for the function η and the subsets {w_i}_i implies the causal condition.
However, the full time-ordered condition does not hold in general, even when we reorder the numbers assigned to the edges. That is, if the network is not well synchronized, Eve can make an attack across several channels; i.e., it is possible that Eve intercepts (i.e., wiretaps and contaminates) the information on an edge before the head node of the previous edge receives the information on that edge. Hence, we consider the case when the partial time-ordered condition holds but the full time-ordered condition does not necessarily hold. (As an example, we consider the following case. Eve gets the information on the first edge. Then, she gets the information on the second edge before she hands over the information on the first edge to the head node of the first edge. In this case, she can change the information on the first edge based on the information on the first and second edges. Then, the time-ordered condition (10) does not hold.) That is, the function η from {1, …, m_5} to E is injective but not necessarily monotonically increasing. Given the matrix θ, we define the function γ_θ(j) := min_{j′} {j′ | θ_{j′,j} ≠ 0}. Here, when no index j′ satisfies the condition θ_{j′,j} ≠ 0, γ_θ(j) is defined to be m_1 + 1. Then, we say that the function η and the subsets {w_i}_i are admissible under θ when {e(k) | k ∈ Im η} = E_A, the subsets {w_i}_i satisfy condition (A2) of the causal condition, and any element j ∈ w_i satisfies
ζ_E(j) < γ_θ(η(i)).  (11)
Here, Im η expresses the image of the function η. The condition (11) together with the condition (5) implies the following condition: for j ∈ w_i, there is no sequence ζ_E(j) = j_1 > j_2 > ⋯ > j_l = η(i) such that
θ_{j_i,j_{i+1}} ≠ 0.  (12)
This condition implies condition (A1) of the causal condition. Since the admissibility under θ is natural, even when the full time-ordered condition does not hold, the causal condition can be naturally derived.
Given two admissible pairs (η, {w_i}_i) and (η′, {w′_i}_i), we say that the pair (η′, {w′_i}_i) is superior to (η, {w_i}_i) for Eve when w_{η^{-1}(j)} ⊂ w′_{η′^{-1}(j)} for any j ∈ E_A. Now, we discuss the optimal choice of (η, {w_i}_i) in this sense when E_A is given. That is, we choose the subsets w_i as large as possible under the admissibility under θ. First, we choose a bijective function η_o from {1, …, m_5} to E_A such that γ_θ ∘ η_o is monotonically increasing. Then, we define w_{o,i} := {j | ζ_E(j) < γ_θ(η_o(i))}, which satisfies the admissibility under θ, i.e., conditions (A1) and (A2) of the causal condition. Further, when the pair (η, {w_i}_i) is admissible under θ, the condition (11) implies w_{η^{-1}(j)} ⊂ w_{o,η_o^{-1}(j)} for j ∈ E_A; i.e., w_{o,i} is the largest subset under the admissibility under θ. Hence, we obtain the optimality of (η_o, {w_{o,i}}_i). Although the choice of η_o is not unique, the choice of w_{o,η_o^{-1}(j)} for j ∈ E_A is unique.

2.6. Secrecy in a Concrete Network Model

In this subsection, as an example, we consider the network given in Figure 1 and Figure 2, which shows that our framework can be applied to a network without synchronization. Alice sends the variables X_1, …, X_4 ∈ F_q to the nodes v(1), v(2), v(3), and v(4) via the edges e(1), e(2), e(3), and e(4), respectively. The edges e(5), e(6), e(8), and e(10) forward the elements received from the edges e(1), e(5), e(5), and e(8), respectively. The edges e(7), e(9), and e(11) send the sums of the two elements received from the edge pairs (e(2), e(5)), (e(3), e(6)), and (e(4), e(8)), respectively.
Bob receives the elements via the edges e(7), e(9), and e(11), which are written as Y_{B,1}, Y_{B,2}, and Y_{B,3}, respectively. Then, the matrix K_B is given as
K_B = [ 1 1 0 0
        1 0 1 0
        1 0 0 1 ].  (13)
Then, m_3 = 4 and m_4 = 3.
Now, we assume that Eve eavesdrops on the edges e(2), e(5), e(6), e(7), and e(8), i.e., all edges connected to v(2), and contaminates the edges e(2) and e(5). Then, we set ζ_B(1) = 7, ζ_B(2) = 9, ζ_B(3) = 11 and ζ_E(1) = 2, ζ_E(2) = 5, ζ_E(3) = 6, ζ_E(4) = 7, ζ_E(5) = 8. Eve can choose the function η as
η(1) = 5,  η(2) = 2,  (14)
while η(1) = 2, η(2) = 5 is also possible. In the following, we choose (14). Since γ_θ(2) = 7 and γ_θ(5) = 6, the subsets w_i are given as
w_1 := w_{o,1} = {1, 2},  w_2 := w_{o,2} = {1, 2, 3}.  (15)
This case satisfies conditions (A1) and (A2). Hence, this model satisfies the causal condition. Lemma 1 guarantees that any strategy also satisfies the uniqueness condition.
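The values γ_θ(2) = 7 and γ_θ(5) = 6 and the subsets (15) can be recomputed mechanically from the coefficient matrix. In the sketch below, θ is encoded as a dictionary of its nonzero entries with 1-indexed edges (our own encoding of Figure 2).

```python
theta = {(5, 1): 1, (6, 5): 1, (8, 5): 1, (10, 8): 1,   # relay edges
         (7, 2): 1, (7, 5): 1, (9, 3): 1, (9, 6): 1,    # sum edges
         (11, 4): 1, (11, 8): 1}
m1, E_A, zeta_E = 11, [2, 5], [2, 5, 6, 7, 8]

def gamma_theta(j):
    """gamma_theta(j) = min{ j' : theta_{j',j} != 0 }, or m1 + 1 if none."""
    hits = [jp for (jp, jj) in theta if jj == j]
    return min(hits) if hits else m1 + 1

# Optimal eta_o orders E_A by increasing gamma_theta; w_{o,i} collects the
# eavesdropped indices below gamma_theta(eta_o(i)), as in Section 2.5.
eta_o = sorted(E_A, key=gamma_theta)
w_o = [{j for j, edge in enumerate(zeta_E, 1) if edge < gamma_theta(a)}
       for a in eta_o]
print(eta_o)   # [5, 2]: eta(1) = 5, eta(2) = 2, matching (14)
print(w_o)     # [{1, 2}, {1, 2, 3}], matching (15)
```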
We denote the observed information on the edges e(2), e(5), e(6), e(7), and e(8) by Y_{E,1}, Y_{E,2}, Y_{E,3}, Y_{E,4}, and Y_{E,5}. As in Figure 1, Eve adds Z_1 and Z_2 on the edges e(2) and e(5). Then, the matrices H_B, K_E, and H_E are given as
H_B = [ 1 1
        0 0
        0 0 ],
K_E = [ 0 1 0 0
        1 0 0 0
        1 0 0 0
        1 1 0 0
        1 0 0 0 ],
H_E = [ 0 0
        0 0
        0 1
        1 1
        0 1 ].  (16)
In this case, to keep the secrecy of the transmitted message, Alice and Bob can use the following code. When Alice's message is M ∈ F_q, Alice prepares the scramble random numbers L_1, L_2, L_3 ∈ F_q. These variables are assumed to be independently subject to the uniform distribution. She encodes them as X_i = L_i for i = 1, …, 3 and X_4 = M + L_1 + L_2 + L_3. As shown in the following, under this code, Eve cannot obtain any information about M, even though she makes an active attack. Due to Theorem 1, it is sufficient to show the secrecy when Z_i = 0. Since Eve's information is Y_{E,1} = X_2, Y_{E,2} = X_1, Y_{E,3} = X_1, Y_{E,4} = X_1 + X_2, and Y_{E,5} = X_1, the matrix A given in Section 2.2 is
A = [ 0 0 1 0
      0 1 0 0
      0 1 0 0
      0 1 1 0
      0 1 0 0 ].  (17)
Thus, Proposition 2 guarantees that Eve cannot obtain any information about the message M.
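The condition of Corollary 1 can be checked exhaustively for the matrix (17): over F_2, it suffices to verify that no linear combination of Eve's row vectors has a nonzero message component together with a zero scramble part. A minimal sketch:

```python
import itertools
import numpy as np

A = np.array([[0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 1, 0, 0]])   # the matrix (17); columns ordered (M, L1, L2, L3)

leaky = []
for b in itertools.product(range(2), repeat=A.shape[0]):
    v = (np.array(b) @ A) % 2
    if v[0] != 0 and not v[1:].any():   # the form (m, 0, 0, 0) with m != 0
        leaky.append(b)
print(leaky)   # []: the condition of Corollary 1 holds, so I(M; Y_E)[0] = 0
```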
Indeed, the above attack can be viewed as follows. Eve can eavesdrop on all edges connected to the intermediate node v(2) and contaminate all edges incoming to the intermediate node v(2). Hence, it is natural to assume that Eve similarly eavesdrops and contaminates at another intermediate node v(i); that is, Eve can eavesdrop on all edges connected to the intermediate node v(i) and contaminate all edges incoming to the intermediate node v(i). For every node v(i), this code has the same secrecy against the above attack by Eve on the node v(i).
Furthermore, the above code keeps the secrecy even under the following attack.
(B1) 
Eve eavesdrops on one of the three edges e(7), e(9), and e(11) connected to the sink node, and eavesdrops on and contaminates one of the remaining eight edges e(1), e(2), e(3), e(4), e(5), e(6), e(8), and e(10) that are not connected to the sink node.
To apply Corollary 1 to the analysis of the secrecy, we denote the row vector in Corollary 1 corresponding to the edge e(i) by v_i. Then, the vectors v_7, v_9, and v_11 are (0,1,1,0), (0,1,0,1), and (1,2,1,1). The remaining vectors v_1, v_2, v_3, v_4, v_5, v_6, v_8, and v_10 are (0,1,0,0), (0,0,1,0), (0,0,0,1), (1,1,1,1), (0,1,0,0), (0,1,0,0), (0,1,0,0), and (0,1,0,0). Since no combination of a vector from the first group and a vector from the second group can be (1,0,0,0), the combination of Corollary 1 and Theorem 1 guarantees that the secrecy holds under the above attack (B1).
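This combination check can also be automated. The sketch below enumerates all pairs of one sink-edge vector and one non-sink-edge vector and confirms that no F_q-combination has the forbidden form (1, 0, 0, 0); it is shown over the sample prime q = 5, and the same check can be repeated for other fields.

```python
import itertools
import numpy as np

q = 5   # sample prime field; repeat with other q as desired
sink  = {7: (0, 1, 1, 0), 9: (0, 1, 0, 1), 11: (1, 2, 1, 1)}
other = {1: (0, 1, 0, 0), 2: (0, 0, 1, 0), 3: (0, 0, 0, 1), 4: (1, 1, 1, 1),
         5: (0, 1, 0, 0), 6: (0, 1, 0, 0), 8: (0, 1, 0, 0), 10: (0, 1, 0, 0)}

bad = []
for (s, vs), (o, vo) in itertools.product(sink.items(), other.items()):
    for b1, b2 in itertools.product(range(q), repeat=2):
        v = (b1 * np.array(vs) + b2 * np.array(vo)) % q
        if v[0] != 0 and not v[1:].any():    # the form (m, 0, 0, 0), m != 0
            bad.append((s, o, b1, b2))
print(bad)   # []: no eavesdropped pair reveals the message under attack (B1)
```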

2.7. A Problem in Error Detection in a Concrete Network Model

However, the network given in Figure 1 and Figure 2 has a problem with the detection of errors, in the following sense. When Eve makes an active attack, the message Bob recovers differs from the original message due to the contamination. Further, Bob cannot detect the existence of the error in this case. It is natural to require, in addition to the secrecy, the detection of the existence of an error whenever the original message cannot be recovered. As a special attack model, we consider the following scenario with the attack (B1).
(B2) 
Our node operations are fixed in the way given in Figure 2.
(B3) 
The message set M and the information on every edge are F_2.
(B4) 
The variables X_1, X_2, X_3, and X_4 are given as the output of the encoder. The encoder on the source node can be chosen, but is restricted to being linear. It is allowed to use a scramble random number, which is an element of L := F_2^k with a certain integer k. Formally, the encoder is given as a linear function from M × L to F_2^4.
(B5) 
The decoder on the sink node can be chosen depending on the encoder but independently of Eve's attack.
Then, it is impossible to construct a pair of an encoder and a decoder such that the secrecy holds and Bob can detect the existence of an error.
This fact can be shown as follows. In order to detect an error, as discussed in Section 2.2, Alice needs to make an encoder such that the vector (Y_{B,1}, Y_{B,2}, Y_{B,3}) belongs to a proper linear subspace, because the detection can be done only by observing that the vector does not belong to a certain linear subspace, which can be written as {(Y_{B,1}, Y_{B,2}, Y_{B,3}) | c_1 Y_{B,1} + c_2 Y_{B,2} + c_3 Y_{B,3} = 0} with a non-zero vector (c_1, c_2, c_3) ∈ F_2^3. That is, the encoder needs to be constructed so that the relation c_1 Y_{B,1} + c_2 Y_{B,2} + c_3 Y_{B,3} = (c_1 + c_2 + c_3) X_1 + c_1 X_2 + c_2 X_3 + c_3 X_4 = 0 holds unless Eve's injection is made. Since (c_1, c_2, c_3) is a non-zero vector in F_2^3, we have three cases. (C1) (c_1, c_2, c_3) is (1,0,0), (0,1,0), or (0,0,1). (C2) (c_1, c_2, c_3) is (1,1,0), (1,0,1), or (0,1,1). (C3) (c_1, c_2, c_3) is (1,1,1). If we impose another linear condition, the transmitted information is restricted to a one-dimensional subspace, which means that the message M uniquely decides the vector (Y_{B,1}, Y_{B,2}, Y_{B,3}). Hence, if Eve eavesdrops on one suitable variable among the three variables Y_{B,1}, Y_{B,2}, and Y_{B,3}, Eve can infer the original message.
In the first case (C1), one of the three variables Y_{B,1}, Y_{B,2}, and Y_{B,3} is zero unless Eve's injection is made. When Y_{B,1} = 0, i.e., (c_1, c_2, c_3) = (1,0,0), Bob can detect an error on the edge e(5) or e(2) because such an error affects Y_{B,1}, so that Y_{B,1} is not zero. However, Bob cannot detect any error on the edge e(4) because that error does not affect Y_{B,1}. The same fact applies to the case when Y_{B,2} = 0. When Y_{B,3} = 0, Bob cannot detect any error on the edge e(3) because that error does not affect Y_{B,3}.
In the second case (C2), two of the three variables Y_{B,1}, Y_{B,2}, and Y_{B,3} have the same value unless Eve's injection is made. When Y_{B,1} = Y_{B,2}, i.e., (c_1, c_2, c_3) = (1,1,0), Bob can detect an error on the edge e(2) or e(3) because such an error affects Y_{B,1} or Y_{B,2}, so that Y_{B,1} + Y_{B,2} is not zero. However, Bob cannot detect any error on the edge e(4) because that error does not affect Y_{B,1} or Y_{B,2}. Similarly, when Y_{B,2} = Y_{B,3} (resp. Y_{B,1} = Y_{B,3}), Bob cannot detect any error on the edge e(2) (resp. e(3)).
In the third case (C3), the relation Y_{B,1} = Y_{B,2} + Y_{B,3} holds; i.e., (c_1, c_2, c_3) = (1,1,1). Then, the linearity of the code implies that the message has the form a_1 Y_{B,1} + a_2 Y_{B,2} + a_3 Y_{B,3}. Due to the relation Y_{B,1} = Y_{B,2} + Y_{B,3}, the value a_1 Y_{B,1} + a_2 Y_{B,2} + a_3 Y_{B,3} = (a_1 + a_2) Y_{B,2} + (a_1 + a_3) Y_{B,3} is limited to Y_{B,1}, Y_{B,2}, Y_{B,3}, or 0 because our field is F_2. Since the message is not a constant, it is limited to one of Y_{B,1}, Y_{B,2}, or Y_{B,3}. Hence, when it is Y_{B,1}, Eve can obtain the message by eavesdropping on the edge e(7). In the other cases, Eve can obtain the message in the same way.
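The case analysis (C1)-(C3) can be double-checked by brute force. In the sketch below, edge_effect records, per our reading of Figure 2, which of (Y_{B,1}, Y_{B,2}, Y_{B,3}) a unit error on each non-sink edge reaches; every parity check c except (1,1,1) leaves some harmful error undetected, and for (1,1,1) the leakage argument of case (C3) applies instead.

```python
import itertools
import numpy as np

# Effect of a unit error on each non-sink edge on (Y_B,1, Y_B,2, Y_B,3) over F_2;
# e.g., an error on e(5) propagates through e(6), e(7), e(8) to all three outputs.
edge_effect = {1: (1, 1, 1), 2: (1, 0, 0), 3: (0, 1, 0), 4: (0, 0, 1),
               5: (1, 1, 1), 6: (0, 1, 0), 8: (0, 0, 1), 10: (0, 0, 0)}

for c in itertools.product(range(2), repeat=3):
    if not any(c):
        continue
    # harmful = errors that change Bob's output but keep the parity c.Y_B intact
    harmful = [e for e, eff in edge_effect.items()
               if any(eff) and np.dot(c, eff) % 2 == 0]
    print(c, harmful)
# Every c except (1, 1, 1) leaves some harmful error undetected, matching (C1)-(C2);
# c = (1, 1, 1) detects all of them, which is why (C3) needs the secrecy argument.
```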
To resolve this problem, we need to use this network multiple times. Hence, in the next section, we discuss the case of multiple transmissions.

2.8. Wiretap and Replacement Model

In the above subsections, we discussed the case when Eve injects noise into the edges E_A and eavesdrops on the edges E_E. In this subsection, we assume that E_A ⊂ E_E and that Eve eavesdrops on the edges E_E and replaces the information on the edges E_A with other information. While this assumption implies m_5 ≤ m_6 and that the image of η is included in the image of ζ_E, the function η does not necessarily equal the function ζ_E because the order in which Eve sends her replaced information to the heads of the edges does not necessarily equal the order in which Eve intercepts the information on the edges. This case also belongs to the general wiretap and addition model (2), as follows. Modifying the matrix M_j, we define the new matrix M″_j as follows. When there is an index i such that η(i) = j + m_3, the (j + m_3)-th row vector of the new matrix M″_j is defined by [δ_{j+m_3,j′}]_{1≤j′≤m_1}, and the remaining part of M″_j is defined as the identity matrix. Otherwise, M″_j is defined to be M_j. Additionally, we define another matrix F as follows. The ζ_E(i)-th row vector of the matrix F is defined by [θ_{ζ_E(i),j′}]_{1≤j′≤m_1}, and the remaining part of F is defined as the identity matrix. Hence, we have
Y_{B,j} = Σ_{i=1}^{m_3} (M″_{m_7} ⋯ M″_1)_{ζ_B(j),i} X_i + Σ_{i=1}^{m_5} (M″_{m_7} ⋯ M″_1)_{ζ_B(j),η(i)} Z_i,  (18)
Y_{E,j} = Σ_{i=1}^{m_3} (F M″_{m_7} ⋯ M″_1)_{ζ_E(j),i} X_i + Σ_{i=1}^{m_5} (F M″_{m_7} ⋯ M″_1)_{ζ_E(j),η(i)} Z_i.  (19)
Then, we choose the matrices K_B, K_E, H_B, and H_E as K_B := P_B M″_{m_7} ⋯ M″_1 P_A, K_E := P_E F M″_{m_7} ⋯ M″_1 P_A, H_B := P_B M″_{m_7} ⋯ M″_1 P_{E,A}, and H_E := P_E F M″_{m_7} ⋯ M″_1 P_{E,A}, which satisfy the conditions (2) due to (18) and (19). This model (K_B, K_E, H_B, H_E) is called the wiretap and replacement model determined by (V, E) and (E_E, E_A, θ, η). Notice that the projections P_A, P_B, P_E, and P_{E,A} are defined in Section 2.3 and Section 2.4.
Next, we discuss the strategy α under the matrices K_B, K_E, H_B, and H_E such that the added error Z_i is given as a function α_i of the vector [Y_{E,j}]_{j∈w_i}. Since the decision of the injected noise does not depend on the consequences of the decision, we impose the causal condition defined in Definition 1 on the subsets w_i.
When the relation j ∈ w_i holds for the index j with ζ_E(j) = η(i), a strategy α on the wiretap and replacement model (K_B, K_E, H_B, H_E) determined by (V, E) and (E_E, E_A, θ, η) can be written as another strategy α′ on the wiretap and addition model (K′_B, K′_E, H′_B, H′_E) determined by (V, E) and (E_E, E_A, θ), which is defined as α′_i([Y_{E,j′}]_{j′∈w_i}) := α_i([Y_{E,j′}]_{j′∈w_i}) − Y_{E,j}. In particular, due to the condition (5), the optimal choice (η_o, {w_{o,i}}_i) under the partial time-ordered condition satisfies the relation j ∈ w_{o,i} for the index j with ζ_E(j) = η_o(i). That is, under the partial time-ordered condition, a strategy on the wiretap and replacement model can be written as another strategy on the wiretap and addition model.
However, if there is no synchronization among the vertices, Eve can inject the replaced information into the head of an edge before the tail of the edge sends the information on the edge. Then, the partial time-ordered condition does not hold. In this case, the relation j ∈ w_i does not necessarily hold for the index j with ζ_E(j) = η(i). Hence, a strategy α on the wiretap and replacement model (K_B, K_E, H_B, H_E) cannot necessarily be written as another strategy on the wiretap and addition model (K′_B, K′_E, H′_B, H′_E).
To see this fact, we discuss the example given in Section 2.6. In this example, the network structure of the wiretap and replacement model is given by Figure 3.

3. Multiple Transmission Setting

3.1. General Model

Now, we consider the n-transmission setting, where Alice uses the same network n times to send a message to Bob. Alice's input variable (Eve's added variable) is given as a matrix X^n = (X_1, …, X_n) ∈ F_q^{m_3×n} (a matrix Z^n = (Z_1, …, Z_n) ∈ F_q^{m_5×n}), and Bob's (Eve's) received variable is given as a matrix Y_B^n = (Y_{B,1}, …, Y_{B,n}) ∈ F_q^{m_4×n} (a matrix Y_E^n = (Y_{E,1}, …, Y_{E,n}) ∈ F_q^{m_6×n}). Then, we consider the model
Y_B^n = K_B X^n + H_B Z^n,  (20)
Y_E^n = K_E X^n + H_E Z^n,  (21)
whose realization in a concrete network will be discussed in Section 3.2 and Section 3.3. Notice that the relations (20) and (21) with H_E = 0 (only the relation (20)) were treated as the starting point of the paper [20] (of the papers [22,23,24], respectively).
In this case, regarding the n transmissions on one channel as n different edges, we consider n m_5 edges to be contaminated. Then, Eve's strategy α^n is given as n m_5 functions {α_{i,l}}_{1≤i≤m_5, 1≤l≤n} from Y_E^n to the respective components of Z^n. In this case, we extend the uniqueness condition to the n-transmission version.
Definition 3.
For any value of K_E x^n, there uniquely exists y^n ∈ F_q^{m_6×n} such that
y^n = K_E x^n + H_E α^n(y^n).  (22)
This condition is called the n-uniqueness condition.
Since we have n transmissions on each channel, the matrix θ is given as an (n m_1) × (n m_1) matrix. In the following, we see how the matrix θ is given and how the n-uniqueness condition is satisfied in a more concrete setting.

3.2. The Multiple Transmission Setting with Sequential Transmission

This section discusses how the model given in Section 3.1 can be realized in the case of sequential transmission, as follows. Alice sends the first information X_1. Then, Alice sends the second information X_2. Alice sequentially sends the information X_3, …, X_n. Hence, when an injective function τ_E from {1, …, m_1} × {1, …, n} to {1, …, n m_1} gives the time ordering of the n m_1 edges, it satisfies the condition
τ_E(i, l) ≤ τ_E(i′, l′) when i ≤ i′ and l ≤ l′.  (23)
Here, we assume that the topology and dynamics of the network and the edges attacked by Eve do not change during the n transmissions, which is called the stationary condition. All operations in the intermediate nodes are linear. Additionally, we assume that the time ordering of the network flow does not cause any correlation with the delayed information unless Eve's injection is made, as in Figure 1; i.e., the l-th information Y_{B,l} received by Bob is independent of X_1, …, X_{l−1}, X_{l+1}, …, X_n. This is called the independence condition. The independence condition means that there is no correlation with the delayed information. Due to the stationary and independence conditions, the (n m_1) × (n m_1) matrix θ satisfies
θ_{(i,l),(j,k)} = θ̄_{i,j} δ_{k,l},  (24)
where θ̄_{i,j} := θ_{(i,1),(j,1)}. When the m_1 × m_1 matrix θ̄ satisfies the partial time-ordered condition (5), due to (23) and (24), the (n m_1) × (n m_1) matrix θ satisfies the partial time-ordered condition (5) with respect to the time ordering τ_E. Since the stationary condition guarantees that the edges attacked by Eve do not change during the n transmissions, the above condition for θ implies the model (20) and (21). This scenario is called the n-sequential transmission.
Since the independence condition is not trivial, we need to discuss when it is satisfied. If the l-th transmission has no correlation with the delayed information of the previous transmissions for l = 2, …, n, the independence condition holds. In order to satisfy the above independence condition, the acyclic condition for the network graph is often imposed. This is because any causal time ordering of the network flow does not cause any correlation with the delayed information and achieves the max-flow if the network graph has no cycle [31]. In other words, if the network graph has a cycle, there is a possibility that a time ordering of the network flow causes correlation with the delayed information. However, the relations (20) and (21) have no direct relation to the acyclic condition for the network graph; they depend directly on the time ordering of the network flow. That is, the acyclic condition for the network graph is not equivalent to the absence of the effect of delayed information. Indeed, if we employ breaking cycles on intermediate nodes ([31] Example 3.1), even when the network graph has cycles, we can avoid any correlation with the delayed information. (To handle a time ordering with delayed information, one often employs a convolutional code [32]. It is used in sequential transmission and requires synchronization among all nodes. Additionally, all the intermediate nodes are required to make a cooperative coding operation under the control of the sender and the receiver. If we employ breaking cycles, we do not need such synchronization and can avoid any correlation with the delayed information.) Additionally, see the example given in Section 3.5.
To extend the causality condition, we focus on the domain index subsets {w_{i,l}}_{1≤i≤m_5, 1≤l≤n} of {1, …, m_6} × {1, …, n} for Eve's strategy α^n = {α_{i,l}}_{1≤i≤m_5, 1≤l≤n}. Then, we define the causality condition under the order function τ_E.
Definition 4.
We say that the domain index subsets {w_{i,l}}_{i,l} satisfy the n-causal condition under the order function τ_E and the function η from {1, …, m_5} to {1, …, m_1} when the following two conditions hold:
(A1’) 
The relation H_{E;j,i} = 0 holds for (j, l) ∈ w_{i,l}.
(A2’) 
The relation w_{i,l} ⊂ w_{i′,l′} holds when τ_E(η(i), l) ≤ τ_E(η(i′), l′).
Next, we focus on the domain index subsets {w_{i,l}}_{i,l} and the function η from {1, …, m_5} to {1, …, m_1}. We say that the pair (η, {w_{i,l}}_{i,l}) is n-admissible under θ̄ and the order function τ_E when {e(k) | k ∈ Im η} = E_A, the subsets {w_{i,l}}_{i,l} satisfy condition (A2′) of the n-causal condition, and any element (j, l′) ∈ w_{i,l} satisfies
τ_E(ζ_E(j), l′) < γ_{θ̄}(η(i), l),  (25)
where the function γ_{θ̄} is defined as
γ_{θ̄}(j, l) := min_{j′} {τ_E(j′, l) | θ̄_{j′,j} ≠ 0}.  (26)
Here, when no index j′ satisfies the condition θ̄_{j′,j} ≠ 0, γ_{θ̄}(j, l) is defined to be n m_1 + 1. In the same way as in Section 2.5, we find that the n-admissibility of the pair (η, {w_{i,l}}_{i,l}) implies the n-causal condition under τ_E and η for the domain index subsets {w_{i,l}}_{i,l}.
Given two n-admissible pairs (η, {w_{i,l}}_{i,l}) and (η′, {w′_{i,l}}_{i,l}), we say that the pair (η′, {w′_{i,l}}_{i,l}) is superior to (η, {w_{i,l}}_{i,l}) for Eve when w_{η^{-1}(j),l} ⊂ w′_{η′^{-1}(j),l} for j ∈ E_A and l = 1, …, n. Then, we choose the bijective function τ_{E,η} from {1, …, m_5} × {1, …, n} to {1, …, n m_5} such that γ_{θ̄}^η ∘ τ_{E,η}^{-1} is monotonically increasing, where γ_{θ̄}^η is defined as γ_{θ̄}^η(i, l) := γ_{θ̄}(η(i), l). The function τ_{E,η} expresses the order of Eve's contamination. Then, we define w_{η,i,l} := {(j, l′) | τ_E(ζ_E(j), l′) < γ_{θ̄}(η(i), l)}, which satisfies the n-admissibility under θ̄ and the order function τ_E.
Further, when the pair (η, {w_{i,l}}_{i,l}) is n-admissible under θ̄ and τ_E, the condition (25) implies w_{η^{-1}(j),l} ⊂ w_{η,η^{-1}(j),l} for j ∈ E_A and l = 1, …, n; i.e., w_{η,i,l} is the largest subset under the n-admissibility under θ̄ and τ_E. Hence, we obtain the optimality of (η, {w_{η,i,l}}_{i,l}) when θ̄, τ_E, and E_A are given. Although the choice of η is not unique, the choice of w_{η,η^{-1}(j),l} for j ∈ E_A and l = 1, …, n is unique when θ̄, τ_E, and E_A are given.
In the same way as Lemma 1, we find that the n-causal condition with sequential transmission guarantees the n-uniqueness condition as follows.
Lemma 2.
When a strategy α for the n-sequential transmission has domain index subsets satisfying the n-causal condition, the strategy α satisfies the n-uniqueness condition.
Proof. 
Consider a big graph composed of the n m_1 edges {e(i, l)}_{1≤i≤m_1, 1≤l≤n} and the n m_2 vertices {v(j, l)}_{1≤j≤m_2, 1≤l≤n}. In this big graph, the coefficient matrix is given by (24). We assign the number τ_E(i, l) to the n m_1 edges. The n-causal and n-uniqueness conditions correspond to the causal and uniqueness conditions of this big network, respectively. Hence, Lemma 1 implies Lemma 2. □

3.3. Multiple Transmission Setting with Simultaneous Transmission

We consider another scenario realizing the model given in Section 3.1. Usually, we employ an error correcting code for the information transmission on the edges of our graph. For example, when the information transmission is done by wireless communication, an error correcting code is always applied. Now, we assume that the same error correcting code is used on all the edges. Then, we set the length n to the transmitted information length of the error correcting code. In this case, the n transmissions are done simultaneously on each edge. Each node makes the same node operation for the n transmissions, which implies the condition (24) for the (n m_1) × (n m_1) matrix θ. Then, the relations (20) and (21) hold because no delayed information appears. This scenario is called the n-simultaneous transmission.
In fact, when we focus on the mathematical aspect, the n-simultaneous transmission can be regarded as a special case of the n-sequential transmission. In this case, the independence condition always holds, even when the network has a cycle. Further, the n-uniqueness condition can be derived in a simpler way, without discussing the n-causal condition, as follows.
In this scenario, given a function η from {1, …, m_5} to E_A ⊂ {1, …, m_1}, Eve chooses the added errors (Z_{i,1}, …, Z_{i,n}) ∈ F_q^n on the edge e(η(i)) ∈ E_A as a function α_i of the vector [Y_{E,j}]_{j∈w_i} with subsets {w_i}_{1≤i≤m_5} of {1, …, m_6}. Hence, in the same way as in the single transmission, the domain index subsets for α are given as subsets w_i ⊂ {1, …, m_6} for i ∈ {1, …, m_5}. In the same way as Lemma 1, we have the following lemma.
Lemma 3.
When a strategy α for the n-simultaneous transmission has domain index subsets satisfying the causal condition, the strategy α satisfies the n-uniqueness condition.
In addition, the wiretap and replacement model in this setting can be introduced for the n-sequential transmission and the n-simultaneous transmission in the same way as in Section 2.8.

3.4. Non-Local Code and Reduction Theorem

Now, we assume only the model (20) and (21) and the n-uniqueness condition. Since the model (20) and (21) is given, we manage only the encoder at the sender and the decoder at the receiver. Although the operations in the intermediate nodes are linear and operate only on a single transmission, the encoder and the decoder operate across several transmissions. Such a code is called a non-local code, to distinguish it from operations over a single transmission. Here, we formulate a non-local code to discuss the secrecy. Let M and L be the message set and the set of values of the scramble random number, which is often called the private randomness. Then, an encoder is given as a function ϕ_n from M × L to F_q^{m_3×n}, and a decoder is given as a function ψ_n from F_q^{m_4×n} to M. Here, the linearity of ϕ_n and ψ_n is not assumed. The decoder does not use the scramble random number L because it is not shared with the decoder. Our non-local code is the pair (ϕ_n, ψ_n) and is denoted by Φ_n. Then, we denote the message and the scramble random number by M and L. The cardinality of M is called the size of the code and is denoted by |Φ_n|. More generally, when we focus on a sequence {l_n} instead of {n}, an encoder ϕ_n is a function from M × L to F_q^{m_3×l_n}, and the decoder ψ_n is a function from F_q^{m_4×l_n} to M.
Here, we treat K B , K E , H B , and H E as deterministic values and denote the pairs ( K B , K E ) and ( H B , H E ) by K and H , respectively, although Alice and Bob might not have full information about K E , H B , and H E . Additionally, we assume that the matrices K and H do not change during transmission. In the following, we fix Φ n , K , H , and α n . As a measure of the leaked information, we adopt the mutual information I ( M ; Y E n , Z n ) between M and Eve's information ( Y E n , Z n ) . Since the variable Z n is given as a function of Y E n , we have I ( M ; Y E n , Z n ) = I ( M ; Y E n ) . Since the leaked information is a function of Φ n , K , H , α n in this situation, we denote it by I ( M ; Y E n ) [ Φ n , K , H , α n ] .
Definition 5.
The strategy in which we always choose Z n = 0 coincides with the passive attack. This strategy is denoted by α n = 0 .
When K , H are treated as random variables independent of M , L , the leaked information is given as the expectation of I ( M ; Y E n ) [ Φ n , K , H , α n ] . This probabilistic setting expresses the following situation: Eve cannot necessarily choose the attacked edges by herself, but she knows their positions and chooses her strategy depending on the attacked edges.
Remark 1.
It is worth remarking that there are two kinds of formulations in network coding, even when the network has only one sender and one receiver. Many papers [1,8,9,28,29] adopt the formulation in which the users can control the coding operations in intermediate nodes. However, this paper adopts another formulation, in which the non-local coding operations are applied only to the input variable X and the output variable Y B , as in the papers [7,14,20,22,23,24]. In contrast, all intermediate nodes make only linear operations on a single transmission, which is often called local encoding in [22,23,24]. Since the linear operations in intermediate nodes cannot be controlled by the sender and the receiver, this formulation covers the case in which some intermediate nodes fail and always output 0.
In the former setting, it is often allowed to employ private randomness in intermediate nodes. However, we adopt the latter setting; i.e., no non-local coding operation is allowed in intermediate nodes, and each intermediate node is required to apply the same linear operation to each alphabet. That is, the operations in intermediate nodes are linear and do not change during the n transmissions. Private randomness is not employed in intermediate nodes.
Now, we have the following reduction theorem.
Theorem 2
(Reduction theorem). When the triplet ( K , H , α n ) satisfies the uniqueness condition, Eve's information Y E n ( α n ) under strategy α n can be calculated from Eve's information Y E n ( 0 ) under strategy 0 (the passive attack), and conversely Y E n ( 0 ) can be calculated from Y E n ( α n ) . Hence, we have the equation
I ( M ; Y E n ) [ Φ n , K , 0 , 0 ] = I ( M ; Y E n ) [ Φ n , K , H , 0 ] = I ( M ; Y E n ) [ Φ n , K , H , α n ] .
Proof. 
Since the first equation follows from the definition, we show the second equation. We define two random variables Y E n ( 0 ) : = K E X n and Y E n ( α n ) : = K E X n + H E Z n . Due to the uniqueness condition, for each Y E n ( 0 ) = K E X n , we can uniquely identify Y E n ( α n ) . Therefore, we have I ( M ; Y E n ( 0 ) ) ≥ I ( M ; Y E n ( α n ) ) . Conversely, since Y E n ( 0 ) is given as a function of Y E n ( α n ) , Z n , and H E , we have the opposite inequality. □
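The simulation used in this proof can be made concrete with a toy instance. The sketch below assumes a hypothetical 3 × 2 matrix H E over F 5 and a causal strategy α whose i-th component reads only components of Eve's view that are unaffected by the errors not yet determined; all numerical values are ours, chosen only for illustration. It shows Eve reconstructing her active view Y E n ( α n ) from her passive view Y E n ( 0 ) alone, and recovering the passive view back again.

```python
import numpy as np

q = 5                                   # working over F_5 for illustration
Y_E0 = np.array([2, 4, 1])              # hypothetical passive view Y_E(0) = K_E X
H_E = np.array([[0, 0],                 # Z does not affect component 1;
                [1, 0],                 # Z_1 affects component 2;
                [2, 1]])                # Z_1 and Z_2 affect component 3 (causal structure)

# Causal strategy alpha: Z_1 depends only on component 1, Z_2 on components 1 and 2.
alpha = [lambda y: (3 * y[0]) % q,
         lambda y: (y[0] + 2 * y[1]) % q]

z = np.zeros(2, dtype=int)
y = Y_E0.copy()
for i, a in enumerate(alpha):
    z[i] = a(y)                         # the components that alpha_i reads are already final
    y = (Y_E0 + H_E @ z) % q            # update the view with the errors injected so far
print("active view Y_E(alpha):", y)

# Conversely, Y_E(0) = Y_E(alpha) - H_E Z, and Z is a function of Y_E(alpha):
assert np.array_equal((y - H_E @ z) % q, Y_E0)
```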
Remark 2.
Theorem 2 discusses the unicast case. It can be trivially extended to the multicast case because we do not discuss the decoder. It can also be extended to the multiple unicast case, whose network is composed of several pairs of senders and receivers. When there are k pairs in this setting, the message M and the scramble random number L have the forms ( M 1 , … , M k ) and ( L 1 , … , L k ) . Thus, we can apply Theorem 2 to the multiple unicast case. The detailed discussion of this extension is given in the paper [33].
Remark 3.
One may consider the following type of attack when Alice sends the i-th transmission after Bob receives the ( i − 1 ) -th transmission: Eve changes the edge to be attacked in the i-th transmission depending on the information that she obtained in the previous i − 1 transmissions. Such an attack was discussed in [34], though without noise injection. Theorem 2 does not directly cover this situation because it assumes that Eve attacks the same edges in each transmission. However, Theorem 2 can be applied to this kind of attack in the following way: Eve's information with noise injection can be simulated by Eve's information without noise injection even when the attacked edges are changed in the above way.
To see this reduction, we consider m transmissions over the network given by the directed graph ( V , E ) . We define the big graph ( V m , E m ) , where V m : = { ( v , i ) } v ∈ V , 1 ≤ i ≤ m and E m : = { ( e , i ) } e ∈ E , 1 ≤ i ≤ m , and ( v , i ) and ( e , i ) express the vertex v and the edge e in the i-th transmission, respectively. Then, we can apply Theorem 2 with n = 1 to the network given by the directed graph ( V m , E m ) when the attacked edges are changed in the above way. Hence, we obtain the above reduction statement under the uniqueness condition for the network determined by the directed graph ( V m , E m ) .

3.5. Application to Network Model in Section 2.6

We consider how to apply the multiple transmission setting with sequential transmission and n = 2 to the network given in Section 2.6; i.e., we discuss the network given in Figure 1 and Figure 2 over the field F q with n = 2 . Then, we analyze the secrecy by applying Theorem 2.
Assume that Eve eavesdrops the edges e ( 2 ) , e ( 5 ) , e ( 6 ) , e ( 7 ) , e ( 8 ) and contaminates the edges e ( 2 ) , e ( 5 ) , as in Figure 1. Then, we set the function τ E from { 1 , … , 11 } × { 1 , 2 } to { 1 , … , 22 } as
τ E ( i , l ) = i + 11 ( l − 1 ) .
Under the choice of η given in (14), the function τ E , η can be set in another way as
τ E , η ( i , l ) = i + 2 ( l − 1 ) .
Since γ θ ¯ ( 2 , 1 ) = 7 , γ θ ¯ ( 5 , 1 ) = 6 , γ θ ¯ ( 2 , 2 ) = 18 , γ θ ¯ ( 5 , 2 ) = 17 , we have
w η , 1 , 1 = { ( 1 , 1 ) , ( 2 , 1 ) } ,
w η , 2 , 1 = { ( 1 , 1 ) , ( 2 , 1 ) , ( 3 , 1 ) } ,
w η , 1 , 2 = { ( 1 , 1 ) , ( 2 , 1 ) , ( 3 , 1 ) , ( 4 , 1 ) , ( 5 , 1 ) , ( 1 , 2 ) , ( 2 , 2 ) } ,
w η , 2 , 2 = { ( 1 , 1 ) , ( 2 , 1 ) , ( 3 , 1 ) , ( 4 , 1 ) , ( 5 , 1 ) , ( 1 , 2 ) , ( 2 , 2 ) , ( 3 , 2 ) } .
However, when the function τ E is changed to
τ E ( i , l ) = i + 5 ( l − 1 ) for i = 1 , … , 5 ,
τ E ( i , l ) = 5 + i + 6 ( l − 1 ) for i = 6 , … , 11 ,
the sets w η , i , l have a different form, as follows. Under the choice of η given in (14), Eve can choose τ E , η in the same way as (29), and since γ θ ¯ ( 2 , 1 ) = 12 , γ θ ¯ ( 5 , 1 ) = 11 , γ θ ¯ ( 2 , 2 ) = 18 , γ θ ¯ ( 5 , 2 ) = 17 , we have
w η , 1 , 1 = { ( 1 , 1 ) , ( 2 , 1 ) , ( 1 , 2 ) , ( 2 , 2 ) } ,
w η , 2 , 1 = { ( 1 , 1 ) , ( 2 , 1 ) , ( 3 , 1 ) , ( 1 , 2 ) , ( 2 , 2 ) } ,
w η , 1 , 2 = { ( 1 , 1 ) , ( 2 , 1 ) , ( 3 , 1 ) , ( 4 , 1 ) , ( 5 , 1 ) , ( 1 , 2 ) , ( 2 , 2 ) } ,
w η , 2 , 2 = { ( 1 , 1 ) , ( 2 , 1 ) , ( 3 , 1 ) , ( 4 , 1 ) , ( 5 , 1 ) , ( 1 , 2 ) , ( 2 , 2 ) , ( 3 , 2 ) } .
We construct a code for which the secrecy holds and Bob can detect the existence of an error in this case. To this end, we consider two cases: (i) there exists an element κ ∈ F q satisfying the equation κ 2 = κ + 1 ; (ii) no element κ ∈ F q satisfies the equation κ 2 = κ + 1 . Our code works even with n = 1 in case (i), whereas it requires n = 2 in case (ii). For simplicity, we give our code with n = 2 even in case (i).
Assume case (i). Alice's message is M = ( M 1 , M 2 ) ∈ F q 2 , and Alice prepares scramble random numbers L i = ( L i , 1 , L i , 2 ) ∈ F q 2 for i = 1 , 2 . These variables are assumed to be independently and uniformly distributed. She encodes them as X 1 = L 1 , X 2 = L 1 κ + L 2 ( 1 + κ ) + M κ , X 3 = L 2 + M , and X 4 = L 2 . When Z 1 = Z 2 = 0 , Bob receives
Y B , 1 = X 1 + X 2 = L 1 ( 1 + κ ) + L 2 ( 1 + κ ) + M κ ,
Y B , 2 = X 1 + X 3 = L 1 + L 2 + M ,
Y B , 3 = X 1 + X 4 = L 1 + L 2 .
Then, since M = Y B , 2 − Y B , 3 , he recovers the message from Y B , 2 − Y B , 3 .
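As an illustration, the following sketch implements this encoding and decoding over F 5 , where κ = 3 satisfies κ 2 = κ + 1 , so that F 5 is a case (i) field; a single transmission of each symbol is shown because all operations act componentwise.

```python
import random

q, kappa = 5, 3                          # kappa solves kappa^2 = kappa + 1 in F_5
assert (kappa * kappa) % q == (kappa + 1) % q

def encode(M, L1, L2):
    # Alice's encoder from Section 3.5
    X1 = L1 % q
    X2 = (L1 * kappa + L2 * (1 + kappa) + M * kappa) % q
    X3 = (L2 + M) % q
    X4 = L2 % q
    return X1, X2, X3, X4

def bob_receives(X1, X2, X3, X4):
    # Bob's three observations when Z_1 = Z_2 = 0
    return (X1 + X2) % q, (X1 + X3) % q, (X1 + X4) % q

M, L1, L2 = (random.randrange(q) for _ in range(3))
YB1, YB2, YB3 = bob_receives(*encode(M, L1, L2))
assert (YB2 - YB3) % q == M              # Bob decodes via M = Y_B,2 - Y_B,3
```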
As shown in the following, under this code, Eve cannot obtain any information about M even if she makes an active attack under the model given in Figure 1. Eve's information is
$$
\begin{pmatrix} Y_{E,1}\\ Y_{E,2}\\ Y_{E,3}\\ Y_{E,4}\\ Y_{E,5} \end{pmatrix}
=
\begin{pmatrix}
\kappa & \kappa & 1+\kappa \\
0 & 1 & 0 \\
0 & 1 & 0 \\
\kappa & 1+\kappa & 1+\kappa \\
0 & 1 & 0
\end{pmatrix}
\begin{pmatrix} M \\ L_1 \\ L_2 \end{pmatrix}
$$
when Z i = 0 . Proposition 2 guarantees that Eve cannot obtain any information about M when Z i = 0 . Thus, due to Theorem 2, the secrecy holds even when Z i = 0 does not hold.
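The secrecy claim can also be checked by brute force. Continuing the F 5 , κ = 3 instance, the following sketch verifies that the distribution of Eve's five observations under uniform ( L 1 , L 2 ) is the same for every value of M, i.e., that the passive attack leaks nothing about M.

```python
from itertools import product
from collections import Counter

q, kappa = 5, 3

def eve_view(M, L1, L2):
    # The five rows of the coefficient matrix above, applied to (M, L1, L2)
    return ((M * kappa + L1 * kappa + L2 * (1 + kappa)) % q,
            L1 % q,
            L1 % q,
            (M * kappa + L1 * (1 + kappa) + L2 * (1 + kappa)) % q,
            L1 % q)

dists = {m: Counter(eve_view(m, l1, l2) for l1, l2 in product(range(q), repeat=2))
         for m in range(q)}
assert all(dists[m] == dists[0] for m in range(q))  # Eve's view is independent of M
```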
Indeed, the above attack can be interpreted as follows: Eve eavesdrops all edges connected to the intermediate node v ( 2 ) and contaminates all edges incoming to the intermediate node v ( 2 ) . This setting means that the intermediate node v ( 2 ) is partially captured by Eve. As further settings, we consider the case when Eve attacks another node v ( i ) for i = 1 , 3 , 4 . In this case, we allow a slightly stronger attack; i.e., Eve can eavesdrop and contaminate all edges connected to the intermediate node v ( i ) . That is, Eve's attack is summarized as
(B1’) 
Eve can choose any one of the nodes v ( 1 ) , … , v ( 4 ) . When v ( 2 ) is chosen, she eavesdrops all edges connected to v ( 2 ) and contaminates all edges incoming to v ( 2 ) . When v ( i ) is chosen for i = 1 , 3 , 4 , she eavesdrops and contaminates all edges connected to v ( i ) .
To apply Corollary 1 to the analysis of the secrecy, we write down the row vectors v i of Corollary 1 under “Vector” in Table 2. Hence, under this attack, this code retains the same secrecy by the combination of Corollary 1 and Theorem 2.
In case (ii), we set κ to be the matrix $\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}$. Then, we introduce the algebraic extension F q [ κ ] of the field F q generated by an element κ satisfying the equation κ 2 = κ + 1 . We identify an element ( x 1 , x 2 ) ∈ F q 2 with x 1 + x 2 κ ∈ F q [ κ ] . Hence, the multiplication by the matrix κ on F q 2 can be identified with the multiplication by κ in F q [ κ ] . The above analysis then works in case (ii) by identifying F q 2 with the algebraic extension F q [ κ ] .
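A quick numerical check of this identification (a sketch over F 2 , which has no root of κ 2 = κ + 1 and is therefore a case (ii) field):

```python
import numpy as np

q = 2                                     # F_2 is a case (ii) field
kappa = np.array([[0, 1],
                  [1, 1]])                # the matrix playing the role of kappa
I = np.eye(2, dtype=int)
assert np.array_equal((kappa @ kappa) % q, (kappa + I) % q)   # kappa^2 = kappa + 1

# Identify (x1, x2) in F_q^2 with x1 + x2*kappa in F_q[kappa]; multiplication by
# kappa then acts on the coefficient vector as the matrix kappa:
x = np.array([1, 1])                      # the element 1 + kappa
print((kappa @ x) % q)                    # -> [1 0]: kappa*(1+kappa) = kappa + kappa^2 = 1 over F_2
```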

3.6. Error Detection

Next, using the discussion in Section 2.2, we consider another type of security, i.e., the detectability of the existence of an error when n = 2 , under the assumptions (B1'), (B2) and the following alternative assumptions:
(B3’) 
The message set M is F q 2 , and the information on each edge per single use is an element of F q .
(B4’) 
The encoder on the source node can be chosen, but is restricted to be linear. It is allowed to use a scramble random number, which is an element of L : = F q k for a certain integer k. Formally, the encoder is given as a linear function from M × L to F q 8 .
We employ the code given in Section 3.5 and declare that contamination exists when Y B , 1 − ( Y B , 3 + Y B , 2 κ ) is not zero. This code satisfies the secrecy and the detectability as follows.
To consider the case with v ( 2 ) , we set η ( 1 ) = 5 , η ( 2 ) = 2 . In the following, Y B , i for i = 1 , 2 , 3 expresses the variable when Eve makes contamination. Regardless of whether Eve makes contamination, Y B , 2 − Y B , 3 = L 1 + L 2 + Z 1 + M − ( L 1 + L 2 + Z 1 ) = M . Hence, Bob always recovers the original message M. Therefore, this code satisfies the desired security in the case of Figure 1.
In the case of v ( 3 ) , we set η ( 1 ) = 3 , η ( 2 ) = 6 , η ( 3 ) = 9 . Then, Y B , 1 − ( Y B , 3 + Y B , 2 κ ) equals ( Z 1 + Z 2 + Z 3 ) κ . Hence, when Z 1 + Z 2 + Z 3 = 0 , Bob detects no error. In this case, the contamination ( Z 1 , Z 2 , and Z 3 ) does not change Y B , 2 − Y B , 3 ; i.e., it does not cause any error in the decoded message. Hence, in order to detect an error in the decoded message, it is sufficient to check whether Y B , 1 − ( Y B , 3 + Y B , 2 κ ) is zero or not. Since Y B , 2 = X 1 + X 3 + Z 1 + Z 2 + Z 3 , we have M κ = L 1 ( 1 + κ ) + L 2 ( 1 + κ ) + M κ − ( L 1 + L 2 ) ( 1 + κ ) = Y B , 1 − Y B , 3 ( 1 + κ ) . Hence, if Bob knows that only the edges e ( 3 ) , e ( 6 ) , and e ( 9 ) are contaminated, he can recover the message by ( Y B , 1 − Y B , 3 ( 1 + κ ) ) κ^{−1} .
In the case of v ( 4 ) , we set η ( 1 ) = 4 , η ( 2 ) = 8 , η ( 3 ) = 10 , η ( 4 ) = 11 . When Y B , 1 − ( Y B , 3 + Y B , 2 κ ) = − ( Z 1 + Z 2 + Z 4 ) = 0 , Bob detects no error. In this case, the errors Z 1 , Z 2 , and Z 4 do not change Y B , 2 − Y B , 3 . Hence, it is sufficient to check whether Y B , 1 − ( Y B , 3 + Y B , 2 κ ) is zero or not. In addition, if Bob knows that only the edges e ( 4 ) , e ( 8 ) , e ( 10 ) , e ( 11 ) are contaminated, he can recover the message by Y B , 2 ( 1 + κ ) − Y B , 1 .
Similarly, in the case of v ( 1 ) , we set η ( 1 ) = 1 , η ( 2 ) = 5 , η ( 3 ) = 10 . If Bob knows that only the edges e ( 1 ) , e ( 5 ) , e ( 10 ) are contaminated, he can recover the message by the original method Y B , 2 − Y B , 3 because it equals L 1 + L 2 + M + Z 1 − ( L 1 + L 2 + Z 1 ) = M . In summary, when this type of attack is made, Bob can detect the existence of the error. If he identifies the attacked node v ( i ) by another method, he can recover the message.
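The detection rule can be sketched numerically as well. In the sketch below, the entries of Z model the net additive effect of the contamination on Bob's three observations, which is a simplification of the per-edge errors discussed above.

```python
q, kappa = 5, 3                          # kappa^2 = kappa + 1 in F_5

def bob_view(M, L1, L2, Z=(0, 0, 0)):
    # Bob's observations under the code of Section 3.5, with net errors Z
    YB1 = ((L1 + L2) * (1 + kappa) + M * kappa + Z[0]) % q
    YB2 = (L1 + L2 + M + Z[1]) % q
    YB3 = (L1 + L2 + Z[2]) % q
    return YB1, YB2, YB3

def detect(YB1, YB2, YB3):
    # Bob's check statistic; it vanishes when no effective contamination occurred
    return (YB1 - (YB3 + YB2 * kappa)) % q

assert detect(*bob_view(2, 3, 4)) == 0                # clean transmission passes
assert detect(*bob_view(2, 3, 4, Z=(1, 0, 0))) != 0   # a net error on Y_B,1 is flagged
```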

3.7. Solution of Problem Given in Section 2.7

Next, we consider how to resolve the problem raised in Section 2.7. That is, we discuss another type of attack, given as (B1), and study the secrecy and the detectability of the existence of the error under the above-explained code with the assumptions (B2), (B3'), (B4'), and (B5).
To discuss this problem, we divide this network into two layers. The lower layer consists of the edges e ( 7 ) , e ( 9 ) , and e ( 11 ) , which are connected to the sink node. The upper layer consists of the remaining edges. Eve eavesdrops and contaminates any one edge of the upper layer, and eavesdrops any one edge of the lower layer.
Again, we consider the row vectors v i in Corollary 1. The vectors corresponding to the edges of the upper layer are ( 0 , 1 , 0 ) , ( κ , κ , 1 + κ ) , ( 1 , 0 , 1 ) , and ( 0 , 0 , 1 ) . The vectors corresponding to the edges of the lower layer are ( κ , 1 + κ , 1 + κ ) , ( 1 , 1 , 1 ) , and ( 0 , 1 , 1 ) . No linear combination of one vector from the upper layer and one from the lower layer equals ( 1 , 0 , 0 ) , which implies the secrecy condition given in Corollary 1. Hence, the secrecy holds under this type of attack. Since the contamination of this type of attack is contained in the contamination of the attack discussed in the previous subsection, the detectability also holds.
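This exclusion can be verified exhaustively. Continuing the F 5 , κ = 3 instance, the following sketch confirms that ( 1 , 0 , 0 ) lies outside the span of every pair consisting of one upper-layer vector and one lower-layer vector.

```python
from itertools import product

q, kappa = 5, 3
upper = [(0, 1, 0), (kappa, kappa, 1 + kappa), (1, 0, 1), (0, 0, 1)]
lower = [(kappa, 1 + kappa, 1 + kappa), (1, 1, 1), (0, 1, 1)]

# For each pair (one upper-layer, one lower-layer vector), no F_5-linear
# combination may equal (1, 0, 0); otherwise the message row would be exposed.
for u, l in product(upper, lower):
    for a, b in product(range(q), repeat=2):
        comb = tuple((a * x + b * y) % q for x, y in zip(u, l))
        assert comb != (1, 0, 0)
print("Secrecy condition of Corollary 1 holds for all upper/lower pairs.")
```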

4. Conclusions

We have discussed how sequential error injection affects the information leaked to Eve when the node operations are linear. To discuss this problem, we have considered the possibility that the network is not synchronized, so that the information transmission on an edge starts before the end of the information transmission on the previous edge. Hence, Eve might contaminate the information on several edges by using the original information on these edges. Additionally, we have discussed the multiple use of the same network when the topology and the dynamics of the network do not change and there is no correlation with the delayed information.
As a result, we have shown that no advantage is gained by injecting artificial noise on the attacked edges. This result can be regarded as a kind of reduction theorem because the secrecy analysis with contamination can be reduced to that without contamination. Indeed, when linearity is not imposed, there is a counterexample to this reduction theorem [21].
In addition, we have derived the matrix formulas (20) and (21) for the relation between the outputs of Alice and Bob and the inputs of Alice and Eve in the case of multiple transmission. As an extension of Theorem 1, a similar reduction theorem (Theorem 2) holds even for multiple transmission. In fact, as explained in Section 3.7, this extension is essential because there exists an attack model over a network such that the secrecy and the detectability of the error are possible with multiple uses of the same network, while they are impossible with a single use of the network. Additionally, another paper will discuss the application of these results to the asymptotic setting [33].
Indeed, there is a possibility that Eve changes H E sequentially. This problem has been addressed by the paper [35], essentially using the idea of our main theorems (Theorems 1 and 2), because it refers to Proposition 1 of the conference version [36], which is equivalent to our main theorems. This fact shows the importance of our reduction theorem.

Author Contributions

Formal analysis, M.H., M.O. and G.K.; Writing—original draft, M.H.; Writing—review & editing, M.O., G.K. and N.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research (B): 16KT0017, the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research (A): 17H01280, the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research (C): 16K00014, the Kayamori Foundation of Informational Science Advancement: K27-XX-467, the Okawa Research Grant: 15-02, Guangdong Provincial Key Laboratory: 2019B121203002, and the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research (C): 17K05591.

Acknowledgments

M.H. and N.C. are very grateful to Wangmei Guo and Seunghoan Song for helpful discussions and comments.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.

References

  1. Cai, N.; Yeung, R. Secure network coding. In Proceedings of the 2002 IEEE International Symposium on Information Theory (ISIT 2002), Lausanne, Switzerland, 30 June–5 July 2002; p. 323. [Google Scholar]
  2. Bennett, C.H.; Brassard, G.; Crépeau, C.; Maurer, U.M. Generalized privacy amplification. IEEE Trans. Inform. Theory 1995, 41, 1915–1923. [Google Scholar] [CrossRef] [Green Version]
  3. Håstad, J.; Impagliazzo, R.; Levin, L.A.; Luby, M. A Pseudorandom Generator from any One-way Function. SIAM J. Comput. 1999, 28, 1364. [Google Scholar] [CrossRef]
  4. Hayashi, M. Exponential decreasing rate of leaked information in universal random privacy amplification. IEEE Trans. Inform. Theory 2011, 57, 3989–4001. [Google Scholar] [CrossRef] [Green Version]
  5. Matsumoto, R.; Hayashi, M. Secure Multiplex Network Coding. In Proceedings of the 2011 International Symposium on Networking Coding, Beijing, China, 25–27 July 2011. [Google Scholar] [CrossRef]
  6. Matsumoto, R.; Hayashi, M. Universal Secure Multiplex Network Coding with Dependent and Non-Uniform Messages. IEEE Trans. Inform. Theory 2017, 63, 3773–3782. [Google Scholar] [CrossRef]
  7. Kurihara, J.; Matsumoto, R.; Uyematsu, T. Relative generalized rank weight of linear codes and its applications to network coding. IEEE Trans. Inform. Theory 2013, 61, 3912–3936. [Google Scholar] [CrossRef] [Green Version]
  8. Yeung, R.W.; Cai, N. Network error correction, Part I: Basic concepts and upper bounds. Commun. Inf. Syst. 2006, 6, 19–36. [Google Scholar]
  9. Cai, N.; Yeung, R.W. Network error correction, Part II: Lower bounds. Commun. Inf. Syst. 2006, 6, 37–54. [Google Scholar]
  10. Ho, T.; Leong, B.; Koetter, R.; Médard, M.; Effros, M.; Karger, D.R. Byzantine modification detection for multicast networks using randomized network coding. In Proceedings of the 2004 IEEE International Symposium on Information Theory (ISIT 2004), Chicago, IL, USA, 27 June–2 July 2004; p. 144. [Google Scholar]
  11. Jaggi, S.; Langberg, M.; Ho, T.; Effros, M. Correction of adversarial errors in networks. In Proceedings of the 2005 IEEE International Symposium on Information Theory, (ISIT 2005), Adelaide, Australia, 4–9 September 2005; pp. 1455–1459. [Google Scholar]
  12. Kadhe, S.; Sprintson, A.; Zhang, Q.E.; Bakshi, M.; Jaggi, S. Reliable and secure communication over adversarial multipath networks: A survey. In Proceedings of the 10th International Conference on Information, Communications and Signal Processing (ICICS), Singapore, 2–4 December 2015. [Google Scholar]
  13. Zhang, Q.; Kadhe, S.; Bakshi, M.; Jaggi, S.; Sprintson, A. Talking reliably, secretly, and efficiently: A “complete” characterization. In Proceedings of the 2015 IEEE Information Theory Workshop (ITW), Jerusalem, Israel, 26 April–1 May 2015. [Google Scholar] [CrossRef]
  14. Zhang, Q.; Kadhe, S.; Bakshi, M.; Jaggi, S.; Sprintson, A. Coding against a limited-view adversary: The effect of causality and feedback. In Proceedings of the 2015 IEEE International Symposium on Information Theory (ISIT 2015), Hong Kong, China, 14–19 June 2015; pp. 2530–2534. [Google Scholar]
  15. Blackwell, D.; Breiman, L.; Thomasian, A.J. The capacities of certain channel classes under random coding. Ann. Math. Stat. 1960, 31, 558–567. [Google Scholar] [CrossRef]
  16. Ahlswede, R. Elimination of correlation in random codes for arbitrarily varying channels. Z. Wahrsch. Verw. Geb. 1978, 44, 159–175. [Google Scholar] [CrossRef] [Green Version]
  17. Csiszár, I.; Narayan, P. The capacity of the arbitrarily varying channel revisited: Positivity, constraints. IEEE Trans. Inform. Theory 1988, 34, 181–193. [Google Scholar] [CrossRef] [Green Version]
  18. Dey, B.K.; Jaggi, S.; Langberg, M. Sufficiently Myopic Adversaries are Blind. IEEE Trans. Inf. Theory 2019, 65, 5718–5736. [Google Scholar] [CrossRef]
  19. Tian, P.; Jaggi, S.; Bakshi, M.; Kosut, O. Arbitrarily Varying Networks: Capacity-achieving Computationally Efficient Codes. In Proceedings of the 2016 IEEE International Symposium on Information Theory, Barcelona, Spain, 10–15 July 2016. [Google Scholar]
  20. Yao, H.; Silva, D.; Jaggi, S.; Langberg, M. Network Codes Resilient to Jamming and Eavesdropping. IEEE/ACM Trans. Netw. 2014, 22, 1978–1987. [Google Scholar] [CrossRef]
  21. Hayashi, M.; Cai, N. Secure network code over one-hop relay network. arXiv 2020, arXiv:2003.12223. [Google Scholar]
  22. Jaggi, S.; Langberg, M.; Katti, S.; Ho, T.; Katabi, D.; Medard, M. Resilient network coding in the presence of Byzantine adversaries. In Proceedings of the IEEE INFOCOM 2007, Anchorage, AK, USA, 6–12 May 2007; pp. 616–624. [Google Scholar] [CrossRef] [Green Version]
  23. Jaggi, S.; Langberg, M.; Katti, S.; Ho, T.; Katabi, D.; Medard, M.; Effros, M. Resilient Network Coding in the Presence of Byzantine Adversaries. IEEE Trans. Inform. Theory 2008, 54, 2596–2603. [Google Scholar] [CrossRef]
  24. Jaggi, S.; Langberg, M. Resilient network codes in the presence of eavesdropping Byzantine adversaries. In Proceedings of the 2007 IEEE International Symposium on Information Theory (ISIT 2007), Nice, France, 24–29 June 2007; pp. 541–545. [Google Scholar] [CrossRef]
  25. Chan, T.; Grant, A. Capacity bounds for secure network coding. In Proceedings of the Australian Communication Theory Workshop, Christchurch, New Zealand, 30 January–1 February 2008; pp. 95–100. [Google Scholar]
  26. Rouayheb, S.E.; Soljanin, E.; Sprintson, A. Secure network coding for wiretap networks of type II. IEEE Trans. Inform. Theory 2012, 58, 1361–1371. [Google Scholar] [CrossRef]
  27. Ngai, C.-K.; Yeung, R.W.; Zhang, Z. Network generalized hamming weight. In Proceedings of the Workshop on Network Coding, Theory, and Applications, Lausanne, Switzerland, 15–16 June 2009; pp. 48–53. [Google Scholar]
  28. Cai, N.; Yeung, R.W. Secure Network Coding on a Wiretap Network. IEEE Trans. Inform. Theory 2011, 57, 424–435. [Google Scholar] [CrossRef]
  29. Cai, N.; Chan, T. Theory of Secure Network Coding. Proc. IEEE 2011, 99, 421–437. [Google Scholar]
  30. Ahlswede, R.; Cai, N.; Li, S.-Y.R.; Yeung, R.W. Network information flow. IEEE Trans. Inform. Theory 2000, 46, 1204–1216. [Google Scholar] [CrossRef]
  31. Yeung, R.W.; Li, S.-Y.R.; Cai, N.; Zhang, Z. Network Coding Theory; Now Publishers Inc.: Breda, The Netherlands, 2005. [Google Scholar]
  32. Koetter, R.; Médard, M. An Algebraic Approach to Network Coding. IEEE/ACM Trans. Netw. 2003, 11, 782–795. [Google Scholar] [CrossRef] [Green Version]
  33. Hayashi, M.; Cai, N. Asymptotically Secure Network Code for Active Attacks and its Application to Network Quantum Key Distribution. arXiv 2020, arXiv:2003.12225. [Google Scholar]
  34. Shioji, E.; Matsumoto, R.; Uyematsu, T. Vulnerability of MRD-Code-based Universal Secure Network Coding against Stronger Eavesdroppers. IEICE Trans. Fundam. 2010, E93-A, 2026–2033. [Google Scholar] [CrossRef] [Green Version]
  35. Cai, N.; Hayashi, M. Secure Network Code for Adaptive and Active Attacks with No-Randomness in Intermediate Nodes. IEEE Trans. Inform. Theory 2020, 66, 1428–1448. [Google Scholar] [CrossRef] [Green Version]
  36. Hayashi, M.; Owari, M.; Kato, G.; Cai, N. Secrecy and Robustness for Active Attack in Secure Network Coding. In Proceedings of the IEEE International Symposium on Information Theory (ISIT 2017), Aachen, Germany, 25–30 June 2017; pp. 1172–1177. [Google Scholar]
Figure 1. Network of Section 2.6 with name of edges.
Figure 2. Network of Section 2.6 with network flow.
Figure 3. Network of Section 2.6 with the wiretap and replacement model. Eve injects the replaced information on the edges e ( 2 ) and e ( 5 ) .
Table 1. Channel parameters.
m 1 Number of edges
m 2 Number of vertices
m 3 Dimension of Alice’s input information X
m 4 Dimension of Bob’s observed information Y B
m 5 Dimension of Eve’s injected information Z
m 6 Dimension of Eve’s wiretapped information Y E
m 7 m 1 − m 3
Table 2. Summary of security analysis.
Node | Eavesdropping | Vector | η | Detection | Recovery
v ( 1 ) | e ( 1 ) | ( 0 , 1 , 0 ) | η ( 1 ) = 1 | Z 1 κ | Y B , 2 − Y B , 3
 | e ( 5 ) | ( 0 , 1 , 0 ) | η ( 2 ) = 5 | |
 | e ( 10 ) | ( 0 , 1 , 0 ) | η ( 3 ) = 10 | |
v ( 2 ) | e ( 2 ) | ( κ , κ , 1 + κ ) | | Z 2 − Z 1 κ | Y B , 2 − Y B , 3
 | e ( 5 ) | ( 0 , 1 , 0 ) | η ( 1 ) = 5 | |
 | e ( 6 ) | ( 0 , 1 , 0 ) | | |
 | e ( 7 ) | ( κ , 1 + κ , 1 + κ ) | η ( 2 ) = 2 | |
 | e ( 8 ) | ( 0 , 1 , 0 ) | | |
v ( 3 ) | e ( 3 ) | ( 1 , 0 , 1 ) | η ( 1 ) = 3 | ( Z 1 + Z 2 + Z 3 ) κ | ( Y B , 1 − Y B , 3 ( 1 + κ ) ) κ^{−1}
 | e ( 6 ) | ( 0 , 1 , 0 ) | η ( 2 ) = 6 | |
 | e ( 9 ) | ( 1 , 1 , 1 ) | η ( 3 ) = 9 | |
v ( 4 ) | e ( 4 ) | ( 0 , 0 , 1 ) | η ( 1 ) = 4 | − Z 1 − Z 2 − Z 4 | Y B , 2 ( 1 + κ ) − Y B , 1
 | e ( 8 ) | ( 0 , 1 , 0 ) | η ( 2 ) = 8 | |
 | e ( 10 ) | ( 0 , 1 , 0 ) | η ( 3 ) = 10 | |
 | e ( 11 ) | ( 0 , 1 , 1 ) | η ( 4 ) = 11 | |
Vector expresses the row vectors v i of the matrix A in Corollary 1. For the case of v ( 2 ) , the matrix A is given in Equation (33). Detection expresses Y B , 1 − ( Y B , 3 + Y B , 2 κ ) . If this value is not zero, Bob considers that contamination exists. Recovery expresses Bob's method of decoding the message M depending on the identified node v ( i ) .
