Detection Games under Fully Active Adversaries

We study a binary hypothesis testing problem in which a defender must decide whether a test sequence has been drawn from a given memoryless source P0, while an attacker strives to impede the correct detection. With respect to previous works, the adversarial setup addressed in this paper considers an attacker who is active under both hypotheses, namely, a fully active attacker, as opposed to a partially active attacker who is active under one hypothesis only. In the fully active setup, the attacker distorts sequences drawn both from P0 and from an alternative memoryless source P1, up to a certain distortion level, which is possibly different under the two hypotheses, to maximize the confusion in distinguishing between the two sources, i.e., to induce both false positive and false negative errors at the detector, also referred to as the defender. We model the defender–attacker interaction as a game and study two versions of this game, the Neyman–Pearson game and the Bayesian game. Our main result is in the characterization of an attack strategy that is asymptotically both dominant (i.e., optimal no matter what the defender’s strategy is) and universal, i.e., independent of P0 and P1. From the analysis of the equilibrium payoff, we also derive the best achievable performance of the defender, by relaxing the requirement on the exponential decay rate of the false positive error probability in the Neyman–Pearson setup and the tradeoff between the error exponents in the Bayesian setup. Such analysis permits characterizing the conditions for the distinguishability of the two sources given the distortion levels.


I. INTRODUCTION
There are many fields in signal processing and communications where the detection problem should naturally be framed within an adversarial setting: multimedia forensics (MF) [1], spam filtering [2], biometric-based verification [3], one-bit watermarking [4], and digital/analogue transmission under jammer attacks [5], just to name a few (see [6] for other examples).
In particular, the need for adversarial modeling has become evident in security-related applications, and game theory is often harnessed as a useful tool in many research areas, such as steganalysis [7], watermarking [4], intrusion detection systems [8] and adversarial machine learning [9], [10]. In recent literature, game theory and information theory have also been combined to address the problem of adversarial detection, especially in the field of digital watermarking, see, for instance, [4], [11], [12], [13]. In all these works, the problem of designing watermarking codes that are robust to intentional attacks is studied as a game between the information hider and the attacker.
An attempt to develop a general theory for the binary hypothesis testing problem in the presence of an adversary was made in [14]. Specifically, in [14] the general problem of binary decision under adversarial conditions has been addressed and formulated as a game between two players, the defender and the attacker, who have conflicting goals. Given two discrete memoryless sources, P0 and P1, the goal of the defender is to decide whether a given test sequence has been generated by P0 (null hypothesis, H0) or P1 (alternative hypothesis, H1). By adopting the Neyman-Pearson approach, the set of strategies the defender can choose from is the set of decision regions for H0 ensuring that the false positive error probability is lower than a given threshold. On the other hand, the ultimate goal of the attacker in [14] is to cause a false negative decision, so the attacker acts under H1 only. In other words, the attacker modifies a sequence generated by P1, in an attempt to move it into the acceptance region of H0. The attacker is subject to a distortion constraint, which limits his freedom in doing so. Such a struggle between the defender and the attacker is modeled in [14] as a competitive zero-sum game, and the asymptotic equilibrium, that is, the equilibrium when the length of the observed sequence tends to infinity, is derived under the assumption that the defender bases his decision on the analysis of first order statistics only.
In this respect, the analysis conducted in [14] extends that of [15] to the adversarial scenario. Some variants of this attack-detection game have also been studied: in [16], the setting was extended to the case where the sources are known to neither the defender nor the attacker, yet training data from both sources is available to both parties; within this framework, the case where part of the training data available to the defender is corrupted by the attacker has also been studied (see [17]).
There are many situations in which it is reasonable to assume that the attacker is active under both hypotheses, with the goal of causing both false positive and false negative detection errors. For instance, in applications of camera fingerprint detection, an adversary might be interested in removing the fingerprint from a given image, so that the generating camera would not be identified, and, at the same time, in implanting the fingerprint of another camera [18], [19]. Another example comes from watermarking, where an attacker can be interested in either removing the watermark from an image or a video, or injecting it, so as to redistribute the content with a fake copyright and no information about the true ownership [20]. Attacks under both hypotheses may also be present in applications of network intrusion detection [21]. Network intrusion detection systems, in fact, can be subject both to evasion attacks [22], in which an adversary tries to avoid detection by manipulating malicious traffic, and to overstimulation attacks [23], [24], in which the network is overstimulated by an adversary who sends synthetic traffic (matching the legitimate traffic) in order to cause a denial of service.
With the above ideas in mind, in this paper we consider the game-theoretic formulation of the defender–attacker interaction when the attacker acts under both hypotheses. We refer to this scenario as a detection game with a fully active attacker. By contrast, when the attacker acts under hypothesis H1 only (as in [14] and [16]), he is referred to as a partially active attacker. A distinction is made between the case where the underlying hypothesis is known to the attacker and the case where it is not. A little thought, however, immediately indicates that the latter is a special case of the former, and therefore we focus on the former. We define and solve two versions of the detection game with fully active attackers, corresponding to two different formulations of the problem: the Neyman-Pearson formulation and the Bayesian formulation. In contrast to [14], here the players are allowed to adopt randomized strategies. Specifically, the defender adopts randomized decision strategies, while in [14] the defender's strategies were confined to deterministic decision rules. As for the attack, it consists of the application of a channel, whereas in [14] it was confined to the application of a deterministic function. Moreover, the partially active case of [14] can easily be obtained as a special case of the fully active case considered here.
The problem of solving the game and then finding the optimum detector in the adversarial setting is not trivial and may not be possible in general. Thus, we limit the complexity of the problem and make the analysis tractable by confining the decision to depend on a given set of statistics of the observation. Such an assumption, according to which the detector has access to a limited set of empirical statistics of the sequence, is referred to as the limited resources assumption (see [15] for an introduction to this terminology).
In particular, as done in [14], [16], we limit the detection resources to first order statistics, which are, as is well known, sufficient statistics for memoryless systems [25, Section 2.9]. While the sources are indeed assumed memoryless, one might still be concerned regarding the sufficiency of first order statistics in our setting, since the attack channel is not assumed memoryless in the first place. Nonetheless, adopting the limited-resources assumption with first order statistics is motivated mainly by its simplicity, with the understanding that the results can easily be extended to deal with arbitrarily higher order empirical statistics as well. Moreover, an important bonus of this framework is that it allows us to obtain fairly strong results concerning the game between the defender and the attacker, as will be described below.
One of the main results of this paper is the characterization of an attack strategy which is both dominant (i.e., optimal no matter what the defence strategy is) and universal, i.e., independent of the (unknown) underlying sources. Moreover, this optimal attack is the same for both the Neyman-Pearson and Bayesian games. This result continues to hold for the partially active case as well, thus marking a significant difference relative to previous works, where the existence of a dominant strategy was established for the defender only.
Some of our results (in particular, the derivation of the equilibrium point for both the Neyman-Pearson and the Bayesian games) have already appeared, mostly without proofs, in [26]. Here we provide the full proofs of the main theorems, evaluate the payoff at equilibrium for both the Neyman-Pearson and Bayesian games, and include the analysis of the ultimate performance of the games. Specifically, we characterize the so-called indistinguishability region (to be defined formally in Section VI), namely the set of the sources for which it is not possible to attain strictly positive exponents for both false positive and false negative probabilities under the Neyman-Pearson and the Bayesian settings. Furthermore, the setup and analysis presented in [26] are extended by considering a more general case in which the maximum allowed distortion levels the attacker may introduce under the two hypotheses are different.
The paper is organized as follows. In Section II, we establish the notation and introduce the main concepts. In Section III, we formalize the problem, define the detection game with a fully active adversary for both the Neyman-Pearson and the Bayesian settings, and then prove the existence of a dominant and universal attack strategy. The complete analysis of the Neyman-Pearson and Bayesian detection games, namely, the study of the equilibrium point of the game and the computation of the payoff at the equilibrium, is carried out in Sections IV and V, respectively. Finally, Section VI is devoted to the analysis of the best achievable performance of the defender and the characterization of the source distinguishability.

II. NOTATION AND DEFINITIONS
Throughout the paper, random variables will be denoted by capital letters and specific realizations by the corresponding lower case letters. All random variables that denote signals in the system will be assumed to have the same finite alphabet, denoted by A. Given a random variable X and a positive integer n, we denote by X = (X1, X2, ..., Xn), Xi ∈ A, i = 1, 2, ..., n, a sequence of n independent copies of X. According to the above-mentioned notation rules, a specific realization of X is denoted by x = (x1, x2, ..., xn). Sources will be denoted by the letter P. Whenever necessary, we will subscript P with the name of the relevant random variables: given a random variable X, PX denotes its probability mass function (PMF). Similarly, PXY denotes the joint PMF of a pair of random variables (X, Y). For two positive sequences, {an} and {bn}, the notation an ·= bn means that lim_{n→∞} (1/n) log(an/bn) = 0 (exponential equality); likewise, an ·≤ bn means that lim sup_{n→∞} (1/n) log(an/bn) ≤ 0 (exponential inequality).
The type of a sequence x ∈ A^n is defined as the empirical probability distribution P̂x, that is, the vector {P̂x(x), x ∈ A} of the relative frequencies of the various alphabet symbols in x. A type class T(x) is defined as the set of all sequences having the same type as x. When we wish to emphasize the dependence of T(x) on P̂x, we will use the notation T(P̂x). Similarly, given a pair of sequences (x, y), both of length n, the joint type class T(x, y) is the set of sequence pairs {(x', y')} of length n having the same empirical joint probability distribution (or joint type) as (x, y), denoted P̂xy, and the conditional type class T(y|x) is the set of sequences {y'} with P̂xy' = P̂xy.
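As a concrete illustration of types and type classes, the following Python sketch (with helper names of our own choosing) computes the type of a sequence and enumerates a type class by brute force; this is feasible only for very small n and |A|.

```python
from collections import Counter
from itertools import product

def type_of(x, alphabet):
    """Empirical distribution (type) of sequence x over the given alphabet."""
    n = len(x)
    counts = Counter(x)
    return tuple(counts[a] / n for a in alphabet)

A = ('0', '1')
x = ('0', '1', '1', '0')
y = ('1', '0', '0', '1')

# Two sequences lie in the same type class iff their types coincide.
same_type_class = type_of(x, A) == type_of(y, A)

# Enumerate the type class T(x): all length-n sequences with the same type.
T_x = [s for s in product(A, repeat=len(x)) if type_of(s, A) == type_of(x, A)]
```

For n = 4 over a binary alphabet, T(x) for a balanced sequence contains the C(4, 2) = 6 sequences with exactly two ones, which the enumeration confirms.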
Regarding information measures, the entropy associated with P̂x, which is the empirical entropy of x, is denoted by Ĥx(X). Similarly, Ĥxy(X, Y) designates the empirical joint entropy of x and y, and Ĥxy(X|Y) is the empirical conditional entropy of x given y. We denote by D(P||Q) the Kullback-Leibler (K-L) divergence between two sources, P and Q, defined over the same alphabet (see [25]).
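The empirical information measures above can be illustrated numerically as follows (a toy Python sketch; the chain rule used for the conditional entropy is standard):

```python
import math
from collections import Counter

def empirical_entropy(seq):
    """Empirical entropy (in nats) of the empirical distribution of seq."""
    n = len(seq)
    return -sum((c / n) * math.log(c / n) for c in Counter(seq).values())

x = '0011'
y = '0101'
H_x = empirical_entropy(x)                 # H_hat_x(X): ln 2 for a balanced binary sequence
H_xy = empirical_entropy(list(zip(x, y)))  # joint empirical entropy H_hat_xy(X, Y)
H_x_given_y = H_xy - empirical_entropy(y)  # chain rule: H(X|Y) = H(X, Y) - H(Y)
```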
Finally, we use the letter A to denote an attack channel; accordingly, A(y|x) is the conditional probability of the channel output y given the channel input x. Given a permutation-invariant distortion function1 d : A^n × A^n → R+ and a maximum distortion ∆, we define the class C∆ of admissible channels {A(y|x), x, y ∈ A^n} as those that assign zero probability to every y with d(x, y) > n∆.

A. Basics of Game Theory
For the sake of completeness, we introduce some basic definitions and concepts of game theory.
A two-player game is defined as a quadruple (S1, S2, u1, u2), where S1 = {s1,1, ..., s1,n1} and S2 = {s2,1, ..., s2,n2} are the sets of strategies from which the first and the second player can choose, respectively, and ul(s1,i, s2,j), l = 1, 2, is the payoff of the game for player l when the first player chooses strategy s1,i and the second one chooses s2,j. Each player aims at maximizing its payoff function. A pair of strategies (s1,i, s2,j) is called a profile. When u1(s1,i, s2,j) + u2(s1,i, s2,j) = 0, the game is said to be a zero-sum game. For such games, the payoff of the game u(s1,i, s2,j) is usually defined by adopting the perspective of one of the two players: that is, u(s1,i, s2,j) = u1(s1,i, s2,j) = −u2(s1,i, s2,j) if the defender's perspective is adopted, or vice versa. The sets S1, S2 and the payoff functions are assumed known to both players. In addition, we consider strategic games, i.e., games in which the players choose their strategies ahead of time, without knowing the strategy chosen by the opponent.
1 A permutation-invariant distortion function d(x, y) is a distortion function that is unchanged if the same permutation is applied to both x and y.
A common goal in game theory is to determine the existence of equilibrium points, i.e. profiles that in some sense represent a satisfactory choice for both players [27].The most famous notion of equilibrium is due to Nash [28].A profile is said to be a Nash equilibrium if no player can improve its payoff by changing its strategy unilaterally.
Despite its popularity, the practical meaning of the Nash equilibrium is often unclear, since there is no guarantee that the players will end up playing at the Nash equilibrium. A particular kind of games for which stronger forms of equilibrium exist are the so-called dominance solvable games [27]. The concept of dominance solvability is directly related to the notion of dominant and dominated strategies. In particular, a strategy is said to be strictly dominant for one player if it is the best strategy for this player, i.e., the strategy that maximizes the payoff, no matter what the strategy of the opponent may be. In a similar way, we say that a strategy sl,i is strictly dominated by strategy sl,j if the payoff achieved by player l choosing sl,i is always lower than that obtained by playing sl,j, regardless of the strategy of the other player. Recursive elimination of dominated strategies is a common technique for solving games. In the first step, all the dominated strategies are removed from the set of available strategies, since no rational player would ever use them. In this way, a new, smaller game is obtained. At this point, some strategies that were not dominated before may become dominated in the new, smaller version of the game, and hence are eliminated as well. The process goes on until no dominated strategy exists for either player. A rationalizable equilibrium is any profile which survives the iterated elimination of dominated strategies [29], [30]. If at the end of the process only one profile is left, the remaining profile is said to be the only rationalizable equilibrium of the game, which is also the only Nash equilibrium point. Dominance solvable games are easy to analyze since, under the assumption of rational players, we can anticipate that the players will choose the strategies corresponding to the unique rationalizable equilibrium. Another related notion of equilibrium is that of a dominant equilibrium. A dominant equilibrium is a profile
which corresponds to dominant strategies for both players and is the strongest kind of equilibrium that a strategic game may have.
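The iterated elimination procedure described above can be sketched as follows (a minimal Python illustration for a zero-sum game in matrix form; the function name and the example payoff matrix are ours):

```python
def iterated_elimination(U):
    """Iteratively remove strictly dominated pure strategies in a zero-sum game.
    U[i][j] is the payoff to the row player (maximizer); the column player
    minimizes it. Returns the surviving row and column strategy indices."""
    rows = set(range(len(U)))
    cols = set(range(len(U[0])))
    changed = True
    while changed:
        changed = False
        # Remove rows strictly dominated by some other surviving row.
        for i in list(rows):
            if any(all(U[k][j] > U[i][j] for j in cols) for k in rows if k != i):
                rows.remove(i)
                changed = True
        # Remove columns strictly dominated for the minimizing column player.
        for j in list(cols):
            if any(all(U[i][l] < U[i][j] for i in rows) for l in cols if l != j):
                cols.remove(j)
                changed = True
    return rows, cols

rows, cols = iterated_elimination([[3, 1],
                                   [2, 0]])
# Row 0 strictly dominates row 1; column 1 then dominates column 0 for the
# minimizer, leaving the unique rationalizable profile (0, 1).
```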

III. THE DETECTION GAME WITH A FULLY ACTIVE ATTACKER

A. Problem formulation
Given two discrete memoryless sources, P0 and P1, defined over a common finite alphabet A, we denote by x = (x1, ..., xn) ∈ A^n a sequence emitted by one of these sources. The sequence x is available to the attacker. Let y = (y1, y2, ..., yn) ∈ A^n denote the sequence observed by the defender: when an attack occurs under both H0 and H1, the observed sequence y is obtained as the output of an attack channel fed by x.
In principle, we must distinguish between two cases: in the first, the attacker is aware of the underlying hypothesis (hypothesis-aware attacker), whereas in the second case it is not (hypothesis-unaware attacker).
In the hypothesis-aware case, the attack strategy is defined by two different conditional probability distributions, i.e., two different attack channels: A0(y|x), applied when H0 holds, and A1(y|x), applied under H1. Let us denote by Qi(·) the PMF of y under Hi, i = 0, 1. The attack induces the following PMFs on y:

Qi(y) = Σ_{x ∈ A^n} Pi(x) Ai(y|x), i = 0, 1.

Clearly, in the hypothesis-unaware case, the attacker will apply the same channel under H0 and H1, that is, A0 = A1, and we will denote the common attack channel simply by A. Throughout the paper, we focus on the hypothesis-aware case since, in view of this formalism, the hypothesis-unaware case is just a special case.
Regarding the defender, we assume a randomized decision strategy, defined by Φ(Hi|y), which designates the probability of deciding in favor of Hi, i = 0, 1, given y. Accordingly, the probability of a false positive (FP) decision error is given by

P_FP(Φ, A0) = Σ_{y ∈ A^n} Q0(y) Φ(H1|y), (1)

and similarly, the false negative (FN) probability assumes the form

P_FN(Φ, A1) = Σ_{y ∈ A^n} Q1(y) Φ(H0|y). (2)

Figure 1 provides a block diagram of the system with a fully active attacker. Obviously, the partially active case, where no attack occurs under H0, can be seen as a degenerate case of the fully active one, where A0 is the identity channel I. As in [14], due to the limited resources assumption, the defender makes a decision based on first order empirical statistics of y, which implies that Φ(·|y) depends on y only via its type class T(y). Concerning the attack, in order to limit the amount of distortion, we assume a distortion constraint.
In the hypothesis-aware case, we allow the attacker different distortion levels, ∆0 and ∆1, under H0 and H1, respectively. Then, A0 ∈ C∆0 and A1 ∈ C∆1, where, for simplicity, we assume that a common (permutation-invariant) distortion function d(·,·) is adopted in the two cases.
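For concreteness, the induced PMFs Qi and the resulting FP/FN probabilities can be computed explicitly in a toy single-symbol example (a sketch; the sources, the noisy attack channel, and the decision rule below are arbitrary illustrative choices, not part of the formal setup):

```python
A = [0, 1]                      # alphabet
P0 = {0: 0.9, 1: 0.1}           # null-hypothesis source
P1 = {0: 0.2, 1: 0.8}           # alternative source
flip = {x: {y: 0.8 if y == x else 0.2 for y in A} for x in A}  # noisy attack channel

def induced_pmf(P, chan):
    """Q(y) = sum_x P(x) * chan(y | x): the PMF of the observed symbol."""
    return {y: sum(P[x] * chan[x][y] for x in A) for y in A}

def error_probs(phi0, Q0, Q1):
    """phi0[y] = probability of accepting H0 given y; returns (P_FP, P_FN)."""
    p_fp = sum(Q0[y] * (1 - phi0[y]) for y in A)
    p_fn = sum(Q1[y] * phi0[y] for y in A)
    return p_fp, p_fn

Q0, Q1 = induced_pmf(P0, flip), induced_pmf(P1, flip)
phi0 = {0: 1.0, 1: 0.0}         # deterministic rule: accept H0 iff y = 0
p_fp, p_fn = error_probs(phi0, Q0, Q1)
```

Here Q0 = {0: 0.74, 1: 0.26} and Q1 = {0: 0.32, 1: 0.68}, so the rule yields P_FP = 0.26 and P_FN = 0.32.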

B. Definition of the Neyman-Pearson and Bayesian Games
One of the difficulties associated with the fully active setting is that, in the presence of a fully active attacker, both the FP and FN probabilities depend on the attack channels. We therefore consider two different approaches which lead to different formulations of the detection game: in the first, the detection game is based on the Neyman-Pearson criterion, and in the second one, the Bayesian approach is adopted.
For the Neyman-Pearson setting, we define the game by assuming that the defender adopts a conservative approach and imposes an FP constraint pertaining to the worst-case attack under H 0 .
Definition 1. The Neyman-Pearson detection game is a zero-sum, strategic game defined as follows.
• The set SD of strategies allowed to the defender is the class of randomized decision rules {Φ} that satisfy: (i) Φ(H0|y) depends on y only via its type; (ii) max_{A0 ∈ C∆0} P_FP(Φ, A0) ≤ e^{−nλ}, for a prescribed exponent λ > 0.
• The set SA of strategies allowed to the attacker is the class of pairs of attack channels (A0, A1) ∈ C∆0 × C∆1.
• The payoff of the game is u(Φ, A1) = P_FN(Φ, A1); the attacker is in the quest of maximizing u(Φ, A1), whereas the defender wishes to minimize it.
In the above definition, we require that the FP probability decays exponentially fast with n, with an exponential rate at least as large as λ. In the case of a partially active attack (see the formulation in [26]), the FP probability does not depend on the attacker but only on the defender; accordingly, the constraint imposed on the defender in the above formulation becomes P_FP(Φ) ≤ e^{−nλ}. Regarding the attacker, we have SA ≡ C0 × C∆1, where C0 is a singleton that contains the identity channel only.
Another version of the detection game is defined by assuming that the defender follows a less conservative approach, that is, the Bayesian approach, and tries to minimize a particular Bayes risk.
Definition 2. The Bayesian detection game is a zero-sum, strategic game defined as follows.
• The set SD of strategies allowed to the defender is the class of randomized decision rules {Φ} where Φ(H0|y) depends on y only via its type.
• The set SA of strategies allowed to the attacker is the class of pairs of attack channels (A0, A1) ∈ C∆0 × C∆1.
• The payoff of the game is

u(Φ, A0, A1) = P_FN(Φ, A1) + e^{an} P_FP(Φ, A0), (3)

for some constant a, independent of n.
We observe that, in the definition of the payoff, the parameter a controls the tradeoff between the two terms in the exponential scale; whenever possible, the optimum defence strategy is expected to yield error exponents that differ exactly by a, so as to balance the contributions of the two terms of (3).
Notice also that, by defining the payoff as in (3), we are implicitly considering for the defender only the strategies Φ(·|y) such that P_FP(Φ, A0) ·≤ e^{−an}. In fact, any strategy that does not satisfy this inequality yields a payoff u > 1, which cannot be optimal, as it can be improved upon by always deciding in favor of H0 regardless of y (which yields u = 1).
As in [14], we focus on the asymptotic behavior of the game as n tends to infinity. In particular, we are interested in the FP and FN exponents, defined as

ε_FP = lim_{n→∞} −(1/n) log P_FP, ε_FN = lim_{n→∞} −(1/n) log P_FN, (4)

whenever the limits exist. We say that a strategy is asymptotically optimum (or dominant) if it is optimum (dominant) with respect to the asymptotic exponential decay rate (or the exponent, for short) of the payoff.

C. Asymptotically Dominant and Universal Attack
In this subsection, we characterize an attack channel that, for both games, is asymptotically dominant and universal, in the sense of being independent of the unknown underlying sources.This result paves the way to the solution of the two games.
Let u denote a generic payoff function of the form

u(Φ, A0, A1) = γ P_FN(Φ, A1) + β P_FP(Φ, A0), (5)

where β and γ are given nonnegative constants, possibly dependent on n.
We notice that the payoffs of the Neyman-Pearson and Bayesian games defined in the previous section can be obtained as particular cases: specifically, γ = 1 and β = 0 for the Neyman-Pearson game, and γ = 1 and β = e^{an} for the Bayesian one.
Theorem 1. Let cn(x) denote the reciprocal of the total number of conditional type classes {T(y|x)} that satisfy the constraint d(x, y) ≤ n∆ for a given ∆ > 0, namely, the admissible conditional type classes. Define:

A*∆(y|x) = cn(x)/|T(y|x)| if d(x, y) ≤ n∆, and A*∆(y|x) = 0 otherwise. (6)

Among all pairs of channels (A0, A1) ∈ SA, the pair (A*∆0, A*∆1) minimizes the asymptotic exponent of u for every P0, P1, every γ, β ≥ 0, and every permutation-invariant Φ(H0|·).
Proof. We first focus on the attack under H1 and therefore on the FN probability.
Consider an arbitrary channel A1 ∈ C∆1. Let Π : A^n → A^n denote a permutation operator that permutes any member of A^n according to a given permutation matrix, and let

AΠ(y|x) = A1(Πy|Πx).

Since the distortion function is assumed permutation-invariant, the channel AΠ(y|x) introduces the same distortion as A1 and hence satisfies the distortion constraint. Due to the memorylessness of P1 and the assumption that Φ(H0|y) belongs to SD, we have:

P_FN(Φ, AΠ) = P_FN(Φ, A1),

and so, P_FN(Φ, A1) = P_FN(Φ, Ā), where we have defined

Ā(y|x) = (1/n!) Σ_Π AΠ(y|x),

the sum being over all n! permutations; Ā also introduces the same distortion as A1. Now, notice that this channel assigns the same conditional probability to all sequences in the same conditional type class T(y|x). To see why this is true, we observe that any sequence y' ∈ T(y|x) can be seen as being obtained from y through the application of a permutation Π which leaves x unaltered. Then, we have:

Ā(y'|x) = Ā(y|x) for every y' ∈ T(y|x).

Therefore, since Ā(T(y|x)|x) ≤ 1, we argue that

Ā(y|x) ≤ 1/|T(y|x)| = A*∆1(y|x)/cn(x),

which implies that, for every permutation-invariant defence strategy Φ,

P_FN(Φ, A1) = P_FN(Φ, Ā) ≤ max_x (1/cn(x)) · P_FN(Φ, A*∆1),

or equivalently, since 1/cn(x) grows only polynomially with n,

P_FN(Φ, A1) ·≤ P_FN(Φ, A*∆1).

We conclude that A*∆1 minimizes the error exponent of P_FN(Φ, A1) across all channels in C∆1 and for every Φ ∈ SD, regardless of P1.
A similar argument applies to the FP probability to derive the optimum channel under H0; that is, from the memorylessness of P0 and the permutation-invariance of Φ(H1|·), we have:

P_FP(Φ, A0) ·≤ P_FP(Φ, A*∆0)

for every A0 ∈ C∆0. Accordingly, A*∆0 minimizes the error exponent of P_FP(Φ, A0). We then have:

γ P_FN(Φ, A1) + β P_FP(Φ, A0) ·≤ γ P_FN(Φ, A*∆1) + β P_FP(Φ, A*∆0) (15)

for every A0 ∈ C∆0 and A1 ∈ C∆1. Notice that, since the asymptotic inequality is defined on the logarithmic scale, eq. (15) implies that the pair of channels (A*∆0, A*∆1) minimizes the asymptotic exponent of u for any permutation-invariant decision rule Φ(H0|·) and for any γ, β ≥ 0.
According to Theorem 1, for every zero-sum game with payoff function of the form in (5), if Φ is permutation-invariant, the pair of attack channels which is the most favorable to the attacker is (A*∆0, A*∆1), which does not depend on Φ. Then, the optimum attack strategy (A*∆0, A*∆1) is dominant. Specifically, given x, in order to generate a y which causes a detection error within the prescribed maximum allowed distortion, the attacker cannot do any better than randomly selecting an admissible conditional type class according to the uniform distribution and then choosing y at random within this conditional type class. Figure 2 illustrates the intuition behind the definition of the attack channel in (6): since the number of conditional type classes is only polynomial in n, the random choice of the conditional type class does not affect the exponent of the error probabilities; besides, since the decision is the same for all sequences within the same conditional type class, the choice of y within that conditional type class is immaterial.
As an additional result, Theorem 1 states that, whenever an adversary aims at maximizing a payoff function of the form (5), and as long as the defence strategy is confined to the analysis of first order statistics, the (asymptotically) optimum attack strategy is universal w.r.t. the sources P0 and P1, i.e., it depends neither on P0 nor on P1.
Finally, if ∆0 = ∆1 = ∆, the optimum attack consists of applying the same channel A*∆ regardless of the underlying hypothesis, and then the optimum attack strategy is fully universal: the attacker needs to know neither the sources (P0 and P1) nor the underlying hypothesis. In this case, it becomes immaterial whether the attacker is aware or unaware of the true hypothesis. As a consequence of this property, in the hypothesis-unaware case, when the attacker applies the same channel under both hypotheses, subject to a fixed maximum distortion ∆, the optimum channel remains A*∆. As a final remark, according to Theorem 1, for the partially active case there exists an (asymptotically) dominant and universal attack channel. This result marks a considerable difference relative to the results of [14], where the optimum deterministic attack function is found using the rationalizability argument, that is, by exploiting the existence of a dominant defence strategy, and hence it is neither dominant nor universal.
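The optimum attack just described admits a direct, if brute-force, simulation: group the admissible outputs by their conditional type, draw a class uniformly, then draw y uniformly within it. The sketch below (Python, Hamming distortion, tiny n; the function names are ours) follows this two-step recipe:

```python
import random
from collections import defaultdict
from itertools import product

def hamming(x, y):
    """Additive Hamming distortion between two equal-length sequences."""
    return sum(a != b for a, b in zip(x, y))

def sample_optimal_attack(x, alphabet, Delta, rng=random):
    """Sample y from a channel of the form A*_Delta: pick an admissible
    conditional type class uniformly at random, then pick y uniformly
    inside it. Brute-force enumeration; feasible only for tiny n."""
    n = len(x)
    classes = defaultdict(list)
    for y in product(alphabet, repeat=n):
        if hamming(x, y) <= n * Delta:
            # The conditional type class of y given x is identified by the
            # multiset of aligned symbol pairs (the joint type with x).
            key = tuple(sorted(zip(x, y)))
            classes[key].append(y)
    chosen_class = rng.choice(list(classes.values()))
    return rng.choice(chosen_class)

x = ('0', '1', '1', '0')
y = sample_optimal_attack(x, ('0', '1'), Delta=0.25)
# Every sampled y respects the distortion constraint d(x, y) <= n * Delta.
```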

IV. THE NEYMAN-PEARSON DETECTION GAME
In this section, we study the detection game with a fully active attacker in the Neyman-Pearson setup, as defined in Definition 1. From the analysis of Section III-C, we already know that there exists a dominant attack strategy. Regarding the defender, we will determine the asymptotically optimum strategy independently of the dominant pair of attack channels; in particular, as will be seen in Lemma 1 below, an asymptotically dominant defence strategy can be derived from a detailed analysis of the FP constraint.
As a consequence, the Neyman-Pearson detection game has a dominant equilibrium.

A. Optimal Detection and Game Equilibrium
The following lemma characterizes the optimal detection strategy in the Neyman-Pearson setting.
Lemma 1. For the Neyman-Pearson game of Definition 1, the defence strategy

Φ*(H1|y) = min{ e^{n(min_{x: d(x,y) ≤ n∆0} D(P̂x||P0) − λ)}, 1 }, (16)

with Φ*(H0|y) = 1 − Φ*(H1|y), is asymptotically dominant for the defender.
The proof appears in Appendix I-A.
We point out that when the attacker is partially active, it is known from [26] that the optimum defence strategy is

Φ*(H1|y) = min{ e^{n(D(P̂y||P0) − λ)}, 1 }. (17)

From (17), it is easy to argue that there exists a deterministic strategy, corresponding to the Hoeffding test [31], which is asymptotically equivalent to Φ*(H1|y). This result is in line with the one in [14] (Lemma 1), where the class of defence strategies is confined to deterministic decision rules.
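The deterministic Hoeffding-style test mentioned above (accept H0 iff D(P̂y||P0) ≤ λ) can be sketched as follows; divergences are computed in nats, and the source and threshold below are illustrative choices:

```python
import math
from collections import Counter

def kl(p, q):
    """D(p||q) in nats for PMFs over the same finite alphabet."""
    return sum(pa * math.log(pa / q[a]) for a, pa in p.items() if pa > 0)

def hoeffding_test(y, P0, lam):
    """Deterministic Hoeffding-style test: accept H0 iff D(P_hat_y || P0) <= lam.
    Symbols absent from y contribute nothing to the divergence."""
    n = len(y)
    p_hat = {a: c / n for a, c in Counter(y).items()}
    return kl(p_hat, P0) <= lam

P0 = {'0': 0.5, '1': 0.5}
accept_typical = hoeffding_test('0101010101', P0, lam=0.1)   # type matches P0
accept_atypical = hoeffding_test('1111111111', P0, lam=0.1)  # D = ln 2 > 0.1
```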
Intuitively, the extension from (17) to (16) is explained as follows. In the case of a fully active attacker, the defender is subject to a constraint on the maximum FP probability over SA, that is, over the set of admissible channels A ∈ C∆0 (see Definition 1). From the analysis of Section III-C, the channel A*∆0 minimizes the FP exponent over this set. In order to satisfy the constraint for a given sequence y, the defender must handle the worst-case value (i.e., the minimum) of D(P̂x||P0) over all the type classes T(x|y) which satisfy the distortion constraint, or equivalently, over all the sequences x such that d(x, y) ≤ n∆0.
According to Lemma 1, the best defence strategy is asymptotically dominant. Also, since Φ* depends on P0 only, and not on P1, it is referred to as semi-universal.
Concerning the attacker, since the payoff is a special case of (5) with γ = 1 and β = 0, the optimum pair of attack channels is given by Theorem 1 and corresponds to (A*∆0, A*∆1). The following comment is in order. Since the payoff of the game is defined in terms of the FN probability only, it is independent of A0 ∈ C∆0. Furthermore, since the defender adopts a conservative approach to guarantee the FP constraint for every A0, the constraint is satisfied for every A0, and therefore all channel pairs of the form (A0, A*∆1), A0 ∈ C∆0, are equivalent in terms of the payoff. Accordingly, in the hypothesis-aware case, the attacker can employ any admissible channel under H0. In the Neyman-Pearson setting, the sole fact that the attacker is active under H0 forces the defender to take countermeasures that make the choice of A0 immaterial.
Due to the existence of dominant strategies for both players, we can immediately state the following theorem.

Theorem 2. The profile (Φ*, (A*∆0, A*∆1)) is an asymptotically dominant equilibrium of the Neyman-Pearson detection game.

B. Payoff at the Equilibrium
In this section, we derive the payoff of the Neyman-Pearson game at the equilibrium of Theorem 2.
To do this, we will assume an additive distortion function, i.e., d(x, y) = Σ_{i=1}^{n} d(xi, yi). In this case, d(x, y) can be expressed as Σ_{(i,j)∈A²} n_xy(i, j) d(i, j), where n_xy(i, j) = n P̂xy(i, j) denotes the number of occurrences of the pair (i, j) ∈ A² in (x, y). Therefore, the distortion constraint regarding A0 can be rewritten as Σ_{(i,j)∈A²} P̂xy(i, j) d(i, j) ≤ ∆0. A similar formulation holds for A1.
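The equivalence between the per-letter sum and the joint-count form of an additive distortion can be checked directly (a small Python sketch with Hamming distortion; the function name is ours):

```python
from collections import Counter

def distortion(x, y, d):
    """Additive distortion d(x, y) = sum_i d(x_i, y_i), computed equivalently
    from the joint counts n_xy(i, j) as sum_{i,j} n_xy(i, j) * d(i, j)."""
    n_xy = Counter(zip(x, y))
    return sum(cnt * d[pair] for pair, cnt in n_xy.items())

d = {(i, j): 0 if i == j else 1 for i in '01' for j in '01'}  # Hamming
x, y = '0110', '0100'
total = distortion(x, y, d)
# The joint-count form agrees with the direct per-letter sum.
assert total == sum(d[(a, b)] for a, b in zip(x, y))
```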

Let us define

D̂n,∆(P̂y, P0) = min_{P̂x|y: Êxy[d(X,Y)] ≤ ∆} D(P̂x||P0),

where Êxy denotes the empirical expectation, defined as

Êxy[d(X, Y)] = Σ_{(i,j)∈A²} P̂xy(i, j) d(i, j),

and the minimization is carried out over the conditional types P̂x|y for a given P̂y. Accordingly, the strategy in (16) can be rewritten as

Φ*(H1|y) = min{ e^{n(D̂n,∆0(P̂y, P0) − λ)}, 1 }.

When n → ∞, D̂n,∆ becomes

D∆(PY, P) = min_{PX|Y: EXY[d(X,Y)] ≤ ∆} D(PX||P), (21)

where EXY denotes expectation w.r.t. PXY.
Definition (21) can be stated for any PMF P_Y in the probability simplex in R^|A|. Note that the minimization problem in (21) has a unique solution, as it is a convex program.
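To build intuition for the convex program in (21), consider the smallest possible example: a binary alphabet with Hamming per-letter distortion. In this special case (an illustrative reduction we introduce here, not part of the paper's development), the marginal and expected-distortion constraints confine p_X = P_X(1) to the interval |p_X − p_Y| ≤ ∆, and the convexity of D(P_X || P) in p_X yields a closed form:

```python
import numpy as np

def kl(p, q, eps=1e-15):
    """KL divergence D(Bernoulli(p) || Bernoulli(q))."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def D_delta(pY, p0, delta):
    """D_Delta(P_Y, P_0) for a binary alphabet with Hamming distortion:
    min of D(P_X || P_0) over p_X with |p_X - p_Y| <= delta.
    Since D(P_X || P_0) is convex in p_X with minimum at p_X = p0,
    the minimizer is the endpoint of the interval closest to p0."""
    if abs(pY - p0) <= delta:
        return 0.0
    pX = pY - delta if pY > p0 else pY + delta
    return kl(pX, p0)
```

As expected, the functional vanishes on a ∆-neighborhood of p_0, reduces to the plain divergence when ∆ = 0, and is nonincreasing in ∆.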
The function D_∆ will play an important role in the remainder of the paper, especially in the characterization of the asymptotic behavior of the games. To draw a parallel, D_∆ plays a role similar to that of the Kullback-Leibler divergence D in classical detection theory for the non-adversarial case.
The basic properties of the functional D_∆(P_Y, P) are the following: (i) it is continuous in P_Y; (ii) it has convex level sets, i.e., the set {P_Y : D_∆(P_Y, P) ≤ t} is convex for every t ≥ 0. Point (ii) is a consequence of the following property, which will also turn out to be useful for proving some of the results in the sequel (in particular, Theorems 3, 7, and 8).
Property 1.The function D∆ (P Y , P ) is convex in P Y for every fixed P .
The proof follows from the convexity of the divergence functional (see Appendix I-B).
Using the above definitions, the equilibrium payoff is given by the following theorem.

Theorem 3. The FN error exponent at the equilibrium of the Neyman-Pearson game is

ε_FN(λ) = min_{P_Y : D_∆0(P_Y, P_0) ≤ λ} D_∆1(P_Y, P_1).    (22)

The proof, which appears in Appendix I-C, is based on Sanov's theorem [32], [33], and exploits the compactness of the set {P_Y : D_∆0(P_Y, P_0) ≤ λ}.
From Theorem 3 it follows that ε_FN(λ) = 0 whenever there exists a PMF P_Y inside the set {P_Y : D_∆0(P_Y, P_0) ≤ λ} with ∆_1-limited expected distortion from P_1. When this condition does not hold, P_FN(Φ*, A*_∆1) → 0 exponentially fast. For a partially active attacker, the error exponent in (22) becomes

ε_FN(λ) = min_{P_Y : D(P_Y || P_0) ≤ λ} D_∆1(P_Y, P_1).    (23)

It can be shown that the error exponent in (23) is the same as the error exponent of Theorem 2 in [14] (and Theorem 2 in [34]), where deterministic strategies are considered for both the defender and the attacker.
Such equivalence can be explained as follows. As already pointed out, the optimum defence strategy in (17) and the deterministic rule found in [14] are asymptotically equivalent (see the discussion immediately after Lemma 1). Concerning the attacker, even in the more general setup (with randomized strategies) considered here, an asymptotically optimum attack can be derived as in [14], that is, by considering the best response to the dominant defence strategy in [14]. Such an attack consists of minimizing the divergence w.r.t. P_0, namely D(P̂_y || P_0), over all admissible sequences y, and is therefore deterministic. Hence, in the partially active case, the asymptotic behavior of the game is equivalent to that in [14].
The main difference between the setup in [14] and the more general one addressed in this paper lies in the kind of game equilibrium, which is stronger here (namely, a dominant equilibrium) due to the existence of dominant strategies for both the defender and the attacker, rather than for the defender only.
When the distortion function d is a metric, we can state the following result, whose proof appears in Appendix I-D.
Theorem 4. When the distortion function d is a metric, eq. (22) can be rephrased as

ε_FN(λ) = min_{P_Y : D(P_Y || P_0) ≤ λ} D_{∆0+∆1}(P_Y, P_1).    (24)

Comparing eq. (24) with (23) is insightful for understanding the difference between the fully active and partially active cases. Specifically, the FN error exponents of the two cases are the same when the distortion under H_1 in the partially active case is ∆_0 + ∆_1 (instead of ∆_1).
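This equivalence can be checked numerically in the binary alphabet/Hamming distortion special case, where the Hamming distortion is indeed a metric. The sketch below is our own illustration; the closed form used for D_∆ relies on the convexity of the KL divergence over the Hamming (total-variation) ball, an assumption specific to this toy example, and the exponents are minimized over a finite grid of candidate PMFs P_Y.

```python
import numpy as np

def kl(p, q, eps=1e-15):
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def D_delta(pY, p, delta):
    # closed form of D_Delta for a binary alphabet with Hamming distortion
    if abs(pY - p) <= delta:
        return 0.0
    pX = pY - delta if pY > p else pY + delta
    return kl(pX, p)

def eps_fn(lmbda, p0, p1, d0, d1, grid=4001):
    """min of D_Delta1(P_Y, P_1) over {P_Y : D_Delta0(P_Y, P_0) <= lambda},
    approximated over a uniform grid of binary PMFs."""
    best = np.inf
    for pY in np.linspace(0.0, 1.0, grid):
        if D_delta(pY, p0, d0) <= lmbda:
            best = min(best, D_delta(pY, p1, d1))
    return best
```

With the fully active exponent eps_fn(λ, p0, p1, ∆0, ∆1) and the partially active one eps_fn(λ, p0, p1, 0, ∆0 + ∆1), the two values agree up to grid resolution, in line with the statement of Theorem 4.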
When d is not a metric, (24) is only an upper bound on ε_FN(λ), as can be seen from the proof of Theorem 4. Accordingly, in the general case (d not a metric), applying distortion levels ∆_0 and ∆_1 to sequences from, respectively, H_0 and H_1 (in the fully active setup) is more favorable to the attacker than applying a distortion ∆_0 + ∆_1 to sequences from H_1 only (in the partially active setup).

V. THE BAYESIAN DETECTION GAME
In this section, we study the Bayesian game (Definition 2). In contrast to the Neyman-Pearson game, in the Bayesian game the optimal defence strategy is found by assuming that the strategy played by the attacker, namely the optimum pair of channels (A*_∆0, A*_∆1) of Theorem 1, is known to the defender, that is, by exploiting the rationalizability argument (see Section II-A). Accordingly, the resulting optimum strategy is not dominant, and the associated equilibrium is weaker than that of the Neyman-Pearson game.

A. Optimum Defence and Game Equilibrium
Since the payoff in (3) is a special case of (5) with γ = 1 and β = e^{an}, for any defence strategy Φ ∈ S_D, the asymptotically optimum attack channels under H_0 and H_1 are given by Theorem 1 and correspond to the pair (A*_∆0, A*_∆1). Then, we can determine the best defence strategy by assuming that the attacker plays (A*_∆0, A*_∆1) and evaluating the defender's best response to this pair of channels. Our solution for the Bayesian detection game is given in the following theorem, whose proof appears in Appendix II-A.
Theorem 5. Consider the Bayesian detection game of Definition 2. Let Q*_0(y) and Q*_1(y) be the probability distributions induced by channels A*_∆0 and A*_∆1, respectively. Then, the decision rule Φ# in (25) is the optimum defence strategy.
If, in addition, the distortion measure is additive, the defence strategy Φ† in (26) is asymptotically optimum.
It is useful to provide the asymptotically optimum strategy Φ†, in addition to the optimal one Φ#, for the following reason: while Φ# requires the non-trivial computation of the two probabilities Q*_1(y) and Q*_0(y), the strategy Φ†, which asymptotically yields the same payoff, is easier to implement because of its single-letter form.
We can now state the following theorem, which establishes that the profile (Φ†, (A*_∆0, A*_∆1)) is an equilibrium of the Bayesian game.
The analysis in this section can be easily generalized to any payoff function defined as in (5), i.e., for any γ, β ≥ 0.
Finally, we observe that the fact that the equilibrium found in the Bayesian case (namely, a rationalizable equilibrium) is weaker than the equilibrium derived for the Neyman-Pearson game (namely, a dominant equilibrium) is a consequence of the Bayesian game being defined in a less restrictive manner than the Neyman-Pearson game. This is due to the conservative approach adopted in the latter: while in the Bayesian game the defender cares about both the FP and FN probabilities and their tradeoff, in the Neyman-Pearson game the defender does not care about the value of the FP probability provided that its exponent is larger than λ, which is automatically guaranteed by restricting the set of strategies. This restriction simplifies the game so that a dominant strategy can be found for the restricted game.

B. Equilibrium Payoff
We now derive the equilibrium payoff of the Bayesian game. As in the Neyman-Pearson game, we assume an additive distortion measure. For simplicity, we focus on the asymptotically optimum defence strategy Φ†. We have the following theorem.
Theorem 7. Let the Bayesian detection game be as in Definition 2, and let (Φ†, (A*_∆0, A*_∆1)) be the equilibrium profile of Theorem 6. The asymptotic exponential rate of the equilibrium Bayes payoff u is given by

lim_{n→∞} −(1/n) ln u = min_{P_Y} max{ D_∆1(P_Y, P_1), D_∆0(P_Y, P_0) − a }.    (27)

The proof appears in Appendix II-B.
According to Theorem 7, the asymptotic exponent of u is zero if there exists a PMF P*_Y with ∆_1-limited expected distortion from P_1 such that D_∆0(P*_Y, P_0) ≤ a. Therefore, when we focus on the case of a zero asymptotic exponent of the payoff, the parameter a plays a role similar to that of λ in the Neyman-Pearson game. By further inspecting the exponent expressions of Theorems 7 and 3, we observe that, when a = λ, the exponent in (27) is smaller than or equal to the one in (22), with equality holding only when both (27) and (22) vanish. However, comparing the two cases in general is difficult because of the different definitions of the payoff functions and, in particular, the different roles played by the parameters λ and a. In the Neyman-Pearson game, in fact, the payoff corresponds to the FN probability and is not affected by the value of the FP probability, provided that its exponent is larger than λ; consequently, the ratio between the FP and FN error exponents at the equilibrium is generally smaller than λ (apart from the case in which the asymptotic exponent of the payoff is zero). In the Bayesian case, the payoff is a weighted combination of the two types of errors, so the term with the largest exponent is the dominating one, namely, the one which determines the asymptotic behavior; in this case, the parameter a determines the exact tradeoff between the FP and FN exponents in the equilibrium payoff.

VI. SOURCE DISTINGUISHABILITY
In this section, we investigate the performance of the Neyman-Pearson and Bayesian games as functions of λ and a, respectively. From the expressions of the equilibrium payoff exponents, it is clear that the Neyman-Pearson and the Bayesian payoffs increase as λ and a decrease, respectively. In particular, by setting λ = 0 and a = 0, we obtain the largest achievable payoffs of both games, which correspond to the best achievable performance for the defender. Therefore, we say that two sources are distinguishable under the Neyman-Pearson (resp. Bayesian) setting if there exists a value of λ (resp. a) such that the FP and FN exponents at the equilibrium of the game are simultaneously strictly positive. When such a condition does not hold, we say that the sources are indistinguishable. Specifically, in this section, we characterize, under both the Neyman-Pearson and the Bayesian settings, the indistinguishability region, defined as the set of alternative sources that cannot be distinguished from a given source P_0, given the attack distortion levels ∆_0 and ∆_1. Although each game has a different asymptotic behavior, we will see that the indistinguishability regions in the Neyman-Pearson and the Bayesian settings are the same.
The study of source distinguishability under adversarial conditions carried out in this section extends, in a way, the Chernoff-Stein lemma [25] to the adversarial setup (see [34]).
We start by proving the following result for the Neyman-Pearson game.
Theorem 8. Given two memoryless sources P_0 and P_1 and distortion levels ∆_0 and ∆_1, the maximum achievable FN exponent for the Neyman-Pearson game is:

ε_FN(0) = lim_{λ→0+} ε_FN(λ) = min_{P_Y : D_∆0(P_Y, P_0) = 0} D_∆1(P_Y, P_1),    (28)

where ε_FN(λ) is as in Theorem 3.
The theorem is an immediate consequence of the continuity of ε_FN(λ) as λ → 0+, which follows from the continuity of D_∆ with respect to P_Y and from the density of the set {P_Y : D_∆0(P_Y, P_0) ≤ λ} in {P_Y : D_∆0(P_Y, P_0) = 0} as λ → 0+ (footnote 7).
We notice that, if ∆_0 = ∆_1 = 0, there is only one admissible point in the set in (28), namely P_Y = P_0; then, ε_FN(0) = D(P_0 || P_1), which corresponds to the best achievable FN exponent known from the classical literature for the non-adversarial case (Stein lemma [25], Theorem 11.8.3).
Regarding the Bayesian setting, we have the following theorem, the proof of which appears in Appendix III-A.
Theorem 9. Given two memoryless sources P_0 and P_1 and distortion levels ∆_0 and ∆_1, the maximum achievable exponent of the equilibrium Bayes payoff is

lim_{a→0+} lim_{n→∞} −(1/n) ln u = min_{P_Y} max{ D_∆1(P_Y, P_1), D_∆0(P_Y, P_0) },    (29)

where the inner limit on the left-hand side is as defined in Theorem 7.
Since D_∆1(P_Y, P_1) and, similarly, D_∆0(P_Y, P_0) are convex functions of P_Y and attain their minimum at P_1, resp. P_0 (footnote 8), the minimum over P_Y of the maximum between these two quantities (right-hand side of (29)) is attained when D_∆1(P*_Y, P_1) = D_∆0(P*_Y, P_0), for some PMF P*_Y. This resembles the best achievable exponent of the Bayesian probability of error in the non-adversarial case, which is attained when D(P*_Y || P_0) = D(P*_Y || P_1) for some P*_Y (see [25], Theorem 11.9.1). In that case, from the expression of the divergence function, such a P*_Y can be found in closed form, and the resulting exponent is equivalent to the Chernoff information (see Section 11.9 in [25]).
From Theorems 8 and 9, it follows that there is no positive λ, resp. a, for which the asymptotic exponent of the equilibrium payoff is strictly positive whenever there exists a PMF P_Y satisfying both of the following conditions:

D_∆0(P_Y, P_0) = 0 and D_∆1(P_Y, P_1) = 0.    (30)

In this case, then, P_0 and P_1 are indistinguishable under both the Neyman-Pearson and the Bayesian settings. We observe that the condition D_∆(P_Y, P_X) = 0 is equivalent to the following (footnote 9):

min_{Q_XY : (Q_XY)_X = P_X, (Q_XY)_Y = P_Y} E_XY[d(X, Y)] ≤ ∆,    (31)

where the expectation E_XY is w.r.t. Q_XY. In computer vision applications, the left-hand side of (31) is known as the Earth Mover Distance (EMD) between P_X and P_Y, denoted by EMD_d(P_X, P_Y) (or, by symmetry, EMD_d(P_Y, P_X)) [35]. It is also known as the ρ-bar distortion measure [36].

Footnote 7: This holds true by Property 1.
Footnote 8: The fact that D_∆0 (D_∆1) is 0 in a ∆_0-limited (∆_1-limited) neighborhood of P_0 (P_1), and not just at P_0 (P_1), does not affect the argument.
Footnote 9: For ease of notation, given a joint PMF Q_XY with marginal PMFs P_X and P_Y, we use the notation (Q_XY)_Y = P_Y (resp. (Q_XY)_X = P_X) as short for Σ_x Q_XY(x, y) = P_Y(y), ∀y ∈ A (resp. Σ_y Q_XY(x, y) = P_X(x), ∀x ∈ A).
A brief comment concerning the analogy between the minimization in (31) and optimal transport theory is worthwhile. The minimization problem in (31) is known in the Operations Research literature as the Hitchcock Transportation Problem (TP) [37]. Referring to the original Monge formulation of this problem [38], P_X and P_Y can be interpreted as two different ways of piling up a certain amount of soil; then, P_XY(x, y) denotes the quantity of soil shipped from location (source) x in P_X to location (sink) y in P_Y, and d(x, y) is the cost of shipping a unit amount of soil from x to y. In transport theory terminology, P_XY is referred to as a transportation map. From this perspective, evaluating the EMD corresponds to finding the minimal cost of transporting one pile of soil into the other. Further insights on this parallel can be found in [34].
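For small alphabets, the Hitchcock transportation problem is a plain linear program. A possible sketch using SciPy's general-purpose linprog solver (an implementation choice of ours; dedicated transportation solvers would be faster) is:

```python
import numpy as np
from scipy.optimize import linprog

def emd(PX, PY, d):
    """Earth Mover Distance: min E[d(X, Y)] over joint PMFs Q_XY with
    X-marginal PX (row sums) and Y-marginal PY (column sums)."""
    A = len(PX)
    c = np.asarray(d, dtype=float).reshape(-1)   # cost of each cell Q(i, j)
    A_eq, b_eq = [], []
    for i in range(A):                           # row-sum constraints: = PX
        row = np.zeros((A, A)); row[i, :] = 1.0
        A_eq.append(row.reshape(-1)); b_eq.append(PX[i])
    for j in range(A):                           # column-sum constraints: = PY
        col = np.zeros((A, A)); col[:, j] = 1.0
        A_eq.append(col.reshape(-1)); b_eq.append(PY[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun
```

For a binary alphabet with Hamming cost, the optimal plan moves the excess mass directly, so the EMD reduces to |P_X(1) − P_Y(1)|.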
We summarize our findings in the following corollary, which characterizes the conditions for distinguishability under both the Neyman-Pearson and the Bayesian settings.
Corollary 1 (Corollary to Theorems 8 and 9). Given a memoryless source P_0 and distortion levels ∆_0 and ∆_1, the set of PMFs that cannot be distinguished from P_0 in both the Neyman-Pearson and Bayesian settings is given by

Γ = {P : min_{P_Y : EMD_d(P_0, P_Y) ≤ ∆_0} EMD_d(P_Y, P) ≤ ∆_1}.    (32)

The set Γ is the indistinguishability region. By definition (see the beginning of this section), the PMFs inside Γ are those for which, as a consequence of the attack, the FP and FN probabilities cannot tend to zero simultaneously with strictly positive exponents. Clearly, if ∆_0 = ∆_1 = 0, that is, in the non-adversarial case, Γ = {P_0}, as any two distinct sources are always distinguishable.
When d is a metric, for a given P ∈ Γ, the computation of the optimum P Y can be traced back to the computation of the EMD between P 0 and P , as stated by the following corollary, whose proof appears in Appendix III-B.
Corollary 2 (Corollary to Theorems 8 and 9). When d is a metric, given the source P_0 and distortion levels ∆_0 and ∆_1, for any fixed P, the minimum in (32) is achieved by

P_Y = αP_0 + (1 − α)P, with α ∈ [0, 1] such that EMD_d(P_0, P_Y) = ∆_0.    (33)

Then, the set of PMFs that cannot be distinguished from P_0 in the Neyman-Pearson and Bayesian settings is given by

Γ = {P : EMD_d(P_0, P) ≤ ∆_0 + ∆_1}.    (34)

According to Corollary 2, when d is a metric, the performance of the game depends only on the sum of the distortions, ∆_0 + ∆_1, and it is immaterial how this amount is distributed between the two hypotheses.
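In the binary/Hamming toy case, where EMD_d(P, Q) reduces to |P(1) − Q(1)|, the convex combination in (33) can be verified directly. The functions below are hypothetical helpers for this example only, and the explicit coefficient α = 1 − ∆_0/EMD_d(P_0, P) is our own specialization of (33) to this case:

```python
import numpy as np

def emd_hamming_binary(p, q):
    # For a binary alphabet with Hamming cost, the EMD equals |p - q|
    return abs(p - q)

def optimal_PY(p0, p, delta0):
    """Convex combination P_Y = a*P_0 + (1 - a)*P with EMD(P_0, P_Y) = delta0,
    assuming delta0 <= EMD(P_0, P) (otherwise P_Y = P is already feasible)."""
    e = emd_hamming_binary(p0, p)
    a = 1.0 - delta0 / e             # coefficient making EMD(P_0, P_Y) = delta0
    return a * p0 + (1 - a) * p
```

Since P_Y lies on the segment between P_0 and P, the triangle inequality is tight, and the residual distance to P is exactly EMD_d(P_0, P) − ∆_0.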
In the general case (d not a metric), the condition on the EMD stated in (34) is sufficient for P_0 and P to be indistinguishable, that is, Γ ⊇ {P : EMD_d(P_0, P) ≤ ∆_0 + ∆_1} (see the discussion at the end of the proof of Corollary 2 in Appendix III-B). Furthermore, in the case of an L_p^p distortion function (p ≥ 1), i.e., d(x, y) = Σ_{i=1}^n |x_i − y_i|^p, we have the following corollary.
Corollary 3 (Corollary to Theorems 8 and 9). When d is the L_p^p distortion function, for some p ≥ 1, the set Γ can be bounded as in (35). Corollary 3 can be proven by exploiting the Hölder inequality [39] (see Appendix III-C).

VII. CONCLUSIONS
We considered the problem of binary hypothesis testing when an attacker is active under both hypotheses, so that the attack aims at inducing both false negative and false positive errors. By modeling the defender-attacker interaction as a game, we defined and solved two different detection games: the Neyman-Pearson and the Bayesian game. This paper extends the analysis in [14], where the attacker is active under the alternative hypothesis only. Another aspect of greater generality is that here both players are allowed to use randomized strategies. By relying on the method of types, the main result of this paper is the existence of an attack strategy that is both dominant and universal, that is, optimal regardless of the statistics of the sources. The optimum attack strategy is also independent of the underlying hypothesis, namely fully universal, when the distortion introduced by the attacker is the same in the two cases. From the analysis of the asymptotic behavior of the equilibrium payoff, we established conditions under which the sources can be reliably distinguished in the fully active adversarial setup. The theory developed permits assessing the security of detection in adversarial settings and gives insights on how the detector should be designed so as to make the attack hard.
Among the possible directions for future work, we mention the extension to multiple hypothesis testing.
Another interesting direction is the extension to continuous alphabets, which calls for an extension of the method of types to this case, or to more realistic, yet analytically tractable, models of finite-alphabet sources, such as Markov sources. As mentioned in the introduction, it would also be relevant to overcome the limitation to first-order statistics by extending the analysis to higher-order statistics and deriving equilibria in a similar fashion. Finally, we mention the case of unknown sources, where the sources are estimated from training data, possibly corrupted by the attacker. In this scenario, the detection game has been studied in the partially active case, with both uncorrupted and corrupted training data [16], [17]. The extension of such analyses to the fully active scenario considered in this paper is a further interesting direction for future research.

ACKNOWLEDGMENT
We thank Alessandro Agnetis of the University of Siena, for the useful discussions on optimization concepts underlying the computation of the EMD.

APPENDIX I NEYMAN-PEARSON DETECTION GAME
This appendix contains the proofs of the results in Section IV.

A. Proof of Lemma 1
Whenever it exists, the dominant defence strategy can be obtained by solving: where, then, H' ⊂ H, the set H' being separable in Q_XY and Q'_XY. Accordingly, (I.9)-(I.10) can be upper bounded by the minimum in (I.12). By the convexity of D((Q_XY)_X || P) with respect to Q_XY, the bound follows. Note that the above relation is not strict, since it might be that (Q_XY)_X = (Q'_XY)_X = P. Then, an upper bound for D_∆(λP_Y1 + (1 − λ)P_Y2, P) is given by the minimum, thus proving (I.6).

C. Proof of Theorem 3
We start by proving the upper bound on the FN probability; we then move on to the lower bound. By combining the upper and lower bounds, we conclude that the lim sup and the lim inf coincide. Therefore, the limit of the sequence (1/n) ln P_FN exists, and the theorem is proven.

D. Proof of Theorem 4
First, observe that, by exploiting the definition of D_∆, (22) can be rewritten so as to obtain (24).

APPENDIX II BAYESIAN DETECTION GAME

This appendix contains the proofs of the results in Section V.

A. Proof of Theorem 5

Given the probability distributions Q*_0(y) and Q*_1(y) induced by A*_∆0 and A*_∆1, respectively, the optimum decision rule is deterministic and is given by the likelihood ratio test (LRT) [40], which proves the optimality of the decision rule in (25).
To prove the asymptotic optimality of the decision rule in (26), let us approximate Q 0 (y) and Q 1 (y) using the method of types as follows:
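The single-letter rule in (26) is in the spirit of a per-letter log-likelihood ratio test. The following sketch is illustrative only: it treats the two distributions as product (i.i.d.) measures built from per-letter PMFs, which the actual attacked distributions Q*_0 and Q*_1 need not be, and the threshold parameter tau is our own choice.

```python
import numpy as np

def lrt_decide(y, Q0, Q1, tau=0.0):
    """Likelihood ratio test: accept H1 iff log Q1(y) - log Q0(y) >= tau.
    Q0, Q1 are per-letter PMFs indexed by the alphabet symbols, so the
    sequence probability factorizes as Q_h(y) = prod_i Q_h(y_i)."""
    y = np.asarray(y)
    llr = float(np.sum(np.log(Q1[y]) - np.log(Q0[y])))  # log-likelihood ratio
    return 1 if llr >= tau else 0
```

In this toy form, sequences dominated by symbols likely under Q1 are assigned to H_1, and vice versa.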

B. Proof of Theorem 7
In order to make the expression of the payoff u at the profile (Φ†, (A*_∆0, A*_∆1)) explicit, let us first evaluate the two error probabilities at the equilibrium. Below, we derive the lower and upper bounds on the probability of y under H_1, when the attack channel is A*_∆1. The same bounds hold for Q*_0(y), with D̂_∆0 replacing D̂_∆1. For the FN probability, the upper bound is derived analogously. From (II.14), we see that, as argued, the profile (Φ†, (A*_∆0, A*_∆1)) leads to an FP exponent always at least as large as a.
We are now ready to evaluate the asymptotic behavior of the payoff of the Bayesian detection game: where the asymptotic equality in the last line follows from the density of the set of empirical probability distributions of n-length sequences in the probability simplex, and from the continuity of the to-be-minimized expression in round brackets as a function of P_Y.

APPENDIX III SOURCE DISTINGUISHABILITY
This appendix contains the proofs for Section VI.

A. Proof of Theorem 9
The theorem directly follows from Theorem 7. In fact, by letting

e_a(P_Y) = max{ D_∆1(P_Y, P_1), D_∆0(P_Y, P_0) − a },    (III.1)

for a ≥ 0, the limit in (29) can be derived as follows: where the order of the limit and the minimum can be exchanged because of the uniform convergence of e_a(P_Y) to e_0(P_Y) as a tends to 0.

B. Proof of Corollary 2
The corollary can be proven by exploiting the fact that, when d is a metric, the EMD is a metric as well, so that EMD_d(P_0, P) satisfies the triangle inequality. In this case, it is easy to argue that the P_Y achieving the minimum in (32) is the one for which the triangle relation holds with equality, which corresponds to a convex combination of P_0 and P (i.e., a PMF lying on the straight line between P_0 and P) with combination coefficient α such that EMD_d(P_0, P_Y) (or, equivalently, by symmetry, EMD_d(P_Y, P_0)) is exactly equal to ∆_0.
Formally, let X ∼ P_0 and Z ∼ P. We want to find the PMF P_Y which solves (III.3). From the arbitrariness of the choice of P_XY and P_YZ, if we let P*_XY and P*_YZ be the joint distributions achieving the EMD between X and Y, and between Y and Z, respectively, we get

EMD(P_0, P) ≤ EMD(P_0, P_Y) + EMD(P_Y, P).    (III.6)

From the above relation, we can derive a lower bound for the to-be-minimized quantity in (III.3). We now show that the P_Y defined in (33) achieves this lower bound while satisfying the constraint EMD(P_0, P_Y) ≤ ∆_0, and hence attains the minimum value in (III.3).
To prove the second part of the corollary, we just need to observe that a PMF P belongs to the indistinguishability set in (32) if and only if

EMD(P_Y, P) = EMD(P_0, P) − ∆_0 ≤ ∆_1,    (III.10)

that is, EMD(P_0, P) ≤ ∆_0 + ∆_1.
From the above proof, we notice that, for any P in the set in (34), i.e., such that EMD_d(P_0, P) ≤ ∆_0 + ∆_1, the PMF P_Y = αP_0 + (1 − α)P with α as in (33) satisfies EMD(P_Y, P_0) = ∆_0 and EMD(P_Y, P) ≤ ∆_1 for any choice of d. Then, when d is not a metric, the region in (34) is contained in the indistinguishability region.

C. Proof of Corollary 3
By inspecting the minimization in (32), we see that, for any source P that cannot be distinguished from P_0, it is possible to find a source P_Y such that EMD_d(P_Y, P) ≤ ∆_1 and EMD_d(P_Y, P_0) ≤ ∆_0. In order to prove the corollary, we need to show that such a P lies inside the set defined in (35).
We give the following definition. Given two random variables X and Y, the Hölder inequality applied to the expectation operator [39] reads:

E[|XY|] ≤ (E[|X|^p])^{1/p} (E[|Y|^q])^{1/q}, with 1/p + 1/q = 1.    (III.11)

Footnote 14: We also argue that the choice made for P_YZ minimizes the expected distortion between Y and Z, i.e., it yields
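The inequality can be checked numerically on empirical (uniform) expectations over finite samples; the helper below (our own illustration, not part of the proof) returns the nonnegative gap between the two sides:

```python
import numpy as np

def holder_gap(x, y, p):
    """RHS minus LHS of the Hoelder inequality
    E|XY| <= (E|X|^p)^{1/p} (E|Y|^q)^{1/q}, with 1/p + 1/q = 1,
    for the empirical (uniform) expectation over the samples x, y."""
    q = p / (p - 1.0)                          # conjugate exponent
    lhs = np.mean(np.abs(x * y))
    rhs = np.mean(np.abs(x) ** p) ** (1 / p) * np.mean(np.abs(y) ** q) ** (1 / q)
    return rhs - lhs
```

For p = q = 2 the inequality reduces to Cauchy-Schwarz, with equality when the two samples are proportional.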

The notation a_n ≐ b_n stands for exponential equivalence, i.e., lim_{n→∞} (1/n) ln(a_n/b_n) = 0, while a_n ≤̇ b_n designates that lim sup_{n→∞} (1/n) ln(a_n/b_n) ≤ 0. For a given real s, we denote [s]+ = max{s, 0}. We use the notation U(·) for the Heaviside step function.

Fig. 1. Schematic representation of the adversarial setup considered in this paper. In the case of a partially active attacker, channel A_0 corresponds to the identity channel.