Article

Optimal Detection under the Restricted Bayesian Criterion

Shujun Liu, Ting Yang and Hongqing Liu
1 College of Communication Engineering, Chongqing University, Chongqing 400044, China
2 Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
* Author to whom correspondence should be addressed.
Entropy 2017, 19(7), 370; https://doi.org/10.3390/e19070370
Submission received: 8 May 2017 / Revised: 11 July 2017 / Accepted: 18 July 2017 / Published: 19 July 2017
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)

Abstract
This paper aims to find a suitable decision rule for a binary composite hypothesis-testing problem with a partial or coarse prior distribution. To alleviate the negative impact of the information uncertainty, a constraint is imposed so that the maximum conditional risk cannot be greater than a predefined value. The objective of this paper is therefore to find the optimal decision rule that minimizes the Bayes risk under this constraint. By applying Lagrange duality, the constrained optimization problem is transformed into an unconstrained optimization problem. In doing so, the restricted Bayesian decision rule is obtained as a classical Bayesian decision rule corresponding to a modified prior distribution. Based on this transformation, the optimal restricted Bayesian decision rule is analyzed and the corresponding algorithm is developed. Furthermore, the relation between the Bayes risk and the predefined value of the constraint is also discussed. The Bayes risk obtained via the restricted Bayesian decision rule is a strictly decreasing and convex function of the constraint on the maximum conditional risk. Finally, numerical results, including a detection example, are presented and agree with the theoretical results.

1. Introduction

Bayesian, Minimax, and Neyman-Pearson (NP) decisions are three common approaches in the applications of signal detection and processing [1,2,3,4,5,6,7,8]. For instance, a Bayesian approach is proposed in [2] for signal detection in compressed sensing (CS). In [4], a Minimax framework is introduced for multiclass classification, which can be applied to general data including imagery and other types of high-dimensional data. In order to detect ultra-wideband (UWB) signals in the presence of dense multipath channels and ambient noise, the NP theorem is used to derive the UWB signal detector [5]. The NP and Bayesian frameworks are also utilized to assess the performance of channel-aware binary-decision fusion over a shared Rayleigh flat-fading channel with multiple antennas at the Decision Fusion Center (DFC) [8], and the DFC is widely used in spectrum sensing for cognitive radio scenarios [9].
In the classical hypothesis-testing problem, a relevant part of uncertainty is usually represented by the prior probability distributions [10]. The aim of the Bayesian criterion is to minimize the Bayes risk [11], and the Bayesian decision rule is determined based on posterior probabilities, where the prior information is assumed to be completely known. On the other hand, no prior information is considered for the Minimax decision rule [12], which minimizes the maximum of the conditional risks over the parameter space. In addition, the Neyman-Pearson decision rule can also be conceived in the presence of prior distributions [13,14,15]. Therefore, the Bayesian, Neyman-Pearson, and Minimax approaches can be viewed as three different ways of exploiting prior information. In the former two cases, prior information is considered to be completely available, whereas no prior information is required in the latter. In practice, however, these extreme cases rarely occur, and partial prior information, or prior information with uncertainty, is often available, such as in the adaptive vector subspace signal detection in partially homogeneous Gaussian disturbance and structured (unknown) deterministic interference within the framework of invariance theory [16].
In most practical cases, the prior distributions are available with a certain degree of uncertainty because they are usually obtained based on previous decisions and experience [8]. As a result, the Bayesian and Neyman-Pearson approaches are ineffective due to the absence of complete prior information, and the Minimax approach achieves a poor performance since the available partial prior information is ignored. In order to utilize the partial information and to achieve a better performance, several approaches have been studied [8,17,18,19,20,21,22,23,24,25], for example, maximum entropy (ME), Γ-minimax, restricted Bayes, and restricted Neyman-Pearson approaches, to name but a few. For instance, the ME method is utilized in [23] to translate the information contained in the known form of the likelihood into a prior distribution for Bayesian inference. The restricted NP approach is applicable to a binary composite hypothesis-testing problem in the presence of prior information uncertainty [10]. In [25], the group minimax is obtained through the emergent theory of quantizing with Bregman divergences, and a closed-form Stolarsky mean expression is obtained by optimizing the minimax Bayes risk error divergence for binary simple hypothesis testing with a partially known prior probability.
To the best of our knowledge, no previous work has focused on the optimal decision rule under the restricted Bayesian criterion for a binary composite hypothesis-testing problem in the presence of prior distribution uncertainty, where the uncertainties may exist not only in the prior probability of the null hypothesis, but also in the distribution of each parameter value under the null and the alternative hypotheses. In this paper, we utilize the constraint that the conditional risks should not exceed a predefined value to reduce the negative effects of the uncertain prior information. The focus of this paper is therefore to find the optimal decision rule that minimizes the Bayes risk subject to this constraint, and then to explore the relation between the Bayes risk and the predefined value.
In order to solve the optimization problem, Lagrange duality is applied to convert the minimization of the Bayes risk under the constraints on the conditional risks into an unconstrained problem. Based on this conversion, the constrained minimization is equivalent to the minimization of the Bayes risk with a modified prior distribution. Finally, the corresponding theorems and algorithms are developed to search for the restricted Bayes decision rule. If there is no uncertainty in the prior distribution, the constraint on the conditional risks is not necessary. In such a case, the classical Bayesian decision rule is applicable, which minimizes the Bayes risk without any constraints. If the value of the constraint is larger than the maximum conditional risk obtained by the classical Bayesian approach, the constraint on the conditional risks is ineffective and the restricted Bayesian decision rule is identical to the classical Bayesian decision rule. On the other hand, if the prior information is completely uncertain, the Minimax decision rule can be utilized to minimize the worst-case (maximum) conditional risk. It should be noted that the lowest constraint achievable in the restricted Bayesian approach is equal to the Bayes risk obtained via the Minimax decision rule. Therefore, the classical Bayesian and Minimax approaches are two extreme cases of the restricted Bayesian approach. In addition, the Bayes risk is a strictly decreasing and convex function of the constraint on the conditional risks in the restricted Bayesian approach. The main contributions of this paper are summarized as follows:
  • Formulation of the restricted Bayesian framework, which aims to minimize the Bayes risk under the constraint on the conditional risks.
  • Derivations of the restricted Bayes decision rule.
  • Algorithm for searching the restricted Bayes decision rule.
  • Characteristics of the Bayes risk versus the constraint on the conditional risks.
The remainder of this paper is organized as follows: in Section 2, a restricted Bayesian framework is formulated for a binary composite hypothesis-testing problem, which aims to minimize the Bayes risk under the constraint on the conditional risks. The optimal restricted Bayesian decision rule is explored in Section 3 and the corresponding algorithm is provided. Furthermore, the relation between the Bayes risk and the constraint is discussed in Section 4. Finally, numerical examples are presented in Section 5 to illustrate the theoretical results and conclusions are made in Section 6.

2. Problem Formulation

In the theory of signal detection, detection problems such as radar and communication signal detection are usually formulated as hypothesis-testing problems, and the corresponding framework provides a theoretical and analytical basis for the detection of useful signals. In this paper, we consider a binary composite hypothesis-testing problem with a partially known prior distribution, given by:
$$H_i : X \mid \theta \sim p_{\theta}, \quad \theta \in \Lambda_i, \quad i = 0, 1,$$
where H₀ and H₁ are the null and the alternative hypotheses, respectively, X is a random variable with sample space Γ and a K-dimensional observation vector x ∈ ℝ^K, p_θ(x) denotes the pdf of x for a given parameter value Θ = θ, and Λ₀ and Λ₁ are the respective sets of all possible parameter values of Θ under H₀ and H₁. Intuitively, the union of Λ₀ and Λ₁ forms the parameter set Λ, i.e., Λ = Λ₀ ∪ Λ₁, and Λ₀ ∩ Λ₁ = ∅. In addition, the prior distribution of Θ is denoted by ω(θ), which is usually estimated based on previous observations and known up to a given degree of uncertainty due to the estimation errors.
If the Bayes risk is calculated based on the estimated prior distribution and minimized under the classical Bayesian criterion, the estimation errors are ignored entirely. In doing so, a poor performance is obtained if the estimated distribution differs significantly from the correct one. On the other hand, if the Minimax criterion is utilized and the maximum conditional risk is minimized, then the useful prior information contained in the estimated prior distribution is not exploited. In order to utilize the estimated prior distribution and alleviate the negative impact caused by the mismatch between the estimated prior distribution and the correct one, the restricted Bayesian criterion is applied in this paper. More specifically, this paper aims to minimize the Bayes risk, calculated based on the estimated prior distribution, under the constraint that the maximum conditional risk stays below a significance level that can be adjusted based on the degree of uncertainty in the estimated prior distribution.
Accordingly, the restricted Bayes optimization problem can be formulated by:
$$\min_{\phi} \int_{\Lambda} R_{\theta}(\phi)\, \omega(\theta)\, d\theta$$
subject to:
$$\max_{\theta \in \Lambda} R_{\theta}(\phi) \le \alpha,$$
where r(ϕ) = ∫_Λ R_θ(ϕ) ω(θ) dθ = E{R_θ(ϕ)} denotes the Bayes risk, ϕ represents the decision rule which maps the observation vector to 0 or 1, α is the upper limit on the maximum conditional risk, and R_θ(ϕ) represents the conditional risk of ϕ(·) for Θ = θ and θ ∈ Λ. Prior to calculating the conditional risk R_θ(ϕ), a cost function C[i, θ] is used to assign costs to the decision results, where C[i, θ] denotes the cost of choosing H_i when Θ = θ and θ ∈ Λ. The conditional risk R_θ(ϕ) is calculated as the average cost of the decision rule ϕ(x) for Θ = θ, given by:
$$R_{\theta}(\phi) = E_{\theta}\{C[\phi(x), \theta]\} = \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx.$$
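As a concrete illustration of how the conditional risk above can be evaluated numerically, the following sketch computes R_θ(ϕ) for a fixed threshold rule. It assumes a scalar Gaussian observation model and a uniform 0-1 cost assignment; both are hypothetical choices made only for this example and are not taken from the detection problem studied later in the paper.

```python
# Minimal sketch: numerical evaluation of the conditional risk
# R_theta(phi) = integral_Gamma C[phi(x), theta] * p_theta(x) dx,
# assuming a scalar Gaussian model x = theta + n and uniform 0-1 costs
# (hypothetical choices for illustration only).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# C[i][j]: cost of deciding H_i when theta belongs to Lambda_j.
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def label(theta):
    """Index j of the set containing theta (here Lambda_0 = {0}, Lambda_1 = {1})."""
    return 0 if theta == 0.0 else 1

def phi(x, threshold=0.5):
    """A fixed threshold decision rule phi(x) in {0, 1}."""
    return 1 if x > threshold else 0

def conditional_risk(theta, sigma=1.0):
    """Average cost of phi under p_theta = N(theta, sigma^2)."""
    integrand = lambda x: C[phi(x), label(theta)] * norm.pdf(x, loc=theta, scale=sigma)
    value, _ = quad(integrand, -10.0, 10.0, points=[0.5])
    return value

print(conditional_risk(0.0))  # false-alarm-type risk, roughly 0.31
print(conditional_risk(1.0))  # miss-type risk, roughly 0.31
```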
In order to solve the constrained optimization problem in (2) and (3), we introduce a regularization parameter λ to construct an unconstrained optimization problem as below:
$$\min_{\phi} \left\{ \lambda \int_{\Lambda} \omega(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta + (1 - \lambda) \max_{\theta \in \Lambda} \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx \right\},$$
which is also a transformation of the Lagrangian of the inequality-constrained optimization problem in (2) and (3), where λ is designed according to α and 0 ≤ λ ≤ 1. In particular, λ decreases as α decreases, and this fact can be utilized to adjust the value of λ. Accordingly, the unconstrained optimization problem in (5) with a suitable λ is an alternative representation of the constrained optimization problem in (2) and (3) according to Lagrange duality; their equivalence will be proved in the next section.
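To make the duality argument more explicit, the following sketch introduces an auxiliary multiplier μ, which is not used elsewhere in the paper, and shows one standard way to read the unconstrained objective in (5) as a rescaled Lagrangian of (2) and (3).

```latex
% Illustrative sketch only; the multiplier \mu is an auxiliary quantity.
\mathcal{L}(\phi,\mu)
   = \int_{\Lambda} R_{\theta}(\phi)\,\omega(\theta)\,d\theta
   + \mu\Big(\max_{\theta\in\Lambda} R_{\theta}(\phi)-\alpha\Big),
   \qquad \mu \ge 0.
% Dividing by (1+\mu) and dropping the constant term -\mu\alpha/(1+\mu),
% minimizing \mathcal{L}(\phi,\mu) over \phi is equivalent to the problem in (5)
% with \lambda = 1/(1+\mu) \in (0,1]: a tighter constraint (smaller \alpha)
% corresponds to a larger \mu and hence a smaller \lambda, consistent with the text.
```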

3. Restricted Bayes Decision Rule

In this section, based on the problem formulation in Section 2, the characteristics of the optimal decision rule under the restricted Bayes criterion are investigated first, and then an algorithm for finding the restricted Bayes decision rule is developed.

3.1. Characteristics of the Restricted Bayes Decision Rule

According to the formulation in (5), the following theorems are developed to characterize the restricted Bayes decision rule under certain conditions.
Theorem 1.
Let g(θ) = λω(θ) + (1 − λ)v(θ), where v(θ) is any valid pdf. If ϕ is the classical Bayes decision rule for the modified prior density g(θ) and satisfies the following equality:
$$\int_{\Lambda} v(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta = \max_{\theta \in \Lambda} \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx,$$
then ϕ is a solution of the optimization problem in Equation (5).
The proof is presented in the Appendix. Theorem 1 indicates that the solution of the unconstrained optimization problem in (5) is given by a classical Bayes decision rule for a modified prior density under certain conditions. In other words, the optimization problem in (5) is equivalent to a classical Bayesian optimization problem for the modified prior distribution g(θ) = λω(θ) + (1 − λ)v(θ) if g(θ) satisfies the equality in Theorem 1.
Next, in order to illustrate the equivalence between the optimization problem in (5) and that in (2) and (3), a proposition is developed as follows.
Proposition 1.
Under the conditions in Theorem 1, ϕ is also the solution of the optimization problem in Equations (2) and (3) if max_{θ∈Λ} ∫_Γ C[ϕ(x), θ] p_θ(x) dx = α.
The proof is given in the Appendix. Proposition 1 implies that if the decision rule ϕ defined in Theorem 1 meets the constraint in (3) with equality, then it also provides a solution for the restricted Bayes optimization problem in (2) and (3). In other words, the minimum Bayes risk is achieved when the maximum conditional risk for θ ∈ Λ is equal to the upper limit α. Theorem 1 and Proposition 1 also build a relationship between λ and α. For any λ, the achievable upper limit α can be calculated by the equality in Proposition 1.
In addition, the modified prior distribution g(θ) specified in Theorem 1 is the least favorable distribution among a family of pdfs that have the same form as g(θ). Explicitly, the achievable minimum Bayes risk corresponding to the g(θ) specified in Theorem 1 is greater than or equal to that corresponding to any other distribution of the form ĝ(θ) = λ̂ω(θ) + (1 − λ̂)v̂(θ), where λ̂ ≥ λ and v̂(θ) is any valid pdf. Theorem 2 further states this case.
Theorem 2.
Under the conditions in Theorem 1, the g(θ) specified in Theorem 1 maximizes the Bayes risk among all probability distributions of the form ĝ(θ) = λ̂ω(θ) + (1 − λ̂)v̂(θ), where λ̂ ≥ λ and v̂(θ) is any valid pdf. Equivalently:
$$\int_{\Lambda} g(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta \;\ge\; \int_{\Lambda} \hat{g}(\theta) \int_{\Gamma} C[\hat{\phi}(x), \theta]\, p_{\theta}(x)\, dx\, d\theta,$$
where ϕ and ϕ ^ are the classical Bayes decision rules corresponding to g ( θ ) and g ^ ( θ ) , respectively.
The proof is presented in the Appendix. As pointed out in Theorem 2, g(θ) is the least favorable distribution, and this property can be utilized to search for its explicit expression. From the definitions of g(θ) and ĝ(θ), λ is a special case of λ̂, and only the case λ̂ = λ is of concern in practical applications.
For the rest of this section, we first present the solution to the classical Bayes decision rule, and then develop the algorithm for finding the g ( θ ) specified in Theorem 1 and the optimal decision rule for the restricted Bayesian optimization problem.

3.2. Classical Bayes Decision Rule

As discussed above, the classical Bayes decision rule ϕ corresponding to g ( θ ) can be expressed by:
$$\phi = \arg\min_{\phi} \int_{\Lambda} g(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta = \arg\min_{\phi} \int_{\Lambda} g(\theta)\, R_{\theta}(\phi)\, d\theta,$$
where g(θ) = λω(θ) + (1 − λ)v(θ). Since R_θ(ϕ) = E_θ{C[ϕ(x), θ]} = E_θ{C[ϕ(x), Θ̂] | Θ̂ = θ}, where Θ̂ ~ g(θ), the modified Bayes risk r̂(ϕ) is calculated by:
$$\hat{r}(\phi) = \int_{\Lambda} g(\theta)\, R_{\theta}(\phi)\, d\theta = E\{E_{\theta}\{C[\phi(x), \hat{\Theta}] \mid \hat{\Theta} = \theta\}\} = E\{C[\phi(x), \hat{\Theta}]\},$$
where the last equality holds due to the iterated expectation E{B} = E{E{B|A}}. Therefore, r̂(ϕ) can simply be viewed as the cost of using ϕ averaged over Θ̂ and x. By using the iterated expectation again, r̂(ϕ) is rewritten as:
$$\hat{r}(\phi) = E\{E\{C[\phi(x), \hat{\Theta}] \mid \mathbf{x}\}\}.$$
According to the Bayes lemma [11], the optimal decision rule to minimize r ^ ( ϕ ) is expressed by:
$$\phi(x) = \begin{cases} 1 & \text{if } E\{C[1, \hat{\Theta}] \mid \mathbf{x} = x\} < E\{C[0, \hat{\Theta}] \mid \mathbf{x} = x\} \\ 0 \text{ or } 1 & \text{if } E\{C[1, \hat{\Theta}] \mid \mathbf{x} = x\} = E\{C[0, \hat{\Theta}] \mid \mathbf{x} = x\} \\ 0 & \text{if } E\{C[1, \hat{\Theta}] \mid \mathbf{x} = x\} > E\{C[0, \hat{\Theta}] \mid \mathbf{x} = x\}. \end{cases}$$
In many cases, the costs over sets Λ₀ and Λ₁ are uniform, i.e.:
$$C[i, \hat{\Theta}] = C_{ij}, \quad \hat{\Theta} \in \Lambda_j.$$
Usually, the cost of a correct decision is less than that of a wrong one, i.e., C₁₁ < C₀₁ and C₀₀ < C₁₀. Then (11) is simplified as:
$$\phi(x) = \begin{cases} 1 & \text{if } L(x) > l_0 \\ 0 \text{ or } 1 & \text{if } L(x) = l_0 \\ 0 & \text{if } L(x) < l_0, \end{cases}$$
where L(x) = P(Θ̂ ∈ Λ₁ | x = x) / P(Θ̂ ∈ Λ₀ | x = x), l₀ = (C₁₀ − C₀₀)/(C₀₁ − C₁₁), and P(Θ̂ ∈ Λ_j | x = x) represents the conditional probability of Θ̂ ∈ Λ_j given that x = x. Based on the total probability formula, P(Θ̂ ∈ Λ_j | x = x) is calculated by:
$$P(\hat{\Theta} \in \Lambda_j \mid \mathbf{x} = x) = \frac{p(x \mid \hat{\Theta} \in \Lambda_j)\, P(\hat{\Theta} \in \Lambda_j)}{p(x)},$$
where p(x) = Σ_{j=0}^{1} p(x | Θ̂ ∈ Λ_j) P(Θ̂ ∈ Λ_j), and P(Θ̂ ∈ Λ_j) = ∫_{Λ_j} g(θ) dθ = p(H_j) denotes the probability of Θ̂ ∈ Λ_j, j = 0, 1. Based on the notation introduced in [11], p(x | Θ̂ ∈ Λ_j) is given as:
$$p(x \mid \hat{\Theta} \in \Lambda_j) = \int_{\Lambda} p_{\theta}(x)\, g_j(\theta)\, d\theta,$$
where g_j(θ) denotes the pdf of Θ̂ given that Θ̂ ∈ Λ_j, and is:
$$g_j(\theta) = \begin{cases} 0 & \text{if } \theta \notin \Lambda_j \\ g(\theta) / P(\hat{\Theta} \in \Lambda_j) & \text{if } \theta \in \Lambda_j. \end{cases}$$
Accordingly, Equation (13) is rewritten as:
$$\phi(x) = \begin{cases} 1 & \text{if } \int_{\Lambda} p_{\theta}(x)\, g_1(\theta)\, d\theta > \gamma \int_{\Lambda} p_{\theta}(x)\, g_0(\theta)\, d\theta \\ 0 \text{ or } 1 & \text{if } \int_{\Lambda} p_{\theta}(x)\, g_1(\theta)\, d\theta = \gamma \int_{\Lambda} p_{\theta}(x)\, g_0(\theta)\, d\theta \\ 0 & \text{if } \int_{\Lambda} p_{\theta}(x)\, g_1(\theta)\, d\theta < \gamma \int_{\Lambda} p_{\theta}(x)\, g_0(\theta)\, d\theta, \end{cases}$$
where γ = p(H₀)(C₁₀ − C₀₀) / (p(H₁)(C₀₁ − C₁₁)). As a result, the classical Bayes decision rule is specified by the observation densities in (15), which depend on the probability density of the observation x and the conditional densities g_j(θ), while the decision threshold is determined by the prior probabilities p(H_j) and the costs over the sets Λ₀ and Λ₁.
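For a discrete parameter set, the integrals in (15) reduce to finite sums and the rule can be evaluated directly. The sketch below assumes Gaussian observation densities p_θ and a small hypothetical modified prior g(θ); both are illustration choices, not the mixture model used in Section 5.

```python
# Sketch of the classical Bayes rule in (15) for a discrete parameter set,
# assuming Gaussian observation densities p_theta (illustration only).
from scipy.stats import norm

def bayes_rule(x, g, costs, sigma=1.0):
    """g: dict mapping (theta, j) -> prior mass of the modified prior g(theta),
    where j = 0 or 1 indicates theta in Lambda_0 or Lambda_1.
    costs = (C00, C01, C10, C11). Returns phi(x) in {0, 1}."""
    C00, C01, C10, C11 = costs
    p_H = [sum(w for (_, jj), w in g.items() if jj == j) for j in (0, 1)]
    # Discrete version of integral p_theta(x) g_j(theta) dtheta (times p(H_j)).
    num = [sum(w * norm.pdf(x, loc=theta, scale=sigma)
               for (theta, jj), w in g.items() if jj == j) for j in (0, 1)]
    gamma = (p_H[0] * (C10 - C00)) / (p_H[1] * (C01 - C11))
    likelihood_ratio = (num[1] / p_H[1]) / (num[0] / p_H[0])
    return 1 if likelihood_ratio > gamma else 0

# Hypothetical example: Lambda_0 = {0}, Lambda_1 = {-2, +2}, g splits the mass
# 0.45 / 0.275 / 0.275, and uniform 0-1 costs.
g = {(0.0, 0): 0.45, (2.0, 1): 0.275, (-2.0, 1): 0.275}
print(bayes_rule(1.5, g, costs=(0.0, 1.0, 1.0, 0.0)))  # -> 1
```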

3.3. Algorithm for Finding the Restricted Bayes Decision Rule

Based on the analysis in Section 3.1, the function g(θ) and the corresponding classical Bayes decision rule ϕ specified in Theorem 1 are required in order to solve the constrained optimization problem in (2) and (3). This implies that the explicit expression of v(θ) and the value of λ should be determined in advance. To achieve this, the condition in Theorem 1 can be expressed as:
$$\int_{\Lambda} R_{\theta}(\phi)\, v(\theta)\, d\theta = \max_{\theta \in \Lambda} R_{\theta}(\phi),$$
with R_θ(ϕ) = ∫_Γ C[ϕ(x), θ] p_θ(x) dx. It is obvious from Equation (18) that v(θ) assigns non-zero probabilities only to the values of θ that attain the global maximum of R_θ(ϕ). In fact, there could be a unique, multiple, or an infinite number of maximizers that achieve the global maximum. Therefore, in this work, the solution of the optimization problem in Equations (2) and (3) is discussed under three cases according to the number of maximizers.
First, assume that R θ ( ϕ ) has a unique maximizer that achieves the global maximum, then v ( θ ) can be expressed by:
$$v(\theta) = \delta(\theta - \theta_1),$$
where θ₁ denotes the unique value of θ corresponding to the maximum of R_θ(ϕ) and δ denotes the Dirac delta function, i.e., δ(x) = ∞ for x = 0, δ(x) = 0 for x ≠ 0, and ∫δ(x)dx = 1. If ϕ satisfies the conditions in Theorem 1 and Proposition 1, then ϕ is the solution of the restricted Bayesian optimization problem in (2) and (3). Based on this assumption, Algorithm 1 is developed to find the restricted Bayes decision rule ϕ and θ₁.
Algorithm 1. Restricted Bayes decision rule.
1: Initialize λ inf = 0 , λ sup = 1 and λ = ( λ inf + λ sup ) / 2 .
2: Obtain R_θ(ϕ_{θ₁}) = ∫_Γ C[ϕ_{θ₁}(x), θ] p_θ(x) dx for all θ₁ ∈ Λ, where ϕ_{θ₁} denotes the classical Bayes decision rule corresponding to g(θ) = λω(θ) + (1 − λ)δ(θ − θ₁), as determined in Section 3.2.
3: Calculate:
θ₁ = arg max_{θ₁ ∈ Λ} f(θ₁),
where f(θ₁) = λ ∫_Λ R_θ(ϕ_{θ₁}) ω(θ) dθ + (1 − λ) R_{θ₁}(ϕ_{θ₁}) and R_{θ₁}(ϕ_{θ₁}) = ∫_Γ C[ϕ_{θ₁}(x), θ₁] p_{θ₁}(x) dx.
4: If R_{θ₁}(ϕ_{θ₁}) ≠ max_{θ∈Λ} R_θ(ϕ_{θ₁}), terminate this algorithm, reset v(θ) and restart the algorithm. When R_{θ₁}(ϕ_{θ₁}) = max_{θ∈Λ} R_θ(ϕ_{θ₁}), if the difference between max_{θ∈Λ} R_θ(ϕ_{θ₁}) and α is less than the predefined precision value, ϕ_{θ₁} is a solution of the restricted Bayes optimization problem in (2) and (3), and the algorithm terminates. Otherwise, continue to the next step.
5: If max θ Λ   R θ ( ϕ θ 1 ) > α , λ and λ sup are replaced with ( λ + λ inf ) / 2 and λ , respectively. If max θ Λ   R θ ( ϕ θ 1 ) < α , λ and λ inf are replaced with ( λ + λ sup ) / 2 and λ , respectively.
6: If the difference between λ sup and λ inf is less than the predefined precision value, terminate the algorithm and reset v ( θ ) to restart the algorithm; else go to Step 2.
In Algorithm 1, Step 3 follows from Theorem 2 and Step 4 is developed according to Theorem 1 and Proposition 1. In Step 3, f(θ₁) can be treated as the minimum Bayes risk, without any constraints on the conditional risks, corresponding to g(θ) = λω(θ) + (1 − λ)δ(θ − θ₁). It should be noted that the value of λ is related to α through Proposition 1, and α increases as λ increases. Hence, the constraint α can be met by adjusting the value of λ. For any θ₁ that satisfies (20), if the corresponding classical Bayes decision rule ϕ_{θ₁} satisfies R_{θ₁}(ϕ_{θ₁}) = max_{θ∈Λ} R_θ(ϕ_{θ₁}) and the difference between R_{θ₁}(ϕ_{θ₁}) and α is less than the predefined precision value, then θ₁ is the maximizer that achieves the global maximum of R_θ(ϕ) and ϕ_{θ₁} can be viewed as the solution of the restricted Bayes optimization problem in (2) and (3). A schematic sketch of this procedure is given after this paragraph. In addition, since the classical Bayes decision rule can be obtained from Section 3.2 for any valid g(θ), the solution of the restricted optimization problem in (2) and (3) exists as long as α is properly defined. The effective interval of α will be discussed in Section 4.
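The sketch below outlines Algorithm 1 for the unique-maximizer case. The callables classical_bayes_rule, conditional_risk and bayes_risk, as well as the grid Lambda, are assumed to be supplied by the user and stand in for the constructions of Section 3.2 and Equation (4); this is a schematic reading of the steps, not a complete implementation.

```python
# Schematic sketch of Algorithm 1 (unique-maximizer case). The user-supplied
# callables are assumptions standing in for Section 3.2 and Equation (4):
#   classical_bayes_rule(lam, theta1) -> decision rule phi for
#       g(theta) = lam*omega(theta) + (1 - lam)*delta(theta - theta1)
#   conditional_risk(phi, theta)     -> R_theta(phi)
#   bayes_risk(phi)                  -> integral of R_theta(phi) omega(theta) dtheta
def restricted_bayes(Lambda, classical_bayes_rule, conditional_risk, bayes_risk,
                     alpha, tol=1e-4):
    lam_inf, lam_sup = 0.0, 1.0
    while lam_sup - lam_inf > tol:
        lam = 0.5 * (lam_inf + lam_sup)

        # Step 3: theta1 maximizing f(theta1), the modified Bayes risk.
        def f(t1):
            phi_t1 = classical_bayes_rule(lam, t1)
            return lam * bayes_risk(phi_t1) + (1 - lam) * conditional_risk(phi_t1, t1)
        theta1 = max(Lambda, key=f)
        phi = classical_bayes_rule(lam, theta1)
        max_risk = max(conditional_risk(phi, t) for t in Lambda)

        # Step 4: the unique-maximizer assumption requires theta1 to attain the max.
        if abs(conditional_risk(phi, theta1) - max_risk) > tol:
            raise RuntimeError("unique-maximizer assumption violated: reset v(theta)")
        if abs(max_risk - alpha) < tol:
            return phi, lam              # constraint met with equality

        # Step 5: bisection update of lambda toward the target alpha.
        if max_risk > alpha:
            lam_sup = lam
        else:
            lam_inf = lam
    raise RuntimeError("no suitable lambda found: reset v(theta) and restart")
```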
Second, suppose that there are multiple values of θ corresponding to the global maximum of R_θ(ϕ); then v(θ) should be expressed as:
$$v(\theta) = \sum_{l=1}^{L} \eta_l\, \delta(\theta - \theta_l),$$
where 0 ≤ η_l ≤ 1, Σ_{l=1}^{L} η_l = 1, and L and θ_l denote the number and the values of θ that achieve the maximum of R_θ(ϕ), respectively. For the convenience of analysis, let κ denote the vector of all unknown components of v(θ), i.e.:
κ = [(η₁, θ₁), …, (η_L, θ_L)].
In order to solve the restricted Bayes optimization problem in (2) and (3), Step 3 should be replaced by:
$$\kappa = \arg\max_{\kappa} f(\kappa) = \arg\max_{\kappa} \left\{ \lambda \int_{\Lambda} R_{\theta}(\phi_{\kappa})\, \omega(\theta)\, d\theta + (1 - \lambda) \sum_{l=1}^{L} \eta_l\, R_{\theta_l}(\phi_{\kappa}) \right\},$$
where ϕ_κ represents the classical Bayes decision rule for the modified prior distribution g(θ) = λω(θ) + (1 − λ) Σ_{l=1}^{L} η_l δ(θ − θ_l). Correspondingly, the condition in Step 4 should be updated by replacing R_{θ₁}(ϕ_{θ₁}) and max_{θ∈Λ} R_θ(ϕ_{θ₁}) with Σ_{l=1}^{L} η_l R_{θ_l}(ϕ_κ) and max_{θ∈Λ} R_θ(ϕ_κ), respectively. Obviously, compared with the case where R_θ(ϕ) has only one maximizer, the computational complexity with multiple maximizers increases significantly. In such a case, global optimization algorithms such as particle swarm optimization (PSO) and the ant colony algorithm (ACO) can be utilized to find κ [26,27,28].
Finally, if there are infinitely many values of θ that correspond to the global maximum of R θ ( ϕ ) , it is difficult to obtain an accurate solution of the optimization problem in (2) and (3). In this case, an approximate solution can be obtained by employing the Parzen window density estimation to approximate the form of v ( θ ) , where v ( θ ) is expressed approximately by a convex combination of many window functions, given by:
$$v(\theta) = \sum_{l=1}^{W} \tau_l\, \varphi(\theta - \theta_l),$$
where 0 ≤ τ_l ≤ 1, Σ_{l=1}^{W} τ_l = 1, and φ denotes a window function, which could be a rectangular, Gaussian, or cosine window. In order to solve the optimization problem in (2) and (3), Algorithm 1 should be updated by replacing κ with κ = [(τ₁, θ₁), …, (τ_W, θ_W)] and ϕ_κ with the classical Bayes decision rule corresponding to g(θ) = λω(θ) + (1 − λ) Σ_{l=1}^{W} τ_l φ(θ − θ_l), respectively. A small sketch of this window-based form of v(θ) is given below.
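As an illustration of the window-based form, the sketch below builds v(θ) as a convex combination of Gaussian windows; the centers, weights, and bandwidth are hypothetical values chosen only for this example.

```python
# Sketch of the Parzen-window form of v(theta) in (24) with Gaussian windows.
# Centers, weights, and bandwidth h are hypothetical illustration values.
import numpy as np
from scipy.stats import norm

def v(theta, centers, weights, h=0.1):
    """Convex combination of Gaussian windows approximating v(theta)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # enforce sum of tau_l = 1
    return sum(w * norm.pdf(theta, loc=c, scale=h)
               for w, c in zip(weights, centers))

print(v(0.0, centers=[-0.5, 0.0, 0.5], weights=[0.2, 0.5, 0.3]))
```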
In real applications, it may be difficult to know in advance all the exact values of θ that achieve the global maximum of R_θ(ϕ). A practical way is to start with the assumption that there is a unique θ that achieves the global maximum. Based on this assumption and Algorithm 1, θ₁ and the corresponding ϕ_{θ₁} are obtained. If θ₁ and ϕ_{θ₁} satisfy the condition in Step 4, the restricted Bayes optimization problem in (2) and (3) is solved. Otherwise, the number of θ values assumed to achieve the global maximum of R_θ(ϕ) is increased incrementally until the ϕ calculated through the updated Algorithm 1 is verified as the solution of the restricted Bayes optimization problem.

4. Relationship between Bayes Risk and the Constraint

In this section, the relationship between the Bayes risk and the constraint on the conditional risks is explored. As analyzed in the Introduction, the constraint on the conditional risks is introduced because of the uncertainty in ω(θ). Generally speaking, the classical Bayesian criterion is used to minimize the average Bayes risk when ω(θ) is completely known, and the Minimax decision rule can be utilized to minimize the worst-case (maximum) conditional risk if the prior information is completely uncertain. Therefore, the classical Bayesian and Minimax approaches can be viewed as two extreme cases of the restricted Bayesian approach.
For notational simplicity, we denote the classical Bayes and the Minimax decision rules by ϕ_b^o and ϕ_m^o, respectively, i.e.:
$$\phi_b^o = \arg\min_{\phi} \int_{\Lambda} R_{\theta}(\phi)\, \omega(\theta)\, d\theta,$$
$$\phi_m^o = \arg\min_{\phi}\, \max_{\theta \in \Lambda} R_{\theta}(\phi).$$
Then define B = max_{θ∈Λ} R_θ(ϕ_b^o) and M = max_{θ∈Λ} R_θ(ϕ_m^o) as the maximum conditional risks achieved by the classical Bayesian and Minimax decision rules, respectively. In addition, the restricted Bayes decision rule under the constraint max_{θ∈Λ} R_θ(ϕ) ≤ α is denoted by ϕ_r^α. From the definitions of ϕ_b^o, ϕ_m^o and ϕ_r^α, it should be noted that r(ϕ_b^o) ≤ r(ϕ_r^α) ≤ r(ϕ_m^o) and M ≤ max_{θ∈Λ} R_θ(ϕ_r^α) ≤ B.
According to the definition of the Minimax criterion, M is the achievable minimum of the maximum conditional risk, which means that there is no solution for the optimization problem in (2) and (3) if the value of α is less than M. On the other hand, it is obvious that the constraint on the conditional risks enforced by α is ineffective if α is greater than B. In other words, the restricted Bayes optimization problem in (2) and (3) is equivalent to the Bayes optimization problem in (25) for any α ≥ B; namely, ϕ_r^α = ϕ_b^o and r(ϕ_b^o) = r(ϕ_r^α) if α ≥ B. As a result, α should be defined in the interval [M, B]; otherwise the constraint is either infeasible (α < M) or ineffective (α > B).
For the restricted Bayes optimization problem formulated in (2) and (3), the maximum conditional risk max_{θ∈Λ} R_θ(ϕ_r^α) and the Bayes risk r(ϕ_r^α) are closely related to the value of the constraint α. When α ∈ [M, B], the following conclusion holds:
$$\max_{\theta \in \Lambda} R_{\theta}(\phi_r^{\alpha}) = \alpha.$$
A simple contradiction argument shows this conclusion. Suppose that max_{θ∈Λ} R_θ(ϕ_r^α) < α, and let ϕ = ξϕ_r^α + (1 − ξ)ϕ_b^o, where 0 ≤ ξ ≤ 1 and ϕ_b^o denotes the classical Bayes decision rule obtained in (25), with max_{θ∈Λ} R_θ(ϕ_b^o) = B > α and r(ϕ_b^o) < r(ϕ_r^α). There must exist a ξ such that max_{θ∈Λ} R_θ(ϕ) = α and r(ϕ) = ξr(ϕ_r^α) + (1 − ξ)r(ϕ_b^o) < r(ϕ_r^α). Obviously, this contradicts the definition of ϕ_r^α. Therefore, the conclusion in (27) is true.
Based on the discussions above, the relationship between the Bayes risk r(ϕ_r^α) obtained by the restricted Bayesian decision rule and the value of the constraint on the conditional risks is presented in Theorem 3.
Theorem 3.
The Bayes risk r(ϕ_r^α) obtained by the restricted Bayes decision rule is a strictly decreasing and convex function of α for α ∈ [M, B].
The proof is provided in the Appendix. Theorem 3 characterizes the relationship between the Bayes risk and the constraint on the conditional risks. Moreover, the value of α is closely related to the uncertainty of ω(θ): in general, a smaller value of α should be specified for a greater degree of uncertainty. Therefore, based on these characteristics, one can predefine a suitable value of α to obtain the expected Bayes risk in practice. In addition, the restricted optimization problem in (2) and (3) is analyzed and solved in Section 3 from a particular perspective: its solution is identical to a classical Bayes decision rule with a modified prior distribution, as presented in Theorem 1 and Proposition 1. For any fixed prior distribution, the corresponding classical Bayes decision rule can be obtained from Section 3.2; thereby the solution of the restricted optimization problem in (2) and (3) exists as long as α is properly defined in [M, B], and it can be obtained by the algorithm in Section 3.3.

5. Numerical Results

In this section, a binary hypothesis-testing problem is studied to illustrate the theoretical results. The hypotheses are defined as:
$$\begin{cases} H_0 : x = \Theta + n, & \Theta \in \Lambda_0 \\ H_1 : x = \Theta + n, & \Theta \in \Lambda_1, \end{cases}$$
where x ∈ ℝ, and Λ₀ and Λ₁ denote the sets of all possible values of the parameter Θ under H₀ and H₁, respectively, which are specified as Λ₀ = {0} and Λ₁ = {A, −A}, where A is a known constant. In addition, n denotes a zero-mean noise that is a mixture of Rayleigh distributed components; that is, p_n(n) = Σ_{i=1}^{N} m_i φ_i(n − μ_i), where m_i ≥ 0 for i = 1, …, N, Σ_{i=1}^{N} m_i = 1, and:
$$\varphi_i(n) = \begin{cases} \dfrac{n}{\sigma_i^2} \exp\left(-\dfrac{n^2}{2\sigma_i^2}\right), & n \ge 0 \\ 0, & n < 0. \end{cases}$$
In the numerical results, the same variance is assumed, i.e., σ_i = σ for i = 1, …, N. Furthermore, the parameters are specified as N = 4, μ₁ = 0.2, μ₂ = 1, μ₃ = −2σ√(π/2) − 0.2, μ₄ = −2σ√(π/2) − 1, and m_i = 0.25 for i = 1, …, 4. Then the conditional pdf of x for a given value of Θ = θ can be expressed as:
$$p_{\theta}(x) = \sum_{i=1}^{N} m_i\, \varphi_i(x - \mu_i - \theta).$$
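The noise and observation model above can be written down directly; the short sketch below implements the mixture pdf and checks numerically that the resulting noise density integrates to one and has (approximately) zero mean, using the parameter values stated in the text.

```python
# Numerical sketch of the noise model in Equations (29) and (30): a zero-mean
# mixture of shifted Rayleigh components, with the parameter values from the text.
import numpy as np

def rayleigh_pdf(n, sigma):
    """phi_i(n): Rayleigh density with scale sigma, zero for n < 0."""
    n = np.asarray(n, dtype=float)
    return np.where(n >= 0, (n / sigma**2) * np.exp(-n**2 / (2 * sigma**2)), 0.0)

def p_theta(x, theta, sigma=0.5):
    """p_theta(x) = sum_i m_i * phi_i(x - mu_i - theta)."""
    r = sigma * np.sqrt(np.pi / 2.0)               # mean of one Rayleigh component
    mu = [0.2, 1.0, -2.0 * r - 0.2, -2.0 * r - 1.0]
    m = [0.25, 0.25, 0.25, 0.25]
    return sum(mi * rayleigh_pdf(np.asarray(x) - mui - theta, sigma)
               for mi, mui in zip(m, mu))

# Check that the noise density (theta = 0) integrates to 1 and has ~zero mean.
xs = np.linspace(-10.0, 10.0, 20001)
dx = xs[1] - xs[0]
print((p_theta(xs, 0.0)).sum() * dx)        # ~ 1.0
print((xs * p_theta(xs, 0.0)).sum() * dx)   # ~ 0.0
```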
Under this assumption, the detection problem in (28) is equivalent to the detection of a signal that employs binary phase shift keying (BPSK). Accordingly, the probability distribution of Θ under H₀ is directly expressed as ω₀(θ) = δ(θ). On the other hand, the probability distribution of Θ under H₁ can be modeled as:
$$\omega_1(\theta) = \rho\, \delta(\theta - A) + (1 - \rho)\, \delta(\theta + A),$$
where ρ is estimated based on previous experience, and it is known with some uncertainty due to the presence of estimation errors. As a result, the probability distribution of Θ can be determined by:
$$\omega(\theta) = \sum_{i=0}^{1} p(H_i)\, \omega_i(\theta),$$
where p(H_i) ≥ 0, Σ_{i=0}^{1} p(H_i) = 1, and p(H_i) denotes the prior probability of H_i for i = 0, 1.
In order to obtain the optimal restricted Bayesian decision rule, the prior distribution ω(θ) should be modified via a pdf v(θ) according to Theorem 1. Since Λ = Λ₀ ∪ Λ₁ = {0, A, −A}, the complete form of v(θ) is expressed as:
$$v(\theta) = \sum_{i=1}^{3} t_i\, \delta(\theta - \theta_i),$$
where t₁, t₂ and t₃ are the respective weights assigned to θ₁ = 0, θ₂ = A and θ₃ = −A, with t_i ≥ 0 for i = 1, …, 3 and Σ_{i=1}^{3} t_i = 1.
First, it is assumed that any two of the three weights are zero, i.e., v(θ) = δ(θ − θ_i). By employing the algorithm based on (20), the optimal decision rule is obtained if the condition in Step 4 is satisfied. Otherwise, we assume that only one of the three weights is zero, i.e., v(θ) = tδ(θ − θ_i) + (1 − t)δ(θ − θ_j), where t = t_i, θ_i, θ_j ∈ Λ, and θ_i ≠ θ_j. In such a case, as t is the only unknown parameter in v(θ), the algorithm based on (21) and (22) is used to find the value of t and the corresponding g(θ) = λω(θ) + (1 − λ)v(θ) that maximizes the Bayes risk. If the corresponding decision rule satisfies the condition in (23), it is the optimal restricted Bayesian rule that we seek. Finally, if the condition is still not met in the second case, v(θ) should be determined as v(θ) = t₁δ(θ) + t₂δ(θ − A) + t₃δ(θ + A) with t_i ≥ 0 for i = 1, …, 3. Once we find the values of t₁ and t₂ that maximize the Bayes risk corresponding to g(θ), the optimal restricted Bayesian rule is also obtained.
In Figure 1, the achievable minimum Bayes risks obtained by the restricted Bayesian decision rule for different values of ρ are plotted against α, where p(H₀) = 0.45, A = 2 and σ = 0.5. As analyzed in Section 4, when α is equal to the Bayes risk of the Minimax decision rule, the restricted Bayesian decision rule is identical to the Minimax decision rule. When α is smaller than the Bayes risk of the Minimax decision rule, there is no restricted Bayesian decision rule that satisfies the limit established by α. On the other hand, the detection performance of the restricted Bayesian decision rule is the same as that of the classical Bayesian decision rule when α is greater than or equal to the maximum conditional risk of the classical Bayesian decision rule. As expected, the lowest Bayes risks are obtained by the classical Bayesian decision rule, but it also leads to the highest maximum conditional risks, which are 0.3456, 0.3657, 0.3816, 0.4131 and 0.4325 for ρ = 0.6, 0.65, 0.686, 0.75 and 0.8, respectively. On the contrary, the Minimax decision rule achieves the lowest maximum conditional risk, but it produces the worst Bayes risk. It should be noted that the conditional risks of the Minimax decision rule are all equal to 0.2381.
For the restricted Bayesian decision rule, the maximum conditional risk is equal to α, except for the linear part that corresponds to the classical Bayesian decision rule. In fact, the restricted Bayes decision rule makes a tradeoff between the maximum conditional risk and the Bayes risk, and generalizes the Minimax and the classical Bayesian decision rules. It is also shown in the figure that, with the increase of ρ, the performance difference between the classical Bayesian and the Minimax decision rules increases for ρ ≥ 0.5518. Specifically, the maximum conditional risk of the classical Bayesian decision rule increases and the corresponding Bayes risk decreases as ρ increases. Furthermore, when α is greater than the maximum conditional risk of the Minimax decision rule and smaller than that of the classical Bayesian decision rule, the Bayes risk is a strictly decreasing and convex function of α. This agrees with the conclusion made in Theorem 3. In addition, Figure 1 acts as a guideline for the design of α in practice, since the corresponding Bayes risk can be observed for each α. Hence, instead of assigning a value for α arbitrarily, Figure 1 can be utilized to choose a more appropriate α in practical problems.
Figure 2 compares the achievable lowest maximum conditional risk obtained by the restricted Bayes decision rule and the corresponding Bayes risk versus λ in (5) for ρ = 0.6, 0.686 and 0.8, where p(H₀) = 0.45, A = 2 and σ = 0.5. The maximum conditional risk is equal to the Bayes risk for 0 ≤ λ ≤ 0.7036 when ρ = 0.8, and for 0 ≤ λ ≤ 0.8207 when ρ = 0.6 and ρ = 0.686. This shows that the restricted Bayesian decision rule becomes identical to the Minimax decision rule when λ is reduced to or below a certain value. For convenience, this value is denoted by λ_m, and it is seen that λ_m = 0.8207, 0.8207 and 0.7036 for ρ = 0.6, 0.686 and 0.8, respectively. In fact, the value of λ_m increases from 0.7036 to 0.8207 when ρ decreases from 0.8 to 0.686, and remains at 0.8207 when ρ ∈ [0.6, 0.686]. In addition, when λ ≥ λ_m, the achievable lowest maximum conditional risk increases and the corresponding Bayes risk decreases with the increase of λ. In order to further illustrate the results in Figure 2, Table 1, Table 2 and Table 3 show the parameters t₁, t₂ and t₃ in v(θ) and the maximum conditional and Bayes risks of the restricted Bayesian decision rule for different values of λ when ρ = 0.6, 0.686 and 0.8.
From Table 1, Table 2 and Table 3, it is observed that the maximum conditional risk is always equal to the conditional risk for Θ = −A. Actually, the maximum conditional risk can also be viewed as the achievable minimum α. Since Θ = 0 is independent of ρ, the weight t₁ of Θ = 0 in v(θ) is the same for each λ when ρ = 0.6, 0.686 and 0.8. Conversely, the parameters t₂ and t₃ change with ρ. In the simulation results, the distribution of Θ based on the Minimax criterion can be calculated as ω_m(θ) = 0.3693δ(θ) + 0.3096δ(θ − A) + 0.3211δ(θ + A). In order to achieve the same performance as the Minimax decision rule, the ω(θ) in (31) should be modified by making g(θ) equal to ω_m(θ), i.e., g(θ) = λω(θ) + (1 − λ)v(θ) = ω_m(θ). For instance, as listed in Table 1, Table 2 and Table 3, v(θ) = 0.2162δ(θ − A) + 0.7838δ(θ + A) and λ = 0.8207 for ρ = 0.6; v(θ) = δ(θ + A) and λ = 0.8207 for ρ = 0.686; and v(θ) = 0.1777δ(θ) + 0.8223δ(θ + A) and λ = 0.7036 for ρ = 0.8. Therefore, Figure 2 and Table 1, Table 2 and Table 3 also provide a guideline for selecting a suitable value of λ.
Figure 3a plots the achievable minimum α obtained via the restricted Bayesian decision rule versus σ for λ = 1, 0.8 and 0.6 when p(H₀) = 0.45 and A = 2, and the corresponding minimum Bayes risks are presented in Figure 3b. In general, with the increase of σ, the achievable minimum α and the Bayes risk first increase, then decrease to troughs, and then increase gradually again, regardless of the value of λ. In fact, when λ = 1, the restricted Bayesian decision rule is identical to the classical Bayesian decision rule. In addition, the maximum conditional risk is equal to α for any λ, which agrees with the conclusion in (27). As expected, the classical Bayesian decision rule achieves the highest maximum conditional risk and the minimum Bayes risk. It is also seen that the maximum conditional risk decreases and the Bayes risk increases with the decrease of λ. In this case, when λ = 0.6, the performance is the same as that of the Minimax decision rule for σ ∈ (0.26, 0.61), which implies λ_m ≥ 0.6 for σ ∈ (0.26, 0.61). Similarly, the cases for λ ∈ (0.6, λ_m) coincide with the Minimax decision rule when σ ∈ (0.26, 0.61). From Figure 3a,b, compared with the performance of the classical Bayesian decision rules, the maximum conditional risks obtained via the restricted Bayesian decision rules decrease significantly with a much lower increase of the corresponding Bayes risks, especially for σ > 0.92 and λ = 0.8. To further investigate the results in Figure 3, the restricted Bayes decision rules for different cases are presented below. It should be noted beforehand that the form of the optimal decision rule can be determined as follows:
$$\phi^*(x) = \begin{cases} 1, & x \in \Gamma_1 \\ 0, & \text{otherwise}, \end{cases}$$
where Γ 1 represents a part of the sample space. The restricted Bayes decision rules, the constraint α and the Bayes risks for different values of σ are provided in Table 4, Table 5 and Table 6 when λ = 1 , 0.8 and 0.6.
In Figure 4a, the achievable minimum values of α are plotted versus A for λ = 1, 0.8 and 0.6 when p(H₀) = 0.45 and σ = 0.5, and the corresponding minimum Bayes risks are illustrated in Figure 4b. Similarly, the decision rule for λ = 1 is identical to the classical Bayesian decision rule, and the maximum conditional risk is equal to α. When A is small, such as A ∈ (0, 0.2), the maximum conditional risk decreases significantly with a small increase of the Bayes risk via the restricted decision rule for both λ = 0.8 and 0.6. In general, a lower maximum conditional risk and a higher Bayes risk are obtained for a smaller value of λ, but this is no longer true once λ drops below λ_m, according to the results shown in Figure 2. In particular, the Bayes risk obtained via the restricted Bayes decision rule when λ = 0.8 is almost equal to that obtained via the classical Bayes decision rule for some values of A, while it deviates significantly from the classical one when λ = 0.6. The difference between λ = 0.8 and 0.6 is especially noticeable when A = 2.45. In addition, when A > 2.45, α and the Bayes risk decrease gradually to zero as A increases.

6. Conclusions

In this paper, the restricted Bayes decision rule is developed for a binary composite hypothesis-testing problem with a partially known prior distribution. Generally, the prior distribution is estimated based on previous information and is not completely accurate due to estimation errors. In order to utilize the useful prior information, the Bayes risk is calculated based on the estimated prior distribution. On the other hand, a constraint on the maximum conditional risk is applied to alleviate the negative impact caused by the mismatch between the estimated prior distribution and the correct one. By applying Lagrange duality, the restricted Bayesian optimization problem is transformed into an unconstrained optimization problem. Based on this transformation, theorems are derived to provide theoretical support for the algorithms that find the optimal restricted Bayes decision rule. In particular, an additional conclusion is that the Bayes risk obtained via the restricted Bayes decision rule is a strictly decreasing and convex function of the constraint on the maximum conditional risk. In addition, the constraint should be defined in an appropriate interval; otherwise the constraint is not effective. Finally, numerical examples are provided to demonstrate the performance of the proposed decision rule, and they are consistent with the theoretical results.
The classical Bayes and Minimax decision rules are usually used in scenarios different from the restricted Bayesian optimization problem, but they can be realized as two special cases of the restricted Bayes decision rule by choosing two particular values of the constraint. In fact, the classical Bayesian decision rule achieves the lowest Bayes risk but with the highest maximum conditional risk, whereas the Minimax decision rule obtains the lowest maximum conditional risk but with the worst Bayes risk. By adjusting the constraint, the restricted Bayesian decision rule provides the ability to balance the maximum conditional risk against the Bayes risk.

Acknowledgments

This research is partly supported by the Basic and Advanced Research Project in Chongqing (Grant No. cstc2016jcyjA0134, No. cstc2016jcyjA0043), the Graduate Scientific Research and Innovation Foundation of Chongqing, China (Grant No. CYB17041), the National Natural Science Foundation of China (Grant No. 61501072, No. 41404027, No. 61571069, No. 61675036, No. 61471073) and the Fundamental Research Funds for the Central Universities (Grant No. 106112017CDJQJ168817, No. 106112017CDJQJ168819).

Author Contributions

Shujun Liu raised the idea of the framework to find a suitable decision rule for a binary composite hypothesis-testing problem. Ting Yang and Shujun Liu contributed to the drafting of the manuscript, interpretation of the results, some experimental design and checked the manuscript. Hongqing Liu contributed to analyze the optimal restricted Bayesian decision rule and develop the corresponding algorithm. Ting Yang contributed to the proofs of the theories developed in this paper. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 1.
For any decision rule ϕ′, it is obvious that:
$$\lambda \int_{\Lambda} \omega(\theta) \int_{\Gamma} C[\phi'(x), \theta]\, p_{\theta}(x)\, dx\, d\theta + (1 - \lambda) \max_{\theta \in \Lambda} \int_{\Gamma} C[\phi'(x), \theta]\, p_{\theta}(x)\, dx \;\ge\; \lambda \int_{\Lambda} \omega(\theta) \int_{\Gamma} C[\phi'(x), \theta]\, p_{\theta}(x)\, dx\, d\theta + (1 - \lambda) \int_{\Lambda} v(\theta) \int_{\Gamma} C[\phi'(x), \theta]\, p_{\theta}(x)\, dx\, d\theta = \int_{\Lambda} \big(\lambda \omega(\theta) + (1 - \lambda) v(\theta)\big) \int_{\Gamma} C[\phi'(x), \theta]\, p_{\theta}(x)\, dx\, d\theta = \int_{\Lambda} g(\theta) \int_{\Gamma} C[\phi'(x), \theta]\, p_{\theta}(x)\, dx\, d\theta,$$
where the inequality holds because max_{θ∈Λ} ∫_Γ C[ϕ′(x), θ] p_θ(x) dx ≥ ∫_Λ v(θ) ∫_Γ C[ϕ′(x), θ] p_θ(x) dx dθ. Since ϕ is the classical Bayes decision rule, which minimizes the Bayes risk without any constraints for the modified prior distribution g(θ), one obtains:
$$\int_{\Lambda} g(\theta) \int_{\Gamma} C[\phi'(x), \theta]\, p_{\theta}(x)\, dx\, d\theta \;\ge\; \int_{\Lambda} g(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta.$$
If ∫_Λ v(θ) ∫_Γ C[ϕ(x), θ] p_θ(x) dx dθ = max_{θ∈Λ} ∫_Γ C[ϕ(x), θ] p_θ(x) dx, the following equality holds:
$$\lambda \int_{\Lambda} \omega(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta + (1 - \lambda) \int_{\Lambda} v(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta = \lambda \int_{\Lambda} \omega(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta + (1 - \lambda) \max_{\theta \in \Lambda} \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx.$$
Therefore, the value of the objective in (5) at any decision rule ϕ′ is greater than or equal to its value at ϕ, and ϕ is a solution of the optimization problem in (5). □
Proof of Proposition 1.
Based on the assumption in Theorem 1, ϕ is the optimal decision rule that minimizes the value of the objective function in (5), which means that, for any decision rule ϕ′:
$$\lambda \int_{\Lambda} \omega(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta + (1 - \lambda) \max_{\theta \in \Lambda} \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx \;\le\; \lambda \int_{\Lambda} \omega(\theta) \int_{\Gamma} C[\phi'(x), \theta]\, p_{\theta}(x)\, dx\, d\theta + (1 - \lambda) \max_{\theta \in \Lambda} \int_{\Gamma} C[\phi'(x), \theta]\, p_{\theta}(x)\, dx.$$
For any ϕ′ that satisfies the constraint on the conditional risks, max_{θ∈Λ} ∫_Γ C[ϕ′(x), θ] p_θ(x) dx ≤ α, while max_{θ∈Λ} ∫_Γ C[ϕ(x), θ] p_θ(x) dx = α holds based on the assumption in the proposition. Hence, ∫_Λ ω(θ) ∫_Γ C[ϕ′(x), θ] p_θ(x) dx dθ must be greater than or equal to ∫_Λ ω(θ) ∫_Γ C[ϕ(x), θ] p_θ(x) dx dθ in order to satisfy the inequality in (A4). Therefore, ϕ is the optimal decision rule that minimizes the Bayes risk under the constraint on the conditional risks if max_{θ∈Λ} ∫_Γ C[ϕ(x), θ] p_θ(x) dx = α. In other words, ϕ is a solution of the optimization problem in (2) and (3). □
Proof of Theorem 2.
Under the conditions in Theorem 1, the minimum Bayes risk corresponding to g(θ) is expressed as:
$$\int_{\Lambda} g(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta = \lambda \int_{\Lambda} \omega(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta + (1 - \lambda) \int_{\Lambda} v(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta = \lambda \int_{\Lambda} \omega(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta + (1 - \lambda) \max_{\theta \in \Lambda} \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx,$$
where the second equality holds according to the condition in Theorem 1. Since ∫_Λ ω(θ) ∫_Γ C[ϕ(x), θ] p_θ(x) dx dθ ≤ max_{θ∈Λ} ∫_Γ C[ϕ(x), θ] p_θ(x) dx, one finds:
$$\int_{\Lambda} g(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta \;\ge\; \hat{\lambda} \int_{\Lambda} \omega(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta + (1 - \hat{\lambda}) \max_{\theta \in \Lambda} \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx \;\ge\; \hat{\lambda} \int_{\Lambda} \omega(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta + (1 - \hat{\lambda}) \int_{\Lambda} \hat{v}(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta = \int_{\Lambda} \big(\hat{\lambda} \omega(\theta) + (1 - \hat{\lambda}) \hat{v}(\theta)\big) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta = \int_{\Lambda} \hat{g}(\theta) \int_{\Gamma} C[\phi(x), \theta]\, p_{\theta}(x)\, dx\, d\theta \;\ge\; \int_{\Lambda} \hat{g}(\theta) \int_{\Gamma} C[\hat{\phi}(x), \theta]\, p_{\theta}(x)\, dx\, d\theta$$
for any λ̂ ≥ λ, where the first inequality follows from ∫_Λ ω(θ) ∫_Γ C[ϕ(x), θ] p_θ(x) dx dθ ≤ max_{θ∈Λ} ∫_Γ C[ϕ(x), θ] p_θ(x) dx together with λ̂ ≥ λ, the second inequality follows from max_{θ∈Λ} ∫_Γ C[ϕ(x), θ] p_θ(x) dx ≥ ∫_Λ v̂(θ) ∫_Γ C[ϕ(x), θ] p_θ(x) dx dθ, and the last inequality holds based on the definition of ϕ̂. Therefore, the g(θ) specified in Theorem 1 is the least favorable distribution that maximizes the Bayes risk among all probability distributions of the form ĝ(θ) = λ̂ω(θ) + (1 − λ̂)v̂(θ) for λ̂ ≥ λ. □
Proof of Theorem 3.
According to the definition of the restricted Bayes optimization problem formulated in (2) and (3), r(ϕ_r^α) is a non-increasing function of α. First, we use a contradiction argument to prove the strictly decreasing property of r(ϕ_r^α). Suppose that M ≤ α₁ < α₂ ≤ B and r(ϕ_r^{α₁}) = r(ϕ_r^{α₂}). Then ϕ_r^{α₁} is also a restricted Bayes decision rule corresponding to α₂, which by (27) would require max_{θ∈Λ} R_θ(ϕ_r^{α₁}) = α₂; however, max_{θ∈Λ} R_θ(ϕ_r^{α₁}) = α₁ < α₂. Obviously, this contradicts the conclusion in (27). Hence, r(ϕ_r^{α₁}) > r(ϕ_r^{α₂}) must be satisfied.
Next, we prove the convexity of r(ϕ_r^α) over α ∈ [M, B] by defining a decision rule ϕ as follows:
$$\phi = \xi \phi_r^{\alpha_1} + (1 - \xi) \phi_r^{\alpha_2},$$
where 0 ≤ ξ ≤ 1, M ≤ α₁ < α₂ ≤ B, and ϕ_r^{α₁} and ϕ_r^{α₂} denote the restricted Bayes decision rules corresponding to the constraints α = α₁ and α = α₂, respectively. The conditional risks for θ ∈ Λ and the Bayes risk corresponding to ϕ are respectively calculated by:
$$R_{\theta}(\phi) = \xi R_{\theta}(\phi_r^{\alpha_1}) + (1 - \xi) R_{\theta}(\phi_r^{\alpha_2}),$$
$$r(\phi) = \xi\, r(\phi_r^{\alpha_1}) + (1 - \xi)\, r(\phi_r^{\alpha_2}).$$
From (A8), the maximum conditional risk is upper bounded by:
$$\max_{\theta \in \Lambda} R_{\theta}(\phi) = \max_{\theta \in \Lambda} \left\{ \xi R_{\theta}(\phi_r^{\alpha_1}) + (1 - \xi) R_{\theta}(\phi_r^{\alpha_2}) \right\} \le \xi \max_{\theta \in \Lambda} R_{\theta}(\phi_r^{\alpha_1}) + (1 - \xi) \max_{\theta \in \Lambda} R_{\theta}(\phi_r^{\alpha_2}) = \xi \alpha_1 + (1 - \xi) \alpha_2,$$
where the last equality holds according to (27). Letting α_o = max_{θ∈Λ} R_θ(ϕ) and α̂ = ξα₁ + (1 − ξ)α₂, one obtains:
$$r(\phi) \ge r(\phi_r^{\alpha_o}) \ge r(\phi_r^{\hat{\alpha}}),$$
where the first inequality is satisfied since ϕ_r^{α_o} is the optimal decision rule that minimizes the Bayes risk under the constraint α = α_o on the conditional risks, and the last inequality holds since r(ϕ_r^α) is non-increasing with respect to (w.r.t.) α and α_o ≤ α̂. Therefore, r(ϕ_r^α) is a strictly decreasing and convex function of α for α ∈ [M, B]. □

References

  1. Madadi, Z.; Anand, G.V.; Premkumar, A.B. Signal detection in generalized gaussian noise by nonlinear wavelet denoising. IEEE Trans. Circuits Syst. I 2013, 60, 2973–2986.
  2. Cao, J.; Lin, Z. Bayesian signal detection with compressed measurements. Inf. Sci. 2014, 289, 241–253.
  3. Higger, M.; Akcakaya, M.; Nezamfar, H.; LaMountain, G.; Orhan, U.; Erdogmus, D. A Bayesian framework for intent detection and stimulation selection in SSVEP BCIs. IEEE Signal Process. Lett. 2015, 22, 743–747.
  4. Cheng, Q.; Zhou, H.; Cheng, J.; Li, H. A Minimax framework for classification with applications to images and high dimensional data. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 2117–2130.
  5. Alhakim, R.; Raoof, K.; Simeu, E. Detection of UWB signal using dirty template approach. Signal Image Video Process. 2014, 8, 549–563.
  6. Shevlyakov, G.; Shin, V.; Lee, S.; Kim, K. Asymptotically stable detection of a weak signal. Int. J. Adapt. Control Signal Process. 2014, 28, 848–858.
  7. Morey, R.D.; Wagenmakers, E.J. Simple relation between Bayesian order-restricted and point-null hypothesis tests. Stat. Probab. Lett. 2014, 92, 121–124.
  8. Ciuonzo, D.; Romano, G.; Rossi, P.S. Channel-aware decision fusion in distributed MIMO wireless sensor networks: Decode-and-fuse vs. decode-then-fuse. IEEE Trans. Wirel. Commun. 2012, 11, 2976–2985.
  9. Rossi, P.S.; Ciuonzo, D.; Romano, G. Orthogonality and Cooperation in Collaborative Spectrum Sensing through MIMO Decision Fusion. IEEE Trans. Wirel. Commun. 2013, 12, 5826–5836.
  10. Bayram, S.; Gezici, S. On the restricted Neyman–Pearson approach for composite hypothesis-testing in presence of prior distribution uncertainty. IEEE Trans. Signal Process. 2011, 59, 5056–5065.
  11. Poor, H.V. An Introduction to Signal Detection and Estimation; Springer: New York, NY, USA, 1994.
  12. Strawderman, W.E. Minimaxity. J. Am. Stat. Assoc. 2000, 95, 1364–1368.
  13. Lehmann, E.L. Some history of optimality. Lect. Notes Monogr. Ser. 2009, 57, 11–17.
  14. Begum, N.; King, M.L. A new class of test for testing a composite null against a composite alternative hypothesis. In Proceedings of the Australasian Meeting of the Econometric Society, Brisbane, Australia, 3–6 July 2007.
  15. Van Trees, H.L. Detection, Estimation, and Modulation Theory: Part I, 2nd ed.; Wiley: New York, NY, USA, 2001.
  16. Ciuonzo, D.; Maio, A.D.; Orlando, D. On the Statistical Invariance for Adaptive Radar Detection in Partially Homogeneous Disturbance Plus Structured Interference. IEEE Trans. Signal Process. 2016, 65, 1222–1234.
  17. Blum, J.R.; Rosenblatt, J. On partial a priori information in statistical inference. Ann. Math. Stat. 1967, 38, 1671–1678.
  18. Hodges, J.L., Jr.; Lehmann, E.L. The use of previous experience in reaching statistical decisions. Ann. Math. Stat. 1952, 23, 396–407.
  19. Robbins, H. An empirical Bayes approach to statistics. In Proceedings of the 3rd Berkeley Symposium on Mathematical Statistics and Probability 1, Berkeley, CA, USA, December 1954–July & August 1955; pp. 157–164.
  20. Robbins, H. The empirical Bayes approach to statistical decision problems. Ann. Math. Stat. 1964, 35, 1–20.
  21. Savage, L.J. The Foundations of Statistics, 2nd ed.; Dover: New York, NY, USA, 1972.
  22. Watson, S.R. On Bayesian inference with incompletely specified prior distributions. Biometrika 1974, 61, 193–196.
  23. Caticha, A.; Preuss, R. Maximum entropy and Bayesian data analysis: Entropic prior distributions. Phys. Rev. E 2004, 70.
  24. Palmieri, F.A.N.; Ciuonzo, D. Objective priors from maximum entropy in data classification. Inf. Fusion 2013, 14, 186–198.
  25. Varshney, K.R.; Varshney, L.R. Optimal Grouping for Group Minimax Hypothesis Testing. IEEE Trans. Inf. Theory 2014, 60, 6511–6521.
  26. Parsopoulos, K.E.; Vrahatis, M.N. Particle Swarm Optimization Method for Constrained Optimization Problems; IOS Press: Amsterdam, The Netherlands, 2002; pp. 214–220.
  27. Hu, X.; Eberhart, R. Solving constrained nonlinear optimization problems with particle swarm optimization. In Proceedings of the World Multiconference on Systemics, Cybernetics and Informatics, Orlando, FL, USA, 14–18 July 2002.
  28. Price, K.V.; Storn, R.M.; Lampinen, J.A. Differential Evolution: A Practical Approach to Global Optimization; Springer: New York, NY, USA, 2005.
Figure 1. Bayes risk versus α of the restricted Bayesian decision rule for ρ = 0.6 , 0.65, 0.686, 0.75 and 0.8, where p ( H 0 ) = 0.45 , A = 2 and σ = 0.5 .
Figure 2. The maximum conditional and Bayes risks of the restricted Bayesian decision rule versus λ for ρ = 0.6 , 0.686 and 0.8, where p ( H 0 ) = 0.45 , A = 2 and σ = 0.5 .
Figure 3. The achievable minimum α and the corresponding minimum Bayes risk as functions of σ for λ = 1, 0.8 and 0.6 in (a,b), respectively, when p(H₀) = 0.45 and A = 2.
Figure 4. The achievable minimum α and the corresponding minimum Bayes risk versus A for λ = 1 , 0.8 and 0.6 in (a,b), respectively when p ( H 0 ) = 0.45 and σ = 0.5 .
Table 1. The parameters of v(θ) and different risks for various λ when ρ = 0.6.

λ | t₁/t₂/t₃ | R₀/R_A/R₋A | max_{θ∈Λ} R_θ | r(ϕ*)
1.0000 | --/--/-- | 0.1188/0.2951/0.3456 | 0.3456 | 0.2269
0.9250 | 0/0/1 | 0.1474/0.2938/0.2974 | 0.2974 | 0.2287
0.8207 | 0/0.2162/0.7838 | 0.2381/0.2381/0.2381 | 0.2381 | 0.2381
0.7036 | 0.1777/0.2612/0.5611 | 0.2381/0.2381/0.2381 | 0.2381 | 0.2381
0.6500 | 0.2194/0.2717/0.5089 | 0.2381/0.2381/0.2381 | 0.2381 | 0.2381
Table 2. The parameters of v(θ) and different risks for various λ when ρ = 0.686.

λ | t₁/t₂/t₃ | R₀/R_A/R₋A | max_{θ∈Λ} R_θ | r(ϕ*)
1.0000 | --/--/-- | 0.1340/0.2632/0.3657 | 0.3456 | 0.2224
0.9250 | 0/0/1 | 0.1739/0.2402/0.3227 | 0.2974 | 0.2246
0.8207 | 0/0/1 | 0.2381/0.2381/0.2381 | 0.2381 | 0.2381
0.7036 | 0.1777/0.1489/0.6734 | 0.2381/0.2381/0.2381 | 0.2381 | 0.2381
0.6500 | 0.2194/0.1839/0.5967 | 0.2381/0.2381/0.2381 | 0.2381 | 0.2381
Table 3. The parameters of v(θ) and different risks for various λ when ρ = 0.8.

λ | t₁/t₂/t₃ | R₀/R_A/R₋A | max_{θ∈Λ} R_θ | r(ϕ*)
1.0000 | --/--/-- | 0.1839/0.1808/0.4325 | 0.4325 | 0.2099
0.9250 | 0/0/1 | 0.1964/0.1763/0.4049 | 0.4049 | 0.2105
0.8207 | 0/0/1 | 0.2212/0.1692/0.3583 | 0.3583 | 0.2134
0.7036 | 0.1777/0/0.8223 | 0.2381/0.2381/0.2381 | 0.2381 | 0.2381
0.6500 | 0.2194/0.0674/0.7131 | 0.2381/0.2381/0.2381 | 0.2381 | 0.2381
Table 4. The restricted Bayes decision rules, the achievable minimum α and the corresponding Bayes risk for different σ when λ = 1.

σ | Γ₁ | α | r(ϕ*)
0.10 | (−∞, −1.2508) ∪ (−0.9744, −0.4508) ∪ (0.7492, 1.0086) ∪ (1.5492, ∞) | 0.0087 | 0.0076
0.32 | (−∞, −1.8020) ∪ (0.1986, 0.2724) ∪ (0.9990, 1.0486) ∪ (2.2000, ∞) | 0.5001 | 0.2749
0.90 | (−∞, −3.1154) ∪ (−0.9326, 0.9838) ∪ (2.6262, ∞) | 0.2147 | 0.1700
2.00 | (−∞, −5.4212) ∪ (−2.5198, 1.1780) ∪ (3.6814, ∞) | 0.4523 | 0.2864
Table 5. The restricted Bayes decision rules, the achievable minimum α and the corresponding Bayes risk for different σ when λ = 0.8.

σ | Γ₁ | α | r(ϕ*)
0.10 | (−∞, −1.2508) ∪ (−0.9776, −0.4508) ∪ (0.7492, 1.0122) ∪ (1.5492, ∞) | 0.0079 | 0.0077
0.32 | (−∞, −1.8014) ∪ (0.1980, 0.2832) ∪ (0.9990, 1.0486) ∪ (2.2000, ∞) | 0.5001 | 0.2750
0.90 | (−∞, −3.0248) ∪ (−0.9674, 0.9198) ∪ (2.6970, ∞) | 0.1975 | 0.1715
2.00 | (−∞, −4.9886) ∪ (−2.4148, 1.4090) ∪ (3.7878, ∞) | 0.3661 | 0.2916
Table 6. The restricted Bayes decision rules, the achievable minimum α and the corresponding Bayes risk for different σ when λ = 0.6.

σ | Γ₁ | α | r(ϕ*)
0.10 | (−∞, −1.2508) ∪ (−0.9776, −0.4508) ∪ (0.7492, 1.0122) ∪ (1.5492, ∞) | 0.0079 | 0.0077
0.32 | (−∞, −1.8012) ∪ (−1.3404, −1.0020) ∪ (−0.5420, 0.4810) ∪ (0.9984, 1.2714) ∪ (2.2000, ∞) | 0.3323 | 0.3323
0.90 | (−∞, −2.9516) ∪ (−0.9956, 0.8524) ∪ (2.8002, ∞) | 0.1867 | 0.1762
2.00 | (−∞, −4.7802) ∪ (−2.1130, 1.5476) ∪ (4.0936, ∞) | 0.3337 | 0.3059
