Open Access
*Sensors* **2016**, *16*(12), 2152; https://doi.org/10.3390/s16122152

Article

Secure Distributed Detection under Energy Constraint in IoT-Oriented Sensor Networks

^{1} School of Electronic and Information Engineering, Xi’an Jiaotong University, No. 28 West Xianning Road, Xi’an 710049, China

^{2} Shaanxi Engineering Research Center of Smart Networks and Ubiquitous Access, Xi’an Jiaotong University, Xi’an 710049, China

^{*} Author to whom correspondence should be addressed.

Academic Editor: Leonhard M. Reindl

Received: 17 October 2016 / Accepted: 13 December 2016 / Published: 16 December 2016

## Abstract

We study secure distributed detection problems under an energy constraint for IoT-oriented sensor networks. Conventional channel-aware encryption (CAE) is an efficient physical-layer secure distributed detection scheme in light of its energy efficiency, good scalability and robustness over diverse eavesdropping scenarios. However, in the CAE scheme, it remains an open problem how to optimize the key thresholds for the estimated channel gain, which determine each sensor’s reporting action. Moreover, the CAE scheme does not jointly consider the accuracy of the local detection results when deciding whether a sensor stays dormant. To solve these problems, we first analyze the error probability and derive the optimal thresholds of the CAE scheme under a specified energy constraint. These results build a convenient mathematical framework for our further design. Under this framework, we propose a hybrid secure distributed detection scheme. Our proposal satisfies the energy constraint by keeping some sensors inactive according to the local detection confidence level, which is characterized by the likelihood ratio. Meanwhile, security is guaranteed by randomly flipping the local decisions forwarded to the fusion center based on the channel amplitude. We further optimize the key parameters of our hybrid scheme, including two local decision thresholds and one channel comparison threshold. Performance evaluation results demonstrate that our hybrid scheme outperforms CAE under stringent energy constraints, especially in the high signal-to-noise ratio scenario, while security is still assured.

Keywords: Internet of Things; wireless sensor network; distributed detection; eavesdropping; physical layer security; energy constraint; decision fusion

## 1. Introduction

With the recent rapid advances in low-cost wireless sensors, radio frequency identification (RFID), Web technologies and wireless communications, connecting various smart objects to the Internet and realizing machine-to-human and machine-to-machine communications with the physical world have been widely anticipated [1]. This is the concept of the Internet of Things (IoT), which can provide ubiquitous connectivity, information gathering and data transmission capabilities in fields such as health monitoring, emergency response, environmental control, the military and industry. The pervasive sensing and control capabilities brought by the IoT will change our daily life significantly [2,3,4].

In the era of the IoT, billions of devices are linked to the Internet; Cisco predicts that 50 billion devices will be in use by 2020 [3]. Such a large number of deployed devices leads to many technical challenges, including spectrum scarcity, energy consumption and security [4,5,6]. To address the spectrum scarcity problem, technologies with high spectrum efficiency are advocated, for example the cognitive Internet of Things (CIoT), which introduces cognitive radio technology into the IoT network [5]. A decentralized inference network in which nodes transmit compressed observations to reduce the required bandwidth is another solution [7], and the distributed detection technique used in sensor networks is a typical instance [8,9,10,11]. Since a huge number of devices are involved, the energy spent on communication and computation is extremely large, and improving energy efficiency becomes ever more important. Although energy harvesting techniques can exploit external energy sources and relieve devices of battery-induced constraints, energy is a scarce resource that should always be used carefully; an energy-efficient solution therefore plays a significant role in the IoT [4,12].

As the IoT network develops, devices will become smarter and take over more human tasks, so they have to be more reliable and trustworthy [1]. However, a variety of attacks across the protocol layers attempt to disrupt the network or intercept information in the IoT, including denial of service (DoS) attacks, spoofed routing information attacks at the network layer, flooding attacks at the transport layer, resource exhaustion attacks at the link layer, and jamming and tampering attacks at the physical layer, among many others [13]. Security has therefore become an important aspect of IoT deployments [14,15]. Among these, the eavesdropping attack is the most common attack on data privacy [2,13].
To realize secure transmission, traditional key-based encryption techniques at the network layer have conventionally been relied upon. However, in IoT networks with low-complexity devices, the key distribution of symmetric cryptosystems and the highly complex computation of asymmetric cryptosystems can be very challenging [16]. Therefore, robust physical-layer security methods, which require little or no encryption-key support and have low computational complexity, can be adopted in the IoT [2,5,17]; furthermore, they can be combined with other lightweight cryptographic protocols to fulfill different IoT security targets.

An IoT system integrates various technologies and communication solutions, such as identification and tracking techniques, wired and wireless sensor and actuator networks and enhanced communication protocols [1,18,19,20]. Sensor networks, especially wireless sensor networks (WSNs), will play a crucial role in the IoT. The ubiquitous sensing provided by WSNs offers the ability to measure, infer and understand environmental indicators. Cooperating with RFID systems, WSNs can better track the status of things and build a bridge between the physical and digital worlds [18,21]. As the size and complexity of WSNs grow, the spectrum scarcity and energy consumption problems become more serious [22]. Furthermore, the broadcast nature of the wireless communications from sensors to controllers or fusion centers makes WSNs vulnerable to eavesdropping. Physical-layer security solutions with low complexity and low overhead are clearly better suited to WSNs, since sensors face practical constraints including limited computing capability, limited storage and severe energy budgets [2,10].

Due to the low bandwidth and power requirements at the sensors and its robustness to rapid environmental changes, distributed detection in WSNs has been used in a wide range of fields such as emergency response, environment monitoring, medical monitoring and military surveillance [10,23]. In distributed detection, sensors are deployed over a certain area to sense a physical phenomenon with a binary state in a decentralized fashion. Each sensor makes a binary decision based on its local observation and then transmits the local decision to a fusion center (FC) over a wireless channel [23]. Given the practical resource constraints and the serious security issues facing WSNs, secure distributed detection schemes under energy constraints are necessary for the development of an efficient IoT. Various secure strategies for distributed detection have been proposed under different assumptions on the eavesdroppers and transmission channels [8,9,10,23,24,25,26,27,28,29]. However, these studies focus on either the local detection at the sensors or the information transmission from the sensors to the FC, and the vast majority do not involve an energy constraint. This paper therefore develops an efficient hybrid solution combining the local decision with the transmission under an energy constraint, together with a mathematical framework for analyzing the error performance and optimizing the parameters of the developed schemes. The contributions of this paper can be summarized as follows.

(1) To enhance the operability of the channel-aware flipping method [10] in an energy-constrained WSN, a specific energy limit, represented by the sensors’ activity probability, is taken as an additional design constraint alongside perfect secrecy. We call this modified scheme the transmission channel based only (TCBO) secure detection under energy constraint. We then derive simplified log-likelihood ratios (LLRs) computed approximately under low and high signal-to-noise ratio (SNR) conditions. Following that, we obtain the asymptotic error probabilities of the ally fusion center (AFC) in the worst and best noise situations with the help of the central limit theorem (CLT). Next, optimization problems under the perfect secrecy and energy constraints are formulated to find the three comparison thresholds used in the random flipping operation. After simplifying the objective functions, the optimal thresholds are discussed and obtained. This framework for error probability analysis and parameter optimization is also used as the mathematical approach for solving for the main parameters of our newly designed scheme.

(2) Since the local detection performance also evidently affects the decision fusion, we combine the local observation quality with the transmission channel information to design a more efficient hybrid scheme. Here, the energy constraint is satisfied by censoring sensors with less informative local LLRs, and transmission security is guaranteed by randomly flipping the local decisions based on the estimated channel gains. We call this scheme the joint local decision and wireless transmission (JLDWT) scheme. Then, following the mathematical framework from the first contribution, two local detection thresholds and one flipping comparison threshold are optimized to minimize the AFC’s error rate while satisfying the perfect secrecy condition and the energy limitation.

(3) Finally, through comprehensive simulations from different perspectives, the above two schemes are evaluated in a practical wireless transmission environment. The simulation results demonstrate that the newly proposed hybrid scheme improves the error performance of the AFC in a relatively high-SNR transmission environment under a more severe energy constraint, while maintaining perfect secrecy.

The rest of the paper is organized as follows: an overview of related work is discussed in Section 2. Section 3 describes the system model. The TCBO and JLDWT schemes are presented in Section 4 and Section 5, respectively. The simulation results are discussed in Section 6. Section 7 concludes the paper.

## 2. Related Work

In this section, we summarize related work on physical layer security suitable for the IoT. The communication network, consisting of controllers and actuators, and the sensor network, composed of sensors and controllers, are the two main subsystems of an abstracted IoT network [2]. Physical layer security solutions potentially applicable to both subsystems are presented below.

In the communication network of the IoT, the controllers are the signal transmitters, which may be equipped with multiple antennas and an adequate energy supply. Some of the classical physical-layer secure schemes proposed for the downlink of LTE-Advanced networks may then be usable [30,31,32,33,34,35,36,37,38,39]. When the main channel (the transmitter-to-legitimate-receiver channel) and the eavesdropper channel are perfectly known, beamforming (precoding) techniques can be adopted to maximize the signal quality difference between the destination and the eavesdropper by strengthening or weakening signals in certain dimensions. For the scenario of multiple-input, single-output and multi-antenna eavesdropper (MISOME) with a single legitimate receiver, the optimal beamforming vector is the generalized eigenvector corresponding to the largest generalized eigenvalue of the receiver and eavesdropper channel covariance matrices [30]. Under the multiple-input, multiple-output and multi-antenna eavesdropper (MIMOME) scenario, the search for the optimal precoder with a total power constraint has a non-convex form and the solution must be found numerically; if a power covariance constraint is considered instead, a closed-form solution based on the generalized eigenvalue decomposition (GEVD) can be obtained [31]. In the case of multiple receivers and eavesdroppers, the achievable secrecy rates can be used to formulate optimization problems that find a secrecy beamformer or precoder [32]; a simpler but less effective design can be achieved using the channel inversion technique [33]. In addition, when the eavesdropper’s CSI is unknown, emitting artificial noise (AN) helps prevent the eavesdropper from obtaining a good channel. The AN is often placed in the null space of the main channel when there is a single destination and eavesdropper [34].
For the case of multiple receivers and eavesdroppers, the AN should instead be placed in the null space of the effective channels of all receivers [35]. Since AN may reduce the transmission power available for the useful data, the power allocation between data and AN should be examined to ensure good performance under the secrecy constraint [36]. Another novel strategy for degrading the eavesdropper’s channel quality is based on noise aggregation [40,41], in which two adjacent timeslots are bound together to transmit two packets and the transmitter performs a bitwise exclusive-or (XOR) of the even packet with the preceding odd one. Because the legitimate receiver can detect the packets in odd slots correctly via an ARQ protocol while the eavesdropper may only have a noisy observation, the channel noise in odd slots is aggregated into the even slots [41]. Clearly, many of the above security schemes are difficult to employ directly in an IoT setting, because accurate legitimate channel state information at the transmitter (CSIT) is hard to acquire: channel training opportunities are limited and high-rate feedback channels are lacking in the IoT. The eavesdropper’s CSIT is even harder to obtain, since eavesdroppers remain completely passive. The AN-based methods are also undesirable due to their higher energy expenditure [2].

In addition, a variety of physical layer security solutions have been proposed in the literature for distributed detection in sensor networks. Under the assumption that the eavesdropping fusion center (EFC) can only distinguish the busy/idle state of a sensor’s transmission, an optimal sensor censoring scheme with a perfect secrecy and energy constraint was given in [8], but the assumed processing capability of the EFC was too limited. Another category of effective schemes is based on probabilistic ciphering, in which a sensor’s observation is randomly mapped to a set of quantization levels according to an optimal mapping probability matrix [9,24,25]. However, security is assured by assuming that the EFC is completely ignorant of the mapping probabilities, and the crucial energy efficiency issue was not discussed. In [26,27], the optimal local quantizer was found by minimizing the detection cost at the AFC while constraining the EFC’s detection cost or error performance, but the energy consumption problem was again not addressed. Moreover, none of the above solutions was evaluated over a practical wireless channel, and the effect of the transmission channel on their security was not discussed. Later, a category of channel-aware encryption methods was proposed to achieve perfect secrecy from the EFC, including the type-based multiple access scheme of [23] and the channel-based bit-flipping scheme of [10]; these do not need the accurate channel coefficients, only the channel gains, which can be estimated using a pilot signal from the AFC. In channel-aware encryption, good energy efficiency can be realized by introducing dormant sensors.
The inherently significant difference between the wireless channels to the EFC and to the AFC is exploited to achieve perfect secrecy of the sensors’ transmissions, since the channels from the sensors to the EFC and to the AFC are independent of each other. In particular, the channel-based random flipping method is very suitable for distributed detection due to its low complexity, good scalability and few restrictions on the EFC. However, the work in [10] did not give an efficient way to optimize the three comparison thresholds. Moreover, when choosing which sensors sleep, the channel gain was taken as the only metric; the local decision quality was not considered, although it may have a greater influence on the fusion performance. AN-based mechanisms that let some of the sensors or the AFC transmit a jamming signal to degrade the SINR of the EFC have also been introduced in sensor networks [28,29]. However, the performance of the AFC is also reduced when the jamming signal worsens the EFC channel [29], or additional energy must be spent by the AFC to interfere with the EFC [28]. Motivated by these drawbacks of previous works, we first design an analysis framework to complete the performance analysis and threshold optimization of the TCBO scheme, and then propose the secure and energy-efficient JLDWT scheme, a hybrid method combining local detection and wireless transmission.

## 3. System Model

In this section, the IoT sensor network scenario under consideration is described, and the local detection and the transmission of local decisions from the sensors to the fusion center are introduced.

#### 3.1. IoT Sensor Network

Consider the sensor network in an IoT system illustrated in Figure 1, which performs distributed detection for a binary hypothesis test of ${\theta}_{0}$ against ${\theta}_{1}$. A number of sensors are deployed near the physical system to detect a binary target state and transmit their local decision results to an AFC through a wireless parallel access channel (PAC). Meanwhile, a passive EFC overhears the communications between the sensors and the AFC and also attempts to detect the state of θ. The channels from the sensors to the AFC and the EFC are called the main and eavesdropping channels, respectively. Moreover, the sensor network is energy-constrained, since the power supplies of the sensors are usually severely limited. Security and energy saving are thus the main challenges faced by our sensor network. Therefore, in each local decision reporting slot, some sensors stay dormant to meet the energy constraint, and some of the active sensors transmit a bit-flipped version of their local detection results to confuse the EFC.

In Figure 1, the sensors with indices in the sets $\{{i}_{1},{i}_{2},\dots ,{i}_{{K}_{N}}\}$, $\{{j}_{1},{j}_{2},\dots ,{j}_{{K}_{F}}\}$ and $\{{k}_{1},{k}_{2},\dots ,{k}_{{K}_{D}}\}$ belong to the non-flipping group, the flipping group and the dormant group, respectively. Thus, the total number of sensors in the network is $K={K}_{F}+{K}_{D}+{K}_{N}$. The observation of the physical system by the k-th sensor is denoted by ${x}_{k}$. The communication channels from sensor k to the AFC and the EFC are represented by ${h}_{k}^{A}$ and ${h}_{k}^{E}$, respectively; they are assumed to be independent and identically distributed (i.i.d.) Rayleigh block-fading channels. Moreover, a transmission (activation) probability β, which is proportional to the per-sensor energy consumption, is introduced to represent the energy constraint.

#### 3.2. Local Detection of Sensors

For the k-th sensor, the acquired observation corrupted by additive noise is modeled as:

$$\begin{array}{c}{\theta}_{0}:\;\;{x}_{k}={w}_{k}\hfill \\ {\theta}_{1}:\;\;{x}_{k}=\theta +{w}_{k}\hfill \end{array}$$

where ${w}_{k}$ is an i.i.d. zero-mean Gaussian random variable with variance ${\sigma}^{2}$, i.e., ${w}_{k}\sim \mathcal{N}(0,{\sigma}^{2})$. Thus the SNR of the local detection is $sn{r}_{L}={\theta}^{2}/{\sigma}^{2}$. Based on the observation, the sensor makes a one-bit local decision ${b}_{k}\in \{0,1\}$ to indicate the absence or presence of θ using the Bayesian detection criterion:

$$\frac{f\left(\left.{\theta}_{1}\right|{x}_{k}\right)}{f\left(\left.{\theta}_{0}\right|{x}_{k}\right)}\begin{array}{c}\stackrel{{b}_{k}=1}{>}{\lambda}_{U}\\ \underset{{b}_{k}=0}{<}{\lambda}_{L}\end{array}$$

where $f\left(\left.{\theta}_{i}\right|{x}_{k}\right)$ is the posterior probability density function (PDF) of ${\theta}_{i}$ given ${x}_{k}$ for $i=0,1$. The main difference of Equation (2) from traditional Bayesian detection is that two local decision thresholds, rather than one, are set. ${\lambda}_{U}$ and ${\lambda}_{L}$, which satisfy $0<{\lambda}_{L}\le {\lambda}_{U}<\infty $, are the upper and lower thresholds and are assumed to be identical at all sensors. If the posterior probability ratio lies inside the region $[{\lambda}_{L},{\lambda}_{U}]$, the observation is less informative for discriminating between ${\theta}_{0}$ and ${\theta}_{1}$, so the corresponding decision result is more likely to be wrong. It is better to keep such sensors silent for energy efficiency. This is the basic idea of the sensor censoring technique [8,42]; in this paper, we adopt it to realize energy saving for the secure transmission of the sensors, with details described in Section 5.
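A minimal Python sketch of this two-threshold rule under the Gaussian observation model of Equation (1); `local_decision` and its arguments are illustrative names, and equal priors are assumed by default:

```python
import math

def local_decision(x_k, theta, sigma, lam_U, lam_L, q0=0.5, q1=0.5):
    """Two-threshold Bayesian local decision of Equation (2).

    Returns 1 or 0 for a confident decision, or None when the posterior
    ratio falls inside [lam_L, lam_U] (the observation is uninformative
    and the sensor is a candidate for censoring, cf. Section 5).
    """
    # Posterior ratio f(theta_1|x)/f(theta_0|x) = (q1/q0) * exp(LLR),
    # with the LLR of Equation (5) for the Gaussian model of Equation (1).
    llr = (theta / sigma ** 2) * x_k - theta ** 2 / (2 * sigma ** 2)
    post_ratio = (q1 / q0) * math.exp(llr)
    if post_ratio > lam_U:
        return 1
    if post_ratio < lam_L:
        return 0
    return None  # censored / dormant
```

For example, with $\theta=\sigma=1$, ${\lambda}_{U}=2$ and ${\lambda}_{L}=0.5$, an observation near $\theta/2$ lands in the censoring region, while observations well above or below it yield confident decisions.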

The prior probabilities of ${\theta}_{0}$ and ${\theta}_{1}$ are assumed to be ${q}_{0}$ and ${q}_{1}$, respectively. Equation (2) can then be transformed into:

$${\lambda}_{k}=\frac{f\left(\left.{x}_{k}\right|{\theta}_{1}\right)}{f\left(\left.{x}_{k}\right|{\theta}_{0}\right)}\begin{array}{c}\stackrel{{b}_{k}=1}{>}{\lambda}_{U}({q}_{0}/{q}_{1})\\ \underset{{b}_{k}=0}{<}{\lambda}_{L}({q}_{0}/{q}_{1})\end{array}$$

where $f\left(\left.{x}_{k}\right|{\theta}_{i}\right)$ is the conditional PDF of ${x}_{k}$ under hypothesis ${\theta}_{i}$, and ${\lambda}_{k}$ is the likelihood ratio (LR). From Equation (1), we obtain

$$\begin{array}{c}f\left(\left.{x}_{k}\right|{\theta}_{1}\right)=\frac{exp[-{({x}_{k}-\theta )}^{2}/2{\sigma}^{2}]}{\sqrt{2\pi}\sigma}\hfill \\ \\ f\left(\left.{x}_{k}\right|{\theta}_{0}\right)=\frac{exp[-{\left({x}_{k}\right)}^{2}/2{\sigma}^{2}]}{\sqrt{2\pi}\sigma}\hfill \end{array}$$

Furthermore, the log-likelihood ratio (LLR) can be written as

$${\mathrm{\Lambda}}_{k}^{L}=log\left({\lambda}_{k}\right)=\frac{\theta}{{\sigma}^{2}}{x}_{k}-\frac{{\theta}^{2}}{2{\sigma}^{2}}$$

Combining Equations (4) and (5), it can be easily derived that the conditional PDFs of ${\mathrm{\Lambda}}_{k}^{L}$ are

$$\begin{array}{c}f\left(\left.{\mathrm{\Lambda}}_{k}^{L}\right|{\theta}_{1}\right)=\frac{1}{\sqrt{2\pi \cdot sn{r}_{L}}}exp\left(-\frac{{({\mathrm{\Lambda}}_{k}^{L}-sn{r}_{L}/2)}^{2}}{2\cdot sn{r}_{L}}\right)\hfill \\ \\ f\left(\left.{\mathrm{\Lambda}}_{k}^{L}\right|{\theta}_{0}\right)=\frac{1}{\sqrt{2\pi \cdot sn{r}_{L}}}exp\left(-\frac{{({\mathrm{\Lambda}}_{k}^{L}+sn{r}_{L}/2)}^{2}}{2\cdot sn{r}_{L}}\right)\hfill \end{array}$$
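These conditional distributions can be checked by Monte Carlo simulation; a sketch assuming $\theta=\sigma=1$ (so $snr_L=1$), with variable names chosen for illustration:

```python
import math
import random

random.seed(1)
theta, sigma = 1.0, 1.0
snr_L = theta ** 2 / sigma ** 2  # local detection SNR

# Draw observations under theta_1 and map each one to its LLR, Equation (5).
llrs = [(theta / sigma ** 2) * (theta + random.gauss(0.0, sigma))
        - theta ** 2 / (2 * sigma ** 2)
        for _ in range(200_000)]

# Equation (6) predicts LLR | theta_1 ~ N(snr_L / 2, snr_L).
mean = sum(llrs) / len(llrs)
var = sum((l - mean) ** 2 for l in llrs) / len(llrs)
```

The sample mean and variance should land near $snr_L/2$ and $snr_L$, matching the first line of Equation (6); the ${\theta}_{0}$ case is symmetric with mean $-snr_L/2$.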

Furthermore, the identity $f\left(\left.{\mathrm{\Lambda}}_{k}^{L}\right|{\theta}_{1}\right)/f\left(\left.{\mathrm{\Lambda}}_{k}^{L}\right|{\theta}_{0}\right)=exp\left({\mathrm{\Lambda}}_{k}^{L}\right)$ holds; this is the nesting property of the LR.
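The nesting property can be verified directly from the two Gaussian PDFs of Equation (6); a quick numerical check, assuming $snr_L = 1$:

```python
import math

snr_L = 1.0

def f_llr(L, hypothesis):
    """Conditional PDF of the LLR under theta_1 (+) or theta_0 (-), Eq. (6)."""
    mean = snr_L / 2 if hypothesis == 1 else -snr_L / 2
    return math.exp(-(L - mean) ** 2 / (2 * snr_L)) / math.sqrt(2 * math.pi * snr_L)

# The ratio of the two PDFs collapses to exp(L): the nesting property.
nesting_holds = all(abs(f_llr(L, 1) / f_llr(L, 0) - math.exp(L)) < 1e-9
                    for L in (-2.0, -0.3, 0.0, 0.7, 1.5))
```

The exponents differ by $[( \Lambda + snr_L/2)^2 - (\Lambda - snr_L/2)^2]/(2\,snr_L) = \Lambda$, which is why the ratio is exactly $\exp(\Lambda_k^L)$.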

There are four possible cases for the local detection, namely correct decisions under the two states, missed detection and false alarm. Based on Equations (3) and (6), we can calculate the probabilities of the four cases:

$$\begin{array}{c}{P}_{d}={\int}_{log({\lambda}_{U}\cdot {q}_{0}/{q}_{1})}^{\infty}f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{1}\right)d{\mathrm{\Lambda}}_{k}^{L}=Q\left(\frac{log({\lambda}_{U}\cdot {q}_{0}/{q}_{1})-sn{r}_{L}/2}{\sqrt{sn{r}_{L}}}\right)\hfill \\ {P}_{m}={\int}_{-\infty}^{log({\lambda}_{L}\cdot {q}_{0}/{q}_{1})}f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{1}\right)d{\mathrm{\Lambda}}_{k}^{L}=1-Q\left(\frac{log({\lambda}_{L}\cdot {q}_{0}/{q}_{1})-sn{r}_{L}/2}{\sqrt{sn{r}_{L}}}\right)\hfill \\ {P}_{f}={\int}_{log({\lambda}_{U}\cdot {q}_{0}/{q}_{1})}^{+\infty}f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{0}\right)d{\mathrm{\Lambda}}_{k}^{L}=Q\left(\frac{log({\lambda}_{U}\cdot {q}_{0}/{q}_{1})+sn{r}_{L}/2}{\sqrt{sn{r}_{L}}}\right)\hfill \\ {P}_{0d}={\int}_{-\infty}^{log({\lambda}_{L}\cdot {q}_{0}/{q}_{1})}f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{0}\right)d{\mathrm{\Lambda}}_{k}^{L}=1-Q\left(\frac{log({\lambda}_{L}\cdot {q}_{0}/{q}_{1})+sn{r}_{L}/2}{\sqrt{sn{r}_{L}}}\right)\hfill \end{array}$$

where ${P}_{0d}$ is the probability of correct detection when θ is absent and $Q\left(x\right)=1/\sqrt{2\pi}{\int}_{x}^{\infty}exp(-{t}^{2}/2)dt$. In addition, the error probability of the local detection at each sensor is defined as ${P}_{{E}_{L}}={q}_{0}{P}_{f}+{q}_{1}{P}_{m}$. If we set ${\lambda}_{U}={\lambda}_{L}=\lambda $, this error probability is given by

$${P}_{{E}_{L}}={q}_{0}Q\left(\frac{log(\lambda \cdot {q}_{0}/{q}_{1})+sn{r}_{L}/2}{\sqrt{sn{r}_{L}}}\right)+{q}_{1}\left[1-Q\left(\frac{log(\lambda \cdot {q}_{0}/{q}_{1})-sn{r}_{L}/2}{\sqrt{sn{r}_{L}}}\right)\right]$$

Furthermore, the first-order derivative of ${P}_{{E}_{L}}$ with respect to λ is

$$\frac{d{P}_{{E}_{L}}}{d\lambda}=\frac{1}{\lambda \cdot \sqrt{2sn{r}_{L}}}exp\left(-\frac{{[log(\lambda \cdot {q}_{0}/{q}_{1})]}^{2}+{\left(sn{r}_{L}\right)}^{2}/4}{2sn{r}_{L}}\right)\cdot \left(\frac{{q}_{1}}{\sqrt{\pi}}exp\left[\frac{log(\lambda \cdot {q}_{0}/{q}_{1})}{2}\right]-\frac{{q}_{0}}{\sqrt{\pi}}exp\left[-\frac{log(\lambda \cdot {q}_{0}/{q}_{1})}{2}\right]\right)$$

Setting $\frac{d{P}_{{E}_{L}}}{d\lambda}=0$, we find that the optimal ${\lambda}^{*}$ in $0<\lambda <\infty $ minimizing ${P}_{{E}_{L}}$ is ${\lambda}^{*}=1$.
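The optimum ${\lambda}^{*}=1$ can be checked numerically against Equation (8); a sketch assuming equal priors and $snr_L = 1$ (by the derivative condition above, the minimizer does not depend on either choice):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def P_EL(lam, snr_L, q0=0.5, q1=0.5):
    """Local error probability of Equation (8) with lam_U = lam_L = lam."""
    a = math.log(lam * q0 / q1)
    root = math.sqrt(snr_L)
    return q0 * Q((a + snr_L / 2) / root) + q1 * (1.0 - Q((a - snr_L / 2) / root))

# A coarse grid search over lambda; the minimum should sit at lambda = 1.
grid = [0.25 * k for k in range(1, 41)]  # lambda in 0.25 .. 10.0
best = min(grid, key=lambda lam: P_EL(lam, snr_L=1.0))
```

At $\lambda=1$ with equal priors, $P_{E_L}$ reduces to $Q(\sqrt{snr_L}/2)$, the familiar minimum error probability for antipodal Gaussian detection.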

#### 3.3. Transmission of Local Decisions from Sensors to FC

After the local decisions are made, the sensors deliver them to the AFC. In this paper, a wireless PAC between the sensors and the AFC is considered, so the transmission channels from different sensors to the fusion center are orthogonal. However, the sensors’ transmissions are overheard by the EFC, which also wishes to detect the target state. From the literature [2,7,9], stochastic ciphering can efficiently protect the sensors’ information from the EFC: each sensor flips its decision randomly, and the EFC is confused as long as it is ignorant of the flipping probability (i.e., the encryption key). However, the key exchange between the AFC and a sensor may itself not be secure from the EFC. In this case, the channel-aware stochastic cipher [10], whose randomness is seeded by the transmission channels, is preferable. Because the channels from a sensor to the AFC and to the EFC are independent, it is impossible for the EFC to deterministically infer a sensor’s flipping action from the main channel gain. Thus, the information leaked to the EFC is reduced even if the flipping probability is completely known to the EFC. Therefore, we adopt channel-based stochastic ciphering to realize the secure transmission of local decisions from the sensors to the AFC.

To sense the channel information, each sensor first receives a known pilot signal from the AFC, together with three comparison thresholds. The estimated channel gain is then compared with the thresholds to determine which action the sensor takes: report the unaltered local decision, report a “flipped” decision, or stay dormant to satisfy the energy constraint.

Assume the main channel and the eavesdropping channel both follow the Rayleigh distribution with unit power, i.e., $f\left(h\right)=2h\,exp(-{h}^{2})$ with $h\in [0,\infty )$, as is usually considered in existing studies [10,23,42]. Assume the pilot signal is strong enough that the sensors can obtain the exact channel gains. By channel reciprocity, a sensor’s estimated channel gain indicates its sensor-to-AFC channel; moreover, it is unknown to the EFC due to the statistical independence of the main and eavesdropping channels. The thresholds broadcast by the AFC are $\{{t}_{1},{t}_{2},{t}_{3}\}$ with $0\le {t}_{3}\le {t}_{2}\le {t}_{1}<\infty $. The secure transmission strategy with energy limitation is then: sensor k reports its original local decision if ${h}_{k}^{A}>{t}_{1}$, reports a bit-flipped decision if ${t}_{3}\le {h}_{k}^{A}\le {t}_{2}$, and stays silent for energy efficiency otherwise. From the security analysis in [10], the condition for perfect secrecy is ${\lambda}_{1}\stackrel{\mathrm{def}}{=}{\int}_{{t}_{1}}^{\infty}f\left({h}_{k}^{A}\right)d{h}_{k}^{A}={\int}_{{t}_{3}}^{{t}_{2}}f\left({h}_{k}^{A}\right)d{h}_{k}^{A}\stackrel{\mathrm{def}}{=}{\lambda}_{2}$. To meet the network energy constraint, the inequality ${\lambda}_{1}+{\lambda}_{2}\le \beta $ must also hold. Moreover, the case with a single “no-send” region is considered in this paper, that is, either ${t}_{3}=0$ or ${t}_{1}={t}_{2}$, as illustrated in Figure 2.
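Under the unit-power Rayleigh model, these integrals have closed forms, since ${\int}_{t}^{\infty}2h\,exp(-{h}^{2})dh=exp(-{t}^{2})$. A sketch of the resulting thresholds, assuming the ${t}_{3}=0$ variant and that the energy budget is used with equality (${\lambda}_{1}={\lambda}_{2}=\beta /2$); `secrecy_thresholds` is an illustrative name:

```python
import math

def secrecy_thresholds(beta):
    """Thresholds {t1, t2, t3} for the single no-send-region case t3 = 0,
    assuming the energy budget is spent exactly: lambda1 = lambda2 = beta/2.

    For the unit-power Rayleigh gain, P(h > t) = exp(-t^2), so
      lambda1 = exp(-t1^2)        (non-flipping region h > t1)
      lambda2 = 1 - exp(-t2^2)    (flipping region 0 <= h <= t2)
    """
    t1 = math.sqrt(math.log(2.0 / beta))
    t2 = math.sqrt(-math.log(1.0 - beta / 2.0))
    return t1, t2, 0.0

t1, t2, t3 = secrecy_thresholds(beta=0.4)
lam1 = math.exp(-t1 ** 2)
lam2 = 1.0 - math.exp(-t2 ** 2)
# Perfect secrecy: lam1 == lam2; energy: lam1 + lam2 == beta.
```

For any $\beta \le 1$ the ordering ${t}_{3}\le {t}_{2}\le {t}_{1}$ holds automatically, since $(1-\beta /2)(2/\beta )\ge 1$.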

## 4. Transmission Channel Based Only Secure Detection under Energy Constraint

In [10], the authors designed a confidential and energy-efficient distributed detection method, called channel aware encryption, solely from the viewpoint of the wireless transmission between the sensors and the fusion center, and derived the condition for perfect secrecy. The LLR-based decision fusion was studied and a simplified decision fusion rule for the high-SNR region was given. However, a more detailed analysis of the error probability of the decision fusion and the optimization of the thresholds were absent. In this section, we analyze the error performance of the AFC based on approximated LLRs derived under low and high SNR conditions, respectively. Afterwards, the three thresholds are optimized to minimize the probability of error at the AFC while ensuring perfect secrecy from the EFC and satisfying the energy constraint. Note that a specified energy constraint $\beta \le 1$ is introduced; we call the adjusted scheme the TCBO secure detection under energy constraint.

#### 4.1. Approximation of LLR and Error Probabilities of FC

For the secure scheme based only on transmission channels, both the confidentiality from the eavesdropper and the energy saving are provided by the reporting strategy for local decisions. Thus, the thresholds used in the local detection are set to ${\lambda}_{L}={\lambda}_{U}={\lambda}^{*}$ to optimize each sensor’s local performance. Then, ${P}_{m}=1-{P}_{d}$ and ${P}_{0d}=1-{P}_{f}$. Common binary phase shift keying (BPSK) modulation is used by each sensor to deliver its one-bit decision. At the fusion center, the LLR-based fusion rule is used and the transmission channel information is unknown. It is also assumed that the fusion rules and the prior information at the EFC are identical to those at the AFC, which is the worst case from the security viewpoint.

The received signals at the AFC and EFC from sensor k are denoted as ${y}_{k}^{\mathrm{A}}$ and ${y}_{k}^{\mathrm{E}}$, respectively. They can be described as
where ${n}_{k}^{A}\sim \mathcal{N}(0,{\delta}_{A}^{2})$ and ${n}_{k}^{E}\sim \mathcal{N}(0,{\delta}_{E}^{2})$. Thus, the transmission channel SNR for the AFC and EFC can be written as $SN{R}_{A}={\left|{h}_{k}^{A}{x}_{k}\right|}^{2}/{\delta}_{A}^{2}$ and $SN{R}_{E}={\left|{h}_{k}^{E}{x}_{k}\right|}^{2}/{\delta}_{E}^{2}$, respectively. Following the channel-aware flipping rule, we have ${x}_{k}=2{b}_{k}-1$ for ${h}_{k}^{A}>{t}_{1}$, ${x}_{k}=2{\overline{b}}_{k}-1$ for ${t}_{3}\le {h}_{k}^{A}\le {t}_{2}$ and ${x}_{k}=0$ for other ${h}_{k}^{\mathrm{A}}$. The LLR at the AFC can be expressed in terms of ${\mathbf{y}}^{A}=[{y}_{1}^{A},{y}_{2}^{A},...,{y}_{K}^{A}]$ as
where (a) is due to the independence of the different ${y}_{k}^{A}$, and $f\left({y}_{k}^{A}|{\theta}_{i}\right)$ denotes the likelihood function of sensor k under hypothesis ${\theta}_{i}$. For the Bayesian setup, the optimal decision rule is ${\mathrm{\Lambda}}^{A}\underset{{\theta}_{0}}{\overset{{\theta}_{1}}{\gtrless}}log({q}_{0}/{q}_{1})$.

$$\begin{array}{c}{y}_{k}^{A}={h}_{k}^{A}{x}_{k}+{n}_{k}^{A}\hfill \\ \\ {y}_{k}^{E}={h}_{k}^{E}{x}_{k}+{n}_{k}^{E}\hfill \end{array}$$

$${\mathrm{\Lambda}}^{A}=\frac{1}{K}log\frac{f\left({\mathbf{y}}^{A}|{\theta}_{1}\right)}{f\left({\mathbf{y}}^{A}|{\theta}_{0}\right)}\stackrel{\left(\mathrm{a}\right)}{=}\frac{1}{K}\sum _{k=1}^{K}log\frac{f\left({y}_{k}^{A}|{\theta}_{1}\right)}{f\left({y}_{k}^{A}|{\theta}_{0}\right)}$$

Using a derivation method similar to that in Section IV of [10], we can obtain
where

$$\begin{array}{cc}f\left({y}_{k}^{A}|{\theta}_{i}\right)\hfill & =P\left({b}_{k}=1|{\theta}_{i}\right)\left[\mathrm{\Phi}\left({t}_{1},\infty ,1,{y}_{k}^{A},{\delta}_{A}^{2}\right)+\mathrm{\Phi}\left({t}_{3},{t}_{2},-1,{y}_{k}^{A},{\delta}_{A}^{2}\right)\right]\hfill \\ \\ & +P\left({b}_{k}=0|{\theta}_{i}\right)\left[\mathrm{\Phi}\left({t}_{1},\infty ,-1,{y}_{k}^{A},{\delta}_{A}^{2}\right)+\mathrm{\Phi}\left({t}_{3},{t}_{2},1,{y}_{k}^{A},{\delta}_{A}^{2}\right)\right]\hfill \\ \\ & +\left[\mathrm{\Phi}\left(0,{t}_{3},0,{y}_{k}^{A},{\delta}_{A}^{2}\right)+\mathrm{\Phi}\left({t}_{2},{t}_{1},0,{y}_{k}^{A},{\delta}_{A}^{2}\right)\right]\hfill \end{array}$$

$$\begin{array}{cc}\mathrm{\Phi}\left({t}_{a},{t}_{b},{x}_{k},{y}_{k}^{A},{\delta}_{A}^{2}\right)\hfill & ={\int}_{{t}_{a}}^{{t}_{b}}f\left({y}_{k}^{A}\right|{h}_{k}^{A},{x}_{k})f\left({h}_{k}^{A}\right)d{h}_{k}^{A}\hfill \\ & ={\int}_{{t}_{a}}^{{t}_{b}}\frac{1}{\sqrt{2\pi}{\delta}_{A}}exp\left(-\frac{{\left({y}_{k}^{A}-{h}_{k}^{A}{x}_{k}\right)}^{2}}{2{\delta}_{A}^{2}}\right)2{h}_{k}^{A}exp\left(-{\left({h}_{k}^{A}\right)}^{2}\right)d{h}_{k}^{A}\hfill \end{array}$$

Note that the LLR based on Equation (12) requires numerical integration, which greatly complicates both the performance analysis of decision fusion and the optimization of the comparison thresholds. Therefore, we examine approximations of the LLR under low- and high-SNR scenarios and analyze the error probabilities based on these approximations in what follows.
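As an illustrative sketch of this point, the likelihood in Equation (12), with Φ given by Equation (13), can be evaluated by numerical integration over the Rayleigh channel gain. The function names, thresholds and operating point below are ours, not the paper’s:

```python
import numpy as np
from scipy.integrate import quad

def phi(t_a, t_b, x_k, y, var):
    """Phi(t_a, t_b, x_k, y, delta^2): the Gaussian likelihood of y averaged
    over the Rayleigh channel gain h with pdf 2h*exp(-h^2) on [t_a, t_b]."""
    integrand = lambda h: (np.exp(-(y - h * x_k) ** 2 / (2 * var))
                           / np.sqrt(2 * np.pi * var)) * 2 * h * np.exp(-h ** 2)
    val, _ = quad(integrand, t_a, t_b)
    return val

def likelihood(y, p_send1, t3, t2, t1, var):
    """f(y | theta_i) as in Equation (12); p_send1 = P(b_k = 1 | theta_i)."""
    return (p_send1 * (phi(t1, np.inf, 1, y, var) + phi(t3, t2, -1, y, var))
            + (1 - p_send1) * (phi(t1, np.inf, -1, y, var) + phi(t3, t2, 1, y, var))
            + phi(0, t3, 0, y, var) + phi(t2, t1, 0, y, var))

# Illustrative values (not from the paper): Pd = 0.9, Pf = 0.1.
y, var = 0.4, 1.0
t3, t2, t1 = 0.0, 0.59, 1.18
llr = np.log(likelihood(y, 0.9, t3, t2, t1, var)
             / likelihood(y, 0.1, t3, t2, t1, var))
```

Each of the six Φ terms requires one quadrature per received sample, which is exactly the cost the approximations below avoid.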

#### 4.1.1. Approximation of LLR and Error Performance under Low SNR

As the channel noise variance ${\delta}_{A}^{2}\to \infty $, we obtain
where $N({y}_{k}^{A},{\delta}_{A}^{2})=1/\left(\sqrt{2\pi}{\delta}_{A}\right)exp[-{\left({y}_{k}^{A}\right)}^{2}/\left(2{\delta}_{A}^{2}\right)]$. The detailed derivation of Equation (14) is given in Appendix A. Applying Equation (14) to Equation (12) yields
where

$$\begin{array}{cc}\mathrm{\Phi}\left({t}_{a},{t}_{b},{x}_{k},{y}_{k}^{A},{\delta}_{A}^{2}\right)\hfill & \approx N({y}_{k}^{A},{\delta}_{A}^{2})\{exp(-{t}_{a}^{2})-exp(-{t}_{b}^{2})\hfill \\ \\ & +\frac{{y}_{k}^{A}{x}_{k}}{{\delta}_{A}^{2}}[{t}_{a}exp(-{t}_{a}^{2})-{t}_{b}exp(-{t}_{b}^{2})+{\int}_{{t}_{a}}^{{t}_{b}}exp(-{h}^{2})dh]\}\hfill \end{array}$$

$$\begin{array}{cc}\hfill f\left({y}_{k}^{A}|{\theta}_{1}\right)& =[\mathrm{\Phi}\left({t}_{1},\infty ,-1,{y}_{k}^{A},{\delta}_{A}^{2}\right)+\mathrm{\Phi}\left({t}_{3},{t}_{2},1,{y}_{k}^{A},{\delta}_{A}^{2}\right)+\mathrm{\Phi}\left(0,{t}_{3},0,{y}_{k}^{A},{\delta}_{A}^{2}\right)+\mathrm{\Phi}\left({t}_{2},{t}_{1},0,{y}_{k}^{A},{\delta}_{A}^{2}\right)]\hfill \\ \\ & +{P}_{d}[\mathrm{\Phi}\left({t}_{1},\infty ,1,{y}_{k}^{A},{\delta}_{A}^{2}\right)-\mathrm{\Phi}\left({t}_{1},\infty ,-1,{y}_{k}^{A},{\delta}_{A}^{2}\right)+\mathrm{\Phi}\left({t}_{3},{t}_{2},-1,{y}_{k}^{A},{\delta}_{A}^{2}\right)-\mathrm{\Phi}\left({t}_{3},{t}_{2},1,{y}_{k}^{A},{\delta}_{A}^{2}\right)]\hfill \\ \\ & \approx N({y}_{k}^{A},{\delta}_{A}^{2})\{1+\frac{{y}_{k}^{A}}{{\delta}_{A}^{2}}\left[m\left({t}_{1}\right)-n\left({t}_{3},{t}_{2}\right)\right](2{P}_{d}-1)\}\hfill \end{array}$$

$$f\left({y}_{k}^{A}|{\theta}_{0}\right)=N({y}_{k}^{A},{\delta}_{A}^{2})\{1+\frac{{y}_{k}^{A}}{{\delta}_{A}^{2}}\left[m\left({t}_{1}\right)-n\left({t}_{3},{t}_{2}\right)\right](2{P}_{f}-1)\}$$

$$\begin{array}{c}m\left({t}_{1}\right)={t}_{1}exp(-{t}_{1}^{2})+{\int}_{{t}_{1}}^{\infty}exp(-{h}^{2})dh\hfill \\ n\left({t}_{3},{t}_{2}\right)={t}_{3}exp(-{t}_{3}^{2})-{t}_{2}exp(-{t}_{2}^{2})+{\int}_{{t}_{3}}^{{t}_{2}}exp(-{h}^{2})dh\hfill \end{array}$$

From Equations (15) and (16), we achieve

$$\begin{array}{cc}{\mathrm{\Lambda}}_{k}^{A}\hfill & =log\frac{f\left({y}_{k}^{A}|{\theta}_{1}\right)}{f\left({y}_{k}^{A}|{\theta}_{0}\right)}=log[1+\frac{f\left({y}_{k}^{A}|{\theta}_{1}\right)-f\left({y}_{k}^{A}|{\theta}_{0}\right)}{f\left({y}_{k}^{A}|{\theta}_{0}\right)}]\\ & \approx log\{1+\frac{2({P}_{d}-{P}_{f})\frac{{y}_{k}^{A}}{{\delta}_{A}^{2}}[m\left({t}_{1}\right)-n\left({t}_{3},{t}_{2}\right)]}{1+(2{P}_{f}-1)\frac{{y}_{k}^{A}}{{\delta}_{A}^{2}}[m\left({t}_{1}\right)-n\left({t}_{3},{t}_{2}\right)]}\}\hfill \end{array}$$

Following the assumption of ${\delta}_{A}^{2}\to \infty $ and the fact that $log(1+x)\approx x$ with x closing to zero, we can further reduce Equation (18) to

$$\begin{array}{cc}{\mathrm{\Lambda}}_{k}^{A}\hfill & \approx 2({P}_{d}-{P}_{f})\frac{[m\left({t}_{1}\right)-n\left({t}_{3},{t}_{2}\right)]}{{\delta}_{A}^{2}}{y}_{k}^{A}\\ & =\mathrm{\Gamma}({\lambda}^{*},{t}_{3},{t}_{2},{t}_{1})\cdot {y}_{k}^{A}\hfill \end{array}$$

From Equation (19), we can see that the calculation of the LLR is simplified significantly for large noise variance. Note that Equations (11)–(19) also apply to the EFC, provided it has the same prior information as the AFC; the only change is replacing the received signal ${y}_{k}^{A}$ with ${y}_{k}^{E}$.
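Since the integrals in $m({t}_{1})$ and $n({t}_{3},{t}_{2})$ from Equation (17) are Gaussian tails, the slope Γ of the linearized LLR in Equation (19) has a closed form via the error function. A minimal sketch with illustrative parameters (the values of ${P}_{d}$, ${P}_{f}$, the thresholds and the noise variance are ours):

```python
import numpy as np
from scipy.special import erf, erfc

def m(t1):
    # m(t1) = t1*exp(-t1^2) + int_{t1}^{inf} exp(-h^2) dh
    return t1 * np.exp(-t1 ** 2) + 0.5 * np.sqrt(np.pi) * erfc(t1)

def n(t3, t2):
    # n(t3, t2) = t3*exp(-t3^2) - t2*exp(-t2^2) + int_{t3}^{t2} exp(-h^2) dh
    return (t3 * np.exp(-t3 ** 2) - t2 * np.exp(-t2 ** 2)
            + 0.5 * np.sqrt(np.pi) * (erf(t2) - erf(t3)))

def gamma_coeff(pd, pf, t3, t2, t1, var):
    # Gamma in Equation (19): slope of the linearized per-sensor LLR
    return 2 * (pd - pf) * (m(t1) - n(t3, t2)) / var

# Illustrative values: Pd = 0.9, Pf = 0.1, large noise variance.
g = gamma_coeff(0.9, 0.1, 0.0, 0.59, 1.18, var=10.0)
llr_k = g * 0.4   # simplified LLR for a received sample y = 0.4
```

The fusion center thus only needs to scale each received sample by a precomputed constant, instead of integrating per sample.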

Since the ${y}_{k}^{A}$ are independent of each other, ${\mathrm{\Lambda}}^{A}=\frac{1}{K}{\displaystyle \sum _{k=1}^{K}}{\mathrm{\Lambda}}_{k}^{A}$ can be viewed as the average of K i.i.d. random variables. Invoking the central limit theorem [9,23], the statistic ${\mathrm{\Lambda}}^{A}$ converges to a normal distribution for large K, i.e., ${\mathrm{\Lambda}}^{A}|{\theta}_{i}\sim \mathcal{N}({\mu}_{Ak}|{\theta}_{i},\frac{{\gamma}_{Ak}^{2}|{\theta}_{i}}{K})$, where ${\mu}_{Ak}|{\theta}_{i}$ and ${\gamma}_{Ak}^{2}|{\theta}_{i}$ are the mean and variance of ${\mathrm{\Lambda}}_{k}^{A}$ conditioned on ${\theta}_{i}$, respectively. By Equation (19), these are directly related to the mean and variance of ${y}_{k}^{A}$, so our next goal is to calculate $E\left({y}_{k}^{A}|{\theta}_{i}\right)$ and $Var\left({y}_{k}^{A}|{\theta}_{i}\right)$.

Utilizing Equation (15), we can write
where (a) is due to ${\int}_{-\infty}^{+\infty}{y}_{k}^{A}N({y}_{k}^{A},{\delta}_{A}^{2})d{y}_{k}^{A}=0$ and ${\int}_{-\infty}^{+\infty}{\left({y}_{k}^{A}\right)}^{2}N({y}_{k}^{A},{\delta}_{A}^{2})d{y}_{k}^{A}={\delta}_{A}^{2}$, whose derivations are described in Appendix B.

$$\begin{array}{c}E\left({y}_{k}^{A}|{\theta}_{1}\right)={\int}_{-\infty}^{+\infty}{y}_{k}^{A}f\left({y}_{k}^{A}|{\theta}_{1}\right)d{y}_{k}^{A}\hfill \\ \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\approx {\int}_{-\infty}^{+\infty}{y}_{k}^{A}N({y}_{k}^{A},{\delta}_{A}^{2})d{y}_{k}^{A}+\frac{[m\left({t}_{1}\right)-n\left({t}_{3},{t}_{2}\right)](2{P}_{d}-1)}{{\delta}_{A}^{2}}{\int}_{-\infty}^{+\infty}{\left({y}_{k}^{A}\right)}^{2}N({y}_{k}^{A},{\delta}_{A}^{2})d{y}_{k}^{A}\hfill \\ \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\stackrel{\left(\mathrm{a}\right)}{=}[m\left({t}_{1}\right)-n\left({t}_{3},{t}_{2}\right)](2{P}_{d}-1)\hfill \end{array}$$

$$E\left({y}_{k}^{A}|{\theta}_{0}\right)=[m\left({t}_{1}\right)-n\left({t}_{3},{t}_{2}\right)](2{P}_{f}-1)$$

In order to obtain $Var\left({y}_{k}^{A}|{\theta}_{i}\right)$, we first calculate
where (a) follows from the fact that ${\int}_{-\infty}^{+\infty}{\left({y}_{k}^{A}\right)}^{3}N({y}_{k}^{A},{\delta}_{A}^{2})d{y}_{k}^{A}=0$, also verified in Appendix B. Obviously, $E\left({\left({y}_{k}^{A}\right)}^{2}|{\theta}_{0}\right)={\delta}_{A}^{2}$ as well. Then $Var\left({y}_{k}^{A}|{\theta}_{i}\right)$ follows from $Var\left({y}_{k}^{A}|{\theta}_{i}\right)=E\left({\left({y}_{k}^{A}\right)}^{2}|{\theta}_{i}\right)-{E}^{2}\left({y}_{k}^{A}|{\theta}_{i}\right)$.

$$\begin{array}{c}E\left({\left({y}_{k}^{A}\right)}^{2}|{\theta}_{1}\right)={\int}_{-\infty}^{+\infty}{\left({y}_{k}^{A}\right)}^{2}f\left({y}_{k}^{A}|{\theta}_{1}\right)d{y}_{k}^{A}\hfill \\ \\ \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\approx {\int}_{-\infty}^{+\infty}{\left({y}_{k}^{A}\right)}^{2}N({y}_{k}^{A},{\delta}_{A}^{2})d{y}_{k}^{A}+\frac{\left(2{P}_{d}-1\right)}{{\delta}_{A}^{2}}\left[m\left({t}_{1}\right)-n\left({t}_{3},{t}_{2}\right)\right]{\int}_{-\infty}^{+\infty}{\left({y}_{k}^{A}\right)}^{3}N({y}_{k}^{A},{\delta}_{A}^{2})d{y}_{k}^{A}\hfill \\ \\ \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\stackrel{\left(\mathrm{a}\right)}{=}{\delta}_{A}^{2}\hfill \end{array}$$
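The moment expressions above can be checked by Monte Carlo simulation: draw Rayleigh gains, apply the flipping rule, add Gaussian noise, and compare the empirical moments of ${y}_{k}^{A}$ with Equations (20) and (22). A sketch with illustrative parameters (all numbers are ours); note that the second moment matches ${\delta}_{A}^{2}$ only approximately, in the large-variance limit:

```python
import numpy as np

rng = np.random.default_rng(0)
pd_, var = 0.9, 10.0            # illustrative Pd and noise variance
t3, t2, t1 = 0.0, 0.59, 1.18    # illustrative comparison thresholds
n_mc = 400_000

h = rng.rayleigh(scale=1 / np.sqrt(2), size=n_mc)  # pdf 2h*exp(-h^2)
b = (rng.random(n_mc) < pd_).astype(int)           # local decisions under theta_1
x = np.zeros(n_mc)
x[h > t1] = 2 * b[h > t1] - 1                      # non-flipping group
mid = (h >= t3) & (h <= t2)
x[mid] = 1 - 2 * b[mid]                            # flipping group: 2*(1-b) - 1
y = h * x + rng.normal(0.0, np.sqrt(var), n_mc)

emp_mean = y.mean()          # should approach [m(t1) - n(t3,t2)]*(2*Pd - 1)
emp_second = (y * y).mean()  # approaches delta_A^2 for large noise variance
```

For these thresholds, $[m({t}_{1})-n({t}_{3},{t}_{2})](2{P}_{d}-1)\approx 0.213$, and the empirical mean lands close to it.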

Combining Equations (19)∼(22) with the Bayesian decision rule, we obtain the error probability for the AFC as follows:

$$\begin{array}{cc}\hfill {P}_{e}^{A}& ={q}_{0}P\left({\mathrm{\Lambda}}^{A}\ge log({q}_{0}/{q}_{1})\left|{\theta}_{0}\right.\right)+{q}_{1}P\left({\mathrm{\Lambda}}^{A}<log({q}_{0}/{q}_{1})\left|{\theta}_{1}\right.\right)\\ & ={q}_{0}Q\left(\frac{log({q}_{0}/{q}_{1})-\mathrm{\Gamma}({\lambda}^{*},{t}_{3},{t}_{2},{t}_{1})E\left({y}_{k}^{A}|{\theta}_{0}\right)}{\sqrt{{\mathrm{\Gamma}}^{2}({\lambda}^{*},{t}_{3},{t}_{2},{t}_{1})\left[{\delta}_{A}^{2}-{E}^{2}\left({y}_{k}^{A}|{\theta}_{0}\right)\right]/K}}\right)\hfill \\ & +{q}_{1}\left[1-Q\left(\frac{log({q}_{0}/{q}_{1})-\mathrm{\Gamma}({\lambda}^{*},{t}_{3},{t}_{2},{t}_{1})E\left({y}_{k}^{A}|{\theta}_{1}\right)}{\sqrt{{\mathrm{\Gamma}}^{2}({\lambda}^{*},{t}_{3},{t}_{2},{t}_{1})\left[{\delta}_{A}^{2}-{E}^{2}\left({y}_{k}^{A}|{\theta}_{1}\right)\right]/K}}\right)\right]\hfill \end{array}$$

Clearly, the error probability for large ${\delta}_{A}^{2}$ has been expressed as a function of a few specific parameters, namely ${\lambda}^{*},{t}_{3},{t}_{2},{t}_{1}$ and ${\delta}_{A}^{2}$. In Section 4.2, this asymptotic error probability is taken as the optimization objective for finding the optimal comparison thresholds.
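Equation (23) can be evaluated directly with the Gaussian Q-function once the moments from Equations (20) and (21) are known. A sketch under assumed parameter values (D, ${P}_{d}$, ${P}_{f}$, the priors and K are illustrative):

```python
import numpy as np
from scipy.stats import norm

Q = norm.sf  # Gaussian tail probability (Q-function)

def pe_low_snr(q0, q1, gamma, e_y0, e_y1, var, K):
    """Asymptotic AFC error probability of Equation (23)."""
    thr = np.log(q0 / q1)
    s0 = np.sqrt(gamma ** 2 * (var - e_y0 ** 2) / K)
    s1 = np.sqrt(gamma ** 2 * (var - e_y1 ** 2) / K)
    return (q0 * Q((thr - gamma * e_y0) / s0)
            + q1 * (1 - Q((thr - gamma * e_y1) / s1)))

# Illustrative numbers: D = m(t1) - n(t3,t2) = 0.266, Pd = 0.9, Pf = 0.1.
D, pd, pf, var, K = 0.266, 0.9, 0.1, 10.0, 100
gamma = 2 * (pd - pf) * D / var          # Equation (19)
e_y1, e_y0 = D * (2 * pd - 1), D * (2 * pf - 1)  # Equations (20), (21)
pe = pe_low_snr(0.5, 0.5, gamma, e_y0, e_y1, var, K)
```

As expected from the CLT scaling, the error probability shrinks as the number of sensors K grows.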

#### 4.1.2. Approximation of LLR and Error Performance under High SNR

Considering the high-SNR scenario, i.e., ${\delta}_{A}^{2}\to 0$, we derive a simplified LLR following the idea of [10]. Assume the FC can estimate the instantaneous sensor-to-FC channel gain as ${\widehat{h}}_{k}^{A}=\left|{y}_{k}^{A}\right|$, since ${y}_{k}^{A}\approx {h}_{k}^{A}{x}_{k}$ and $\left|{x}_{k}\right|=1$ except in the dormant case. A simple hard-decision rule can then determine which of the three groups a received signal ${y}_{k}^{A}$ comes from. The hard-decision threshold ${t}_{h}$ is selected to satisfy ${\int}_{{t}_{3}}^{{t}_{h}}f\left({h}_{k}^{A}\right)d{h}_{k}^{A}={\int}_{{t}_{h}}^{\infty}f\left({h}_{k}^{A}\right)d{h}_{k}^{A}$. Thus, the conditional probability reduces to
where ${\delta}_{x,b}$ is the Kronecker delta function. Thus, the likelihood function $f\left({y}_{k}^{A}|{\theta}_{i}\right)$ can be calculated as

$$p\left({x}_{k}|{b}_{k}\right)=\left\{\begin{array}{cc}{\delta}_{{x}_{k},(2{b}_{k}-1)}\hfill & {\widehat{h}}_{k}^{A}\ge {t}_{h}\\ {\delta}_{-{x}_{k},(2{b}_{k}-1)}\hfill & {\widehat{h}}_{k}^{A}<{t}_{h}\end{array}\right.$$

$$\begin{array}{c}f\left({y}_{k}^{A}|{\theta}_{i}\right)\hfill \\ \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}={\displaystyle \sum _{{b}_{k}}}p\left({b}_{k}|{\theta}_{i}\right){\displaystyle \sum _{{x}_{k}}}f\left({y}_{k}^{A}|{x}_{k},{\widehat{h}}_{k}^{A}\right)p\left({x}_{k}|{b}_{k}\right)\hfill \\ \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}=\left\{\begin{array}{c}p\left({b}_{k}=1|{\theta}_{i}\right)f\left({y}_{k}^{A}|{x}_{k}=1,{\widehat{h}}_{k}^{A}\right)+p\left({b}_{k}=0|{\theta}_{i}\right)f\left({y}_{k}^{A}|{x}_{k}=-1,{\widehat{h}}_{k}^{A}\right),\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{\widehat{h}}_{k}^{A}\ge {t}_{h}\\ p\left({b}_{k}=1|{\theta}_{i}\right)f\left({y}_{k}^{A}|{x}_{k}=-1,{\widehat{h}}_{k}^{A}\right)+p\left({b}_{k}=0|{\theta}_{i}\right)f\left({y}_{k}^{A}|{x}_{k}=1,{\widehat{h}}_{k}^{A}\right),\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{\widehat{h}}_{k}^{A}<{t}_{h}\end{array}\right.\hfill \end{array}$$

Further derivation, detailed in Appendix C, gives

$${\mathrm{\Lambda}}_{k}^{A}=\left\{\begin{array}{c}0,\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{y}_{k}^{A}=0\hfill \\ log\frac{{P}_{d}}{{P}_{f}},\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{\widehat{h}}_{k}^{A}\ge {t}_{h}\cap {y}_{k}^{A}>0\hfill \\ log\frac{1-{P}_{d}}{1-{P}_{f}},{\widehat{h}}_{k}^{A}\ge {t}_{h}\cap {y}_{k}^{A}<0\hfill \\ log\frac{1-{P}_{d}}{1-{P}_{f}},{\widehat{h}}_{k}^{A}<{t}_{h}\cap {y}_{k}^{A}>0\hfill \\ log\frac{{P}_{d}}{{P}_{f}},\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{\widehat{h}}_{k}^{A}<{t}_{h}\cap {y}_{k}^{A}<0\hfill \end{array}\right.$$

Replacing ${y}_{k}^{A}$ and ${\widehat{h}}_{k}^{A}$ with ${y}_{k}^{E}$ and ${\widehat{h}}_{k}^{E}$ in Equation (26) gives the simplified high-SNR LLR for the EFC.
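The case analysis in Equation (26) amounts to a small lookup on the sign of ${y}_{k}^{A}$ and the estimated gain. A sketch (the function name and parameter values are ours):

```python
import numpy as np

def llr_high_snr(y, t_h, pd, pf):
    """Per-sensor LLR of Equation (26): a hard decision based on the
    estimated gain |y| versus t_h and the sign of y."""
    if y == 0:
        return 0.0                       # dormant sensor carries no evidence
    h_hat = abs(y)
    # Evidence for b_k = 1 when (y > 0, non-flipped) or (y < 0, flipped).
    agree = (y > 0) == (h_hat >= t_h)
    return np.log(pd / pf) if agree else np.log((1 - pd) / (1 - pf))
```

For example, with ${t}_{h}=0.8$ and $({P}_{d},{P}_{f})=(0.9,0.1)$, a strong positive sample supports ${\theta}_{1}$, while a weak positive sample is attributed to the flipping group and supports ${\theta}_{0}$.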

To obtain the error probability via the CLT, we again need the mean and variance of ${\mathrm{\Lambda}}_{k}^{A}$. Because ${h}_{k}^{A}\ge 0$, ${y}_{k}^{A}>0$ is equivalent to ${x}_{k}=1$ and ${y}_{k}^{A}<0$ corresponds to ${x}_{k}=-1$. Further, under the assumption ${h}_{k}^{A}\approx {\widehat{h}}_{k}^{A}$, it can be derived that

$$\begin{array}{c}E\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{1}\right)=\left({\lambda}_{1}+{\lambda}_{2}\right)[{P}_{d}log\frac{{P}_{d}}{{P}_{f}}+\left(1-{P}_{d}\right)log\frac{1-{P}_{d}}{1-{P}_{f}}]\hfill \\ E\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{0}\right)=\left({\lambda}_{1}+{\lambda}_{2}\right)[{P}_{f}log\frac{{P}_{d}}{{P}_{f}}+\left(1-{P}_{f}\right)log\frac{1-{P}_{d}}{1-{P}_{f}}]\hfill \end{array}$$

$$\begin{array}{c}E\left[{\left({\mathrm{\Lambda}}_{k}^{A}\right)}^{2}\right|{\theta}_{1}]=\left({\lambda}_{1}+{\lambda}_{2}\right)[{P}_{d}{(log\frac{{P}_{d}}{{P}_{f}})}^{2}+\left(1-{P}_{d}\right){(log\frac{1-{P}_{d}}{1-{P}_{f}})}^{2}]\hfill \\ E\left[{\left({\mathrm{\Lambda}}_{k}^{A}\right)}^{2}\right|{\theta}_{0}]=\left({\lambda}_{1}+{\lambda}_{2}\right)[{P}_{f}{(log\frac{{P}_{d}}{{P}_{f}})}^{2}+(1-{P}_{f}){(log\frac{1-{P}_{d}}{1-{P}_{f}})}^{2}]\hfill \end{array}$$

The derivations of Equations (27) and (28) are given in Appendix D. Applying Equations (27) and (28) to calculate the error probability gives

$$\begin{array}{c}{P}_{e}^{A}={q}_{0}Q\left(\frac{log({q}_{0}/{q}_{1})-E({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{0})}{\sqrt{\left[E[{({\mathrm{\Lambda}}_{k}^{A})}^{2}|{\theta}_{0}]-{E}^{2}({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{0})\right]/K}}\right)\hfill \\ \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}+{q}_{1}[1-Q\left(\frac{log({q}_{0}/{q}_{1})-E({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{1})}{\sqrt{\left[E[{({\mathrm{\Lambda}}_{k}^{A})}^{2}|{\theta}_{1}]-{E}^{2}({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{1})\right]/K}}\right)]\hfill \end{array}$$
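Equations (27)–(29) give the high-SNR error probability in closed form once ${\lambda}_{1}+{\lambda}_{2}$, ${P}_{d}$ and ${P}_{f}$ are fixed. A sketch with illustrative values (the function name and numbers are ours):

```python
import numpy as np
from scipy.stats import norm

def pe_high_snr(q0, q1, lam, pd, pf, K):
    """AFC error probability in the high-SNR regime, Equations (27)-(29);
    lam = lambda1 + lambda2 is the fraction of reporting sensors."""
    a, b = np.log(pd / pf), np.log((1 - pd) / (1 - pf))
    mean = lambda p: lam * (p * a + (1 - p) * b)          # Equation (27)
    second = lambda p: lam * (p * a ** 2 + (1 - p) * b ** 2)  # Equation (28)
    thr = np.log(q0 / q1)
    m1, m0 = mean(pd), mean(pf)
    s1 = np.sqrt((second(pd) - m1 ** 2) / K)
    s0 = np.sqrt((second(pf) - m0 ** 2) / K)
    # Equation (29): note 1 - Q(x) = Phi(x), the Gaussian CDF.
    return q0 * norm.sf((thr - m0) / s0) + q1 * norm.cdf((thr - m1) / s1)

pe = pe_high_snr(0.5, 0.5, 0.5, 0.9, 0.1, 100)  # lambda1 + lambda2 = 0.5
```

Shrinking the reporting fraction $\lambda_1 + \lambda_2$ degrades the fused decision, consistent with the optimization result below that the energy budget should be used with equality.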

#### 4.2. Optimization of Comparison Thresholds

In Section 4.1, we obtained the asymptotic error probabilities at the AFC for the extremely low- and high-SNR scenarios. In this section, they are taken as the utility functions for optimizing ${t}_{3},{t}_{2}$ and ${t}_{1}$. Our design target is to minimize the error probability of the AFC while satisfying the constraints of perfect secrecy and limited energy. This problem can be stated as follows:
where the first constraint is the perfect secrecy condition, which leaves the EFC completely confused [10]. The second inequality constraint guarantees the specified energy efficiency.

$$\begin{array}{ccc}\phantom{\rule{-6.0pt}{0ex}}P0:\hfill & \underset{{t}_{3},{t}_{2},{t}_{1}}{min}\phantom{\rule{1.em}{0ex}}{P}_{e}^{A}& \\ & subject\phantom{\rule{0.277778em}{0ex}}to:& {\lambda}_{1}={\lambda}_{2}\\ & & {\lambda}_{1}+{\lambda}_{2}\le \beta \end{array}$$

Observing Equations (23) and (29), we find that ${P}_{e}^{A}$ involves numerical integration and that the variables to be optimized appear in the integration limits in a complicated form, which makes the problem difficult to solve directly. The utility function should therefore be simplified.

Fortunately, it can be seen that ${P}_{e}^{A}$ decreases with $E\left({\mathrm{\Lambda}}_{k}^{A}\right|{\theta}_{1})$ and increases with $E\left({\mathrm{\Lambda}}_{k}^{A}\right|{\theta}_{0})$, since the impact of the variance of ${\mathrm{\Lambda}}_{k}^{A}$ can be ignored relative to its mean for large K. Therefore, $E\left({\mathrm{\Lambda}}_{k}^{A}\right|{\theta}_{1})-E\left({\mathrm{\Lambda}}_{k}^{A}\right|{\theta}_{0})$ can replace the cost function in $P0$. The same idea was used in [9] to find the optimal encryption matrix. Thus, the optimization problem for the low-SNR case is given by

$$\begin{array}{c}\phantom{\rule{-6.0pt}{0ex}}P1:\phantom{\rule{1.em}{0ex}}\underset{{t}_{3},{t}_{2},{t}_{1}}{max}\phantom{\rule{1.em}{0ex}}\mathrm{\Gamma}({\lambda}^{*},{t}_{3},{t}_{2},{t}_{1})\left[E\left({y}_{k}^{A}|{\theta}_{1}\right)-E\left({y}_{k}^{A}|{\theta}_{0}\right)\right]\hfill \\ \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}subject\phantom{\rule{0.277778em}{0ex}}to:\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{\lambda}_{1}={\lambda}_{2}\hfill \\ \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{\lambda}_{1}+{\lambda}_{2}\le \beta \hfill \end{array}$$

From Equations (19)∼(21), we obtain

$$\mathrm{\Gamma}({\lambda}^{*},{t}_{3},{t}_{2},{t}_{1})\left[E\left({y}_{k}^{A}|{\theta}_{1}\right)-E\left({y}_{k}^{A}|{\theta}_{0}\right)\right]=\frac{4{({P}_{d}-{P}_{f})}^{2}}{{\delta}_{A}^{2}}{[m\left({t}_{1}\right)-n\left({t}_{3},{t}_{2}\right)]}^{2}$$

Because the first factor on the right-hand side of Equation (32) is independent of the variables to be optimized, the final objective is to maximize $\left|D({t}_{3},{t}_{2},{t}_{1})\right|=\left|m\left({t}_{1}\right)-n\left({t}_{3},{t}_{2}\right)\right|$ while keeping ${\lambda}_{1}={\lambda}_{2}$ and ${\lambda}_{1}+{\lambda}_{2}\le \beta $. Moreover, according to the Rayleigh distribution function, we have

$${\lambda}_{1}=exp\left(-{t}_{1}^{2}\right)\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\mathrm{and}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{\lambda}_{2}=exp\left(-{t}_{3}^{2}\right)-exp\left(-{t}_{2}^{2}\right)$$

Now, in order to determine the three appropriate thresholds, we discuss the relationship between the target function $D({t}_{3},{t}_{2},{t}_{1})$ and the actual energy consumption indicator $\alpha ={\lambda}_{1}+{\lambda}_{2}$. Treating $D({t}_{3},{t}_{2},{t}_{1})$ as a function of α, we can derive that

$${\delta}_{D}\left(\alpha \right)=\frac{dD\left(\alpha \right)}{d\alpha}=\left\{\begin{array}{c}\frac{{t}_{1}-{t}_{2}}{2},\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{t}_{3}=0\hfill \\ {t}_{1}-{t}_{3},\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{t}_{1}={t}_{2}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\hfill \end{array}\right.$$

The detail of the calculation process for Equation (34) is shown in Appendix E.

From Equation (34), it is easily seen that ${\delta}_{D}\left(\alpha \right)\ge 0$ for both cases of ${t}_{3}=0$ and ${t}_{1}={t}_{2}$, owing to the fact that $0\le {t}_{3}\le {t}_{2}\le {t}_{1}<\infty $. Hence $D({t}_{3},{t}_{2},{t}_{1})$ is strictly increasing in α. In particular, $D({t}_{3},{t}_{2},{t}_{1})=0$ for $\alpha =0$, so $D({t}_{3},{t}_{2},{t}_{1})\ge 0$ over the whole range $\alpha \in [0,1]$ and the absolute value in the target function can be omitted. It follows that equality (i.e., ${\lambda}_{1}+{\lambda}_{2}=\beta $) should be selected in the second constraint to maximize the cost function in Problem $P1$.

Moreover, we also find from Equation (34) that, as $\alpha \to 1$, ${\delta}_{D}\left(\alpha \right)\to 0$ for ${t}_{3}=0$, while ${\delta}_{D}\left(\alpha \right)\to {t}_{1}$ for ${t}_{1}={t}_{2}$. This tells us that $D({t}_{3},{t}_{2},{t}_{1})$ decreases faster for ${t}_{1}={t}_{2}$ than for ${t}_{3}=0$ as α decreases from 1. From the viewpoint of network robustness, choosing ${t}_{3}=0$ is therefore preferred; this result will also be confirmed by the simulations in Section 6.

Summarizing the above analysis directly yields the optimized thresholds, given by

$${t}_{1}=\sqrt{log(2/\beta )},\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{t}_{2}=\sqrt{log[2/(2-\beta \left)\right]},\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{t}_{3}=0$$
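Equation (35), combined with Equation (33), implies ${\lambda}_{1}={\lambda}_{2}=\beta /2$: both the perfect secrecy constraint and the energy budget are met with equality. This is easy to verify numerically (a sketch; the helper name is ours):

```python
import numpy as np

def tcbo_thresholds(beta):
    """Optimized thresholds of Equation (35) for energy budget beta <= 1."""
    t1 = np.sqrt(np.log(2.0 / beta))
    t2 = np.sqrt(np.log(2.0 / (2.0 - beta)))
    return 0.0, t2, t1   # (t3, t2, t1)

beta = 0.5
t3, t2, t1 = tcbo_thresholds(beta)
lam1 = np.exp(-t1 ** 2)                      # P(h >= t1): non-flipping fraction
lam2 = np.exp(-t3 ** 2) - np.exp(-t2 ** 2)   # P(t3 <= h <= t2): flipping fraction
```

Both fractions come out to β/2, so the expected number of reporting sensors is exactly βK.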

We now turn to the high-SNR case. Following the analysis used for low SNR, we establish the following optimization problem:

$$\begin{array}{c}\phantom{\rule{-6.0pt}{0ex}}P2:\phantom{\rule{1.em}{0ex}}\underset{{t}_{3},{t}_{2},{t}_{1}}{max}\phantom{\rule{1.em}{0ex}}E\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{1}\right)-E\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{0}\right)\hfill \\ \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}subject\phantom{\rule{0.277778em}{0ex}}to:\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{\lambda}_{1}={\lambda}_{2}\hfill \\ \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{\lambda}_{1}+{\lambda}_{2}\le \beta \hfill \end{array}$$

Applying Equation (27) yields

$$E\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{1}\right)-E\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{0}\right)=\left({\lambda}_{1}+{\lambda}_{2}\right)\left({P}_{d}-{P}_{f}\right)log\frac{{P}_{d}\left(1-{P}_{f}\right)}{{P}_{f}\left(1-{P}_{d}\right)}$$

Obviously, the cost function is strictly increasing in ${\lambda}_{1}+{\lambda}_{2}$: since the local detection probability is always larger than the false alarm probability in practice, the term $\left({P}_{d}-{P}_{f}\right)log\frac{{P}_{d}\left(1-{P}_{f}\right)}{{P}_{f}\left(1-{P}_{d}\right)}$ is positive. Thus, we should again choose ${\lambda}_{1}+{\lambda}_{2}=\beta $. However, Equation (37) cannot determine whether ${t}_{1}={t}_{2}$ or ${t}_{3}=0$ is better; in fact, the two yield identical detection performance in the extreme case of ${\delta}_{A}^{2}=0$, a phenomenon that will be demonstrated in our simulations. Consequently, the thresholds given in Equation (35) should also be used in the high-SNR situation.

## 5. Joint Local Decision and Wireless Transmission Based Secure Detection under Energy Constraint

In the TCBO secure detection scheme, in order to meet the network energy constraint, sensors whose channel gains fall between ${t}_{1}$ and ${t}_{2}$ (considering the case of ${t}_{3}=0$) remain inactive. This gap between ${t}_{1}$ and ${t}_{2}$ does help the AFC distinguish, to some extent, the signals of the flipping group from those of the non-flipping group. However, the quality of each sensor’s local decision is not considered. That is, a sensor with an erroneous decision may be permitted to report its detection result to the FC, while one with a correct decision may be forbidden. We believe this phenomenon may worsen the performance of decision fusion.

Therefore, we propose to select the dormant sensors based on their local decision quality, which can be quantified by the local log-likelihood ratio ${\mathrm{\Lambda}}_{k}^{L}$. Sensors with very small or very large LLRs send data to the fusion center, while the others stay silent to save energy. This is the core idea of the censoring-sensor technique [8,11]. In particular, a perfectly secure distributed detection scheme with censoring sensors was given in [8], but under the rather idealized assumption that the EFC has no access to the data from the sensors and only monitors their transmission activity. Moreover, the strategy in [8] did not consider the effect of the wireless transmission between the sensors and the fusion center on reliability and security, which limits its applicability. Based on these considerations, we propose in this section a joint local decision and wireless transmission (JLDWT) based scheme for secure distributed detection under an energy constraint.

The JLDWT method is performed as follows: each sensor first calculates the local LLR ${\mathrm{\Lambda}}_{k}^{L}$ and compares it with two local decision thresholds. If ${\mathrm{\Lambda}}_{k}^{L}$ lies between $log({\lambda}_{L}\cdot {q}_{0}/{q}_{1})$ and $log({\lambda}_{U}\cdot {q}_{0}/{q}_{1})$, the sensor stays inactive in the current report timeslot, since it appears too uninformative to make a correct decision about the target state. Otherwise, the sensor makes a one-bit decision regarding the state of the hypothesis and delivers it to the FC over a wireless PAC. Meanwhile, in order to keep the report secret from the eavesdropping FC, each active sensor still encrypts its local decision by randomly flipping it before transmission. A single comparison threshold ${t}_{0}$ is used here instead of the three thresholds of the TCBO scheme. If a sensor’s channel gain satisfies $\infty >{h}_{k}^{A}\ge {t}_{0}$, it belongs to the non-flipping group; otherwise, it belongs to the flipping group. At the fusion center, the LLR-based fusion rule is still used. The three thresholds, namely $log({\lambda}_{L}\cdot {q}_{0}/{q}_{1})$, $log({\lambda}_{U}\cdot {q}_{0}/{q}_{1})$ and ${t}_{0}$, along with the encryption scheme at the sensors, are assumed to be known by both the AFC and the EFC.
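The sensor-side procedure just described can be sketched as follows. This is a simplified model with hypothetical function and parameter names; the "random" flipping is realized by the channel-gain comparison, as in the text:

```python
import numpy as np

def jldwt_report(llr_local, h_gain, lam_L, lam_U, t0, q0, q1):
    """Sensor-side JLDWT rule (a sketch): censor uninformative sensors,
    then channel-flip the one-bit decisions of the active ones.
    Returns the BPSK symbol x_k in {-1, 0, +1}."""
    lo = np.log(lam_L * q0 / q1)
    hi = np.log(lam_U * q0 / q1)
    if lo < llr_local < hi:
        return 0                      # censored: stay dormant this timeslot
    b = 1 if llr_local >= hi else 0   # confident one-bit local decision
    if h_gain >= t0:                  # non-flipping group
        return 2 * b - 1
    return 2 * (1 - b) - 1            # flipping group: send the complement
```

With equal priors and, e.g., $({\lambda}_{L},{\lambda}_{U})=(0.5,2)$, mid-range LLRs are censored, while a confident decision is sent as-is over a strong channel and inverted over a weak one.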

#### 5.1. Security Analysis

We now derive the condition for perfect secrecy in the JLDWT scheme. Our analysis begins with the conditional likelihood function of the k-th sensor as computed by the EFC, which is given by Equation (38) below,
where (a) holds because ${\theta}_{i}\to {b}_{k}\to {x}_{k}\to {y}_{k}^{E}$ forms a Markov chain and ${h}_{k}^{A}$ is uncorrelated with ${y}_{k}^{E}$, ${x}_{k}$ and ${\theta}_{i}$, and (b) follows from the fact that $p\left({x}_{k}=1|{b}_{k}=1\right)=1$ and $p\left({x}_{k}=-1|{b}_{k}=0\right)=1$ for ${h}_{k}^{A}\ge {t}_{0}$, while $p\left({x}_{k}=-1|{b}_{k}=1\right)=1$ and $p\left({x}_{k}=1|{b}_{k}=0\right)=1$ for ${h}_{k}^{A}<{t}_{0}$. In addition, ${b}_{k}=null$ corresponds to the sensor’s dormant state, with ${x}_{k}=0$ accordingly. Furthermore, defining $\lambda \stackrel{\mathrm{def}}{=}{\int}_{{t}_{0}}^{\infty}f\left({h}_{k}^{A}\right)d{h}_{k}^{A}$, we readily obtain Equation (39).

$$\begin{array}{c}f\left({y}_{k}^{E}|{\theta}_{i}\right)={\displaystyle \sum _{{b}_{k}}}{\displaystyle \sum _{{x}_{k}}}{\int}_{0}^{\infty}f\left({y}_{k}^{E},{h}_{k}^{A},{x}_{k},{b}_{k}|{\theta}_{i}\right)d{h}_{k}^{A}\hfill \\ ={\displaystyle \sum _{{b}_{k}}}{\displaystyle \sum _{{x}_{k}}}{\int}_{0}^{\infty}f\left({y}_{k}^{E},{h}_{k}^{A},{x}_{k}|{b}_{k},{\theta}_{i}\right)p\left({b}_{k}|{\theta}_{i}\right)d{h}_{k}^{A}\hfill \\ ={\displaystyle \sum _{{b}_{k}}}p\left({b}_{k}|{\theta}_{i}\right){\displaystyle \sum _{{x}_{k}}}{\int}_{0}^{\infty}f\left({y}_{k}^{E}|{h}_{k}^{A},{x}_{k},{b}_{k},{\theta}_{i}\right)f\left({h}_{k}^{A},{x}_{k}|{b}_{k},{\theta}_{i}\right)d{h}_{k}^{A}\hfill \\ \stackrel{\left(\mathrm{a}\right)}{=}{\displaystyle \sum _{{b}_{k}}}p\left({b}_{k}|{\theta}_{i}\right){\displaystyle \sum _{{x}_{k}}}f\left({y}_{k}^{E}|{x}_{k}\right){\int}_{0}^{\infty}f\left({h}_{k}^{A}\right)p\left({x}_{k}|{b}_{k}\right)d{h}_{k}^{A}\hfill \\ \stackrel{\left(\mathrm{b}\right)}{=}p\left({b}_{k}=1|{\theta}_{i}\right)\left[f\left({y}_{k}^{E}|{x}_{k}=1\right){\int}_{{t}_{0}}^{+\infty}f\left({h}_{k}^{A}\right)d{h}_{k}^{A}+f\left({y}_{k}^{E}|{x}_{k}=-1\right){\int}_{0}^{{t}_{0}}f\left({h}_{k}^{A}\right)d{h}_{k}^{A}\right]\hfill \\ \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}+p\left({b}_{k}=0|{\theta}_{i}\right)\left[f\left({y}_{k}^{E}|{x}_{k}=-1\right){\int}_{{t}_{0}}^{+\infty}f\left({h}_{k}^{A}\right)d{h}_{k}^{A}+f\left({y}_{k}^{E}|{x}_{k}=1\right){\int}_{0}^{{t}_{0}}f\left({h}_{k}^{A}\right)d{h}_{k}^{A}\right]\hfill \\ 
\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}+p\left({b}_{k}=null|{\theta}_{i}\right)f\left({y}_{k}^{E}|{x}_{k}=0\right){\int}_{0}^{+\infty}f\left({h}_{k}^{A}\right)d{h}_{k}^{A}\hfill \end{array}$$

$$\begin{array}{c}f\left({y}_{k}^{E}|{\theta}_{1}\right)={P}_{d}\left[f\left({y}_{k}^{E}|{x}_{k}=1\right)\lambda +f\left({y}_{k}^{E}|{x}_{k}=-1\right)\left(1-\lambda \right)\right]\hfill \\ \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}+{P}_{m}\left[f\left({y}_{k}^{E}|{x}_{k}=-1\right)\lambda +f\left({y}_{k}^{E}|{x}_{k}=1\right)\left(1-\lambda \right)\right]+\left(1-{P}_{d}-{P}_{m}\right)f\left({y}_{k}^{E}|{x}_{k}=0\right)\hfill \\ f\left({y}_{k}^{E}|{\theta}_{0}\right)={P}_{f}\left[f\left({y}_{k}^{E}|{x}_{k}=1\right)\lambda +f\left({y}_{k}^{E}|{x}_{k}=-1\right)\left(1-\lambda \right)\right]\hfill \\ \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}+{P}_{0d}\left[f\left({y}_{k}^{E}|{x}_{k}=-1\right)\lambda +f\left({y}_{k}^{E}|{x}_{k}=1\right)\left(1-\lambda 
\right)\right]+\left(1-{P}_{f}-{P}_{0d}\right)f\left({y}_{k}^{E}|{x}_{k}=0\right)\hfill \end{array}$$

To achieve perfect secrecy, the two likelihood functions $f\left({y}_{k}^{E}|{\theta}_{1}\right)$ and $f\left({y}_{k}^{E}|{\theta}_{0}\right)$ must be identical [10]. We can then establish the following system of equations based on Equation (39).

$$\begin{array}{c}\left(1-{P}_{d}-{P}_{m}\right)=\left(1-{P}_{f}-{P}_{0d}\right)\\ {P}_{d}\lambda +{P}_{m}\left(1-\lambda \right)={P}_{f}\lambda +{P}_{0d}\left(1-\lambda \right)\\ {P}_{m}\lambda +{P}_{d}\left(1-\lambda \right)={P}_{0d}\lambda +{P}_{f}\left(1-\lambda \right)\end{array}$$

Simple computation then yields the perfect secrecy condition

$$\lambda =1/2\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\mathrm{and}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{P}_{d}+{P}_{m}={P}_{f}+{P}_{0d}$$

The first condition in Equation (41) directly results in ${t}_{0}=\sqrt{log\left(2\right)}$. The second condition means that the activation probability under hypothesis ${\theta}_{1}$, denoted by ${\beta}_{1}={P}_{d}+{P}_{m}$, equals the activation probability under ${\theta}_{0}$, denoted by ${\beta}_{2}={P}_{f}+{P}_{0d}$. Comparing this condition with the perfect secrecy setting given in Section II of [8], we find that they are identical. Next, our task is to find two suitable thresholds ${\lambda}_{U}$ and ${\lambda}_{L}$ for the local Bayesian detection in Equation (2) that minimize the error probability at the AFC while satisfying perfect secrecy and the energy constraint ${\beta}_{1}={\beta}_{2}\le \beta $.
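The value ${t}_{0}=\sqrt{log\left(2\right)}$ follows from $\lambda =1/2$ under a unit-power Rayleigh channel-gain model, for which $f\left(h\right)=2h\phantom{\rule{1.0pt}{0ex}}exp(-{h}^{2})$ and hence $\lambda =exp(-{t}_{0}^{2})$. The short numerical check below (our own sketch, assuming this pdf) confirms the result:

```python
import math

# Under a unit-power Rayleigh model, f(h) = 2h*exp(-h^2), so
# lambda = P(h >= t0) = exp(-t0^2); setting lambda = 1/2 gives
# t0 = sqrt(log 2).
t0 = math.sqrt(math.log(2))

# Verify lambda numerically with the trapezoidal rule on [t0, 10]
# (the tail beyond h = 10 is negligible, ~exp(-100)).
f = lambda h: 2.0 * h * math.exp(-h * h)
n, hmax = 200_000, 10.0
step = (hmax - t0) / n
lam = step * (0.5 * (f(t0) + f(hmax))
              + sum(f(t0 + i * step) for i in range(1, n)))
```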

#### 5.2. Optimization of Local Detection Thresholds

Referring to the derivations of Equations (12) and (38), we can obtain the conditional likelihood functions at the AFC, which are expressed as
where $\mathrm{\Phi}\left({t}_{a},{t}_{b},{x}_{k},{y}_{k}^{A},{\delta}_{A}^{2}\right)$ is defined in Equation (13).

$$\begin{array}{cc}f\left({y}_{k}^{A}|{\theta}_{1}\right)\hfill & ={P}_{d}\left[\mathrm{\Phi}\left({t}_{0},\infty ,1,{y}_{k}^{A},{\delta}_{A}^{2}\right)+\mathrm{\Phi}\left(0,{t}_{0},-1,{y}_{k}^{A},{\delta}_{A}^{2}\right)\right]\hfill \\ \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\hfill & +{P}_{m}\left[\mathrm{\Phi}\left({t}_{0},\infty ,-1,{y}_{k}^{A},{\delta}_{A}^{2}\right)+\mathrm{\Phi}\left(0,{t}_{0},1,{y}_{k}^{A},{\delta}_{A}^{2}\right)\right]+\left(1-{P}_{d}-{P}_{m}\right)\mathrm{\Phi}\left(0,\infty ,0,{y}_{k}^{A},{\delta}_{A}^{2}\right)\hfill \\ \phantom{\rule{1.0pt}{0ex}}f\left({y}_{k}^{A}|{\theta}_{0}\right)\hfill & ={P}_{f}\left[\mathrm{\Phi}\left({t}_{0},\infty ,1,{y}_{k}^{A},{\delta}_{A}^{2}\right)+\mathrm{\Phi}\left(0,{t}_{0},-1,{y}_{k}^{A},{\delta}_{A}^{2}\right)\right]\hfill \\ \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\hfill & +{P}_{0d}\left[\mathrm{\Phi}\left({t}_{0},\infty ,-1,{y}_{k}^{A},{\delta}_{A}^{2}\right)+\mathrm{\Phi}\left(0,{t}_{0},1,{y}_{k}^{A},{\delta}_{A}^{2}\right)\right]+\left(1-{P}_{f}-{P}_{0d}\right)\mathrm{\Phi}\left(0,\infty ,0,{y}_{k}^{A},{\delta}_{A}^{2}\right)\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\hfill \end{array}$$

#### 5.2.1. Optimization of Local Detection Thresholds under Low SNR

Following the derivation in Section 4.1.1, the error probability at the AFC under low SNR can be written as
where

$$\begin{array}{c}{P}_{e}^{A}={q}_{0}Q\left(\frac{log({q}_{0}/{q}_{1})-\mathrm{\Gamma}({\lambda}_{U},{\lambda}_{L},{t}_{0})E\left({y}_{k}^{A}|{\theta}_{0}\right)}{\sqrt{{\mathrm{\Gamma}}^{2}({\lambda}_{U},{\lambda}_{L},{t}_{0})\left[{\delta}_{A}^{2}-{E}^{2}\left({y}_{k}^{A}|{\theta}_{0}\right)\right]/K}}\right)\hfill \\ \\ \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}+{q}_{1}\left[1-Q\left(\frac{log({q}_{0}/{q}_{1})-\mathrm{\Gamma}({\lambda}_{U},{\lambda}_{L},{t}_{0})E\left({y}_{k}^{A}|{\theta}_{1}\right)}{\sqrt{{\mathrm{\Gamma}}^{2}({\lambda}_{U},{\lambda}_{L},{t}_{0})\left[{\delta}_{A}^{2}-{E}^{2}\left({y}_{k}^{A}|{\theta}_{1}\right)\right]/K}}\right)\right]\hfill \end{array}$$

$$\mathrm{\Gamma}({\lambda}_{U},{\lambda}_{L},{t}_{0})=\frac{({P}_{d}-{P}_{f})+({P}_{0d}-{P}_{m})}{{\delta}_{A}^{2}}[m\left({t}_{0}\right)-n(0,{t}_{0})]$$

In Equation (44), ${P}_{d}$, ${P}_{m}$, ${P}_{0d}$ and ${P}_{f}$ have the expressions given in Equation (7). Further, referring to the optimization problem $P1$, we formulate
where

$$\begin{array}{c}\phantom{\rule{-6.0pt}{0ex}}P3:\phantom{\rule{1.em}{0ex}}\underset{{\lambda}_{U},{\lambda}_{L}}{max}\phantom{\rule{1.em}{0ex}}\mathrm{\Gamma}({\lambda}_{U},{\lambda}_{L},{t}_{0})\left[E\left({y}_{k}^{A}|{\theta}_{1}\right)-E\left({y}_{k}^{A}|{\theta}_{0}\right)\right]\hfill \\ \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}subject\phantom{\rule{0.277778em}{0ex}}to:\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\beta {\phantom{\rule{1.0pt}{0ex}}}_{1}={\beta}_{2}\le \beta \hfill \end{array}$$

$$\begin{array}{c}E\left({y}_{k}^{A}|{\theta}_{1}\right)=({P}_{d}-{P}_{m})[m\left({t}_{0}\right)-n(0,{t}_{0})]\hfill \\ \\ E\left({y}_{k}^{A}|{\theta}_{0}\right)=({P}_{f}-{P}_{0d})[m\left({t}_{0}\right)-n(0,{t}_{0})]\hfill \end{array}$$

Substituting Equation (46) into Equation (45), the rewritten objective function becomes ${[({P}_{d}-{P}_{f})+({P}_{0d}-{P}_{m})]}^{2}\cdot$ ${[m\left({t}_{0}\right)-n(0,{t}_{0})]}^{2}/{\delta}_{A}^{2}$. Since ${[m\left({t}_{0}\right)-n(0,{t}_{0})]}^{2}/{\delta}_{A}^{2}$ is independent of the variables to be optimized, the final target function reduces to

$$O\left({\beta}_{1}\right)=({P}_{d}-{P}_{f})+({P}_{0d}-{P}_{m})$$

In addition, because the probability of a correct detection is always larger than that of an incorrect one in practice, we have $O\left({\beta}_{1}\right)\ge 0$. Moreover, the condition ${\beta}_{1}={\beta}_{2}$ implies $({P}_{d}-{P}_{f})=({P}_{0d}-{P}_{m})$, and hence $O\left({\beta}_{1}\right)=2({P}_{d}-{P}_{f})$.

First of all, we should find a ${\beta}_{1}$ that satisfies the constraint in Equation (45) and maximizes $O\left({\beta}_{1}\right)$. Combining Equations (7) and (47), we have

$$O\left({\beta}_{1}\right)=2{\int}_{log[{\lambda}_{U}\left({\beta}_{1}\right)\xb7{q}_{0}/{q}_{1}]}^{\infty}[f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{1}\right)-f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{0}\right)]d{\mathrm{\Lambda}}_{k}^{L}$$

Let’s first focus on the following function:

$$D\left(\lambda \right)\stackrel{\mathrm{Def}}{=}{\int}_{log(\lambda {q}_{0}/{q}_{1})}^{\infty}[f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{1}\right)-f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{0}\right)]d{\mathrm{\Lambda}}_{k}^{L}$$

Applying the condition $({P}_{d}-{P}_{f})=({P}_{0d}-{P}_{m})$, we obtain $D\left({\lambda}_{U}\right)=D\left({\lambda}_{L}\right)$, which is derived in detail in Appendix F. Substituting Equation (6) into Equation (49) yields
where the error function is $\mathrm{erf}\left(x\right)=\frac{2}{\sqrt{\pi}}{\int}_{0}^{x}exp(-{\eta}^{2})d\eta $. Since $\mathrm{er}{\mathrm{f}}^{\prime}\left(x\right)=\frac{2}{\sqrt{\pi}}exp(-{x}^{2})$, we further get

$$\begin{array}{cl}D\left(\lambda \right)& =\frac{1}{\sqrt{2\pi sn{r}_{L}}}\left\{{\int}_{log(\lambda {q}_{0}/{q}_{1})}^{+\infty}exp\left[-{\left({\mathrm{\Lambda}}_{k}^{L}-sn{r}_{L}/2\right)}^{2}/\left(2sn{r}_{L}\right)\right]d{\mathrm{\Lambda}}_{k}^{L}-{\int}_{log(\lambda {q}_{0}/{q}_{1})}^{+\infty}exp\left[-{\left({\mathrm{\Lambda}}_{k}^{L}+sn{r}_{L}/2\right)}^{2}/\left(2sn{r}_{L}\right)\right]d{\mathrm{\Lambda}}_{k}^{L}\right\}\hfill \\ & =\frac{1}{2}\left\{\mathrm{erf}\left(\frac{log(\lambda {q}_{0}/{q}_{1})+sn{r}_{L}/2}{\sqrt{2sn{r}_{L}}}\right)-\mathrm{erf}\left(\frac{log(\lambda {q}_{0}/{q}_{1})-sn{r}_{L}/2}{\sqrt{2sn{r}_{L}}}\right)\right\}\hfill \end{array}$$

$$\frac{dD\left(\lambda \right)}{d\lambda}=\frac{1}{\lambda \sqrt{2\pi sn{r}_{L}}}\left(exp\left\{-{\left[log(\lambda {q}_{0}/{q}_{1})+sn{r}_{L}/2\right]}^{2}/\left(2sn{r}_{L}\right)\right\}-exp\left\{-{\left[log(\lambda {q}_{0}/{q}_{1})-sn{r}_{L}/2\right]}^{2}/\left(2sn{r}_{L}\right)\right\}\right)$$

Setting $\frac{dD\left(\lambda \right)}{d\lambda}=0$, we find three extreme points

$$\lambda =0,\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\lambda =\infty \phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\mathrm{and}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\lambda ={q}_{1}/{q}_{0}$$

Substituting them into Equation (50), we have

$$D\left(\lambda =0\right)=0,\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}D\left(\lambda =\infty \right)=0\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\mathrm{and}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}D\left(\lambda ={q}_{1}/{q}_{0}\right)=\mathrm{erf}(\sqrt{\frac{sn{r}_{L}}{8}})$$
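As a quick sanity check of these extreme points, the snippet below evaluates $D\left(\lambda \right)$ from the closed form in Equation (50) under the Gaussian local-LLR model of Equation (6); the setting $sn{r}_{L}=2$ is an arbitrary illustrative value:

```python
import math

def D(lam, snr_l, q0=0.5, q1=0.5):
    """D(lambda) from Equation (50) under the Gaussian local-LLR model."""
    t = math.log(lam * q0 / q1)
    c = math.sqrt(2.0 * snr_l)
    return 0.5 * (math.erf((t + snr_l / 2.0) / c)
                  - math.erf((t - snr_l / 2.0) / c))

snr_l = 2.0                          # illustrative local detection SNR
peak = D(1.0, snr_l)                 # extreme point lambda = q1/q0 = 1
# maximum value matches erf(sqrt(snr_L / 8))
assert abs(peak - math.erf(math.sqrt(snr_l / 8.0))) < 1e-12
# D vanishes at the boundary extreme points lambda -> 0 and lambda -> inf
assert D(1e-9, snr_l) < 1e-6 and D(1e9, snr_l) < 1e-6
```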

From Figure 3, we can see that each value of $D\left(\lambda \right)$ corresponds to two thresholds, and this $D\left(\lambda \right)$ in turn maps to a single ${\beta}_{1}$. When ${\beta}_{1}=1$, the two thresholds coincide at $\lambda ={q}_{1}/{q}_{0}$, where $D\left(\lambda \right)$ attains its maximum. As ${\beta}_{1}$ decreases, ${\lambda}_{U}$ moves towards ∞ and ${\lambda}_{L}$ approaches zero, and Figure 3 shows that the corresponding $D\left(\lambda \right)$ decreases. That is to say, a larger ${\beta}_{1}$ is preferred in order to obtain a higher $D\left(\lambda \right)$.

Moreover, the reduced target function for $P3$ can be written as $O\left({\beta}_{1}\right)=2D\left({\lambda}_{U}\left({\beta}_{1}\right)\right)$. Therefore, ${\beta}_{1}=\beta $ should be chosen to achieve the maximum $O\left({\beta}_{1}\right)$, and thus the optimal performance at the AFC, and the corresponding pair of thresholds is the optimal one we seek. However, the expressions in Equations (7) and (49) are so complex that closed-form expressions for ${\lambda}_{L}\left(\beta \right)$ and ${\lambda}_{U}\left(\beta \right)$ cannot be obtained. In this situation, a pre-calculated table for each $sn{r}_{L}$ can be used to obtain the required ${\lambda}_{L}\left(\beta \right)$ and ${\lambda}_{U}\left(\beta \right)$, which is the approach taken in our simulations.
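Such a table can be generated as in the following sketch, which solves for the threshold pair under the Gaussian local-LLR model of Equation (6). In this model $D\left(\lambda \right)$ is symmetric about $\lambda ={q}_{1}/{q}_{0}$, so $D\left({\lambda}_{U}\right)=D\left({\lambda}_{L}\right)$ gives $log({\lambda}_{L}{q}_{0}/{q}_{1})=-log({\lambda}_{U}{q}_{0}/{q}_{1})$ and a one-dimensional bisection on the activation probability suffices; function and variable names are our own:

```python
import math

def solve_thresholds(beta, snr_l, q0=0.5, q1=0.5):
    """Return (lambda_L, lambda_U) with D(lambda_L) = D(lambda_U) and
    activation probability beta_1 = beta, assuming the Gaussian model
    Lambda_k^L ~ N(+-snr_l/2, snr_l); the symmetric band also keeps
    beta_1 = beta_2 automatically."""
    m, s = snr_l / 2.0, math.sqrt(snr_l)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    # activation probability under theta_1 for the LLR band (-T, T)
    beta1 = lambda T: 1.0 - (Phi((T - m) / s) - Phi((-T - m) / s))
    lo, hi = 0.0, 50.0       # bisect on T = log(lambda_U * q0 / q1) >= 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if beta1(mid) > beta:
            lo = mid         # band too narrow: too many sensors active
        else:
            hi = mid
    T = 0.5 * (lo + hi)
    # symmetry of D about lambda = q1/q0 gives T_L = -T_U
    return math.exp(-T) * q1 / q0, math.exp(T) * q1 / q0

lam_l, lam_u = solve_thresholds(beta=0.6, snr_l=1.0)   # snr_L = 0 dB
```

Sweeping `beta` and `snr_l` over the desired grid then populates the lookup table used at run time.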

#### 5.2.2. Optimization of Local Detection Thresholds under High SNR

For the very high SNR scenario, we follow the analysis methods of Section 4.1.2. First, a simplified LLR analogous to Equation (26) is obtained, which is given by
where ${t}_{h}$ is set to ${t}_{0}$. Referring to the derivation of Equation (27), we obtain

$${\mathrm{\Lambda}}_{k}^{A}=\left\{\begin{array}{c}0,\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{y}_{k}^{A}=0\hfill \\ log\frac{{P}_{d}}{{P}_{f}},\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{\widehat{h}}_{k}^{A}\ge {t}_{h}\cap {y}_{k}^{A}>0\hfill \\ log\frac{{P}_{m}}{{P}_{0d}},{\widehat{h}}_{k}^{A}\ge {t}_{h}\cap {y}_{k}^{A}<0\hfill \\ log\frac{{P}_{m}}{{P}_{0d}},{\widehat{h}}_{k}^{A}<{t}_{h}\cap {y}_{k}^{A}>0\hfill \\ log\frac{{P}_{d}}{{P}_{f}},\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}{\widehat{h}}_{k}^{A}<{t}_{h}\cap {y}_{k}^{A}<0\hfill \end{array}\right.\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}$$

$$\begin{array}{c}E\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{1}\right)={P}_{d}log\frac{{P}_{d}}{{P}_{f}}+{P}_{m}log\frac{{P}_{m}}{{P}_{0d}}\hfill \\ E\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{0}\right)={P}_{f}log\frac{{P}_{d}}{{P}_{f}}+{P}_{0d}log\frac{{P}_{m}}{{P}_{0d}}\hfill \end{array}$$

The design problem is then formulated as

$$\begin{array}{c}\phantom{\rule{-6.0pt}{0ex}}\hfill \end{array}P4:\phantom{\rule{1.em}{0ex}}\underset{{\lambda}_{u},{\lambda}_{l}}{max}\phantom{\rule{1.em}{0ex}}E\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{1}\right)-E\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{0}\right)\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}subject\phantom{\rule{0.277778em}{0ex}}to:\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\phantom{\rule{1.0pt}{0ex}}\beta {\phantom{\rule{1.0pt}{0ex}}}_{1}={\beta}_{2}\le \beta $$

Here, the objective function can be written as $O\left({\lambda}_{L},{\lambda}_{U}\right)=\left({P}_{d}-{P}_{f}\right)\left[log\frac{{P}_{d}}{{P}_{f}}+log\frac{{P}_{0d}}{{P}_{m}}\right]$. Because ${P}_{d}-{P}_{f}={P}_{0d}-{P}_{m}$, maximizing ${P}_{d}-{P}_{f}$ also makes $log\frac{{P}_{d}}{{P}_{f}}$ and $log\frac{{P}_{0d}}{{P}_{m}}$ largest. Therefore, the objective function in Equation (56) can be transformed into ${P}_{d}-{P}_{f}$, so Problem $P4$ is equivalent to Problem $P3$ and they share identical optimization results.

## 6. Simulation Results and Discussions

In this section, simulation results are presented to evaluate the TCBO and the proposed JLDWT schemes in an IoT-oriented sensor network. Their error probabilities are compared from various perspectives, including their variation with the transmission channel SNR, the energy constraint and the local detection SNR. The performance of a degraded form of the JLDWT scheme, in which random flipping is omitted, is also given to represent the performance of the secure detection scheme designed in [8] over a practical, rather than ideal, wireless PAC.

#### 6.1. Simulation Settings

A wireless sensor network with K sensors is modeled. The local detection SNR and the transmission channel SNR to the fusion center are assumed identical across sensors, and the transmission channel SNRs to the AFC and the EFC are also assumed equal. In addition, the LLR computation at the EFC is the same as at the AFC except for the received sensor signals. Detailed simulation parameters are listed in Table 1. Moreover, Table 2 and Table 3 give the specific local decision thresholds corresponding to different energy constraints under $sn{r}_{L}=0\phantom{\rule{1.0pt}{0ex}}\mathrm{dB}$ and $sn{r}_{L}=5\phantom{\rule{1.0pt}{0ex}}\mathrm{dB}$, respectively.

#### 6.2. Simulation Results for TCBO Scheme

Let us begin with the performance evaluation for the low SNR scenarios, where the SNR is not larger than 0 dB. From Figure 4, we first notice that the error probabilities at the EFC all lie around 0.5 for the various settings, which is the expected situation of perfect secrecy. Moreover, the AFC performance for the case of ${t}_{3}=0$ is clearly better than for the case of ${t}_{1}={t}_{2}$, with a gain of about 4 dB for the former. This can be attributed to two factors. On the one hand, for ${t}_{3}=0$ the dormant region (a gap) lies between the flipping and non-flipping groups, which helps the AFC discriminate between the flipping and non-flipping cases, especially under severe noise. On the other hand, the flipped decisions also disturb the fusion process at the AFC; for ${t}_{3}=0$, the received interference power is lower because the flipping sensors have lower channel gains, so the interference has less effect on the fusion decision of the AFC. In addition, the error performance using the approximated LLR (given in Equation (19)) is almost identical to that using the statistic channel (SC) based LLR (which requires numerical integration), particularly in the very low SNR region. This demonstrates the validity of the approximated LLR under low SNR. The theoretic performance calculated from Equation (23) for ${t}_{3}=0$ is also drawn in Figure 4. It can be seen that the simulation result fits the theoretic one well for SNR lower than −10 dB, and the gap between them grows with the SNR because the noise variance moves farther from the assumption ${\delta}_{A}^{2}=\infty $ underlying Equation (23).

Figure 5 shows the performance of the TCBO scheme with the SC based LLR as a function of β. It can be seen that the error probabilities for ${t}_{3}=0$ and ${t}_{1}={t}_{2}$ are identical at $\beta =1$ and both increase as β decreases from 1, but the increase of the former is slower than that of the latter, which is consistent with the analysis of Equation (34) in Section 4.2. Moreover, observing the curves for ${t}_{3}=0$ carefully, we find that the error probabilities even rise slowly as β keeps increasing, and this phenomenon is more obvious at moderate SNR, for example $SNR=0$ dB. This is because the reduced gap between the flipping and non-flipping groups at larger β confuses the AFC when judging between the two groups. Note that this confusion is created by channel noise. When the noise is very strong (or when there is no such gap, as in the case ${t}_{1}={t}_{2}$), the confusion always exists, so consuming more energy yields better performance, in agreement with the low SNR analysis in Section 4.2. However, as the noise decreases, the confusion disappears when the gap is large (corresponding to a small β) and appears when the gap becomes small. Therefore, although the energy consumption increases with β, the resulting confusion can worsen the performance. Of course, when the noise reduces to zero, the confusion never appears and the error probability strictly decreases with β. This is the asymptotic analysis result under high SNR in Section 4.2, and it will also be seen in the following simulations.

The performance curves of the TCBO scheme for the high SNR scenarios, where the SNR is larger than 0 dB, are shown in Figure 6. Obviously, the error probabilities at the EFC are all about 0.5 under the various simulation conditions, so perfect secrecy is maintained. Moreover, the AFC performance for the case of ${t}_{3}=0$ is still better than for the case of ${t}_{1}={t}_{2}$, with a performance gap of about 2 dB. However, the performance loss induced by approximating the LLR with ${\delta}_{A}^{2}\to 0$ (see Equation (26)) is now noticeable. For ${t}_{3}=0$ this loss decreases as the SNR improves, since the noise variance moves closer to zero; in fact, the loss for ${t}_{1}={t}_{2}$ also shrinks with growing SNR, and it vanishes for the extreme case ${\delta}_{A}^{2}=0$ under both threshold settings, as can be seen in Figure 7. Therefore, the approximated LLR given in Equation (26) remains useful in terms of reduced computational complexity, especially in high SNR scenarios.

From another perspective, Figure 7 plots the error performance as a function of β for the high SNR case. It can be seen that the threshold setting ${t}_{3}=0$ is more robust than ${t}_{1}={t}_{2}$ when the energy constraint is more severe. In addition, for the extreme case in which the noise disappears, the error probabilities for the various settings converge to an identical value and decrease strictly as β increases. Because the AFC’s confusion in discriminating between the flipping and non-flipping groups does not exist in the absence of noise, the case ${t}_{3}=0$ becomes equivalent to the case ${t}_{1}={t}_{2}$. Furthermore, the approximated LLR achieves performance similar to the SC based LLR.

#### 6.3. Simulation Results for JLDWT Scheme

In this section, the performance of the TCBO and the proposed JLDWT schemes is compared from various perspectives. Figure 8 gives the error probabilities of the two schemes for the low SNR case. We can see that the JLDWT using the SC based LLR, the JLDWT using the approximated LLR and the TCBO using the SC based LLR have almost identical performance. Because the strong channel noise dominates at low SNR, the advantage of the JLDWT does not show up. The simplified LLR for low SNR is very effective at maintaining performance while reducing the complexity of the FC. Furthermore, all these schemes achieve perfect secrecy.

For comparison, the degraded JLDWT method without random flipping is also evaluated. Concretely, in the degraded JLDWT scheme, each sensor still performs local detection based on the Bayesian criterion with the two local decision thresholds ${\lambda}_{U}$ and ${\lambda}_{L}$ keeping ${\beta}_{1}={\beta}_{2}$, while an active sensor delivers its local 1-bit decision to the FC in its original form no matter what the estimated channel gain is. That is, the degraded JLDWT differs from the JLDWT only in that the flipping process is omitted. Comparing it with the secure strategy given in [8], we can easily see that the ${\lambda}_{U}$ and ${\lambda}_{L}$ used by the degraded JLDWT are identical to those used by the scheme in [8], because their optimization targets for finding the optimal ${\lambda}_{U}$ and ${\lambda}_{L}$ are equivalent and the perfect secrecy constraints are the same. Thus, the degraded JLDWT can be seen as equivalent to the scheme of [8], except that it is applied in a more realistic scenario that considers the wireless transmission and a looser constraint on the EFC's ability relative to the case in [8]. From Figure 8, it can be seen that the secrecy from the EFC is totally lost and the EFC achieves the same performance as the AFC when the secure strategy in [8] is used. That is, the strategy given in [8] is ineffective if the EFC has the same processing capability and prior information as the AFC. Thus, random flipping is necessary to assure information confidentiality against the enhanced EFC. Certainly, this information security is obtained at the cost of a certain performance loss at the AFC.

As for the high SNR case, it can be seen from Figure 9 that the JLDWT scheme with the SC based LLR outperforms the TCBO using the SC based LLR, and the performance gain at the AFC increases as the transmission channel SNR goes higher. That is, preventing the poorer local decisions from contributing to the data fusion helps improve the performance at the FC when the adverse effect of the transmission channel diminishes. Moreover, similar to the result seen in Figure 8, the AFC and the EFC have identical error probabilities under the degraded JLDWT, so the information confidentiality is not guaranteed. In addition, the approximated LLR introduces a performance loss for both the JLDWT and TCBO schemes, but the JLDWT scheme still slightly outperforms the TCBO one.

Figure 10 and Figure 11 compare the error performance of the TCBO and JLDWT schemes with the SC based LLR under $sn{r}_{L}=0$ dB and $sn{r}_{L}=5$ dB, respectively. It can be seen that the gain of the JLDWT scheme over the TCBO method increases with the transmission channel SNR, which corresponds to the result seen in Figure 9. Furthermore, this gain at high SNR, for example SNR = 15 dB, becomes larger for a smaller β. That is the advantage of excluding the poorer local detection results from the fusion data, and it dominates the final decision fusion when the transmission channel becomes good. We also find a performance inflection over the curves of the JLDWT, similar to that seen in Figure 5. However, here it is induced by the sensor's confusion in judging between the two hypotheses ${\theta}_{0}$ and ${\theta}_{1}$, rather than the AFC's confusion in discriminating between the flipping and non-flipping groups.

Based on the above simulation results and discussions, we suggest that the TCBO scheme with the approximated LLR is a good choice in the low transmission channel SNR region, while under a good wireless transmission scenario with a severe energy constraint, the JLDWT scheme with the SC based LLR is preferred in order to obtain higher detection accuracy at the AFC. Moreover, a moderate β around 0.7∼0.8 is more appropriate for a practical sensor network in terms of both the energy consumption and the detection performance. In addition, it should be noted that both the TCBO and JLDWT schemes can easily be extended to larger sensor networks, although only the case of K = 20 is studied in our simulations.
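These rules of thumb can be condensed into a small decision helper. The sketch below is purely illustrative: the function name and the numeric cut-offs (0 dB for the transmission channel SNR, β below roughly 0.8 for a tight energy budget) are our own assumptions distilled from the simulation results above, not quantities optimized in this paper.

```python
def select_scheme(snr_db: float, beta: float) -> str:
    """Heuristic scheme selection distilled from the simulation results.

    The cut-off values (0 dB channel SNR, beta < 0.8 as a severe energy
    constraint) are illustrative assumptions, not optimized parameters.
    """
    if snr_db <= 0.0:
        # Strong channel noise dominates: the cheap approximated LLR suffices.
        return "TCBO (approximated LLR)"
    if beta < 0.8:
        # Good channel and severe energy constraint: censoring pays off.
        return "JLDWT (SC-based LLR)"
    return "TCBO (SC-based LLR)"

print(select_scheme(10.0, 0.7))
```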

## 7. Conclusions and Future Work

A distributed detection scheme with good security and energy efficiency plays an important role in the implementation of sensor networks in the IoT. In this paper, two secure decentralized detection schemes under an energy constraint are studied comprehensively. Firstly, a specific energy constraint is introduced into the existing channel-aware encryption scheme, which we call the TCBO scheme. Next, simplified LLRs under low and high SNR are derived, respectively. Based on the new LLRs, the asymptotic error probabilities for the worst and best noise situations at the AFC are calculated. Then, three comparison thresholds are optimized by minimizing the error probability while satisfying the perfect secrecy and energy constraints. Secondly, combining the local detection and the wireless transmission of local decisions at the sensor, a hybrid scheme named JLDWT is proposed, where energy efficiency is provided by censoring sensors with less informative local LLRs and confidentiality from the EFC is guaranteed by channel-based random flipping. The asymptotic error probabilities under low and high SNR environments are also given. Furthermore, two local detection thresholds and one flipping comparison threshold are optimized to minimize the error rates while assuring the perfect secrecy and the required energy efficiency. Finally, we evaluate the detection performance of the TCBO and the proposed JLDWT schemes through computer simulations. The simulation results demonstrate that perfect secrecy is assured by both schemes, and that the JLDWT scheme outperforms the TCBO one under better wireless transmission environments with a severe energy constraint.

In the TCBO and JLDWT schemes, perfect secrecy is guaranteed at the cost of reduced detection accuracy at the AFC. However, in some scenarios, a limited information leakage to the EFC may be permitted, while high detection performance at the AFC is more important. In future work, modified forms of the above two schemes will be designed to support a more flexible constraint on the EFC's performance. Moreover, besides the eavesdropping attack, IoT networks face many other attack modes in practice, such as the denial of service (DoS) attack, node outage attack, signal jamming attack and intentional attack. Among them, the intentional attack can pose a fatal threat to the network by paralyzing a small fraction of the nodes with the highest degrees. For IoT networks, if some important nodes, such as the fusion center and the controller, suffer an intentional attack, the whole IoT system may be disrupted. Therefore, a robust defense mechanism against the intentional attack for the IoT will be studied in our future work.

## Acknowledgments

The authors gratefully acknowledge the support of the National Natural Science Foundation of China (NSFC) under Grants No. 61461136001, No. 61431011 and No. 61401350, the National 863 Program of China under Grant No. 2014AA01A707, and the Fundamental Research Funds for the Central Universities.

## Author Contributions

Guomei Zhang carried out the research on the topic of this paper, designed the analysis methods, developed the schemes, simulated them in MATLAB, extracted the results and wrote the paper. Hao Sun participated in the literature research and the simulation programming, edited the formulas and drew the simulation figures.

## Conflicts of Interest

The authors declare no conflict of interest.

## Appendix A. Approximation of $\mathrm{\Phi}\left({t}_{a},{t}_{b},{x}_{k},{y}_{k}^{A},{\delta}_{A}^{2}\right)$ under the Low Channel SNR

The approximated formulation of $\mathrm{\Phi}\left({t}_{a},{t}_{b},{x}_{k},{y}_{k}^{A},{\delta}_{A}^{2}\right)$ is derived as follows, where (a) is based on the fact that $exp\left(x\right)\approx 1+x$ for small x, and (b) is due to the assumption ${\delta}_{A}^{2}\to \infty$.

$$\begin{array}{cc}\hfill \mathrm{\Phi}\left({t}_{a},{t}_{b},{x}_{k},{y}_{k}^{A},{\delta}_{A}^{2}\right)& ={\int}_{{t}_{a}}^{{t}_{b}}\frac{1}{\sqrt{2\pi}{\delta}_{A}}exp\left(-\frac{{\left({y}_{k}^{A}-{h}_{k}^{A}{x}_{k}\right)}^{2}}{2{\delta}_{A}^{2}}\right)2{h}_{k}^{A}exp[-{({h}_{k}^{A})}^{2}]\,d{h}_{k}^{A}\hfill \\ & =N({y}_{k}^{A},{\delta}_{A}^{2}){\int}_{{t}_{a}}^{{t}_{b}}exp\left(\frac{{y}_{k}^{A}{h}_{k}^{A}{x}_{k}}{{\delta}_{A}^{2}}\right)2{h}_{k}^{A}exp\left[-\left(1+\frac{{x}_{k}^{2}}{2{\delta}_{A}^{2}}\right){({h}_{k}^{A})}^{2}\right]d{h}_{k}^{A}\hfill \\ & \stackrel{(a)}{\approx}N({y}_{k}^{A},{\delta}_{A}^{2}){\int}_{{t}_{a}}^{{t}_{b}}\left(1+\frac{{y}_{k}^{A}{h}_{k}^{A}{x}_{k}}{{\delta}_{A}^{2}}\right)2{h}_{k}^{A}exp\left[-\left(1+\frac{{x}_{k}^{2}}{2{\delta}_{A}^{2}}\right){({h}_{k}^{A})}^{2}\right]d{h}_{k}^{A}\hfill \\ & =N({y}_{k}^{A},{\delta}_{A}^{2})\left\{-{\left(1+\frac{{x}_{k}^{2}}{2{\delta}_{A}^{2}}\right)}^{-1}exp\left[-\left(1+\frac{{x}_{k}^{2}}{2{\delta}_{A}^{2}}\right){({h}_{k}^{A})}^{2}\right]\Big|_{{t}_{a}}^{{t}_{b}}+\frac{{y}_{k}^{A}{x}_{k}}{{\delta}_{A}^{2}}{\int}_{{t}_{a}}^{{t}_{b}}2{({h}_{k}^{A})}^{2}exp\left[-\left(1+\frac{{x}_{k}^{2}}{2{\delta}_{A}^{2}}\right){({h}_{k}^{A})}^{2}\right]d{h}_{k}^{A}\right\}\hfill \\ & \stackrel{(b)}{\approx}N({y}_{k}^{A},{\delta}_{A}^{2})\left\{exp(-{t}_{a}^{2})-exp(-{t}_{b}^{2})-\frac{{y}_{k}^{A}{x}_{k}}{{\delta}_{A}^{2}}{\int}_{{t}_{a}}^{{t}_{b}}{h}_{k}^{A}\,d\left(exp[-{({h}_{k}^{A})}^{2}]\right)\right\}\hfill \\ & =N({y}_{k}^{A},{\delta}_{A}^{2})\left\{exp(-{t}_{a}^{2})-exp(-{t}_{b}^{2})+\frac{{y}_{k}^{A}{x}_{k}}{{\delta}_{A}^{2}}\left[{t}_{a}exp(-{t}_{a}^{2})-{t}_{b}exp(-{t}_{b}^{2})+{\int}_{{t}_{a}}^{{t}_{b}}exp(-{h}^{2})\,dh\right]\right\}\hfill \end{array}$$
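The quality of this approximation can be spot-checked numerically. The Python sketch below is an illustration with arbitrarily chosen arguments (not values from the paper): it compares a midpoint-rule evaluation of the original integral with the closed-form approximation of the last line for a large noise variance ${\delta}_{A}^{2}$, using `math.erf` for $\int exp(-{h}^{2})\,dh$.

```python
import math

def phi_exact(ta, tb, x, y, var, n=20000):
    """Midpoint-rule evaluation of the original integral defining Phi."""
    total, dh = 0.0, (tb - ta) / n
    for i in range(n):
        h = ta + (i + 0.5) * dh
        gauss = math.exp(-(y - h * x) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
        total += gauss * 2.0 * h * math.exp(-h * h) * dh
    return total

def phi_approx(ta, tb, x, y, var):
    """Closed-form approximation from Appendix A (large var, i.e., low SNR)."""
    n0 = math.exp(-y * y / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)  # N(y, var)
    tail = 0.5 * math.sqrt(math.pi) * (math.erf(tb) - math.erf(ta))       # int exp(-h^2) dh
    return n0 * (math.exp(-ta * ta) - math.exp(-tb * tb)
                 + (y * x / var) * (ta * math.exp(-ta * ta)
                                    - tb * math.exp(-tb * tb) + tail))

# Example with an assumed parameter set and a large noise variance var = 50
exact = phi_exact(0.3, 1.2, 1.0, 0.5, 50.0)
approx = phi_approx(0.3, 1.2, 1.0, 0.5, 50.0)
rel_err = abs(exact - approx) / abs(exact)
```

For this parameter set the relative error stays below a few percent, and it shrinks further as the noise variance grows, consistent with the assumption ${\delta}_{A}^{2}\to \infty$.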

## Appendix B. Calculation of Three Integrations Used in Equations (20) and (22)

$$\begin{array}{cc}\hfill {\int}_{-\infty}^{+\infty}{y}_{k}^{A}N({y}_{k}^{A},{\delta}_{A}^{2})d{y}_{k}^{A}& =\frac{1}{\sqrt{2\pi}{\delta}_{A}}{\int}_{-\infty}^{+\infty}{y}_{k}^{A}exp\left(-\frac{{\left({y}_{k}^{A}\right)}^{2}}{2{\delta}_{A}^{2}}\right)d{y}_{k}^{A}\hfill \\ & =-\frac{{\delta}_{A}}{\sqrt{2\pi}}{\int}_{-\infty}^{+\infty}d\left[exp\left(-\frac{{\left({y}_{k}^{A}\right)}^{2}}{2{\delta}_{A}^{2}}\right)\right]=0\hfill \end{array}$$

$$\begin{array}{cc}\hfill {\int}_{-\infty}^{+\infty}{\left({y}_{k}^{A}\right)}^{2}N({y}_{k}^{A},{\delta}_{A}^{2})\,d{y}_{k}^{A}& =\frac{1}{\sqrt{2\pi}{\delta}_{A}}{\int}_{-\infty}^{+\infty}{\left({y}_{k}^{A}\right)}^{2}exp\left(-\frac{{\left({y}_{k}^{A}\right)}^{2}}{2{\delta}_{A}^{2}}\right)d{y}_{k}^{A}\hfill \\ & =-\frac{{\delta}_{A}}{\sqrt{2\pi}}{\int}_{-\infty}^{+\infty}{y}_{k}^{A}\,d\left[exp\left(-\frac{{\left({y}_{k}^{A}\right)}^{2}}{2{\delta}_{A}^{2}}\right)\right]\hfill \\ & =-\frac{{\delta}_{A}}{\sqrt{2\pi}}\,{y}_{k}^{A}exp\left(-\frac{{\left({y}_{k}^{A}\right)}^{2}}{2{\delta}_{A}^{2}}\right)\Big|_{-\infty}^{+\infty}+\frac{{\delta}_{A}}{\sqrt{2\pi}}{\int}_{-\infty}^{+\infty}exp\left(-\frac{{\left({y}_{k}^{A}\right)}^{2}}{2{\delta}_{A}^{2}}\right)d{y}_{k}^{A}\hfill \\ & =0+\frac{{\delta}_{A}}{\sqrt{2\pi}}\cdot \sqrt{2\pi}{\delta}_{A}={\delta}_{A}^{2}\hfill \end{array}$$

$$\begin{array}{cc}\hfill {\int}_{-\infty}^{+\infty}{\left({y}_{k}^{A}\right)}^{3}N({y}_{k}^{A},{\delta}_{A}^{2})\,d{y}_{k}^{A}& =\frac{1}{\sqrt{2\pi}{\delta}_{A}}{\int}_{-\infty}^{+\infty}{\left({y}_{k}^{A}\right)}^{2}\,{y}_{k}^{A}exp\left(-\frac{{\left({y}_{k}^{A}\right)}^{2}}{2{\delta}_{A}^{2}}\right)d{y}_{k}^{A}\hfill \\ & =-\frac{{\delta}_{A}}{\sqrt{2\pi}}{\int}_{-\infty}^{+\infty}{\left({y}_{k}^{A}\right)}^{2}\,d\left[exp\left(-\frac{{\left({y}_{k}^{A}\right)}^{2}}{2{\delta}_{A}^{2}}\right)\right]\hfill \\ & =-\frac{{\delta}_{A}}{\sqrt{2\pi}}{\left({y}_{k}^{A}\right)}^{2}exp\left(-\frac{{\left({y}_{k}^{A}\right)}^{2}}{2{\delta}_{A}^{2}}\right)\Big|_{-\infty}^{+\infty}+\frac{2{\delta}_{A}}{\sqrt{2\pi}}{\int}_{-\infty}^{+\infty}{y}_{k}^{A}exp\left(-\frac{{\left({y}_{k}^{A}\right)}^{2}}{2{\delta}_{A}^{2}}\right)d{y}_{k}^{A}=0\hfill \end{array}$$
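These three identities are the first, second and third moments of a zero-mean Gaussian, and they can be verified numerically. The sketch below (illustrative, with an arbitrarily assumed ${\delta}_{A}=1.5$) integrates ${y}^{n}N(y,{\delta}_{A}^{2})$ by the midpoint rule over a wide symmetric interval:

```python
import math

def gauss_moment(n, sigma, half_width=8.0, steps=50000):
    """Midpoint-rule value of the integral of y^n * N(y; 0, sigma^2) over R."""
    a = -half_width * sigma
    dy = 2.0 * half_width * sigma / steps
    total = 0.0
    for i in range(steps):
        y = a + (i + 0.5) * dy
        total += (y ** n) * math.exp(-y * y / (2.0 * sigma * sigma)) * dy
    return total / (math.sqrt(2.0 * math.pi) * sigma)

sigma = 1.5
m1 = gauss_moment(1, sigma)  # expected 0 (odd moment)
m2 = gauss_moment(2, sigma)  # expected sigma^2 = 2.25
m3 = gauss_moment(3, sigma)  # expected 0 (odd moment)
```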

## Appendix C. Derivation of ${\mathbf{\Lambda}}_{\mathit{k}}^{\mathit{A}}$ under High SNR

In Equation (25), substituting $f\left({y}_{k}^{A}|{x}_{k},{\widehat{h}}_{k}^{A}\right)$ with $\frac{1}{\sqrt{2\pi}{\delta}_{A}}exp[-\frac{{({y}_{k}^{A}-{\widehat{h}}_{k}^{A}{x}_{k})}^{2}}{2{\delta}_{A}^{2}}]$ gives

$${\mathrm{\Lambda}}_{k}^{A}=log\frac{f\left({y}_{k}^{A}|{\theta}_{1}\right)}{f\left({y}_{k}^{A}|{\theta}_{0}\right)}=\left\{\begin{array}{ll}log\left[\frac{{P}_{d}exp\left(2{y}_{k}^{A}{\widehat{h}}_{k}^{A}/{\delta}_{A}^{2}\right)+(1-{P}_{d})}{{P}_{f}exp\left(2{y}_{k}^{A}{\widehat{h}}_{k}^{A}/{\delta}_{A}^{2}\right)+(1-{P}_{f})}\right], & {\widehat{h}}_{k}^{A}\ge {t}_{h}\\ log\left[\frac{(1-{P}_{d})exp\left(2{y}_{k}^{A}{\widehat{h}}_{k}^{A}/{\delta}_{A}^{2}\right)+{P}_{d}}{(1-{P}_{f})exp\left(2{y}_{k}^{A}{\widehat{h}}_{k}^{A}/{\delta}_{A}^{2}\right)+{P}_{f}}\right], & {\widehat{h}}_{k}^{A}<{t}_{h}\end{array}\right.$$

Furthermore, with ${\delta}_{A}^{2}\to 0$, we have $exp\left(\frac{2{y}_{k}^{A}{\widehat{h}}_{k}^{A}}{{\delta}_{A}^{2}}\right)\to \infty $ for ${y}_{k}^{A}>0$, $exp\left(\frac{2{y}_{k}^{A}{\widehat{h}}_{k}^{A}}{{\delta}_{A}^{2}}\right)\to 0$ for ${y}_{k}^{A}<0$, and ${\mathrm{\Lambda}}_{k}^{A}=0$ for ${y}_{k}^{A}=0$. Substituting these limits into Equation (C1) yields Equation (26).
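This limiting behavior can be confirmed numerically. In the Python sketch below (an illustration with assumed values ${P}_{d}=0.9$, ${P}_{f}=0.1$, not values from the paper), shrinking ${\delta}_{A}^{2}$ drives the first branch of Equation (C1) to the two constants $log({P}_{d}/{P}_{f})$ and $log\left((1-{P}_{d})/(1-{P}_{f})\right)$ used in Equation (26):

```python
import math

def llr_c1(y, h_hat, var, pd=0.9, pf=0.1):
    """First branch of Equation (C1), i.e., the case h_hat >= t_h."""
    z = 2.0 * y * h_hat / var  # dominates (y > 0) or vanishes (y < 0) as var -> 0
    return math.log((pd * math.exp(z) + 1.0 - pd) / (pf * math.exp(z) + 1.0 - pf))

small_var = 5e-3
pos = llr_c1(0.1, 1.0, small_var)   # close to log(Pd / Pf)
neg = llr_c1(-0.1, 1.0, small_var)  # close to log((1 - Pd) / (1 - Pf))
```

The variance is kept moderate (5e-3 with these arguments) so that `math.exp` does not overflow; the limits are already reached to within floating-point precision.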

## Appendix D. Derivation of Equations (27) and (28)

From Equation (26), we see that ${\mathrm{\Lambda}}_{k}^{A}$ is a discrete random variable, so its mean is given by $E\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{i}\right)={\sum}_{{\mathrm{\Lambda}}_{k}^{A}}{\mathrm{\Lambda}}_{k}^{A}f\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{i}\right)$. Moreover, the non-negative channel coefficient implies that ${y}_{k}^{A}>0$ is equivalent to ${x}_{k}=1$, ${y}_{k}^{A}<0$ corresponds to ${x}_{k}=-1$ and ${y}_{k}^{A}=0$ means ${x}_{k}=0$. In addition, ${h}_{k}^{A}\approx {\widehat{h}}_{k}^{A}$ holds in the large SNR scenario. Combining the above facts, we obtain the first derivation below, where (a) follows from the condition that ${h}_{k}^{A}$ is uncorrelated with ${x}_{k}$ and ${\theta}_{i}$, and (b) is due to Equation (24). Similarly, the second moment $E\left({({\mathrm{\Lambda}}_{k}^{A})}^{2}|{\theta}_{i}\right)$ is obtained in the second derivation.

$$\begin{array}{cc}\hfill E\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{i}\right)& ={\displaystyle \sum _{{\mathrm{\Lambda}}_{k}^{A}}}{\mathrm{\Lambda}}_{k}^{A}f\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{i}\right)\hfill \\ & =0\cdot p({x}_{k}=0|{\theta}_{i})+log\frac{{P}_{d}}{{P}_{f}}\left[{\int}_{{t}_{h}}^{+\infty}f\left({h}_{k}^{A},{x}_{k}=1|{\theta}_{i}\right)d{h}_{k}^{A}+{\int}_{0}^{{t}_{h}}f\left({h}_{k}^{A},{x}_{k}=-1|{\theta}_{i}\right)d{h}_{k}^{A}\right]\hfill \\ & \phantom{=}+log\frac{1-{P}_{d}}{1-{P}_{f}}\left[{\int}_{{t}_{h}}^{+\infty}f\left({h}_{k}^{A},{x}_{k}=-1|{\theta}_{i}\right)d{h}_{k}^{A}+{\int}_{0}^{{t}_{h}}f\left({h}_{k}^{A},{x}_{k}=1|{\theta}_{i}\right)d{h}_{k}^{A}\right]\hfill \\ & =log\frac{{P}_{d}}{{P}_{f}}\left[{\int}_{{t}_{h}}^{+\infty}f\left({h}_{k}^{A}|{x}_{k}=1,{\theta}_{i}\right)p\left({x}_{k}=1|{\theta}_{i}\right)d{h}_{k}^{A}+{\int}_{0}^{{t}_{h}}f\left({h}_{k}^{A}|{x}_{k}=-1,{\theta}_{i}\right)p\left({x}_{k}=-1|{\theta}_{i}\right)d{h}_{k}^{A}\right]\hfill \\ & \phantom{=}+log\frac{1-{P}_{d}}{1-{P}_{f}}\left[{\int}_{{t}_{h}}^{+\infty}f\left({h}_{k}^{A}|{x}_{k}=-1,{\theta}_{i}\right)p\left({x}_{k}=-1|{\theta}_{i}\right)d{h}_{k}^{A}+{\int}_{0}^{{t}_{h}}f\left({h}_{k}^{A}|{x}_{k}=1,{\theta}_{i}\right)p\left({x}_{k}=1|{\theta}_{i}\right)d{h}_{k}^{A}\right]\hfill \\ & \stackrel{(a)}{=}log\frac{{P}_{d}}{{P}_{f}}\left[{\int}_{{t}_{h}}^{+\infty}f\left({h}_{k}^{A}\right){\displaystyle \sum _{{b}_{k}}}p\left({x}_{k}=1|{b}_{k}\right)p\left({b}_{k}|{\theta}_{i}\right)d{h}_{k}^{A}+{\int}_{0}^{{t}_{h}}f\left({h}_{k}^{A}\right){\displaystyle \sum _{{b}_{k}}}p\left({x}_{k}=-1|{b}_{k}\right)p\left({b}_{k}|{\theta}_{i}\right)d{h}_{k}^{A}\right]\hfill \\ & \phantom{=}+log\frac{1-{P}_{d}}{1-{P}_{f}}\left[{\int}_{{t}_{h}}^{+\infty}f\left({h}_{k}^{A}\right){\displaystyle \sum _{{b}_{k}}}p\left({x}_{k}=-1|{b}_{k}\right)p\left({b}_{k}|{\theta}_{i}\right)d{h}_{k}^{A}+{\int}_{0}^{{t}_{h}}f\left({h}_{k}^{A}\right){\displaystyle \sum _{{b}_{k}}}p\left({x}_{k}=1|{b}_{k}\right)p\left({b}_{k}|{\theta}_{i}\right)d{h}_{k}^{A}\right]\hfill \\ & \stackrel{(b)}{=}log\frac{{P}_{d}}{{P}_{f}}\left[{\int}_{{t}_{1}}^{+\infty}f\left({h}_{k}^{A}\right)p\left({b}_{k}=1|{\theta}_{i}\right)d{h}_{k}^{A}+{\int}_{{t}_{3}}^{{t}_{2}}f\left({h}_{k}^{A}\right)p\left({b}_{k}=1|{\theta}_{i}\right)d{h}_{k}^{A}\right]\hfill \\ & \phantom{=}+log\frac{1-{P}_{d}}{1-{P}_{f}}\left[{\int}_{{t}_{1}}^{+\infty}f\left({h}_{k}^{A}\right)p\left({b}_{k}=0|{\theta}_{i}\right)d{h}_{k}^{A}+{\int}_{{t}_{3}}^{{t}_{2}}f\left({h}_{k}^{A}\right)p\left({b}_{k}=0|{\theta}_{i}\right)d{h}_{k}^{A}\right]\hfill \\ & =\left[log\frac{{P}_{d}}{{P}_{f}}p\left({b}_{k}=1|{\theta}_{i}\right)+log\frac{1-{P}_{d}}{1-{P}_{f}}p\left({b}_{k}=0|{\theta}_{i}\right)\right]\left({\lambda}_{1}+{\lambda}_{2}\right)\hfill \end{array}$$

$$\begin{array}{cc}\hfill E\left({({\mathrm{\Lambda}}_{k}^{A})}^{2}|{\theta}_{i}\right)& ={\displaystyle \sum _{{\mathrm{\Lambda}}_{k}^{A}}}{({\mathrm{\Lambda}}_{k}^{A})}^{2}f\left({\mathrm{\Lambda}}_{k}^{A}|{\theta}_{i}\right)\hfill \\ & ={0}^{2}\cdot p({x}_{k}=0|{\theta}_{i})+{\left(log\frac{{P}_{d}}{{P}_{f}}\right)}^{2}\left[{\int}_{{t}_{h}}^{+\infty}f\left({h}_{k}^{A},{x}_{k}=1|{\theta}_{i}\right)d{h}_{k}^{A}+{\int}_{0}^{{t}_{h}}f\left({h}_{k}^{A},{x}_{k}=-1|{\theta}_{i}\right)d{h}_{k}^{A}\right]\hfill \\ & \phantom{=}+{\left(log\frac{1-{P}_{d}}{1-{P}_{f}}\right)}^{2}\left[{\int}_{{t}_{h}}^{+\infty}f\left({h}_{k}^{A},{x}_{k}=-1|{\theta}_{i}\right)d{h}_{k}^{A}+{\int}_{0}^{{t}_{h}}f\left({h}_{k}^{A},{x}_{k}=1|{\theta}_{i}\right)d{h}_{k}^{A}\right]\hfill \\ & =\left[{\left(log\frac{{P}_{d}}{{P}_{f}}\right)}^{2}p\left({b}_{k}=1|{\theta}_{i}\right)+{\left(log\frac{1-{P}_{d}}{1-{P}_{f}}\right)}^{2}p\left({b}_{k}=0|{\theta}_{i}\right)\right]\left({\lambda}_{1}+{\lambda}_{2}\right)\hfill \end{array}$$

## Appendix E. Calculation of the Derivative of $\mathit{D}({\mathit{t}}_{\mathbf{3}},{\mathit{t}}_{\mathbf{2}},{\mathit{t}}_{\mathbf{1}})$ with Respect to α

Rewrite $D({t}_{3},{t}_{2},{t}_{1})$ as a function of α:

$$D\left(\alpha \right)=m\left({t}_{1}\left(\alpha \right)\right)-n\left({t}_{3}\left(\alpha \right),{t}_{2}\left(\alpha \right)\right)$$

whose derivative is given by

$${\delta}_{D}\left(\alpha \right)=\frac{dD\left(\alpha \right)}{d\alpha}=\frac{dD\left(\alpha \right)/d{t}_{1}}{d\alpha /d{t}_{1}}$$

Moreover, we can calculate

$$\begin{array}{cc}dD\left(\alpha \right)/d{t}_{1}\hfill & =d[{t}_{1}exp\left(-{t}_{1}^{2}\right)+{\int}_{{t}_{1}}^{\infty}exp\left(-{h}^{2}\right)dh-{t}_{3}exp\left(-{t}_{3}^{2}\right)+{t}_{2}exp\left(-{t}_{2}^{2}\right)-{\int}_{{t}_{3}}^{{t}_{2}}exp\left(-{h}^{2}\right)dh]/d{t}_{1}\hfill \\ & =-2{t}_{1}^{2}exp\left(-{t}_{1}^{2}\right)+2{t}_{3}^{2}exp\left(-{t}_{3}^{2}\right)\frac{d{t}_{3}}{d{t}_{1}}-2{t}_{2}^{2}exp\left(-{t}_{2}^{2}\right)\frac{d{t}_{2}}{d{t}_{1}}\hfill \end{array}$$

$$\begin{array}{cc}d\alpha /d{t}_{1}\hfill & =d[exp\left(-{t}_{1}^{2}\right)+exp\left(-{t}_{3}^{2}\right)-exp\left(-{t}_{2}^{2}\right)]/d{t}_{1}\hfill \\ & =-2{t}_{1}exp\left(-{t}_{1}^{2}\right)-2{t}_{3}exp\left(-{t}_{3}^{2}\right)\frac{d{t}_{3}}{d{t}_{1}}+2{t}_{2}exp\left(-{t}_{2}^{2}\right)\frac{d{t}_{2}}{d{t}_{1}}\hfill \end{array}$$

Specially, for the case ${t}_{3}=0$, we obtain

$$\begin{array}{cc}\hfill dD\left(\alpha \right)/d{t}_{1}& =-2{t}_{1}^{2}exp\left(-{t}_{1}^{2}\right)-2{t}_{2}^{2}exp\left(-{t}_{2}^{2}\right)\frac{d{t}_{2}}{d{t}_{1}}\hfill \\ \hfill d\alpha /d{t}_{1}& =-2{t}_{1}exp\left(-{t}_{1}^{2}\right)+2{t}_{2}exp\left(-{t}_{2}^{2}\right)\frac{d{t}_{2}}{d{t}_{1}}\hfill \end{array}$$

Because ${\lambda}_{1}={\lambda}_{2}$ must be satisfied, the following relation holds

$$\begin{array}{c}dexp\left(-{t}_{1}^{2}\right)/d{t}_{1}=d\left(1-exp\left(-{t}_{2}^{2}\right)\right)/d{t}_{1}\hfill \\ \phantom{\rule{1.0pt}{0ex}}\Rightarrow -2{t}_{1}exp\left(-{t}_{1}^{2}\right)=2{t}_{2}exp\left(-{t}_{2}^{2}\right)\frac{d{t}_{2}}{d{t}_{1}}\hfill \\ \Rightarrow \frac{d{t}_{2}}{d{t}_{1}}=-\frac{{t}_{1}exp\left(-{t}_{1}^{2}\right)}{{t}_{2}exp\left(-{t}_{2}^{2}\right)}\hfill \end{array}$$

Substituting Equation (E6) into Equation (E5) yields

$${\delta}_{D}\left(\alpha \right)=\frac{{t}_{1}-{t}_{2}}{2}$$

Otherwise, for the case ${t}_{1}={t}_{2}$ (i.e., ${t}_{3}\ge 0$), we have

$$\begin{array}{cc}\hfill dD\left(\alpha \right)/d{t}_{1}& =-4{t}_{1}^{2}exp\left(-{t}_{1}^{2}\right)+2{t}_{3}^{2}exp\left(-{t}_{3}^{2}\right)\frac{d{t}_{3}}{d{t}_{1}}\hfill \\ \hfill d\alpha /d{t}_{1}& =-2{t}_{3}exp\left(-{t}_{3}^{2}\right)\frac{d{t}_{3}}{d{t}_{1}}\hfill \end{array}$$

Again, due to ${\lambda}_{1}={\lambda}_{2}$, we obtain

$$\begin{array}{c}dexp\left(-{t}_{1}^{2}\right)/d{t}_{1}=d\left(exp(-{t}_{3}^{2})-exp\left(-{t}_{1}^{2}\right)\right)/d{t}_{1}\hfill \\ \phantom{\rule{1.0pt}{0ex}}\Rightarrow -4{t}_{1}exp\left(-{t}_{1}^{2}\right)=-2{t}_{3}exp\left(-{t}_{3}^{2}\right)\frac{d{t}_{3}}{d{t}_{1}}\hfill \\ \Rightarrow \frac{d{t}_{3}}{d{t}_{1}}=\frac{2{t}_{1}exp\left(-{t}_{1}^{2}\right)}{{t}_{3}exp\left(-{t}_{3}^{2}\right)}\hfill \end{array}$$

Substituting this relation in the same way yields ${\delta}_{D}\left(\alpha \right)={t}_{1}-{t}_{3}$.
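Both sensitivity results above can be checked symbolically by substituting the derivative relations $d{t}_{2}/d{t}_{1}$ and $d{t}_{3}/d{t}_{1}$ into the quotient $\left(dD\left(\alpha \right)/d{t}_{1}\right)/\left(d\alpha /d{t}_{1}\right)$ and simplifying. The following SymPy sketch is added here purely as a verification aid and is not part of the original derivation:

```python
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3', positive=True)

# Case t1 != t2: dt2/dt1 from the multiplier condition (Equation (E6))
dt2 = -t1 * sp.exp(-t1**2) / (t2 * sp.exp(-t2**2))
dD = -2*t1**2*sp.exp(-t1**2) - 2*t2**2*sp.exp(-t2**2) * dt2
dalpha = -2*t1*sp.exp(-t1**2) + 2*t2*sp.exp(-t2**2) * dt2
# delta_D(alpha) = dD/dalpha should simplify to (t1 - t2)/2
assert sp.simplify(dD/dalpha - (t1 - t2)/2) == 0

# Case t1 = t2 (i.e., t3 >= 0): dt3/dt1 from the same condition
dt3 = 2*t1 * sp.exp(-t1**2) / (t3 * sp.exp(-t3**2))
dD = -4*t1**2*sp.exp(-t1**2) + 2*t3**2*sp.exp(-t3**2) * dt3
dalpha = -2*t3*sp.exp(-t3**2) * dt3
# delta_D(alpha) = dD/dalpha should simplify to t1 - t3
assert sp.simplify(dD/dalpha - (t1 - t3)) == 0
```

Both assertions pass because the common factor ${t}_{1}exp\left(-{t}_{1}^{2}\right)$ cancels in each quotient.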

## Appendix F. Proof of D(λ_{U}) = D(λ_{L})

Starting from $({P}_{d}-{P}_{f})=({P}_{0d}-{P}_{m})$, we have

$$\begin{array}{rl}D\left({\lambda}_{U}\right)& ={\int}_{log({\lambda}_{U}{q}_{0}/{q}_{1})}^{+\infty}\left[f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{1}\right)-f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{0}\right)\right]d{\mathrm{\Lambda}}_{k}\hfill \\ & =-{\int}_{-\infty}^{log({\lambda}_{L}{q}_{0}/{q}_{1})}\left[f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{1}\right)-f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{0}\right)\right]d{\mathrm{\Lambda}}_{k}\hfill \end{array}$$

By the law of total probability, we have

$${\int}_{-\infty}^{+\infty}\left[f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{1}\right)-f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{0}\right)\right]d{\mathrm{\Lambda}}_{k}=0$$

Then $D\left({\lambda}_{U}\right)$ can be rewritten as

$$\begin{array}{rl}D\left({\lambda}_{U}\right)& =-{\int}_{-\infty}^{log({\lambda}_{L}{q}_{0}/{q}_{1})}\left[f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{1}\right)-f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{0}\right)\right]d{\mathrm{\Lambda}}_{k}\hfill \\ & ={\int}_{log({\lambda}_{L}{q}_{0}/{q}_{1})}^{+\infty}\left[f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{1}\right)-f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{0}\right)\right]d{\mathrm{\Lambda}}_{k}\hfill \\ & =D\left({\lambda}_{L}\right)\hfill \end{array}$$
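The flip from an integral over $(-\infty, a]$ to one over $[a, +\infty)$ relies only on the fact that $f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{1}\right)-f\left({\mathrm{\Lambda}}_{k}^{L}|{\theta}_{0}\right)$ integrates to zero over the whole real line. A quick numerical illustration, using two hypothetical Gaussian likelihoods and an arbitrary threshold (these stand in for, and are not, the paper's actual LLR densities):

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical unit-variance LLR densities: f(.|theta1) = N(1, 1), f(.|theta0) = N(-1, 1)
mu1, mu0 = 1.0, -1.0
a = 0.3  # arbitrary threshold playing the role of log(lambda q0 / q1)

# Closed-form integrals of [f1 - f0] below and above the threshold
below = Phi(a - mu1) - Phi(a - mu0)
above = (1 - Phi(a - mu1)) - (1 - Phi(a - mu0))

# Total probability: the two pieces are exact negatives of each other
assert abs(below + above) < 1e-12
```

The same cancellation holds for any pair of proper densities and any threshold, which is all the step above requires.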


**Figure 4.** Error probabilities at the AFC and EFC versus SNR for $\beta =0.8$ and $sn{r}_{L}=5$ dB in the low-SNR region.

**Figure 5.** Error probabilities at the AFC and EFC versus β for $sn{r}_{L}=5$ dB in the low-SNR region.

**Figure 6.** Error probabilities at the AFC and EFC versus SNR for $\beta =0.8$ and $sn{r}_{L}=5$ dB in the high-SNR region.

**Figure 7.** Error probabilities at the AFC and EFC versus β for $sn{r}_{L}=5$ dB in the high-SNR region.

**Figure 8.** Error probabilities of the TCBO and JLDWT schemes versus SNR for $\beta =0.8$ and $sn{r}_{L}=0$ dB in the low-SNR region.

**Figure 9.** Error probabilities of the TCBO and JLDWT schemes versus SNR for $\beta =0.8$ in the high-SNR region.

**Figure 10.** Error probabilities of the TCBO and JLDWT schemes with SC-based LLR versus β for $sn{r}_{L}=0$ dB.

**Figure 11.** Error probabilities of the TCBO and JLDWT schemes with SC-based LLR versus β for $sn{r}_{L}=5$ dB.

Parameters | Assumption |
---|---|
Number of sensors | 20 |
Prior probabilities of target states | ${q}_{0}={q}_{1}=0.5$ |
Transmission channel model | Rayleigh distribution with $E\left[{h}^{2}\right]=1$ |
Energy constraint | $\beta =0.4:0.1:1$ |
Local detection SNR | $sn{r}_{L}=0,5$ dB |
Transmission channel SNR | $SN{R}_{A}=SN{R}_{E}=-12:2:16$ dB |

β | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1 |
---|---|---|---|---|---|---|---|
${\lambda}_{U}$ | 2.585 | 2.145 | 1.810 | 1.545 | 1.330 | 1.155 | 1.000 |
${\lambda}_{L}$ | 0.387 | 0.466 | 0.553 | 0.647 | 0.752 | 0.866 | 1.000 |

β | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1 |
---|---|---|---|---|---|---|---|
${\lambda}_{U}$ | 8.320 | 5.595 | 3.875 | 2.730 | 1.945 | 1.395 | 1.000 |
${\lambda}_{L}$ | 0.120 | 0.179 | 0.258 | 0.366 | 0.514 | 0.717 | 1.000 |

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).