Open Access: This article is freely available and re-usable.

*J. Sens. Actuator Netw.*
**2018**,
*7*(3),
25;
https://doi.org/10.3390/jsan7030025

Article

Fundamental Limitations in Energy Detection for Spectrum Sensing

^{1} Nanfang College, Sun Yat-Sen University, Guangzhou 510900, China

^{2} College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China

^{3} Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada

^{4} School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Korea

^{*} Author to whom correspondence should be addressed.

Received: 15 May 2018 / Accepted: 27 June 2018 / Published: 28 June 2018

## Abstract

A key enabler for Cognitive Radio (CR) is spectrum sensing, which is physically implemented by sensor and actuator networks, typically using the popular energy detection method. The threshold of the binary hypothesis test for energy detection is generally determined by the principle of constant false alarm rate (CFAR) or constant detection rate (CDR). The CDR principle guarantees the primary users a designated low level of interference, but leads to low spectrum usability for the secondary users within a given sensing latency. On the other hand, the CFAR principle ensures the secondary users’ spectrum utilization at a designated high level, but may lead to a high level of interference to the primary users. This paper introduces a novel framework of energy detection for CR spectrum sensing, aiming at a graceful compromise between the two principles. The proposed framework constrains the summation of the false alarm probability ${P}_{fa}$ from CFAR and the missed detection probability $(1-{P}_{d})$ from CDR, which is compared with a predetermined confidence level. Optimization formulations for determining key parameters of the proposed framework are developed and analyzed. We identify two fundamental limitations that appear in spectrum sensing, which further define the relationship among the sample data size for detection, the detection time, and the signal-to-noise ratio (SNR). We argue that the proposed framework of energy detection offers practical guidance for setting the detection time and the sample rate on specific channels to achieve better efficiency and less interference.

Keywords: fundamental limitations; SNR; energy detection; noise variance; spectrum sensing

## 1. Introduction

Cognitive Radio (CR) was first introduced as a promising candidate for dealing with spectrum scarcity in future wireless communications [1]. Under-utilized frequency bands originally allocated to licensed users (i.e., primary users) are freed and become accessible to non-licensed users (i.e., secondary users) equipped with CR in an opportunistic manner, maximizing spectrum utilization while minimizing interference to the primary users. Despite its obvious advantages, CR technology faces a great challenge in detecting spectrum holes through spectrum sensing. This is because secondary users generally have very limited knowledge about the whole spectrum, which may leave the spectrum sensing results far from accurate. Existing spectrum sensing methods in the literature include matched filtering, waveform-based sensing [2], cyclostationary-based sensing [3,4], eigenvalue-based methods [5,6], energy detection [7,8,9,10,11,12,13], etc. Among these, energy detection is the most popular owing to its simplicity.

The energy detection method [7,8,9,10,11,12,13] for spectrum sensing measures the average energy of the total received signal during a period of time and compares it with a properly assigned threshold to decide the presence or absence of users. Typically, the energy detection method is formulated as a binary hypothesis test with a null Hypothesis ${H}_{0}$ for the absence of users and an alternative Hypothesis ${H}_{1}$ for their presence. The threshold is typically determined by one of two standard principles: constant false alarm rate (CFAR) and constant detection rate (CDR) [14,15]. With the emphasis on promoting usage of spectrum holes, a CFAR threshold is derived by keeping the probability of false alarm under ${H}_{0}$ less than a given confidence level $\alpha $; with the emphasis on reducing interference to users, a CDR threshold is derived by keeping the probability of missed detection under ${H}_{1}$ less than a given confidence level $\alpha $. Each criterion keeps the error probability of one hypothesis at a low level while ignoring the error probability of the opposing hypothesis. Therefore, under some extreme circumstances, one error probability may be large although the other is small. The purpose of this paper is to develop a new criterion that simultaneously keeps the two kinds of error detection probabilities at a low level. To the authors’ knowledge, this is the first effort to develop such a criterion in energy detection for spectrum sensing.

A simple way to keep the two kinds of error detection probabilities simultaneously small is to restrict the summation of the two probabilities to less than a confidence level. To describe this in more detail, let u be a constructed statistic for the binary hypothesis test. For a given small $\alpha \in (0,1)$, say $\alpha =0.05$, the threshold by the CFAR principle is selected as the smallest $\lambda $ such that

$$P[u\ge \lambda |{H}_{0}]\le \alpha ,$$

while the threshold by the CDR principle is selected as the maximal $\lambda $ such that

$$P[u\le \lambda |{H}_{1}]\le \alpha .$$

The new criterion selects $\lambda $ based on the summation of the two probabilities:

$$P[u\ge \lambda |{H}_{0}]+P[u\le \lambda |{H}_{1}]\le \alpha ,$$

which ensures the two probabilities are simultaneously smaller than the confidence level $\alpha $. Denote the false alarm probability ${P}_{fa}\left(\lambda \right)=P[u\ge \lambda |{H}_{0}]$ for CFAR and the missed detection probability $(1-{P}_{d}\left(\lambda \right))$ for CDR with ${P}_{d}\left(\lambda \right)=P[u\ge \lambda |{H}_{1}]$. The aforementioned summation principle then becomes

$$\begin{array}{c}\hfill {P}_{fa}\left(\lambda \right)+(1-{P}_{d}\left(\lambda \right))\le \alpha .\end{array}$$

This is not yet a well-posed formulation, however, since the inequality in Equation (1) may have many solutions, or none, with respect to $\lambda $ for a given noise variance ${\sigma}_{n}^{2}$, signal-to-noise ratio (SNR), and data size M.

In this paper, to derive a unique threshold $\lambda $ from Equation (1), two kinds of optimization problems are introduced: (i) finding the minimum data size M with ${\sigma}_{n}^{2}$ and SNR given; and (ii) finding the lowest SNR with ${\sigma}_{n}^{2}$ and M given. Under the first setting, we show that, for a small SNR, the data size M must be larger than a critical value, denoted by ${M}_{\mathrm{min}}$, to guarantee the existence of a threshold $\lambda $ satisfying the inequality in Equation (1) under a given confidence level $\alpha $. We also derive an asymptotic formula, Equation (61), for the minimum data size ${M}_{\mathrm{min}}$ as the SNR goes to zero. Under the second setting, we show that for a given data size M the SNR must be greater than a minimum SNR to ensure the existence of a threshold $\lambda $ satisfying the inequality in Equation (1) under a given confidence level $\alpha $. The asymptotic formula for the minimum SNR, Equation (62), is found as $M\to \infty $. Theoretical analysis and simulations are conducted for the two settings, and the obtained asymptotic formulas are verified.

The main contributions of this paper are as follows. (i) A new principle for selecting the threshold of energy detection is proposed, which keeps the summation of the two kinds of error detection probabilities less than a given confidence level. To derive a unique threshold under this constraint, two optimization frameworks are proposed. (ii) The optimal selection of the thresholds under the two proposed optimization frameworks is analyzed in Propositions 1 and 3, independently of the constraints. (iii) The lower and upper bounds of the solutions of the two optimization frameworks are established in Theorems 1–4, for the exact distribution and the approximate distribution of the test statistic, respectively. Two asymptotic formulae under the corresponding limit processes for the two optimization problems are also derived to describe the fundamental limitations of energy detection for spectrum sensing.

The fundamental limitations in energy detection found in this paper, based on the constraint in Equation (1), differ from the SNR wall introduced in [16], which is a limitation regarding the robustness of detectors. We discover that, even when the noise variance remains constant, limitations still exist regarding the tradeoff between efficiency and noninterference. For example, when the channel detection time is 2 s [17] and the sample rate of a channel [18] is once every $16\phantom{\rule{0.166667em}{0ex}}\mathsf{\mu}$s (which yields the data size $M=2/0.000016=125,000$), by the asymptotic Equation (62), the minimum SNR is approximately $0.0158$ (i.e., $-18.0124$ dB) under a confidence level $\alpha =0.05$. In other words, it is impossible to detect any signal with SNR lower than $-18.0124$ dB under such a setting with a confidence level $\alpha =0.05$. The analysis and understanding of these limitations not only enable a wise choice of channels in CR spectrum sensing, but also help policymaking in the determination of detection settings, such as detection time and sample rate, for a specific channel under certain requirements on efficiency and interference at a confidence level. These issues are critical in the design of a CR spectrum sensing system, as they have fundamental impacts on the resultant system performance.
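The arithmetic of this example can be reproduced directly. The closing formula for the minimum SNR below, ${s}_{\min}\approx 2{Q}^{-1}(\alpha /2)\sqrt{2/M}$, is an assumed reading of the asymptotic Equation (62), which lies outside this excerpt; it lands near the quoted $0.0158$ but its decibel value differs slightly from $-18.0124$ dB:

```python
# Reproducing the worked example: M = 125,000 samples and the reported
# minimum SNR of about 0.0158 (about -18 dB). The formula for s_min below
# is an ASSUMED form of Equation (62), not quoted from the text.
import math
from statistics import NormalDist

detection_time = 2.0      # seconds [17]
sample_period = 16e-6     # one sample every 16 microseconds [18]
M = detection_time / sample_period
print(M)                  # 125000.0

alpha = 0.05
q_inv = NormalDist().inv_cdf(1 - alpha / 2)   # Q^{-1}(alpha / 2)
s_min = 2 * q_inv * math.sqrt(2 / M)          # assumed asymptotic form
print(s_min, 10 * math.log10(s_min))          # about 0.0157, about -18 dB
```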

The rest of this paper is organized as follows. The model setting and the CFAR and CDR principles for energy detection are introduced in Section 2, where the thresholds are derived by assuming that all signals are Gaussian. The principle of compromise between CFAR and CDR is introduced in Section 3, and two optimization formulations are introduced and theoretically analyzed. Numerical experiments are conducted in Section 4 to examine how the solutions of the proposed optimization problems evolve. In Section 5, the fundamental limitations in energy detection are demonstrated, and the asymptotic orders of the critical values are derived via theoretical analysis. Finally, the concluding remarks of this study are given in Section 6.

## 2. Model Setting and Thresholds by CFAR and CDR

A block diagram of typical energy detection for spectrum sensing is shown in Figure 1. The input Band Pass Filter (BPF), which has bandwidth W centered at ${f}_{c}$, removes the out-of-band signals. Note that W and ${f}_{c}$ must be known to the secondary user so that it can perform spectrum sensing on the corresponding channels. After the signal is digitized by an analog-to-digital converter (ADC), a simple square-and-average device is used to estimate the received signal energy. Assume the input signal to the energy detector is real. The estimated energy, u, is then compared with a threshold, $\lambda $, to decide whether a signal is present (${H}_{1}$) or not (${H}_{0}$).

Spectrum sensing determines whether a licensed band is currently used by its primary user. This can be formulated as a binary hypothesis testing problem [19,20]:

$$\begin{array}{c}\hfill x\left(k\right)=\left\{\begin{array}{cc}n\left(k\right),\hfill & {H}_{0}\left(\mathrm{vacant}\right),\hfill \\ s\left(k\right)+n\left(k\right),\hfill & {H}_{1}\left(\mathrm{occupied}\right),\hfill \end{array}\right.\end{array}$$

where $s\left(k\right)$, $n\left(k\right)$, and $x\left(k\right)$ represent the primary user’s signal, the noise, and the received signal, respectively. The noise is assumed to be an iid Gaussian random process of zero mean and variance ${\sigma}_{n}^{2}$, and the signal is likewise assumed to be an iid Gaussian random process of zero mean and variance ${\sigma}_{s}^{2}$. The signal-to-noise ratio is defined as the ratio of the signal variance to the noise variance:

$$\begin{array}{c}\hfill \mathrm{SNR}={\sigma}_{s}^{2}/{\sigma}_{n}^{2}.\end{array}$$

The test statistic generated by the energy detector, as shown in Figure 1, is

$$\begin{array}{c}\hfill u=\frac{1}{M}\sum _{k=1}^{M}{x}_{k}^{2}.\end{array}$$
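The square-and-average pipeline above can be sketched in a minimal simulation, assuming NumPy. The signal and noise models follow the text (iid zero-mean Gaussian); the threshold value here is purely illustrative, not one of the principled choices derived below:

```python
# Minimal simulation of the square-and-average energy detector in
# Figure 1 / Equation (4). The threshold lam is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
M = 10_000                      # data size
sigma_n2 = 1.0                  # noise variance
snr = 0.2
sigma_s2 = snr * sigma_n2       # signal variance, SNR = sigma_s^2 / sigma_n^2

noise = rng.normal(0.0, np.sqrt(sigma_n2), M)
signal = rng.normal(0.0, np.sqrt(sigma_s2), M)

x_h0 = noise                    # H0: channel vacant
x_h1 = signal + noise           # H1: channel occupied

u_h0 = np.mean(x_h0 ** 2)       # test statistic, Equation (4)
u_h1 = np.mean(x_h1 ** 2)

lam = sigma_n2 * (1 + snr / 2)  # illustrative threshold between the two means
print(u_h0 < lam, u_h1 > lam)   # decisions: H0 declared vacant, H1 occupied
```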

The threshold is typically determined by one of two standard principles: constant false alarm rate (CFAR) and constant detection rate (CDR) [14,15]. CDR guarantees a designated low level of interference to primary users, but results in low spectrum usability for secondary users given a fixed sensing time. On the other hand, CFAR protects secondary users’ spectrum utilization at a designated high level, but may lead to a high level of interference to primary users. Therefore, each of the CFAR and CDR principles can generally keep only one of the error probabilities at a low level within a limited sensing time, i.e., the false alarm probability ${P}_{fa}$ for CFAR and the missed detection probability $(1-{P}_{d})$ for CDR, respectively.

Under Hypotheses ${H}_{0}$ and ${H}_{1}$, the test statistic u is a random variable whose probability density function (PDF) is chi-square distributed. Let us denote a chi-square distributed random variable X with M degrees of freedom as $X\sim {\chi}_{M}^{2}$, and recall its PDF as

$$\begin{array}{c}\hfill {f}_{\chi}(x,M)=\left\{\begin{array}{cc}\frac{1}{{2}^{M/2}\Gamma (M/2)}{x}^{M/2-1}{e}^{-x/2},\hfill & \mathrm{for}\phantom{\rule{4.pt}{0ex}}x>0,\hfill \\ 0,\hfill & \mathrm{otherwise},\hfill \end{array}\right.\end{array}$$

where $\Gamma (\xb7)$ denotes the Gamma function, given in Equation (A2) in the Appendix.

Clearly, under Hypothesis ${H}_{0}$, $Mu/{\sigma}_{n}^{2}\sim {\chi}_{M}^{2}$; and $Mu/{\sigma}_{t}^{2}\sim {\chi}_{M}^{2}$ under ${H}_{1}$ with ${\sigma}_{t}^{2}=(1+\mathrm{SNR}){\sigma}_{n}^{2}$. Thus, by a change of variables, the PDF of the test statistic u is

$$\begin{array}{c}\hfill {f}_{u}\left(x\right)=\left\{\begin{array}{cc}\frac{M}{{\sigma}_{n}^{2}}{f}_{\chi}(\frac{Mx}{{\sigma}_{n}^{2}},M),\hfill & \mathrm{under}\phantom{\rule{4.pt}{0ex}}{H}_{0};\hfill \\ \frac{M}{{\sigma}_{t}^{2}}{f}_{\chi}(\frac{Mx}{{\sigma}_{t}^{2}},M),\hfill & \mathrm{under}\phantom{\rule{4.pt}{0ex}}{H}_{1}.\hfill \end{array}\right.\end{array}$$

Since $Eu={\sigma}_{n}^{2}$ and $\mathrm{var}\left(u\right)=\frac{2}{M}{\sigma}_{n}^{4}$ under ${H}_{0}$, by the central limit theorem [21], the test statistic u asymptotically obeys a Gaussian distribution with mean ${\sigma}_{n}^{2}$ and variance $\frac{2}{M}{\sigma}_{n}^{4}$. A similar distribution can be derived under ${H}_{1}$. Therefore, when M is sufficiently large, we can approximate the PDF of u using a Gaussian distribution:

$$\begin{array}{c}\hfill {\tilde{f}}_{u}\left(x\right)\sim \left\{\begin{array}{cc}\mathcal{N}({\sigma}_{n}^{2},2{\sigma}_{n}^{4}/M),\hfill & \mathrm{under}\phantom{\rule{4.pt}{0ex}}{H}_{0};\hfill \\ \mathcal{N}({\sigma}_{t}^{2},2{\sigma}_{t}^{4}/M),\hfill & \mathrm{under}\phantom{\rule{4.pt}{0ex}}{H}_{1}.\hfill \end{array}\right.\end{array}$$
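The mean and variance underlying this Gaussian approximation can be spot-checked by Monte Carlo, assuming NumPy; all parameter values below are illustrative:

```python
# Monte Carlo check of the Gaussian approximation in Equation (7): over many
# independent trials under H0, the sample mean and variance of u should be
# close to sigma_n^2 and 2*sigma_n^4/M.
import numpy as np

rng = np.random.default_rng(1)
M, trials = 1_000, 5_000
sigma_n2 = 1.0

x = rng.normal(0.0, np.sqrt(sigma_n2), (trials, M))
u = np.mean(x ** 2, axis=1)      # one test statistic per trial, Equation (4)

print(u.mean())                  # close to sigma_n^2 = 1
print(u.var())                   # close to 2*sigma_n^4/M = 0.002
```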

For a given threshold $\lambda $, the probability of false alarm is given by

$$\begin{array}{c}\hfill {P}_{fa}\left(\lambda \right)=\mathrm{prob}[u>\lambda |{H}_{0}]=\Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{n}^{2}}\right),\end{array}$$

where $\Gamma (a,x)$ is the upper incomplete gamma function in Equation (A3) in the Appendix. The approximate form of ${P}_{fa}$ corresponding to the distribution in Equation (7) for large M is

$$\begin{array}{c}\hfill {\tilde{P}}_{fa}\left(\lambda \right)=Q\left(\frac{\lambda -{\sigma}_{n}^{2}}{{\sigma}_{n}^{2}/\sqrt{M/2}}\right),\end{array}$$

where $Q(\xb7)$ is defined in Equation (A1) in the Appendix.

In practice, if a reuse probability of the unused spectrum must be guaranteed, the probability of false alarm is fixed at a small value (e.g., ${P}_{fa}=0.05$) while the detection probability is maximized as much as possible. This is referred to as the constant false alarm rate (CFAR) principle [19,22]. Under the CFAR principle, the probability of false alarm (${P}_{fa}$) is predetermined, and the threshold (${\lambda}_{fa}$) can be set accordingly by

$$\begin{array}{c}\hfill {\lambda}_{fa}=\frac{2{\sigma}_{n}^{2}}{M}{\Gamma}^{-1}\left(M/2,{P}_{fa}\right),\end{array}$$

where ${\Gamma}^{-1}(a,x)$ is the inverse function of $\Gamma (a,x)$. For the approximate case corresponding to the distribution in Equation (7) for large M, the threshold is

$$\begin{array}{c}\hfill {\tilde{\lambda}}_{fa}={\sigma}_{n}^{2}\left(1+\frac{{Q}^{-1}\left({P}_{fa}\right)}{\sqrt{M/2}}\right),\end{array}$$

where ${Q}^{-1}\left(x\right)$ is the inverse function of $Q\left(x\right)$.

Similarly, under Hypothesis ${H}_{1}$, for a given threshold $\lambda $, the probability of detection is given by

$$\begin{array}{c}\hfill {P}_{d}\left(\lambda \right)=\mathrm{prob}[u>\lambda |{H}_{1}]=\Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{t}^{2}}\right),\end{array}$$

where $\Gamma (a,x)$ is the upper incomplete gamma function. The approximate form of ${P}_{d}$ corresponding to the distribution in Equation (7) for large M is

$$\begin{array}{c}\hfill {\tilde{P}}_{d}\left(\lambda \right)=Q\left(\frac{\lambda -{\sigma}_{t}^{2}}{{\sigma}_{t}^{2}/\sqrt{M/2}}\right).\end{array}$$

Practically, if the primary users must be guaranteed freedom from interference, the probability of detection should be set high (e.g., ${P}_{d}=0.95$) and the probability of false alarm minimized as much as possible. This is called the constant detection rate (CDR) principle [19,22]. Accordingly, the threshold under the CDR principle to achieve a target probability of detection is given by

$$\begin{array}{c}\hfill {\lambda}_{d}=\frac{2{\sigma}_{n}^{2}(1+\mathrm{SNR})}{M}{\Gamma}^{-1}\left(M/2,{P}_{d}\right).\end{array}$$

The corresponding approximation case is

$$\begin{array}{c}\hfill {\tilde{\lambda}}_{d}={\sigma}_{n}^{2}(1+\mathrm{SNR})\left(1+\frac{{Q}^{-1}\left({P}_{d}\right)}{\sqrt{M/2}}\right).\end{array}$$
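The four thresholds in Equations (10), (11), (14), and (15) can be evaluated numerically, assuming SciPy is available. Here $\Gamma (a,x)$ is taken to be the *regularized* upper incomplete gamma function (`scipy.special.gammaincc`) — an assumption, but one required for ${P}_{fa}$ in Equation (8) to be a probability; parameter values are illustrative:

```python
# Thresholds from Equations (10)-(11) (CFAR) and (14)-(15) (CDR).
# Gamma(a, x) is read as the regularized upper incomplete gamma function,
# so its inverse in the second argument is scipy.special.gammainccinv.
import numpy as np
from scipy.special import gammainccinv
from scipy.stats import norm

sigma_n2, snr, M = 1.0, 0.1, 5_000
P_fa, P_d = 0.05, 0.95

# Exact (chi-square) thresholds, Equations (10) and (14)
lam_fa = 2 * sigma_n2 / M * gammainccinv(M / 2, P_fa)
lam_d = 2 * sigma_n2 * (1 + snr) / M * gammainccinv(M / 2, P_d)

# Gaussian-approximation thresholds, Equations (11) and (15);
# norm.isf is Q^{-1}
lam_fa_tilde = sigma_n2 * (1 + norm.isf(P_fa) / np.sqrt(M / 2))
lam_d_tilde = sigma_n2 * (1 + snr) * (1 + norm.isf(P_d) / np.sqrt(M / 2))

print(lam_fa, lam_fa_tilde)  # the two CFAR thresholds agree for large M
print(lam_d, lam_d_tilde)    # likewise for CDR
```

For this M, the exact and Gaussian-approximation thresholds agree to about three decimal places, illustrating the large-M regime assumed by Equation (7).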

Due to the similarity of Equations (10) and (14), the derivations of the threshold values for CFAR and CDR are similar. Thus, it is not surprising that some analytic results derived for CFAR-based detection can be applied to CDR-based detection with minor modifications, and vice versa (see, e.g., [19,22]).

## 3. Thresholds by New Principle

It is clear that the CFAR and CDR principles can guarantee a low ${P}_{fa}$ and a low $(1-{P}_{d})$, respectively. However, in practice, we may want both of them to be low. This motivates a new principle in which a threshold is determined to keep the sum of the two error probabilities at a designated low level. The problem of interest in this study is to find a threshold, for a given small $\alpha \in (0,1)$, say $\alpha =0.05$, such that
$$\begin{array}{c}\hfill {P}_{fa}\left(\lambda \right)+(1-{P}_{d}\left(\lambda \right))\le \alpha ,\end{array}$$
where ${P}_{fa}\left(\lambda \right)$ and ${P}_{d}\left(\lambda \right)$ are given by Equations (8) and (12), respectively. This is nonetheless not a well-posed formulation, since the inequality in Equation (16) may have many solutions, or none, with respect to $\lambda $ for a given noise variance ${\sigma}_{n}^{2}$, SNR (or ${\sigma}_{t}^{2}$), and data size M. In this section, a suite of well-posed formulations realizing this idea is developed and analyzed, and relevant properties are derived for reference. Specifically, the formulations considered in this study cover the following two scenarios:

- (i)
- Assuming ${\sigma}_{n}^{2}$ and SNR (or ${\sigma}_{t}^{2}$) are given, the target is to find the minimum data size M and the corresponding threshold $\lambda $ satisfying the inequality in Equation (16). This results in the following nonlinear optimization problem:$$\begin{array}{c}\hfill \left({\mathrm{NP}}_{1}\right)\left\{\begin{array}{cc}\mathrm{Min}\hfill & M\hfill \\ \mathrm{s}.\mathrm{t}.\hfill & \Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{n}^{2}}\right)+1-\Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{t}^{2}}\right)\le \alpha .\hfill \end{array}\right.\end{array}$$
- (ii)
- Taking ${\sigma}_{n}^{2}$ and M as fixed, the target is to find the minimum SNR and the corresponding threshold $\lambda $ satisfying the inequality in Equation (16). This also results in a nonlinear optimization problem:$$\begin{array}{c}\hfill \left({\mathrm{NP}}_{2}\right)\left\{\begin{array}{cc}\mathrm{Min}\hfill & \mathrm{SNR}\hfill \\ \mathrm{s}.\mathrm{t}.\hfill & \Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{n}^{2}}\right)+1-\Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{t}^{2}}\right)\le \alpha .\hfill \end{array}\right.\end{array}$$

Proposition 1 shows that the threshold can be unambiguously determined once ${\sigma}_{n}^{2}$ and SNR are given. Based on this theoretical result, the numerical algorithms for solving (NP${}_{1}$) and (NP${}_{2}$) can be significantly simplified.

**Proposition 1.**

In both nonlinear optimization problems (NP${}_{1}$) and (NP${}_{2}$), if solvable, the solution for λ should be

$$\begin{array}{c}\hfill {\lambda}_{0}=\frac{(1+s){\sigma}_{n}^{2}\mathrm{ln}(1+s)}{s},\end{array}$$

where s denotes the SNR for brevity.

**Proof.**

Let us consider the case of (NP${}_{1}$) only, since (NP${}_{2}$) has a very similar shape. We consider the Lagrange function of (NP${}_{1}$) with respect to a multiplier $\mu $,

$$\begin{array}{c}\hfill L(M,\lambda ,{s}_{1},\mu )=M+\mu \left[\Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{n}^{2}}\right)+1-\Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{t}^{2}}\right)+{s}_{1}-\alpha \right],\end{array}$$

where ${s}_{1}\ge 0$ is a slack variable. Differentiating $L(M,\lambda ,{s}_{1},\mu )$ with respect to $\lambda $, we have

$$\begin{array}{cc}\hfill \frac{\partial L(M,\lambda ,{s}_{1},\mu )}{\partial \lambda}& =\frac{\mu}{\Gamma \left(\frac{M}{2}\right)}\left[{\left(\frac{M\lambda}{2{\sigma}_{t}^{2}}\right)}^{\frac{M}{2}-1}{e}^{-\frac{M\lambda}{2{\sigma}_{t}^{2}}}\frac{M}{2{\sigma}_{t}^{2}}-{\left(\frac{M\lambda}{2{\sigma}_{n}^{2}}\right)}^{\frac{M}{2}-1}{e}^{-\frac{M\lambda}{2{\sigma}_{n}^{2}}}\frac{M}{2{\sigma}_{n}^{2}}\right].\hfill \end{array}$$

Setting $\frac{\partial L(M,\lambda ,{s}_{1},\mu )}{\partial \lambda}=0$, we obtain the simplified equivalent equation

$$\begin{array}{c}\hfill \frac{1}{{\sigma}_{t}^{2}}{e}^{-\frac{\lambda}{{\sigma}_{t}^{2}}}=\frac{1}{{\sigma}_{n}^{2}}{e}^{-\frac{\lambda}{{\sigma}_{n}^{2}}},\end{array}$$

which further means

$$\begin{array}{c}\hfill {e}^{\left(\frac{1}{{\sigma}_{n}^{2}}-\frac{1}{{\sigma}_{t}^{2}}\right)\lambda}=\frac{{\sigma}_{t}^{2}}{{\sigma}_{n}^{2}}=1+s.\end{array}$$

Solving this equation, we derive Equation (17). ☐
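Proposition 1 can be spot-checked numerically, assuming SciPy and again reading $\Gamma (a,x)$ as the regularized upper incomplete gamma: the threshold ${\lambda}_{0}$ of Equation (17) should minimize the constraint function $\Gamma (M,s,\lambda )={P}_{fa}\left(\lambda \right)+(1-{P}_{d}\left(\lambda \right))$ over $\lambda $. Parameter values are illustrative:

```python
# Numerical check of Proposition 1: lambda_0 from Equation (17) minimizes
# the error sum P_fa(lambda) + (1 - P_d(lambda)) over the threshold lambda.
import numpy as np
from scipy.special import gammaincc   # regularized upper incomplete gamma

sigma_n2, s, M = 1.0, 0.5, 200
sigma_t2 = (1 + s) * sigma_n2

def err_sum(lam):
    p_fa = gammaincc(M / 2, M * lam / (2 * sigma_n2))  # Equation (8)
    p_d = gammaincc(M / 2, M * lam / (2 * sigma_t2))   # Equation (12)
    return p_fa + 1 - p_d

lam0 = (1 + s) * sigma_n2 * np.log(1 + s) / s          # Equation (17)

grid = np.linspace(sigma_n2, sigma_t2, 2001)  # lambda in (sigma_n^2, sigma_t^2)
best = grid[np.argmin([err_sum(g) for g in grid])]
print(lam0, best)   # the grid minimizer sits next to lambda_0
```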

The following proposition shows that the two nonlinear optimization problems (NP${}_{1}$) and (NP${}_{2}$) are well-posed, i.e., the solutions for (NP${}_{1}$) and (NP${}_{2}$) uniquely exist.

**Proposition 2.**

Both nonlinear optimization problems (NP${}_{1}$) and (NP${}_{2}$) are well-posed: (i) for any given $\alpha \in (0,1)$, ${\sigma}_{n}^{2}$, and SNR$>0$, (NP${}_{1}$) has one and only one solution pair $(M,\lambda )$; and (ii) for any given $\alpha \in (0,1)$, ${\sigma}_{n}^{2}$, and $M>0$, (NP${}_{2}$) has one and only one solution pair $(\mathrm{SNR},\lambda )$.

**Proof.**

Let $\mathrm{SNR}$ be denoted by s, and the LHS of the restriction inequality of (NP${}_{1}$) be expressed by a function

$$\begin{array}{c}\hfill \Gamma (M,s,\lambda )=\Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{n}^{2}}\right)+1-\Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{t}^{2}}\right).\end{array}$$

By the definition of SNR, it follows that ${\sigma}_{t}^{2}=(1+s){\sigma}_{n}^{2}$. Thus, we have

$$\begin{array}{c}\hfill \frac{M\lambda}{2{\sigma}_{n}^{2}}-\frac{M\lambda}{2{\sigma}_{t}^{2}}=\frac{M\lambda {\sigma}_{s}^{2}}{2{\sigma}_{t}^{2}{\sigma}_{n}^{2}}=\frac{Ms\lambda}{2{\sigma}_{t}^{2}}.\end{array}$$

Note that the threshold $\lambda $ should be located between the two energies ${\sigma}_{n}^{2}$ and ${\sigma}_{t}^{2}$, i.e., $\lambda \in ({\sigma}_{n}^{2},{\sigma}_{t}^{2})$. For small SNR (s) and small M, the two quantities $\frac{M\lambda}{2{\sigma}_{n}^{2}}$ and $\frac{M\lambda}{2{\sigma}_{t}^{2}}$ are very close to each other. Hence, the value of $\Gamma (M,s,\lambda )$ given by Equation (19) is close to 1, which means the restriction $\Gamma (M,s,\lambda )\le \alpha $ is likely violated.

Next, we demonstrate that the solution of (NP${}_{1}$) is unique if it exists. Clearly, it is sufficient to show that $\frac{\partial \Gamma (M,s,\lambda )}{\partial M}<0$. Noticing that

$$\begin{array}{cc}\hfill \frac{\partial \Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{n}^{2}}\right)}{\partial M}& =-\frac{1}{\Gamma \left(\frac{M}{2}\right)}\frac{d\Gamma \left(\frac{M}{2}\right)}{dM}\Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{n}^{2}}\right)-\frac{1}{\Gamma \left(\frac{M}{2}\right)}{\left(\frac{M\lambda}{2{\sigma}_{n}^{2}}\right)}^{\frac{M}{2}-1}{e}^{-\frac{M\lambda}{2{\sigma}_{n}^{2}}}\frac{\lambda}{2{\sigma}_{n}^{2}}\hfill \\ & \phantom{\rule{1.em}{0ex}}+\frac{1}{\Gamma \left(\frac{M}{2}\right)}{\int}_{\frac{M\lambda}{2{\sigma}_{n}^{2}}}^{\infty}{e}^{-t}{t}^{\frac{M}{2}-1}\frac{\mathrm{ln}t}{2}dt,\hfill \end{array}$$

and that $\frac{\partial \Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{t}^{2}}\right)}{\partial M}$ is similar to Equation (21) with ${\sigma}_{n}$ replaced by ${\sigma}_{t}$, we have

$$\begin{array}{c}\hfill \frac{\partial \Gamma (M,s,\lambda )}{\partial M}{|}_{\lambda ={\lambda}_{0}}=\frac{1}{\Gamma \left(\frac{M}{2}\right)}{\int}_{\frac{M{\lambda}_{0}}{2{\sigma}_{t}^{2}}}^{\frac{M{\lambda}_{0}}{2{\sigma}_{n}^{2}}}{e}^{-t}{t}^{\frac{M}{2}-1}\left[\frac{1}{\Gamma \left(\frac{M}{2}\right)}\frac{d\Gamma \left(\frac{M}{2}\right)}{dM}-\frac{\mathrm{ln}t}{2}\right]dt\stackrel{\Delta}{=}\frac{D(M,s)}{\Gamma \left(\frac{M}{2}\right)},\end{array}$$

where ${\lambda}_{0}=\frac{(1+s){\sigma}_{n}^{2}\mathrm{ln}(1+s)}{s}$ is given by Equation (17).

Denote

$$\begin{array}{c}\hfill a\stackrel{\Delta}{=}\frac{M{\lambda}_{0}}{2{\sigma}_{t}^{2}}=\frac{Mln(1+s)}{2s},\phantom{\rule{1.em}{0ex}}b\stackrel{\Delta}{=}\frac{M{\lambda}_{0}}{2{\sigma}_{n}^{2}}=\frac{M(1+s)ln(1+s)}{2s}.\end{array}$$

Clearly, $b=a(1+s)$. Noticing that $a\underset{s\to 0}{\to}\frac{M}{2}$ and $b\underset{s\to 0}{\to}\frac{M}{2}$, we have ${lim}_{s\to 0}D(M,s)=0$. Similarly, since $a\underset{s\to \infty}{\to}0$ and $b\underset{s\to \infty}{\to}\infty $, it follows that ${lim}_{s\to \infty}D(M,s)=0$. Thus, to find the sign of Equation (22), we analyze the derivative of $D(M,s)$ with respect to s:

$$\begin{array}{cc}\hfill \frac{\partial D(M,s)}{\partial s}& ={e}^{-a}{a}^{\frac{M}{2}-1}\left[{e}^{-as}{(1+s)}^{\frac{M}{2}-1}({C}_{M}-\frac{1}{2}lnb)\frac{\partial b}{\partial s}-({C}_{M}-\frac{1}{2}lna)\frac{\partial a}{\partial s}\right]\hfill \\ & \stackrel{\Delta}{=}{e}^{-a}{a}^{\frac{M}{2}-1}{D}_{1}(M,s),\hfill \end{array}$$

where ${C}_{M}=\frac{1}{\Gamma \left(\frac{M}{2}\right)}\frac{d\Gamma \left(\frac{M}{2}\right)}{dM}$. Noting that

$$\begin{array}{c}{e}^{-as}={e}^{-\frac{Mln(1+s)}{2}}={(1+s)}^{-\frac{M}{2}},\hfill \\ \frac{\partial b}{\partial s}=(1+s)\frac{\partial a}{\partial s}+a,\hfill \end{array}$$

we expand the essential term of Equation (23) as

$$\begin{array}{cc}\hfill {D}_{1}(M,s)& =\frac{1}{1+s}({C}_{M}-\frac{1}{2}lnb)\frac{\partial b}{\partial s}-({C}_{M}-\frac{1}{2}lna)\frac{\partial a}{\partial s}\hfill \\ & =\frac{\partial a}{\partial s}\frac{1}{2}ln\frac{a}{b}+\frac{a}{1+s}({C}_{M}-\frac{1}{2}lnb)\hfill \\ & =\frac{M}{2s(1+s)}\left[\frac{{ln}^{2}(1+s)}{2s}+({C}_{M}-\frac{1}{2}ln\frac{M}{2}-\frac{1}{2})ln(1+s)+\delta \left(s\right)\right],\hfill \end{array}$$

where $\delta \left(s\right)=\frac{1}{2}ln(1+s)ln\frac{s}{ln(1+s)}$. Recalling the inequality

$$lnx-\frac{1}{x}<\frac{{\Gamma}^{\prime}\left(x\right)}{\Gamma \left(x\right)}<lnx-\frac{1}{2x}$$

for $x>1$, we find

$$\begin{array}{c}\hfill -\frac{2}{M}<{C}_{M}-\frac{1}{2}ln\frac{M}{2}<-\frac{1}{M}.\end{array}$$

Thus, the sign of ${D}_{1}(M,s)$ changes from negative to positive as s moves from 0 to ∞, and so does the sign of $\frac{\partial D(M,s)}{\partial s}$. Together with the two limits of $D(M,s)$, we know that $D(M,s)$ decreases from $D(M,0)=0$ to a negative minimum and then increases back to $D(M,\infty )=0$, so $D(M,s)<0$ in between. Thus, $\frac{\partial \Gamma (M,s,\lambda )}{\partial M}{|}_{\lambda ={\lambda}_{0}}<0$ for $s\in (0,\infty )$, which yields the uniqueness of the solution.

Before deducing the existence of the solution of (NP${}_{1}$) for sufficiently large M, let us recall some basic facts about the Gamma distribution. For a Gamma distribution with density function $\frac{1}{\Gamma \left(k\right)}{x}^{k-1}{e}^{-x}$, the expectation and standard deviation are k and $\sqrt{k}$, respectively. Let us introduce a Gamma distributed random variable $\xi $ with $k=M/2$, and denote

$$\begin{array}{c}\frac{M\lambda}{2{\sigma}_{n}^{2}}-\frac{M}{2}\stackrel{\Delta}{=}{\beta}_{1}\frac{M}{2},\hfill \\ \frac{M}{2}-\frac{M\lambda}{2{\sigma}_{t}^{2}}\stackrel{\Delta}{=}{\beta}_{2}\frac{M}{2},\hfill \end{array}$$

where ${\beta}_{1}>0$ and ${\beta}_{2}>0$, by the fact that $\lambda \in ({\sigma}_{n}^{2},{\sigma}_{t}^{2})$. By Equation (A3) (in the Appendix) and Chebyshev’s inequality, we have

$$\begin{array}{cc}\hfill \Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{n}^{2}}\right)& =\mathrm{P}\left[\xi -E\xi >{\beta}_{1}\frac{M}{2}\right]\le \mathrm{P}\left[|\xi -E\xi |>{\beta}_{1}\frac{M}{2}\right]\hfill \\ & \le \frac{\mathrm{Var}\left(\xi \right)}{{\beta}_{1}^{2}{(M/2)}^{2}}=\frac{2}{{\beta}_{1}^{2}M}\underset{M\to \infty}{\to}0.\hfill \end{array}$$

Similarly,

$$\begin{array}{cc}\hfill 1-\Gamma \left(\frac{M}{2},\frac{M\lambda}{2{\sigma}_{t}^{2}}\right)& =\mathrm{P}\left[E\xi -\xi >{\beta}_{2}\frac{M}{2}\right]\le \mathrm{P}\left[|\xi -E\xi |>{\beta}_{2}\frac{M}{2}\right]\hfill \\ & \le \frac{\mathrm{Var}\left(\xi \right)}{{\beta}_{2}^{2}{(M/2)}^{2}}=\frac{2}{{\beta}_{2}^{2}M}\underset{M\to \infty}{\to}0.\hfill \end{array}$$

Hence, by Equation (19), $\Gamma (M,s,\lambda )\to 0$ as $M\to \infty $. This means the restriction of (NP${}_{1}$) can be satisfied if M is sufficiently large, which derives Assertion (i).

For Assertion (ii), by Equation (20), the quantities $\frac{M\lambda}{2{\sigma}_{n}^{2}}$ and $\frac{M\lambda}{2{\sigma}_{t}^{2}}$ are again very close for small SNR (s). Thus, the restriction $\Gamma (M,s,\lambda )\le \alpha $ is likely violated.

Differentiating $\Gamma (M,s,\lambda )$ by s, we have

$$\begin{array}{cc}\hfill \frac{\partial \Gamma (M,s,\lambda )}{\partial s}& =\frac{-M\lambda}{2\Gamma \left(\frac{M}{2}\right)}{\left(\frac{M\lambda}{2{\sigma}_{t}^{2}}\right)}^{\frac{M}{2}-1}{e}^{-\frac{M\lambda}{2{\sigma}_{t}^{2}}}\frac{{\sigma}_{n}^{2}}{{\sigma}_{t}^{4}}<0.\hfill \end{array}$$

Thus, $\Gamma (M,s,\lambda )$ decreases as s increases, which means the solution is unique. Noticing that

$$\begin{array}{c}\hfill \frac{M{\lambda}_{0}}{2{\sigma}_{t}^{2}}=\frac{Mln(1+s)}{2s}\underset{s\to \infty}{\to}0,\phantom{\rule{1.em}{0ex}}\frac{M{\lambda}_{0}}{2{\sigma}_{n}^{2}}=\frac{M(1+s)ln(1+s)}{2s}\underset{s\to \infty}{\to}\infty ,\end{array}$$

we conclude that $\Gamma (M,s,\lambda )\to 0$ as $s\to \infty $. This proves the existence of the solution, and thus Assertion (ii) follows. ☐

Similarly, by replacing ${P}_{fa}$ and ${P}_{d}$ in Equation (16) with the approximated distributions ${\tilde{P}}_{fa}$ and ${\tilde{P}}_{d}$ given by Equations (9) and (13), respectively, the corresponding case of the new principle based on the approximate distribution in Equation (7) can be obtained. Precisely, for a given small $\alpha >0$, a threshold $\lambda $ can be identified such that

$$\begin{array}{c}\hfill {\tilde{P}}_{fa}\left(\lambda \right)+(1-{\tilde{P}}_{d}\left(\lambda \right))\le \alpha ,\end{array}$$

where ${\tilde{P}}_{fa}\left(\lambda \right)$ and ${\tilde{P}}_{d}\left(\lambda \right)$ are given by Equations (9) and (13), respectively. To achieve this, we formulate two nonlinear optimization problems. Firstly, with given ${\sigma}_{n}^{2}$ and SNR (or ${\sigma}_{t}^{2}$), we derive the minimum data size M and the corresponding threshold $\lambda $ satisfying the inequality in Equation (27). The nonlinear optimization problem is formulated as:

$$\begin{array}{c}\hfill ({\tilde{\mathrm{NP}}}_{1})\left\{\begin{array}{cc}\mathrm{Min}\hfill & M\hfill \\ \mathrm{s}.\mathrm{t}.\hfill & Q\left(\frac{\lambda -{\sigma}_{n}^{2}}{{\sigma}_{n}^{2}/\sqrt{M/2}}\right)+1-Q\left(\frac{\lambda -{\sigma}_{t}^{2}}{{\sigma}_{t}^{2}/\sqrt{M/2}}\right)\le \alpha .\hfill \end{array}\right.\end{array}$$

Secondly, let ${\sigma}_{n}^{2}$ and M be fixed; the minimum SNR and the corresponding threshold $\lambda $ satisfying the inequality in Equation (27) are then identified. This is also a nonlinear optimization problem, described as follows:

$$\begin{array}{c}\hfill ({\tilde{\mathrm{NP}}}_{2})\left\{\begin{array}{cc}\mathrm{Min}\hfill & \mathrm{SNR}\hfill \\ \mathrm{s}.\mathrm{t}.\hfill & Q\left(\frac{\lambda -{\sigma}_{n}^{2}}{{\sigma}_{n}^{2}/\sqrt{M/2}}\right)+1-Q\left(\frac{\lambda -{\sigma}_{t}^{2}}{{\sigma}_{t}^{2}/\sqrt{M/2}}\right)\le \alpha .\hfill \end{array}\right.\end{array}$$
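The constraint shared by (${\tilde{\mathrm{NP}}}_{1}$) and (${\tilde{\mathrm{NP}}}_{2}$) is cheap to evaluate numerically via the identity $Q\left(x\right)=\frac{1}{2}\mathrm{erfc}(x/\sqrt{2})$. The following Python sketch (the function names and the choice ${\sigma}_{n}^{2}=1$ are ours, purely for illustration) evaluates the error sum ${\tilde{P}}_{fa}\left(\lambda \right)+(1-{\tilde{P}}_{d}\left(\lambda \right))$ and shows it shrinking as M grows, consistent with the well-posedness result in Proposition 4 below:

```python
import math

def q(x):
    """Gaussian tail probability Q(x) = P[N(0,1) > x]."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def error_sum(M, snr, lam, sigma_n2=1.0):
    """P~_fa(lam) + (1 - P~_d(lam)), the constraint LHS shared by (NP~1)/(NP~2)."""
    sigma_t2 = (1.0 + snr) * sigma_n2
    m = math.sqrt(M / 2.0)
    p_fa = q(m * (lam - sigma_n2) / sigma_n2)
    p_md = 1.0 - q(m * (lam - sigma_t2) / sigma_t2)
    return p_fa + p_md

# A threshold between the two hypothesis means sigma_n^2 = 1 and sigma_t^2 = 1.1:
lam = 1.05
small = error_sum(100, 0.1, lam)    # small data size: error sum above alpha = 0.05
large = error_sum(10000, 0.1, lam)  # large data size: error sum below alpha = 0.05
```

For SNR $=0.1$ and a fixed interior threshold, the error sum drops from well above $\alpha =0.05$ at $M=100$ to far below it at $M=10{,}000$.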

It is also found below that the potential threshold can be deterministically selected given ${\sigma}_{n}^{2}$, SNR, and data size M. By this theoretical discovery, the numerical algorithms to solve (${\tilde{\mathrm{NP}}}_{1}$) and (${\tilde{\mathrm{NP}}}_{2}$) can be largely simplified.

**Proposition 3.**

In both nonlinear optimization problems (${\tilde{NP}}_{1}$) and (${\tilde{NP}}_{2}$), if solvable, the solution for λ should be

$$\begin{array}{c}\hfill {\tilde{\lambda}}_{0}=\frac{{\sigma}_{t}^{2}{\sigma}_{n}^{2}({\sigma}_{t}^{2}-{\sigma}_{n}^{2})+\sqrt{\Delta}}{{\sigma}_{t}^{4}-{\sigma}_{n}^{4}},\end{array}$$

where

$$\begin{array}{c}\hfill \Delta ={\sigma}_{t}^{4}{\sigma}_{n}^{4}({\sigma}_{t}^{2}-{\sigma}_{n}^{2})[{\sigma}_{t}^{2}-{\sigma}_{n}^{2}+\delta ({\sigma}_{t}^{2}+{\sigma}_{n}^{2})]\end{array}$$

with $\delta =\frac{4}{M}ln(1+SNR)$. A simplified form is

$$\begin{array}{c}\hfill {\tilde{\lambda}}_{0}=\frac{{\sigma}_{n}^{2}(1+s)\left(1+\sqrt{1+\delta +\frac{2\delta}{s}}\right)}{2+s},\end{array}$$

where s represents SNR. To assure ${\tilde{\lambda}}_{0}\in ({\sigma}_{n}^{2},{\sigma}_{t}^{2})$, it requires that

$$\begin{array}{c}\hfill M>\frac{4ln(1+s)}{{s}^{2}}.\end{array}$$

**Proof.**

In the following, only the case of (${\tilde{\mathrm{NP}}}_{1}$) is proven, since the proof for (${\tilde{\mathrm{NP}}}_{2}$) is similar. We first construct the Lagrange function for (${\tilde{\mathrm{NP}}}_{1}$) with respect to a multiplier $\mu $ as

$$\begin{array}{c}\hfill \tilde{L}(M,\lambda ,{s}_{2},\mu )=M+\mu \left[Q\left(\frac{\lambda -{\sigma}_{n}^{2}}{{\sigma}_{n}^{2}/m}\right)+1-Q\left(\frac{\lambda -{\sigma}_{t}^{2}}{{\sigma}_{t}^{2}/m}\right)+{s}_{2}-\alpha \right],\end{array}$$

where $m=\sqrt{M/2}$ and ${s}_{2}\ge 0$ is a slack variable. Differentiating $\tilde{L}(M,\lambda ,{s}_{2},\mu )$ with respect to $\lambda $, we have

$$\begin{array}{cc}\hfill \frac{\partial \tilde{L}(M,\lambda ,{s}_{2},\mu )}{\partial \lambda}& =\frac{\mu}{\sqrt{2\pi}}\left[\frac{m}{{\sigma}_{t}^{2}}{e}^{-\frac{{m}^{2}{(\lambda -{\sigma}_{t}^{2})}^{2}}{2{\sigma}_{t}^{4}}}-\frac{m}{{\sigma}_{n}^{2}}{e}^{-\frac{{m}^{2}{(\lambda -{\sigma}_{n}^{2})}^{2}}{2{\sigma}_{n}^{4}}}\right].\hfill \end{array}$$

Letting $\frac{\partial \tilde{L}(M,\lambda ,{s}_{2},\mu )}{\partial \lambda}=0$, we get a simplified equivalent equation as

$$\begin{array}{c}\hfill \frac{{(\lambda -{\sigma}_{n}^{2})}^{2}}{{\sigma}_{n}^{4}}-\frac{{(\lambda -{\sigma}_{t}^{2})}^{2}}{{\sigma}_{t}^{4}}=\frac{2}{{m}^{2}}ln\left(\frac{{\sigma}_{t}^{2}}{{\sigma}_{n}^{2}}\right)=\frac{2}{{m}^{2}}ln(1+\mathrm{SNR})\stackrel{\Delta}{=}\delta ,\end{array}$$

which further means

$$\begin{array}{c}\hfill ({\sigma}_{t}^{4}-{\sigma}_{n}^{4}){\lambda}^{2}-2{\sigma}_{t}^{2}{\sigma}_{n}^{2}({\sigma}_{t}^{2}-{\sigma}_{n}^{2})\lambda -\delta {\sigma}_{t}^{4}{\sigma}_{n}^{4}=0.\end{array}$$

Thus,

$$\begin{array}{c}\hfill \lambda =\frac{{\sigma}_{t}^{2}{\sigma}_{n}^{2}({\sigma}_{t}^{2}-{\sigma}_{n}^{2})\pm \sqrt{\Delta}}{{\sigma}_{t}^{4}-{\sigma}_{n}^{4}},\end{array}$$

where $\Delta $ is given by Equation (29). Note that $\lambda \in ({\sigma}_{n}^{2},{\sigma}_{t}^{2})$, i.e., the threshold should be located between the two expectations of the distributions under Hypotheses ${H}_{0}$ and ${H}_{1}$, and

$$\sqrt{\Delta}\ge {\sigma}_{t}^{2}{\sigma}_{n}^{2}({\sigma}_{t}^{2}-{\sigma}_{n}^{2}),$$

thus we derive Equation (28). Equation (30) follows by substituting ${\sigma}_{t}^{2}=(1+s){\sigma}_{n}^{2}$ into Equation (28).

Clearly, ${\tilde{\lambda}}_{0}>{\sigma}_{n}^{2}$. For ${\tilde{\lambda}}_{0}<{\sigma}_{t}^{2}$, it is sufficient to require that

$$\begin{array}{c}\hfill 1+\sqrt{1+\delta +\frac{2\delta}{s}}<2+s,\end{array}$$

which is equivalent to Equation (31). ☐
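As a numerical illustration of the closed-form threshold, the simplified expression in Equation (30) can be coded directly. The sketch below (illustrative names, arbitrary parameter values) checks that ${\tilde{\lambda}}_{0}$ indeed falls strictly between ${\sigma}_{n}^{2}$ and ${\sigma}_{t}^{2}$ once $M>\frac{4ln(1+s)}{{s}^{2}}$:

```python
import math

def lambda0_tilde(M, snr, sigma_n2=1.0):
    """Candidate threshold from the simplified closed form of Equation (30)."""
    delta = 4.0 / M * math.log1p(snr)  # delta = (4/M) ln(1 + SNR)
    root = math.sqrt(1.0 + delta + 2.0 * delta / snr)
    return sigma_n2 * (1.0 + snr) * (1.0 + root) / (2.0 + snr)

M, snr, sigma_n2 = 1000, 0.2, 1.0
sigma_t2 = (1.0 + snr) * sigma_n2
# Condition (31): M must exceed 4 ln(1+s) / s^2 for the threshold to be interior.
assert M > 4.0 * math.log1p(snr) / snr ** 2
lam = lambda0_tilde(M, snr, sigma_n2)  # lies strictly inside (sigma_n^2, sigma_t^2)
```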

From the above, we have demonstrated that the two nonlinear optimization problems (${\tilde{\mathrm{NP}}}_{1}$) and (${\tilde{\mathrm{NP}}}_{2}$) are well-posed.

**Proposition 4.**

Both nonlinear optimization problems (${\tilde{NP}}_{1}$) and (${\tilde{NP}}_{2}$) are well-posed: (i) for any given $\alpha \in (0,1)$, ${\sigma}_{n}^{2}$ and SNR$>0$, (${\tilde{NP}}_{1}$) has one and only one solution pair $(M,\lambda )$; and (ii) for any given $\alpha \in (0,1)$, ${\sigma}_{n}^{2}$ and $M>0$, (${\tilde{NP}}_{2}$) has one and only one solution pair $(SNR,\lambda )$.

**Proof.**

For convenience, let $\mathrm{SNR}$ be denoted by s, and let the LHS of the constraint inequality of (${\tilde{\mathrm{NP}}}_{1}$) be expressed by the function

$$\begin{array}{c}\hfill Q(M,s,\lambda )=Q\left(\frac{m(\lambda -{\sigma}_{n}^{2})}{{\sigma}_{n}^{2}}\right)+1-Q\left(\frac{m(\lambda -{\sigma}_{t}^{2})}{{\sigma}_{t}^{2}}\right),\end{array}$$

where $m=\sqrt{M/2}$. For small SNR, ${\sigma}_{t}^{2}$ and ${\sigma}_{n}^{2}$ are very close. Thus, for small M, the requirement $Q(M,s,\lambda )\le \alpha $ cannot be met, since $\frac{m(\lambda -{\sigma}_{n}^{2})}{{\sigma}_{n}^{2}}$ and $\frac{m(\lambda -{\sigma}_{t}^{2})}{{\sigma}_{t}^{2}}$ are too close.

(i) By differentiating $Q(M,s,\lambda )$ with respect to m, we have

$$\begin{array}{cc}\hfill \frac{\partial Q(M,s,\lambda )}{\partial m}& =\frac{1}{\sqrt{2\pi}}\left[\frac{\lambda -{\sigma}_{t}^{2}}{{\sigma}_{t}^{2}}{e}^{-\frac{{m}^{2}{(\lambda -{\sigma}_{t}^{2})}^{2}}{2{\sigma}_{t}^{4}}}-\frac{\lambda -{\sigma}_{n}^{2}}{{\sigma}_{n}^{2}}{e}^{-\frac{{m}^{2}{(\lambda -{\sigma}_{n}^{2})}^{2}}{2{\sigma}_{n}^{4}}}\right]<0,\hfill \end{array}$$

by noticing that $\lambda \in ({\sigma}_{n}^{2},{\sigma}_{t}^{2})$. This concludes the uniqueness of the solution. Now, we derive the existence of the solution. Denote by ${\tilde{\lambda}}_{0}$ the possible threshold of $\lambda $ given by Equation (28). Clearly,

$$\begin{array}{c}\hfill {\tilde{\lambda}}_{0}\underset{m\to \infty}{\to}\frac{2{\sigma}_{t}^{2}{\sigma}_{n}^{2}}{{\sigma}_{t}^{2}+{\sigma}_{n}^{2}}=\frac{2(1+s){\sigma}_{n}^{2}}{2+s}\stackrel{\Delta}{=}{\lambda}_{*},\end{array}$$

thus, for sufficiently large m, we have

$$\begin{array}{c}\hfill {\tilde{\lambda}}_{0}-{\sigma}_{n}^{2}\ge {\gamma}_{1}|{\lambda}_{*}-{\sigma}_{n}^{2}|,\phantom{\rule{1.em}{0ex}}{\sigma}_{t}^{2}-{\tilde{\lambda}}_{0}\ge {\gamma}_{2}|{\lambda}_{*}-{\sigma}_{t}^{2}|,\end{array}$$

where ${\gamma}_{1}>0$ and ${\gamma}_{2}>0$. Thus, we further have

$$\begin{array}{c}Q\left(\frac{m({\tilde{\lambda}}_{0}-{\sigma}_{n}^{2})}{{\sigma}_{n}^{2}}\right)\le Q\left(\frac{m{\gamma}_{1}|{\lambda}_{*}-{\sigma}_{n}^{2}|}{{\sigma}_{n}^{2}}\right)\underset{m\to \infty}{\to}0,\hfill \\ 1-Q\left(\frac{m({\tilde{\lambda}}_{0}-{\sigma}_{t}^{2})}{{\sigma}_{t}^{2}}\right)\le Q\left(\frac{m{\gamma}_{2}|{\lambda}_{*}-{\sigma}_{t}^{2}|}{{\sigma}_{t}^{2}}\right)\underset{m\to \infty}{\to}0.\hfill \end{array}$$

These mean $Q(M,s,\lambda )\underset{m\to \infty}{\to}0$, which implies the existence of solution.

(ii) By differentiating $Q(M,s,\lambda )$ with respect to s, we have

$$\begin{array}{cc}\hfill \frac{\partial Q(M,s,\lambda )}{\partial s}& =\frac{-m\lambda {\sigma}_{n}^{2}}{{\sigma}_{t}^{4}\sqrt{2\pi}}{e}^{-\frac{{m}^{2}{(\lambda -{\sigma}_{t}^{2})}^{2}}{2{\sigma}_{t}^{4}}}<0,\hfill \end{array}$$

which declares the uniqueness of the solution for this case. Consider ${\tilde{\lambda}}_{0}$ given by Equation (28) as $s\to \infty $; clearly, ${\tilde{\lambda}}_{0}\underset{s\to \infty}{\to}\infty $ with the order $\sqrt{lns}$, and ${\sigma}_{t}^{2}\underset{s\to \infty}{\to}\infty $ with the order s. Thus, we have

$$\begin{array}{c}\hfill {\tilde{\lambda}}_{0}-{\sigma}_{n}^{2}\underset{s\to \infty}{\to}\infty ,\phantom{\rule{1.em}{0ex}}{\sigma}_{t}^{2}-{\tilde{\lambda}}_{0}\underset{s\to \infty}{\to}\infty ,\end{array}$$

which means $Q(M,s,\lambda )\underset{s\to \infty}{\to}0$, guaranteeing the existence of the solution for this case. ☐

Based on the above propositions, the proposed principles for threshold selection can be well defined provided data size M is sufficiently large and SNR is given, or SNR is sufficiently large and M is given. Hence, mathematically, some fundamental limitations are identified regarding data size M when SNR is given, as well as SNR when M is given.

## 4. Numerical Experiments

Two kinds of simulations are conducted in this section. One is to observe the evolution of the solutions of the above four nonlinear optimization problems, and the other is to investigate the accuracy of the two asymptotical equations: Equation (61) for both (NP${}_{1}$) and (${\tilde{\mathrm{NP}}}_{1}$) as SNR tends to zero, and Equation (62) for both (NP${}_{2}$) and (${\tilde{\mathrm{NP}}}_{2}$) as M tends to infinity. The two formulae are established in Theorems 1–4.

#### 4.1. Intuitive Sense of the Solutions for Relevant Optimization Problems

Specifically, we would like to check the evolutions of solutions M in (NP${}_{1}$) and (${\tilde{\mathrm{NP}}}_{1}$) as SNR tends to 0, respectively; and the evolutions of solutions SNR in (NP${}_{2}$) and (${\tilde{\mathrm{NP}}}_{2}$) as M tends to ∞, respectively.

The nonlinear optimization problems (NP${}_{1}$)(NP${}_{2}$) and (${\tilde{\mathrm{NP}}}_{1}$)(${\tilde{\mathrm{NP}}}_{2}$) can be solved directly by the Matlab function *fmincon*, which finds the minimum of an objective function subject to nonlinear constraints. However, using *fmincon* to solve them directly, say (NP${}_{1}$), is found to be quite unstable, most likely due to the nonlinearity in the constraint of (NP${}_{1}$).

Proposition 1 significantly simplifies the solution process for (NP${}_{1}$), and meanwhile stabilizes it, by substituting the potential optimal threshold given by Equation (17) into the constraint of (NP${}_{1}$). The same holds for (NP${}_{2}$). As to (${\tilde{\mathrm{NP}}}_{1}$) and (${\tilde{\mathrm{NP}}}_{2}$), Proposition 3 plays the same role.
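The stabilized procedure can be sketched for the approximated problem (${\tilde{\mathrm{NP}}}_{1}$): substitute the closed-form threshold of Equation (30) into the constraint, then search for the minimal M, here by doubling and bisection, exploiting the (eventual) monotonicity in M established in Proposition 4. This is an illustrative Python re-implementation under our own naming, not the authors' Matlab code:

```python
import math

def q(x):
    """Gaussian tail probability Q(x) = P[N(0,1) > x]."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def error_sum(M, s, sigma_n2=1.0):
    """Constraint LHS of (NP~1): P~_fa + (1 - P~_d), with the threshold
    taken from the closed form of Equation (30)."""
    sigma_t2 = (1.0 + s) * sigma_n2
    delta = 4.0 / M * math.log1p(s)
    lam = sigma_n2 * (1.0 + s) * (1.0 + math.sqrt(1.0 + delta + 2.0 * delta / s)) / (2.0 + s)
    m = math.sqrt(M / 2.0)
    return q(m * (lam - sigma_n2) / sigma_n2) + 1.0 - q(m * (lam - sigma_t2) / sigma_t2)

def min_data_size(s, alpha=0.05):
    """Smallest M whose error sum is <= alpha: double until feasible, then bisect."""
    hi = 2
    while error_sum(hi, s) > alpha:
        hi *= 2
    lo = hi // 2
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if error_sum(mid, s) <= alpha:
            hi = mid
        else:
            lo = mid
    return hi

M_min = min_data_size(0.5)  # SNR = 1/2, i.e., Example 1 with k = 2
```

For SNR $=1/2$, the search terminates near the value predicted by the asymptotical order $\frac{2{(2+s)}^{2}}{{s}^{2}}{[{Q}^{-1}(\alpha /2)]}^{2}\approx 192$.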

**Example 1.**

Let $\alpha =0.05$ and ${\sigma}_{n}=1$. We deal with (NP${}_{1}$) by letting $SNR=\frac{1}{k}$ with $k=1,2,\dots ,20$. The solutions for M and corresponding λ are shown in Figure 2a,b, respectively.

**Example 2.**

Let $\alpha =0.05$ and ${\sigma}_{n}=1$. We deal with (NP${}_{2}$) by letting $M=1000k$ with $k=1,2,\dots ,20$. The solutions for SNR and λ are shown in Figure 3a,b, respectively.

**Example 3.**

Let $\alpha =0.05$ and ${\sigma}_{n}=1$. We deal with (${\tilde{NP}}_{1}$) by letting $SNR=\frac{1}{k}$ with $k=1,2,\dots ,20$. Since the solutions for M and λ are very close to the counterparts of Example 1, we only plot the differences between the two examples (the quantities of Example 3 minus those of Example 1), which are shown to be practically negligible in Figure 4a,b, respectively.

**Example 4.**

Let $\alpha =0.05$ and ${\sigma}_{n}=1$. We deal with (${\tilde{NP}}_{2}$) by letting $M=1000k$ with $k=1,2,\dots ,20$. Since the solutions for SNR and λ are very close to the counterparts of Example 2, we only plot the differences between the two examples (i.e., the quantities of Example 4 minus those of Example 2), which are shown to be practically negligible in Figure 5a,b, respectively.

With Propositions 1 and 3, it is expected that the thresholds $\lambda $ tend to ${\sigma}_{n}^{2}=1$ in Examples 1 and 3 as SNR tends to 0. The thresholds in Examples 2 and 4 tend to ${\sigma}_{n}^{2}=1$ as $M\to \infty $ due to the fact that the minimum required SNR tends to 0 as $M\to \infty $.

#### 4.2. Accuracy of the Two Asymptotical Equations (61) and (62)

To investigate the performances of asymptotical Equation (61) for both (NP${}_{1}$) and (${\tilde{\mathrm{NP}}}_{1}$) as SNR tends to zero and Equation (62) for both (NP${}_{2}$) and (${\tilde{\mathrm{NP}}}_{2}$) as M tends to infinity, we design two more simulation examples below.

**Example 5.**

Let $\alpha =0.05$ and ${\sigma}_{n}=1$. For both (NP${}_{1}$) and (${\tilde{NP}}_{1}$), let $SNR={10}^{1-3(k-1)/19}$, $k=1,2,\dots ,20$, corresponding to the range of 10 dB to −20 dB. Let M be defined by Equation (61), and the corresponding λ is given by Equation (17) for (NP${}_{1}$) and Equation (30) for (${\tilde{NP}}_{1}$). The constraint summation probabilities of (NP${}_{1}$) and (${\tilde{NP}}_{1}$), marked by “-o” and “-x”, are plotted in Figure 6a. The biases between the solutions M from either (NP${}_{1}$) or (${\tilde{NP}}_{1}$) and the asymptotical Equation (61) are plotted in Figure 6b. It is clear from both figures that the asymptotical Equation (61) provides a quite accurate estimation of the solution for both (NP${}_{1}$) and (${\tilde{NP}}_{1}$); meanwhile, the required confidence level $\alpha =0.05$ is also satisfied.

**Example 6.**

Let $\alpha =0.05$ and ${\sigma}_{n}=1$. For both (NP${}_{2}$) and (${\tilde{NP}}_{2}$), let $M=10k$ with $k=1,2,\dots ,20$. Let SNR be defined by Equation (62), and the corresponding λ is given by Equation (17) for (NP${}_{2}$) and Equation (30) for (${\tilde{NP}}_{2}$). The constraint summation probabilities of (NP${}_{2}$) and (${\tilde{NP}}_{2}$), marked by “-o” and “-x”, are plotted in Figure 7a. The biases between the solutions SNR from either (NP${}_{2}$) or (${\tilde{NP}}_{2}$) and the asymptotical Equation (62) are plotted in Figure 7b. It is clear from both figures that the asymptotical Equation (62) provides a quite accurate estimation of the solution for both (NP${}_{2}$) and (${\tilde{NP}}_{2}$); meanwhile, the required confidence level $\alpha =0.05$ is also satisfied.

From the simulation Examples 5 and 6, it is clear that the asymptotical Equations (61) and (62) provide quite accurate estimations of the solutions for the relevant four optimization problems, with the constraint inequalities satisfied. The constraint inequality is tight for Equation (61) when SNR is sufficiently small, and for Equation (62) when M is sufficiently large. The relative difference between the two optimization frameworks is insignificant. This shows the potential application value of the two Equations (61) and (62).

## 5. Fundamental Limits of Detection

Let us denote ${M}_{\mathrm{min}}$ as the solution of (NP${}_{1}$) or (${\tilde{\mathrm{NP}}}_{1}$), and SNR${}_{\mathrm{min}}$ as the solution of (NP${}_{2}$) or (${\tilde{\mathrm{NP}}}_{2}$), respectively. Obviously, for a fixed SNR, it is impossible to find a threshold $\lambda $ satisfying the inequality in Equation (16) or Equation (27) if the data size is smaller than ${M}_{\mathrm{min}}$. Equivalently, for a fixed data size M, it is not possible to find a threshold $\lambda $ satisfying the inequality in Equation (16) or Equation (27) if the SNR is smaller than SNR${}_{\mathrm{min}}$. It can be observed that there exist some fundamental limitations in the effort of keeping the sum of the two error probabilities smaller than a designated confidence level. Theoretically, it is impossible to explicitly solve the four optimization problems introduced in Section 3. In this section, we investigate the asymptotical performances of the solutions to the four nonlinear optimization problems, i.e., we find the order of ${M}_{\mathrm{min}}$ as SNR tends to 0, and the order of SNR${}_{\mathrm{min}}$ as M tends to ∞.

We first analyze (${\tilde{\mathrm{NP}}}_{1}$)(${\tilde{\mathrm{NP}}}_{2}$), since the Q function is much more tractable than the incomplete Gamma function; e.g., the Q function possesses the useful property $Q\left(x\right)=1-Q(-x)$. Then, we establish a relation between the incomplete Gamma function given by Equation (A3) (in the Appendix) and the Q function in Lemma 1 to facilitate the investigations of (NP${}_{1}$)(NP${}_{2}$). Throughout this section, SNR is usually abbreviated as s for brevity. Two functions $\psi \left(t\right)$ and $\varphi \left(t\right)$ are said to be of equivalent order if

$$\underset{t\to {t}_{0}}{lim}\frac{\psi \left(t\right)}{\varphi \left(t\right)}=1,$$

and this is simply denoted below by $\psi \left(t\right)\sim \varphi \left(t\right)$ as $t\to {t}_{0}$.

The solution of (${\tilde{\mathrm{NP}}}_{1}$) is discussed first as a starting point.

**Theorem 1.**

The solution of nonlinear optimization problem (${\tilde{NP}}_{1}$) with $\alpha \in (0,\frac{1}{2})$, denoted by ${M}_{min}$, has bounds given as

$$\begin{array}{c}\hfill \frac{2{(2+s)}^{2}}{{s}^{2}}{x}_{1}^{2}<{M}_{min}<1+\frac{2{(2+s)}^{2}}{{s}^{2}}{x}_{2}^{2},\end{array}$$

where ${x}_{1}={Q}^{-1}\left(\frac{\alpha +{\alpha}_{1}}{2}\right)$ and ${x}_{2}={Q}^{-1}\left(\frac{\alpha -{\alpha}_{2}}{2}\right)$ with ${\alpha}_{1}=\frac{s+2}{\sqrt{\pi {M}_{min}}}$ and ${\alpha}_{2}=\frac{s+2}{\sqrt{\pi ({M}_{min}-1)}}$. Additionally,

$$\begin{array}{c}\hfill {M}_{min}\sim \frac{2{(2+s)}^{2}}{{s}^{2}}{\left[{Q}^{-1}\left(\frac{\alpha}{2}\right)\right]}^{2}\end{array}$$

as $s\to 0$.

**Proof.**

Recalling the notations ${\tilde{\lambda}}_{0}$ and ${\lambda}_{*}$ given by Equations (30) and (34), respectively, we derive

$$\begin{array}{c}\hfill {\sigma}_{n}^{2}<{\lambda}_{*}<{\tilde{\lambda}}_{0}<{\sigma}_{t}^{2}\end{array}$$

for $M>\frac{4ln(1+s)}{{s}^{2}}$. This clearly indicates ${M}_{\mathrm{min}}>\frac{4ln(1+s)}{{s}^{2}}$: otherwise, the minimum point ${\tilde{\lambda}}_{0}$ of the function $Q(M,s,\lambda )$, given by Equation (33), does not belong to $({\sigma}_{n}^{2},{\sigma}_{t}^{2})$; the minimum value of $Q(M,s,\lambda )$ over $({\sigma}_{n}^{2},{\sigma}_{t}^{2})$ is then attained at either $\lambda ={\sigma}_{n}^{2}$ or ${\sigma}_{t}^{2}$, which means $Q(M,s,\lambda )>\frac{1}{2}>\alpha $ over $({\sigma}_{n}^{2},{\sigma}_{t}^{2})$, contradicting the definition of ${M}_{\mathrm{min}}$. Thus, ${M}_{\mathrm{min}}\to \infty $ as $s\to 0$.

Clearly, we have

$$\begin{array}{cc}\hfill & {\tilde{\lambda}}_{0}-{\lambda}_{*}=\frac{(1+s)({\delta}_{1}-1)}{2+s}{\sigma}_{n}^{2},\hfill \end{array}$$

where ${\delta}_{1}=\sqrt{1+\delta +\frac{2\delta}{s}}$ and $\delta =\frac{4}{M}ln(1+s)$.

Applying the standard Mean Value Theorem to $Q(M,s,\lambda )$, given by Equation (33), between $\lambda ={\lambda}_{*}$ and $\lambda ={\tilde{\lambda}}_{0}$, we have

$$\begin{array}{c}\hfill Q(M,s,{\tilde{\lambda}}_{0})=Q(M,s,{\lambda}_{*})+\frac{\partial Q(M,s,\lambda )}{\partial \lambda}{|}_{\lambda =\xi}({\tilde{\lambda}}_{0}-{\lambda}_{*}),\end{array}$$

where $\xi \in ({\lambda}_{*},{\tilde{\lambda}}_{0})$. Note that

$$\begin{array}{cc}\hfill \left|\frac{\partial Q(M,s,\lambda )}{\partial \lambda}{|}_{\lambda =\xi}\right|& =\frac{m}{\sqrt{2\pi}}\left|\frac{1}{{\sigma}_{t}^{2}}{e}^{-\frac{{m}^{2}{(\xi -{\sigma}_{t}^{2})}^{2}}{2{\sigma}_{t}^{4}}}-\frac{1}{{\sigma}_{n}^{2}}{e}^{-\frac{{m}^{2}{(\xi -{\sigma}_{n}^{2})}^{2}}{2{\sigma}_{n}^{4}}}\right|\hfill \\ & <\frac{m}{\sqrt{2\pi}}\left(\frac{1}{{\sigma}_{t}^{2}}+\frac{1}{{\sigma}_{n}^{2}}\right)=\frac{m}{{\sigma}_{n}^{2}\sqrt{2\pi}}\frac{2+s}{1+s},\hfill \end{array}$$

and

$$\begin{array}{cc}\hfill {\delta}_{1}-1& =\sqrt{1+\delta +\frac{2\delta}{s}}-1<1+\frac{\delta}{2}+\frac{\delta}{s}-1=\frac{(s+2)\delta}{2s}\hfill \\ & =\frac{2(s+2)ln(1+s)}{Ms}<\frac{2(s+2)}{M},\hfill \end{array}$$

we have, by Equation (38),

$$\begin{array}{c}\hfill \left|Q(M,s,{\tilde{\lambda}}_{0})-Q(M,s,{\lambda}_{*})\right|<\frac{m({\delta}_{1}-1)}{\sqrt{2\pi}}<\frac{2m(s+2)}{M\sqrt{2\pi}}=\frac{s+2}{m\sqrt{2\pi}}\stackrel{\Delta}{=}{\alpha}_{1}(m,s).\end{array}$$

Clearly, ${\alpha}_{1}(m,s)\to 0$ as $m\to \infty $.

Observing that $Q(M,s,{\lambda}_{*})=2Q\left(\frac{ms}{2+s}\right)$, and recalling ${M}_{min}$, the solution of (${\tilde{\mathrm{NP}}}_{1}$), by Equation (39) we know that

$$\begin{array}{c}2Q\left(\frac{{m}_{0}s}{2+s}\right)-{\alpha}_{1}({m}_{0},s)<Q({M}_{\mathrm{min}},s,{\tilde{\lambda}}_{0})\le \alpha ,\hfill \end{array}$$

$$\begin{array}{c}2Q\left(\frac{{m}_{1}s}{2+s}\right)+{\alpha}_{1}({m}_{1},s)>Q({M}_{\mathrm{min}}-1,s,{\tilde{\lambda}}_{0})>\alpha ,\hfill \end{array}$$

where ${m}_{0}=\sqrt{{M}_{\mathrm{min}}/2}$ and ${m}_{1}=\sqrt{({M}_{\mathrm{min}}-1)/2}$.

By solving the inequality in Equation (40), we derive a lower bound of ${M}_{\mathrm{min}}$ as

$$\begin{array}{c}\hfill {M}_{\mathrm{min}}>\frac{2{(2+s)}^{2}}{{s}^{2}}{x}_{1}^{2},\end{array}$$

where ${x}_{1}={Q}^{-1}\left(\frac{\alpha +{\alpha}_{1}({m}_{0},s)}{2}\right)$. By solving the inequality in Equation (41), we derive an upper bound of ${M}_{\mathrm{min}}$ as

$$\begin{array}{c}\hfill {M}_{\mathrm{min}}<1+\frac{2{(2+s)}^{2}}{{s}^{2}}{x}_{2}^{2},\end{array}$$

where ${x}_{2}={Q}^{-1}\left(\frac{\alpha -{\alpha}_{1}({m}_{1},s)}{2}\right)$.

Recalling the fact, established at the beginning of the proof, that ${M}_{\mathrm{min}}\to \infty $ as $s\to 0$, Equation (36) follows immediately. ☐

Now, based on the facts revealed in the above proof, the consideration of (${\tilde{\mathrm{NP}}}_{2}$) is much simplified.

**Theorem 2.**

The solution of nonlinear optimization problem (${\tilde{NP}}_{2}$) with $\alpha \in (0,\frac{1}{2})$, denoted by ${SNR}_{min}$, has bounds as follows:

$$\begin{array}{c}\hfill \frac{2{Q}^{-1}\left(\frac{\alpha +{\alpha}_{1}}{2}\right)}{\sqrt{\frac{M}{2}}-{Q}^{-1}\left(\frac{\alpha +{\alpha}_{1}}{2}\right)}<{SNR}_{min}<\u03f5+\frac{2{Q}^{-1}\left(\frac{\alpha -{\alpha}_{2}}{2}\right)}{\sqrt{\frac{M}{2}}-{Q}^{-1}\left(\frac{\alpha -{\alpha}_{2}}{2}\right)},\end{array}$$

where ${\alpha}_{1}=\frac{{SNR}_{min}+2}{\sqrt{\pi M}}$ and ${\alpha}_{2}=\frac{{SNR}_{min}-\u03f5+2}{\sqrt{\pi M}}$ for any $\u03f5>0$. Additionally,

$$\begin{array}{c}\hfill {SNR}_{min}\sim \frac{2{Q}^{-1}\left(\frac{\alpha}{2}\right)}{\sqrt{\frac{M}{2}}-{Q}^{-1}\left(\frac{\alpha}{2}\right)}\end{array}$$

as $M\to \infty $.

**Proof.**

Recalling that the solution of (${\tilde{\mathrm{NP}}}_{2}$) is ${\mathrm{SNR}}_{\mathrm{min}}$, denoted by ${s}_{m}$ for brevity, by Equation (39) we have

$$\begin{array}{c}2Q\left(\frac{m{s}_{m}}{2+{s}_{m}}\right)-{\alpha}_{1}(m,{s}_{m})<Q(M,{s}_{m},{\tilde{\lambda}}_{0})\le \alpha ,\hfill \end{array}$$

$$\begin{array}{c}2Q\left(\frac{m({s}_{m}-\u03f5)}{2+({s}_{m}-\u03f5)}\right)+{\alpha}_{1}(m,{s}_{m}-\u03f5)>Q(M,{s}_{m}-\u03f5,{\tilde{\lambda}}_{0})>\alpha ,\hfill \end{array}$$

where $\u03f5>0$ and ${\alpha}_{1}(m,s)$ is given by Equation (39). After solving the two inequalities in Equations (44) and (45), we can derive Equation (42).

Noting that ${\alpha}_{1}\underset{M\to \infty}{\to}0$ and ${\alpha}_{2}\underset{M\to \infty}{\to}0$, Equation (43) follows directly. ☐

As already mentioned, we establish a relation between the Q function and the incomplete Gamma function before considering (NP${}_{1}$) and (NP${}_{2}$).

**Lemma 1.**

Extend the incomplete Gamma function given by Equation (A3) (in the Appendix) by letting $\Gamma (k,x)=1$ for $x<0$. Then, for all x,

$$\begin{array}{c}\hfill \left|\Gamma \left(\frac{M}{2},\frac{M}{2}+x\sqrt{\frac{M}{2}}\right)-Q\left(x\right)\right|\le \frac{14C}{\sqrt{2M}},\end{array}$$

where $C=0.7056$.

**Proof.**

Let us first recall a simple version of the Berry–Esseen inequality (see, e.g., page 670 of [23]): let ${\xi}_{1}$, ${\xi}_{2}$, ⋯, be iid random variables with $E{\xi}_{1}=\mu $, $E{\xi}_{1}^{2}={\sigma}^{2}$, and $E|{\xi}_{1}-E{\xi}_{1}{|}^{3}=\rho <\infty $. In addition, let ${S}_{n}={\sum}_{k=1}^{n}{\xi}_{k}$, and let the distribution function of

$${S}_{n}^{*}=\frac{{S}_{n}-n\mu}{\sqrt{n({\sigma}^{2}-{\mu}^{2})}}$$

be denoted by ${F}_{n}\left(x\right)$, and the standard normal distribution function be denoted by $\Phi \left(x\right)$. Then there exists a positive constant C such that, for all x and n,

$$\begin{array}{c}\hfill |{F}_{n}\left(x\right)-\Phi \left(x\right)|\le \frac{C\rho}{\sqrt{n}{({\sigma}^{2}-{\mu}^{2})}^{3/2}}.\end{array}$$

The best currently known bound, $C=0.7056$, was established by Shevtsova in 2007 [24].

For M iid standard normally distributed random variables ${\eta}_{k}$, $k=1,2,\dots ,M$, we know that $E{\eta}_{k}^{2}=1$, $E{\eta}_{k}^{4}=3$, and $E{\eta}_{k}^{6}=15$. On the other hand, by the definition of the chi-square distribution, it is clear that ${S}_{M}={\sum}_{k=1}^{M}{\eta}_{k}^{2}$ is chi-square distributed with degree M, i.e., ${S}_{M}\sim {\chi}_{M}^{2}$. Applying the Berry–Esseen inequality to ${\xi}_{k}={\eta}_{k}^{2}$, $k=1,\dots ,M$, and noticing that $\mu =1$, ${\sigma}^{2}=3$ and $\rho \le 28$ (by the fact $|{\eta}_{k}^{2}-1|\le {\eta}_{k}^{2}+1$) in this case, we have

$$\begin{array}{c}\hfill \left|\mathrm{Pr}\left[\frac{{S}_{M}-M}{\sqrt{2M}}>x\right]-Q\left(x\right)\right|\le \frac{14C}{\sqrt{2M}},\end{array}$$

where $C=0.7056$.

Recalling the PDF of chi-square distribution given by Equation (5), we have

$$\begin{array}{cc}\hfill \mathrm{Pr}\left[\frac{{S}_{M}-M}{\sqrt{2M}}>x\right]& =\mathrm{Pr}\left[{S}_{M}>M+x\sqrt{2M}\right]\hfill \\ & ={\int}_{M+x\sqrt{2M}}^{\infty}\frac{{t}^{\frac{M}{2}-1}{e}^{-\frac{t}{2}}dt}{{2}^{\frac{M}{2}}\Gamma \left(\frac{M}{2}\right)}\hfill \\ & ={\int}_{\frac{M}{2}+x\sqrt{\frac{M}{2}}}^{\infty}\frac{{u}^{\frac{M}{2}-1}{e}^{-u}du}{\Gamma \left(\frac{M}{2}\right)}\hfill \\ & =\Gamma \left(\frac{M}{2},\frac{M}{2}+x\sqrt{\frac{M}{2}}\right).\hfill \end{array}$$

Now, we are in a position to analyze (NP${}_{1}$).

**Theorem 3.**

The solution of nonlinear optimization problem (NP${}_{1}$) with $\alpha \in (0,1)$, denoted by ${M}_{min}$, has bounds as follows:
$$\begin{array}{c}\hfill \frac{2{(2+s)}^{2}}{{s}^{2}}{x}_{1}^{2}<{M}_{min}<1+\frac{2{(2+s)}^{2}}{{s}^{2}}{x}_{2}^{2},\end{array}$$
where ${x}_{1}$ and ${x}_{2}$ are given in the following proof. Additionally,
$$\begin{array}{c}\hfill {M}_{min}\sim \frac{2{(2+s)}^{2}}{{s}^{2}}{\left[{Q}^{-1}\left(\frac{\alpha}{2}\right)\right]}^{2}\end{array}$$
as $s\to 0$.

**Proof.**

Recalling the functions $\Gamma (M,s,\lambda )$ and $Q(M,s,\lambda )$ given by Equations (19) and (33), respectively, by Equation (46) we have

$$\begin{array}{c}\hfill |\Gamma (M,s,\lambda )-Q(M,s,\lambda )|\le \frac{14C}{\sqrt{2M}},\end{array}$$

where $C=0.7056$.

Recalling also ${\lambda}_{0}$ and ${\lambda}_{*}$ given by Equations (17) and (34), respectively, we have

$$\begin{array}{c}\hfill {\sigma}_{n}^{2}<{\lambda}_{*}<{\lambda}_{0}<{\sigma}_{t}^{2}\end{array}$$

by the fact $\frac{2s}{2+s}<ln(1+s)$ for $s>0$; by the inequality $ln(1+s)<s-\frac{{s}^{2}}{2}+\frac{{s}^{3}}{3}$ for $s>0$, we further have

$$\begin{array}{c}\hfill 0<{\lambda}_{0}-{\lambda}_{*}=\frac{(1+s){\sigma}_{n}^{2}}{s}\left[ln(1+s)-\frac{2s}{2+s}\right]<\frac{{s}^{2}(1+s)(1+2s){\sigma}_{n}^{2}}{6(2+s)}.\end{array}$$

Similar to the derivation of Equation (39), by Equation (52), we have

$$\begin{array}{c}\hfill \left|Q(M,s,{\lambda}_{0})-Q(M,s,{\lambda}_{*})\right|<\frac{m{s}^{2}(1+2s)}{6\sqrt{2\pi}}.\end{array}$$

Observing that $Q(M,s,{\lambda}_{*})=2Q\left(\frac{ms}{2+s}\right)$, and recalling ${M}_{\mathrm{min}}$, the solution of (NP${}_{1}$), by Equations (51) and (53) we know that

$$\begin{array}{c}2Q\left(\frac{{m}_{0}s}{2+s}\right)-{\alpha}_{2}({m}_{0},s)<\Gamma ({M}_{\mathrm{min}},s,{\lambda}_{0})\le \alpha ,\hfill \end{array}$$

$$\begin{array}{c}2Q\left(\frac{{m}_{1}s}{2+s}\right)+{\alpha}_{2}({m}_{1},s)>\Gamma ({M}_{\mathrm{min}}-1,s,{\lambda}_{0})>\alpha ,\hfill \end{array}$$

where ${m}_{0}=\sqrt{{M}_{\mathrm{min}}/2}$, ${m}_{1}=\sqrt{({M}_{\mathrm{min}}-1)/2}$, and

$$\begin{array}{c}\hfill {\alpha}_{2}(m,s)=\frac{14C}{\sqrt{2M}}+\frac{m{s}^{2}(1+2s)}{6\sqrt{2\pi}}.\end{array}$$

By solving the inequality in Equation (54), we derive a lower bound of ${M}_{\mathrm{min}}$ as

$$\begin{array}{c}\hfill {M}_{\mathrm{min}}>\frac{2{(2+s)}^{2}}{{s}^{2}}{x}_{1}^{2},\end{array}$$

where ${x}_{1}={Q}^{-1}\left(\frac{\alpha +{\alpha}_{2}({m}_{0},s)}{2}\right)$. By solving the inequality in Equation (55), we derive an upper bound of ${M}_{\mathrm{min}}$ as

$$\begin{array}{c}\hfill {M}_{\mathrm{min}}<1+\frac{2{(2+s)}^{2}}{{s}^{2}}{x}_{2}^{2},\end{array}$$

where ${x}_{2}={Q}^{-1}\left(\frac{\alpha -{\alpha}_{2}({m}_{1},s)}{2}\right)$.

To prove Equation (50), it suffices to show that ${\alpha}_{2}({m}_{0},s)\to 0$ and ${\alpha}_{2}({m}_{1},s)\to 0$ as $s\to 0$. For this, we first note that

$$\begin{array}{c}\hfill 0<\frac{M{\lambda}_{0}}{2{\sigma}_{n}^{2}}-\frac{M{\lambda}_{0}}{2{\sigma}_{t}^{2}}=\frac{Mln(1+s)}{2}<\frac{Ms}{2}={m}^{2}s.\end{array}$$

If ${m}^{2}s\underset{s\to 0}{\to}0$, then $\Gamma (M,s,\lambda )\underset{s\to 0}{\to}1>\alpha $, violating the constraint; hence ${m}^{2}s$ is bounded away from zero, which means that $M\underset{s\to 0}{\to}\infty $. On the other side, we have

$$\begin{array}{c}\frac{M{\lambda}_{0}}{2{\sigma}_{n}^{2}}-\frac{M}{2}=\frac{M}{2}\left[\frac{(1+s)ln(1+s)}{s}-1\right]>\frac{{m}^{2}s}{2+s},\hfill \\ \frac{M}{2}-\frac{M{\lambda}_{0}}{2{\sigma}_{t}^{2}}=\frac{M}{2}\left[1-\frac{ln(1+s)}{s}\right]>\frac{{m}^{2}s(3-2s)}{6}\hfill \end{array}$$

by the inequalities $ln(1+s)>\frac{2s}{2+s}$ and $ln(1+s)<s-\frac{{s}^{2}}{2}+\frac{{s}^{3}}{3}$ for $s>0$. Thus, we know that ${m}_{0}s=O\left(1\right)$ as $s\to 0$. Otherwise, by Chebyshev’s inequality, similar to Equations (25) and (26), we would have $\Gamma ({M}_{\mathrm{min}},s,\lambda )\underset{s\to 0}{\to}0$, which contradicts the fact that $\Gamma ({M}_{\mathrm{min}},s,\lambda )$ should be around the quantity $\alpha >0$. Hence, the conclusions that ${\alpha}_{2}({m}_{0},s)\underset{s\to 0}{\to}0$ and ${\alpha}_{2}({m}_{1},s)\underset{s\to 0}{\to}0$ follow directly. ☐

Based on the foundation laid in the above proof, the analysis of (NP${}_{2}$) is simplified as follows.

**Theorem 4.**

The solution of nonlinear optimization problem (NP${}_{2}$) with $\alpha \in (0,1)$, denoted by ${SNR}_{min}$, has bounds as follows:
$$\begin{array}{c}\hfill \frac{2{Q}^{-1}\left(\frac{\alpha +{\alpha}_{1}}{2}\right)}{\sqrt{\frac{M}{2}}-{Q}^{-1}\left(\frac{\alpha +{\alpha}_{1}}{2}\right)}<{SNR}_{min}<\u03f5+\frac{2{Q}^{-1}\left(\frac{\alpha -{\alpha}_{2}}{2}\right)}{\sqrt{\frac{M}{2}}-{Q}^{-1}\left(\frac{\alpha -{\alpha}_{2}}{2}\right)},\end{array}$$
where ${\alpha}_{1}$ and ${\alpha}_{2}$ are given in the following proof, for any $\u03f5>0$. Additionally,
$$\begin{array}{c}\hfill {SNR}_{min}\sim \frac{2{Q}^{-1}\left(\frac{\alpha}{2}\right)}{\sqrt{\frac{M}{2}}-{Q}^{-1}\left(\frac{\alpha}{2}\right)}\end{array}$$
as $M\to \infty $.

**Proof.**

Recalling that the solution of (NP${}_{2}$) is ${\mathrm{SNR}}_{\mathrm{min}}$, denoted by ${s}_{m}$ for brevity, and observing $Q(M,s,{\lambda}_{*})=2Q\left(\frac{ms}{2+s}\right)$, by Equations (51) and (53) we have

$$\begin{array}{c}2Q\left(\frac{m{s}_{m}}{2+{s}_{m}}\right)-{\alpha}_{2}(m,{s}_{m})<\Gamma (M,{s}_{m},{\lambda}_{0})\le \alpha ,\hfill \end{array}$$

$$\begin{array}{c}2Q\left(\frac{m({s}_{m}-\u03f5)}{2+({s}_{m}-\u03f5)}\right)+{\alpha}_{2}(m,{s}_{m}-\u03f5)>\Gamma (M,{s}_{m}-\u03f5,{\lambda}_{0})>\alpha ,\hfill \end{array}$$

where $\u03f5>0$ and ${\alpha}_{2}(m,s)$ is defined by Equation (56). Solving the two inequalities in Equations (59) and (60), we derive Equation (57) with ${\alpha}_{1}={\alpha}_{2}(m,{s}_{m})$ and ${\alpha}_{2}={\alpha}_{2}(m,{s}_{m}-\u03f5)$.

By the facts pointed out in last part of proof for Theorem 3, i.e., that ${m}^{2}s$ is not an infinitesimal quantity (thus, $s\to 0$ if $m\to \infty $) and that $ms$ is bounded, and recalling the definition in Equation (56) we have ${\alpha}_{1}\underset{M\to \infty}{\to}0$ and ${\alpha}_{2}\underset{M\to \infty}{\to}0$. Thus, Equation (58) follows directly. ☐

Now, we find ${M}_{\mathrm{min}}$, either the solution of (NP${}_{1}$) or (${\tilde{\mathrm{NP}}}_{1}$), has an asymptotical order as
and SNR${}_{\mathrm{min}}$, either the solution of (NP${}_{2}$) or (${\tilde{\mathrm{NP}}}_{2}$), has an asymptotical order as

$$\begin{array}{c}\hfill {M}_{\mathrm{min}}\sim \frac{2{(2+s)}^{2}}{{s}^{2}}{\left[{Q}^{-1}\left(\frac{\alpha}{2}\right)\right]}^{2}\phantom{\rule{1.em}{0ex}}(s\to 0);\end{array}$$

$$\begin{array}{c}\hfill {\mathrm{SNR}}_{\mathrm{min}}\sim \frac{2{Q}^{-1}\left(\frac{\alpha}{2}\right)}{\sqrt{\frac{M}{2}}-{Q}^{-1}\left(\frac{\alpha}{2}\right)}\phantom{\rule{1.em}{0ex}}(M\to \infty ).\end{array}$$

If replacing the notations “∼” by “=” in the above two formulas, we find that the derived two equations are equivalent by ignoring the differences between M, SNR and ${M}_{\mathrm{min}}$ SNR${}_{\mathrm{min}}$ respectively.

Based on the foundation in above proof, the analysis of (NP${}_{2}$) is simplified as below.

By noticing the facts that the potential threshold ${\tilde{\lambda}}_{0}$ tends to ${\lambda}_{*}$ as SNR$\to 0$ (or $M\to \infty $), and that $Q(M,s,{\lambda}_{*})=2Q\left(\frac{ms}{2+s}\right)$, we can propose another principle by replacing the restriction in Equation (27) in (${\tilde{\mathrm{NP}}}_{1}$)(${\tilde{\mathrm{NP}}}_{2}$) as
which is equivalent to

$$\begin{array}{c}\hfill {\tilde{P}}_{fa}\left({\lambda}_{*}\right)+(1-{\tilde{P}}_{d}\left({\lambda}_{*}\right))\le \alpha ,\end{array}$$

$$\begin{array}{c}\hfill 2Q\left(\frac{ms}{2+s}\right)\le \alpha .\end{array}$$

Under this principle, the corresponding (${\tilde{\mathrm{NP}}}_{1}$)(${\tilde{\mathrm{NP}}}_{2}$) issues can be explicitly solved. Such an idea of replacement also serves as the key for the proofs presented in this section.

## 6. Conclusions

Spectrum sensing is a key step of enabling the recently emerged CR technologies by detecting the presence/absence of signals to explore spatial and temporal availability of spectrum resources. Among the possible methods for spectrum sensing, energy detection is the most popular and widely adopted technique, most likely due to its low implementation complexity. Two detection principles, i.e., CFAR and CDR, have been reported to set a threshold for corresponding binary hypothesis. CDR protects primary users at a designated low level of interference, while CFAR ensures a high resource utilization available to the secondary users. In practice, it is desired to initiate a graceful tradeoff between these two principles.

Motivated by this, the paper explored a new principle where the sum of the false alarm probability ${P}_{fa}$ from CFAR and the false detection probability $(1-{P}_{d})$ from CDR is kept smaller than a predetermined confidence level. Mathematically, for a given small confidence level $\alpha \in (0,1)$, say $\alpha =0.05$, the proposed principle aims to identify a threshold $\lambda $ such that

$$\begin{array}{c}\hfill {P}_{fa}\left(\lambda \right)+(1-{P}_{d}\left(\lambda \right))\le \alpha .\end{array}$$

However, this equation regarding potential threshold $\lambda $ may lead to too many solutions or no solutions for a given noise variance ${\sigma}_{n}^{2}$, SNR and data size M. To tackle this situation, the paper firstly introduced two well-posed presentations for the optimization problems by finding the minimum data size M (with ${\sigma}_{n}^{2}$ and SNR given) and SNR (with ${\sigma}_{n}^{2}$ and M given), respectively.

From our analysis, we found that for a fixed small SNR the data size M should be larger than a critical value, denoted by ${M}_{\mathrm{min}}$, to guarantee the existence of threshold $\lambda $ suggested by the new principle under given confidence level $\alpha $. An asymptotical explicit form between ${M}_{\mathrm{min}}$ and SNR, i.e., Equation (61), is further given in Section 5. On the other hand, it is also discovered that, for a given data size M, SNR should be greater than a minimum SNR to ensure the existence of threshold $\lambda $ suggested by the new principle under a given confidence level $\alpha $. An asymptotical explicit form between the minimum SNR and M, i.e., Equations (62), is further proposed in Section 5. We found that, if data size M is fixed, SNR should be greater than a certain level to perform considerate detection. If SNR known to be small and fixed, the data size should be greater than a certain level to detect reliably. These discoveries are important for policymaking for the settings of a CR sensing system, such as detection time and design sample rate for a special channel to achieve efficiency and noninterference at a confidence level. The proposed optimization problems can be effectively solved fast by setting the initial solution given by Equation (61) or (62). Therefore, the proposed framework is applicable both in theoretical and operational aspects.

It is worth noting that the inequality in Equation (63) can be extended to a more general form as
where ${w}_{1}\ge 0$ and ${w}_{2}\ge 0$. Clearly, if ${w}_{1}=1$ and ${w}_{2}=0$, Equation (64) derives CFAR principle; if ${w}_{1}=0$ and ${w}_{2}=1$, Equation (64) leads to CDR principle; and, finally, if ${w}_{1}=1$ and ${w}_{2}=1$, Equation (64) turns to be the new principle introduced in this paper. It is of interest to consider relevant theories based on the inequality in Equation (64) in the general setting.

$$\begin{array}{c}\hfill {w}_{1}{P}_{fa}\left(\lambda \right)+{w}_{2}(1-{P}_{d}\left(\lambda \right))\le \alpha ,\end{array}$$

## Author Contributions

Xiao-Li Hu conceived, analyzed, and wrote the original proposed framework under the supervision of Pin-Han Ho during a visit to Waterloo. Pin-Han Ho and Limei Peng gave advice on this work and helped revise the paper.

## Funding

This research was funded by [National Research Foundation of Korea] grant number [2018R1D1A1B07051118].

## Conflicts of Interest

The authors declare no conflict of interest.

## Appendix

The Appendix provides some preliminaries on a number of commonly used distributions in the paper for easy reading.

The complement of the standard normal distribution function is often denoted $Q\left(x\right)$, i.e.,
and is sometimes referred to simply as the Q-function, especially in engineering texts. This represents the tail probability of the standard Gaussian distribution. The Gamma function and regularized upper incomplete Gamma function are defined as
for $k>0$, respectively. We simply use ${Q}^{-1}\left(x\right)$, ${\Gamma}^{-1}(k,x)$ to present the inverse functions of $Q\left(x\right)$, $\Gamma (k,x)$ respectively.

$$\begin{array}{c}\hfill Q\left(x\right)={\int}_{x}^{\infty}\frac{1}{\sqrt{2\pi}}{e}^{-\frac{{t}^{2}}{2}}dt,\end{array}$$

$$\begin{array}{c}\Gamma \left(k\right)={\int}_{0}^{\infty}{t}^{k-1}{e}^{-t}dt,\hfill \end{array}$$

$$\begin{array}{c}\Gamma (k,x)=\frac{1}{\Gamma \left(k\right)}{\int}_{x}^{\infty}{t}^{k-1}{e}^{-t}dt\hfill \end{array}$$

## References

- Liang, Y.-C.; Chen, K.-C.; Li, G.Y.; Mahonen, P. Cognitive radio networking and communications: An overview. IEEE Trans. Veh. Technol.
**2011**, 60, 3386–3407. [Google Scholar] [CrossRef] - Sahai, A.; Tandra, R.; Mishra, S.M.; Hoven, N. Fundamental design tradeoffs in cognitive radio systems. In Proceedings of the international Workshop on Technology and Policy for Accessing Spectrum, Boston, MA, USA, 5 August 2006. [Google Scholar]
- Rahimzadeh, F.; Shahtalebi, K.; Parvaresh, F. Using NLMS algorithms in cyclostationary-based spectrum sensing for cognitive radio networks. Wirel. Pers. Commun.
**2017**, 97, 2781–2797. [Google Scholar] [CrossRef] - Deepa, B.; Iyer, A.P.; Murthy, C.R. Cyclostationary-based architecture for spectrum sensing in IEEE 802.22 WRAN. In Proceedings of the IEEE Global Telecommunications Conference GLOBECOM 2010, Miami, FL, USA, 6–10 December 2010. [Google Scholar]
- Tsinos, C.G.; Berberidis, K. Adaptive eigenvalue-based spectrum sensing for multi-antenna cognitive radio systems. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 4454–4458. [Google Scholar]
- Tsinos, C.G.; Berberidis, K. Decentralized adaptive eigenvalue-based spectrum sensing for multiantenna cognitive radio systems. IEEE Trans. Wirel. Commun.
**2015**, 14, 1703–1715. [Google Scholar] [CrossRef] - Herath, S.P.; Rajatheva, N.; Tellambura, C. Energy detection of unknown signals in fading and diversity reception. IEEE Trans. Commun.
**2011**, 59, 2443–2453. [Google Scholar] [CrossRef] - Sofotasios, P.C.; Mohjazi, L.; Muhaidat, S.; Al-Qutayri, M.; Karagiannidis, G.K. Energy detection of unknown signals over cascaded fading channels. IEEE Antennas Wirel. Propag. Lett.
**2016**, 15, 135–138. [Google Scholar] [CrossRef] - Bagheri, A.; Sofotasios, P.C.; Tsiftsis, T.A.; Ho-Van, K.; Loupis, M.I.; Freear, S.; Valkama, M. Energy detection based spectrum sensing over enriched multipath fading channels. In Proceedings of the 2016 IEEE Wireless Communications and Networking Conference, Doha, Qatar, 3–6 April 2016. [Google Scholar]
- Politis, C.; Maleki, S.; Tsinos, C.G.; Liolis, K.P.; Chatzinotas, S.; Ottersten, B. Simultaneous sensing and transmission for cognitive radios with imperfect signal cancellation. IEEE Trans. Wirel. Commun.
**2017**, 16, 5599–5615. [Google Scholar] [CrossRef] - Politis, C.; Maleki, S.; Tsinos, C.; Chatzinotas, S.; Ottersten, B. On-board the satellite interference detection with imperfect signal cancellation. In Proceedings of the 2016 IEEE 17th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Edinburgh, UK, 3–6 July 2016; pp. 1–5. [Google Scholar]
- Ye, F.; Zhang, X.; Li, Y.; Tang, C. Faithworthy collaborative spectrum censing based on credibility and evidence theory for cognitive radio networks. Symmetry
**2017**, 9, 36. [Google Scholar] [CrossRef] - Liu, P.; Qi, W.; Yuan, E.; Wei, L.; Zhao, Y. Full-duplex cooperative sensing for spectrum-heterogeneous cognitive radio networks. Sensors
**2017**, 17, 1773. [Google Scholar] [CrossRef] [PubMed] - Abraham, D.A. Performance analysis of constant-false-alarm-rate detectors using characteristic functions. IEEE J. Ocean. Eng.
**2017**. [Google Scholar] [CrossRef] - Weinberg, G.V. Constant false alarm rate detection in Pareto Type II Clutter. Digit. Signal Process.
**2017**, 68, 192–198. [Google Scholar] [CrossRef] - Tandra, R.; Sahai, A. SNR walls for signal detection. IEEE J. Sel. Top. Signal Process.
**2008**, 2, 4–17. [Google Scholar] [CrossRef] - Carlos, K.C.; Birru, D. IEEE 802.22: An introduction to the first wireless standard based on cognitive radios. J. Commun.
**2006**, 1, 38–47. [Google Scholar] - Penna, F.; Pastrone, C.; Spirito, M.; Garello, R. An experimental study on spectrum sensing for cognitive WSNs under Wi-Fi interference. In Proceedings of the Wireless World Research Forum—WWRF 21, Stockholm, Sweden, 13–15 October 2008. [Google Scholar]
- Ye, Z.; Memik, G.; Grosspietsch, J. Energy Detection Using Estimated Noise Variance for Spectrum Sensing in Cognitive Radio Networks. In Proceedings of the Wireless Communications and Networking Conference, Las Vegas, NV, USA, 31 March–3 April 2008; pp. 711–716. [Google Scholar]
- Shellhammer, S.J.; Tandra, R.; Tomcik, J. Performance of power detector sensors of DVT signals in IEEE 802.22 WRANs. In Proceedings of the First International Workshop on Technology and Policy for Accessing Spectrum, Boston, MA, USA, 5 August 2006. [Google Scholar]
- Bertsekas, D.P.; Tsitsiklis, J.N. Introduction to Probability; Athena Scientific: Nashua, NH, USA, 2008. [Google Scholar]
- Peh, E.C.; Liang, Y.-C.; Guan, Y.-L.; Zeng, Y.-H. Optimization of cooperative sensing in cognitive radio networks: A sensing-throughput tradeoff view. IEEE Trans. Veh. Technol.
**2007**, 58, 5294–5299. [Google Scholar] [CrossRef] - Kuang, J. Applied Inequalities, 3rd ed.; Shandong Science and Technology Press: Jinan, China, 2004. (In Chinese) [Google Scholar]
- Shevtsova, G.I. Sharpening of the upper bound of the absolute constant in the Berry-Esseen inequality. Theory Probab. Appl.
**2007**, 51, 549–553. [Google Scholar] [CrossRef]

**Figure 2.**The plot of the solutions $(M,\lambda )$ for (NP${}_{1}$) with $\mathrm{SNR}=\frac{1}{k}$, $k=1,2,\dots ,20$, under setting $\alpha =0.05$ and ${\sigma}_{n}=1$. (

**a**) The plot of the solutions M of (NP${}_{1}$) vs. 1/SNR. (

**b**) The plot of the corresponding $\lambda $ vs. 1/SNR.

**Figure 3.**The plot of solutions $(\mathrm{SNR},\lambda )$ for (NP${}_{2}$) with $M=1000k$ with $k=1,2,\dots ,20$, $k=1,2,\dots ,20$, under setting $\alpha =0.05$ and ${\sigma}_{n}=1$. (

**a**) The plot of SNR vs. M/1000. (

**b**) The plot of corresponding $\lambda $ vs. M/1000.

**Figure 4.**The plot of differences between the solutions $(M,\lambda )$ for (${\tilde{\mathrm{NP}}}_{1}$) and (NP${}_{1}$) with $\mathrm{SNR}=\frac{1}{k}$, $k=1,2,\dots ,20$, under setting $\alpha =0.05$ and ${\sigma}_{n}=1$. (

**a**) The plot of the differences between solutions M of (${\tilde{\mathrm{NP}}}_{1}$) and (NP${}_{1}$) vs. 1/SNR. (

**b**) The plot of differences for the corresponding $\lambda $ vs. 1/SNR.

**Figure 5.**The plot of differences between the solutions $(SNR,\lambda )$ for (${\tilde{\mathrm{NP}}}_{2}$) and (NP${}_{2}$) with $M=1000k$, $k=1,2,\dots ,20$, under setting $\alpha =0.05$ and ${\sigma}_{n}=1$. (

**a**) The plot of the differences between solutions $SNR$ of (${\tilde{\mathrm{NP}}}_{2}$) and (NP${}_{2}$) vs. M/1000. (

**b**) The plot of differences for the corresponding $\lambda $ vs. M/1000.

**Figure 6.**The plot for Example 5 under the setting $SNR={10}^{1-3(k-1)/19}$, $k=1,2,\dots ,20$, corresponding to the range of 10 dB to −20 dB, $\alpha =0.05$ and ${\sigma}_{n}=1$. (

**a**) The plot of the constraint summation probabilities of (NP${}_{1}$) and (${\tilde{\mathrm{NP}}}_{1}$), marked by “-o” and “-x”, respectively. (

**b**) The plot of the bias between the solution M from either (NP${}_{1}$) or (${\tilde{\mathrm{NP}}}_{1}$) and the asymptotical Equation (61), marked by “-o” and “-x”, respectively.

**Figure 7.**The plot for Example 6 under the setting $M=10k$, $k=1,2,\dots ,20$, under setting $\alpha =0.05$ and ${\sigma}_{n}=1$. (

**a**) The plot of the constraint summation probabilities of (NP${}_{2}$) and (${\tilde{\mathrm{NP}}}_{2}$), marked by “-o” and “-x”, respectively. (

**b**) The plot of the bias between the SNR solutions from either (NP${}_{2}$) or (${\tilde{\mathrm{NP}}}_{2}$) and the asymptotical Equation (62), marked by “-o” and “-x”, respectively.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).