A Novel Blind Signal Detector Based on the Entropy of the Power Spectrum Subband Energy Ratio

In this paper, we present a novel blind signal detector based on the entropy of the power spectrum subband energy ratio (PSER), whose detection performance is significantly better than that of the classical energy detector. This detector is a full power spectrum detection method and requires neither the noise variance nor prior information about the signal to be detected. Based on an analysis of the statistical characteristics of the PSER, this paper proposes concepts such as interval probability, interval entropy, sample entropy, joint interval entropy, PSER entropy, and sample entropy variance. Based on the multinomial distribution, the formulas for calculating the PSER entropy and the variance of the sample entropy in the case of pure noise are derived. Based on the mixture multinomial distribution, the corresponding formulas for the case of signals mixed with noise are also derived. Under the constant false alarm strategy, the detector based on the PSER entropy is derived. The experimental results for primary signal detection are consistent with the theoretical calculations, which confirms the correctness of the detection method.


Introduction
With the rapid development of wireless communication, spectrum sensing has been deeply studied and successfully applied in cognitive radio (CR). Common spectrum sensing methods [1,2] can be classified as matched filter detection [3], energy-based detection [4], and cyclostationary-based detection [5], among others. Matched filtering is the optimum detection method when the transmitted signal is known [6]. Energy detection is considered to be the optimal method when there is no prior information about the transmitted signal. Cyclostationary detection is applicable to signals with cyclostationary features [2]. Energy detection can be classified into two categories: time domain energy detection [4] and frequency domain energy detection [7,8]. The power spectrum subband energy ratio (PSER) detector [9] is a local power spectrum energy detection technique. In order to enable the PSER to be used in full spectrum detection, a new PSER-based detector is proposed in this paper, whose detection performance is better than that of time domain energy detection.
In information theory, entropy is a measure of the uncertainty associated with a discrete random variable, and differential entropy is a measure of the uncertainty of a continuous random variable. As the uncertainty of noise is higher than that of a signal, the entropy of noise is higher than that of a signal; this is the basis of using entropy to detect signals. Information entropy has been successfully applied to signal detection [10][11][12]. The main entropy detection methods can be classified into two categories: the time domain and the frequency domain.
In the time domain, a signal with a low signal-noise ratio (SNR) is annihilated in the noise, and the estimated entropy is actually the entropy of the noise; therefore, the time domain entropy-based detector does not have an adequate detection performance. Nagaraj [12] introduced a time domain entropy-based matched-filtering for primary users (PU) detection and presented a likelihood ratio test for detecting a PU signal. Gu et al. [13] presented a new cross entropy-based spectrum sensing scheme that has two time-adjacent detected data sets of the PU.
In the frequency domain, the spectral bin amplitude of a signal is obviously higher than that of the noise. Therefore, the frequency domain entropy-based detector is widely applied in many fields. Zhang et al. [10,14] presented a spectrum sensing scheme based on the entropy of the spectrum magnitude, which has been used in many practical applications. For example, Jakub et al. [15] used this scheme to assist in blind signal detection, Zhao [16] improved the two-stage entropy detection method based on this scheme, and Waleed et al. [17] applied this scheme to maritime radar networks. So [18] used the conditional entropy of the spectrum magnitude to detect unauthorized user signals in cognitive radio networks. Guillermo et al. [11] proposed an improved entropy estimation method based on the Bartlett periodogram. Ye et al. [19] proposed a method based on the exponential entropy. Zhu et al. [20] compared the performances of several entropy detectors in primary user detection. In these papers, the mean of the entropy-based test statistic could be calculated via differential entropy; however, the variance of the test statistic was not given, i.e., no formula for it was available. In detection theory, the variance of a test statistic plays a very important role; therefore, entropy-based detection has had a major drawback.
In the above references, none of the entropy-based detectors made use of the PSER. The PSER is a common metric used to represent the proportion of signal energy in a single spectral line. It has been extensively applied in the fields of machine design [21], earthquake modeling [22], remote communication [23,24], and geological engineering [25]. The white noise power spectrum bin conforms to the Gaussian distribution, and its PSER conforms to the Beta distribution [9]. Compared with the entropy detector based on spectral line amplitude, the PSER entropy detector has special properties. This paper is organized as follows. In Section 2, the theoretical formulas of PSER entropy under pure noise and mixed signal case are deduced. The statistical characteristics of PSER entropy are summarized and the computational complexity of the main statistics are analyzed. Section 3 describes the derivation process of the PSER entropy detector under the constant false alarm strategy in detail. In Section 4, experiments are carried out to verify the accuracy of the PSER entropy detector, and the detection performance is compared with other detection methods. Section 5 provides additional details concerning the research process. The conclusions are drawn in Section 6.

PSER Entropy
PSER entropy takes the form of a classical Shannon entropy. Theoretically, the Shannon entropy can be approximated by the differential entropy; this approach is adopted in existing entropy-based signal detection methods [10]. However, it has the disadvantage that the calculated value can differ from the actual value in some cases. Therefore, on the basis of analyzing the statistical characteristics of the PSER, this paper proposes a new method to calculate the PSER entropy without using the differential entropy. Firstly, the range of the PSER, [0, 1], is divided into several equally spaced intervals. Then, under pure noise, the PSER entropy and its variance are derived using the multinomial distribution; for a signal mixed with noise, they are derived using the mixture multinomial distribution.

Probability Distribution for PSER
The signal mixed with additive Gaussian white noise (GWN) can be expressed as

x(n) = z(n), under H_0;  x(n) = s(n) + z(n), under H_1;  n = 0, 1, ..., L − 1,

where L is the number of sampling points; s(n) is the signal to be detected; x(n) is the observed signal; z(n) is the GWN with a mean of zero and a variance of σ²; H_0 represents the hypothesis corresponding to "no signal transmitted"; and H_1 corresponds to "signal transmitted". The single-sided spectrum of x(n) is obtained by the discrete Fourier transform, where j is the imaginary unit. The kth line in the power spectrum of x(n) can be expressed in terms of X_R(k) and X_I(k), the real and imaginary parts of the signal, and Z_R(k) and Z_I(k), the real and imaginary parts of the noise. The PSER B_{d,N}(k) is defined as the ratio of the energy of the d adjacent bins starting from the kth bin in the power spectrum to the entire spectrum energy, i.e.,

B_{d,N}(k) = ∑_{l=k}^{k+d−1} P(l) / ∑_{i=0}^{N−1} P(i),

where N = L/2, ∑_{i=0}^{N−1} P(i) represents the total energy in the power spectrum, and ∑_{l=k}^{k+d−1} P(l) represents the energy of the d adjacent bins. When there is noise in the power spectrum, B_{d,N}(k) is clearly a random variable.
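To make the definition concrete, the PSER can be computed in a few lines. The following Python sketch is our own illustration (the function name `pser` and the test tone are not from the paper); it computes B_{d,N}(k) from the single-sided power spectrum obtained via the FFT.

```python
import numpy as np

def pser(x, k, d=1):
    """PSER B_{d,N}(k): energy of the d adjacent bins starting at bin k,
    divided by the total energy of the single-sided power spectrum
    (N = L/2 bins)."""
    L = len(x)
    N = L // 2
    P = np.abs(np.fft.fft(x)[:N]) ** 2   # single-sided power spectrum
    return P[k:k + d].sum() / P.sum()

# A pure tone concentrates its energy in one bin, so its PSER at that
# bin is close to 1, while at any other bin it is close to 0.
L = 1024
tone = np.sin(2 * np.pi * 64 * np.arange(L) / L)   # falls exactly on bin 64
```

For white noise, by contrast, every B_{1,N}(k) fluctuates around 1/N.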
The probability distribution of B_{d,N}(k) is described in detail in [9]. Under H_0, B_{d,N}(k) follows a beta distribution with parameters d and N − d. Under H_1, B_{d,N}(k) follows a doubly non-central beta distribution with parameters d and N − d and non-centrality parameters λ_k and ∑_{i=0}^{N−1} λ_i − λ_k [9] (p. 7), where λ_l is the SNR of the lth spectral bin; λ_k = ∑_{l=k}^{k+d−1} λ_l is the SNR of the d spectral bins starting from the kth bin; and ∑_{i=0}^{N−1} λ_i − λ_k is the SNR of the spectral bins not contained in the selected subband. The probability density function (PDF) of B_{d,N}(k) [9] (p. 7) involves δ_{k,1} = λ_k/2 and δ_{k,2} = (∑_{i=0}^{N−1} λ_i − λ_k)/2, and the cumulative distribution function (CDF) of B_{d,N}(k) is also given in [9] (p. 7). The subband used in this paper contains only one spectral bin, i.e., d = 1; therefore, B_{1,N}(k) follows the distribution β_{1,N−1}(λ_k, ∑_{i=0}^{N−1} λ_i − λ_k). For the convenience of description, B_{1,N}(k) is denoted by X_k.
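The Beta(1, N − 1) behavior under H_0 can be checked empirically. The sketch below is our own illustration (the choices of L, N, k, seed, and trial count are arbitrary): it draws GWN, forms B_{1,N}(k), and compares the sample mean with the Beta(1, N − 1) mean 1/N.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
L, N, k = 256, 128, 10
samples = []
for _ in range(2000):
    z = rng.standard_normal(L)             # GWN under H0
    P = np.abs(np.fft.fft(z)[:N]) ** 2     # single-sided power spectrum
    samples.append(P[k] / P.sum())         # B_{1,N}(k)
samples = np.array(samples)

# Under H0, B_{1,N}(k) should follow Beta(1, N - 1), whose mean is 1/N.
theory_mean = beta(1, N - 1).mean()        # equals 1/N
```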

Basic Definitions
The probability that X_k falls into the ith interval [26] is

p_i = Pr(i/m ≤ X_k < (i + 1)/m),  i = 0, 1, ..., m − 1,

where p_0 + p_1 + ··· + p_{m−1} = 1. p_i is called the interval probability.
The PSER values of all spectral bins are regarded as a sequence X = (X_0, X_1, ..., X_{N−1}). Let the number of times the data in X fall into the ith interval be t_i, which is a random variable, with t_i = 0, 1, ..., N and ∑_{i=0}^{m−1} t_i = N. Let t = (t_0, t_1, ..., t_{m−1}). The frequency of the data in sequence X falling into the ith interval is denoted as T_i, i.e., T_i = t_i/N, and ∑_{i=0}^{m−1} T_i = 1. The random variable Y_i = −T_i log T_i is called the sample entropy of the PSER in the ith interval. The mean of the sample entropy in the ith interval is H_i = E(Y_i), and H_i is called the interval entropy. Notice that no two t_i are independent of each other; therefore, no two Y_i are independent of each other either. Let Z(t; X) = ∑_{i=0}^{m−1} Y_i = −∑_{i=0}^{m−1} T_i log T_i; Z(t; X) is called the sample entropy of the PSER and can be abbreviated as Z(t) or Z. By the definition of entropy, an entropy is a mean and has no variance. However, the sample entropy is the entropy of one sample of a PSER sequence; therefore, the sample entropy is a random variable with both a mean and a variance. The mean of the sample entropy is H(m, N) = E(Z), where m and N are the numbers of intervals and spectral bins, respectively. H(m, N) is called the total entropy or PSER entropy. The variance of the sample entropy is denoted as Var(Z). In signal detection, E(Z) and Var(Z) are very important, and their calculation is discussed in the following sections. Under H_0, the differential entropy of X_k follows from the definition of differential entropy, where a is the base of the logarithm; a = e is used below.
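The sample entropy Z(t; X) defined above is straightforward to compute: bin the N PSER values into m equal intervals, form the frequencies T_i, and sum −T_i log T_i. A minimal Python sketch of our own (the function name is illustrative):

```python
import numpy as np

def sample_entropy(pser_values, m):
    """Sample entropy Z of a PSER sequence: partition [0, 1] into m equal
    intervals, take the frequencies T_i = t_i / N, and return
    -sum_i T_i log T_i (with 0 log 0 taken as 0)."""
    t, _ = np.histogram(pser_values, bins=m, range=(0.0, 1.0))
    T = t / len(pser_values)
    T = T[T > 0]                      # drop empty intervals: 0 log 0 = 0
    return float(-(T * np.log(T)).sum())
```

If all N values fall into a single interval, Z = 0; if they spread evenly over all m intervals, Z reaches its maximum value log m.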

PSER Entropy Calculated Using Differential Entropy
According to the calculation process of the spectrum magnitude entropy presented by Zhang in [10] and the equation given in [27] (p. 247), when 1/m → 0, the PSER entropy under H_0 is approximated by the differential entropy, which gives Equation (14). If (N − 1)/m > e, then the H(m, N) computed in this way is negative.

The Defect of the PSER Entropy Calculated Using Differential Entropy
PSER entropy is the mean of the sample entropy, and the sample entropy is nonnegative; therefore, the PSER entropy is nonnegative too. However, the PSER entropy calculated by Equation (14) is not always nonnegative: in particular, when (N − 1)/m > e, it is negative. The reason is that the differential entropy itself is not always nonnegative.
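The defect is easy to reproduce numerically. The sketch below is our own; it assumes the standard discretization relation H ≈ h + log m, where h is the differential entropy of the Beta(1, N − 1) distribution, which matches the behavior the paper describes around Equation (14).

```python
import numpy as np
from scipy.stats import beta

def pser_entropy_diff(m, N):
    """Differential-entropy approximation of the PSER entropy under H0:
    h(Beta(1, N-1)) + log m, i.e., the discretization H ~ h - log(1/m)."""
    return beta(1, N - 1).entropy() + np.log(m)

# With N = 256 the approximation is positive for large m, but it becomes
# negative once m is small relative to N, although a true entropy is
# always nonnegative.
```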
The difference between the real PSER entropy obtained by simulation and that calculated by differential entropy is shown in Figure 1. At least 10^4 Monte Carlo simulation experiments were carried out under N = 256 and different values of m.

Figure 1. Comparison between the power spectrum subband energy ratio (PSER) entropy calculated using differential entropy and the real PSER entropy.

In Figure 1, the solid line is the PSER entropy calculated by the differential entropy, while the dotted line is the real PSER entropy. When m is very large (1/m < 0.005), the results are close. However, when m is small (1/m > 0.005), the difference between the two methods is large, and the total entropy calculated by the differential entropy can even be negative, which is inconsistent with the actual result. Therefore, a more reasonable method to calculate the PSER entropy is proposed in this paper. Under H_0, all X_k obey the same beta distribution. According to (10), the interval probability of the ith interval is obtained from the beta CDF; p_i is essentially the area over the ith interval under the probability density curve. Figure 2 shows p_0 and p_1 when m is 200 and N is 128.
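Under H_0 the interval probabilities p_i are simply increments of the Beta(1, N − 1) CDF over the m equal-width intervals, as in the following sketch of our own (the values of m and N follow Figure 2):

```python
import numpy as np
from scipy.stats import beta

def interval_probs(m, N):
    """Interval probabilities p_i under H0: the Beta(1, N-1) probability
    mass of each of the m equal-width subintervals of [0, 1]."""
    edges = np.linspace(0.0, 1.0, m + 1)
    return np.diff(beta(1, N - 1).cdf(edges))

p = interval_probs(200, 128)   # p[0] and p[1] correspond to Figure 2
```

Because the Beta(1, N − 1) density decreases monotonically, the first few intervals carry most of the probability mass.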
That N data fall into m intervals is a typical multinomial distribution problem. By the probability formula of the multinomial distribution, the probability of t = (t_0, t_1, ..., t_{m−1}) is

Pr(t) = N!/(t_0! t_1! ··· t_{m−1}!) · p_0^{t_0} p_1^{t_1} ··· p_{m−1}^{t_{m−1}},

where N!/(t_0! t_1! ··· t_{m−1}!) is the multinomial coefficient. The following lemmas are used in the subsequent analysis.
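For a small worked example of the multinomial probability (the numbers are purely illustrative, not from the paper), the probability that N = 4 values land in m = 3 intervals with counts t = (2, 1, 1) can be evaluated both directly and via SciPy:

```python
from math import factorial

import numpy as np
from scipy.stats import multinomial

p = np.array([0.5, 0.3, 0.2])        # illustrative interval probabilities
t = [2, 1, 1]                        # counts, summing to N = 4
prob = multinomial(4, p).pmf(t)

# The same value via the multinomial coefficient N!/(t_0! t_1! t_2!) = 12:
coef = factorial(4) // (factorial(2) * factorial(1) * factorial(1))
prob_manual = coef * 0.5**2 * 0.3 * 0.2
```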
Lemma 1 ([28], p. 183) gives a summation identity for a non-negative integer k that is used below.

Lemma 2. If j is a non-negative integer, then

Pr(t_i = j, t ∈ T_{m,N}) = (N choose j) p_i^j (1 − p_i)^{N−j}.  (18)

Lemma 3. If k and l are non-negative integers and k + l ≤ N, then [28] (p. 183)

Pr(t_i = k, t_j = l, t ∈ T_{m,N}) = N!/(k! l! (N − k − l)!) · p_i^k p_j^l (1 − p_i − p_j)^{N−k−l}.

Lemma 4. If k and l are non-negative integers and k + l ≤ N, then the identity of Equation (20) holds.

Proof. From Lemma 3, the right side of Equation (20) follows.

Statistical Characteristics of T i
The mean of T_i [28] (p. 183) is E(T_i) = p_i. The mean-square value of T_i [28] (p. 183) is E(T_i²) = p_i(1 − p_i)/N + p_i². The variance of T_i [28] (p. 183) is Var(T_i) = p_i(1 − p_i)/N.
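These moments follow from the fact that, by Lemma 2, t_i ~ Binomial(N, p_i). A quick simulation check (our own sketch, with an arbitrary N, p_i, and seed):

```python
import numpy as np

rng = np.random.default_rng(1)
N, p_i = 512, 0.3
# T_i = t_i / N with t_i ~ Binomial(N, p_i): mean p_i and
# variance p_i (1 - p_i) / N.
T = rng.binomial(N, p_i, size=200_000) / N
```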

Statistical Characteristics of Y i
Entropy 2021, 23, 448

The mean of Y_i is the mean of all the entropy values of Y_i in the ith interval, that is,

H_i = E(Y_i) = ∑_{j=1}^{N} −(j/N) log(j/N) · Pr(t_i = j, t ∈ T_{m,N}),

and H_i is the interval entropy. Note that the interval entropy is not defined here as the entropy of the probabilities Pr(t_i = j) themselves; that alternative definition is not used in this paper.
The joint entropy of two intervals is defined as E(Y_i Y_j). When i = j, the joint entropy of two intervals reduces to E(Y_i²), i.e., the mean-square value of Y_i.

Statistical Characteristics of Z(t)
The total entropy with m intervals and N spectral bins is H(m, N) = E(Z).

Theorem 1. The PSER entropy is equal to the sum of all the interval entropies, i.e., H(m, N) = ∑_{i=0}^{m−1} H_i.

Proof. According to Lemma 2, the expectation of Z = ∑_{i=0}^{m−1} Y_i is the sum of the expectations E(Y_i) = H_i.

Theorem 2. The mean of the mean-square value of the sample entropy is equal to the sum of all the joint entropies of two intervals, i.e., E(Z²) = ∑_{i=0}^{m−1} ∑_{j=0}^{m−1} E(Y_i Y_j).

Proof. This follows from Lemmas 2 and 4.

Theorem 3. The variance of the sample entropy is Var(Z) = E(Z²) − H(m, N)².

For the convenience of description, H(m, N) and Var(Z) under H_0 are replaced by µ_0 and σ²_0, respectively.
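Theorem 1 gives a direct way to evaluate H(m, N) numerically: compute each interval entropy H_i from the binomial marginal of Lemma 2 and sum over the intervals. The Python sketch below is our own (it uses the Beta(1, N − 1) interval probabilities that hold under H_0):

```python
import numpy as np
from scipy.stats import beta, binom

def pser_entropy(m, N):
    """PSER entropy H(m, N) under H0 via Theorem 1: the sum of interval
    entropies H_i = E(-T_i log T_i), with t_i ~ Binomial(N, p_i)."""
    edges = np.linspace(0.0, 1.0, m + 1)
    p = np.diff(beta(1, N - 1).cdf(edges))     # interval probabilities p_i
    j = np.arange(1, N + 1)                    # the j = 0 term contributes 0
    y = -(j / N) * np.log(j / N)               # possible values of -T_i log T_i
    return float(sum((y * binom(N, p_i).pmf(j)).sum() for p_i in p))

H = pser_entropy(50, 64)
```

Unlike the differential-entropy approximation, this value is always nonnegative and never exceeds log m.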

Computational Complexity
The calculation time of each statistic is mainly consumed by factorial calculation and by the traversal of all cases.
Factorial calculation enters both of the following traversals. There are two methods to traverse all cases: the traversal of one selected interval and the traversal of two selected intervals. The corresponding expressions for these two methods are Pr(t_i = j, t ∈ T_{m,N}) and Pr(t_i = k, t_j = l, t ∈ T_{m,N}), respectively. The traversal of one selected interval requires that j take all values from 0 to N. Considering the time spent computing the factorials, its time complexity is O(N²). Therefore, the time complexity of calculating the interval entropy is O(N²).
The traversal of two selected intervals first requires selecting k spectral bins from all N spectral bins, and then selecting l spectral bins from the remaining N − k spectral bins; evaluating Pr(t_i = k, t_j = l, t ∈ T_{m,N}) requires listing all the combinations of k and l. It takes the most time to calculate the variance of the sample entropy Var(Z). As all m(m − 1)/2 joint interval entropies have to be computed, the time complexity of calculating Var(Z) is O(m²N³). In the following experiments, in order to ensure good detection performance, the values of m and N should not be too small (e.g., m ≥ 500 and N ≥ 256), and therefore the calculation time is very long.

PSER Entropy under H_1

Definitions and Lemmas
Under H_1, X_k obeys β_{1,N−1}(λ_k, ∑_{i=0}^{N−1} λ_i − λ_k), and different X_k have different non-centrality parameters; therefore, the calculation of the total entropy and the sample entropy variance under H_1 is much more complicated than under H_0. According to Equation (10), the interval probability of the ith interval is

p_{k,i} = Pr(i/m ≤ X_k < (i + 1)/m),

where ∑_{i=0}^{m−1} p_{k,i} = 1. The subscript k stands for the label of X_k. Figure 3 shows p_0 and p_1 when m = 200, N = 128, δ_{k,1} = 1, and δ_{k,2} = 2.

Under H_1, different X_k obey different probability distributions; therefore, this is a multinomial distribution problem under mixture distributions. Let t = (t_{0,0}, ..., t_{0,m−1}, ..., t_{N−1,0}, ..., t_{N−1,m−1}), where t_{k,i} indicates whether X_k falls into the ith interval, and t_i = ∑_{k=0}^{N−1} t_{k,i} represents the number of all X_k falling into the ith interval. In a sample, since X_k can only fall into one interval, t_{k,i} can only be 0 or 1, and ∑_{i=0}^{m−1} t_{k,i} = 1. By the probability formula of the multinomial distribution, the probability of t = (t_{0,0}, ..., t_{N−1,m−1}) is

Pr(t) = ∏_{k=0}^{N−1} ∏_{i=0}^{m−1} p_{k,i}^{t_{k,i}}.

The following lemmas are used in the subsequent analysis.

Lemma 5. If j is a non-negative integer, then

Pr(t_i = j, t ∈ T_{m,N}) = ∑_{|A|=j} ∏_{k∈A} p_{k,i} ∏_{k∉A} (1 − p_{k,i}),

where the sum runs over all subsets A of {0, 1, ..., N − 1} with |A| = j.

Lemma 6. If g and l are non-negative integers and g + l ≤ N, then

Pr(t_i = g, t_j = l, t ∈ T_{m,N}) = ∑_{|A|=g, |B|=l, A∩B=∅} ∏_{k∈A} p_{k,i} ∏_{k∈B} p_{k,j} ∏_{k∉A∪B} (1 − p_{k,i} − p_{k,j}).

Proof. Both follow from the independence of the t_{k,i} across k.

Statistical Characteristics of T i
The mean of T_i is E(T_i) = (1/N) ∑_{k=0}^{N−1} p_{k,i}. The variance of T_i is Var(T_i) = (1/N²) ∑_{k=0}^{N−1} p_{k,i}(1 − p_{k,i}), and the mean-square value of T_i is E(T_i²) = Var(T_i) + (E(T_i))².

Statistical Characteristics of Y i
The mean of Y_i, i.e., the interval entropy H_i, is obtained as under H_0 but with the marginal probabilities of Lemma 5. The joint entropy of two intervals is E(Y_i Y_j); when i = j, it reduces to E(Y_i²), the mean-square value of Y_i.

Statistical Characteristics of Z(t)
Under H_1, the PSER entropy is again the sum of the interval entropies, H(m, N) = ∑_{i=0}^{m−1} H_i, which is proved in the same way as Theorem 1. The mean of the mean-square value of Z(t) follows from Lemmas 5 and 6, and the variance of Z(t) is Var(Z) = E(Z²) − H(m, N)². For the convenience of description, H(m, N) and Var(Z) under H_1 are denoted as µ_1 and σ²_1, respectively.

Computational Complexity
Under H_1, many statistics take much longer to compute than under H_0. The calculation time is mainly consumed in two aspects: the calculation of p_{k,i}, and the factorial calculation together with the traversal of all cases.

1. Calculation of p_{k,i}

As seen from Equation (32), p_{k,i} is expressed by an infinite double series under H_1, and its value can only be obtained by numerical calculation. Since the number of calculation terms must be set large, this takes a significant amount of calculation time.

2. Factorial calculation

Similar to the analysis under H_0, the time complexity of the factorial calculation is O(N).

3. The traversal of all cases

There are two methods to traverse all cases: the traversal of one selected interval and the traversal of two selected intervals. The corresponding expressions for these two methods are Pr(t_i = g, t ∈ T_{m,N}) and Pr(t_i = g, t_j = l, t ∈ T_{m,N}), respectively.

Signal Detector Based on the PSER Entropy
In this section, a signal detection method based on PSER entropy is deduced under the constant false alarm rate (CFAR) strategy, according to the PSER entropy and sample entropy variance derived in Section 2. This method is also called the full power spectrum subband energy ratio entropy detector (FPSED), because it operates on the full power spectrum.

Principle
Signal detection based on the PSER entropy takes the sample entropy as the detection statistic to judge whether there is a signal in the whole spectrum. The PSER entropy of GWN is obviously different from that of a signal mixed with noise. In general, the PSER entropy of the mixed signal is less than that of GWN, but sometimes it can also be greater. This can be seen in Figure 4.
In Figure 4a, the PSER entropy of GWN is higher than that of the noisy Ricker signal. However, the PSER entropy of GWN is lower than that of the noisy Ricker signal in Figure 4b. Therefore, when setting the detection threshold of the PSER entropy detector, the relationship between the PSER entropy of the signal and that of noise should be considered.


The PSER Entropy of a Signal Less Than That of GWN
When the PSER entropy of the signal is less than that of GWN, let the threshold be η_1, which tests the decision statistic. If the test statistic is less than η_1, the signal is deemed to be present, and absent otherwise, i.e., decide H_1 if Z < η_1 and H_0 otherwise. The distribution of Z is regarded as Gaussian in this paper, so Z ~ N(µ_0, σ²_0) under H_0 and Z ~ N(µ_1, σ²_1) under H_1. Under the CFAR strategy, the false alarm probability P_f can be expressed as follows:

P_f = Pr(Z < η_1 | H_0) = Q((µ_0 − η_1)/σ_0).  (47)

η_1 can be derived from Equation (47):

η_1 = µ_0 − Q^{−1}(P_f) σ_0.  (48)

The detection probability P_d can be expressed as follows:

P_d = Pr(Z < η_1 | H_1) = Q((µ_1 − η_1)/σ_1).  (49)

Substituting Equation (48) into Equation (49), P_d can then be evaluated as follows:

P_d = Q((µ_1 − µ_0 + Q^{−1}(P_f) σ_0)/σ_1).  (50)
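The threshold and detection probability in this case reduce to a few lines of code. The sketch below is our own (the values of µ_0, µ_1, σ_0, σ_1, and P_f are illustrative numbers, not values from the paper); it uses `norm.isf` as Q^{−1} and `norm.cdf` for the Gaussian tail.

```python
from scipy.stats import norm

def cfar_threshold_low(mu0, sigma0, pf):
    """Threshold eta_1 for the case where the signal's PSER entropy is
    below that of GWN: Pf = Pr(Z < eta_1 | H0), Z ~ N(mu0, sigma0^2)."""
    return mu0 - norm.isf(pf) * sigma0        # norm.isf(pf) = Q^{-1}(Pf)

def detection_prob_low(mu1, sigma1, eta1):
    """Pd = Pr(Z < eta_1 | H1), Z ~ N(mu1, sigma1^2)."""
    return norm.cdf((eta1 - mu1) / sigma1)

# Illustrative numbers: noise entropy mu0 = 3.0, signal entropy mu1 = 2.8,
# both standard deviations 0.05, target Pf = 0.05.
eta1 = cfar_threshold_low(3.0, 0.05, 0.05)
pd = detection_prob_low(2.8, 0.05, eta1)
```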

The PSER Entropy of a Signal Larger Than That of GWN
When the PSER entropy of a signal is larger than that of GWN, let the threshold be η_2. If the test statistic is larger than η_2, the signal is deemed to be present, and absent otherwise, i.e., decide H_1 if Z > η_2 and H_0 otherwise.  (51)

The false alarm probability P_f is

P_f = Pr(Z > η_2 | H_0) = Q((η_2 − µ_0)/σ_0),  (52)

and

η_2 = µ_0 + Q^{−1}(P_f) σ_0.  (53)

The detection probability P_d is

P_d = Pr(Z > η_2 | H_1) = Q((η_2 − µ_1)/σ_1).  (54)

Substituting Equation (53) into Equation (54), P_d can then be evaluated as follows:

P_d = Q((µ_0 − µ_1 + Q^{−1}(P_f) σ_0)/σ_1).  (55)

Other Detection Methods
In the following experiments, the PSER entropy detector is compared with the commonly used full spectrum energy detection (FSED) [29] and matched filter detection (MFD) methods under the same conditions. In this section, we introduce these two detectors.

Full Spectrum Energy Detection
The performance of FSED is exactly the same as that of classical energy detection (ED). The total spectral energy is measured by the sum of all spectral lines in the power spectrum, that is,

T_FD = ∑_{i=0}^{N−1} P(i).

Let γ be the SNR. When the detection length N is large enough, T_FD obeys a Gaussian distribution:

T_FD ~ N(Nσ², Nσ⁴) under H_0,  T_FD ~ N(N(1 + γ)σ², N(1 + γ)²σ⁴) under H_1.

Let the threshold be η_FD. The false alarm probability and detection probability can be expressed as follows:

P_f = Q((η_FD − Nσ²)/(√N σ²)),  P_d = Q((η_FD − N(1 + γ)σ²)/(√N (1 + γ)σ²)),

where η_FD = (Q^{−1}(P_f)√N + N)σ².
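As a check on these expressions, the sketch below (our own; it models the power-spectrum bin energies of GWN as i.i.d. exponential variables, an assumption consistent with the chi-square picture above) estimates the false alarm rate of the threshold η_FD by simulation:

```python
import numpy as np
from scipy.stats import norm

def fsed_threshold(N, sigma2, pf):
    """FSED threshold eta_FD = (Q^{-1}(Pf) sqrt(N) + N) sigma^2, from
    T_FD ~ N(N sigma^2, N sigma^4) under H0."""
    return (norm.isf(pf) * np.sqrt(N) + N) * sigma2

rng = np.random.default_rng(2)
N, sigma2, pf = 512, 1.0, 0.05
eta = fsed_threshold(N, sigma2, pf)

# Bin energies modeled as Exp(sigma^2); their sum is the test statistic.
T = rng.exponential(sigma2, size=(20_000, N)).sum(axis=1)
pf_hat = (T > eta).mean()   # should sit near the target Pf
```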

Matched Filter Detection
The main advantage of matched filtering is the short time needed to achieve a given false alarm probability or detection probability. However, it requires perfect knowledge of the signal. In the time domain, the detection statistic of matched filtering is

T_MFD = ∑_{n=0}^{L−1} x(n) s(n),

where s(n) is the transmitted signal and x(n) is the signal to be detected. Let E = ∑_{n=0}^{L−1} s²(n), i.e., the total energy of the signal, and let η_MFD be the threshold. The false alarm probability and detection probability can be expressed as follows:

P_f = Q(η_MFD/(σ√E))

and

P_d = Q((η_MFD − E)/(σ√E)).
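A compact simulation of this detector (our own sketch; the sine waveform, lengths, and seed are arbitrary choices, not the paper's) computes the statistic, the CFAR threshold, and the theoretical P_d:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
L, sigma = 1000, 1.0
s = np.sin(2 * np.pi * 50 * np.arange(L) / L)    # known transmitted signal
E = (s ** 2).sum()                               # signal energy (= 500 here)
eta_mfd = norm.isf(0.05) * sigma * np.sqrt(E)    # threshold for Pf = 0.05
pd_theory = norm.sf((eta_mfd - E) / (sigma * np.sqrt(E)))

# One H1 trial: the received x(n) = s(n) + z(n), correlated with s(n).
T = (s + sigma * rng.standard_normal(L)) @ s
```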

Experiments
For this section, we verified and compared the detection performances of the FPSED, FSED, and MFD discussed in Section 3 through Monte Carlo simulations. The primary signal was a binary phase shift keying (BPSK) modulated signal with a symbol rate of 1 kbit/s, a carrier frequency of 1000 Hz, and a sampling frequency of 10^5 Hz.
We performed all Monte Carlo simulations with at least 10^4 independent trials and set P_f to 0.05. We used the mean-square error (MSE) to measure the deviation between the theoretical values and the actual statistical results. All programs in the experiment were run in MATLAB on a laptop with a Core i5 CPU and 16 GB RAM.

Experiments under H 0
This section verifies whether the calculated value of each statistic is correct by comparing the statistical results with the theoretical results. The effects of noise intensity, the number of spectral bins, and the number of intervals on the interval probability, interval entropy, PSER entropy, and variance of the sample entropy are analyzed.

Influence of Noise
According to the probability density function of the PSER, the PSER does not depend on the noise intensity. Therefore, noise intensity has no effect on any statistic under H_0. In the following experiment, the variance of the noise takes 10 values ranging from 0.1 to 1 at intervals of 0.1, N = 512, and m = 500.
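The scale invariance is immediate from the definition: scaling the noise multiplies every power-spectrum bin and the total energy by the same factor, leaving each PSER value unchanged. A short check of our own:

```python
import numpy as np

def pser_all(x):
    """All PSER values B_{1,N}(k) of a signal at once."""
    P = np.abs(np.fft.fft(x)[:len(x) // 2]) ** 2
    return P / P.sum()

rng = np.random.default_rng(4)
z = rng.standard_normal(1024)
b1 = pser_all(z)          # noise variance 1
b2 = pser_all(3.0 * z)    # noise variance 9: identical PSER values
```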
In Figures 5 and 6, the theoretical and actual values of the interval probability and interval entropy under different noise intensities are compared. The results show that the theoretical values are in good agreement with the statistical values, and the noise intensity has no effect on the interval probability. In the first few intervals (i < 7), the interval probabilities are large, so the interval entropies are also large, contributing more to the total entropy.

In Figure 7, the theoretical values of the PSER entropy are compared with the actual values. The theoretical values are basically consistent with the actual values, although the actual values are slightly smaller. The actual values do not change with noise intensity, indicating that noise intensity has no effect on the PSER entropy.
Since the theoretical variance of the sample entropy cannot be calculated, only the variation of the actual variance of the sample entropy with noise intensity is shown in Figure 8. It can be seen that the variance of the sample entropy does not change with the noise intensity.

The above experiments show that the actual statistical results are consistent with the theoretical calculations, indicating that the calculation methods for the interval probability, interval entropy, PSER entropy, and sample entropy variance presented in this paper are correct.

Influence of N

In Figure 9, the effect of N on the interval probability is shown. When m is fixed, the larger N is, the larger p_0 is, and the interval probabilities in the other intervals become smaller. The reason is that the larger N is, the smaller the energy ratio of each spectral line to the entire power spectrum.

In Figure 10, the effect of N on the interval entropy is shown. The larger N is, the smaller H_i becomes.

In Figure 11, the effect of N on the PSER entropy is shown. When m is fixed, the larger N is, the smaller H(m, N) becomes. When N is the same, the larger m is, the larger the PSER entropy is.

In Figure 12, the effect of N on the variance of the sample entropy is shown. When m is fixed, the larger N is, the smaller Var(Z) becomes.

Influence of m
In Figure 13, the effect of m on the PSER entropy is shown. When N is fixed, the larger m is, the larger H(m, N) becomes. In Figure 14, the effect of m on the variance of the sample entropy is shown. When N is fixed, as m increases, the variance of the sample entropy first increases, then decreases, and then increases slowly.
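The effect of m can be illustrated at the level of a single realization. The `sample_entropy` helper below is hypothetical, reflecting our reading of "sample entropy" as the plug-in entropy of the empirical interval frequencies of one power spectrum:

```python
import numpy as np

def sample_entropy(N, m, rng):
    """Sample entropy Z of one realization: bin the N line-energy ratios
    into m equal intervals and take -sum f_i ln f_i over the empirical
    frequencies f_i.  Hypothetical form based on our reading of the text."""
    noise = rng.standard_normal(2 * N)
    spectrum = np.abs(np.fft.fft(noise)[:N]) ** 2
    idx = np.minimum((spectrum / spectrum.sum() * m).astype(int), m - 1)
    f = np.bincount(idx, minlength=m) / N
    nz = f[f > 0]
    return -np.sum(nz * np.log(nz))

rng = np.random.default_rng(2)
z_coarse = np.mean([sample_entropy(512, 100, rng) for _ in range(100)])
z_fine = np.mean([sample_entropy(512, 1000, rng) for _ in range(100)])
# Finer interval division spreads the same ratios over more bins, so the
# mean sample entropy grows with m when N is fixed (cf. Figure 13).
```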

The Parameters for the Next Experiment
After theoretical calculations and experimental statistical analysis, the PSER entropy and sample entropy variance used in the following experiments were obtained, as shown in Table 1.

Experiments under H 1
When N is fixed, the change of the PSER entropy of the BPSK signal with noise under H 1 is shown in Figure 15. It can be seen that the PSER entropy of the BPSK signal decreases gradually with the increase of SNR. When the SNR is less than −15 dB, the PSER entropy of noise and that of the BPSK signal are almost the same; therefore, it is impossible to distinguish between noise and the BPSK signal. Since the PSER entropy of the BPSK signal is always less than that of noise, the threshold η 1 should be used in BPSK signal detection. As can be seen from Figure 16, with the increase of SNR, the sample entropy variance of the BPSK signal first increases and then gradually decreases.
When m is fixed, the change of the PSER entropy and the sample entropy variance of the BPSK signal with noise under H 1 is shown in Figures 17 and 18. It can be seen that the PSER entropy of the BPSK signal decreases gradually with the increase of SNR. As can be seen from Figure 18, with the increase of SNR, the sample entropy variance of the BPSK signal first increases and then gradually decreases.
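The drop in entropy under H 1 can be sketched with a narrowband stand-in for the signal. The pure tone below is an assumption of this illustration, not the experiment's actual BPSK waveform: a strong spectral line soaks up power, the remaining line ratios shrink toward interval 0, and the sample entropy falls below the pure-noise value.

```python
import numpy as np

N, m = 512, 500
rng = np.random.default_rng(3)

def sample_entropy(x):
    """Plug-in entropy of the binned line-energy ratios of one realization
    (hypothetical form based on our reading of the text)."""
    spectrum = np.abs(np.fft.fft(x)[:N]) ** 2
    idx = np.minimum((spectrum / spectrum.sum() * m).astype(int), m - 1)
    f = np.bincount(idx, minlength=m) / N
    nz = f[f > 0]
    return -np.sum(nz * np.log(nz))

t = np.arange(2 * N)
tone = 3.0 * np.cos(2 * np.pi * 125 * t / (2 * N))  # narrowband stand-in signal
z_noise = np.mean([sample_entropy(rng.standard_normal(2 * N))
                   for _ in range(50)])
z_mixed = np.mean([sample_entropy(tone + rng.standard_normal(2 * N))
                   for _ in range(50)])
# The tone line dominates the spectrum, so z_mixed < z_noise, consistent
# with the entropy of the mixed signal sitting below that of pure noise.
```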

Comparison of Detection Performance
When N is 512 and m is 200, 500, and 1000, respectively, the detection results of the BPSK signal are shown in Figures 19-21. In Figure 19, the actual false alarm probabilities fluctuate slightly and do not change with the SNR, which is consistent with the characteristics of constant false alarm detection.
It can be seen from Figures 20 and 21 that the detection probability of the PSER entropy detector is obviously better than that of the FSED method when m is 1000. However, when m is 200, the detection performance is lower than that of FSED. There is no doubt that the detection performance of matched filtering is the best.
The MSEs of the actual and theoretical probabilities of these experiments are given in Table 2. The deviation between the actual and theoretical detection probabilities was very small, which indicates that the PSER entropy detector is accurate.
When m is 500 and N is 256, 512, and 1024, respectively, the detection results for the BPSK signal are shown in Figures 22-24. It can be seen from Figures 23 and 24 that when m is fixed, a larger N does not necessarily imply better detection performance: the detection probability when N is 1024 is lower than that when N is 512. However, the detection performance of the full spectrum energy detection method improves with the increase of N.
The MSEs of the actual and theoretical probabilities of these experiments are given in Table 3. The deviations between the actual and theoretical detection probabilities were very small when N was 512 or 1024, but were higher when N was 256.
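The constant-false-alarm behaviour in Figure 19 follows from setting the threshold from the H 0 statistics alone. Below is a minimal sketch, assuming the sample entropy Z is approximately Gaussian under H 0 with mean H(m, N) and variance Var(Z); the paper's exact threshold expression may differ, so both helper functions are hypothetical.

```python
from statistics import NormalDist

def pser_entropy_threshold(h_mn, var_z, p_fa):
    """Lower threshold eta_1 = H(m, N) - Q^{-1}(P_fa) * sqrt(Var(Z)) under a
    Gaussian approximation of Z under H0 (assumed form, for illustration)."""
    q_inv = NormalDist().inv_cdf(1.0 - p_fa)
    return h_mn - q_inv * (var_z ** 0.5)

def detect(sample_entropy, eta_1):
    """Declare a signal present when Z falls below eta_1, since the PSER
    entropy of signal-plus-noise is below that of pure noise."""
    return sample_entropy < eta_1

# Example with illustrative (made-up) H0 statistics:
eta_1 = pser_entropy_threshold(h_mn=1.0, var_z=0.01, p_fa=0.05)
```

Because eta_1 depends only on the pure-noise statistics H(m, N), Var(Z), and the chosen P fa, the false alarm rate stays constant regardless of the SNR of any signal actually present.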

Theoretical Calculation of Statistics
In Section 2, we analyzed the computational time complexity of each statistic. When m and N are large, the theoretical values of some statistics, such as Var(Z) under H 0 , H i , H i,j , H(m, N), and Var(Z) under H 1 , cannot be calculated. This restricts further analysis of the detection performance of the PSER entropy detector.

There are two ways to solve this problem: reducing the computational complexity and finding an approximate solution. Which way is feasible requires further study.

Experience of Selecting Parameters
The detection probability P d of PSER entropy detection is related to the number of intervals m, the number of power spectrum lines N, and the SNR of the spectrum lines. Since the mathematical expressions of many statistics are too complex, the influence of the three factors on P d cannot be accurately analyzed at present. Based on a large number of experiments, we summarize the following experiences as recommendations for setting parameters.
(1) m cannot be too small. It can be seen from Figure 20 that, if m is too small, the detection performance of the PSER entropy detector will be lower than that of the energy detector. We suggest that m ≥ 500. (2) N must be close to m. It can be seen from Figure 23 that a larger N is not necessarily better. A large number of experiments show that when N is close to m, the detection probability is good. (3) When N is fixed, m can be adjusted appropriately through experiments.

Advantages of the PSER Entropy Detector
When using PSER entropy detection, the noise intensity does not need to be estimated in advance, and prior information of the signal to be detected is not needed. Therefore, the PSER entropy detector is a typical blind signal detection method.

Further Research
The detection performance of the PSER entropy detector will be further improved if some methods are used to improve the SNR of signals. In future research, a denoising method can be used to improve the SNR, the Welch or autoregressive method can be used to improve the quality of power spectrum estimation, and multi-channel cooperative detection can be used to increase the accuracy of detection.
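As a pointer for the Welch suggestion, the sketch below averages Hann-windowed, half-overlapping periodograms. The helper `welch_psd` is our own minimal illustration (scipy.signal.welch offers a production implementation); the variance reduction it buys is what would sharpen the interval statistics.

```python
import numpy as np

def welch_psd(x, nperseg, noverlap=None):
    """Minimal Welch estimate: split the signal into Hann-windowed,
    half-overlapping segments and average their periodograms.  A sketch of
    the standard method, not a drop-in replacement for scipy.signal.welch."""
    noverlap = nperseg // 2 if noverlap is None else noverlap
    step = nperseg - noverlap
    win = np.hanning(nperseg)
    scale = (win ** 2).sum()
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psds = [np.abs(np.fft.rfft(win * s)) ** 2 / scale for s in segs]
    return np.mean(psds, axis=0)

rng = np.random.default_rng(4)
x = rng.standard_normal(8192)
single = np.abs(np.fft.rfft(x[:512])) ** 2 / 512   # one raw periodogram
averaged = welch_psd(x, nperseg=512)               # ~31 averaged segments
# Averaging trades frequency resolution for a much lower estimator variance,
# which steadies the line-energy ratios fed to the entropy detector.
```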

Conclusions
In this paper, the statistical characteristics of PSER entropy are derived through strict mathematical analysis, and theoretical formulas for calculating the PSER entropy and the sample entropy variance for pure noise and for signals mixed with noise are obtained. In the derivation, we do not use the classical method of approximating PSER entropy by differential entropy; instead, we use interval division and the multinomial distribution to calculate the PSER entropy. The calculation results of this method are consistent with the simulation results, which shows that the method is correct. The method is suitable for both large and small numbers of intervals. A signal detector based on the PSER entropy was constructed according to these statistical characteristics. The performance of the PSER entropy detector is obviously better than that of the classical energy detector. This method does not need to estimate the noise intensity and does not require any prior information about the signal to be detected; therefore, it is a completely blind signal detector.
The PSER entropy detector can not only be used in spectrum sensing, but also in vibration signal detection, seismic monitoring, and pipeline safety monitoring.