Article

A Novel Blind Signal Detector Based on the Entropy of the Power Spectrum Subband Energy Ratio

1 School of Modern Post, Beijing University of Posts and Telecommunications, Beijing 100876, China
2 School of Intelligent Equipment, Shandong University of Science and Technology, Taian 271019, China
* Author to whom correspondence should be addressed.
Entropy 2021, 23(4), 448; https://doi.org/10.3390/e23040448
Submission received: 1 March 2021 / Revised: 24 March 2021 / Accepted: 8 April 2021 / Published: 11 April 2021

Abstract

In this paper, we present a novel blind signal detector based on the entropy of the power spectrum subband energy ratio (PSER), whose detection performance is significantly better than that of the classical energy detector. This detector operates on the full power spectrum and requires neither the noise variance nor any prior information about the signal to be detected. Based on an analysis of the statistical characteristics of the power spectrum subband energy ratio, this paper introduces the concepts of interval probability, interval entropy, sample entropy, joint interval entropy, PSER entropy, and sample entropy variance. Using the multinomial distribution, formulas for calculating the PSER entropy and the variance of the sample entropy are derived for the pure-noise case; using the mixture multinomial distribution, the corresponding formulas are derived for signals mixed with noise. Under the constant false alarm strategy, the detector based on the entropy of the power spectrum subband energy ratio is then derived. The experimental results for primary signal detection are consistent with the theoretical calculations, which confirms that the detection method is correct.

1. Introduction

With the rapid development of wireless communication, spectrum sensing has been studied in depth and successfully applied in cognitive radio (CR). Common spectrum sensing methods [1,2] can be classified as matched filter detection [3], energy-based detection [4], and cyclostationary-based detection [5], among others. Matched filtering is the optimum detection method when the transmitted signal is known [6]. Energy detection is considered the optimal method when there is no prior information about the transmitted signal. Cyclostationary detection is applicable to signals with cyclostationary features [2]. Energy detection can be divided into two categories: time domain energy detection [4] and frequency domain energy detection [7,8]. The power spectrum subband energy ratio (PSER) detector [9] is a local power spectrum energy detection technique. To enable the PSER to be used in full spectrum detection, a new PSER-based detector is proposed in this paper, whose detection performance is higher than that of time domain energy detection.
In information theory, entropy is a measure of the uncertainty associated with a discrete random variable, and differential entropy is a measure of the uncertainty of a continuous random variable. As the uncertainty of noise is higher than that of a signal, the entropy of the noise is higher than that of the signal; this is the basis for using entropy to detect signals. Information entropy has been successfully applied to signal detection [10,11,12]. The main entropy detection methods can be classified into two categories: time domain methods and frequency domain methods.
In the time domain, a signal with a low signal-to-noise ratio (SNR) is buried in the noise, and the estimated entropy is essentially the entropy of the noise; therefore, time domain entropy-based detectors do not have adequate detection performance. Nagaraj [12] introduced time domain entropy-based matched filtering for primary user (PU) detection and presented a likelihood ratio test for detecting a PU signal. Gu et al. [13] presented a cross entropy-based spectrum sensing scheme that uses two time-adjacent detected data sets of the PU.
In the frequency domain, the spectral bin amplitude of a signal is obviously higher than that of the noise; therefore, frequency domain entropy-based detectors are widely applied in many fields. Zhang et al. [10,14] presented a spectrum sensing scheme based on the entropy of the spectrum magnitude, which has been used in many practical applications. For example, Nikonowicz et al. [15] used this scheme to assist in blind signal detection, Zhao [16] improved the two-stage entropy detection method based on this scheme, and Ejaz et al. [17] applied this scheme to maritime cognitive radio networks. So [18] used the conditional entropy of the spectrum magnitude to detect unauthorized user signals in cognitive radio networks. Prieto et al. [11] proposed an improved entropy estimation method based on the Bartlett periodogram. Ye et al. [19] proposed a method based on the exponential entropy. Zhu et al. [20] compared the performances of several entropy detectors in primary user detection. In these papers, the mean of the entropy-based test statistic can be calculated by means of the differential entropy; however, the variance of the test statistic was not given, that is, no calculation formula for it is available. In detection theory, the variance of the test statistic plays a very important role, so entropy-based detection has had a major drawback.
In the above references, none of the entropy-based detectors made use of the PSER. The PSER is a common metric used to represent the proportion of the total energy carried by a single spectral line. It has been extensively applied in the fields of machine design [21], earthquake modeling [22], remote communication [23,24], and geological engineering [25]. For white noise, the real and imaginary parts of each power spectrum bin are Gaussian, and the PSER conforms to a beta distribution [9]. Compared with the entropy detector based on spectral line amplitude, the PSER entropy detector therefore has distinctive statistical properties.
This paper is organized as follows. In Section 2, the theoretical formulas of the PSER entropy under pure noise and under a signal mixed with noise are derived, the statistical characteristics of the PSER entropy are summarized, and the computational complexity of the main statistics is analyzed. Section 3 describes in detail the derivation of the PSER entropy detector under the constant false alarm strategy. In Section 4, experiments are carried out to verify the accuracy of the PSER entropy detector, and its detection performance is compared with that of other detection methods. Section 5 provides additional details concerning the research process. The conclusions are drawn in Section 6.

2. PSER Entropy

The PSER entropy takes the form of a classical Shannon entropy. Theoretically, the Shannon entropy can be approximated by the differential entropy, and this approximation is adopted in the existing entropy-based signal detection methods [10]. However, it has the disadvantage that the resulting value can differ considerably from the actual value in some cases. Therefore, on the basis of an analysis of the statistical characteristics of the PSER entropy, this paper proposes a new method of calculating the PSER entropy that does not use the differential entropy. First, the range of the PSER, [0,1], is divided into several equally spaced intervals. Then, under pure noise, the PSER entropy and its variance are derived using the multinomial distribution; under a signal mixed with noise, they are derived using the mixture multinomial distribution.

2.1. Probability Distribution for PSER

The signal mixed with additive Gaussian white noise (GWN) can be expressed as
$$ s(n) = \begin{cases} z(n), & H_0 \\ x(n) + z(n), & H_1 \end{cases} \qquad n = 0, 1, \ldots, L-1, $$
where $L$ is the number of sampling points, $s(n)$ is the signal to be detected, $x(n)$ is the signal, $z(n)$ is the GWN with a mean of zero and a variance of $\sigma^2$, $H_0$ represents the hypothesis corresponding to "no signal transmitted", and $H_1$ corresponds to "signal transmitted". The single-sided spectrum of $s(n)$ is
$$ \vec{S}(k) = \sum_{n=0}^{L-1} s(n)\, e^{-j\frac{2\pi}{L}kn}, \qquad k = 0, 1, \ldots, \tfrac{L}{2}-1, $$
where $j$ is the imaginary unit and the arrow superscript denotes a complex-valued function. The $k$th line in the power spectrum of $s(n)$ can be expressed as
$$ P(k) = \frac{1}{L}\big|\vec{S}(k)\big|^2 = \frac{1}{L}\Big( \big(X_R(k)+Z_R(k)\big)^2 + \big(X_I(k)+Z_I(k)\big)^2 \Big), $$
where $X_R(k)$ and $X_I(k)$ represent the real and imaginary parts of the signal spectrum, respectively, and $Z_R(k)$ and $Z_I(k)$ represent the real and imaginary parts of the noise spectrum, respectively.
The PSER $B_{d,N}(k)$ is defined as the ratio of the sum of the $d$ adjacent bins starting from the $k$th bin in the power spectrum to the entire spectrum energy, i.e.,
$$ B_{d,N}(k) = \frac{\sum_{l=k}^{k+d-1} P(l)}{\sum_{i=0}^{N-1} P(i)}, \qquad 1 \le d < N-1, \quad k = 0, 1, \ldots, N-d, $$
where $N = L/2$, $\sum_{i=0}^{N-1} P(i)$ represents the total energy in the power spectrum, and $\sum_{l=k}^{k+d-1} P(l)$ represents the total energy of the $d$ adjacent bins. When there is noise in the power spectrum, $B_{d,N}(k)$ is clearly a random variable.
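To make this quantity concrete, the following Python sketch computes $B_{d,N}(k)$ for every admissible $k$ from a sampled signal; the single-sided spectrum and the $1/L$ normalization follow the definitions above, and the function name `pser` is ours, not from the paper.

```python
import numpy as np

def pser(s, d=1):
    """Power spectrum subband energy ratio B_{d,N}(k) for every start bin k.

    s is a real sampled signal of length L; the single-sided power spectrum
    uses the first N = L/2 FFT bins, and the subband is the d adjacent bins
    starting at bin k (normalisation 1/L as in the text)."""
    L = len(s)
    N = L // 2
    S = np.fft.fft(s)[:N]                  # single-sided spectrum, k = 0..N-1
    P = np.abs(S) ** 2 / L                 # power spectrum P(k)
    total = P.sum()                        # total energy of the power spectrum
    return np.array([P[k:k + d].sum() / total for k in range(N - d + 1)])
```

With $d = 1$ this returns the $N$ values $B_{1,N}(0), \ldots, B_{1,N}(N-1)$ used in the remainder of the paper.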
The probability distribution of $B_{d,N}(k)$ is described in detail in [9]. Under $H_0$, $B_{d,N}(k)$ follows a beta distribution with parameters $d$ and $N-d$. Under $H_1$, $B_{d,N}(k)$ follows a doubly non-central beta distribution with parameters $d$, $N-d$ and non-centrality parameters $\lambda_k$ and $\sum_{i=0}^{N-1}\lambda_i - \lambda_k$, i.e., [9] (p. 7),
$$ B_{d,N}(k) \sim \begin{cases} \beta(d,\, N-d), & H_0 \\ \beta_{d,N-d}\!\left(\lambda_k,\ \sum_{i=0}^{N-1}\lambda_i - \lambda_k\right), & H_1, \end{cases} $$
where $\lambda_i = \big(X_R^2(i) + X_I^2(i)\big)/(N\sigma^2)$ is the SNR of the $i$th spectral bin, $\lambda_k = \sum_{l=k}^{k+d-1}\lambda_l$ is the SNR of the $d$ spectral lines starting from the $k$th spectral line, and $\sum_{i=0}^{N-1}\lambda_i - \lambda_k$ is the SNR of the spectral lines not contained in the selected subband. The probability density function (PDF) of $B_{d,N}(k)$ [9] (p. 7) is
$$ f_{B_{d,N}(k)}(x) = \begin{cases} \dfrac{x^{d-1}(1-x)^{N-d-1}}{B(d,\, N-d)}, & H_0 \\[2ex] e^{-(\delta_{k,1}+\delta_{k,2})} \displaystyle\sum_{j=0}^{\infty}\sum_{l=0}^{\infty} \frac{\delta_{k,1}^{j}\,\delta_{k,2}^{l}}{j!\,l!} \cdot \frac{x^{j+d-1}(1-x)^{N-d+l-1}}{B(j+d,\, N-d+l)}, & H_1, \end{cases} \qquad x \in [0,1], $$
where $\delta_{k,1} = \lambda_k/2$ and $\delta_{k,2} = \big(\sum_{i=0}^{N-1}\lambda_i - \lambda_k\big)/2$. The cumulative distribution function (CDF) of $B_{d,N}(k)$ [9] (p. 7) is
$$ F_{B_{d,N}(k)}(x) = \begin{cases} I_x(d,\, N-d), & H_0 \\ e^{-(\delta_{k,1}+\delta_{k,2})} \displaystyle\sum_{j=0}^{\infty}\sum_{l=0}^{\infty} \frac{\delta_{k,1}^{j}\,\delta_{k,2}^{l}}{j!\,l!}\, I_x(j+d,\, N-d+l), & H_1, \end{cases} \qquad x \in [0,1]. $$
The subband used in this paper contains only one spectral bin, i.e., $d = 1$; therefore, $B_{1,N}(k)$ follows the distribution
$$ B_{1,N}(k) \sim \begin{cases} \beta(1,\, N-1), & H_0 \\ \beta_{1,N-1}\!\left(\lambda_k,\ \sum_{i=0}^{N-1}\lambda_i - \lambda_k\right), & H_1. \end{cases} $$
The PDF of $B_{1,N}(k)$ is
$$ f_{B_{1,N}(k)}(x) = \begin{cases} (N-1)(1-x)^{N-2}, & H_0 \\ e^{-(\delta_{k,1}+\delta_{k,2})} \displaystyle\sum_{j=0}^{\infty}\sum_{l=0}^{\infty} \frac{\delta_{k,1}^{j}\,\delta_{k,2}^{l}}{j!\,l!} \cdot \frac{x^{j}(1-x)^{N+l-2}}{B(j+1,\, N-1+l)}, & H_1, \end{cases} \qquad x \in [0,1], $$
where $\delta_{k,1} = \lambda_k/2$ and $\delta_{k,2} = \big(\sum_{i=0}^{N-1}\lambda_i - \lambda_k\big)/2$.
The CDF of $B_{1,N}(k)$ is
$$ F_{B_{1,N}(k)}(x) = \begin{cases} I_x(1,\, N-1), & H_0 \\ e^{-(\delta_{k,1}+\delta_{k,2})} \displaystyle\sum_{j=0}^{\infty}\sum_{l=0}^{\infty} \frac{\delta_{k,1}^{j}\,\delta_{k,2}^{l}}{j!\,l!}\, I_x(j+1,\, N+l-1), & H_1, \end{cases} \qquad x \in [0,1]. $$
For convenience of description, $B_{1,N}(k)$ is hereafter written as $X_k$.

2.2. Basic Definitions

$X_k$ takes values in $[0,1]$. This range is divided into $m$ intervals of equal width $1/m$, i.e., $[0, 1/m)$, $[1/m, 2/m)$, ..., $[(m-1)/m, 1]$. The $i$th interval is then $[i/m, (i+1)/m)$, where $i = 0, 1, 2, \ldots, m-1$. The probability that $X_k$ falls into the $i$th interval [26] is
$$ p_i = \int_{i/m}^{(i+1)/m} f_{X_k}(x)\, dx, $$
where $p_0 + p_1 + \cdots + p_{m-1} = 1$. $p_i$ is called the interval probability.
The PSER values of all spectral lines are regarded as a sequence $X = (X_0, X_1, \ldots, X_{N-1})$. Let $t_i$ denote the number of values in $X$ that fall into the $i$th interval; $t_i$ is a random variable with $t_i \in \{0, 1, \ldots, N\}$ and $\sum_{i=0}^{m-1} t_i = N$. Let $t = (t_0, t_1, \ldots, t_{m-1})$. The frequency with which the data in $X$ fall into the $i$th interval is denoted $T_i$, i.e., $T_i = t_i / N$, with $\sum_{i=0}^{m-1} T_i = 1$.
The random variable $Y_i = -T_i \log T_i$ is called the sample entropy of the PSER in the $i$th interval. The mean of $Y_i$ is $H_i = E(Y_i)$, and $H_i$ is called the interval entropy. Notice that no two $t_i$ are independent of each other; therefore, no two $Y_i$ are independent of each other either. Let $Z(t; X) = \sum_{i=0}^{m-1} Y_i$, i.e.,
$$ Z(t; X) = -\sum_{i=0}^{m-1} \frac{t_i}{N} \log\!\left(\frac{t_i}{N}\right). $$
$Z(t; X)$ is called the sample entropy of the PSER and is abbreviated as $Z(t)$ or $Z$.
By definition, entropy is a mean value and therefore has no variance of its own. The sample entropy, however, is computed from a sample of the PSER sequence; it is therefore a random variable and has both a mean and a variance. The mean of the sample entropy is
$$ H(m, N) = E(Z), $$
where $m$ and $N$ are the numbers of intervals and spectral bins, respectively. $H(m, N)$ is called the total entropy or PSER entropy. The variance of the sample entropy is denoted $Var(Z)$. In signal detection, $E(Z)$ and $Var(Z)$ are very important; their calculation is discussed in the following sections.
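As an illustration of these definitions, the sketch below estimates the sample entropy $Z$ of one PSER sequence by binning it into $m$ equal intervals; it assumes a natural logarithm and the convention $0 \log 0 = 0$, and the helper name `sample_entropy` is ours.

```python
import numpy as np

def sample_entropy(pser_values, m):
    """Sample entropy Z of a PSER sequence: bin the N values into m equal
    intervals on [0, 1], form the frequencies T_i = t_i / N, and return
    Z = -sum_i T_i * log(T_i), with 0 * log(0) taken as 0 (natural log)."""
    N = len(pser_values)
    t = np.histogram(pser_values, bins=m, range=(0.0, 1.0))[0]   # counts t_i
    T = t / N                                                    # frequencies T_i
    nz = T > 0
    return float(-np.sum(T[nz] * np.log(T[nz])))

# example: sample entropy of the PSER sequence of one white-noise record
# Z = sample_entropy(pser(np.random.randn(1024)), m=500)
```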

2.3. Calculating PSER Entropy Using the Differential Entropy

2.3.1. The Differential Entropy for $X_k$

Under $H_0$, by the definition of differential entropy, the differential entropy of $X_k$ is
$$ \begin{aligned} h\big(B_{1,N}(k)\big) &= -\int_{-\infty}^{+\infty} p(x) \log p(x)\, dx = -\int_{0}^{1} p(x) \log\!\big[(N-1)(1-x)^{N-2}\big]\, dx \\ &= -\int_{0}^{1} p(x) \big[\log(N-1) + (N-2)\log(1-x)\big]\, dx \\ &= -\log(N-1) - (N-1)(N-2)\int_{0}^{1} (1-x)^{N-2} \log(1-x)\, dx \\ &= \frac{N-2}{(N-1)\ln a} - \log(N-1), \end{aligned} $$
where $a$ is the base of the logarithm. When $a = e$,
$$ h\big(B_{1,N}(k)\big) = \frac{N-2}{N-1} - \ln(N-1). $$

2.3.2. PSER Entropy Calculated Using Differential Entropy

According to the calculation process of the spectrum magnitude entropy presented by Zhang in [10] and the equation given in [27] (p. 247), when $1/m \to 0$, the PSER entropy under $H_0$ is
$$ H(m, N) = -\sum_{i=0}^{m-1} p_i \log p_i \approx -\sum_{i=0}^{m-1} f_{X_k}(x_i)\,\frac{1}{m}\, \ln\!\left( f_{X_k}(x_i)\,\frac{1}{m} \right) \approx h\big(B_{1,N}(k)\big) - \ln\!\left(\frac{1}{m}\right) = \frac{N-2}{N-1} - \ln\!\left(\frac{N-1}{m}\right), $$
where $x_i$ is a point in the $i$th interval. If $(N-1)/m > e^{(N-2)/(N-1)}$ (roughly, if $(N-1)/m > e$), then $H(m, N)$ is negative.

2.3.3. The Defect of the PSER Entropy Calculated Using Differential Entropy

The PSER entropy is the mean of the sample entropy, and the sample entropy is nonnegative; therefore, the PSER entropy is nonnegative as well. However, the PSER entropy calculated by Equation (14) is not always nonnegative: in particular, when $(N-1)/m > e$, it is negative. The reason is that the differential entropy itself is not always nonnegative.
The difference between the real PSER entropy obtained by simulation and that calculated using the differential entropy is shown in Figure 1. At least $10^4$ Monte Carlo simulation runs were carried out with $N = 256$ and different values of $m$.
In Figure 1, the solid line is the PSER entropy calculated using the differential entropy, while the dotted line is the real PSER entropy. When $m$ is very large ($1/m < 0.005$), the two results are close. However, when $m$ is small ($1/m > 0.005$), the difference between the two methods is large, and the total entropy calculated using the differential entropy can even be negative, which is inconsistent with the actual result. Therefore, a more reasonable method of calculating the PSER entropy is proposed in this paper.
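For reference, a small script such as the following reproduces the kind of comparison shown in Figure 1: the closed-form differential-entropy approximation of Equation (14) versus a Monte Carlo estimate of the real PSER entropy. It reuses the `pser` and `sample_entropy` sketches above, and the parameter values and number of runs are illustrative only.

```python
import numpy as np

def pser_entropy_diff_approx(m, N):
    """PSER entropy under H0 approximated via the differential entropy:
    (N - 2)/(N - 1) - ln((N - 1)/m)."""
    return (N - 2) / (N - 1) - np.log((N - 1) / m)

# Monte Carlo estimate of the real PSER entropy for comparison
N, m, runs = 256, 200, 1000
Z = [sample_entropy(pser(np.random.randn(2 * N)), m) for _ in range(runs)]
print(pser_entropy_diff_approx(m, N), np.mean(Z))
```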

2.4. PSER Entropy under $H_0$

2.4.1. Definitions and Lemmas

Under $H_0$, all $X_k$ obey the same beta distribution. According to (10), the interval probability of the $i$th interval is
$$ p_i = \int_{i/m}^{(i+1)/m} (N-1)(1-t)^{N-2}\, dt = \left(1 - \frac{i}{m}\right)^{N-1} - \left(1 - \frac{i+1}{m}\right)^{N-1}. $$
$p_i$ is essentially the area under the probability density curve over the $i$th interval. Figure 2 shows $p_0$ and $p_1$ when $m$ is 200 and $N$ is 128.
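This closed form is straightforward to evaluate. The following sketch computes all $m$ interval probabilities at once (the function name is ours) and checks that they sum to one; the parameter values match those of Figure 2.

```python
import numpy as np

def interval_probs_h0(m, N):
    """Interval probabilities under H0:
    p_i = (1 - i/m)^(N-1) - (1 - (i+1)/m)^(N-1), i = 0, ..., m-1."""
    i = np.arange(m)
    return (1 - i / m) ** (N - 1) - (1 - (i + 1) / m) ** (N - 1)

p = interval_probs_h0(m=200, N=128)
assert np.isclose(p.sum(), 1.0)        # the interval probabilities sum to one
```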
Let $\mathcal{T}_{m,N} = \{ t = (t_0, t_1, \ldots, t_{m-1}) : t_i \in \{0, 1, 2, \ldots, N\},\ t_0 + t_1 + \cdots + t_{m-1} = N \}$. Assigning $N$ data points to $m$ intervals is a typical multinomial distribution problem. By the probability formula of the multinomial distribution, the probability of $t = (t_0, t_1, \ldots, t_{m-1})$ is
$$ \Pr(t) = \Pr(t_0, t_1, \ldots, t_{m-1}) = \binom{N}{t_0, t_1, \ldots, t_{m-1}} p_0^{t_0} p_1^{t_1} \cdots p_{m-1}^{t_{m-1}}, $$
where
$$ \binom{N}{t_0, t_1, \ldots, t_{m-1}} = \frac{N!}{t_0!\, t_1! \cdots t_{m-1}!} $$
is the multinomial coefficient. The following lemmas are used in the subsequent analysis.
Lemma 1.
If $k$ is a non-negative integer, then [28] (p. 183)
$$ \Pr(t_i = k,\ t \in \mathcal{T}_{m,N}) = \frac{N!}{k!\,(N-k)!}\, p_i^{k} (1-p_i)^{N-k}. $$
Lemma 2.
If $j$ is a non-negative integer, then
$$ \sum_{t \in \mathcal{T}_{m,N}} \Pr(t) = \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N}). $$
Proof.
The left side of Equation (18) is $\sum_{t \in \mathcal{T}_{m,N}} \Pr(t) = 1$. From Lemma 1, the right side of Equation (18) is
$$ \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N}) = \sum_{j=0}^{N} \frac{N!}{j!\,(N-j)!}\, p_i^{j} (1-p_i)^{N-j} = 1. $$
☐
Lemma 3.
If $k$ and $l$ are non-negative integers and $k + l \le N$, then [28] (p. 183)
$$ \Pr(t_i = k,\ t_j = l,\ t \in \mathcal{T}_{m,N}) = \frac{N!}{k!\,l!\,(N-k-l)!}\, p_i^{k} p_j^{l} (1 - p_i - p_j)^{N-k-l}. $$
Lemma 4.
If $k$ and $l$ are non-negative integers and $k + l \le N$, then
$$ \sum_{t \in \mathcal{T}_{m,N}} \Pr(t) = \sum_{k=0}^{N} \sum_{l=0}^{N-k} \Pr(t_i = k,\ t_j = l,\ t \in \mathcal{T}_{m,N}). $$
Proof.
From Lemma 3, the right side of Equation (20) is
$$ \begin{aligned} \sum_{k=0}^{N} \sum_{l=0}^{N-k} \Pr(t_i = k,\ t_j = l,\ t \in \mathcal{T}_{m,N}) &= \sum_{k=0}^{N} \sum_{l=0}^{N-k} \frac{N!}{k!\,l!\,(N-k-l)!}\, p_i^{k} p_j^{l} (1-p_i-p_j)^{N-k-l} \\ &= \sum_{k=0}^{N} \frac{N!}{k!\,(N-k)!}\, p_i^{k} (1-p_i)^{N-k} \sum_{l=0}^{N-k} \frac{(N-k)!}{l!\,(N-k-l)!} \left(\frac{p_j}{1-p_i}\right)^{l} \left(\frac{1-p_i-p_j}{1-p_i}\right)^{N-k-l} \\ &= \sum_{k=0}^{N} \frac{N!}{k!\,(N-k)!}\, p_i^{k} (1-p_i)^{N-k} = 1. \end{aligned} $$
☐

2.4.2. Statistical Characteristics of $T_i$

The mean of $T_i$ [28] (p. 183) is
$$ E(T_i) = \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N})\, \frac{j}{N} = \sum_{j=0}^{N} \frac{N!}{j!\,(N-j)!}\, p_i^{j} (1-p_i)^{N-j}\, \frac{j}{N} = p_i. $$
The mean-square value of $T_i$ [28] (p. 183) is
$$ E(T_i^2) = \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N}) \left(\frac{j}{N}\right)^2 = \frac{N p_i^2 + p_i - p_i^2}{N}. $$
The variance of $T_i$ [28] (p. 183) is
$$ Var(T_i) = E(T_i^2) - E^2(T_i) = \frac{p_i (1 - p_i)}{N}. $$
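These moments are easy to verify empirically: drawing the counts $t$ from a multinomial distribution with the $p_i$ above and comparing the sample mean and variance of $T_i = t_i/N$ against $p_i$ and $p_i(1-p_i)/N$, as in the sketch below. The parameter values are illustrative only, and `interval_probs_h0` is the sketch given earlier.

```python
import numpy as np

# Monte Carlo check of E(T_i) = p_i and Var(T_i) = p_i (1 - p_i) / N under H0
m, N, runs = 200, 128, 20000
p = interval_probs_h0(m, N)
T = np.random.multinomial(N, p, size=runs) / N      # rows are frequency vectors
print(np.allclose(T.mean(axis=0), p, atol=1e-2))
print(np.allclose(T.var(axis=0), p * (1 - p) / N, atol=1e-3))
```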

2.4.3. Statistical Characteristics of $Y_i$

For convenience of description, let
$$ h_N(l) = -\frac{l}{N} \log\!\left(\frac{l}{N}\right), \qquad l = 0, 1, \ldots, N, $$
with $h_N(0) = 0$. The mean of $Y_i$ is the mean of all the entropy values taken by $Y_i$ in the $i$th interval, that is,
$$ H_i = E(Y_i) = \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N})\, h_N(j) = \sum_{j=0}^{N} \binom{N}{j} p_i^{j} (1-p_i)^{N-j} h_N(j). $$
$H_i$ is the interval entropy. Note that the following definition is not used for the interval entropy in this paper:
$$ H_i = -\sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N}) \log\big(\Pr(t_i = j,\ t \in \mathcal{T}_{m,N})\big), $$
as this definition is the entropy of the probability distribution of $t_i$ over the $i$th interval.
The mean-square value of $Y_i$ is
$$ E(Y_i^2) = \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N})\, h_N^2(j) = \sum_{j=0}^{N} \binom{N}{j} p_i^{j} (1-p_i)^{N-j} h_N^2(j). $$
The variance of $Y_i$ is
$$ Var(Y_i) = E(Y_i^2) - E^2(Y_i). $$
When $i \ne j$, the joint entropy of two intervals is
$$ H_{i,j} = E(Y_i Y_j) = \sum_{t \in \mathcal{T}_{m,N}} \Pr(t)\, h_N(t_i)\, h_N(t_j) = \sum_{k=0}^{N} \sum_{l=0}^{N-k} \Pr(t_i = k,\ t_j = l,\ t \in \mathcal{T}_{m,N})\, h_N(k)\, h_N(l) = \sum_{k=0}^{N} \sum_{l=0}^{N-k} \frac{N!}{k!\,l!\,(N-k-l)!}\, p_i^{k} p_j^{l} (1-p_i-p_j)^{N-k-l}\, h_N(k)\, h_N(l). $$
When $i = j$, $H_{i,i} = E(Y_i^2)$, i.e., the mean-square value of $Y_i$.

2.4.4. Statistical Characteristics of $Z(t)$

The total entropy with $m$ intervals and $N$ spectral bins is
$$ H(m, N) = E(Z) = \sum_{t \in \mathcal{T}_{m,N}} \Pr(t)\, Z(t). $$
Theorem 1.
The PSER entropy is equal to the sum of all the interval entropies, i.e.,
$$ H(m, N) = \sum_{i=0}^{m-1} H_i. $$
Proof.
According to Lemma 2,
$$ \begin{aligned} H(m, N) &= \sum_{t \in \mathcal{T}_{m,N}} \Pr(t)\, Z(t) = \sum_{t \in \mathcal{T}_{m,N}} \left( \Pr(t) \times \sum_{i=0}^{m-1} h_N(t_i) \right) = \sum_{i=0}^{m-1} \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N})\, h_N(j) \\ &= \sum_{i=0}^{m-1} \sum_{j=0}^{N} \frac{N!}{j!\,(N-j)!}\, p_i^{j} (1-p_i)^{N-j} h_N(j) = \sum_{i=0}^{m-1} H_i. \end{aligned} $$
☐
Theorem 2.
The mean-square value of the sample entropy is equal to the sum of the mean-square values of the $Y_i$ plus twice the sum of all the joint entropies of two intervals, i.e.,
$$ E(Z^2) = \sum_{i=0}^{m-1} E(Y_i^2) + 2 \sum_{i=0}^{m-2} \sum_{j=i+1}^{m-1} H_{i,j}. $$
Proof.
From Lemmas 2 and 4,
$$ \begin{aligned} E(Z^2) &= \sum_{t \in \mathcal{T}_{m,N}} \Pr(t)\, Z^2 = \sum_{t \in \mathcal{T}_{m,N}} \left( \Pr(t) \times \sum_{i=0}^{m-1} \sum_{j=0}^{m-1} h_N(t_i)\, h_N(t_j) \right) \\ &= \sum_{t \in \mathcal{T}_{m,N}} \left( \Pr(t) \times \left( \sum_{i=0}^{m-1} h_N^2(t_i) + 2 \sum_{i=0}^{m-2} \sum_{j=i+1}^{m-1} h_N(t_i)\, h_N(t_j) \right) \right) \\ &= \sum_{i=0}^{m-1} \sum_{t \in \mathcal{T}_{m,N}} \Pr(t)\, h_N^2(t_i) + 2 \sum_{i=0}^{m-2} \sum_{j=i+1}^{m-1} \sum_{t \in \mathcal{T}_{m,N}} \Pr(t)\, h_N(t_i)\, h_N(t_j) \\ &= \sum_{i=0}^{m-1} E(Y_i^2) + 2 \sum_{i=0}^{m-2} \sum_{j=i+1}^{m-1} H_{i,j}. \end{aligned} $$
☐
Theorem 3.
The variance of the sample entropy is
$$ Var(Z) = E(Z^2) - E^2(Z) = \sum_{i=0}^{m-1} E(Y_i^2) + 2 \sum_{i=0}^{m-2} \sum_{j=i+1}^{m-1} H_{i,j} - \left( \sum_{i=0}^{m-1} H_i \right)^2. $$
For convenience of description, $H(m, N)$ and $Var(Z)$ under $H_0$ are denoted $\mu_0$ and $\sigma_0^2$, respectively.
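Under $H_0$ the interval probabilities are binomial, so $\mu_0$ can be evaluated directly from Theorem 1. The sketch below does this, reusing `interval_probs_h0` from above; the variance $\sigma_0^2$ is deliberately omitted here because, as discussed next, its exact evaluation requires all the joint entropies and is exponentially expensive.

```python
import numpy as np
from scipy.stats import binom

def h_N(j, N):
    """h_N(j) = -(j/N) log(j/N), with h_N(0) = 0 (natural log)."""
    j = np.asarray(j, dtype=float)
    out = np.zeros_like(j)
    nz = j > 0
    out[nz] = -(j[nz] / N) * np.log(j[nz] / N)
    return out

def pser_entropy_h0(m, N):
    """mu_0 = H(m, N) under H0 via Theorem 1: the sum over intervals of the
    interval entropies H_i, each a binomial expectation of h_N."""
    p = interval_probs_h0(m, N)            # sketch from Section 2.4.1
    j = np.arange(N + 1)
    h = h_N(j, N)
    return float(sum(binom.pmf(j, N, pi) @ h for pi in p))

# should be close to the mu_0 reported in Table 1 for N = 256, m = 500
print(pser_entropy_h0(m=500, N=256))
```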

2.4.5. Computational Complexity

The calculation time of each statistic is mainly spent on factorial calculations and on the traversal of all cases.
The factorial calculation involves two cases, $\frac{N!}{j!\,(N-j)!}$ and $\frac{N!}{k!\,l!\,(N-k-l)!}$, both of which have a time complexity of $O(N)$.
There are two kinds of traversal of all cases: the traversal of one selected interval and the traversal of two selected intervals. The corresponding expressions are $\sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N})$ and $\sum_{k=0}^{N} \sum_{l=0}^{N-k} \Pr(t_i = k,\ t_j = l,\ t \in \mathcal{T}_{m,N})$, respectively.
The traversal of one selected interval requires $j$ to take all values from 0 to $N$. Taking into account the time spent computing the factorials, its time complexity is $O(N^2)$. Therefore, the time complexity of calculating the interval entropy is $O(N^2)$.
The traversal of two selected intervals first selects $k$ spectral bins from all $N$ spectral bins and then selects $l$ spectral bins from the remaining $N-k$ bins. If $k$ and $l$ are fixed, the time complexity is
$$ O\!\left( N\, \frac{N!}{k!\,l!\,(N-k-l)!} \right). $$
Evaluating $\sum_{k=0}^{N} \sum_{l=0}^{N-k} \Pr(t_i = k,\ t_j = l,\ t \in \mathcal{T}_{m,N})$ requires listing all combinations of $k$ and $l$, and its time complexity is
$$ N \left( \binom{N}{0, 0, N} + \binom{N}{0, 1, N-1} + \cdots + \binom{N}{N, 0, 0} \right) = N \cdot 3^N, $$
i.e., $O(N 3^N)$. The time complexity of calculating the interval joint entropy $H_{i,j}$ is therefore $O(N 3^N)$.
Calculating the total entropy $H(m, N)$ requires computing all $m$ interval entropies, so its time complexity is $O(m N^2)$.
Calculating the variance of the sample entropy $Var(Z)$ takes the most time. As all $m(m-1)/2$ interval joint entropies have to be computed, the time complexity of calculating $Var(Z)$ is $O(N m^2 3^N)$. In the following experiments, in order to ensure good detection performance, the values of $m$ and $N$ should not be too small (e.g., $m \ge 500$ and $N \ge 256$), and therefore the calculation time will be very long.

2.5. PSER Entropy under $H_1$

2.5.1. Definitions and Lemmas

Under $H_1$, $X_k$ obeys $\beta_{1,N-1}\big(\lambda_k,\ \sum_{i=0}^{N-1}\lambda_i - \lambda_k\big)$, and different $X_k$ have different non-centrality parameters; therefore, the calculation of the total entropy and of the sample entropy variance under $H_1$ is much more complicated than under $H_0$. According to Equation (10), the interval probability of the $i$th interval is
$$ p_{k,i} = e^{-(\delta_{k,1}+\delta_{k,2})} \sum_{j=0}^{\infty} \sum_{l=0}^{\infty} \frac{\delta_{k,1}^{j}\,\delta_{k,2}^{l}}{j!\,l!} \Big( I_{(i+1)/m}(j+1,\, N+l-1) - I_{i/m}(j+1,\, N+l-1) \Big), $$
where $\sum_{i=0}^{m-1} p_{k,i} = 1$. The subscript $k$ stands for the label of $X_k$. Figure 3 shows $p_{k,0}$ and $p_{k,1}$ when $m = 200$, $N = 128$, $\delta_{k,1} = 1$, and $\delta_{k,2} = 2$.
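Since Equation (32) is an infinite double series, in practice it has to be truncated. The sketch below evaluates $p_{k,i}$ with a fixed number of terms per index using the regularized incomplete beta function; the truncation level and the function name are our choices, and the series is only reliable when $\delta_{k,1}$ and $\delta_{k,2}$ are moderate relative to the number of retained terms.

```python
import numpy as np
from scipy.special import betainc
from scipy.stats import poisson

def interval_prob_h1(i, m, N, delta1, delta2, terms=40):
    """p_{k,i} under H1 (Equation (32)), with the double Poisson series
    truncated after `terms` terms in each index; delta1 = lambda_k / 2 and
    delta2 = (sum of all lambda_i - lambda_k) / 2."""
    lo, hi = i / m, (i + 1) / m
    l = np.arange(terms)
    wl = poisson.pmf(l, delta2)                     # Poisson weights in l
    total = 0.0
    for j in range(terms):
        wj = poisson.pmf(j, delta1)                 # Poisson weight in j
        diff = betainc(j + 1, N + l - 1, hi) - betainc(j + 1, N + l - 1, lo)
        total += wj * np.sum(wl * diff)
    return total

# example with the values used for Figure 3
print(interval_prob_h1(i=0, m=200, N=128, delta1=1.0, delta2=2.0))
```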
Under $H_1$, different $X_k$ obey different probability distributions; therefore, this is a multinomial distribution problem under a mixture of distributions. Let
$$ \mathcal{T}_{m,N} = \left\{ t = (t_{0,0}, \ldots, t_{0,m-1}, \ldots, t_{N-1,0}, \ldots, t_{N-1,m-1}) : t_{k,i} \in \{0, 1, 2, \ldots, N\},\ t_i = \sum_{k=0}^{N-1} t_{k,i},\ \sum_{i=0}^{m-1} t_i = N \right\}, $$
where $t_{k,i}$ is the number of times $X_k$ falls into the $i$th interval and $t_i$ is the number of times all $X_k$ fall into the $i$th interval. In a sample, since $X_k$ can fall into only one interval, $t_{k,i}$ can only be 0 or 1, and $\sum_{i=0}^{m-1} t_{k,i} = 1$.
By the probability formula of the multinomial distribution, the probability of $t = (t_{0,0}, \ldots, t_{0,m-1}, \ldots, t_{N-1,0}, \ldots, t_{N-1,m-1})$ is
$$ \Pr(t) = \binom{N}{t_{0,0}, \ldots, t_{0,m-1}, \ldots, t_{N-1,0}, \ldots, t_{N-1,m-1}} p_{0,0}^{t_{0,0}} \cdots p_{0,m-1}^{t_{0,m-1}} \cdots p_{N-1,0}^{t_{N-1,0}} \cdots p_{N-1,m-1}^{t_{N-1,m-1}} = N! \prod_{k=0}^{N-1} \prod_{i=0}^{m-1} p_{k,i}^{t_{k,i}}. $$
The following lemmas are used in the subsequent analysis.
Lemma 5.
If $j$ is a non-negative integer, then
$$ \Pr\!\left(t_i = \sum_{k=0}^{N-1} t_{k,i} = j,\ t \in \mathcal{T}_{m,N}\right) = \frac{N!}{(N-j)!} \sum_{t_i = j} p_{0,i}^{t_{0,i}} \cdots p_{k,i}^{t_{k,i}} \cdots p_{N-1,i}^{t_{N-1,i}} (1 - p_{0,i})^{1 - t_{0,i}} \cdots (1 - p_{k,i})^{1 - t_{k,i}} \cdots (1 - p_{N-1,i})^{1 - t_{N-1,i}}. $$
Proof.
By definition, $\Pr(t_i = j,\ t \in \mathcal{T}_{m,N}) = \sum_{t \in \mathcal{T}_{m,N},\, t_i = j} \Pr(t)$. Substituting the expression for $\Pr(t)$, separating the factors associated with the $i$th interval from those of the remaining intervals, and summing out the configurations of the remaining $N-j$ points over the other intervals gives
$$ \Pr(t_i = j,\ t \in \mathcal{T}_{m,N}) = \frac{N!}{(N-j)!} \sum_{t_i = j} p_{0,i}^{t_{0,i}} \cdots p_{N-1,i}^{t_{N-1,i}} (1 - p_{0,i})^{1 - t_{0,i}} \cdots (1 - p_{N-1,i})^{1 - t_{N-1,i}}. $$
☐
Lemma 6.
If $j$ is a non-negative integer, then
$$ \sum_{t \in \mathcal{T}_{m,N}} \Pr(t) = \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N}). $$
Proof.
The left side of Equation (35) is $\sum_{t \in \mathcal{T}_{m,N}} \Pr(t) = 1$. From Lemma 5, the right side of Equation (35) is
$$ \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N}) = \sum_{j=0}^{N} \frac{N!}{(N-j)!} \sum_{t_i = j} p_{0,i}^{t_{0,i}} \cdots p_{N-1,i}^{t_{N-1,i}} (1 - p_{0,i})^{1 - t_{0,i}} \cdots (1 - p_{N-1,i})^{1 - t_{N-1,i}} = 1. $$
☐
Lemma 7.
If $g$ and $l$ are non-negative integers and $g + l \le N$, then
$$ \begin{aligned} \Pr(t_i = g,\ t_j = l,\ t \in \mathcal{T}_{m,N}) = \frac{N!}{(N-g-l)!} \sum_{t_i = g,\, t_j = l} &\ p_{0,i}^{t_{0,i}} \cdots p_{N-1,i}^{t_{N-1,i}}\, p_{0,j}^{t_{0,j}} \cdots p_{N-1,j}^{t_{N-1,j}} \\ &\times (1 - p_{0,i})^{1 - t_{0,i}} \cdots (1 - p_{N-1,i})^{1 - t_{N-1,i}} (1 - p_{0,j})^{1 - t_{0,j}} \cdots (1 - p_{N-1,j})^{1 - t_{N-1,j}}. \end{aligned} $$
Proof.
By definition, $\Pr(t_i = g,\ t_j = l,\ t \in \mathcal{T}_{m,N}) = \sum_{t \in \mathcal{T}_{m,N},\, t_i = g,\, t_j = l} \Pr(t)$. As in the proof of Lemma 5, substituting the expression for $\Pr(t)$, separating the factors associated with the $i$th and $j$th intervals from those of the remaining intervals, and summing out the configurations of the remaining $N-g-l$ points over the other intervals gives the stated result. ☐

2.5.2. Statistical Characteristics of $T_i$

The mean of $T_i$ is
$$ E(T_i) = \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N})\, \frac{j}{N} = \sum_{j=0}^{N} \frac{j}{N} \cdot \frac{N!}{(N-j)!} \sum_{t_i = j} p_{0,i}^{t_{0,i}} \cdots p_{N-1,i}^{t_{N-1,i}} (1 - p_{0,i})^{1 - t_{0,i}} \cdots (1 - p_{N-1,i})^{1 - t_{N-1,i}}. $$
The mean-square value of $T_i$ is
$$ E(T_i^2) = \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N}) \left(\frac{j}{N}\right)^2, $$
and the variance of $T_i$ is
$$ Var(T_i) = E(T_i^2) - E^2(T_i), $$
where the probabilities $\Pr(t_i = j,\ t \in \mathcal{T}_{m,N})$ are given by Lemma 5.

2.5.3. Statistical Characteristics of $Y_i$

The mean of $Y_i$ is
$$ H_i = E(Y_i) = \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N})\, h_N(j). $$
The mean-square value of $Y_i$ is
$$ E(Y_i^2) = \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N})\, h_N^2(j), $$
and the variance of $Y_i$ is
$$ Var(Y_i) = E(Y_i^2) - E^2(Y_i), $$
where the probabilities $\Pr(t_i = j,\ t \in \mathcal{T}_{m,N})$ are given by Lemma 5. When $i \ne j$, the joint entropy of two intervals is
$$ H_{i,j} = E(Y_i Y_j) = \sum_{t \in \mathcal{T}_{m,N}} \Pr(t)\, h_N(t_i)\, h_N(t_j) = \sum_{g=0}^{N} \sum_{l=0}^{N-g} \Pr(t_i = g,\ t_j = l,\ t \in \mathcal{T}_{m,N})\, h_N(g)\, h_N(l), $$
where the joint probabilities are given by Lemma 7. When $i = j$, $H_{i,i} = E(Y_i^2)$, i.e., the mean-square value of $Y_i$.

2.5.4. Statistical Characteristics of $Z(t)$

Under $H_1$, the PSER entropy is
$$ H(m, N) = \sum_{t \in \mathcal{T}_{m,N}} \Pr(t)\, Z(t). $$
Theorem 4.
Under $H_1$, the PSER entropy is equal to the sum of all the interval entropies, i.e.,
$$ H(m, N) = \sum_{i=0}^{m-1} H_i. $$
Proof.
$$ \sum_{t \in \mathcal{T}_{m,N}} \Pr(t)\, Z(t) = \sum_{t \in \mathcal{T}_{m,N}} \left( \Pr(t) \times \sum_{i=0}^{m-1} h_N(t_i) \right) = \sum_{i=0}^{m-1} \sum_{t \in \mathcal{T}_{m,N}} \Pr(t)\, h_N(t_i) = \sum_{i=0}^{m-1} \sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N})\, h_N(j) = \sum_{i=0}^{m-1} H_i. $$
☐
The mean-square value of $Z(t)$ is
$$ E(Z^2) = \sum_{t \in \mathcal{T}_{m,N}} \Pr(t)\, Z^2 = \sum_{t \in \mathcal{T}_{m,N}} \left( \Pr(t) \times \sum_{i=0}^{m-1} \sum_{j=0}^{m-1} h_N(t_i)\, h_N(t_j) \right) = \sum_{i=0}^{m-1} E(Y_i^2) + 2 \sum_{i=0}^{m-2} \sum_{j=i+1}^{m-1} H_{i,j}. $$
The variance of $Z(t)$ is
$$ Var(Z) = E(Z^2) - E^2(Z) = \sum_{i=0}^{m-1} E(Y_i^2) + 2 \sum_{i=0}^{m-2} \sum_{j=i+1}^{m-1} H_{i,j} - \left( \sum_{i=0}^{m-1} H_i \right)^2. $$
For convenience of description, $H(m, N)$ and $Var(Z)$ under $H_1$ are denoted $\mu_1$ and $\sigma_1^2$, respectively.

2.5.5. Computational Complexity

Under $H_1$, many statistics take much longer to compute than under $H_0$. The calculation time is mainly consumed by two aspects: the calculation of $p_{k,i}$, and the factorial calculation together with the traversal of all cases.
1. Calculation of $p_{k,i}$
As seen from Equation (32), $p_{k,i}$ is expressed by an infinite double series under $H_1$, and its value can only be obtained numerically. Since the number of terms that must be computed is large, this takes a significant amount of calculation time.
2. Factorial calculation
Similar to the analysis under $H_0$, the time complexity of the factorial calculation is $O(N)$.
3. The traversal of all cases
There are two kinds of traversal of all cases: the traversal of one selected interval and the traversal of two selected intervals. The corresponding expressions are $\sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N})$ and $\sum_{g=0}^{N} \sum_{l=0}^{N-g} \Pr(t_i = g,\ t_j = l,\ t \in \mathcal{T}_{m,N})$, respectively.
Calculating $\Pr(t_i = j,\ t \in \mathcal{T}_{m,N})$ is a process of choosing $j$ of the $N$ spectral lines, and its time complexity is $O\big(N \binom{N}{j}\big)$. Similar to the analysis under $H_0$, the computational complexity of $\sum_{j=0}^{N} \Pr(t_i = j,\ t \in \mathcal{T}_{m,N})$ is $O(N 2^N)$; therefore, the time complexity of calculating the interval entropy is $O(N 2^N)$.
The computational complexity of $\sum_{g=0}^{N} \sum_{l=0}^{N-g} \Pr(t_i = g,\ t_j = l,\ t \in \mathcal{T}_{m,N})$ is $O(N 3^N)$; therefore, the time complexity of calculating the interval joint entropy $H_{i,j}$ is $O(N 3^N)$. The time complexity of calculating the PSER entropy is $O(m N 2^N)$, and the time complexity of calculating the variance of the sample entropy $Var(Z)$ is $O(m^2 N 3^N)$.

3. Signal Detector Based on the PSER Entropy

In this section, a signal detection method based on the PSER entropy is derived under the constant false alarm rate (CFAR) strategy, using the PSER entropy and the sample entropy variance obtained in Section 2. Because it operates on the full power spectrum, this method is also called the full power spectrum subband energy ratio entropy detector (FPSED).

3.1. Principle

Signal detection based on the PSER entropy takes the sample entropy as the detection statistic to judge whether there is a signal in the whole spectrum. The sample entropy is
$$ Z = \sum_{i=0}^{m-1} Y_i. $$
The PSER entropy of GWN is obviously different from that of the mixed signal. In general, the PSER entropy of the mixed signal will be less than that of GWN, but sometimes it will also be greater than that of GWN. This can be seen in Figure 4.
In Figure 4a, the PSER entropy of GWN is higher than that of the noisy Ricker signal. However, the PSER entropy of GWN is lower than that of the noisy Ricker signal in Figure 4b. Therefore, when setting the detection threshold of the PSER entropy detector, the relationship between the PSER entropy of the signal and that of noise should be considered.

3.1.1. The PSER Entropy of a Signal Less Than That of GWN

When the PSER entropy of the signal is less than that of GWN, let the detection threshold for the test statistic be $\eta_1$. If the test statistic is less than $\eta_1$, the signal is deemed to be present; otherwise it is deemed absent, i.e.,
$$ \begin{cases} Z > \eta_1, & H_0 \\ Z < \eta_1, & H_1. \end{cases} $$
The distribution of $Z$ is treated as Gaussian in this paper, so
$$ Z \sim \begin{cases} N(\mu_0, \sigma_0^2), & H_0 \\ N(\mu_1, \sigma_1^2), & H_1. \end{cases} $$
Under the CFAR strategy, the false alarm probability $P_f$ can be expressed as follows:
$$ P_f = \Pr(Z < \eta_1 \mid H_0) = 1 - Q\!\left(\frac{\eta_1 - \mu_0}{\sigma_0}\right). $$
$\eta_1$ can be derived from Equation (47):
$$ \eta_1 = Q^{-1}(1 - P_f)\, \sigma_0 + \mu_0. $$
The detection probability $P_d$ can be expressed as follows:
$$ P_d = \Pr(Z < \eta_1 \mid H_1) = 1 - Q\!\left(\frac{\eta_1 - \mu_1}{\sigma_1}\right). $$
Substituting Equation (48) into Equation (49), $P_d$ can be evaluated as follows:
$$ P_d = 1 - Q\!\left(\frac{Q^{-1}(1 - P_f)\, \sigma_0 + \mu_0 - \mu_1}{\sigma_1}\right). $$
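A minimal sketch of this decision rule, assuming $\mu_0$, $\sigma_0$, $\mu_1$, and $\sigma_1$ are already available (from theory or simulation), is given below; it simply implements Equations (48) and (50) with the Gaussian $Q$ function, and the function name is ours.

```python
from scipy.stats import norm

def pser_entropy_detector(pf, mu0, sigma0, mu1, sigma1):
    """Threshold eta_1 and detection probability P_d for the case where the
    PSER entropy of the signal is below that of GWN (Equations (48) and (50)).
    Note Q(x) = 1 - Phi(x), so Q^{-1}(1 - Pf) = Phi^{-1}(Pf)."""
    eta1 = norm.ppf(pf) * sigma0 + mu0        # eta_1 = Q^{-1}(1 - Pf) sigma_0 + mu_0
    pd = norm.cdf((eta1 - mu1) / sigma1)      # P_d = 1 - Q((eta_1 - mu_1) / sigma_1)
    return eta1, pd

# at run time: decide H1 whenever the measured sample entropy Z < eta1
```

The opposite case treated in the next subsection only changes the tail of the comparison, with the threshold $\eta_2$ and the decision $Z > \eta_2 \Rightarrow H_1$.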

3.1.2. The PSER Entropy of a Signal Larger Than That of GWN

When the PSER entropy of the signal is larger than that of GWN, let the threshold be $\eta_2$. If the test statistic is larger than $\eta_2$, the signal is deemed to be present; otherwise it is deemed absent, i.e.,
$$ \begin{cases} Z < \eta_2, & H_0 \\ Z > \eta_2, & H_1. \end{cases} $$
The false alarm probability $P_f$ is
$$ P_f = \Pr(Z > \eta_2 \mid H_0) = Q\!\left(\frac{\eta_2 - \mu_0}{\sigma_0}\right), $$
and
$$ \eta_2 = Q^{-1}(P_f)\, \sigma_0 + \mu_0. $$
The detection probability $P_d$ is
$$ P_d = \Pr(Z > \eta_2 \mid H_1) = Q\!\left(\frac{\eta_2 - \mu_1}{\sigma_1}\right). $$
Substituting Equation (53) into Equation (54), $P_d$ can be evaluated as follows:
$$ P_d = Q\!\left(\frac{Q^{-1}(P_f)\, \sigma_0 + \mu_0 - \mu_1}{\sigma_1}\right). $$

3.2. Other Detection Methods

In the following experiments, the PSER entropy detector is compared with the commonly used full spectrum energy detection (FSED) [29] and matched-filtering detector (MFD) methods under the same conditions. In this section, we introduce these two detectors.

3.2.1. Full Spectrum Energy Detection

The performance of FSED is exactly the same as that of classical energy detection (ED). The total spectral energy is measured by the sum of all spectral lines in the power spectrum, that is,
$$ T_{FD} = \sum_{k=0}^{N-1} P(k). $$
Let $\gamma$ be the SNR, that is, $\gamma = \frac{1}{N^2 \sigma^2} \sum_{k=0}^{N-1} \big(X_R^2(k) + X_I^2(k)\big)$. When the detection length $N$ is large enough, $T_{FD}$ obeys a Gaussian distribution:
$$ T_{FD} \sim \begin{cases} N\big(N\sigma^2,\ N\sigma^4\big), & H_0 \\ N\big((1+\gamma) N\sigma^2,\ (1+2\gamma) N\sigma^4\big), & H_1. \end{cases} $$
Let the threshold be $\eta_{FD}$. The false alarm probability and detection probability can be expressed as follows:
$$ P_f = \Pr(T_{FD} \ge \eta_{FD} \mid H_0) = Q\!\left(\frac{\eta_{FD} - N\sigma^2}{\sqrt{N}\,\sigma^2}\right), $$
$$ P_d = \Pr(T_{FD} > \eta_{FD} \mid H_1) = Q\!\left(\frac{Q^{-1}(P_f) - \gamma\sqrt{N}}{\sqrt{1 + 2\gamma}}\right), $$
where $\eta_{FD} = \big(Q^{-1}(P_f)\sqrt{N} + N\big)\sigma^2$.
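For comparison in the experiments, the FSED detection probability can be evaluated directly from the expression above; a short sketch (ours, not the reference implementation of [29]) is given below.

```python
import numpy as np
from scipy.stats import norm

def fsed_pd(pf, snr, N):
    """FSED detection probability from the expression above:
    P_d = Q((Q^{-1}(Pf) - gamma * sqrt(N)) / sqrt(1 + 2 * gamma))."""
    return norm.sf((norm.isf(pf) - snr * np.sqrt(N)) / np.sqrt(1 + 2 * snr))
```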

3.2.2. Matched Filter Detection

The main advantage of matched filtering is the short time needed to achieve a given false alarm probability or detection probability. However, it requires perfect knowledge of the signal. In the time domain, the detection statistic of matched filtering is
$$ T_{MFD} = \sum_{n=0}^{L-1} s(n)\, x(n), $$
where $s(n)$ is the signal to be detected and $x(n)$ is the known transmitted signal. Let $E = \sum_{n=0}^{L-1} x^2(n)$, i.e., the total energy of the transmitted signal, and let $\eta_{MFD}$ be the threshold. The false alarm probability and detection probability can be expressed as follows:
$$ P_f = \Pr(T_{MFD} \ge \eta_{MFD} \mid H_0) = Q\!\left(\frac{\eta_{MFD}}{\sigma\sqrt{E}}\right), $$
$$ P_d = \Pr(T_{MFD} \ge \eta_{MFD} \mid H_1) = Q\!\left(\frac{\eta_{MFD} - E}{\sigma\sqrt{E}}\right), $$
where $\eta_{MFD} = Q^{-1}(P_f)\, \sigma\sqrt{E}$.
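A corresponding sketch of the matched-filter decision, assuming the transmitted waveform and the noise standard deviation are known exactly, is given below; the function name is ours.

```python
import numpy as np
from scipy.stats import norm

def mfd_detect(s, x, sigma, pf):
    """Matched-filter decision: correlate the received samples s(n) with the
    known transmitted waveform x(n) and compare T_MFD with the CFAR threshold
    eta_MFD = Q^{-1}(Pf) * sigma * sqrt(E), where E is the template energy."""
    t_mfd = float(np.dot(s, x))              # T_MFD = sum_n s(n) x(n)
    energy = float(np.dot(x, x))             # E = sum_n x(n)^2
    eta = norm.isf(pf) * sigma * np.sqrt(energy)
    return t_mfd >= eta                      # True -> decide H1
```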

4. Experiments

For this section, we verified and compared the detection performances of the FPSED, FSED, and MFD discussed in Section 3 through Monte Carlo simulations. The primary signal was a binary phase shift keying (BPSK) modulated signal with a symbol rate of 1 kbit/s, a carrier frequency of 1000 Hz, and a sampling frequency of 10^5 Hz.
We performed all Monte Carlo simulations for at least 10^4 independent trials and set Pf to 0.05. We used the mean-square error (MSE) to measure the deviation between the theoretical values and the actual statistical results. All programs in the experiment were run in MATLAB on a laptop with a Core i5 CPU and 16 GB of RAM.
Since the PSER entropy μ1 and the sample entropy variances σ0² and σ1² cannot be calculated in practice (see Sections 2.4.5 and 2.5.5), a large number of simulation data were generated in the experiments to obtain μ̂1, σ̂0², and σ̂1², which replace μ1, σ0², and σ1², respectively.
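Under H0 this estimation is straightforward; the sketch below (reusing the `pser` and `sample_entropy` helpers introduced earlier, with an illustrative number of runs) produces the estimates μ̂0 and σ̂0², and the H1 estimates are obtained in the same way after adding the modulated signal to the noise.

```python
import numpy as np

def estimate_h0_stats(m, N, runs=10000):
    """Monte Carlo estimates of mu_0 and sigma_0^2: simulate GWN records,
    compute the sample entropy of each PSER sequence (pser and sample_entropy
    are the earlier sketches), and take the empirical mean and variance."""
    rng = np.random.default_rng(0)
    Z = [sample_entropy(pser(rng.standard_normal(2 * N)), m) for _ in range(runs)]
    return float(np.mean(Z)), float(np.var(Z))

mu0_hat, var0_hat = estimate_h0_stats(m=500, N=512)   # compare with Table 1
```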

4.1. Experiments under H0

This section verifies whether the calculated values of each statistic are correct by comparing the statistical results with the theoretical calculation results. The effects of the noise intensity, the number of spectral lines, and the number of intervals on the interval probability, interval entropy, PSER entropy, and variance of the sample entropy are analyzed.

4.1.1. Influence of Noise

According to the probability density function of the PSER, the PSER is independent of the noise intensity. Therefore, the noise intensity has no effect on any statistic under H0. In the following experiment, the noise variance takes 10 values from 0.1 to 1 in steps of 0.1, with N = 512 and m = 500.
In Figure 5 and Figure 6, the theoretical and actual values of the interval probability and interval entropy under different noise intensities are compared. The results show that the theoretical values are in good agreement with the statistical values and that the noise intensity has no effect on the interval probability. In the first few intervals (i < 7), the interval probabilities are large, so the interval entropies are also large and contribute more to the total entropy.
In Figure 7, the theoretical values of PSER entropies are compared with the actual values. It can be seen that the theoretical values are basically consistent with the actual values, but the actual values are slightly smaller than the theoretical values. The actual values do not change with noise intensity, indicating that noise intensity has no effect on PSER entropy.
Since the theoretical variance of sample entropy cannot be calculated, only the variation of the actual variance of sample entropy with noise intensity is shown in Figure 8. It can be seen that the variance of sample entropy does not change with the noise intensity.
The above experiments show that the actual statistical results are consistent with the theoretical calculation results, indicating that the calculation methods of interval probability, interval entropy, PSER entropy, and sample entropy variance determined in this paper are correct.

4.1.2. Influence of N

In Figure 9, the effect of N on the interval probability is shown. When m is fixed, the larger N is, the larger p0 is, and the smaller the interval probabilities of the other intervals become. The reason is that the larger N is, the smaller the energy ratio of each spectral line to the entire power spectrum becomes.
In Figure 10, the effect of N on the interval entropy is shown. The larger N is, the smaller Hi becomes.
In Figure 11, the effect of N on the PSER entropy is shown. When m is fixed, the larger N is, the smaller H(m, N) becomes. When N is the same, the larger m is, the larger the PSER entropy is.
In Figure 12, the effect of N on the variance of the sample entropy is shown. When m is fixed, the larger N is, the smaller V a r ( Z ) becomes.

4.1.3. Influence of m

In Figure 13, the effect of m on the PSER entropy is shown. When N is fixed, the larger m is, the larger H ( m , N ) becomes.
In Figure 14, the effect of m on the variance of the sample entropy is shown. When N is fixed, as m increases, the variance of the sample entropy first increases, then decreases, and then increases again slowly.

4.1.4. The Parameters for the Next Experiment

After theoretical calculations and experimental statistical analysis, the PSER entropy and sample entropy variance used in the following experiments were obtained, as shown in Table 1.

4.2. Experiments under H1

When N is fixed, the change of the PSER entropy of the BPSK signal with noise under H1 is shown in Figure 15. It can be seen that the PSER entropy of the BPSK signal decreases gradually as the SNR increases. When the SNR is less than −15 dB, the PSER entropy of the noise and that of the BPSK signal are almost the same; therefore, it is impossible to distinguish between the noise and the BPSK signal.
Since the PSER entropy of the BPSK signal is always less than that of noise, the threshold η 1 should be used in BPSK signal detection.
As can be seen from Figure 16, with the increase of SNR, the sample entropy variance of the BPSK signal first increases and then gradually decreases.
When m is fixed, the changes of the PSER entropy and of the sample entropy variance of the BPSK signal with noise under H1 are shown in Figure 17 and Figure 18. It can be seen that the PSER entropy of the BPSK signal decreases gradually as the SNR increases.
As can be seen from Figure 18, with the increase of SNR, the sample entropy variance of the BPSK signal first increases and then gradually decreases.

4.3. Comparison of Detection Performance

When N is 512 and m is 200, 500, and 1000, the detection results for the BPSK signal are shown in Figure 19, Figure 20 and Figure 21.
In Figure 19, the actual false alarm probabilities fluctuate slightly and do not change with the SNR, which is consistent with the characteristics of constant false alarm.
It can be seen from Figure 20 and Figure 21 that the detection probability of the PSER entropy detector is obviously better than that of the FSED method when m is 1000. However, when m is 200, the detection performance is lower than that of FSED. There is no doubt that the detection performance of matched filtering is the best.
The MSEs of the actual and theoretical probabilities of these experiments are given in Table 2.
The deviation between the actual and theoretical detection probabilities was very small, which indicated that the PSER entropy detector was accurate.
When m is 500 and N is 256, 512, and 1024, the detection results for the BPSK signal are shown in Figure 22, Figure 23 and Figure 24.
It can be seen from Figure 23 and Figure 24 that when m is fixed, a larger N does not necessarily imply a better detection performance. The detection probability when N is 1024 is lower than that when N is 512. However, the detection performance of the full spectrum energy detection method will improve with the increase of N .
The MSEs of the actual and theoretical probabilities of these experiments are given in Table 3. The deviations between the actual and theoretical detection probabilities were very small when N was 512 or 1024, but larger when N was 256.

5. Discussions

5.1. Theoretical Calculation of Statistics

In Section 2, we analyzed the computational time complexity of each statistic. When m and N are large, the theoretical values of some statistics, such as Var(Z) under H0, and Hi, Hi,j, H(m, N), and Var(Z) under H1, cannot be calculated. This restricts further analysis of the detection performance of the PSER entropy detector.
There are two ways to solve this problem: reducing the computational complexity and finding an approximate solution. Which way is feasible requires further study.

5.2. Experience of Selecting Parameters

The detection probability P d of PSER entropy detection is related to the number of intervals m , the number of power spectrum lines N , and the SNR of the spectrum lines. Since the mathematical expressions of many statistics are too complex, the influence of the three factors on P d cannot be accurately analyzed at present. Based on a large number of experiments, we summarize the following experiences as recommendations for setting parameters.
(1) m cannot be too small. It can be seen from Figure 20 that, if m is too small, the detection performance of the PSER entropy detector will be lower than that of the energy detector. We suggest that m ≥ 500.
(2) N should be close to m. It can be seen from Figure 23 that a larger N is not necessarily better. A large number of experiments show that the detection probability is good when N is close to m.
(3) When N is fixed, m can be adjusted appropriately through experiments.

5.3. Advantages of the PSER Entropy Detector

When using PSER entropy detection, the noise intensity does not need to be estimated in advance, and prior information of the signal to be detected is not needed. Therefore, the PSER entropy detector is a typical blind signal detection method.

5.4. Further Research

The detection performance of the PSER entropy detector will be further improved if some methods are used to improve the SNR of signals. In future research, a denoising method can be used to improve the SNR, the Welch or autoregressive method can be used to improve the quality of power spectrum estimation, and multi-channel cooperative detection can be used to increase the accuracy of detection.

6. Conclusions

In this paper, the statistical characteristics of the PSER entropy are derived through strict mathematical analysis, and theoretical formulas for calculating the PSER entropy and the sample entropy variance are obtained for pure noise and for signals mixed with noise. In the derivation, we do not use the classical method of approximating the PSER entropy by the differential entropy; instead, we use interval division and the multinomial distribution to calculate the PSER entropy. The results of this method are consistent with the simulation results, which shows that the method is correct, and the method is suitable for both large and small numbers of intervals. A signal detector based on the PSER entropy was then constructed from these statistical characteristics. The performance of the PSER entropy detector is obviously better than that of the classical energy detector. The method does not need an estimate of the noise intensity, nor does it require any prior information about the signal to be detected; therefore, it is a completely blind signal detector.
The PSER entropy detector can not only be used in spectrum sensing, but also in vibration signal detection, seismic monitoring, and pipeline safety monitoring.

Author Contributions

Conceptualization, Y.H. and H.L.; formal analysis, Y.H. and H.L.; investigation, Y.H. and H.L.; methodology, Y.H. and H.L.; validation Y.H., S.W. and H.L.; writing—original draft, H.L.; writing—review and editing, Y.H. and S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Science and Technology Nova Plan of Beijing City (No. Z201100006820122) and Fundamental Research Funds for the Central Universities (No. 2020RC14).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are simulated.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gupta, M.S.; Kumar, K. Progression on spectrum sensing for cognitive radio networks: A survey, classification, challenges and future research issues. J. Netw. Comput. Appl. 2019, 143, 47–76.
2. Arjoune, Y.; Kaabouch, N. A Comprehensive Survey on Spectrum Sensing in Cognitive Radio Networks: Recent Advances, New Challenges, and Future Research Directions. Sensors 2019, 19, 126.
3. Kabeel, A.A.; Hussein, A.H.; Khalaf, A.A.M.; Hamed, H.F.A. A utilization of multiple antenna elements for matched filter based spectrum sensing performance enhancement in cognitive radio system. AEU Int. J. Electron. Commun. 2019, 107, 98–109.
4. Chatziantoniou, E.; Allen, B.; Velisavljevic, V.; Karadimas, P.; Coon, J. Energy Detection Based Spectrum Sensing Over Two-Wave with Diffuse Power Fading Channels. IEEE Trans. Veh. Technol. 2017, 66, 868–874.
5. Reyes, H.; Subramaniam, S.; Kaabouch, N.; Hu, W.C. A spectrum sensing technique based on autocorrelation and Euclidean distance and its comparison with energy detection for cognitive radio networks. Comput. Electr. Eng. 2016, 52, 319–327.
6. Yucek, T.; Arslan, H. A survey of spectrum sensing algorithms for cognitive radio applications. IEEE Commun. Surv. Tutor. 2009, 11, 116–130.
7. Gismalla, E.H.; Alsusa, E. Performance Analysis of the Periodogram-Based Energy Detector in Fading Channels. IEEE Trans. Signal Process. 2011, 59, 3712–3721.
8. Dikmese, S.; Ilyas, Z.; Sofotasios, P.C.; Renfors, M.; Valkama, M. Sparse Frequency Domain Spectrum Sensing and Sharing Based on Cyclic Prefix Autocorrelation. IEEE J. Sel. Areas Commun. 2017, 35, 159–172.
9. Li, H.; Hu, Y.; Wang, S. Signal Detection Based on Power-Spectrum Sub-Band Energy Ratio. Electronics 2021, 10, 64.
10. Zhang, Y.L.; Zhang, Q.Y.; Melodia, T. A frequency-domain entropy-based detector for robust spectrum sensing in cognitive radio networks. IEEE Commun. Lett. 2010, 14, 533–535.
11. Prieto, G.; Andrade, A.G.; Martinez, D.M.; Galaviz, G. On the Evaluation of an Entropy-Based Spectrum Sensing Strategy Applied to Cognitive Radio Networks. IEEE Access 2018, 6, 64828–64835.
12. Nagaraj, S.V. Entropy-based spectrum sensing in cognitive radio. Signal Process. 2009, 89, 174–180.
13. Gu, J.; Liu, W.; Jang, S.J.; Kim, J.M. Spectrum Sensing by Exploiting the Similarity of PDFs of Two Time-Adjacent Detected Data Sets with Cross Entropy. IEICE Trans. Commun. 2011, E94B, 3623–3626.
14. Zhang, Y.; Zhang, Q.; Wu, S. Entropy-based robust spectrum sensing in cognitive radio. IET Commun. 2010, 4, 428.
15. Nikonowicz, J.; Jessa, M. A novel method of blind signal detection using the distribution of the bin values of the power spectrum density and the moving average. Digit. Signal Process. 2017, 66, 18–28.
16. Zhao, N. A Novel Two-Stage Entropy-Based Robust Cooperative Spectrum Sensing Scheme with Two-Bit Decision in Cognitive Radio. Wirel. Pers. Commun. 2013, 69, 1551–1565.
17. Ejaz, W.; Shah, G.A.; Ul Hasan, N.; Kim, H.S. Optimal Entropy-Based Cooperative Spectrum Sensing for Maritime Cognitive Radio Networks. Entropy 2013, 15, 4993–5011.
18. So, J. Entropy-based Spectrum Sensing for Cognitive Radio Networks in the Presence of an Unauthorized Signal. KSII Trans. Internet Inf. 2015, 9, 20–33.
19. Ye, F.; Zhang, X.; Li, Y. Collaborative Spectrum Sensing Algorithm Based on Exponential Entropy in Cognitive Radio Networks. Symmetry 2016, 8, 112.
20. Zhu, W.; Ma, J.; Faust, O. A Comparative Study of Different Entropies for Spectrum Sensing Techniques. Wirel. Pers. Commun. 2013, 69, 1719–1733.
21. Islam, M.R.; Uddin, J.; Kim, J. Acoustic Emission Sensor Network Based Fault Diagnosis of Induction Motors Using a Gabor Filter and Multiclass Support Vector Machines. Adhoc Sens. Wirel. Netw. 2016, 34, 273–287.
22. Akram, J.; Eaton, D.W. A review and appraisal of arrival-time picking methods for downhole microseismic data. Geophysics 2016, 81, 71–91.
23. Legese Hailemariam, Z.; Lai, Y.C.; Chen, Y.H.; Wu, Y.H.; Chang, A. Social-Aware Peer Discovery for Energy Harvesting-Based Device-to-Device Communications. Sensors 2019, 19, 2304.
24. Pei-Han, Q.; Zan, L.; Jiang-Bo, S.; Rui, G. A robust power spectrum split cancellation-based spectrum sensing method for cognitive radio systems. Chin. Phys. B 2014, 23, 537–547.
25. Mei, F.; Hu, C.; Li, P.; Zhang, J. Study on main frequency precursor characteristics of acoustic emission from deep buried Dali rock explosion. Arab. J. Geosci. 2019, 12, 645.
26. Moddemeijer, R. On estimation of entropy and mutual information of continuous distributions. Signal Process. 1989, 16, 233–248.
27. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006; p. 737.
28. Chung, K.L.; Sahlia, F.A. Elementary Probability Theory, with Stochastic Processes and an Introduction to Mathematical Finance; Springer: New York, NY, USA, 2004; p. 402.
29. Sarker, M.B.I. Energy Detector Based Spectrum Sensing by Adaptive Threshold for Low SNR in CR Networks; Wireless and Optical Communication Conference: New York, NY, USA, 2015; pp. 118–122.
Figure 1. Comparison between the power spectrum subband energy ratio (PSER) entropy calculated using differential entropy and the real PSER entropy.
Figure 2. The interval probability under H0.
Figure 3. The interval probability of PSER under H1.
Figure 4. Comparison of the PSER entropy between GWN and the mixed signal (N = 1024, signal-noise ratio (SNR) = 0 dB). (a) m = 1000; (b) m = 500. The blue broken line is the sample entropy of GWN, and the red broken line is the sample entropy of the Ricker signal. The black line is the PSER entropy of GWN, and the blue line is the PSER entropy of the Ricker signal.
Figure 5. Comparison of the theoretical interval probability and the actual interval probability.
Figure 6. Comparison of the theoretical interval entropy and the actual interval entropy.
Figure 7. Comparison of theoretical PSER entropy and actual PSER entropy.
Figure 8. The actual variance of sample entropy.
Figure 9. The effect of N on the interval probability (m = 500, N = 256, 512, 1024).
Figure 10. The effect of N on the interval entropy (m = 500, N = 256, 512, 1024).
Figure 11. The effect of N on the PSER entropy (m = 200, 500, 1000).
Figure 12. The effect of N on the variance of the sample entropy (m = 200, 500, 1000).
Figure 13. The effect of m on the PSER entropy (N = 256, 512, 1024).
Figure 14. The effect of m on the variance of the sample entropy (N = 256, 512, 1024).
Figure 15. The change of PSER entropy of the binary phase shift keying (BPSK) signal with noise when N is fixed (m = 200, 500, 1000, N = 512).
Figure 16. The change of sample entropy variance of the BPSK signal with noise when N is fixed (m = 200, 500, 1000, N = 512).
Figure 17. The change of PSER entropy of the BPSK signal with noise when m is fixed (m = 500, N = 256, 512, 1024).
Figure 18. The change of sample entropy variance of the BPSK signal with noise when m is fixed (m = 500, N = 256, 512, 1024).
Figure 19. Actual false alarm probabilities of the PSER entropy detector (m = 200, 500, 1000, N = 512).
Figure 20. Detection probabilities of full spectrum energy detection (FSED), matched-filtering detector (MFD) and PSER entropy detectors (m = 200, 500, 1000, N = 512).
Figure 21. Receiver operating characteristic (ROC) curve of FSED and PSER entropy detectors (m = 200, 500, 1000, N = 512).
Figure 22. Actual false alarm probabilities of the PSER entropy detectors (m = 500, N = 256, 512, 1024).
Figure 23. Detection probabilities of FSED and PSER entropy detectors (m = 500, N = 256, 512, 1024).
Figure 24. ROC of FSED and PSER entropy detectors (m = 500, N = 256, 512, 1024).
Table 1. The parameters under H0.

N = 256:  m = 200: μ0 = 0.809521, σ̂0² = 0.000453;  m = 500: μ0 = 1.652905, σ̂0² = 0.000169;  m = 1000: μ0 = 2.313999, σ̂0² = 0.000192
N = 512:  m = 200: μ0 = 0.291461, σ̂0² = 0.000466;  m = 500: μ0 = 1.011005, σ̂0² = 0.000159;  m = 1000: μ0 = 1.665232, σ̂0² = 6.54 × 10−5
N = 1024: m = 200: μ0 = 0.035906, σ̂0² = 0.000130;  m = 500: μ0 = 0.439132, σ̂0² = 0.000202;  m = 1000: μ0 = 1.014594, σ̂0² = 7.7 × 10−5

Table 2. Mean-square errors (MSEs) between actual and theoretical probabilities (N = 512).

Probability    m = 200          m = 500          m = 1000
Pf             0.3059 × 10−4    0.3887 × 10−4    2.6785 × 10−4
Pd             0.5723 × 10−4    0.2777 × 10−4    1.1280 × 10−4

Table 3. MSEs between actual and theoretical probabilities (m = 500).

Probability    N = 256             N = 512             N = 1024
Pf             4.687958 × 10−4     0.622887 × 10−4     0.125177 × 10−4
Pd             2.067099 × 10−4     0.398899 × 10−4     0.145410 × 10−4
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
