Article

Recognition of Micro-Motion Jamming Based on Complex-Valued Convolutional Neural Network

1 Information and Navigation School, Air Force Engineering University, Xi’an 710077, China
2 Equipment Management and Unmanned Aerial Vehicle Engineering School, Air Force Engineering University, Xi’an 710051, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(3), 1118; https://doi.org/10.3390/s23031118
Submission received: 19 December 2022 / Revised: 13 January 2023 / Accepted: 16 January 2023 / Published: 18 January 2023
(This article belongs to the Section Sensor Networks)

Abstract

Micro-motion jamming is a new jamming method against inverse synthetic aperture radar (ISAR) that has emerged in recent years. Compared with traditional jamming methods, it is more flexible and controllable, and poses a great threat to ISAR. The prerequisite for taking relevant anti-jamming measures is to recognize the pattern of the micro-motion jamming. In this paper, a micro-motion jamming pattern recognition method based on a complex-valued convolutional neural network (CV-CNN) is proposed. The micro-motion jamming echo signals are serialized and input to the network, which outputs the recognition result. Compared with a real-valued convolutional neural network (RV-CNN), the proposed method achieves a higher recognition accuracy. Additionally, the recognition accuracy is analyzed under different signal-to-noise ratios (SNR) and numbers of training samples. Simulation results prove the effectiveness of the proposed recognition method.

1. Introduction

With the increasing complexity of the electromagnetic environment, electronic warfare plays an important role in modern warfare [1]. Inverse synthetic aperture radar (ISAR) is widely used in modern radar warning systems (RWS), providing high-resolution situation-awareness information for combat [2,3,4,5,6,7]. Notably, the widely developed micro-motion feature extraction technology [8,9,10,11,12] makes it possible to realize better target identification that is immune to traditional ISAR jamming methods. ISAR jamming methods are mainly divided into suppression jamming [13,14] and deception jamming [15,16]. Suppression jamming suppresses the echo signal of the target by transmitting a high-power noise signal or various noise-modulated signals. By modulating jamming signals containing the characteristic information of the real target, deception jamming can form false targets similar to the real target after ISAR pulse compression processing.
Recently, a new ISAR jamming method called micro-motion jamming was proposed. Different from traditional ISAR jamming, the micro-Doppler effect generated by micro-motion modulation can affect ISAR imaging, making it difficult to identify the target correctly. Additionally, modulating jamming signals with the micro-motion features of false targets can generate false micro-motion signatures. Thus, this kind of jamming method can work effectively against micro-motion feature extraction technology and increase the difficulty of target identification [17,18,19,20]. Therefore, it is worthwhile to research efficient anti-micro-motion-jamming methods so that ISAR can perform well. According to the operating principle, micro-motion jamming can be categorized into three types: modulation and repeater micro-motion jamming, micro-motion scattered wave jamming, and pulse convolution micro-motion jamming [21,22]. To ensure the jamming effect, two or more jamming signals are often employed simultaneously, so ISAR tends to be confronted with various jamming [23,24]. In this case, recognizing the ISAR jamming pattern is the prerequisite for taking corresponding anti-jamming measures. At present, related research has been carried out on the recognition of ISAR jamming patterns. The traditional approach is to extract signal features that distinguish the target echo from different ISAR jamming patterns, and then input these features into a classifier to realize ISAR jamming recognition [23,24,25,26]. Although these feature-extraction-based methods can achieve a high recognition rate, they require highly professional and experienced manual feature extraction to determine the recognition strategy. Their efficiency may degrade when various complex micro-motion jamming signals are deployed simultaneously.
Thanks to the development of artificial intelligence technology, deep learning has been successfully applied to ISAR jamming pattern recognition [27,28,29,30,31,32,33,34,35]. For instance, Wang et al. [27] implemented the recognition of jamming patterns by CNN for three kinds of jamming, including suppression jamming, multiple false targets jamming, and narrow-pulse jamming, using the short-time Fourier transform image of the signal as the input to the CNN. Shao et al. [28] proposed a CNN-based Siamese network to solve the problem of insufficient training samples and recognized 12 kinds of radar jamming signals. Qu et al. [29] designed a jamming recognition network by integrating residual blocks and asymmetric convolution blocks with the power-spectrum characteristics of jamming signals, realizing the recognition of 10 kinds of suppression jamming signals. However, the jamming patterns recognized by these methods mainly focus on traditional suppression and deception jamming signals. Research on pattern recognition for new types of jamming such as micro-motion jamming is still scarce and needs further investigation.
It should be noted that the micro-motion of the target is directly embedded in the radar echo, so most micro-motion features can be extracted by analyzing the complex echoes. Therefore, for micro-motion jamming signal recognition, this paper proposes a micro-motion jamming pattern recognition method based on a complex-valued convolutional neural network (CV-CNN). In particular, micro-motion jamming is a kind of inter-pulse coherent jamming, so it is difficult to identify micro-motion jamming patterns from the signal features of a single pulse alone. Therefore, in this paper, multiple pulse signals of the radar echoes are serialized and input to the network. The main contributions of this paper are as follows:
  • To our knowledge, this is the first work to realize micro-motion jamming recognition based on artificial intelligence technology. The simulation results verify the feasibility and classification performance of the proposed method.
  • Compared with methods based on the real-valued convolutional neural network (RV-CNN), the proposed method can learn more detailed features and thus effectively improve the recognition accuracy.
  • Compared with traditional feature-based micro-motion jamming signal recognition methods, the proposed CV-CNN-based method reduces the reliance on manual feature extraction.
The remainder of this paper is organized as follows. In Section 2, the mechanisms of three typical micro-motion jamming patterns are introduced. In Section 3, the micro-motion jamming pattern recognition method based on CV-CNN is given. In Section 4, relevant simulation results and a comprehensive analysis are presented. In Section 5, the hyperparameter settings are discussed, and Section 6 concludes the paper.

2. Micro-Motion Jamming Signal Model

Assuming that the ISAR transmits a linear frequency modulation (LFM) signal, it can be expressed as:
$$s(\hat{t}, t_m) = \operatorname{rect}\!\left(\frac{\hat{t}}{\tau}\right) \exp\!\left[ j 2\pi \left( f_0 t + \frac{1}{2} k \hat{t}^{\,2} \right) \right]$$
where $\tau$ is the pulse width, $f_0$ is the radar carrier frequency, $k$ is the chirp rate, $\hat{t}$ is the fast time, $t_m$ is the slow time, and $t = \hat{t} + t_m$ represents the full time. $\operatorname{rect}(\cdot)$ is the rectangular window function, defined as:
$$\operatorname{rect}\!\left(\frac{\hat{t}}{\tau}\right) = \begin{cases} 1, & |\hat{t}| \le \tau/2 \\ 0, & |\hat{t}| > \tau/2 \end{cases}$$
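As a concrete illustration, the transmitted pulse above can be sampled numerically. This is a minimal sketch assuming a baseband representation ($f_0 = 0$); the sampling rate fs is our own illustrative choice, not a value from the paper.

```python
import numpy as np

# Minimal numerical sketch of the LFM pulse inside its rect window.
# fs is an assumed sampling rate; the pulse is taken at baseband (f0 = 0).
def lfm_pulse(tau=5e-6, bandwidth=400e6, f0=0.0, fs=1e9):
    k = bandwidth / tau                      # chirp rate
    n = int(round(tau * fs))                 # samples inside the rect window
    t_hat = np.arange(n) / fs - tau / 2      # fast time
    return np.exp(1j * 2 * np.pi * (f0 * t_hat + 0.5 * k * t_hat ** 2))

s = lfm_pulse()
print(s.shape, np.allclose(np.abs(s), 1.0))  # constant modulus inside the window
```

The constant modulus reflects that the LFM pulse carries all its structure in the phase, which is why phase-preserving processing matters for the jamming models below.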

2.1. Modulation and Repeater Micro-Motion Jamming

After intercepting the ISAR transmit signal, the jammer calculates the phase modulation of the false micro-motion point and the forwarding delay, and then forwards the jamming signal to the ISAR receiver after the phase modulation of the radar signal containing the micro-motion jamming information. Taking the rotating micro-motion point P for example, the phase modulation term of the jammer can be expressed as:
$$e(t_m) = \sigma_P \exp\!\left( j r_P \sin(\omega_P t_m + \varphi) \right)$$
where $\sigma_P$ is the scattering coefficient of point P, $r_P$ is the rotation radius of point P, $\omega_P$ is the rotation angular velocity of point P, and $\varphi$ is the initial phase. Without considering noise, the radar echo of point P can be described as:
$$s_P(\hat{t}, t_m) = \operatorname{rect}\!\left(\frac{\hat{t} - 2R_P(t_m)/c}{\tau}\right) \exp\!\left[ j 2\pi \left( f_0 \left(t - \frac{2R_P(t_m)}{c}\right) + \frac{1}{2} k \left(\hat{t} - \frac{2R_P(t_m)}{c}\right)^{2} \right) \right]$$
where $R_P(t_m)$ represents the distance between point P and the radar, and $c$ is the speed of light. The jamming signal after micro-motion modulation, forwarded by the jammer, can be expressed as:
$$\begin{aligned} J_1(\hat{t}, t_m) &= s_P(\hat{t} - \Delta t, t_m)\, e(t_m) \\ &= \operatorname{rect}\!\left(\frac{\hat{t} - 2R_P(t_m)/c - \Delta t}{\tau}\right) e(t_m) \exp\!\left[ j 2\pi \left( f_0 \left(t - \frac{2R_P(t_m)}{c} - \Delta t\right) + \frac{1}{2} k \left(\hat{t} - \frac{2R_P(t_m)}{c} - \Delta t\right)^{2} \right) \right] \end{aligned}$$
where $\Delta t$ represents the forwarding delay of the jammer. By changing the forwarding delay and the number of forwardings, the location and number of jamming strips can be controlled, producing different degrees of effect on ISAR imaging.
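The modulation-and-repeater mechanism can be sketched numerically: a delayed copy of the intercepted pulse is multiplied by the slow-time phase term $e(t_m)$. All parameter values below (PRF, delay, rotation rate, radius) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Sketch of modulation-and-repeater micro-motion jamming: delay the
# intercepted LFM replica in fast time, then apply the slow-time
# micro-motion phase term e(t_m) pulse by pulse.
fs, tau = 200e6, 5e-6
n = int(round(tau * fs))                          # 1000 fast-time samples
k = 400e6 / tau                                   # chirp rate
t_hat = np.arange(n) / fs - tau / 2
pulse = np.exp(1j * np.pi * k * t_hat ** 2)       # baseband LFM replica

sigma_P, r_P, omega_P, phi = 1.0, 2.0, 2 * np.pi * 4, 0.0
t_m = np.arange(64) / 200.0                       # 64 pulses at an assumed 200 Hz PRF
e = sigma_P * np.exp(1j * r_P * np.sin(omega_P * t_m + phi))  # phase modulation term

dt_samples = 50                                   # forwarding delay, in samples
jam = np.zeros((t_m.size, n), dtype=complex)
jam[:, dt_samples:] = pulse[: n - dt_samples]     # delayed copy in fast time
jam = jam * e[:, None]                            # micro-motion phase per pulse
print(jam.shape)                                  # (64, 1000)
```

The phase term has unit modulus here, so the jamming energy is unchanged while its slow-time phase history carries the false micro-Doppler signature.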

2.2. Micro-Motion Scattered Wave Jamming

After intercepting the ISAR transmit signal, the jammer modulates the micro-motion jamming information into the phase of the radar signal. Then, the jammer forwards the jamming signal to the target, which scatters the jamming signal to the ISAR receiver. Assuming that the distance between the radar and the micro-motion point P is $R_{RP}(t_m)$, and the distance between the jammer and the micro-motion point P is $R_{TP}(t_m)$, the distance of the jamming signal echo path can be described as:
$$R_{TPR}(t_m) = R_{RP}(t_m) + R_{TP}(t_m)$$
Therefore, the micro-motion scattered wave jamming signal can be expressed as:
$$J_2(\hat{t}, t_m) = \operatorname{rect}\!\left(\frac{\hat{t} - 2R_{TPR}(t_m)/c - \Delta t}{\tau}\right) e(t_m) \exp\!\left[ j 2\pi \left( f_0 \left(t - \frac{2R_{TPR}(t_m)}{c} - \Delta t\right) + \frac{1}{2} k \left(\hat{t} - \frac{2R_{TPR}(t_m)}{c} - \Delta t\right)^{2} \right) \right]$$
The jammer and the ISAR can be equivalently regarded as the transmitter and receiver of a bistatic radar. Similar to modulation and repeater micro-motion jamming, micro-motion scattered wave jamming can generate jamming strips in azimuth. In addition, supposing that the equivalent bistatic angle is $\alpha$, according to the characteristics of the scattered-wave signal, micro-motion scattered wave jamming can generate 2-D images of false targets similar to the real target, which differ from the actual target image by an angle of $\alpha/2$.

2.3. Pulse Convolution Micro-Motion Jamming

After intercepting the ISAR transmit signal, the jammer uses pulse sequences with different delays to convolve with it and then forwards the jamming signal to the ISAR receiver. This jamming pattern is essentially a time-delayed forwarding of micro-motion points within selected range cells. The false micro-motion point echoes are generated by controlling the forwarding delay parameters through looping in a fixed period. Assuming a micro-motion point P on the target, its radar echoes can be expressed as:
$$s_P(\hat{t}, t_m) = \operatorname{rect}\!\left(\frac{\hat{t} - 2R_P(t_m)/c}{\tau}\right) \exp\!\left[ j 2\pi \left( f_0 \left(t - \frac{2R_P(t_m)}{c}\right) + \frac{1}{2} k \left(\hat{t} - \frac{2R_P(t_m)}{c}\right)^{2} \right) \right]$$
Assuming that the jammer forwarding delay $\Delta t$ varies cyclically with slow time $t_m$ within the interval $[t_1, t_2]$, the pulse sequence can be described as:
$$p(\hat{t}, t_m) = \delta\!\left(\hat{t} - \Delta t(t_m)\right)$$
where $\delta(\cdot)$ represents the unit impulse function. Let $f(t) = f_1(t) \otimes f_2(t)$ denote the convolution of $f_1(t)$ and $f_2(t)$; then the pulse convolution micro-motion jamming signal can be expressed as:
$$\begin{aligned} J_3(\hat{t}, t_m) &= p(\hat{t}, t_m) \otimes s_P(\hat{t}, t_m) \\ &= \operatorname{rect}\!\left(\frac{\hat{t} - 2R_P(t_m)/c - \Delta t(t_m)}{\tau}\right) \exp\!\left[ j 2\pi \left( f_0 \left(t - \frac{2R_P(t_m)}{c} - \Delta t(t_m)\right) + \frac{1}{2} k \left(\hat{t} - \frac{2R_P(t_m)}{c} - \Delta t(t_m)\right)^{2} \right) \right] \end{aligned}$$
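The delaying effect of the pulse convolution can be verified numerically: convolving a discrete echo with a shifted unit impulse reproduces the echo with a pure delay. The signal and delay below are illustrative stand-ins.

```python
import numpy as np

# Convolving an echo with a shifted unit impulse p(t_hat) = delta(t_hat - dt)
# simply delays the echo by dt samples; cycling dt over slow time is what
# synthesizes false micro-motion point echoes.
n = 256
echo = np.exp(1j * 2 * np.pi * 0.05 * np.arange(n))   # stand-in for s_P

def delayed_impulse(dt, length):
    p = np.zeros(length)
    p[dt] = 1.0                                       # discrete delta(t_hat - dt)
    return p

dt = 17
jam = np.convolve(echo, delayed_impulse(dt, 32))[:n]  # J3 = p * s_P (convolution)
print(np.allclose(jam[dt:], echo[: n - dt]))          # True: a pure dt-sample delay
```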

3. Recognition of Micro-Motion Jamming Based on CV-CNN

In this section, the main structure of CV-CNN is introduced and the function of each part is analyzed. In addition, the processing pipeline is described, together with the pre-processing of the input signal for micro-motion jamming recognition.

3.1. The Structure of CV-CNN

Considering that the recognized micro-motion jamming signals are complex-valued signals, CV-CNN can effectively preserve the amplitude and phase characteristics of the signal compared with RV-CNN. Similar to RV-CNN, CV-CNN consists of several cascaded layers, including an input layer, several convolution layers (including activation layers, pooling layers and dropout layers), a normalization layer, a global average pooling layer, a fully connected layer, and a Softmax layer. Meanwhile, a 1-D CV-CNN structure with serialization of 2-D multi-pulse signals is designed for efficient processing of 1-D signals, which will be explained in detail later. The proposed network framework of CV-CNN for micro-motion jamming recognition is shown in Figure 1.
In the complex convolution calculation, suppose that the kth output of the (l−1)th complex convolution layer is $a_k^{l-1} \in \mathbb{C}^{W^{l-1} \times H^{l-1} \times I^{l-1}}$, which is also the kth input of the lth convolution layer. After the convolution calculation, the ith output of the lth convolution layer is $a_i^{l} \in \mathbb{C}^{W^{l} \times H^{l} \times I^{l}}$, where $\mathbb{C}$ denotes the complex domain. The convolution calculation of the lth layer involves several complex convolution kernels $w_{ik}^{l} \in \mathbb{C}^{F^{l} \times F^{l} \times I^{l-1} \times I^{l}}$ and a bias $b_i^{l}$ [36]. Assuming that $A_i^{l}$ represents the result of the complex convolution of the input $a_k^{l-1}$ with the kernel $w_{ik}^{l}$, the calculation can be expressed as follows:
$$\begin{aligned} A_i^{l} &= \sum_{k=1}^{K} w_{ik}^{l} \otimes a_k^{l-1} + b_i^{l} \\ &= \sum_{k=1}^{K} \left( \operatorname{Re}(w_{ik}^{l}) \otimes \operatorname{Re}(a_k^{l-1}) - \operatorname{Im}(w_{ik}^{l}) \otimes \operatorname{Im}(a_k^{l-1}) + \operatorname{Re}(b_i^{l}) \right) \\ &\quad + i \sum_{k=1}^{K} \left( \operatorname{Re}(w_{ik}^{l}) \otimes \operatorname{Im}(a_k^{l-1}) + \operatorname{Im}(w_{ik}^{l}) \otimes \operatorname{Re}(a_k^{l-1}) + \operatorname{Im}(b_i^{l}) \right) \end{aligned}$$
where $\otimes$ represents the convolution operation, and $\operatorname{Re}(\cdot)$ and $\operatorname{Im}(\cdot)$ extract the real and imaginary parts of a complex value, respectively. Then $a_i^{l}$ can be expressed as:
$$a_i^{l} = f\!\left(\operatorname{Re}(A_i^{l})\right) + i f\!\left(\operatorname{Im}(A_i^{l})\right)$$
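The decomposition of a complex convolution into four real convolutions can be checked with a small numerical sketch (single input, single kernel, bias omitted for brevity):

```python
import numpy as np

# A complex convolution decomposed into four real convolutions,
# checked against numpy's native complex convolve.
rng = np.random.default_rng(0)
a = rng.normal(size=32) + 1j * rng.normal(size=32)   # one input feature map
w = rng.normal(size=5) + 1j * rng.normal(size=5)     # one complex kernel

re = np.convolve(w.real, a.real, "valid") - np.convolve(w.imag, a.imag, "valid")
im = np.convolve(w.real, a.imag, "valid") + np.convolve(w.imag, a.real, "valid")
A = re + 1j * im

print(np.allclose(A, np.convolve(w, a, "valid")))    # True
```

This is exactly the real/imaginary bookkeeping of the equation above: real parts combine with a minus sign, cross terms with a plus sign.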
where f ( ) denotes the nonlinear activation function. In the designed CV-CNN, we use the complex activation function modReLU, which is obtained by extending the ReLU activation function used for RV-CNN. The modReLU activation function can be expressed as:
$$\operatorname{modReLU}(a) = \operatorname{ReLU}(|a| + b)\, e^{i\theta_a} = \begin{cases} (|a| + b)\,\dfrac{a}{|a|}, & |a| + b \ge 0 \\ 0, & \text{otherwise} \end{cases}$$
where $a$ is the input, $\theta_a$ is the phase of $a$, and $b$ is a learnable parameter. Since $|a|$ is always non-negative, the bias $b$ determines the radius of the dead zone of the activation function. The phase of the complex data is unchanged before and after passing through the activation function, which is the defining characteristic of modReLU.
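A minimal sketch of modReLU (with b as a fixed number rather than a learned parameter) illustrates both properties: the magnitude is shifted by b and thresholded at zero, while the phase is preserved.

```python
import numpy as np

# modReLU with a fixed bias b (learnable in the network, a constant here):
# the magnitude is shifted by b and thresholded; the phase is untouched.
def modrelu(a, b):
    mag = np.abs(a)
    scale = np.where(mag + b >= 0, (mag + b) / np.maximum(mag, 1e-12), 0.0)
    return scale * a

z = np.array([3 + 4j, 0.1 + 0.1j])
out = modrelu(z, b=-1.0)
print(out)   # |3+4j| = 5 shrinks to 4 with phase kept (2.4+3.2j);
             # the small input falls in the dead zone and is zeroed
```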
After the complex convolution calculation, the corresponding output is fed into the global average pooling layer. This layer is also extended from the real domain to the complex domain. Mathematically, the complex domain extension of the global average pooling layer is defined as:
$$C_{GAP} = G\!\left(\operatorname{Re}(a)\right) + i\, G\!\left(\operatorname{Im}(a)\right)$$
where $G(\cdot)$ represents the global average pooling calculation in the real domain.
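The complex global average pooling can be sketched in a few lines, pooling the real and imaginary parts independently per channel:

```python
import numpy as np

# Complex global average pooling: pool the real and imaginary parts
# independently for each feature channel.
def complex_gap(a):
    # a: (length, channels) complex feature maps -> (channels,) complex vector
    return a.real.mean(axis=0) + 1j * a.imag.mean(axis=0)

a = np.array([[1 + 1j, 2 - 2j],
              [3 + 3j, 4 - 4j]])
print(complex_gap(a))   # [2.+2.j 3.-3.j]
```

Because averaging is linear, pooling the parts separately is identical to the complex mean, so this extension is exact.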
Then, the output of the complex global average pooling layer is used as the input of the complex fully connected layer. After the calculation of the fully connected layer, the magnitude of the complex output is taken and fed into the Softmax classifier to obtain the final recognition result.
The CV-CNN also uses the backpropagation algorithm to update its parameters. In this paper, the multinomial maximum-likelihood (cross-entropy) function is used as the loss function. The weights and biases are updated by the following equations:
$$w_{ik}^{l}(t+1) = w_{ik}^{l}(t) + \Delta w_{ik}^{l}(t) = w_{ik}^{l}(t) - \eta \frac{\partial L}{\partial w_{ik}^{l}(t)}$$
$$b_{i}^{l}(t+1) = b_{i}^{l}(t) + \Delta b_{i}^{l}(t) = b_{i}^{l}(t) - \eta \frac{\partial L}{\partial b_{i}^{l}(t)}$$
The key lies in the calculation of the error gradient of the weights, which can be derived from the following equation:
$$\begin{aligned} \frac{\partial L}{\partial w_{ik}^{l}} &= \frac{\partial L}{\partial \operatorname{Re}(w_{ik}^{l})} + i \frac{\partial L}{\partial \operatorname{Im}(w_{ik}^{l})} \\ &= \left( \frac{\partial L}{\partial \operatorname{Re}(A_i^{l})} \frac{\partial \operatorname{Re}(A_i^{l})}{\partial \operatorname{Re}(w_{ik}^{l})} + \frac{\partial L}{\partial \operatorname{Im}(A_i^{l})} \frac{\partial \operatorname{Im}(A_i^{l})}{\partial \operatorname{Re}(w_{ik}^{l})} \right) + i \left( \frac{\partial L}{\partial \operatorname{Re}(A_i^{l})} \frac{\partial \operatorname{Re}(A_i^{l})}{\partial \operatorname{Im}(w_{ik}^{l})} + \frac{\partial L}{\partial \operatorname{Im}(A_i^{l})} \frac{\partial \operatorname{Im}(A_i^{l})}{\partial \operatorname{Im}(w_{ik}^{l})} \right) \end{aligned}$$
To simplify, an intermediate quantity $\delta_i^{l}$ is introduced to represent the error term, which can be expressed as:
$$\delta_i^{l} = \frac{\partial L}{\partial \operatorname{Re}(A_i^{l})} - i \frac{\partial L}{\partial \operatorname{Im}(A_i^{l})}$$
Combining the above equations, the gradient can be simplified as:
$$\frac{\partial L}{\partial w_{ik}^{l}} = \delta_i^{l} \otimes \left( a_k^{l-1} \right)^{*}$$
where $(\cdot)^{*}$ denotes the complex conjugate. Substituting the obtained gradient into the update equations completes the update of the weight parameters; through continuous iteration, the error is reduced until it reaches a minimum.

3.2. Signal Data Pre-Processing

Due to the specific form of the convolution calculation in CV-CNN, the convolution layer can be transformed into a generalized matrix multiplication. In this case, the data needs to be pre-processed before network training to accommodate a more efficient convolution calculation.
As shown in Figure 2a, each echo signal matrix of size $L \times L$ is converted into a $1 \times L^2$ vector by serialization. Assuming a total of M samples in the training set, they are combined into a complete 2-D training matrix of size $M \times L^2$. At the same time, a class-label matrix of the same dimensions is constructed and stored. Then, the training matrix and the convolution kernels are processed separately. Take K convolution kernels of size $1 \times \sigma$ with stride 1 as an example. As shown in Figure 2b, all the convolution kernels are expanded to form the convolution kernel matrix W of size $K(L^2-\sigma+1) \times \sigma(L^2-\sigma+1)$. Meanwhile, the training matrix is patched according to the range of data covered by the convolution kernels and their movement pattern to form the input matrix X of size $\sigma(L^2-\sigma+1) \times M$, as shown in Figure 2c, so that it matches the convolution kernel matrix in size. As shown in Figure 2d, after the matrix operation, the output matrix $Y = WX$ of size $K(L^2-\sigma+1) \times M$ is obtained, completing the convolution calculation. After the matrix Y passes through the activation layer, it is used as the input of the next convolution layer for a new round of calculation.
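The idea of Figure 2 can be illustrated with a simplified sketch: a 1-D complex convolution over all serialized samples rewritten as a single matrix product. The window-stacking layout below (a plain im2col-style arrangement) is a simplification of the paper's W and X construction, with all sizes chosen arbitrarily.

```python
import numpy as np

# Simplified im2col-style sketch: one matrix product evaluates the sliding
# windows of every serialized sample against every kernel at once. (As in
# CNN practice, the kernels are applied without flipping, i.e. correlation.)
rng = np.random.default_rng(1)
M, L2, sigma, K = 4, 16, 3, 2                 # samples, serialized length, kernel size, kernels
data = rng.normal(size=(M, L2)) + 1j * rng.normal(size=(M, L2))
kernels = rng.normal(size=(K, sigma)) + 1j * rng.normal(size=(K, sigma))

cols = L2 - sigma + 1                          # output positions per sample
X = np.stack([data[:, i : i + sigma] for i in range(cols)], axis=1)  # (M, cols, sigma)
Y = X @ kernels.T                              # (M, cols, K): all convolutions at once

# cross-check one output channel against a direct sliding-window product
ref = np.array([data[0, i : i + sigma] @ kernels[0] for i in range(cols)])
print(np.allclose(Y[0, :, 0], ref))            # True
```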

4. Simulations and Results

4.1. Datasets Design

Assume the ISAR transmits an LFM signal with carrier frequency $f_0$ of 10 GHz, bandwidth B of 400 MHz, pulse width $\tau$ of 5 µs, pulse repetition frequency (PRF) of 200 MHz, and 512 pulses. The ISAR transmits signals to an aircraft scattering-point target as shown in Figure 3, using the ISAR rotating-target imaging model. The rotation angular velocity of the target is set to 0.02 rad/s.
According to whether the ISAR is jammed and different patterns of micro-motion jamming, the echo signals are divided into four classes as follows:
$$\begin{cases} \text{Class 1:} & s(\hat{t}, t_m) = s_0(\hat{t}, t_m) + n(t) \\ \text{Class 2:} & s(\hat{t}, t_m) = s_0(\hat{t}, t_m) + J_1(\hat{t}, t_m) + n(t) \\ \text{Class 3:} & s(\hat{t}, t_m) = s_0(\hat{t}, t_m) + J_2(\hat{t}, t_m) + n(t) \\ \text{Class 4:} & s(\hat{t}, t_m) = s_0(\hat{t}, t_m) + J_3(\hat{t}, t_m) + n(t) \end{cases}$$
where $s_0(\hat{t}, t_m)$ is the real target echo signal, $J_1(\hat{t}, t_m)$ is the modulation and repeater micro-motion jamming signal, $J_2(\hat{t}, t_m)$ is the micro-motion scattered wave jamming signal, $J_3(\hat{t}, t_m)$ is the pulse convolution micro-motion jamming signal, and $n(t)$ is additive white Gaussian noise. In order to display the different complex-valued initial echo signals in image form, we take the modulus of the complex-valued echo signal matrix to generate images of the four echo signal classes, as shown in Figure 4.
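The following sketch shows how one labeled sample could be assembled, with placeholder random signals standing in for the echo and jamming terms, and power scaling used to hit chosen JSR and SNR values (the scaling helper is our own illustrative construction, not the paper's generator):

```python
import numpy as np

# Assemble one Class 2 sample: target echo + jamming at a chosen JSR
# + white Gaussian noise at a chosen noise-to-signal level.
rng = np.random.default_rng(2)

def scale_to_power_ratio(x, ref, ratio_db):
    """Scale x so that power(x)/power(ref) equals ratio_db."""
    p_x = np.mean(np.abs(x) ** 2)
    p_ref = np.mean(np.abs(ref) ** 2)
    return x * np.sqrt(p_ref / p_x * 10 ** (ratio_db / 10))

s0 = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))   # stand-in target echo
jam = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))  # stand-in J1
noise = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))

jam = scale_to_power_ratio(jam, s0, ratio_db=10)     # JSR = 10 dB
noise = scale_to_power_ratio(noise, s0, ratio_db=5)  # SNR = -5 dB (noise 5 dB above s0)
sample_class2 = s0 + jam + noise                     # one labeled sample

jsr = 10 * np.log10(np.mean(np.abs(jam) ** 2) / np.mean(np.abs(s0) ** 2))
print(round(jsr, 6))  # 10.0
```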
In the designed dataset, four classes of echo signal matrix are constructed as shown in Figure 4, which are target echo without jamming (Class1), modulation and repeater micro-motion jamming (Class2), micro-motion scattered wave jamming (Class3), and pulse convolution micro-motion jamming (Class4). Especially, the jamming parameters of the abovementioned three micro-motion jamming patterns are shown in Table 1.
Ten groups of jamming parameters are selected within the ranges shown in Table 1; the parameters are chosen to be as randomly dispersed as possible while maintaining good jamming effects. Three conditions with different signal-to-noise ratios (SNR) of 5, 0, and −5 dB are constructed. For each condition, the samples of each class are randomly generated within a jamming-to-signal ratio (JSR) range of −10 to 20 dB based on the Monte Carlo method. Among the samples, 500 per class are selected as the training set, 200 per class as the validation set, and 150 per class as the test set. Overall, there are four classes of echo signals (the target echo signal and three micro-motion jamming signals) in the dataset, with 2000 training samples, 800 validation samples, and 600 test samples under each SNR. Each sample is a 512 × 512 matrix with a total of 262,144 complex values.

4.2. Simulation Settings

In the simulation experiments, RV-CNN is used as a comparison in order to verify the effectiveness of the micro-motion jamming recognition method based on CV-CNN. The main sections of the network structure of RV-CNN and the parameter settings are shown in Table 2.
The RV-CNN used for comparison consists of three convolution layers, three pooling layers, three activation layers, a dropout layer, a normalization layer, a global pooling layer, a fully connected layer, and an output layer. The output dimension of the fully connected layer is 4, corresponding to the four classes. The stride of the convolution layers is set to 1, and ReLU is used as the activation function.
The network structure and parameter settings of the proposed CV-CNN are shown in Table 3. It can be seen that the overall structure is similar to that of the RV-CNN. The main difference is that CV-CNN uses the modReLU activation function, which extends ReLU to the complex domain. It should be noted that the pre-processing of the dataset for CV-CNN preserves the complex data structure, while that for RV-CNN converts the complex values into real values. For example, a complex value can be converted to a single real value by taking its modulus, or split into two real values corresponding to its real and imaginary parts.
The same hyperparameter settings are used for training RV-CNN and CV-CNN: the learning rate is set within the range [0.00005, 0.005], the weight decay rate is set to 0.0001, the batch size is set to 20, and the number of training epochs is set to 25.

4.3. Recognition Results and Analysis

In this section, the dataset under the SNR = −5 dB condition is analyzed, and the recognition results of CV-CNN and RV-CNN are compared. The training results are shown in Figure 5; Figure 5a,b show the training accuracy and training loss with increasing epoch for CV-CNN and RV-CNN, respectively. It can be seen that the training loss of both models gradually decreases and converges to a stable level as training proceeds. Among them, the training accuracy of RV-CNN rises from 56.33% to 100%, exceeding 95% at epoch = 4, and its training loss decreases from 0.733084 to 0.000648. In contrast, the initial training accuracy of CV-CNN is 70.07% and converges to about 95% at epoch = 3, with an initial training loss of 0.239535; its convergence is also much faster. To better present the training and validation results, a logarithmic scale is introduced to discern the variation. As shown in Figure 5c,d, the logarithmic processing is not very apparent for training accuracy because the values are close to 1. However, the decreasing tendency of the training loss is easily observed: for both CV-CNN and RV-CNN, the training loss varies over four orders of magnitude, which exemplifies the excellent training ability of these models.
After that, the trained model is tuned using the validation set to improve its generalization ability; the results are shown in Figure 6. It can be seen that the validation performance of CV-CNN is significantly better than that of RV-CNN in terms of both validation accuracy and validation loss: CV-CNN starts at 93.8% validation accuracy and reaches about 95% at epoch = 2, while RV-CNN starts at 68% and reaches the same accuracy only at epoch = 4. In terms of validation loss, the initial loss of CV-CNN is much lower than that of RV-CNN, and it is already below 0.1 at epoch = 2. Overall, the learning ability of CV-CNN is better than that of RV-CNN, because CV-CNN extracts more refined signal features and trains the model to good performance more rapidly. Similarly, the validation results are logarithmically processed for better presentation. As can be seen in Figure 6c,d, the improvement for validation accuracy is limited, but for the validation loss a variation of nearly three orders of magnitude can be observed, thanks to the good training ability of these models.
Finally, a test set is used to test the CV-CNN model and RV-CNN model in recognizing the four echo signal classes. The test assigns 150 samples to each class, and the JSR of the samples is randomly distributed in the range of −10 dB~20 dB. Figure 7 shows the recognition output results based on CV-CNN and RV-CNN when the input is the target echo without jamming (Class1). It can be seen that both models have the best recognition ability for Class1 with no error output.
It should be noted that in the recognition of Class1, the Softmax function outputs recognition probabilities of only 1 and 0 due to the good recognition effect, so the original results can be shown in the image directly. However, for Class2~Class4, the probabilistic results include recognition rates of 10^−3 or less, which are difficult to display directly in an image. Therefore, in the following analysis, log10(Softmax) is used to represent the output results and show more intuitively the distribution of the different outputs. In particular, since the signal characteristics of Class1 are much more distinctive than those of the jamming signals, the probability of Class2~Class4 being misclassified as Class1 in the model output is 0. Because log10(0) = −∞ cannot be displayed in the image, only the recognition results of Class2~Class4 are shown in Figure 8 and Figure 9.
Figure 8 shows the output results of the CV-CNN model for Class2~Class4 on the test set. It can be seen that when the input is Class2, most of the samples are recognized as Class2, with only one sample misclassified as Class3. When the input is Class3, two samples are misclassified as Class2, and three samples are misclassified as Class4. When the input is Class4, only one sample is misclassified as Class2, and the rest are correctly recognized.
Figure 9 shows the output results of RV-CNN model for Class2~Class4 on a test set. It can be seen that when the input is Class2, two samples are misclassified as Class3 and one sample is misclassified as Class4. When the input is Class3, two samples are misclassified as Class2 and four samples are misclassified as Class4. When the input is Class4, one sample is misclassified as Class2, two samples are misclassified as Class3, and the rest are correctly identified.
By comparing the results in Figure 8 and Figure 9, it can be found that the recognition probability points of the CV-CNN model are widely scattered, while those of the RV-CNN model are more concentrated around a fixed level. The reason is that CV-CNN extracts more signal features and can describe the nuances of the signals. To better measure the recognition ability of the two models for micro-motion jamming signals, the concept of probability distance is introduced as follows:
$$P_{XY} = \log_{10}\!\left( \frac{P_{prc\_true\_ClassX}}{P_{conf\_ClassY}} \right), \quad X, Y = 1, 2, 3, 4$$
where $P_{prc\_true\_ClassX}$ represents the probability that ClassX is correctly classified and $P_{conf\_ClassY}$ represents the probability that ClassX is misclassified as ClassY. A larger probability distance means better recognition ability and less likelihood of confusing the signals. The probability distance based on CV-CNN in Figure 8 reaches up to 14.9, whereas that based on RV-CNN in Figure 9 is only up to 1.85. In comparison, RV-CNN is more likely to confuse the signals.
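The probability distance is straightforward to compute from the softmax outputs; the scores below are illustrative inputs only, not values taken from the experiments.

```python
import numpy as np

# Probability distance between the softmax score of the true class and
# the score assigned to a confusable class; larger means less confusion.
def probability_distance(p_true, p_conf):
    return np.log10(p_true / p_conf)

# Illustrative scores: a very confident (sharp) output vs. a flatter one.
sharp = probability_distance(0.999, 1e-15)
flat = probability_distance(0.9, 0.0127)
print(round(sharp, 1), round(flat, 2))   # 15.0 1.85
```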
According to the above results, the confusion matrix is shown in Figure 10, where Figure 10a is the result of CV-CNN and Figure 10b is the result of RV-CNN. In comparison, CV-CNN has better recognition ability of micro-motion jamming.
Based on the above recognition results, the comparison of the recognition ability of CV-CNN and RV-CNN is summarized in Table 4. It can be seen that the recognition ability of CV-CNN for micro-motion jamming signals is better than that of RV-CNN. The reason is that training in the complex field can effectively retain more signal features and have better recognition ability.
In addition, the construction and division of the dataset often affect the training of the model. The effects of different SNRs and training set sizes on the training of the CV-CNN model are studied below. Training sets with SNRs of −5 dB, 0 dB, and 5 dB are selected, and the training set sample size per signal pattern is set to 50, 100, 150, 200, 250, and 300. Moreover, the four kinds of signals are mixed in the test dataset. Here, an overall accuracy is introduced to evaluate the effectiveness of the method, expressed as the ratio of the number of correctly classified samples n to the total number of samples N. The training and test results are shown in Table 5.
As can be seen from the table, the overall recognition accuracy of the model improves significantly as the SNR increases for the same training samples. In addition, increasing the number of samples in the training set enables the model to learn more detail features of the signals. Therefore, as the number of the samples increases, the recognition accuracy also improves under the same SNR.

5. Discussion

In the discussion, the hyperparameters are mainly considered. First, the number of convolution layers is a significant parameter that determines the structure of the CV-CNN. In this work, the layer number is investigated to find the optimal convolution structure. Convolution structures with 2, 3, 4, and 5 layers are constructed, with kernel numbers set to (32, 64), (32, 64, 128), (32, 64, 128, 256), and (32, 64, 128, 256, 512), respectively. Noteworthily, larger layer numbers require more computer memory for training, which may degrade efficiency; however, an overly simplified convolution structure can hardly capture the exact signal structure. Based on the parameter settings above, the experiment is carried out. The overall classification accuracy for different layer numbers is shown in Figure 11. It can be seen that, in terms of accuracy, a larger layer number provides a better classification result; however, in terms of time cost, smaller layer numbers are more efficient.
Besides the number of convolution layers, other hyperparameters such as the learning rate, batch size, and number of convolutional kernels also significantly affect the deep learning results, and setting them requires manual intervention. In this part, a dataset with SNR = 5 dB is used to further investigate the influence of the hyperparameter values on the model training results and to provide a reference for improving the training efficiency and recognition performance of the network. The experimental results are shown in Table 6. It can be seen that an appropriate hyperparameter setting not only improves the training efficiency of the model but also improves its recognition accuracy to a certain extent. With the number of epochs unchanged, the recognition accuracy of the model decreases as the learning rate and the number of convolution kernels increase, whereas it increases with the batch size. In particular, if the recognition performance under the current hyperparameters is not satisfactory, the number of epochs can be increased appropriately; repeated training then improves the results obtained with those parameters.
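A sweep like the one in Table 6 can be organized as a simple grid search. The sketch below is a hypothetical skeleton: `train_and_evaluate` stands in for one full training run of the network and here returns a placeholder score chosen only so the example runs; its form merely mimics the trends in Table 6 (smaller learning rate, larger batch, more epochs score higher).

```python
import itertools

learning_rates = [5e-5, 1e-4, 5e-4, 1e-3]
batch_sizes = [4, 10, 15, 20]
epochs = [2, 5, 10]

def train_and_evaluate(lr, bz, ep):
    # Placeholder objective: real code would train the CV-CNN here and
    # return the validation accuracy for this configuration.
    return 1.0 / (1.0 + lr * 1e3) + 0.001 * bz + 0.01 * ep

# Exhaustively evaluate every (learning rate, batch size, epoch) combination
best = max(itertools.product(learning_rates, batch_sizes, epochs),
           key=lambda cfg: train_and_evaluate(*cfg))
print(best)  # (5e-05, 20, 10) under this toy objective
```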

6. Conclusions

In this paper, a micro-motion jamming recognition method based on CV-CNN is proposed. The main advantage of the network is that complex processing capability is added to the structure of the RV-CNN, so that the convolution can be computed directly in the complex domain. At the same time, echo signal matrix datasets of the target echo without jamming and of three patterns of micro-motion jamming are constructed, and the input 2-D complex signals are serialized. The simulation results show that the CV-CNN makes better use of the information in the complex signal and achieves more accurate parameter training, and thus better recognition results: when SNR = −5 dB and the JSR is in the range of −10 to 20 dB, the recognition accuracy of the CV-CNN for micro-motion jamming patterns reaches up to 99.33%, nearly 1% better than that of the RV-CNN under the same conditions.
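The complex-domain convolution the conclusion refers to is commonly realized on real-valued hardware as four real convolutions; a minimal 1-D NumPy sketch of this standard identity (not the paper's exact implementation) is:

```python
import numpy as np

def complex_conv(x, w):
    """Complex convolution via four real convolutions:
    (x_r + j x_i) * (w_r + j w_i)
        = (x_r*w_r - x_i*w_i) + j (x_r*w_i + x_i*w_r)."""
    real = np.convolve(x.real, w.real, "valid") - np.convolve(x.imag, w.imag, "valid")
    imag = np.convolve(x.real, w.imag, "valid") + np.convolve(x.imag, w.real, "valid")
    return real + 1j * imag

rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)  # 1-D complex echo
w = rng.standard_normal(5) + 1j * rng.standard_normal(5)    # complex kernel
# Matches NumPy's native complex convolution
assert np.allclose(complex_conv(x, w), np.convolve(x, w, "valid"))
```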

Author Contributions

Conceptualization, Q.Z.; methodology, C.S. and T.L.; writing—original draft preparation, C.S., Q.Z. and T.L.; writing—review and editing, Z.L. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China with grant number 62131020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Spezio, A.E. Electronic warfare systems. IEEE Trans. Microwave Theory Tech. 2002, 50, 633–644.
2. Liu, X.W.; Zhang, Q.; Chen, Y.C.; Su, L.H.; Chen, Y.J. Task allocation optimization for multi-target ISAR imaging in radar network. IEEE Sens. J. 2018, 18, 122–132.
3. Zhu, F.; Luo, Y.; Zhang, Q.; Feng, Y.Q.; Bai, Y.Q. ISAR Imaging for Avian Species Identification with Frequency-Stepped Chirp Signals. IEEE Geosci. Remote Sens. Lett. 2010, 7, 151–155.
4. Li, W.; Yuan, Y.; Zhang, Y.; Luo, Y. Unblurring ISAR Imaging for Maneuvering Target Based on UFGAN. Remote Sens. 2022, 14, 5270.
5. Zhou, Z.; Liu, L.; Du, R.; Zhou, F. Three-Dimensional Geometry Reconstruction Method for Slowly Rotating Space Targets Utilizing ISAR Image Sequence. Remote Sens. 2022, 14, 1144.
6. Wei, J.; Shao, S.; Ma, H.; Wang, P.; Zhang, L.; Liu, H. High-Resolution ISAR Imaging with Modified Joint Range Spatial-Variant Autofocus and Azimuth Scaling. Sensors 2020, 20, 5047.
7. Zeng, C.; Qin, M.; Li, D.; Liu, H.; Chai, Y. An Efficient ISAR Imaging of Targets with Complex Motions Based on a Quasi-Time-Frequency Analysis Bilinear Coherent Algorithm. Sensors 2018, 18, 2814.
8. Luo, Y.; Zhang, Q.; Qiu, C.W.; Liang, X.J.; Li, K.M. Micro-Doppler Effect Analysis and Feature Extraction in ISAR Imaging with Stepped-Frequency Chirp Signals. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2087–2098.
9. Wang, Z.H.; Chen, Y.J.; Yuan, H.; Luo, Y.; Zhang, Q. Real Micro-Doppler Parameters Extraction of Spinning Targets Based on Rotating Interference Antenna. Remote Sens. 2022, 14, 5300.
10. Zhu, N.; Hu, J.; Xu, S.; Wu, W.; Zhang, Y.; Chen, Z. Micro-Motion Parameter Extraction for Ballistic Missile with Wideband Radar Using Improved Ensemble EMD Method. Remote Sens. 2021, 13, 3545.
11. Bu, L.; Zhu, Y.; Chen, Y.; Song, X.; Yang, Y.; Zang, Y. Micro-Motion Parameter Extraction of Multi-Scattering-Point Target Based on Vortex Electromagnetic Wave Radar. Remote Sens. 2022, 14, 5908.
12. Qin, X.; Deng, B.; Wang, H. Micro-Doppler Feature Extraction of Rotating Structures of Aircraft Targets with Terahertz Radar. Remote Sens. 2022, 14, 3856.
13. Peng, S.R.; Liu, Q.; Li, X.; Tang, Z.Y. Effect Analysis of Noise FM Jamming against ISAR. Mod. Radar 2008, 30, 33–40.
14. Cui, R.; Xue, L. Influence Analysis of Noise Barrage Jamming on Inverse Synthetic Aperture Radar Imaging. Mod. Def. Technol. 2008, 36, 92–98.
15. Pace, P.E.; Fouts, D.J.; Ekestorm, S.R.; Karow, C. Digital False-Target Image Synthesizer for Countering ISAR. IEE Proc. Radar Sonar Navig. 2002, 149, 248–257.
16. Fouts, D.J.; Pace, P.E.; Karow, C. A Single-Chip False Target Radar Image Generator for Countering Wideband Image Radars. IEEE J. Solid-State Circuits 2001, 37, 751–759.
17. Bai, X.R.; Xing, M.D.; Zhou, F.; Lu, G.Y.; Bao, Z. Imaging of Micromotion Targets with Rotating Parts Based on Empirical-Mode Decomposition. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3514–3522.
18. Zhu, B.Y.; Xue, L.; Bi, D.P. A Micro-motion Jamming Method to ISAR Based on Digital Image Synthesiser. Fire Control Command Control 2012, 37, 35–38.
19. Zhu, B.Y.; Xue, L.; Bi, D.P. A Micro-motion Feature Deception Jamming Method to ISAR. In Proceedings of the IEEE 10th International Conference on Signal Processing, Beijing, China, 24–28 October 2010; pp. 2287–2290.
20. Shi, X.R.; Zhou, F.; Liu, L. Micro-Doppler Deception Jamming for Tracked Vehicles. In Proceedings of the 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 609–612.
21. Tang, Z.Z.; Dong, C.X.; Chang, X.; Liu, M.M.; Zhao, G.Q. ISAR Micro-Doppler Scatter-Wave Jamming Method Based on Towed Jammer. Acta Aeronaut. Astronaut. Sin. 2018, 39, 185–191.
22. Tang, Z.Z. Study on Countermeasures against ISAR Based on Micro-Doppler Effect. Ph.D. Thesis, Xidian University, Xi’an, China, March 2019.
23. Li, J.X.; Shen, Q.; Yan, H. Signal feature analysis and experimental verification of radar deception jamming. In Proceedings of the 2011 IEEE CIE International Conference on Radar, Chengdu, China, 24–27 October 2011; pp. 230–233.
24. Tian, X.; Tang, B.; Gui, G. Product Spectrum Matrix Feature Extraction and Recognition of Radar Deception Jamming. Int. J. Electron. 2013, 100, 1621–1629.
25. Zhou, H.; Dong, C.; Wu, R.; Xu, X.; Guo, Z. Feature Fusion Based on Bayesian Decision Theory for Radar Deception Jamming Recognition. IEEE Access 2021, 9, 16296–16304.
26. Du, C.; Tang, B. Novel Unconventional-Active-Jamming Recognition Method for Wideband Radars Based on Visibility Graphs. Sensors 2019, 19, 2344.
27. Wang, Y.; Sun, B.; Wang, N. Recognition of Radar Active-Jamming Through Convolutional Neural Networks. J. Eng. 2019, 21, 7695–7697.
28. Shao, G.; Chen, Y.; Wei, Y. Convolutional neural network-based radar jamming signal classification with sufficient and limited samples. IEEE Access 2020, 8, 80588–80598.
29. Qu, Q.; Wei, S.; Liu, S.; Liang, J.; Shi, J. JRNet: Jamming Recognition Networks for Radar Compound Suppression Jamming Signals. IEEE Trans. Veh. Technol. 2020, 69, 15035–15045.
30. Zhou, Y.; Shang, S.; Song, X.; Zhang, S.; You, T.; Zhang, L. Intelligent Radar Jamming Recognition in Open Set Environment Based on Deep Learning Networks. Remote Sens. 2022, 14, 6220.
31. Hou, Y.; Ren, H.; Lv, Q.; Wu, L.; Yang, X.; Quan, Y. Radar-Jamming Classification in the Event of Insufficient Samples Using Transfer Learning. Symmetry 2022, 14, 2318.
32. Shen, J.; Li, Y.; Zhu, Y.; Wan, L. Cooperative Multi-Node Jamming Recognition Method Based on Deep Residual Network. Electronics 2022, 11, 3280.
33. Lang, B.; Gong, J. JR-TFViT: A Lightweight Efficient Radar Jamming Recognition Network Based on Global Representation of the Time–Frequency Domain. Electronics 2022, 11, 2794.
34. Qiu, L.; Fan, Y.Z. A Radar Jamming Recognition Method based on Hybrid Dilated Convolution. In Proceedings of the 3rd International Conference on Computer Vision, Image and Deep Learning & International Conference on Computer Engineering and Applications, Changchun, China, 20–22 May 2022; pp. 692–695.
35. Kong, Y.K.; Wang, X.; Wu, C.X.; Yu, X.X.; Cui, G.L. Active Deception Jamming Recognition in the Presence of Extended Target. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
36. Zhang, Z.M.; Wang, H.P.; Feng, X.; Ya, Q.J. Complex-Valued Convolutional Neural Network and Its Application in Polarimetric SAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7177–7188.
Figure 1. CV-CNN structure framework.
Figure 2. Standard procedure for generalized matrix multiplication of convolution layers. (a) Serialization process of echo signal data, (b) Expand convolutional kernels to form convolutional kernel matrix, (c) Form input matrix, (d) Obtain output matrix.
Figure 3. The aircraft scattering point target.
Figure 4. The images of four different classes of echo signal matrix. (a) Class1, (b) Class2, (c) Class3, (d) Class4.
Figure 5. Training results of CV-CNN and RV-CNN on training set. (a) Training accuracy, (b) Training loss, (c) Training accuracy with logarithmic scale, (d) Training loss with logarithmic scale.
Figure 6. Validation results of CV-CNN and RV-CNN on validation set. (a) Validation accuracy, (b) Validation loss, (c) Validation accuracy with logarithmic scale, (d) Validation loss with logarithmic scale.
Figure 7. Recognition results with different JSRs when the input is Class1. (a) CV-CNN, (b) RV-CNN.
Figure 8. Recognition results of CV-CNN with different inputs and different JSRs. (a) Input is Class2, (b) Input is Class3, (c) Input is Class4.
Figure 9. Recognition results of RV-CNN with different inputs and different JSRs. (a) Input is Class2, (b) Input is Class3, (c) Input is Class4.
Figure 10. Confusion matrix results of CV-CNN and RV-CNN on test set. (a) CV-CNN, (b) RV-CNN.
Figure 11. Training results under different number of convolutional layers. (a) Accuracy, (b) Time.
Table 1. Micro-motion jamming parameters.

| Jamming Pattern | Main Parameters | Range of Values |
|---|---|---|
| modulation and repeater micro-motion jamming | r_P (m) | 5~10 |
| | ω_P (rad/s) | 10~20 |
| | forwarding times | 2~4 |
| micro-motion scattered wave jamming | r_P (m) | 5~10 |
| | ω_P (rad/s) | 10~20 |
| | α (rad) | 0~π |
| | forwarding times | 2~4 |
| pulse convolution micro-motion jamming | Δt (ns) | 0~90 |
| | time delay interval (ns) | 1~5 |
| | time delay range cells | 1~7 |
| | forwarding times | 2~4 |
Table 2. The structure and parameter settings of RV-CNN.

| Layer Name | Main Parameters |
|---|---|
| Normalization Layer | Batch Normalization |
| Conv. 1 | Kernel size: 5; input num: 1; output num: 16; activation: ReLU; MaxPooling (2 × 2) |
| Conv. 2 | Kernel size: 7; input num: 16; output num: 32; activation: ReLU; MaxPooling (2 × 2) |
| Conv. 3 | Kernel size: 9; input num: 32; output num: 64; activation: ReLU; MaxPooling (2 × 2); Dropout 50% |
| Global Pooling Layer | Adaptive average pooling |
| Fully Connected Layer | Input num: 64; output num: 4 |
| Output Layer | Softmax |
Table 3. The structure and parameter settings of CV-CNN.

| Layer Name | Main Parameters |
|---|---|
| Normalization Layer | Complex Batch Normalization |
| CConv. 1 | Kernel size: 5; input num: 1; output num: 16; activation: modReLU; CMaxPooling (2 × 2) |
| CConv. 2 | Kernel size: 7; input num: 16; output num: 32; activation: modReLU; CMaxPooling (2 × 2) |
| CConv. 3 | Kernel size: 9; input num: 32; output num: 64; activation: modReLU; CMaxPooling (2 × 2); Complex Dropout 50% |
| Global Pooling Layer | Adaptive average pooling |
| Complex Fully Connected Layer | Input num: 64; output num: 4 |
| Output Layer | Softmax |
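The modReLU activation listed in Table 3 thresholds the magnitude of a complex input while preserving its phase. A minimal NumPy sketch (the bias b is learnable in practice; b = −0.5 here is an illustrative assumption, not the paper's value):

```python
import numpy as np

def modrelu(z, b=-0.5, eps=1e-9):
    """modReLU: ReLU(|z| + b) * z / |z|.
    Passes z with reduced magnitude when |z| + b > 0, else outputs 0;
    the phase of surviving elements is unchanged."""
    mag = np.abs(z)
    return np.maximum(mag + b, 0.0) * z / (mag + eps)

z = np.array([1 + 1j, 0.1 + 0.1j])
out = modrelu(z)
# |1+1j| ~ 1.414 > 0.5: passes with shrunk magnitude;
# |0.1+0.1j| ~ 0.141 < 0.5: zeroed out
```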
Table 4. Signal pattern recognition results of different network models when SNR = −5 dB.

| Signal Pattern | CV-CNN Recognition Accuracy (%) | CV-CNN Probability Distance | RV-CNN Recognition Accuracy (%) | RV-CNN Probability Distance |
|---|---|---|---|---|
| Class1 | 100 | - | 100 | - |
| Class2 | 99.33 | 12~14.99 | 80 | 0.1~0.6 |
| Class3 | 96.67 | 1~10 | 96.67 | 0.55~0.6 |
| Class4 | 99.33 | 4.5~8 | 96 | 1.5~1.85 |
Table 5. Effects of different SNRs and different training set sample sizes on the recognition accuracy (%) of the CV-CNN model.

| Number of Samples per Signal Pattern | SNR = −5 dB | SNR = 0 dB | SNR = 5 dB |
|---|---|---|---|
| N = 50 | 88.96 | 96.04 | 96.89 |
| N = 100 | 90.28 | 96.55 | 97.43 |
| N = 150 | 91.17 | 97.36 | 99 |
| N = 200 | 95.13 | 98.28 | 99.57 |
| N = 250 | 96.9 | 98.91 | 99.85 |
| N = 300 | 97.37 | 99.15 | 99.98 |
Table 6. The influence of main hyperparameter settings on model recognition accuracy (%).

| Hyperparameter | Variation Range | Epoch = 2 | Epoch = 5 | Epoch = 10 |
|---|---|---|---|---|
| learning rate | [5 × 10⁻⁵, 5 × 10⁻³] | 98.93 | 99.23 | 99.86 |
| | [1 × 10⁻⁴, 1 × 10⁻²] | 98.56 | 98.99 | 99.23 |
| | [5 × 10⁻⁴, 5 × 10⁻²] | 85.5 | 95.13 | 98.87 |
| | [1 × 10⁻³, 1 × 10⁻¹] | 38.37 | 68.22 | 86.57 |
| batch size | BZ = 4 | 67.5 | 92.33 | 96.12 |
| | BZ = 10 | 81.5 | 96.37 | 98.22 |
| | BZ = 15 | 96.5 | 97.86 | 99.51 |
| | BZ = 20 | 97.5 | 99.21 | 99.72 |
| kernel number | (16, 32, 64) | 100 | 100 | 100 |
| | (32, 64, 128) | 99.92 | 99.86 | 100 |
| | (64, 128, 256) | 99.88 | 98.67 | 100 |
| | (128, 256, 512) | 98.65 | 99.12 | 100 |