Technical Note

MRCS-Net: Multi-Radar Clustering Segmentation Networks for Full-Pulse Sequences

College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(9), 1538; https://doi.org/10.3390/rs17091538
Submission received: 10 February 2025 / Revised: 6 April 2025 / Accepted: 24 April 2025 / Published: 26 April 2025

Abstract

To facilitate the analysis of the full-pulse sequence received by a radar reconnaissance receiver, this study proposes a clustering segmentation method for radar signals. Owing to the complexity of the electromagnetic environment, the probability of time–frequency overlap between signals has increased, raising the demands on signal localization and classification. However, most existing studies classify and identify only individual pulse signals and lack the ability to analyze full-pulse sequences. This study proposes a multi-radar clustering segmentation network (MRCS-Net) for long-duration full-pulse signals. The network addresses the processing challenges of prolonged full-pulse signals and effectively achieves the classification and recognition of different pulses under time–frequency overlapping conditions. The proposed algorithm filters the signal with SincNet, then sequentially feeds the sequence into a long short-term memory network; the outputs are clustered and segmented using multilayer perceptrons and classifiers. Experiments were conducted on six different types of radar signals. The results demonstrate that the proposed method achieves a lower segmentation error rate than similar methods and outperforms them in recognition performance.


1. Introduction

The clustering and segmentation of radar pulse signals have been extensively studied. Techniques in this field include time–frequency image-based [1], signal sequence-based [2], and pulse description word (PDW) dot-matrix-based [3] methods.
Numerous clustering algorithms have been developed for unknown pulse signals. Liu et al. (2022) [4] proposed an unsupervised identification method based on time–frequency analysis and multiview low-rank sparse subspace clustering (MLRSSC), which yielded excellent results. Lang et al. (2022) [5] proposed a subspace-decomposition-based adaptive density peak clustering (SD-ADPC) method, tackling the problems of low algorithmic accuracy and high computational cost. Dong et al. (2022) [6] proposed a distributed clustering method based on spatial information, which solved the problem of degraded clustering accuracy for mixed target signals. Xu et al. (2022) [7] constructed a decision tree classification model based on feature vectors and used the learned model to recognize unknown radar pulse sequences; the algorithm exhibited robustness in noisy environments. Chen et al. (2023) [8] proposed a clustering algorithm based on the Mask R-CNN instance segmentation network to solve the radar pulse stream clustering problem. Mao et al. (2023) [9] designed an intelligent radar signal deinterleaving algorithm that encodes the frequency feature matrix and applies a semantic segmentation network to accomplish pixel-level segmentation of the matrix and improve deinterleaving accuracy. However, because of the complex electromagnetic environment, the probability of overlapping pulses has increased, and most of these algorithms are difficult to apply to clustering analysis of overlapping signals. They must therefore preprocess the pulse sequences before analysis, which is computationally expensive. Thus, there is a need for a clustering algorithm that can handle many overlapping pulse signals in real time.
To address this need, this study proposes a multi-radar clustering segmentation algorithm for full-pulse sequences. The overall flowchart is shown in Figure 1. First, the signal sequence is filtered using SincNet [10] to obtain the low and high cutoff frequencies of the signal. The filtered sequence is then segmented using a window with a customized step to obtain feature vectors, which are sequentially input into a long short-term memory (LSTM) network to detect the activity of radar pulses. Subsequently, the outputs of the LSTM are fed into the 1-DCNN, whose outputs are fused by a concatenate layer, and clustering is performed by a classifier. The experimental results demonstrate that the proposed algorithm is applicable to various types of radar pulse signals, particularly full-pulse signals with time–frequency overlap.
In summary, our main contributions in this paper are as follows:
  • We proposed a deep learning framework for full-pulse signal segmentation and clustering, which establishes a new direction for full-pulse clustering and demonstrates enhanced accuracy compared to traditional single-pulse clustering approaches.
  • We integrated a SincNet subnetwork and a pulse activity detection subnetwork. The SincNet subnetwork performs signal filtering, and the pulse activity detection subnetwork implements signal clustering and recognition.
  • Extensive experiments show that the proposed method performs well in processing long-duration full-pulse sequences, achieving excellent results on the segmentation error rate and recognition accuracy metrics.

2. Multi-Radar Clustering and Segmentation Networks

The proposed method uses SincNet to extract the start and cutoff frequencies of the signal sequence. The filtered signal is passed through convolutional layers to extract feature vectors, which are then encoded by the LSTM into an embedding. Finally, the embedding is clustered and segmented using a multilayer perceptron and a classifier.
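As a concrete illustration of this data flow, the following PyTorch sketch wires the stages together. Only the overall structure (a sinc-style filtering front end, 256-sample frames, a 32-unit LSTM, a two-convolution 1-DCNN, feature concatenation, and a two-layer classifier) follows the paper; the plain `nn.Conv1d` stand-in for the sinc layer, the channel counts, and the per-frame classification head are our own illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class MRCSNet(nn.Module):
    """Sketch of MRCS-Net: filter -> frame -> LSTM -> 1-DCNN -> classifier."""

    def __init__(self, num_classes: int = 6, frame_len: int = 256):
        super().__init__()
        self.frame_len = frame_len
        # Stand-in for the SincNet front end (a parameterized sinc
        # band-pass bank in the paper); here a plain learnable conv.
        self.filt = nn.Conv1d(1, 1, kernel_size=65, padding=32)
        # 32 memory units, as stated in Section 2.2.
        self.lstm = nn.LSTM(input_size=frame_len, hidden_size=32,
                            batch_first=True)
        # Two convolutional layers and one pooling layer (Section 2.2).
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Classifier: concatenate LSTM and CNN features, then two
        # fully connected layers.
        self.head = nn.Sequential(
            nn.Linear(32 + 16 * 16, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_samples), n_samples divisible by frame_len
        x = self.filt(x)                                  # band-pass filtering
        b, _, n = x.shape
        frames = x.view(b, n // self.frame_len, self.frame_len)
        h, _ = self.lstm(frames)                          # (b, T, 32)
        z = h.reshape(-1, 1, 32)                          # per-frame features
        z = self.cnn(z).flatten(1)                        # (b*T, 256)
        fused = torch.cat([h.reshape(-1, 32), z], dim=1)  # concatenate layer
        logits = self.head(fused)                         # per-frame classes
        return logits.view(b, -1, logits.shape[-1])       # (b, T, classes)

model = MRCSNet()
out = model(torch.randn(2, 1, 4096))   # -> torch.Size([2, 16, 6])
```

Per-frame class logits make segmentation a by-product of classification: contiguous runs of frames sharing a label delimit one radar's pulses.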

2.1. SincNet Architecture

SincNet is a neural network architecture grounded in convolutional neural networks (CNNs), which derives meaningful filters by imposing specific constraints on the first convolutional layer. A detailed representation of its structure is provided in Figure 2. Compared with standard CNNs, SincNet offers significant advantages in terms of signal filtering and can effectively localize the frequency range where the signal is active.
In pulse signal analysis, SincNet performs convolution on the complete input pulse sequence using a series of parameterized sinc functions realized as bandpass filters, where the sinc function is defined as $\mathrm{sinc}(x) = \sin(x)/x$. The SincNet convolution operation for the first convolutional layer is expressed as (1) [10]:
$$\mathrm{output}[n] = \mathrm{input}[n] * f[n] = \sum_{l=0}^{L-1} x[l]\, f[n-l] \tag{1}$$
where $\mathrm{input}[n]$ is the input signal sequence processed by the SincNet model, $n$ denotes the sample index, $f[n]$ is a filter of length $L$, and $\mathrm{output}[n]$ is the filtered output. The low and high cutoff frequencies $(f_1, f_2)$ of the input sequence are learned by the first layer of filters.
In standard CNNs, all $L$ weights of each filter must be learned from the data. In contrast, SincNet only requires the input signal to be convolved with a predetermined function $g$ that has a small number of learnable parameters $\theta$, as in the following equation:
$$\mathrm{output}[n] = \mathrm{input}[n] * g[n, \theta] \tag{2}$$
The function $g$ defines a bank of bandpass filters in the frequency domain, which is transformed into the time domain via the inverse Fourier transform, yielding the sinc form. Consequently, $g$ can be written as
$$g[n, f_1, f_2] = 2 f_2\, \mathrm{sinc}(2\pi f_2 n) - 2 f_1\, \mathrm{sinc}(2\pi f_1 n) \tag{3}$$
By employing a sinc function in its filtering mechanism, SincNet significantly decreases the parameter count of the first convolutional layer relative to traditional CNNs [11]. The sinc function is particularly effective for digital signals, including audio and electroencephalogram (EEG) data. The sinc filter concentrates its learning on the two cutoff frequencies, whereas a conventional CNN computes the output of the entire filter bank; consequently, SincNet significantly reduces the computational cost. In addition, the pooling size in SincNet was 1 × 2 (Table 1), and the dropout rate was set to 0.5, which improved the generalization ability of the model, reduced parameter redundancy, and mitigated overfitting.
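As an illustration of Eq. (3), the NumPy sketch below builds the band-pass kernel $g[n, f_1, f_2]$ and applies it to a random sequence. The Hamming window follows Table 1; the kernel length and cutoff values are illustrative assumptions (in SincNet itself, $f_1$ and $f_2$ are learned).

```python
import numpy as np

def sinc_bandpass(f1: float, f2: float, length: int = 65) -> np.ndarray:
    """Band-pass kernel g[n, f1, f2] of Eq. (3), Hamming-windowed."""
    n = np.arange(length) - (length - 1) / 2        # symmetric time axis
    # np.sinc(x) = sin(pi*x)/(pi*x), so sinc(2*pi*f*n) = np.sinc(2*f*n)
    g = 2 * f2 * np.sinc(2 * f2 * n) - 2 * f1 * np.sinc(2 * f1 * n)
    return g * np.hamming(length)                   # smooth the band edges

kernel = sinc_bandpass(f1=0.05, f2=0.20)            # pass band 0.05-0.20 fs
filtered = np.convolve(np.random.randn(1024), kernel, mode="same")
```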

2.2. Pulse Activity Detection

Based on the analysis in the previous section, SincNet generates filtered features of the input signal. Accordingly, this section proposes a pulse activity detection layer (PADL) for analyzing the activity range of each pulse and pulse type. Specifically, PADL comprises n LSTM networks, one one-dimensional (1-D) CNN, and one classifier. Before the signal is fed into the LSTM, it must be segmented using a 256-length rectangular window with a step size of 256 to ensure the input length aligns with the LSTM’s input dimensions. The LSTM architecture incorporates 32 memory units and integrates a dropout structure, which randomly deactivates hidden layer nodes during training while preserving their weights to mitigate overfitting. The CNN structure comprises three layers: two convolutional layers and one pooling layer. The pooling layer downsamples the features extracted by the convolutional layers, preserving essential information while reducing computational complexity. The classifier consists of a concatenate layer followed by two fully connected layers. The concatenate layer merges the features derived from the preceding network, while the fully connected layers facilitate the transformation of these features into the final classification output.
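The framing step described above can be sketched as follows; the reshaping into non-overlapping 256-sample frames follows the text, while the function name and test length are our own.

```python
import numpy as np

def frame_signal(x: np.ndarray, win: int = 256, step: int = 256) -> np.ndarray:
    """Slice x into fixed-length frames for the LSTM input."""
    n_frames = 1 + (len(x) - win) // step
    idx = np.arange(win)[None, :] + step * np.arange(n_frames)[:, None]
    return x[idx]                        # shape: (n_frames, win)

frames = frame_signal(np.random.randn(4096))   # -> (16, 256)
```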

2.2.1. Long Short-Term Memory Network

An LSTM can efficiently process and predict data over long intervals in a time series. It employs memory cells to store state information from prior steps [12], enabling hidden layer neurons to be influenced by distant inputs through selective feature retention. The LSTM addresses the challenges of long sequences by incorporating memory cells, input gates, output gates, and forget gates, as illustrated in Figure 3. The memory cell stores critical information; the input gate determines whether to incorporate the current input into the memory cell; the forget gate regulates the retention or removal of information within the memory cell; and the output gate controls whether the stored information is emitted as the current output. Controlling these gates captures important long-range dependencies in the sequence and mitigates the vanishing-gradient problem. The expressions for the LSTM cell at time $t$ are
$$i_t = \sigma\left(W_{xi} x_t + W_{hi} h_{t-1} + b_i\right) \tag{4}$$
$$f_t = \sigma\left(W_{xf} x_t + W_{hf} h_{t-1} + b_f\right) \tag{5}$$
$$o_t = \sigma\left(W_{xo} x_t + W_{ho} h_{t-1} + b_o\right) \tag{6}$$
$$g_t = \sigma\left(W_{xc} x_t + W_{hc} h_{t-1} + b_c\right) \tag{7}$$
$$c_t = f_t \odot c_{t-1} + i_t \odot g_t \tag{8}$$
$$h_t = o_t \odot \tanh\left(c_t\right) \tag{9}$$
where $x_t$ denotes the input feature at time $t$; $c_t$ and $h_t$ denote the cell state and hidden state at time $t$, respectively; $g_t$ is the candidate input cell; $b$ and $W$ denote the gate biases and weight parameters, respectively; and $\sigma$ is the sigmoid activation function, which maps a real number to the interval [0, 1] and controls the amount of information passing through each gate. Further, $\tanh$ is the hyperbolic tangent activation function, and $\odot$ denotes the Hadamard product.
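To make the gate algebra concrete, here is one LSTM time step implementing Eqs. (4)–(9) directly in NumPy. The dictionary-of-weights layout and dimensions are illustrative; note that the candidate cell $g_t$ uses $\sigma$ exactly as written in Eq. (7), although many LSTM formulations use $\tanh$ for the candidate instead.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W: input weights, U: recurrent weights, b: biases, one set per gate
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])  # input gate, Eq. (4)
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])  # forget gate, Eq. (5)
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])  # output gate, Eq. (6)
    g_t = sigmoid(W["c"] @ x_t + U["c"] @ h_prev + b["c"])  # candidate, Eq. (7)
    c_t = f_t * c_prev + i_t * g_t     # Eq. (8); * is the Hadamard product
    h_t = o_t * np.tanh(c_t)           # Eq. (9)
    return h_t, c_t

dim = 4
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((dim, dim)) * 0.1 for k in "ifoc"}
U = {k: rng.standard_normal((dim, dim)) * 0.1 for k in "ifoc"}
b = {k: np.zeros(dim) for k in "ifoc"}
h, c = lstm_step(rng.standard_normal(dim), np.zeros(dim), np.zeros(dim), W, U, b)
```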

2.2.2. 1-D Convolutional Neural Network and Classifier

A CNN is a deep learning architecture widely used in image problems, automatic feature extraction, and classification tasks [13]. A CNN comprises neurons with weights and biases similar to those of classical neural networks. Each neuron receives inputs, combines them, and produces an output, typically by using a nonlinear activation function.
Because the input is sequential data from the LSTM, we used a 1-DCNN (Figure 4), which is better matched to such data, has fewer parameters, and is faster to compute.
The mathematical representation of the 1-DCNN is as follows:
$$o_l^k = f\left(W_l^k * x_t^k + b^k\right) \tag{10}$$
where $*$ denotes the convolution operator, $W_l^k$ is the weight of the $k$th 1-D convolution kernel, $l$ is the length of the 1-D convolution kernel, $x_t^k$ is the input sequence, $t$ is the length of the input sequence, $b^k$ is the bias of the $k$th 1-D convolution kernel, $o_l^k$ is the output sequence after the 1-D convolution operation (with $l$ here denoting the length of the output sequence of the 1-D convolution layer), and $f(\cdot)$ is a nonlinear activation function.
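A single-kernel instance of Eq. (10) in NumPy, with ReLU playing the role of the nonlinearity $f(\cdot)$; all values are illustrative.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 1.0, 0.0, -1.0])   # input sequence x_t
w = np.array([0.5, 1.0, 0.5])                   # one 1-D kernel W_l
b = 0.1                                         # bias b
o = np.maximum(np.convolve(x, w, mode="valid") + b, 0.0)  # f(W*x + b), ReLU
print(o)   # output length t - l + 1 = 4 -> [2.1, 3.1, 2.1, 0.1]
```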

2.2.3. Model Parameter Setting

In this study, the processed data were a 1-D sample sequence of unknown length, which was segmented and input into SincNet and then fed into the LSTM in chronological order. The sequence was subsequently analyzed by the 1-DCNN, whose convolutional layers slide a fixed-size window over the input data, extracting features within the window and mapping them to the next layer. The floating-point operations (FLOPs) calculated for each network layer are given in Table 1.

3. Simulation and Time–Frequency Analysis of Radar Signals

In this section, various modulated signals are generated and preprocessed. The signal parameters were set within reasonable ranges to ensure the generalizability of the dataset.

3.1. Simulation of Radar Signals

Specific parameters were established for each signal type to ensure the rationality and reliability of the dataset. Six representative signal types were designated: linear frequency modulation (LFM), nonlinear frequency modulation (NLFM), Costas code, binary phase shift keying (BPSK), Frank code, and the normal signal (NS). Considering only the radar signal and noise, the receiver intercepts an individual signal as follows:
$$f(t) = s(t) + n(t) = A(t)\exp\left(j\varphi(t)\right) + n(t) \tag{11}$$
where $n(t)$ is additive white Gaussian noise (AWGN), and $A(t)$ and $\varphi(t)$ are the amplitude and phase modulation functions of the signal, respectively. Different modulation types correspond to different $\varphi(t)$, determined primarily by the signal parameters. The parameter settings are listed in Table 2.
Each sample of the dataset used in this study was full-pulse sequence data, and the simulated environment contained the simultaneous reception of pulse signals from up to six radars. Therefore, based on (11), the mathematical expression for the full-pulse sequence signal is:
$$x(t) = \sum_{i=1}^{N} A_i(t)\exp\left(j\varphi_i(t)\right) + n(t) \tag{12}$$
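As an illustration of Eqs. (11) and (12), the sketch below superimposes several LFM pulses at random positions and adds complex AWGN at a chosen SNR. The parameter ranges follow Table 2; the emitter count, pulse placement, and SNR value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, total_len = 1.0, 8192                  # normalized sampling rate (Table 2)
x = np.zeros(total_len, dtype=complex)

for _ in range(3):                         # several emitters; the paper allows up to six
    n_samp = int(rng.integers(600, 1201))  # number of samples N in [600, 1200]
    f0 = rng.uniform(fs / 8, 3 * fs / 8)   # initial frequency f0
    bw = rng.uniform(fs / 8, 3 * fs / 8)   # bandwidth B
    t = np.arange(n_samp)
    phase = 2 * np.pi * (f0 * t + 0.5 * (bw / n_samp) * t ** 2)
    start = int(rng.integers(0, total_len - n_samp))
    x[start:start + n_samp] += np.exp(1j * phase)   # pulses may overlap in time

snr_db = 0.0                               # the paper sweeps -9 dB to 5 dB
noise_pow = np.mean(np.abs(x) ** 2) / 10 ** (snr_db / 10)
noise = np.sqrt(noise_pow / 2) * (rng.standard_normal(total_len)
                                  + 1j * rng.standard_normal(total_len))
x_noisy = x + noise                        # Eq. (12): sum of pulses plus n(t)
```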

3.2. Time–Frequency Analysis

Time–frequency analysis (TFA) is commonly used to study non-stationary signals. In this study, we used the Choi–Williams time–frequency distribution (CWD), a bilinear TFA method with the advantages of simplicity and cross-term suppression. The CWD of a signal $x(t)$ can be expressed as follows:
$$C_x(t, \omega) = \iint_{-\infty}^{+\infty} A_x(\theta, \tau)\, \phi(\theta, \tau)\, \exp\left(-j(t\theta + \omega\tau)\right)\, d\tau\, d\theta, \tag{13}$$
$$A_x(\theta, \tau) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} r_x(t, \tau)\, \exp(j\theta t)\, dt, \tag{14}$$
$$r_x(t, \tau) = x\left(t + \tau/2\right)\, x^*\left(t - \tau/2\right), \tag{15}$$
where $t$ and $\omega$ are the time and frequency coordinates, respectively, and $\phi(\theta, \tau)$ is the exponential kernel function, defined as
$$\phi(\theta, \tau) = \exp\left(-\frac{\theta^2 \tau^2}{\sigma}\right). \tag{16}$$
To suppress the cross terms and satisfy the need for frequency-domain resolution, we adopted σ = 1 . The TFIs of the various types of radar signals established by CWD are shown in Figure 5, and the CWD spectrum of the full-pulse sequence is shown in Figure 6.
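For reference, here is a minimal discrete rendering of Eqs. (13)–(16): form $r_x(t, \tau)$, transform over $t$ into the ambiguity domain, weight by the exponential kernel, and transform back. Edge handling, normalization, and frequency scaling are simplified assumptions, not a production TFA routine.

```python
import numpy as np

def choi_williams(x: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Return C[t, omega] for a complex signal x (simplified sketch)."""
    N = len(x)
    taus = np.arange(-(N // 2), N // 2)
    r = np.zeros((N, N), dtype=complex)             # r_x[t, tau], Eq. (15)
    t = np.arange(N)
    for j, tau in enumerate(taus):
        a = np.clip(t + tau // 2, 0, N - 1)         # x(t + tau/2)
        b = np.clip(t - tau // 2, 0, N - 1)         # x*(t - tau/2)
        r[:, j] = x[a] * np.conj(x[b])
    A = np.fft.fft(r, axis=0)                       # over t -> theta, Eq. (14)
    theta = 2 * np.pi * np.fft.fftfreq(N)
    kernel = np.exp(-np.outer(theta ** 2, taus.astype(float) ** 2) / sigma)
    r_smooth = np.fft.ifft(A * kernel, axis=0)      # kernel-smoothed r_x
    return np.fft.fft(r_smooth, axis=1)             # over tau -> omega, Eq. (13)

tfi = choi_williams(np.exp(1j * 0.001 * np.arange(256) ** 2))  # LFM test tone
```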

4. Experimental Results and Analysis

4.1. Training Process

A Python 3.7-based Linux system was used as the experimental environment, with an NVIDIA 3080Ti for acceleration; the number of epochs was set to 500 and the batch size to 64. The training optimizer was Adam, which combines the momentum method with an adaptive learning rate, updating the model parameters from exponentially weighted moving averages of the gradient and its second moment. This effectively handles nonconvex optimization problems, accelerates convergence, and performs well with sparse gradients. The dataset comprised 1000 signal samples per category for each signal-to-noise ratio (SNR) level. With SNR values ranging from −9 dB to 5 dB, this yielded 15 × 1000 × 6 = 90,000 samples across all categories. Employing a 4:1 training–test split, the dataset was partitioned into 72,000 training and 18,000 testing samples.
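The training setup can be expressed as the following PyTorch sketch. The optimizer, epoch count, batch size, and 4:1 split follow the text; the learning rate, loss function, and dummy tensors are assumptions (`MRCSNet` refers to the sketch in Section 2).

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Dummy tensors standing in for the simulated dataset (90,000 samples total).
signals = torch.randn(100, 1, 4096)            # full-pulse sequences
labels = torch.randint(0, 6, (100, 16))        # per-frame class labels

dataset = TensorDataset(signals, labels)
n_train = int(0.8 * len(dataset))              # 4:1 train/test split
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = MRCSNet(num_classes=6)                 # sketch from Section 2
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumption
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(500):                       # 500 epochs, as in the text
    for x, y in loader:
        optimizer.zero_grad()
        logits = model(x)                      # (batch, frames, classes)
        loss = loss_fn(logits.flatten(0, 1), y.flatten())
        loss.backward()
        optimizer.step()
```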

4.2. Influence Analysis of Network Parameters

To obtain a suitable network architecture, the effect of the parameters on network performance was examined by varying them; Table 3 lists the experimental results. In the first part of the experiments, the convolutional kernel size of the 1-DCNN was kept constant, and the number of LSTM units was gradually increased. As the number of LSTM units changed, the recognition accuracy of MRCS remained around 95%. These results show that the temporal features in the SincNet output can be extracted by an LSTM with an appropriate number of units; hence, the architecture with 32 LSTM units was chosen, as shown in Table 3. In the second part, the number of LSTM units was kept constant, and only the 1-DCNN convolutional kernel size was changed. With a kernel size of 1 × 4, the recognition accuracy began to decrease, so a kernel size of 1 × 3 was used. In the third part, optimal network performance was achieved with a dropout ratio of 0.5, as shown in Table 3.

4.3. Experimental Results and Comparative Analysis

By feeding the full-pulse sequence into the MRCS, we obtained the clustering and segmentation results for the different pulses in the sequence; the visualization of the clustering and segmentation effect is shown in Figure 7. To verify the effectiveness of the MRCS, it was compared against a single LSTM and a single 1-DCNN, both with the same parameter settings as the MRCS. The model output was evaluated using the segmentation error rate (SER), which indicates the extent to which the system incorrectly segments the radars in the data. As illustrated in Figure 8, the SER is calculated as $\mathrm{SER} = (n_C + n_M + n_F)/N$, where $n_C$ is the length of the confused data, $n_M$ is the length of the missed data, and $n_F$ is the length of the false-alarm data. The SER results of the different networks are shown in Table 4.
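The SER computation reduces to a per-sample comparison between predicted and true labels; in the sketch below, label 0 stands for "no pulse", a labeling convention of our own.

```python
import numpy as np

def segmentation_error_rate(pred: np.ndarray, truth: np.ndarray) -> float:
    """SER = (n_C + n_M + n_F) / N over per-sample class labels."""
    n_c = np.sum((pred != truth) & (pred > 0) & (truth > 0))  # confusion
    n_m = np.sum((pred == 0) & (truth > 0))                   # miss
    n_f = np.sum((pred > 0) & (truth == 0))                   # false alarm
    return (n_c + n_m + n_f) / len(truth)

truth = np.array([0, 1, 1, 1, 0, 2, 2, 0])
pred  = np.array([0, 1, 1, 2, 0, 2, 0, 1])
print(segmentation_error_rate(pred, truth))   # (1 + 1 + 1) / 8 = 0.375
```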
The second metric was recognition accuracy: the segmented data were classified, and the predicted types were compared with the ground truth to determine the reliability of the segmented locations. Figure 9 illustrates the recognition accuracies of MRCS, the single 1-DCNN, the single LSTM, and ResNet on the test set; the accuracy of MRCS was consistently the highest. At 0 dB, the recognition accuracies of MRCS, the single 1-DCNN, the single LSTM, and ResNet were 96%, 92%, 91%, and 92%, respectively, with MRCS outperforming the other networks by several percentage points. Analysis of data with varying temporal overlap ratios in Table 5 shows that recognition accuracy declined as the temporal overlap increased; however, the MRCS algorithm still demonstrated robust performance under these conditions.
Figures 10–13 show the confusion matrices of the MRCS, single 1-DCNN, single LSTM, and ResNet, respectively, detailing the classification results of each model. The distribution of the confusion matrices indicates that all networks could differentiate between the six signal classes; however, the single LSTM and single 1-DCNN confused the LFM and Frank signals more often, while MRCS differentiated them somewhat better. To analyze the generalization capability of the algorithm, pulse signals were extracted from the public dataset RML2016.10a and reconstructed into full-pulse signals for validation. The experimental results are presented in Table 6.

5. Conclusions

This study proposed a full-pulse clustering segmentation algorithm that integrates the frequency interception capability of SincNet, the feature extraction capability of the 1-DCNN, and the sequence analysis capability of the LSTM. Using the bandpass data provided by SincNet and the sequence features provided by the 1-DCNN, the full-pulse sequence was accurately localized and clustered. The method thus addresses the high pulse overlap rate, the variety of pulse signals, and the large volume of pulse data in full-pulse sequences. Empirical validation on a large amount of data showed that the method is applicable to full-pulse sequence data and achieves good clustering performance. In the future, we plan to explore clustering algorithms for full-pulse sequences with higher accuracy. Although the signal segmentation ability of the proposed algorithm is good, it has so far been applied only to slightly overlapping radar signals, and we will work on the separation and clustering of full-pulse signals under severe overlap. Future research will also focus on algorithmic refinements through enhanced signal preprocessing, with the specific objective of improving robustness in increasingly complex electromagnetic environments.

Author Contributions

Conceptualization, T.C. and Y.L.; Methodology, Y.L.; Software, L.G.; Validation, B.Y.; Formal analysis, Y.L.; Resources, T.C.; Data curation, B.Y.; Writing—original draft, Y.L.; Writing—review & editing, T.C.; Funding acquisition, T.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors thank the anonymous reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xu, S.; Ru, H.; Li, D.; Shui, P.; Xue, J. Marine Radar Small Target Classification Based on Block-Whitened Time–Frequency Spectrogram and Pre-Trained CNN. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5101311. [Google Scholar] [CrossRef]
  2. Wan, H.; Tian, X.; Liang, J.; Shen, X. Sequence-Feature Detection of Small Targets in Sea Clutter Based on Bi-LSTM. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4208811. [Google Scholar] [CrossRef]
  3. Chen, T.; Liu, Y.; Guo, L.; Lei, Y. A novel deinterleaving method for radar pulse trains using pulse descriptor word dot matrix images and cascade-recurrent loop network. IET Radar Sonar Navig. 2023, 17, 1626–1638. [Google Scholar] [CrossRef]
  4. Liu, L.; Xu, S. Unsupervised radar signal recognition based on multi-block—Multi-view Low-Rank Sparse Subspace Clustering. IET Radar Sonar Navig. 2022, 16, 542–551. [Google Scholar] [CrossRef]
  5. Lang, P.; Fu, X.; Cui, Z.; Feng, C.; Chang, J. Subspace Decomposition Based Adaptive Density Peak Clustering for Radar Signals Sorting. IEEE Signal Process. Lett. 2022, 29, 424–428. [Google Scholar] [CrossRef]
  6. Dong, X.; Liang, Y.; Wang, J. Distributed Clustering Method Based on Spatial Information. IEEE Access 2022, 10, 53143–53152. [Google Scholar] [CrossRef]
  7. Xu, T.; Yuan, S.; Liu, Z.; Guo, F. Radar Emitter Recognition Based on Parameter Set Clustering and Classification. Remote Sens. 2022, 14, 4468. [Google Scholar] [CrossRef]
  8. Chen, T.; Yang, B.; Guo, L. Radar Pulse Stream Clustering Based on MaskRCNN Instance Segmentation Network. IEEE Signal Process. Lett. 2023, 30, 1022–1026. [Google Scholar] [CrossRef]
  9. Mao, Y.; Ren, W.; Li, X.; Yang, Z.; Cao, W. Sep-RefineNet: A Deinterleaving Method for Radar Signals Based on Semantic Segmentation. Appl. Sci. 2023, 13, 2726. [Google Scholar] [CrossRef]
  10. Prashanth, H.C.; Rao, M.; Eledath, D.; Ramasubramanian, C. Trainable windows for SincNet architecture. Eurasip J. Audio Speech Music. Process. 2023, 2023, 3. [Google Scholar] [CrossRef]
  11. Wei, G.; Zhang, Y.; Min, H.; Xu, Y. End-to-end speaker identification research based on multi-scale SincNet and CGAN. Neural Comput. Appl. 2023, 35, 22209–22222. [Google Scholar] [CrossRef]
  12. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures. Neural Comput. 2019, 31, 1235–1270. [Google Scholar] [CrossRef] [PubMed]
  13. Chen, L.; Li, S.; Bai, Q.; Yang, J.; Jiang, S.; Miao, Y. Review of Image Classification Algorithms Based on Convolutional Neural Networks. Remote Sens. 2021, 13, 4712. [Google Scholar] [CrossRef]
Figure 1. Overall flowchart.
Figure 2. Architecture of SincNet.
Figure 3. Architecture of LSTM.
Figure 4. Architecture of 1-DCNN and classifier.
Figure 5. Time–frequency distribution of CWD for each modulation type.
Figure 6. Time–frequency distribution of CWD for full-pulse sequences.
Figure 7. Visualization of model output.
Figure 8. Schematic of SER calculation.
Figure 9. Accuracy of the MRCS, single 1-DCNN, single LSTM, and ResNet.
Figure 10. Confusion matrix of the MRCS.
Figure 11. Confusion matrix of the single 1-DCNN.
Figure 12. Confusion matrix of the single LSTM.
Figure 13. Confusion matrix of the single ResNet.
Table 1. Parameters of MRCS-Net.

| Subnetwork | Network Layer | Hyperparameter | Value | FLOPs |
|---|---|---|---|---|
| SincNet | Sinc filter | Window function | Hamming | 10.5 M |
| SincNet | Maxpool | Pooling size | 1 × 2 | 10.2 K |
| SincNet | Dropout | Dropout rate | 0.5 | 10.2 K |
| SincNet | Conv1D | Kernel size | 1 × 3 | 4.47 M |
| LSTM | LSTM | Neuron number | 256 | 34.3 M |
| 1-DCNN | Conv1D | Kernel size | 1 × 3 | 1.6 M |
| 1-DCNN | Maxpool | Pooling size | 1 × 2 | 4.1 K |
| 1-DCNN | Conv1D | Kernel size | 1 × 3 | 1.6 M |
| Classifier | Fully connected layer | Units | 128 | 49.2 K |
| Classifier | Fully connected layer | Units | C | 2.3 K |
Table 2. Parameterization of signals.

| Signal Waveform | Parameter | Uniform Range |
|---|---|---|
| LFM & NLFM | Normalized sampling rate f_s | 1 |
| | Number of samples N | [600, 1200] |
| | Bandwidth B | [f_s/8, 3f_s/8] |
| | Initial frequency f_0 | [f_s/8, 3f_s/8] |
| Costas | N | [600, 1200] |
| | Number changed | [3, 6] |
| | Fundamental frequency f_u | f_s/24 |
| BPSK | N | [600, 1200] |
| | Barker codes | [8] |
| | Carrier frequency f_c | [f_s/8, 3f_s/8] |
| Frank | N | [600, 1200] |
| | Frequency steps M | [4, 8] |
| | f_c | [f_s/8, 3f_s/8] |
| NS | N | [600, 1200] |
| | f_c | [f_s/8, 3f_s/8] |
Table 3. Experimental results with different parameters.

| Filter Size of 1-DCNN | Number of LSTM Units | Dropout | Accuracy |
|---|---|---|---|
| 1 × 3 | 16 | 0.25 | 92.45% |
| 1 × 3 | 32 | 0.25 | 93.08% |
| 1 × 3 | 64 | 0.25 | 92.91% |
| 1 × 2 | 32 | 0.25 | 90.83% |
| 1 × 3 | 32 | 0.25 | 94.26% |
| 1 × 4 | 32 | 0.25 | 93.28% |
| 1 × 5 | 32 | 0.25 | 92.67% |
| 1 × 3 | 16 | 0.5 | 95.41% |
| 1 × 3 | 32 | 0.5 | 96.75% |
| 1 × 3 | 64 | 0.5 | 94.80% |
| 1 × 2 | 32 | 0.5 | 92.17% |
| 1 × 3 | 32 | 0.5 | 96.75% |
| 1 × 4 | 32 | 0.5 | 95.39% |
| 1 × 5 | 32 | 0.5 | 95.12% |
| 1 × 3 | 16 | 0.75 | 91.50% |
| 1 × 3 | 32 | 0.75 | 92.11% |
| 1 × 3 | 64 | 0.75 | 92.02% |
| 1 × 2 | 32 | 0.75 | 87.44% |
| 1 × 3 | 32 | 0.75 | 92.52% |
| 1 × 4 | 32 | 0.75 | 92.39% |
| 1 × 5 | 32 | 0.75 | 91.71% |
Table 4. SER for each network.

| Network | Confusion | Miss | False Alarm | SER |
|---|---|---|---|---|
| MRCS | 4.24% | 4.95% | 0.43% | 9.62% |
| Single 1-DCNN | 4.61% | 4.96% | 0.70% | 10.27% |
| Single LSTM | 4.88% | 5.48% | 1.21% | 11.57% |
Table 5. Experimental results with different temporal overlap ratios.

| Temporal Overlap Ratio | MRCS | Single 1-DCNN | Single LSTM | ResNet |
|---|---|---|---|---|
| 10% | 96.0% | 93.6% | 91.3% | 94.1% |
| 15% | 92.2% | 89.6% | 85.1% | 90.7% |
| 20% | 87.9% | 82.1% | 76.4% | 86.0% |
Table 6. Experimental results with the public dataset.

| SNR (dB) | MRCS | Single 1-DCNN | Single LSTM | ResNet |
|---|---|---|---|---|
| −9 | 31.5% | 21.3% | 30.7% | 30.4% |
| −7 | 48.2% | 35.1% | 45.5% | 47.1% |
| −5 | 62.6% | 49.8% | 58.1% | 59.0% |
| −3 | 75.7% | 60.2% | 70.8% | 71.9% |
| −1 | 82.0% | 68.7% | 73.2% | 78.9% |
| 1 | 86.7% | 73.4% | 78.7% | 80.5% |
| 3 | 87.5% | 75.1% | 77.0% | 82.6% |
| 5 | 88.3% | 75.3% | 77.6% | 82.7% |

