Article

Enhancing Noise Robustness in Few-Shot Automatic Modulation Classification via Complex-Valued Autoencoders

by Minghui Gao 1, Binquan Zhang 1,*, Lu Wang 1, Xiaogang Tang 1,* and Hao Huan 2
1 School of Space Information, Space Engineering University, Beijing 101416, China
2 School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
* Authors to whom correspondence should be addressed.
Electronics 2026, 15(3), 674; https://doi.org/10.3390/electronics15030674
Submission received: 25 December 2025 / Revised: 24 January 2026 / Accepted: 2 February 2026 / Published: 3 February 2026

Abstract

The emergence of radio frequency machine learning has significantly propelled the application of deep learning (DL) methods in automatic modulation classification (AMC). However, in non-cooperative scenarios, DL-based AMC suffers severe performance degradation due to scarce labeled samples and noise interference. To enhance noise robustness in few-shot AMC, this paper proposes a complex-domain autoencoder-based method in which a complex-valued noise reduction network (CNRN) is embedded into the AMC framework, jointly extracting complex-valued and temporal features from noisy signals to achieve signal–noise separation. Our framework executes four sequential operations: high-signal-to-noise-ratio (high-SNR) samples are first isolated from limited raw data via unsupervised classification; rotation and cyclic time-shifting operations then augment the sample space; the CNRN is subsequently trained on the augmented data; and final AMC classification is implemented through DL-based classifiers. Experimental validation on the RML2016.10a dataset demonstrates that: (1) for −20 dB signals, denoising achieves a 20.18 dB SNR improvement with an 87.74% mean squared error reduction; (2) across the −20 dB to 18 dB range, denoised signals exhibit accuracy improvements of 21.57% under DL-based classifiers. Physical validation further confirms that the proposed method exhibits enhanced noise robustness, demonstrating its practical utility in real-world scenarios.

1. Introduction

Automatic modulation classification (AMC), functioning as a pivotal intermediate stage between signal detection and demodulation, plays a fundamental role in characterizing radio propagation environments and signal schemes within communication systems [1,2]. AMC serves to classify received signals into one of several candidate modulation types in non-cooperative communication environments, where prior knowledge of transmission parameters is unavailable. The emergence of intelligent communication systems has significantly elevated the importance of AMC in critical domains such as spectrum sensing and dynamic spectrum access [3,4]. Driven by its extensive application prospects and increasing practical requirements, AMC has garnered substantial research attention across both civilian and military domains.
Current methodologies for AMC are predominantly categorized into three classes: likelihood-based (LB) approaches, feature-based (FB) approaches, and deep learning-based (DLB) methods [5,6]. LB hypothesis testing offers a solid theoretical foundation, yet demands substantial prior knowledge and exhibits high computational complexity. FB methods deliver better recognition performance and are widely adopted in engineering practice; however, they suffer from limited adaptability and rely heavily on manually engineered features. Extensive studies have demonstrated that both LB and FB methodologies are typically tailored to specific modulation schemes or specialized operational scenarios. Consequently, the requisite selection of appropriate techniques based on prior knowledge constrains their adaptability to complex situations.
The remarkable pattern recognition capabilities of DLB approaches have led to their widespread application in AMC [7], particularly in achieving outstanding recognition accuracy. However, under non-cooperative scenarios, acquiring sufficient signal samples is challenging, and the resulting scarcity of training data severely restricts the performance of deep neural networks (DNNs) [8,9]. To address these limitations, related research primarily explores solutions from two perspectives: data and model/algorithm design [10]. The data-centric approach aims to achieve few-shot AMC by expanding the dataset or enhancing feature representations. For instance, Chen et al. utilized wavelet transform for data augmentation, reconstructing new samples by replacing detail coefficients from discrete wavelet decomposition to expand the training set [11]. Kong et al. employed supervised contrastive learning to enhance the feature representation of signals, thereby improving recognition accuracy under few-shot conditions [12]. Conversely, the model/algorithm-centric approach focuses on optimizing network architectures to deeply exploit distributional differences within limited data for AMC. For example, Tan et al. designed a multi-scale feature fusion module and a distribution similarity classifier to achieve AMC with limited labeled samples [13]. Zha et al. employed a designed multi-channel feature extraction network to obtain multi-modal fused features, which allows for effective classification of new signal classes with only a few samples [14].
The proliferation of radiating devices intensifies signal superposition within wireless channels, leading to increasingly complex propagation environments [15,16]. Illustrative examples include: complex electromagnetic distributions in unmanned aerial vehicle communication systems [17]; proximity constraints of wireless sensor nodes in underwater environments [18]; and dynamic multi-link conditions in satellite constellations operating in space [19]. Concurrently, signal reflections and refractions during transmission induce carrier frequency offset and phase noise [20], resulting in received signal distortion [21]. Furthermore, wireless channel conditions are adversely affected by natural factors such as meteorological variations and terrain features [22]. Within these complex propagation environments, the compounded effects of radio signals and external natural phenomena precipitate signal quality degradation. Consequently, interference from noise and other impairments substantially undermines the accuracy and reliability of AMC under few-shot conditions [23,24,25].

1.1. Related Works

To address performance degradation in few-shot AMC caused by impaired sample quality, Jiang et al. leveraged signal power spectra integrated with transfer learning and sparse autoencoders, moderately enhancing classification accuracy with limited training data [26]. Concurrently, Li et al. devised a noise preprocessing module to mitigate impulsive interference in underwater acoustic communications, architecting a hybrid neural network combining attention-aided convolutional neural networks (Att-CNN) with sparse autoencoders to achieve noise-robust recognition under data scarcity [27]. Zhang et al. pioneered channel–spatial attention mechanisms fused with relation networks, extracting discriminative feature distributions for fine-grained modulation classification while bolstering low-signal-to-noise-ratio (low-SNR) robustness [28]. Further advancing this domain, Wang et al. developed the integrated attention fusion network, synergizing noise preprocessing, attention mechanisms, and few-shot learning to enhance feature extraction efficiency and noise immunity with minimal labeled samples [29]. Zhang et al. designed a foreground segmentation-based few-shot learning framework to identify fine-grained modulation types of jamming signals. This framework can remove background noise and clutter with very few samples, thus demonstrating strong robustness in diverse noisy environments [30].
Although the aforementioned methods enhance AMC performance, achieving robust and high-accuracy recognition under noisy conditions remains a persistent challenge for existing AMC frameworks [31]. Hao et al. introduced a meta-learning framework correcting data distribution bias through optimized initialization and class-correlated confusion algorithms, substantially improving low-SNR performance in data-deficient regimes [32]. To tackle the challenge of noise-affected few-shot learning (FSL) in AMC, Sun et al. introduced a modulated signal pre-transformation framework [33]. By integrating an adaptive noise filtering module and an info-preserved augmentation module, this framework significantly boosts performance on existing FSL benchmarks. Complementarily, Li et al. employed EfficientDet for bidirectional feature fusion, augmenting inter-signal correlations, and implemented transfer learning for marine noise-robust signal classification [34]. Yi et al. constructed a denoising model based on a masked autoencoder structure to address the challenge of low-SNR AMC for radio-frequency proximity sensor signals and achieved few-shot recognition under intermittent sampling modes via transfer learning [35]. Zhan et al. proposed a window-based multi-scale temporal cross-attention method for few-shot modulation recognition. They designed a windowed cross-attention mechanism to achieve cross-modal fusion between constellation diagrams and in-phase/quadrature (I/Q) signals, capturing inter-window relationships and local–global information, thereby suppressing noise and enhancing robustness [36].
Currently, AMC methods designed to address noise robustness under few-shot conditions are primarily categorized into two groups: integrated monolithic models with inherent noise resistance and separated modular models that employ denoising preprocessing. Integrated monolithic models, such as those in [28,36], typically enhance noise robustness by combining few-shot learning with mechanisms like attention, which amplify useful signal features while suppressing irrelevant or noisy components. Separated modular models, exemplified by [29,35], primarily mitigate the impact of noise on AMC through a denoising preprocessing stage to improve recognition rates. This paradigm consists of two distinct modules—preprocessing and classification—offering greater flexibility. The modular design allows for easy adaptation of the classifier module to meet varying requirements, making this approach more versatile in practical applications.

1.2. Motivation and Contributions

Although these approaches demonstrate progressive enhancements, current research predominantly processes signals through segregated I/Q components while neglecting complex-domain characteristics (amplitude–phase interdependencies) [37,38]. Moreover, most existing methods employ real-valued network architectures that cannot fully exploit complex-valued signal representations, which limits denoising efficacy and warrants further investigation into complex-domain feature extraction paradigms.
Inspired by the integration of complex-domain theory with deep learning methodologies, this paper presents a complex-valued noise reduction network for few-shot modulation signals. The proposed framework first introduces an unsupervised classification module to obtain high-SNR samples to alleviate the difficulty of acquiring samples for the training sets of denoising networks. Then, by using a data augmentation module including rotation and cyclic time shifting (CTS), the problem regarding the limited sample sizes of end-to-end denoising networks in non-cooperative scenarios can be solved. Subsequently, a complex-valued noise reduction network (CNRN) is designed to solve the problem that the existing real-valued networks ignore the complex-domain characteristics of modulation signals. The amplitude and phase characteristics of modulation signals are fully explored, and the model proposed in this paper is validated on a public dataset and our collected data. Finally, we implement DLB few-shot AMC. Extensive comparative experiments validate the effectiveness of the proposed method, demonstrating significant improvements in both denoising performance and modulation classification accuracy. The contributions of this work are summarized as follows:
  • An unsupervised SNR classification method based on time–frequency representations is designed to address the challenge of constructing high-quality training samples for denoising networks. The framework employs the K-means algorithm to cluster modulation signals using time–frequency diagrams, integrating image information entropy for high/low SNR binary classification.
  • To overcome the scarcity of high-quality labeled samples, a composite augmentation strategy incorporating rotation and CTS operations expands the classified high-SNR dataset, effectively mitigating overfitting in few-shot network training scenarios.
  • A CNRN is developed to overcome existing real-valued networks’ inability to effectively learn complex-domain features of communication signals. Trained on augmented high-quality samples, the CNRN extracts discriminative complex-domain features during denoising.
  • Extensive comparative validation experiments were conducted on a public dataset and collected data, encompassing both diverse denoising approaches and representative DLB classifiers. The results verify the superior performance of the proposed method.
This work focuses on the generic challenge of few-shot AMC in non-cooperative wireless communication scenarios, where the primary and most statistically universal impairment is assumed to be additive white Gaussian noise (AWGN). The proposed framework is designed to enhance robustness against this fundamental distortion model.

1.3. Organization

The remainder of this paper is organized as follows. Section 2 details the proposed methodology for enhancing noise robustness under few-shot conditions. Section 3 presents experimental configurations and comparative results. Section 4 discusses performance evaluations and limitations. Finally, conclusions and future research directions are provided in Section 5.

2. Methods

2.1. Signal Model

The representation of the signal x(t) obtained after carrier modulation of the baseband signal is given by (1). In this representation, the modulated signal comprises orthogonal I and Q components, corresponding to the real and imaginary parts, respectively [39].
$$x(t) = A_0 A_c \Big[ \cos\!\big(2\pi (f_0 + f_c) t + \theta_0 + \theta_c\big) + j \sin\!\big(2\pi (f_0 + f_c) t + \theta_0 + \theta_c\big) \Big] \tag{1}$$
where A_0, f_0, and θ_0 denote the amplitude, frequency, and phase of the baseband signal, respectively, while A_c, f_c, and θ_c represent the amplitude, frequency, and phase of the carrier.
At the cooperative transmitter, the complex-valued modulated signal x ( t ) propagates through a complex-valued linear time-invariant (LTI) wireless channel characterized by transfer function h ( t ) , and is subsequently corrupted by AWGN ψ ( t ) from the electromagnetic environment. The intercepted signal s ( t ) at the non-cooperative receiver is expressed as:
$$
\begin{aligned}
s(t) &= x(t) * h(t) + \psi(t) \\
&= \big[x_I(t) + j x_Q(t)\big] * \big[h_I(t) + j h_Q(t)\big] + \big[\psi_I(t) + j \psi_Q(t)\big] \\
&= \big[x_I(t) * h_I(t) - x_Q(t) * h_Q(t) + \psi_I(t)\big] + j \big[x_I(t) * h_Q(t) + x_Q(t) * h_I(t) + \psi_Q(t)\big]
\end{aligned} \tag{2}
$$
where ∗ denotes the time-domain convolution of the signal with the channel transfer function h(t), and the subscripts I and Q denote the real and imaginary parts, respectively, of the corresponding complex-valued quantity.
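As a concrete illustration of the received-signal model in (2), the following numpy sketch (the two-tap channel, carrier parameters, and noise level are illustrative assumptions, not values from the paper) passes a complex modulated signal through a complex LTI channel, adds complex AWGN, and verifies that the compact complex form agrees with the expanded I/Q form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Complex modulated signal per (1), with A0*Ac = 1 (illustrative values).
N = 128
t = np.arange(N)
x = np.exp(1j * (2 * np.pi * 0.05 * t + 0.3))

# Illustrative complex channel impulse response h(t) and complex AWGN ψ(t).
h = np.array([0.8 + 0.2j, 0.1 - 0.1j])
psi = 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Received signal per (2): s(t) = x(t) * h(t) + ψ(t), with * as convolution.
s = np.convolve(x, h)[:N] + psi

# The same result from the expanded real-valued form of (2).
xI, xQ = x.real, x.imag
hI, hQ = h.real, h.imag
sI = np.convolve(xI, hI)[:N] - np.convolve(xQ, hQ)[:N] + psi.real
sQ = np.convolve(xI, hQ)[:N] + np.convolve(xQ, hI)[:N] + psi.imag

assert np.allclose(s, sI + 1j * sQ)
```

The agreement of the two forms is exactly the mapping that the complex-valued cross-term layers of the CNRN are designed to mimic.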

2.2. Framework Design

AMC in non-cooperative scenarios faces the dual challenges of insufficient sample availability and complex electromagnetic environments characterized by diverse interference sources. This leads to received modulated signals being contaminated by varying levels of noise. Signal denoising preprocessing plays a crucial role in enhancing the robustness of AMC. Therefore, this paper proposes a complex-valued autoencoder-based method, specifically designed for few-shot AMC, to enhance robust modulation classification. The overall framework is illustrated in Figure 1.
In non-cooperative scenarios, the electromagnetic environment is complex and dynamic, resulting in scarce collected sample data with significant SNR variations. Training a complex-valued autoencoder network requires a sufficient number of high-quality samples. To address this, the paper designs an SNR classification module. During the offline stage, this module operates in Steps 1–3: data transform of the few-shot noisy signals is performed based on the Choi–Williams distribution (CWD) to obtain time–frequency representations; the unsupervised K-means algorithm is then applied to cluster the time–frequency features extracted by a CNN; and high-SNR data are separated and acquired based on the image information entropy. In Step 4 during the offline stage, a data augmentation module based on rotation and CTS expands the quantity of high-SNR samples obtained from Step 3, generating augmented data (i.e., a substantial set of high-SNR samples). The augmented samples are then utilized to train the complex-valued denoising module, CNRN. In the online deployment stage, noisy signals undergo denoising through the pre-trained CNRN before being fed into the modulation recognition network, yielding substantially enhanced robustness in classification performance.

2.3. SNR Classification Module

To address the challenge that intercepted low-SNR signals lack corresponding clean counterparts—making it difficult to acquire sample pairs that satisfy DL network training requirements—this paper proposes an SNR classification module based on time–frequency representations, as illustrated in Figure 2. Leveraging the characteristic that time–frequency representations contain richer information than numerical sequences [5], we first transform noisy signals into time–frequency diagrams using CWD [40]. Subsequently, CNN extracts features from these time–frequency diagrams. These features are then clustered via K-means to achieve SNR-based clustering. Finally, high- and low-SNR categories are discriminated by evaluating the information entropy of the clustered image representations.
The accuracy of SNR clustering improves with richer noisy signal features. Therefore, leveraging the capability of time–frequency representations to provide joint time–frequency distribution information of signals [41], we employ them for SNR classification. As depicted in Figure 3, coherent bright regions in the time–frequency representation indicate signal distribution, where higher brightness corresponds to greater signal magnitude (energy/intensity). Scattered bright regions represent noise distribution, while dark areas denote the absence of signals. Comparative analysis of Figure 3a,b demonstrates that noise disrupts the structured envelope distribution in the time–frequency representation, even to the extent of obscuring the inherent envelope shape of the signal. Consequently, the proposed time–frequency representation-based SNR classification method achieves unsupervised clustering by simultaneously preserving modulation-related signal characteristics and exploiting the rich feature information inherent in image representations.
As a fundamental and widely used clustering algorithm, K-means employs unsupervised iteration to identify optimal partitioning schemes for k clusters. Here, K-means performs binary clustering based on the features of the time–frequency diagrams. During this binary clustering process, two cluster centers, μ_m1 and μ_m2, are randomly selected. The sum of the average distances, denoted as G_m, from the other samples x_m1 and x_m2 in the two clusters to their respective cluster centers is calculated using (3). The iteration terminates when the average distance G_m meets the minimum condition. After the iteration stops, two clustering results, X_1 and X_2, for the time–frequency diagrams are obtained.
$$G_m = d\big(x_{m1}, \mu_{m1}\big)^2 + d\big(x_{m2}, \mu_{m2}\big)^2 \tag{3}$$
Although clustering signal time–frequency representations allows distinguishing between the high-SNR and low-SNR classes by visually inspecting the signal quality in the clustered results, increasing modulation types incur higher computational overhead. To enable faster discrimination, we compute the average information entropies H̄_1 and H̄_2 of the clustered image sets X_1 and X_2 according to (4). Higher image entropy indicates greater disorder in the representation, where increased disorder corresponds to stronger noise and lower SNR [42]. Following this principle, the cluster containing the high-SNR time–frequency representations is selected based on its lower entropy value. Ultimately, the high-SNR signal samples Z are extracted from the dataset using their sample labels.
$$\bar{H} = -\frac{1}{k} \sum_{x} \sum_{i=0}^{255} \sum_{j=0}^{255} P_{i,j}(x) \log_2 P_{i,j}(x) \tag{4}$$
where i represents the grey value of a pixel, j represents the average grey value of its associated neighbourhood, P_{i,j}(x) represents the proportion of pixels in sample x whose grey value is i and whose average neighbourhood grey value is j, and k represents the number of samples.
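The entropy-based discrimination in (4) can be sketched as follows. The `image_information_entropy` helper is a hypothetical implementation that assumes a 3×3 neighborhood for the average grey value j, which the paper does not specify; the two test images are synthetic stand-ins for a noisy and a clean time–frequency diagram:

```python
import numpy as np

def image_information_entropy(img):
    """2-D information entropy of an 8-bit grayscale image: entropy of the
    joint distribution of each pixel's grey value i and the mean grey value j
    of its 3x3 neighborhood (edges handled by edge-padding)."""
    img = img.astype(np.int64)
    H_img, W_img = img.shape
    padded = np.pad(img, 1, mode="edge")
    # Integer mean of the 3x3 neighborhood around each pixel.
    neigh = sum(padded[di:di + H_img, dj:dj + W_img]
                for di in range(3) for dj in range(3)) // 9
    # Joint histogram over (i, j) pairs, normalized to probabilities P_{i,j}.
    hist = np.zeros((256, 256))
    np.add.at(hist, (img.ravel(), neigh.ravel()), 1)
    P = hist / hist.sum()
    nz = P > 0
    return -np.sum(P[nz] * np.log2(P[nz]))

rng = np.random.default_rng(1)
noisy = rng.integers(0, 256, size=(64, 64))      # disordered, noise-like image
clean = np.tile(np.arange(64) * 4, (64, 1))      # structured gradient image

# The cluster with the lower average entropy is taken as the high-SNR cluster.
assert image_information_entropy(noisy) > image_information_entropy(clean)
```

In the full pipeline this score would be averaged over all k images in each cluster, and the lower-entropy cluster kept as the high-SNR set Z.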

2.4. Data Augmentation Module

To address the challenge of insufficient availability of high-quality, reliably labeled samples in non-cooperative scenarios—which impedes the training of DLB denoising methods—this paper proposes a data augmentation module employing rotation and CTS operations. This module applies phase rotation and CTS operations to the high-SNR samples identified by the SNR classification module. These operations significantly increase the volume of training data by generating augmented samples from the original limited high-SNR set.
Prior to data augmentation, wavelet transform (WT) is applied to the high-SNR data Z obtained from the SNR classification module. These high-SNR samples—characterized by limited quantity yet minimal noise contamination—undergo denoising via WT to produce refined approximations of clean signal data Z for subsequent operations.
$$\Psi_{a,\xi}(t) = a^{-\frac{1}{2}}\, \Psi\!\left(\frac{t-\xi}{a}\right), \quad a > 0,\ \xi \in \mathbb{R} \tag{5}$$
where a is the scaling factor, ξ is the frequency shift factor, and Ψ_{a,ξ}(t) is the wavelet basis function.
Exploiting the spatial orthogonality of complex-valued modulated signals, intercepted signals may exhibit rotated orientations in space due to varying polarization alignments. Concurrently, the temporal continuity of modulated signals—combined with randomness in signal interception and window overlapping effects during processing—may cause intercepted samples to contain adjacent signal segments or non-information-bearing redundant bits (e.g., guard intervals). Therefore, the proposed rotation and CTS augmentation method expands the dataset while preserving the temporal correlation of modulation signal sequences [43].
The rotation process is as follows:
$$\begin{bmatrix} I' \\ Q' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} I \\ Q \end{bmatrix}, \quad \theta \in \Big\{0,\ 2\pi \cdot \tfrac{1}{n},\ 2\pi \cdot \tfrac{2}{n},\ \ldots,\ 2\pi \cdot \tfrac{n-1}{n}\Big\} \tag{6}$$
where [I; Q] represents the dataset with a limited number of original signal samples, [I′; Q′] represents the dataset augmented by rotation, and θ represents the counterclockwise rotation angle, obtained by dividing 360° evenly according to the expansion multiple n.
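A minimal numpy sketch of the rotation augmentation in (6); the `rotate_iq` helper and the sample values are illustrative, not part of the paper:

```python
import numpy as np

def rotate_iq(iq, n):
    """Rotation augmentation per (6): rotate one IQ sample of shape (2, N)
    by each of the n evenly spaced angles θ = 0, 2π/n, ..., 2π(n-1)/n."""
    out = []
    for k in range(n):
        theta = 2 * np.pi * k / n
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        out.append(R @ iq)
    return np.stack(out)              # shape (n, 2, N): n rotated copies

iq = np.vstack([np.cos(np.linspace(0, 4 * np.pi, 128)),
                np.sin(np.linspace(0, 4 * np.pi, 128))])
aug = rotate_iq(iq, 4)
assert aug.shape == (4, 2, 128)
assert np.allclose(aug[0], iq)        # θ = 0 leaves the sample unchanged
assert np.allclose(aug[2], -iq)       # θ = π flips the sign of both rails
```

Each of the n rotations preserves the sample's temporal structure, so the expansion multiple n directly sets how many rotated copies each high-SNR sample yields.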
The CTS process is given by (7) and (8). Radio signals exhibit periodicity, so removing data from the previous period of the signal is equivalent to introducing a time delay into the received signal. To facilitate data processing, the removed segment is appended to the end of the signal sample, effectively performing a cyclic left shift. When 0 < l < N, with l ∈ {N·1/n, N·2/n, …, N·(n−1)/n}, augmentation is applied via (7); when l = N, augmentation is implemented through (8).
$$r'_{IQ} = \begin{bmatrix} s_I(l) & \cdots & s_I(N-1) & s_I(0) & \cdots & s_I(l-1) \\ s_Q(l) & \cdots & s_Q(N-1) & s_Q(0) & \cdots & s_Q(l-1) \end{bmatrix} \tag{7}$$
$$r'_{IQ} = \begin{bmatrix} s_I(0) & \cdots & s_I(l-1) \\ s_Q(0) & \cdots & s_Q(l-1) \end{bmatrix} \tag{8}$$
where r′_IQ is the shifted signal sample matrix in IQ format with length N, N represents the length of the window applied when processing the intercepted signal (which is also the signal sample length), and l represents the length of the cyclic left shift.
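The CTS operation of (7) and (8) can be sketched with `np.roll`; the `cyclic_time_shift` helper and the toy sample are illustrative:

```python
import numpy as np

def cyclic_time_shift(iq, n):
    """CTS augmentation per (7)-(8): cyclically left-shift one IQ sample of
    shape (2, N) by l = N/n, 2N/n, ..., N; l = N reproduces the original."""
    N = iq.shape[1]
    shifts = [N * k // n for k in range(1, n + 1)]
    return np.stack([np.roll(iq, -l, axis=1) for l in shifts])

iq = np.arange(16, dtype=float).reshape(2, 8)
aug = cyclic_time_shift(iq, 4)
assert aug.shape == (4, 2, 8)
assert np.allclose(aug[0][0], [2, 3, 4, 5, 6, 7, 0, 1])  # l = 2: left shift by 2
assert np.allclose(aug[-1], iq)                           # l = N: unchanged
```

Combined with the rotation operation, each high-SNR sample thus expands into a grid of rotated and cyclically shifted variants while its temporal correlation is preserved.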

2.5. Complex-Valued Denoising Module

Addressing the limitation of real-valued networks in fully extracting complex-valued features of signals transmitted through electromagnetic environments, this paper proposes a complex-valued autoencoder-based noise reduction network. The denoising network adopts an autoencoder framework [44]. During encoding, high-dimensional features of noisy signals undergo dimensionality reduction to extract latent variables capturing intrinsic signal characteristics. The decoding process then reconstructs these intrinsic features from the latent space into data features matching the dimensionality of the noisy input, thereby achieving denoising and producing purified signal data.
Current processing methods for communication modulation signals predominantly follow the formulation in (9), where signals are represented as real-valued two-dimensional arrays by separating real and imaginary components into distinct channels. This real-valued representation fails to capture complex-domain characteristics such as amplitude and phase. Since signal properties dictate communication system behaviors, and system architectures determine network design paradigms, existing denoising networks are predominantly real-valued (mapping function is given by (10) [45]). To better preserve the complex-domain characteristics of modulated signal time series, this paper proposes a complex-valued neural network for signal denoising, with its mapping function defined in (11).
$$s = \begin{bmatrix} I \\ Q \end{bmatrix} = \begin{bmatrix} I_0 & I_1 & \cdots & I_{N-1} \\ Q_0 & Q_1 & \cdots & Q_{N-1} \end{bmatrix} \tag{9}$$
where I and Q represent the real and imaginary parts, respectively, of the received signal sample s, which are both real numbers, and N represents the signal sample length after performing sampling and window processing.
$$s_{\text{out}} = W_1 \cdot s_{\text{in}}, \quad W_1 = \begin{bmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \end{bmatrix} \tag{10}$$
where s_in and s_out represent the input and output of the network, respectively, and W_1 represents the weight matrix of the real-valued network.
$$s_{\text{out}} = W_2 \cdot s_{\text{in}}, \quad W_2 = \begin{bmatrix} w_r & -w_i \\ w_i & w_r \end{bmatrix} \tag{11}$$
where s_in and s_out represent the input and output of the network, respectively, W_2 represents the weight matrix of the complex-valued network, and w_r and w_i represent the weights of the real and imaginary kernels of the complex-valued network, respectively.
Leveraging the temporal regularity inherent in communication modulation signals and long short-term memory (LSTM) networks’ proficiency in processing sequential data [46], our denoising model adopts LSTM as its foundational architecture. To effectively capture complex-domain characteristics and temporal features of modulated signals, we design a complex-valued LSTM (CLSTM) unit with cross-term interactions within an autoencoder framework.
As illustrated in Figure 4, the training phase feeds the augmented high-SNR signals D (processed via rotation and CTS) into the proposed CNRN. These inputs are corrupted with AWGN ψ(t) (−20 dB to 18 dB in 2 dB intervals) to form noisy–clean sample pairs. The network weights W_n are optimized by minimizing the loss between the denoised predictions and the clean targets D until convergence. The specific implementation details are shown in Algorithm 1. During testing, noisy signals are denoised by the trained network, with performance evaluated using correlation metrics.
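Generating noisy–clean training pairs at the stated SNR levels might look like the following numpy sketch; the `add_awgn` helper and the single-tone clean signal are assumptions for illustration, not the paper's dataset:

```python
import numpy as np

rng = np.random.default_rng(2)

def add_awgn(clean, snr_db, rng):
    """Corrupt a complex clean sample with complex AWGN scaled so that the
    resulting pair has the requested SNR in dB."""
    p_signal = np.mean(np.abs(clean) ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    noise = np.sqrt(p_noise / 2) * (rng.normal(size=clean.shape)
                                    + 1j * rng.normal(size=clean.shape))
    return clean + noise

clean = np.exp(1j * 2 * np.pi * 0.05 * np.arange(1024))

# One noisy-clean pair per SNR level, from -20 dB to 18 dB in 2 dB steps.
snrs = range(-20, 20, 2)
pairs = [(add_awgn(clean, snr, rng), clean) for snr in snrs]
assert len(pairs) == 20

# Sanity check: the empirical SNR of the 0 dB pair should be near 0 dB.
noisy = add_awgn(clean, 0, rng)
emp = 10 * np.log10(np.mean(np.abs(clean) ** 2)
                    / np.mean(np.abs(noisy - clean) ** 2))
assert abs(emp) < 1.0
```

Pairs built this way span the full evaluation range, so the denoiser sees every noise level it will later be tested on.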
The architecture of the CNRN is illustrated in Figure 4 (right). The network consists of six layers of CLSTM units alternating with five layers of complex-valued cross-terms. The encoder part comprises three CLSTM layers and three cross-term layers, while the decoder part consists of three CLSTM layers and two cross-term layers.
Each CLSTM unit contains separate real and imaginary kernels. The real kernel extracts features from the I component of the signal, and the imaginary kernel extracts features from the Q component. After each CLSTM layer extracts the I and Q features, a complex-valued cross-term layer combines the real and imaginary features to achieve complex-domain feature extraction. The specific implementation process is as follows:
(1)
The real and imaginary parts (I_in, Q_in) of the complex-valued input signal s_in = I_in + jQ_in are fed into the real and imaginary kernels of the first CLSTM layer.
(2)
The output features (I_n, Q_n) and the kernel weights (w_{r_n}, w_{i_n}) of the CLSTM real and imaginary kernels are obtained.
(3)
Cross-term calculation is performed to obtain I′_n and Q′_n, as shown in the following,
$$I'_n = I_n w_{r_n} - Q_n w_{i_n}, \qquad Q'_n = I_n w_{i_n} + Q_n w_{r_n} \tag{12}$$
where n ∈ {1, 2, 3, 4, 5} denotes the network layer index. Here, I_n and Q_n represent the input to the n-th cross-term layer (which are also the output features of the n-th CLSTM layer), while I′_n and Q′_n denote the output of the n-th cross-term layer. This network mapping relationship, consistent with the complex-valued LTI mapping process of a complex-valued communication system, is formulated in (2).
(4)
The real and imaginary parts (I′_n, Q′_n) of the cross-term output are then fed into the real and imaginary kernels of the next CLSTM layer for further feature extraction.
(5)
Steps (2)–(4) are repeated twice to complete the encoding process.
(6)
Steps (2)–(4) are repeated twice more to complete the decoding process.
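The cross-term computation of step (3), i.e., (12), reduces to one complex multiplication per layer. A minimal numpy sketch follows; scalar weights are assumed for clarity, whereas the actual kernels are LSTM weight tensors:

```python
import numpy as np

def cross_term(I_n, Q_n, w_r, w_i):
    """Complex-valued cross-term layer per (12): combine the real-kernel and
    imaginary-kernel feature maps into complex-domain features."""
    I_out = I_n * w_r - Q_n * w_i
    Q_out = I_n * w_i + Q_n * w_r
    return I_out, Q_out

rng = np.random.default_rng(3)
I_n, Q_n = rng.normal(size=128), rng.normal(size=128)
w_r, w_i = 0.7, -0.2

I_out, Q_out = cross_term(I_n, Q_n, w_r, w_i)

# Equivalent to multiplying (I_n + jQ_n) by the complex weight (w_r + j w_i),
# i.e., the W_2 mapping of (11).
ref = (I_n + 1j * Q_n) * (w_r + 1j * w_i)
assert np.allclose(I_out, ref.real) and np.allclose(Q_out, ref.imag)
```

Because the cross-term exactly reproduces complex multiplication, stacking CLSTM kernels with these layers lets a real-valued computation graph realize the complex-valued W_2 mapping of (11).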
To achieve denoising for multiple modulation types, the CNRN employs a multi-task learning paradigm where each channel focuses on denoising one specific modulation type. This design prevents feature aliasing across different signal types, which could otherwise degrade denoising performance. Accordingly, the channel numbers for the three CLSTM layers in the encoder are set to 8, 128, and 256, respectively. The decoder reconstructs the signal by upsampling the latent features obtained from the encoded and dimensionally reduced representation. Starting from the 256-channel latent features, three CLSTM layers upsample them to restore a signal with the same dimension as the input, yielding the denoised modulated signal. Therefore, the channel numbers for the three CLSTM layers in the decoder are 256, 128, and 8, respectively. Finally, the denoised signal is decomposed channel-wise to obtain the denoising results s out for different modulation types, completing the multi-task learning process.
Algorithm 1: Offline Training Process of Denoising Module

2.6. DLB Automatic Modulation Classifier

Meta-learning, as a common approach to addressing few-shot AMC problems, is employed in this paper, where a CNN is used to extract signal features and construct a meta-learning classifier for modulation-type identification. The noise robustness of this framework is subsequently validated. The CNN feature extractor consists of two Conv1D layers with 32 channels and a kernel size of 7, followed by a GlobalAveragePooling1D layer.
The CNN is defined as a neural network f_θ with meta-parameters θ, initialized as θ_0. The parameters are updated over N gradient steps using the support set S_b (i.e., the training set) to obtain θ_N, after which the network performance is evaluated on the query set T_b (i.e., the test set). Here, b denotes the index of a specific task within a batch of tasks, and each set of N updates constitutes one inner loop.
The CNN parameters after i update steps on S_b can be expressed as:
$$\theta_i^b = \theta_{i-1}^b - \alpha \nabla_{\theta} \mathcal{L}_{S_b}\big(f_{\theta_{i-1}^b}\big) \tag{13}$$
where α is the learning rate, θ_i^b represents the updated CNN weights for task b after i steps, and L_{S_b}(f_{θ_{i−1}^b}) denotes the loss on the support set of task b after i − 1 updates.
Given a batch size of B tasks, the meta-objective is defined as:
$$\mathcal{L}_{meta}(\theta_0) = \sum_{b=1}^{B} \mathcal{L}_{T_b}\big(f_{\theta_N^b(\theta_0)}\big) \tag{14}$$
where θ_N^b is derived from θ_0 via (13). Equation (14) evaluates the quality of the initial parameters θ_0 by aggregating the losses over all tasks after inner-loop adaptation. The initial parameters θ_0 are optimized by minimizing the meta-objective. Since θ_0 encapsulates knowledge across tasks, this optimization process is referred to as the outer-loop update.
The update of the meta-parameters combines both inner-loop and outer-loop adjustments, expressed as:
$$\theta_0 \leftarrow \theta_0 - \beta \nabla_{\theta} \sum_{b=1}^{B} \mathcal{L}_{T_b}\big(f_{\theta_N^b(\theta_0)}\big) \qquad (15)$$
where $\beta$ is the meta-learning rate and $\mathcal{L}_{T_b}$ denotes the loss on the query set of task $b$. In this work, the following hyperparameters are used: $N = 2$, $B = 5$, $\alpha = \beta = 0.001$, and the loss function is categorical cross-entropy.
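The inner/outer loops of (13)–(15) can be sketched framework-agnostically on a toy scalar regression problem. This is a first-order approximation of the meta-gradient (FOMAML) rather than the full second-order update, and the scalar model is purely illustrative; the hyperparameter values follow the text:

```python
import numpy as np

ALPHA, BETA, N_STEPS, BATCH = 0.001, 0.001, 2, 5   # alpha, beta, N, B from the text

def loss_grad(theta, x, y):
    """Gradient of the MSE loss for the toy scalar model f_theta(x) = theta * x."""
    return np.mean(2.0 * (theta * x - y) * x)

def inner_loop(theta0, x_s, y_s):
    """Eq. (13): N gradient steps on the support set S_b."""
    theta = theta0
    for _ in range(N_STEPS):
        theta = theta - ALPHA * loss_grad(theta, x_s, y_s)
    return theta

def outer_step(theta0, tasks):
    """Eq. (15), first-order variant: sum query-set gradients at adapted weights."""
    grad = 0.0
    for x_s, y_s, x_q, y_q in tasks:
        theta_n = inner_loop(theta0, x_s, y_s)   # inner-loop adaptation
        grad += loss_grad(theta_n, x_q, y_q)     # query-set loss gradient
    return theta0 - BETA * grad

# All tasks share the target mapping y = 2x, so theta0 should drift toward 2
rng = np.random.default_rng(0)
def make_task():
    x_s, x_q = rng.uniform(-1, 1, 10), rng.uniform(-1, 1, 10)
    return x_s, 2 * x_s, x_q, 2 * x_q

theta0 = 0.0
for _ in range(1000):
    theta0 = outer_step(theta0, [make_task() for _ in range(BATCH)])
```

In the actual method the scalar $\theta$ is replaced by the CNN weights and the MSE loss by categorical cross-entropy, but the two-level update structure is the same.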

2.7. Evaluation Metrics

To validate that the proposed approach mitigates the impact of noise on few-shot AMC and enhances noise robustness, the evaluation is conducted from two perspectives: SNR and the mean squared error (MSE) are used to quantify the denoising performance of the proposed method, while classification accuracy is used to quantify the robustness enhancement it provides.

2.7.1. Evaluation Metrics for Denoising Performance

SNR is adopted to evaluate the performance of the denoising network by comparing the SNR of the original and denoised signals: the higher the SNR, the better the signal quality and the stronger the denoising effect. Meanwhile, MSE estimates the deviation between the denoised signal $\hat{x}(n)$ and the clean signal $x(n)$; a smaller MSE indicates a smaller difference from the clean signal and thus better denoising performance. SNR and MSE, as quantitative indicators of the denoising effect, are defined as follows [47], where $N$ denotes the length of the signal.
$$\mathrm{SNR} = 10 \log_{10} \frac{\frac{1}{N}\sum_{n=0}^{N-1} x(n)^2}{\frac{1}{N}\sum_{n=0}^{N-1} \big(x(n) - \hat{x}(n)\big)^2}$$
$$\mathrm{MSE} = \frac{1}{N}\sum_{n=0}^{N-1} \big(x(n) - \hat{x}(n)\big)^2$$
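These two metrics translate directly into code; the following NumPy sketch uses a synthetic example in which the denoised signal retains 90% of a clean reference, which yields an output SNR of exactly 20 dB:

```python
import numpy as np

def snr_db(clean, denoised):
    """Output SNR: ratio of clean-signal power to residual-error power, in dB."""
    err = clean - denoised
    return 10.0 * np.log10(np.mean(clean**2) / np.mean(err**2))

def mse(clean, denoised):
    """Mean squared error between the denoised and clean signals."""
    return np.mean((clean - denoised)**2)

n = np.arange(1024)
clean = np.sqrt(2.0) * np.cos(2 * np.pi * 0.05 * n)   # ~unit average power
denoised = 0.9 * clean                                 # residual error = 0.1 * clean
print(round(snr_db(clean, denoised), 2))               # 20.0
```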
In addition, time–frequency representation analysis [41] can reveal signal differences through visual inspection of frequency points and envelopes, complementing the quantitative indicators. Frequency points in the time–frequency representation are indicated by brightness; for example, the strongest brightness within the green wireframe in Figure 3 marks the dominant frequency points, while the envelope refers to the distribution shape of the bright areas. Therefore, to reflect the denoising effect more intuitively, the frequency points and envelopes in the time–frequency representations of the original and denoised signals are compared alongside the quantitative SNR and MSE indicators: the higher their consistency, the better the denoising effect.

2.7.2. Evaluation Metrics for Robustness Enhancement

Modulation classification accuracy, as an indicator for evaluating feasibility in engineering applications, can be used to verify the effect of enhancing modulation classification robustness. The higher the average classification accuracy under different SNRs, the better the noise robustness of modulation classification. The classification accuracy is defined as follows [48]:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$$
where $TP$ denotes true positives (positive samples correctly predicted), $TN$ true negatives (negative samples correctly predicted), $FP$ false positives (negative samples incorrectly predicted as positive), and $FN$ false negatives (positive samples incorrectly predicted as negative).
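For a binary confusion, the formula reduces to the fraction of correct predictions; a small NumPy check with illustrative label vectors:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Overall accuracy; for binary labels equals (TP+TN)/(TP+FP+TN+FN)."""
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))

y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1])
tp = np.sum((y_true == 1) & (y_pred == 1))   # 2 true positives
tn = np.sum((y_true == 0) & (y_pred == 0))   # 2 true negatives
fp = np.sum((y_true == 0) & (y_pred == 1))   # 1 false positive
fn = np.sum((y_true == 1) & (y_pred == 0))   # 1 false negative
assert accuracy(y_true, y_pred) == (tp + tn) / (tp + fp + tn + fn)
```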

3. Validation Experiments and Results

3.1. Platform and Dataset

This paper uses the time–frequency toolbox of MATLAB 2019a to convert the input signal sequences into time–frequency representations, and the TensorFlow 2.2 and Keras 2.3.1 frameworks with Python 3.6 to build the DL networks. An Intel(R) Core(TM) i7-9750H CPU @ 2.60 GHz and an NVIDIA GeForce RTX 2080 GPU serve as the experimental hardware. Our experiments utilize the public RML 2016.10a dataset [49], complemented by data collected from a physical verification platform (Figure 5) employing GNU Radio and a USRP B210, to validate the method's feasibility in real-world conditions. The characteristics of both datasets are summarized in Table 1, with both adhering to a 2 × 128 data format.
RML 2016.10a: The dataset used for training the network models comprises eight digital modulation types across 20 SNR levels, with 50 samples per modulation type per SNR; a further 200 samples per modulation type per SNR level were selected to form the test and evaluation dataset. For the noise robustness validation, this dataset was therefore partitioned into training and test subsets at a 2:8 ratio. Additionally, although the dataset does not encompass all possible high-order modulations (e.g., QAM128/256), it includes a representative set of commonly used and easily confused types (e.g., 8PSK, QAM16, QAM64). This is adequate for investigating the core few-shot learning and noise robustness problem, as confusion among these classes under low-sample conditions remains a significant challenge, as evidenced in our results.
Collected data: The data collection setup, depicted in Figure 5, utilized two software-defined radio platforms for signal transmission, reception, and recording. The parameters were configured as follows: a sampling rate of 600 kHz, a bandwidth of 200 kHz, and a carrier frequency of 650 MHz. At the transmitter, a randomly generated binary data stream was encoded, modulated, and transmitted. The receiver directly stored the captured signals. Subsequent processing in MATLAB involved windowing, normalization, and noise addition to form the measured dataset. This dataset includes seven modulation types across 20 SNR levels (−20 dB to 18 dB), containing 20,000 samples per modulation type per SNR, resulting in a total of 2.8 million samples. To emulate the SNR uncertainty inherent in real-world environments, a small-scale training set was created by randomly selecting 50 samples per modulation type, irrespective of SNR, yielding a total of 350 samples. Meanwhile, the test set for this collected data was constructed using the same methodology as for the RML 2016.10a dataset.
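The text does not detail the noise-addition step used to build the measured dataset; a common recipe for adding AWGN at a target SNR to 2 × 128 IQ frames looks like the following sketch (the function name and the convention of splitting the noise power evenly between I and Q are assumptions, not the authors' exact preprocessing):

```python
import numpy as np

def add_awgn_iq(iq, target_snr_db, rng):
    """Add white Gaussian noise to a (2, N) real array holding I and Q rows."""
    p_sig = np.mean(iq[0]**2 + iq[1]**2)            # complex-baseband signal power
    p_noise = p_sig / 10.0**(target_snr_db / 10.0)  # noise power for target SNR
    noise = rng.normal(scale=np.sqrt(p_noise / 2.0), size=iq.shape)
    return iq + noise

rng = np.random.default_rng(0)
t = np.arange(128)
iq = np.stack([np.cos(2*np.pi*0.1*t), np.sin(2*np.pi*0.1*t)])  # unit-power tone
noisy = add_awgn_iq(iq, -10.0, rng)   # noise-dominated frame, as at -10 dB
```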

3.2. Experimental Settings

To validate the proposed method, the experiments using both a public dataset and a collected dataset mainly consist of two parts:
Part 1: Validation of denoising performance. This part evaluates the denoising performance through quantitative metrics (SNR and MSE). By comparing these metrics between original and denoised signals, we verify that the proposed method effectively addresses the signal-noise separation problem for non-cooperative modulated signals in complex electromagnetic environments.
To further demonstrate that the proposed method preserves the intrinsic characteristics of modulated signals while minimizing quality degradation, we conduct a quantitative comparison among five denoising approaches: proposed few-shot CNRN (F-CNRN), wavelet denoising, smooth denoising, denoising autoencoder based on CNN (DAE-CNN) [50], and denoising autoencoder based on LSTM (DAE-LSTM) [51]. This comparison specifically assesses the preservation of essential signal features. The parameters for the four baseline denoising methods are detailed in the Supplementary Materials (Experimental Data 10).
Part 2: Validation of AMC robustness enhancement. This part verifies the effect of the proposed method on enhancing noise robustness. Modulation classification is performed on noisy signals and denoised signals respectively, and the effectiveness of this work in enhancing the robustness of AMC is evaluated through the classification accuracy.
Additionally, to validate the effectiveness of each module in the proposed method during the offline stage, verification experiments were conducted on the SNR classification module and the data augmentation module using public datasets. The results are provided in the Supplementary Materials (validation of the SNR classification module: Experimental Data 1–3; validation of the data augmentation module: Experimental Data 4–5). The validation of the complex-valued denoising module is presented in Section 3.3.

3.3. Noise Reduction Performance

3.3.1. Verification Based on RML 2016.10a

The SNR and MSE of denoised test signals are computed across the SNR range from −20 dB to 18 dB. By comparing variations in these metrics between the original and denoised signals, we assess the noise suppression capability of the denoising module for few-shot signals.
Figure 6 presents the changes in SNR and MSE for eight types of signals at −20 dB after denoising by F-CNRN. Notably, the SNR of all eight signals increased significantly, with SNR increments exceeding 20 dB, while their MSE decreased by over 6. The evaluation metrics for the GFSK signal exhibit particularly prominent changes, attributable to GFSK’s inherent superior noise immunity compared to other modulation types. Figure 7 illustrates the SNR-MSE comparison for original and denoised QAM64 signals. Analysis of the time–frequency representation reveals that at an SNR of −10 dB, the signal is essentially submerged in noise, obscuring its complete time–frequency characteristics. However, after applying the proposed method, the SNR reaches 1.42 dB, effectively restoring the signal’s time–frequency features, while preserving high-frequency components in the diagram. More comprehensive comparative results of time–frequency diagrams are provided in the Supplementary Materials (Experimental Data 6).
Furthermore, a quantitative comparison of five denoising methods—F-CNRN, wavelet denoising, smooth denoising, DAE-CNN, and DAE-LSTM—is presented in Figure 8. The two DLB denoising autoencoders utilize real-valued network architectures. In Figure 8, the gray and red curves represent the original and denoised BPSK signal, respectively. The results demonstrate that denoising substantially improves SNR for modulated signals from −20 dB to 12 dB. However, signals between 14 dB and 18 dB exhibit negative SNR gains after denoising. Crucially, since denoising in communication modulation signals primarily aims to facilitate subsequent modulation classification, and noise has minimal impact on classification accuracy above 14 dB, this SNR reduction has negligible influence on classification performance. Additionally, MSE decreases significantly across the entire range (−20 dB to 18 dB) after denoising. Higher SNR and lower MSE indicate reduced noise content. Collectively, these metrics confirm that the proposed denoising method effectively removes noise from modulated signals.
Comparing the three DLB denoising methods in Figure 8—F-CNRN, DAE-CNN, and DAE-LSTM—both the SNR and MSE metrics clearly indicate that the proposed F-CNRN significantly outperforms the two real-valued network approaches, further validating the effectiveness of the proposed complex-valued denoising network. Figure 8a distinctly shows that for input SNRs below −6 dB, the output SNR after F-CNRN denoising is markedly higher than that achieved by wavelet denoising, smooth denoising, DAE-CNN, and DAE-LSTM (e.g., improving dramatically from −20 dB to 0 dB). However, for input SNRs above −6 dB, F-CNRN's output SNR falls below that of wavelet and smooth denoising, and for input SNRs exceeding 13 dB it even drops below the original SNR. This occurs because DLB denoising methods converge toward an optimal operating point, exhibiting a smaller spread between their maximum and minimum output SNRs. Similarly, Figure 8b demonstrates that the MSE after F-CNRN denoising is consistently lower than that of the other four methods, underscoring its superior denoising performance.

3.3.2. Verification Based on Collected Data

The denoising performance of the F-CNRN on the collected data is evaluated using the SNR and MSE. A comparative analysis of these metrics for 8PSK signals, original and denoised, is presented in Table 2.
For modulation signals with input SNRs ranging from −20 dB to 10 dB, the output SNR after denoising shows significant improvement. Notably, a signal at −20 dB input achieves an output SNR of 0.11 dB, representing an enhancement of 20.11 dB. Conversely, for signals with input SNRs between 12 dB and 18 dB, the output SNR is marginally lower than the input SNR. However, analysis of the MSE values provides crucial context: the pre-denoising MSE for this high-SNR range is consistently 0.07, which is reduced to 0.01 after denoising. This substantial reduction in MSE indicates that the denoised signal is, in fact, closer to the clean reference signal. The observed stabilization (flattening) of the output SNR in this region reflects this improved signal fidelity. Furthermore, since modulation signals above 12 dB are inherently less susceptible to noise, the slight decrease in SNR post-denoising has a negligible impact on the final modulation classification accuracy.
Moreover, a pronounced reduction in MSE is observed across the entire SNR range from −20 dB to 18 dB after denoising. For instance, the MSE for a −20 dB signal decreases from 0.82 to 0.13, a reduction of 0.69, corresponding to an 84.15% improvement relative to its pre-denoising value. This demonstrates that the deviation between the denoised signal and the clean reference is substantially minimized. In summary, the denoising effectiveness is quantified by two key metrics: higher output SNR and lower MSE indicate less residual noise in the signal. The results presented in Table 2 validate that the proposed denoising method is effective for signals under real-world conditions.
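The quoted reduction follows directly from the Table 2 values:

```python
mse_before, mse_after = 0.82, 0.13                 # -20 dB 8PSK, from Table 2
absolute_drop = mse_before - mse_after             # 0.69
relative_drop = absolute_drop / mse_before * 100.0 # 84.15% of the original MSE
print(round(absolute_drop, 2), round(relative_drop, 2))
```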
Furthermore, Figure 9 illustrates the comparison of SNR and MSE for the original and denoised 8PSK signals, achieved by the proposed method versus four other baseline approaches. Combined with the results from Figure 8 in Section 3.3.1 and Figure 9 in Section 3.3.2, it further corroborates that the proposed denoising method surpasses wavelet denoising, smooth denoising, DAE-CNN, and DAE-LSTM.

3.4. Robustness Enhancement Validation

As illustrated in Figure 10, within the SNR range of −20 to −2 dB, the classification accuracy on both datasets is substantially improved after F-CNRN denoising, with gains of 36.48% and 26.8%, respectively. In the range of 0 to 18 dB, although the enhancement in accuracy is less pronounced compared to that at lower SNRs, noticeable improvements of 6.63% and 9.16% are still achieved. Across the full SNR range from −20 to 18 dB, the overall classification accuracy reaches 74.06% and 88.55%, reflecting improvements of 21.57% and 17.98%, respectively. Furthermore, the increased intensity along the diagonal of the confusion matrix at −8 dB indicates that the proposed method effectively mitigates the adverse effects of noise on few-shot AMC, demonstrating strong noise robustness.
Given the variability in few-shot experimental settings across related studies, this paper conducts a comprehensive comparison based on several key factors, including data sources, number of modulation types, SNR ranges, and dataset sizes. Table 3 summarizes the few-shot AMC performance reported in the literature [13,52,53,54,55,56]. The study in [13] employs the smallest training set, but is limited to only three modulation types, thus lacking generalization capability for robustness comparisons. Although [52] covers 11 modulation types, its overall noise robustness remains limited. Methods in [53,55] achieve relatively high classification accuracy with 11 modulation types and limited samples, yet exhibit insufficient robustness at low SNR levels. While [54] demonstrates stronger robustness in terms of classification accuracy compared to other works, its dataset is an order of magnitude larger, which diminishes the practical relevance of its robustness claims. The approach in [56] employs signals with a length of 1024, but does not adequately address robustness across a wider range of modulation types and SNR conditions.
In contrast, the proposed method achieves higher AMC accuracy with fewer samples, under broader noise conditions, and for more modulation types on both the public and collected datasets. These results further validate the enhanced robustness of the proposed approach in real-world scenarios with varying noise intensities.

3.5. Computation Overhead

The total parameter count of the proposed Algorithm 1 is 1.169 MB. Specifically, the parameter sizes of its constituent modules are as follows: the SNR classification module occupies 489.28 KB, the data augmentation module requires 0.004 KB, and the complex-valued denoising module accounts for 680 KB. It should be noted that the classification and data augmentation modules are employed exclusively during the offline phase. Therefore, when comparing the computational overhead across different methods—which is primarily evaluated based on the online deployment stage—the proposed method’s overhead is dominated by the parameter size of the complex denoising module.
In comparison, the parameter sizes of the four baseline methods—wavelet, smoothing, CNN-DAE, and LSTM-DAE—are 243.71 KB, 87.97 KB, 4.24 KB, and 352.64 KB, respectively. Although our method is more compact than the methods reported in [8] (4.28 MB of parameters) and [57] (1.507 MB of parameters), it requires more memory during operation and exhibits higher space complexity than traditional denoising methods and real-valued network-based approaches. This is attributed to the complex-valued structure of F-CNRN, which increases computational overhead during network inference. It is worth noting, however, that this increase remains within the same order of magnitude. Given the significant improvement in denoising performance, the associated increase in space complexity is considered acceptable. The DLB automatic modulation classifier contains 7.944 KB of parameters and achieves an inference time of 0.009 ms per sample, while the inference time of F-CNRN is 0.514 ms per sample. This meets the low-latency requirements of 5G-enabled Internet of Things applications [58] and satisfies the real-time processing demands of practical deployment.

4. Discussion

4.1. Performance and Advantages

The proposed method employs a modular denoising-and-recognition architecture, which enhances classification robustness and provides flexibility for adapting the classifier to varying requirements, as detailed in the Supplementary Materials (Experimental Dataset 9).
The denoising module in F-CNRN can effectively separate noise from the signal while maintaining the signal quality. Under low-SNR conditions, the signal quality is significantly enhanced, which leads to a noticeable improvement in the subsequent modulation classification accuracy, further meeting the requirement of strong robustness for AMC. In addition, the proposed denoising module causes minimal loss to the quality of high-SNR signals, and the classification results of modulation types for high-SNR signals are not negatively affected.
The proposed method, as a complex-valued network, can comprehensively extract the features of few-shot signal data. Because wireless signals may undergo different spatial polarization modes during transmission, they are generally represented in IQ form; however, networks built directly on this representation often neglect the complex-valued nature of the signals. A complex-valued network better matches this characteristic [45], fully exploiting the correlation between the I and Q components and thereby enhancing the noise robustness of few-shot AMC [59,60,61].

4.2. Limitations

In engineering practice, different preprocessing methods for intercepted signals prior to AMC will affect the performance [62]. However, this paper did not consider the application of actual signal processing during the experiment, so whether the proposed method can achieve the same effect in engineering practice remains to be further verified.
Future work will combine this method with actual signal processing technologies and conduct online testing through software-defined radio platforms [63] to verify its stability and reliability in complex environments. At the same time, efforts should be made to strengthen the integration of signal detection [64,65] and signal sorting [66] to eliminate the unstable effects caused by preprocessing, thereby further improving the robustness of modulation classification.
Furthermore, while the proposed modular architecture of denoising and recognition facilitates easy improvements to the classifier, its computational complexity may still pose challenges for edge device deployment. Future work could focus on designing a lightweight [67], integrated model that combines noise robustness and efficient computation to achieve effective few-shot AMC in resource-constrained environments.
Additionally, this research focuses on general non-cooperative wireless communication scenarios. Future work will investigate AMC in specific scenarios, collect modulation signals with characteristics typical of those environments, and design corresponding validation experiments.

5. Conclusions

This paper proposes a method to enhance noise robustness for few-shot modulated signals. Unsupervised binary classification of high/low-SNR signals is achieved through K-means clustering of time–frequency representations across the −20 dB to 18 dB range. Rotation and CTS operations augment the high-SNR samples, effectively mitigating overfitting in few-shot training. The expanded high-quality samples train a CNRN, which denoises low-SNR signals while extracting discriminative features from complex-valued modulated signals. Experimental results on both the public and collected datasets show that, after denoising, the signal MSE is significantly reduced and the SNR is notably improved, leading to overall accuracy increases of 21.57% and 17.98%, respectively, when validated with the classifier. These findings validate the method's efficacy for few-shot scenarios. Future work will investigate the generalization capability of denoising-enhanced modulation classification to establish cross-scenario applicability.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/electronics15030674/s1. References [68,69,70] are cited in the Supplementary Materials.

Author Contributions

Conceptualization, M.G. and B.Z.; methodology, M.G. and B.Z.; software, M.G.; validation, M.G.; formal analysis, B.Z. and L.W.; investigation, L.W.; resources, X.T.; data curation, M.G.; writing—original draft preparation, M.G.; writing—review and editing, M.G. and B.Z.; visualization, M.G. and B.Z.; supervision, L.W. and X.T.; project administration, X.T. and H.H.; funding acquisition, X.T. and H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China grant number 62027801.

Data Availability Statement

In this paper, the RML 2016.10a dataset and collected data are employed for experimental verification. The RML 2016.10a dataset is a representative dataset for the testing and evaluation of current AMC methods. Readers can obtain the dataset from the author by email (bqzhang@hgd.edu.cn).

Acknowledgments

We would like to acknowledge our colleagues for their wonderful collaboration and patient support. We also thank all the reviewers and editors for their great help and useful suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dobre, O.; Abdi, A.; Bar-Ness, Y.; Su, W. Survey of automatic modulation classification techniques: Classical approaches and new trends. IET Commun. 2007, 1, 137–156. [Google Scholar] [CrossRef]
  2. Zheng, Q.; Tian, X.; Yu, L.; Elhanashi, A.; Saponara, S. Recent Advances in Automatic Modulation Classification Technology: Methods, Results, and Prospects. Int. J. Intell. Syst. 2025, 2025, 4067323. [Google Scholar] [CrossRef]
  3. Wang, J.; Jiao, Y.; Ding, G.; Zhang, M.; Tang, P.; Zhang, D.; Xu, Y. Unsupervised Transformer-Based ClusterGAN and Infinite Gaussian Mixture Models for Modulation Signal Clustering. IEEE Trans. Cogn. Commun. Netw. 2025, 12, 1227–1240. [Google Scholar] [CrossRef]
  4. Hao, X.; Yang, S.; Liu, R.; Feng, Z.; Peng, T.; Huang, B. SMTC-CL: Continuous Learning via Selective Multi-Task Coordination for Adaptive Signal Classification. IEEE Trans. Cogn. Commun. Netw. 2025, 11, 1664–1681. [Google Scholar] [CrossRef]
  5. Zuo, X.; Yang, Y.; Yao, R.; Fan, Y.; Li, L. An Automatic Modulation Recognition Algorithm Based on Time–Frequency Features and Deep Learning with Fading Channels. Remote Sens. 2024, 16, 4550. [Google Scholar] [CrossRef]
  6. Wang, X.; Zhao, Y.; Huang, Z. A Survey of Deep Transfer Learning in Automatic Modulation Classification. IEEE Trans. Cogn. Commun. Netw. 2025, 11, 1357–1381. [Google Scholar] [CrossRef]
  7. Li, X.; Dong, F.; Zhang, S.; Guo, W. A Survey on Deep Learning Techniques in Wireless Signal Recognition. Wirel. Commun. Mob. Comput. 2019, 2019, 5629572. [Google Scholar] [CrossRef]
  8. Shi, Y.; Xu, H.; Zhang, Y.; Qi, Z.; Wang, D. GAF-MAE: A Self-Supervised Automatic Modulation Classification Method Based on Gramian Angular Field and Masked Autoencoder. IEEE Trans. Cogn. Commun. Netw. 2024, 10, 94–106. [Google Scholar] [CrossRef]
  9. Ma, J.; Hu, M.; Chen, X.; Wan, L.; Wang, J. Few-Shot Automatic Modulation Classification via Semi-Supervised Metric Learning and Lightweight Conv-Transformer Model. IEEE Trans. Cogn. Commun. Netw. 2025, 12, 1012–1024. [Google Scholar] [CrossRef]
  10. Xing, H.; Wang, S.; Wang, J.; Mei, L.; Xu, Y.; Zhou, H.; Xu, H.; Jiao, L. PSRNet: Few-Shot Automatic Modulation Classification Under Potential Domain Differences. IEEE Trans. Wirel. Commun. 2025, 24, 371–384. [Google Scholar] [CrossRef]
  11. Chen, T.; Zheng, S.; Qiu, K.; Zhang, L.; Xuan, Q.; Yang, X. Augmenting Radio Signals with Wavelet Transform for Deep Learning-Based Modulation Recognition. IEEE Trans. Cogn. Commun. Netw. 2024, 10, 2029–2044. [Google Scholar] [CrossRef]
  12. Kong, W.; Jiao, X.; Xu, Y.; Zhang, B.; Yang, Q. An Efficient Model for Few-Shot Automatic Modulation Recognition Based on Supervised Contrastive Learning. IEEE Trans. Veh. Technol. 2025, 74, 3533–3538. [Google Scholar] [CrossRef]
  13. Tan, H.; Zhang, Z.; Li, Y.; Shi, X.; Zhou, F. Multi-Scale Feature Fusion and Distribution Similarity Network for Few-Shot Automatic Modulation Classification. IEEE Signal Process. Lett. 2024, 31, 2890–2894. [Google Scholar] [CrossRef]
  14. Zha, Y.; Wang, H.; Shen, Z.; Wang, J. A Few-Shot Modulation Recognition Method Based on Multi-Modal Feature Fusion. IEEE Trans. Veh. Technol. 2024, 73, 10823–10828. [Google Scholar] [CrossRef]
  15. Weng, L.; He, Y.; Peng, J.; Zheng, J.; Li, X. Deep cascading network architecture for robust automatic modulation classification. Neurocomputing 2021, 455, 308–324. [Google Scholar] [CrossRef]
  16. Ghasemzadeh, P.; Hempel, M.; Wang, H.; Sharif, H. GGCNN: An Efficiency-Maximizing Gated Graph Convolutional Neural Network Architecture for Automatic Modulation Identification. IEEE Trans. Wirel. Commun. 2023, 22, 6033–6047. [Google Scholar] [CrossRef]
  17. Shen, Y.; Yang, Y.; Liu, Y.; Wang, N.; Ma, L.; Wang, H.; Li, F. DFCT-net for automatic modulation recognition in UAV communication systems. Neurocomputing 2025, 653, 131200. [Google Scholar] [CrossRef]
  18. Kumar, D.M.; Kumar, K.S.; Durai, A.V.; Vijayakumar, R.; Raju, Y.V. Underwater Wireless Transmission Using EESAR Routing Protocol. In Proceedings of the 2022 6th International Conference on Electronics, Communication and Aerospace Technology, Coimbatore, India, 1–3 December 2022; IEEE: New York, NY, USA, 2022; pp. 621–626. [Google Scholar] [CrossRef]
  19. Kahraman, I.; Köse, A.; Koca, M.; Anarim, E. Age of Information in Internet of Things: A Survey. IEEE Internet Things J. 2024, 11, 9896–9914. [Google Scholar] [CrossRef]
  20. Bai, J.; Liu, X.; Wang, Y.; Xiao, Z.; Chen, F.; Zhou, H.; Jiao, L. Integrating Prior Knowledge and Contrast Feature for Signal Modulation Classification. IEEE Internet Things J. 2024, 11, 21461–21473. [Google Scholar] [CrossRef]
  21. Yashashwi, K.; Sethi, A.; Chaporkar, P. A Learnable Distortion Correction Module for Modulation Recognition. IEEE Wirel. Commun. Lett. 2019, 8, 77–80. [Google Scholar] [CrossRef]
  22. Dong, W.Y.; Yang, S.; Chen, S. Uplink Performance Analysis of Heterogeneous Non-Terrestrial Networks in Harsh Environments: A Novel Stochastic Geometry Model. IEEE Trans. Commun. 2025, 73, 6734–6747. [Google Scholar] [CrossRef]
  23. Rahman, R.; Nguyen, D.C. Improved Modulation Recognition Using Personalized Federated Learning. IEEE Trans. Veh. Technol. 2024, 73, 19937–19942. [Google Scholar] [CrossRef]
  24. Wang, X.; Wang, J.; Lu, X. Modulation Recognition Method for Wireless Signals Based on Joint Neural Networks. IEEE Access 2024, 12, 121712–121722. [Google Scholar] [CrossRef]
  25. Shi, Y.; Xu, H.; Qi, Z.; Zhang, Y.; Wang, D.; Jiang, L. STTMC: A Few-Shot Spatial Temporal Transductive Modulation Classifier. IEEE Trans. Mach. Learn. Commun. Netw. 2024, 2, 546–559. [Google Scholar] [CrossRef]
  26. Jiang, N.; Wang, B. Modulation Recognition of Underwater Acoustic Communication Signals Based on Data Transfer. In Proceedings of the 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 24–26 May 2019; IEEE: New York, NY, USA, 2019; pp. 243–246. [Google Scholar] [CrossRef]
  27. Li, Y.; Wang, B.; Shao, G.; Shao, S. Automatic Modulation Classification for Short Burst Underwater Acoustic Communication Signals Based on Hybrid Neural Networks. IEEE Access 2020, 8, 227793–227809. [Google Scholar] [CrossRef]
  28. Zhang, Z.; Li, Y.; Gao, M. Few-Shot Learning of Signal Modulation Recognition based on Attention Relation Network. In Proceedings of the 2020 28th European Signal Processing Conference (EUSIPCO), Amsterdam, The Netherlands, 18–21 January 2021; IEEE: New York, NY, USA, 2021; pp. 1372–1376. [Google Scholar] [CrossRef]
  29. Wang, H.; Wang, B.; Li, Y. IAFNet: Few-Shot Learning for Modulation Recognition in Underwater Impulsive Noise. IEEE Commun. Lett. 2022, 26, 1047–1051. [Google Scholar] [CrossRef]
  30. Zhang, Z.; Li, Y.; Zhai, Q.; Li, Y.; Gao, M. Few-Shot Learning for Fine-Grained Signal Modulation Recognition Based on Foreground Segmentation. IEEE Trans. Veh. Technol. 2022, 71, 2281–2292. [Google Scholar] [CrossRef]
  31. Tian, X.; Zheng, Q.; Li, B.; Qiao, D.; Yu, K.; Wei, Z.; Li, B.; Jiang, H.; Li, X.; Lin, Y.; et al. A survey on deep learning enabled automatic modulation classification methods: Data representations, model structures, and regularization techniques. Signal Process. 2026, 242, 110444. [Google Scholar] [CrossRef]
  32. Hao, X.; Feng, Z.; Yang, S.; Wang, M.; Jiao, L. Automatic Modulation Classification via Meta-Learning. IEEE Internet Things J. 2023, 10, 12276–12292. [Google Scholar] [CrossRef]
  33. Sun, P.; Su, J.; Wen, Z.; Zhou, Y.; Hong, Z.; Yu, S.; Zhou, H. Boosting Signal Modulation Few-Shot Learning with Pre-Transformation. In Proceedings of the ICASSP 2023—2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar] [CrossRef]
  34. Li, M.; Li, J.; Feng, H. Detection and Recognition of Underwater Acoustic Communication Signal Under Ocean Background Noise. IEEE Access 2024, 12, 149432–149446. [Google Scholar] [CrossRef]
  35. Yi, G.; Hao, X.; Yan, X.; Wang, J.; Dai, J. Automatic Modulation Recognition for Radio Frequency Proximity Sensor Signals Based on Masked Autoencoders and Transfer Learning. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 8700–8712. [Google Scholar] [CrossRef]
  36. Zhan, Y.; Li, X.; Zhan, A.; Wu, C. W-MSTC: A Window Based Multi-Scale Temporal Cross Attention Method for Few-Shot Modulation Recognition. IEEE Commun. Lett. 2026, 30, 402–406. [Google Scholar] [CrossRef]
  37. Benvenuto, N.; Piazza, F. On the complex backpropagation algorithm. IEEE Trans. Signal Process. 1992, 40, 967–969. [Google Scholar] [CrossRef]
  38. Virtue, P.; Yu, S.X.; Lustig, M. Better than real: Complex-valued neural nets for MRI fingerprinting. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; IEEE: New York, NY, USA, 2017; pp. 3953–3957. [Google Scholar] [CrossRef]
  39. O’Shea, T.J.; Corgan, J.; Clancy, T.C. Convolutional Radio Modulation Recognition Networks. In Engineering Applications of Neural Networks; Springer International Publishing: Cham, Switzerland, 2016; pp. 213–226. [Google Scholar] [CrossRef]
  40. Zhang, J.; Hou, C.; Lin, Y.; Zhang, J.; Xu, Y.; Chen, S. Frequency Hopping Signal Modulation Recognition Based on Time-Frequency Analysis. In Proceedings of the 2021 IEEE 18th International Conference on Mobile Ad Hoc and Smart Systems (MASS), Denver, CO, USA, 4–7 October 2021; IEEE: New York, NY, USA, 2021; pp. 46–52. [Google Scholar] [CrossRef]
  41. Zhang, H.; Li, L.; Pan, H.; Li, W.; Tian, S. Time-Frequency Aliased Signal Identification Based on Multimodal Feature Fusion. Sensors 2024, 24, 2558. [Google Scholar] [CrossRef] [PubMed]
  42. Hu, Z.; Huang, J.; Hu, D.; Wang, Z. A Time-Frequency Image Denoising Method via Neural Networks for Radar Waveform Recognition. IEEE Commun. Lett. 2023, 27, 150–154. [Google Scholar] [CrossRef]
  43. Huang, H.; Gui, G.; Gacanin, H.; Yuen, C.; Sari, H.; Adachi, F. Deep Regularized Waveform Learning for Beam Prediction with Limited Samples in Non-Cooperative mmWave Systems. IEEE Trans. Veh. Technol. 2023, 72, 9614–9619. [Google Scholar] [CrossRef]
  44. Zavala-Mondragón, L.A.; de With, P.H.; van der Sommen, F. A Signal Processing Interpretation of Noise-Reduction Convolutional Neural Networks: Exploring the mathematical formulation of encoding-decoding CNNs. IEEE Signal Process. Mag. 2023, 40, 38–63. [Google Scholar] [CrossRef]
  45. Lee, C.; Hasegawa, H.; Gao, S. Complex-Valued Neural Networks: A Comprehensive Survey. IEEE/CAA J. Autom. Sin. 2022, 9, 1406–1426. [Google Scholar] [CrossRef]
  46. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  47. Wang, W.; Gan, L. Radio Frequency Fingerprinting Improved by Statistical Noise Reduction. IEEE Trans. Cogn. Commun. Netw. 2022, 8, 1444–1452. [Google Scholar] [CrossRef]
  48. Liang, Z.; Tao, M.; Xie, J.; Yang, X.; Wang, L. A Radio Signal Recognition Approach Based on Complex-Valued CNN and Self-Attention Mechanism. IEEE Trans. Cogn. Commun. Netw. 2022, 8, 1358–1373. [Google Scholar] [CrossRef]
  49. O’Shea, T.J. Radio Machine Learning Dataset Generation with GNU Radio. In Proceedings of the 6th GNU Radio Conference, Boulder, CO, USA, 12–16 September 2016; pp. 69–74. [Google Scholar]
  50. Kim, H.; Shahid, A.; Fontaine, J.; De Poorter, E.; Moerman, I.; Nam, H. Automatic Modulation Classification using Relation Network with Denoising Autoencoder. In Proceedings of the 2022 13th International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Republic of Korea, 19–21 October 2022; IEEE: New York, NY, USA, 2022; pp. 485–488. [Google Scholar] [CrossRef]
  51. Xu, T.; Ma, Y. Signal Automatic Modulation Classification and Recognition in View of Deep Learning. IEEE Access 2023, 11, 114623–114637. [Google Scholar] [CrossRef]
  52. Wang, Y.; Bai, J.; Xiao, Z.; Zhou, H.; Jiao, L. MsmcNet: A Modular Few-Shot Learning Framework for Signal Modulation Classification. IEEE Trans. Signal Process. 2022, 70, 3789–3801. [Google Scholar] [CrossRef]
  53. Li, L.; Huang, J.; Cheng, Q.; Meng, H.; Han, Z. Automatic Modulation Recognition: A Few-Shot Learning Method Based on the Capsule Network. IEEE Wirel. Commun. Lett. 2021, 10, 474–477. [Google Scholar] [CrossRef]
  54. Zhou, Q.; Zhang, R.; Mu, J.; Zhang, H.; Zhang, F.; Jing, X. AMCRN: Few-Shot Learning for Automatic Modulation Classification. IEEE Commun. Lett. 2022, 26, 542–546. [Google Scholar] [CrossRef]
  55. Tang, Z.; Tao, M.; Su, J.; Gong, Y.; Fan, Y.; Li, T. Data Augmentation for Signal Modulation Classification using Generative Adverse Network. In Proceedings of the 2021 IEEE 4th International Conference on Electronic Information and Communication Technology (ICEICT), Xi’an, China, 18–20 August 2021; IEEE: New York, NY, USA, 2021; pp. 450–453. [Google Scholar] [CrossRef]
56. Feng, S.; Wang, Y.; Wen, Z.; Xu, L.; Yan, M. Fine-Grained Transductive Prototypical Network Based Few-Shot Signal Modulation Classification Using Coarse Labels. IEEE Trans. Cogn. Commun. Netw. 2025, 12, 2189–2204. [Google Scholar] [CrossRef]
  57. Zheng, G.; Zang, B.; Yang, P.; Zhang, W.; Li, B. FE-SKViT: A Feature-Enhanced ViT Model with Skip Attention for Automatic Modulation Recognition. Remote Sens. 2024, 16, 4204. [Google Scholar] [CrossRef]
  58. Rajendran, S.; Meert, W.; Giustiniano, D.; Lenders, V.; Pollin, S. Deep Learning Models for Wireless Signal Classification with Distributed Low-Cost Spectrum Sensors. IEEE Trans. Cogn. Commun. Netw. 2018, 4, 433–445. [Google Scholar] [CrossRef]
  59. Georgiou, G.; Koutsougeras, C. Complex domain backpropagation. IEEE Trans. Circuits Syst. II 1992, 39, 330–334. [Google Scholar] [CrossRef]
  60. Xiao, C.; Yang, S.; Feng, Z. Complex-Valued Depthwise Separable Convolutional Neural Network for Automatic Modulation Classification. IEEE Trans. Instrum. Meas. 2023, 72, 1–10. [Google Scholar] [CrossRef]
  61. Li, W.; Deng, W.; Wang, K.; You, L.; Huang, Z. A Complex-Valued Transformer for Automatic Modulation Recognition. IEEE Internet Things J. 2024, 11, 22197–22207. [Google Scholar] [CrossRef]
  62. Peng, S.; Sun, S.; Yao, Y.D. A Survey of Modulation Classification Using Deep Learning: Signal Representation and Data Preprocessing. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 7020–7038. [Google Scholar] [CrossRef]
  63. Hitefield, S.D.; Ogorzalek, J.; Leffke, Z.; Black, J.T. A Software Radio Based Satellite Communications Simulator for Small Satellites Using GNU Radio. In Proceedings of the 2019 IEEE Aerospace Conference, Big Sky, MT, USA, 2–9 March 2019; IEEE: New York, NY, USA, 2019; pp. 1–12. [Google Scholar] [CrossRef]
  64. Liu, W.; Liu, J.; Hao, C.; Gao, Y.; Wang, Y.L. Multichannel adaptive signal detection: Basic theory and literature review. Sci. China Inf. Sci. 2022, 65, 121301. [Google Scholar] [CrossRef]
  65. Patel, A.; Kosko, B. Optimal Noise Benefits in Neyman–Pearson and Inequality-Constrained Statistical Signal Detection. IEEE Trans. Signal Process. 2009, 57, 1655–1669. [Google Scholar] [CrossRef]
66. Fu, W.; Hu, Z.; Li, D. A Sorting Algorithm for Multiple Frequency-Hopping Signals in Complex Electromagnetic Environments. Circuits Syst. Signal Process. 2019, 39, 245–267. [Google Scholar] [CrossRef]
  67. Cheng, H.; Zhang, M.; Shi, J.Q. A Survey on Deep Neural Network Pruning: Taxonomy, Comparison, Analysis, and Recommendations. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 10558–10578. [Google Scholar] [CrossRef] [PubMed]
  68. Calinski, T.; Harabasz, J. A Dendrite Method for Cluster Analysis. Commun. Stat. Simul. Comput. 1974, 3, 1–27. [Google Scholar] [CrossRef]
  69. Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 1987, 20, 53–65. [Google Scholar] [CrossRef]
  70. Davies, D.L.; Bouldin, D.W. A Cluster Separation Measure. IEEE Trans. Pattern Anal. Mach. Intell. 1979, PAMI-1, 224–227. [Google Scholar] [CrossRef]
Figure 1. Overall structure of the proposed few-shot modulation classification algorithm with enhanced noise robustness. Offline Stage: (1) conversion of IQ signals into time–frequency representations, (2) feature extraction from these representations using CNN, (3) K-means clustering of the extracted features followed by SNR classification of the clusters using image information entropy, and (4) application of rotation and CTS data augmentation to samples classified into the high-SNR category, followed by training of the denoising module CNRN using the augmented samples. Online Stage: incoming signals are preprocessed by the trained CNRN module and subsequently fed into the DLB classifier for AMC.
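The rotation and cyclic time-shifting (CTS) augmentations named in the caption can be sketched as below. This is a minimal illustration, not the paper's exact configuration: the rotation angles and shift lengths used here are assumptions chosen for demonstration.

```python
import numpy as np

def rotate(iq: np.ndarray, angle_rad: float) -> np.ndarray:
    """Rotate a complex IQ sequence in the constellation plane."""
    return iq * np.exp(1j * angle_rad)

def cyclic_time_shift(iq: np.ndarray, shift: int) -> np.ndarray:
    """Cyclic time shift (CTS): wrap the last `shift` samples to the front."""
    return np.roll(iq, shift)

def augment(iq: np.ndarray,
            angles=(np.pi / 2, np.pi, 3 * np.pi / 2),  # assumed angles
            shifts=(32, 64)):                          # assumed shift lengths
    """Return augmented copies of one IQ sample via rotation and CTS."""
    return ([rotate(iq, a) for a in angles]
            + [cyclic_time_shift(iq, s) for s in shifts])
```

Both operations are label-preserving: rotating the constellation does not change the modulation type, and a cyclic shift only re-anchors the symbol timing, which is why they are suitable for enlarging a scarce high-SNR sample set.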
Figure 2. Unsupervised SNR classification module based on time–frequency representations: The limited IQ signals are first transformed into time–frequency representations using the CWD. Then, features are extracted from these time–frequency representations using a CNN. Subsequently, K-means clustering is applied to the extracted features. Finally, the image information entropy is calculated separately for each of the two resulting clusters, and the low-SNR and high-SNR categories are distinguished by comparing the magnitudes of their respective image information entropy values.
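The entropy comparison in the final step can be sketched as the Shannon entropy of a time-frequency image's gray-level histogram. The bin count, the synthetic images, and the assumption that noise spreads energy across many time-frequency bins (giving the low-SNR cluster the higher mean entropy) are illustrative choices here, not values taken from the paper.

```python
import numpy as np

def image_entropy(img: np.ndarray, bins: int = 64) -> float:
    """Shannon entropy H = -sum(p_i * log2(p_i)) of the gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

# Noise spreads energy over many TF bins -> broad histogram -> high entropy.
rng = np.random.default_rng(0)
noisy_tf = rng.random((64, 64))       # stand-in for a low-SNR CWD image
clean_tf = np.zeros((64, 64))
clean_tf[30:34, :] = 1.0              # stand-in for a concentrated signal ridge
low_snr_cluster = image_entropy(noisy_tf) > image_entropy(clean_tf)  # True
```

After K-means produces two clusters, computing this entropy per cluster and labeling the higher-entropy one as low-SNR requires no SNR labels, which is what makes the module unsupervised.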
Figure 3. Information distribution of time–frequency representations for: (a) noisy BPSK signal (SNR = −4 dB); (b) denoised BPSK signal.
Figure 4. The training/testing process and network architecture of CNRN.
Figure 5. Data collection platform.
Figure 6. Variations in SNR and MSE of the eight types of denoised signals, taking BPSK as an example; more comprehensive results are provided in the Supplementary Materials (Experimental Data 7).
Figure 7. Comparison of original and denoised QAM64 signals in terms of SNR and MSE.
Figure 8. Comparison of (a) SNR and (b) MSE for BPSK-modulated signals after denoising with five approaches: smooth, wavelet, DAE-CNN, DAE-LSTM, and F-CNRN.
Figure 9. Comparison of (a) SNR and (b) MSE for 8PSK-modulated signals after denoising with five approaches: smooth, wavelet, DAE-CNN, DAE-LSTM, and F-CNRN.
Figure 10. A comparison of accuracy before and after denoising based on F-CNRN for few-shot modulation classification: (a) RML 2016.10a; (b) collected data. For each group: accuracy versus SNR range (left); confusion matrix for eight modulation types at SNR = −8 dB before denoising (middle); post-denoising confusion matrix at identical SNR (right). Comprehensive experimental results are provided in Supplementary Materials (Experimental Data 8).
Table 1. Characteristics of different datasets.
Dataset          Types                                               Use     Sample Size
RML 2016.10a     8PSK, BPSK, CPFSK, GFSK, PAM4, QAM16, QAM64, QPSK   Train   8000 (50 × 20 × 8)
                                                                     Test    32,000 (200 × 20 × 8)
Collected data   2ASK, 2FSK, 8PSK, QPSK, QAM16, QAM64, GMSK          Train   350 (50 × 7)
                                                                     Test    28,000 (200 × 20 × 7)
Table 2. Comparison of original and denoised 8PSK signals in terms of SNR and MSE.
         SNR (dB)               MSE
Original   Denoised    Original   Denoised
−20        0.11        0.82       0.13
−18        0.16        0.55       0.13
−16        0.24        0.37       0.13
−14        0.40        0.26       0.13
−12        0.61        0.19       0.13
−10        0.90        0.14       0.13
−8         1.31        0.11       0.12
−6         1.90        0.10       0.11
−4         2.64        0.09       0.09
−2         3.53        0.08       0.07
0          4.55        0.08       0.06
2          5.70        0.07       0.04
4          6.96        0.07       0.03
6          8.20        0.07       0.02
8          9.44        0.07       0.02
10         10.61       0.07       0.02
12         11.76       0.07       0.01
14         12.56       0.07       0.01
16         13.24       0.07       0.01
18         13.76       0.07       0.01
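As a quick check on how the columns of Table 2 relate, the per-row SNR gain and relative MSE reduction can be computed directly; the values below are read from the −20 dB row (8PSK).

```python
# Values from the -20 dB row of Table 2 (8PSK).
orig_snr_db, den_snr_db = -20.0, 0.11
orig_mse, den_mse = 0.82, 0.13

snr_gain_db = den_snr_db - orig_snr_db              # SNR improvement in dB
mse_reduction = 100.0 * (1.0 - den_mse / orig_mse)  # relative MSE reduction in %
print(f"{snr_gain_db:.2f} dB gain, {mse_reduction:.1f}% MSE reduction")
# prints: 20.11 dB gain, 84.1% MSE reduction
```

These 8PSK figures are close to, but not identical with, the headline numbers in the abstract, which are reported for a different modulation type.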
Table 3. Few-shot classification accuracy of different methods.
                   Experimental Data                                         Accuracy (%)
Method             Source          Type       SNR (dB)    Training Set Size  Low 1   High 2   Total   Maximum
MS2F-DS [13]       RML 2016.10a    3 types    [−20, 18]   300                50.49   95.51    73.00   95.62
MsmcNet [52]       RML 2016.10a    11 types   [−20, 6]    10,780             31.29   80.03    45.21   82.16
AMR-CapsNet [53]   RML 2016.04c    11 types   [−6, 12]    4000               57.07   83.04    75.25   89.53
AMCRN [54]         RML 2016.10a    11 types   [−20, 18]   120,000            60.70   91.97    76.34   92.43
GAN + CNN [55]     RML 2016.10a    11 types   [−10, 18]   11,000             56.22   86.11    76.15   86.60
FTPNet [56]        RML 2018.01a    4 types    [−4, 8]     10,240             46.55   74.94    66.83   82.71
Proposed           RML 2016.10a    8 types    [−20, 18]   6400               64.54   83.58    74.06   85.58
Proposed           Collected data  7 types    [−20, 18]   5600               82.55   94.55    88.55   96.10
1 Low denotes the SNR range below 0 dB. 2 High denotes the SNR range at or above 0 dB. Bold values mark the best result in each column.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Gao, M.; Zhang, B.; Wang, L.; Tang, X.; Huan, H. Enhancing Noise Robustness in Few-Shot Automatic Modulation Classification via Complex-Valued Autoencoders. Electronics 2026, 15, 674. https://doi.org/10.3390/electronics15030674
