Search Results (8)

Search Parameters:
Keywords = I/Q fusion network

22 pages, 9442 KiB  
Article
A Novel Approach for Robust Automatic Modulation Recognition Based on Reversible Column Networks
by Dan Jing, Tao Xu, Liang Han, Hongfei Yin, Liangchao Li, Yan Zhang, Ming Li, Mian Pan and Liang Guo
Electronics 2025, 14(3), 618; https://doi.org/10.3390/electronics14030618 - 5 Feb 2025
Viewed by 971
Abstract
Automatic Modulation Recognition (AMR) is a key component of intelligent wireless communication with significant military and civilian value, and there is an urgent need for algorithms that can quickly and effectively identify the modulation type of a signal. However, existing models often suffer from issues such as neglecting the correlation between the I/Q components of a signal, poor feature extraction capability, and difficulty in balancing detection performance against computational resource usage. To address these issues, this article proposes an automatic modulation classification method based on convolutional neural networks (CNNs), OD_SERCNET. To prevent feature loss or the compression of useful features, a reversible column network (REVCOL) is used as the backbone, ensuring that the overall information remains unchanged when features are decoupled. At the same time, a novel I/Q channel fusion network is designed to preprocess the input signal, fully exploiting the correlation between the I/Q components of the same signal and improving the network's feature extraction ability. In addition, to improve the network's ability to capture global information, the original reversible fusion module is enhanced with an effective attention mechanism. Finally, the method is validated on several datasets; the simulation results show that the average accuracy of OD_SERCNET improves by 1–10% over other SOTA models, and an exploration of the optimal number of subnetworks yields a better balance between accuracy and computational resource usage.
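The I/Q channel fusion idea described in this abstract can be illustrated with a minimal sketch: a learnable pointwise (1x1) linear map that mixes the I and Q channels at every time step before the backbone sees them. The weight shape and initialization below are assumptions for illustration, not OD_SERCNET's actual design.

```python
import numpy as np

def iq_channel_fusion(iq, w=None, rng=None):
    """Mix the I and Q channels of a signal batch with a pointwise
    (1x1) linear map, a minimal stand-in for an I/Q channel fusion
    preprocessing stage. iq: (batch, 2, length)."""
    rng = np.random.default_rng(0) if rng is None else rng
    if w is None:
        w = rng.standard_normal((2, 2)) * 0.1   # hypothetical init
    # einsum applies the same 2x2 channel mix at every time step
    return np.einsum('oc,bct->bot', w, iq)

signal = np.random.default_rng(1).standard_normal((4, 2, 128))
out = iq_channel_fusion(signal)                 # shape (4, 2, 128)
```

In a trained network `w` would be a learned parameter; the point of the sketch is only that the two channels are combined jointly rather than processed independently.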

24 pages, 2630 KiB  
Article
The Research of Intra-Pulse Modulated Signal Recognition of Radar Emitter under Few-Shot Learning Condition Based on Multimodal Fusion
by Yunhao Liu, Sicun Han, Chengjun Guo, Jiangyan Chen and Qing Zhao
Electronics 2024, 13(20), 4045; https://doi.org/10.3390/electronics13204045 - 14 Oct 2024
Cited by 1 | Viewed by 1817
Abstract
Radar radiation source recognition is critical for the reliable operation of radar communication systems. However, in increasingly complex electromagnetic environments, traditional identification methods face significant limitations. These methods often struggle with high noise levels and diverse modulation types, making it difficult to maintain accuracy, especially when the Signal-to-Noise Ratio (SNR) is low or the available training data are limited. These difficulties are intensified by the need to generalize in environments dominated by noisy, low-quality signal samples while only a limited number of high-quality training samples are available. To address these issues, this paper proposes a novel approach utilizing Model-Agnostic Meta-Learning (MAML) to enhance model adaptability in few-shot learning scenarios, allowing the model to learn quickly from limited data and optimize its parameters effectively. Furthermore, a multimodal fusion neural network, DCFANet, is designed, incorporating residual blocks, squeeze-and-excitation blocks, and a multi-scale CNN, to fuse I/Q waveform data and time-frequency image data for more comprehensive feature extraction. The model enables more robust signal recognition, even when the signal quality is severely degraded by noise or when only a few examples of a signal type are available. Testing on 13 intra-pulse modulated signals in an Additive White Gaussian Noise (AWGN) environment across SNRs ranging from −20 to 10 dB demonstrated the approach's effectiveness. In particular, under a 5-way 5-shot setting, the model achieves high classification accuracy even at −10 dB SNR. This research underscores the model's ability to address the key challenges of radar emitter signal recognition in low-SNR and data-scarce conditions, demonstrating its strong adaptability and effectiveness in complex, real-world electromagnetic environments.
(This article belongs to the Special Issue Digital Signal Processing and Wireless Communication)
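The MAML loop the abstract relies on can be sketched on a toy problem: a scalar linear model adapted per task with one inner gradient step, then meta-updated on held-out query data. This is a first-order toy illustration under assumed task definitions, not the paper's DCFANet training setup.

```python
import numpy as np

def maml_step(theta, tasks, inner_lr=0.05, outer_lr=0.1):
    """One first-order MAML update for the scalar model y = theta * x
    with squared loss. tasks: list of (x_support, y_support,
    x_query, y_query) arrays, mimicking a few-shot episode."""
    meta_grad = 0.0
    for xs, ys, xq, yq in tasks:
        # inner loop: one gradient step on the support set
        g_inner = np.mean(2 * (theta * xs - ys) * xs)
        theta_adapted = theta - inner_lr * g_inner
        # outer gradient evaluated on the query set
        meta_grad += np.mean(2 * (theta_adapted * xq - yq) * xq)
    return theta - outer_lr * meta_grad / len(tasks)

rng = np.random.default_rng(0)
tasks = []
for true_w in (1.5, 2.0, 2.5):          # hypothetical task family
    x = rng.standard_normal(10)
    tasks.append((x[:5], true_w * x[:5], x[5:], true_w * x[5:]))

theta = 0.0
for _ in range(50):
    theta = maml_step(theta, tasks)      # theta drifts toward ~2.0
```

The meta-parameter settles near the center of the task family, which is exactly the property that makes one or two adaptation steps enough at test time on a new task.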

16 pages, 11167 KiB  
Article
AbFTNet: An Efficient Transformer Network with Alignment before Fusion for Multimodal Automatic Modulation Recognition
by Meng Ning, Fan Zhou, Wei Wang, Shaoqiang Wang, Peiying Zhang and Jian Wang
Electronics 2024, 13(18), 3725; https://doi.org/10.3390/electronics13183725 - 20 Sep 2024
Viewed by 1627
Abstract
Multimodal automatic modulation recognition (MAMR) has emerged as a prominent research area. The effective fusion of features from different modalities is crucial for MAMR tasks: an effective multimodal fusion mechanism should maximize the extraction and integration of complementary information. Recently, fusion methods based on cross-modal attention have shown high performance. However, they overlook the differences in information intensity between modalities and suffer from quadratic complexity. To this end, we propose an efficient Alignment before Fusion Transformer Network (AbFTNet) based on in-phase/quadrature (I/Q) data and the Fractional Fourier Transform (FRFT). Specifically, we first align and correlate the feature representations of the individual modalities to achieve mutual information maximization; the single-modality feature representations are obtained using the self-attention mechanism of the Transformer. Then, we design an efficient cross-modal aggregation promoting (CAP) module: by designing an aggregation center, we integrate the two modalities to achieve adaptive complementary learning of modal features. This operation bridges the gap in information intensity between modalities, enabling fair interaction. To verify the effectiveness of the proposed methods, we conduct experiments on the RML2016.10a dataset. The experimental results show that multimodal fusion features significantly outperform single-modal features in classification accuracy across different signal-to-noise ratios (SNRs). Compared to other methods, AbFTNet achieves an average accuracy of 64.59%, a 1.36% improvement over the TLDNN method, reaching the state of the art.
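The cross-modal attention baseline this abstract compares against can be sketched as scaled dot-product attention where one modality queries the other. This naive version still has the quadratic cost the abstract criticizes; AbFTNet's CAP module with its aggregation center is designed to avoid it, and is not reproduced here.

```python
import numpy as np

def cross_modal_attention(x_iq, x_frft):
    """Scaled dot-product attention with queries from the I/Q branch
    and keys/values from the FRFT branch. Token counts and feature
    width are illustrative; both inputs are (tokens, dim)."""
    d = x_iq.shape[-1]
    scores = x_iq @ x_frft.T / np.sqrt(d)          # (n_iq, n_frft)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ x_frft                        # (n_iq, dim)

a = np.random.default_rng(0).standard_normal((6, 16))   # I/Q tokens
b = np.random.default_rng(1).standard_normal((8, 16))   # FRFT tokens
fused = cross_modal_attention(a, b)                     # (6, 16)
```

The `scores` matrix is where the quadratic (n_iq x n_frft) cost lives, which is why aggregation-center designs route both modalities through a small fixed set of tokens instead.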

22 pages, 3394 KiB  
Article
Multi-View and Multimodal Graph Convolutional Neural Network for Autism Spectrum Disorder Diagnosis
by Tianming Song, Zhe Ren, Jian Zhang and Mingzhi Wang
Mathematics 2024, 12(11), 1648; https://doi.org/10.3390/math12111648 - 24 May 2024
Cited by 2 | Viewed by 2107
Abstract
Autism Spectrum Disorder (ASD) presents significant diagnostic challenges due to its complex, heterogeneous nature. This study explores a novel approach to enhance the accuracy and reliability of ASD diagnosis by integrating resting-state functional magnetic resonance imaging with demographic data (age, gender, and IQ). The approach builds on and improves the spectral graph convolutional neural network (GCN), introducing a multi-view attention fusion module to extract useful information from different views. The graph's edges are informed by demographic data: an edge-building network computes weights grounded in demographic information, thereby strengthening inter-subject correlation. To tackle the challenges of oversmoothing and neighborhood explosion inherent in deep GCNs, the study introduces DropEdge regularization and residual connections, augmenting feature diversity and model generalization. The proposed method is trained and evaluated on the ABIDE-I and ABIDE-II datasets. The experimental results underscore the potential of integrating multi-view and multimodal data to advance the diagnostic capabilities of GCNs for ASD.
(This article belongs to the Special Issue Network Biology and Machine Learning in Bioinformatics)
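The DropEdge regularization mentioned in the abstract has a compact core: on each training pass, randomly delete a fraction of edges from the graph's adjacency matrix. A minimal sketch for an undirected graph follows; the paper's GCN layers and normalization are not reproduced.

```python
import numpy as np

def drop_edge(adj, drop_rate=0.2, rng=None):
    """Randomly remove a fraction of edges from a symmetric adjacency
    matrix (DropEdge). Edits only the upper triangle, then mirrors it,
    so the graph stays undirected."""
    rng = np.random.default_rng() if rng is None else rng
    upper = np.triu(adj, k=1)                    # one copy of each edge
    mask = rng.random(upper.shape) >= drop_rate  # keep with prob 1 - p
    kept = upper * mask
    return kept + kept.T                         # re-symmetrize

adj = np.ones((5, 5)) - np.eye(5)                # toy complete graph
sparser = drop_edge(adj, drop_rate=0.5, rng=np.random.default_rng(0))
```

Resampling the mask every epoch is what gives the regularizing effect: each pass trains on a slightly different neighborhood structure, which counteracts oversmoothing in deep GCNs.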

12 pages, 26648 KiB  
Article
An Efficient and Lightweight Model for Automatic Modulation Classification: A Hybrid Feature Extraction Network Combined with Attention Mechanism
by Zhao Ma, Shengliang Fang, Youchen Fan, Gaoxing Li and Haojie Hu
Electronics 2023, 12(17), 3661; https://doi.org/10.3390/electronics12173661 - 30 Aug 2023
Cited by 6 | Viewed by 2103
Abstract
This paper proposes a hybrid feature extraction convolutional neural network combined with a channel attention mechanism (HFECNET-CA) for automatic modulation recognition (AMR). Firstly, we designed a hybrid feature extraction backbone network. Three different forms of convolution kernels are used on three branches to extract features from the original I/Q sequence; the differently shaped kernels learn the spatiotemporal features of the original signal from different "perspectives", and the output feature maps of the three branches are fused along the channel dimension to obtain a multi-domain mixed feature map. Then, the deep features of the signal are extracted by stacking multiple convolution layers in the time domain. Secondly, a plug-and-play channel attention module is constructed, which can be embedded into any feature extraction layer to give higher weight to the more valuable channels in the output feature map, thereby correcting the output features. The experimental results on the RadioML2016.10A dataset show that the designed HFECNET-CA has higher recognition accuracy and fewer trainable parameters than other networks. Across the 20 SNR levels, the average recognition accuracy reached 63.92%, and the highest recognition accuracy reached 93.64%.
(This article belongs to the Special Issue Machine Learning for Radar and Communication Signal Processing)
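The plug-and-play channel attention module described above follows the general squeeze-and-excitation pattern: global average pooling, a small bottleneck, and a sigmoid gate that reweights channels. The layer sizes and reduction ratio below are assumptions, not HFECNET-CA's published configuration.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention for a feature
    map of shape (channels, length): global average pool, ReLU
    bottleneck, sigmoid gate, then per-channel reweighting."""
    squeeze = feat.mean(axis=1)                    # (C,) global avg pool
    hidden = np.maximum(w1 @ squeeze, 0.0)         # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # weights in (0, 1)
    return feat * gate[:, None]                    # reweight channels

rng = np.random.default_rng(0)
C, r = 8, 2                                        # assumed reduction r
feat = rng.standard_normal((C, 64))
out = channel_attention(feat,
                        rng.standard_normal((C // r, C)),
                        rng.standard_normal((C, C // r)))
```

Because the gate is computed from the feature map itself and multiplies it elementwise per channel, the module can be dropped after any convolutional layer without changing tensor shapes.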

22 pages, 7107 KiB  
Article
A Multi-Modal Modulation Recognition Method with SNR Segmentation Based on Time Domain Signals and Constellation Diagrams
by Ruifeng Duan, Xinze Li, Haiyan Zhang, Guoting Yang, Shurui Li, Peng Cheng and Yonghui Li
Electronics 2023, 12(14), 3175; https://doi.org/10.3390/electronics12143175 - 21 Jul 2023
Cited by 9 | Viewed by 2362
Abstract
Deep-learning-based automatic modulation recognition (AMR) has recently attracted significant interest due to its high recognition accuracy and the lack of a need to manually set classification standards. However, it is extremely challenging to achieve high recognition accuracy in increasingly complex channel environments while keeping complexity in check. To address this issue, we propose a multi-modal AMR neural network model with SNR segmentation called M-LSCANet, which integrates an SNR segmentation strategy, lightweight residual stacks, skip connections, and an attention mechanism. In the proposed model, we use time domain I/Q data and constellation diagram data jointly to extract signal features only in the medium and high signal-to-noise ratio (SNR) regions; for the low SNR region, only I/Q signals are used. This is because constellation diagrams are very recognizable at medium and high SNRs, which helps distinguish high-order modulations, whereas in the low SNR region the excessive similarity and blurring of constellations caused by heavy noise would seriously interfere with modulation recognition, resulting in performance loss. Notably, the proposed method uses lightweight residual stacks and rich skip connections, so that more initial information is retained to learn the constellation diagram features and to extract time domain features from shallow to deep, at moderate complexity. Additionally, after feature fusion, we adopt the convolutional block attention module (CBAM) to reweigh both the channel and spatial domains, further improving the model's ability to mine signal characteristics. As a result, the proposed approach significantly improves the overall recognition accuracy. The experimental results on the RadioML 2016.10B public dataset, with SNR ranging from −20 dB to 18 dB, show that the proposed M-LSCANet outperforms existing methods in classification accuracy, achieving 93.4% and 95.8% at 0 dB and 12 dB, respectively, improvements of 2.7% and 2.0% over TMRN-GLU. Moreover, the proposed model has a moderate parameter count compared to state-of-the-art methods.
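The SNR segmentation strategy reduces to a routing decision: below a threshold, feed the network I/Q data alone; above it, fuse I/Q with the constellation diagram. The threshold value below is an illustrative assumption, not the paper's published cut-off.

```python
def select_modalities(snr_db, threshold_db=-4.0):
    """Route inputs by estimated SNR, mirroring an SNR-segmentation
    strategy: below the threshold the constellation diagram is too
    blurred by noise to help, so only I/Q features are used."""
    if snr_db < threshold_db:
        return ("iq",)
    return ("iq", "constellation")
```

A usage example: `select_modalities(-10.0)` yields `("iq",)`, while `select_modalities(6.0)` yields both branches, so the constellation branch (and its cost) is only paid where it is informative.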

13 pages, 2704 KiB  
Article
Automatic Modulation Recognition Based on Deep-Learning Features Fusion of Signal and Constellation Diagram
by Hui Han, Zhijian Yi, Zhigang Zhu, Lin Li, Shuaige Gong, Bin Li and Mingjie Wang
Electronics 2023, 12(3), 552; https://doi.org/10.3390/electronics12030552 - 20 Jan 2023
Cited by 23 | Viewed by 5260
Abstract
In signal communication based on a non-cooperative communication system, the receiver is an unlicensed third-party communication terminal, and the modulation parameters of the transmitted signal cannot be predicted in advance. After the RF signal passes through the RF band-pass filter, low-noise amplifier, and image rejection filter, the intermediate frequency signal is obtained by down-conversion, and the IQ signal is then obtained in the baseband by using the intermediate frequency band-pass filter and a further down-conversion. In this process, noise and signal frequency offset are inevitably introduced. As the basis of subsequent analysis and interpretation, modulation recognition has important research value in this environment, and the introduction of deep learning brings new feature mining tools. Based on this, this paper proposes a signal modulation recognition method based on multi-feature fusion and constructs a deep learning network with a double-branch structure to extract features from the IQ signal and the multi-channel constellation diagram, respectively. It is found that the complementary characteristics of the different signal forms allow a more complete signal feature representation to be constructed, which better alleviates the influence of noise and frequency offset on recognition performance and effectively improves the classification accuracy of modulation recognition.
(This article belongs to the Section Microwave and Wireless Communications)
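The constellation branch described above needs the I/Q samples rendered as an image. A common way to do this, sketched below with an illustrative bin count and axis range rather than the paper's settings, is a normalized 2D histogram of the I and Q components.

```python
import numpy as np

def constellation_image(i, q, bins=32, lim=2.0):
    """Render I/Q samples as a 2D-histogram constellation image,
    normalized to [0, 1]. Bin count and axis limits are assumptions."""
    img, _, _ = np.histogram2d(i, q, bins=bins,
                               range=[[-lim, lim]] * 2)
    return img / max(img.max(), 1.0)

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=(2, 500))       # QPSK-like grid
noisy = symbols + 0.1 * rng.standard_normal((2, 500))  # add channel noise
img = constellation_image(noisy[0], noisy[1])          # (32, 32) image
```

With low noise the histogram mass concentrates in four clusters; as noise grows the clusters smear together, which is exactly the degradation that makes constellation features unreliable at low SNR.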

22 pages, 5687 KiB  
Article
Electromagnetic Modulation Signal Classification Using Dual-Modal Feature Fusion CNN
by Jiansheng Bai, Jinjie Yao, Juncheng Qi and Liming Wang
Entropy 2022, 24(5), 700; https://doi.org/10.3390/e24050700 - 15 May 2022
Cited by 11 | Viewed by 3156
Abstract
Automatic modulation classification (AMC) plays a vital role in spectrum monitoring and electromagnetic abnormal signal detection. Up to now, few studies have focused on the complementarity between features of different modalities and the importance of the feature fusion mechanism in AMC methods. This paper proposes a dual-modal feature fusion convolutional neural network (DMFF-CNN) for AMC that fully uses the complementarity between different modal features. DMFF-CNN combines Gramian angular field (GAF) image coding and in-phase/quadrature (IQ) data with CNNs. Firstly, the original signal is converted into images by the GAF, and the GAF images are used as the input of a ResNet50. Secondly, the signal is converted into IQ data and fed into a complex-valued network (CV-CNN) to extract features. Furthermore, a dual-modal feature fusion (DMFF) mechanism is proposed to fuse the dual-modal features extracted by GAF-ResNet50 and the CV-CNN. The fused features are used as the input of the DMFF-CNN for model training to achieve AMC of multiple signal types. In the evaluation stage, the advantages of the proposed DMFF mechanism and the accuracy improvement over other feature fusion algorithms are discussed. The experiments show that our method outperforms others, including some state-of-the-art methods, has superior robustness at low signal-to-noise ratio (SNR), and reaches an average classification accuracy of 92.1% on the dataset signals. The DMFF-CNN proposed in this paper provides a new path for the AMC field.
(This article belongs to the Section Signal and Data Analysis)
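The GAF image coding mentioned in the abstract has a standard form: rescale the series to [-1, 1], map each value to an angle, and build the Gramian of summed angles. The sketch below is the common Gramian angular summation field (GASF) variant; the paper's exact scaling details may differ.

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian angular summation field of a 1-D series: rescale to
    [-1, 1], take phi = arccos, and form cos(phi_i + phi_j), giving
    an (N, N) image that encodes pairwise temporal correlations."""
    lo, hi = x.min(), x.max()
    x_scaled = 2 * (x - lo) / (hi - lo + 1e-12) - 1.0  # into [-1, 1]
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])         # (N, N) image

series = np.sin(np.linspace(0, 4 * np.pi, 64))
gaf = gramian_angular_field(series)                    # 64x64, symmetric
```

The resulting image is symmetric with entries in [-1, 1], which is what makes it a natural input for an image backbone such as the ResNet50 branch described above.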
