Search Results (337)

Search Parameters:
Keywords = deep learning for wireless communication

37 pages, 11360 KB  
Review
Intelligent Modulation Recognition of Frequency-Hopping Communications: Theory, Methods, and Challenges
by Mengxuan Lan, Zhongqiang Luo and Mingjun Jiang
Big Data Cogn. Comput. 2025, 9(12), 318; https://doi.org/10.3390/bdcc9120318 - 11 Dec 2025
Viewed by 92
Abstract
In wireless communication, information security, and anti-interference technology, modulation recognition of frequency-hopping signals has always been a key technique. Its widespread application in satellite communications, military communications, and drone communications holds broad prospects. Traditional modulation recognition techniques often rely on expert experience to construct likelihood functions or manually extract relevant features, involving cumbersome steps and low efficiency. In contrast, deep learning-based modulation recognition replaces manual feature extraction with an end-to-end feature extraction and recognition integrated architecture, where neural networks automatically extract signal features, significantly enhancing recognition efficiency. Current deep learning-based modulation recognition research primarily focuses on conventional fixed-frequency signals, leaving gaps in intelligent modulation recognition for frequency-hopping signals. This paper aims to summarise the current research progress in intelligent modulation recognition for frequency-hopping signals. It categorises intelligent modulation recognition for frequency-hopping signals into two mainstream approaches, analyses them in conjunction with the development of intelligent modulation recognition, and explores the close relationship between intelligent modulation recognition and parameter estimation for frequency-hopping signals. Finally, the paper summarises and outlines future research directions and challenges in the field of intelligent modulation recognition for frequency-hopping signals. Full article
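The survey above centers on time–frequency views of frequency-hopping signals. As an illustrative sketch only (not the authors' method), even a crude per-window DFT makes a hop pattern visible as a piecewise-constant track of peak frequency bins:

```python
import cmath
import math

def stft_peak_bins(x, win=32):
    """Peak DFT bin per non-overlapping window: a crude time-frequency
    view in which a frequency-hopping tone appears as a piecewise-
    constant track of peak bins."""
    peaks = []
    for start in range(0, len(x) - win + 1, win):
        seg = x[start:start + win]
        mags = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                        for n in range(win))) for k in range(win)]
        peaks.append(mags.index(max(mags)))
    return peaks

# Synthetic two-hop signal: bin 4 during the first window, bin 9 during the second
hop = [cmath.exp(2j * math.pi * 4 * n / 32) for n in range(32)] \
    + [cmath.exp(2j * math.pi * 9 * n / 32) for n in range(32)]
peaks = stft_peak_bins(hop)  # -> [4, 9]
```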

20 pages, 3599 KB  
Article
An Adaptative Wavelet Time–Frequency Transform with Mamba Network for OFDM Automatic Modulation Classification
by Hongji Xing, Xiaogang Tang, Lu Wang, Binquan Zhang and Yuepeng Li
AI 2025, 6(12), 323; https://doi.org/10.3390/ai6120323 - 9 Dec 2025
Viewed by 210
Abstract
Background: With the development of wireless communication technologies, the rapid advancement of 5G and 6G communication systems has created an urgent demand for low latency and high data rates. Orthogonal Frequency Division Multiplexing (OFDM) communication using high-order digital modulation has become a key technology due to its characteristics, such as high reliability, high data rate, and low latency, and has been widely applied in various fields. As a component of cognitive radios, automatic modulation classification (AMC) plays an important role in remote sensing and electromagnetic spectrum sensing. However, current complex channel conditions introduce issues such as low signal-to-noise ratio (SNR), Doppler frequency shift, and multipath propagation. Methods: Coupled with the inherently indistinct features of high-order modulations, these impairments make it difficult for AMC to handle OFDM and high-order digital modulation. Existing methods are mainly based on a single model-driven approach or data-driven approach. The Adaptive Wavelet Mamba Network (AWMN) proposed in this paper combines model-driven adaptive wavelet transform feature extraction with the Mamba deep learning architecture. A module based on the lifting wavelet scheme effectively captures discriminative time–frequency features using learnable operations. Meanwhile, a Mamba network constructed on the State Space Model (SSM) captures long-term temporal dependencies. This network realizes a combination of model-driven and data-driven methods. Results: Tests conducted on public datasets and a custom-built real-time received OFDM dataset show that the proposed AWMN achieves accuracies of 62.39%, 64.50%, and 74.95% on the public Rml2016(a) and Rml2016(b) datasets and our formulated EVAS dataset, respectively, while maintaining a compact parameter size of 0.44 M.
Conclusions: These results highlight its potential for improving the automatic modulation classification of high-order OFDM modulation in 5G/6G systems. Full article
(This article belongs to the Topic AI-Driven Wireless Channel Modeling and Signal Processing)
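The lifting wavelet scheme that AWMN makes learnable builds on fixed predict/update steps. A minimal sketch (illustrative only, not the paper's learnable version) using the Haar wavelet:

```python
def haar_lifting_forward(x):
    """One level of the Haar wavelet via the lifting scheme:
    split into even/odd samples -> predict (detail) -> update (approximation).
    AWMN-style networks replace these fixed operators with learnable ones."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order: perfect reconstruction."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_lifting_forward(signal)   # a=[5, 11, 7, 5], d=[2, 2, -2, 0]
```

The lifting structure is what makes the transform invertible by construction, which is why learnable predict/update operators still yield a valid wavelet-like decomposition.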

15 pages, 1660 KB  
Article
MLD-Net: A Multi-Level Knowledge Distillation Network for Automatic Modulation Recognition
by Xihui Zhang, Linrun Zhang, Meng Zhang, Zhenxi Zhang, Peiru Li, Xiaoran Shi and Feng Zhou
Sensors 2025, 25(23), 7143; https://doi.org/10.3390/s25237143 - 22 Nov 2025
Viewed by 532
Abstract
Automatic Modulation Recognition (AMR) is a critical technology for intelligent wireless communication systems, but the deployment of high-performance deep learning models is often hindered by their substantial computational and memory requirements. To address this challenge, this paper proposes a multi-level knowledge distillation network, namely MLD-Net, for creating a lightweight and powerful AMR model. Our approach employs a large Transformer-based network as a teacher to guide the training of a compact and efficient Reformer-based student model. The knowledge contained in the large model is transferred across three distinct granularities: at the output level, to convey high-level predictive distributions; at the feature level, to align intermediate representations; and at the attention level, to propagate relational information about signal characteristics. This comprehensive distillation strategy empowers the student model to effectively emulate the teacher’s complex reasoning processes. Experimental results on the RML2016.10A benchmark dataset demonstrate that MLD-Net achieves state-of-the-art performance, outperforming other baseline models across a wide range of signal-to-noise ratios while requiring only a fraction of the parameters. Extensive ablation studies further confirm the collaborative contribution of each distillation level, validating that the proposed MLD-Net is an effective solution for developing lightweight and efficient AMR networks for edge deployment. Full article
(This article belongs to the Section State-of-the-Art Sensors Technologies)
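The output-level and feature-level distillation terms can be sketched as toy loss functions. This is an illustrative sketch only: the temperature, the weights, and the omission of the attention-level term are assumptions, not the paper's settings:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_output_loss(student_logits, teacher_logits, T=4.0):
    """Output-level term: KL(teacher || student) between temperature-
    softened distributions, scaled by T^2 (Hinton-style distillation)."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def feature_loss(student_feat, teacher_feat):
    """Feature-level term: MSE between intermediate representations,
    assumed already projected to a common dimension."""
    n = len(student_feat)
    return sum((s - t) ** 2 for s, t in zip(student_feat, teacher_feat)) / n

# Hypothetical weights; the paper's attention-level term is omitted here
alpha, beta = 1.0, 0.5
loss = alpha * kd_output_loss([2.0, 0.5, -1.0], [2.2, 0.4, -1.2]) \
     + beta * feature_loss([0.10, 0.30], [0.20, 0.25])
```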

20 pages, 22800 KB  
Review
Deep Learning Empowered Signal Detection for Spatial Modulation Communication Systems
by Shaopeng Jin, Yuyang Peng and Fawaz AL-Hazemi
Mathematics 2025, 13(22), 3731; https://doi.org/10.3390/math13223731 - 20 Nov 2025
Viewed by 241
Abstract
Index modulation (IM) has attracted increasing research attention in recent years. Spatial modulation (SM), a popular IM scheme, effectively increases spectral efficiency by using the antenna index to transmit extra information bits. It can also address some issues that occur in multiple-input multiple-output systems, such as inter-channel interference and inter-antenna synchronization. Artificial intelligence, especially deep learning (DL), has made significant inroads in wireless communication. Recently, more researchers have started to apply DL methods to IM-based applications such as signal detection. Many results have proven that DL methods can achieve breakthroughs in metrics like bit error rate (BER) and time complexity compared to conventional signal detection methods. However, the problem of how to design this novel method in practical scenarios is far from fully understood. This article surveys several DL-based signal detection methods for IM and its variants. Moreover, we discuss the performance of different neural network structures, some of which achieve better performance than the original neural networks. In the implementation, trade-offs between BER and time complexity, as well as the neural network training time, are discussed. Several simulation results are provided to demonstrate how DL methods for signal detection in SM can improve BER and time complexity. Finally, some challenges and open issues that suggest future research directions are discussed. Full article
(This article belongs to the Section E2: Control Theory and Mechanics)
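The core SM idea, carrying extra bits on the active antenna index, can be sketched as a toy mapper (illustrative only; the antenna count and QPSK constellation here are assumptions, not from the article):

```python
import math

def sm_map(bits, n_tx=4, constellation=(1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j)):
    """Spatial-modulation mapper: the first log2(n_tx) bits select the
    active transmit antenna, the remaining bits select a QPSK symbol,
    so the antenna index itself carries extra information."""
    na = int(math.log2(n_tx))               # bits carried by the antenna index
    ns = int(math.log2(len(constellation))) # bits carried by the symbol
    assert len(bits) == na + ns
    antenna = int("".join(map(str, bits[:na])), 2)
    symbol = constellation[int("".join(map(str, bits[na:])), 2)]
    return antenna, symbol

antenna, symbol = sm_map([1, 0, 1, 1])  # antenna 2, constellation point 3
```

A detector (DL-based or otherwise) must recover both the antenna index and the symbol, which is why SM detection is harder than classical symbol detection.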

19 pages, 3120 KB  
Article
Computer-Vision- and Edge-Enabled Real-Time Assistance Framework for Visually Impaired Persons with LPWAN Emergency Signaling
by Ghadah Naif Alwakid, Mamoona Humayun and Zulfiqar Ahmad
Sensors 2025, 25(22), 7016; https://doi.org/10.3390/s25227016 - 17 Nov 2025
Viewed by 409
Abstract
In recent decades, various assistive technologies have emerged to support visually impaired individuals. However, there remains a gap in solutions that provide efficient, universal, and real-time capabilities by combining robust object detection, robust communication, continuous data processing, and emergency signaling in dynamic environments. In many existing systems, trade-offs are made in range, latency, or reliability when applied in changing outdoor or indoor scenarios. In this study, we propose a comprehensive framework specifically tailored for visually impaired people, integrating computer vision, edge computing, and a dual-channel communication architecture including low-power wide-area network (LPWAN) technology. The system utilizes the YOLOv5 deep-learning model for the real-time detection of obstacles, paths, and assistive tools (such as the white cane) with high performance: precision 0.988, recall 0.969, and mAP 0.985. Edge-computing devices are introduced to offload computational load from central servers, enabling fast local processing and decision-making. The communications subsystem uses Wi-Fi as the primary link, while a LoRaWAN channel acts as a fail-safe emergency alert network. An IoT-based panic button is incorporated to transmit immediate location-tagged alerts, enabling rapid response by authorities or caregivers. The experimental results demonstrate the system’s low latency and reliable operation under varied real-world conditions, indicating significant potential to improve independent mobility and quality of life for visually impaired people. The proposed solution offers a cost-effective and scalable architecture suitable for deployment in complex and challenging environments where real-time assistance is essential. Full article
(This article belongs to the Special Issue Technological Advances for Sensing in IoT-Based Networks)

36 pages, 4374 KB  
Review
Spectrum Sensing in Cognitive Radio Internet of Things: State-of-the-Art, Applications, Challenges, and Future Prospects
by Akeem Abimbola Raji and Thomas O. Olwal
J. Sens. Actuator Netw. 2025, 14(6), 109; https://doi.org/10.3390/jsan14060109 - 13 Nov 2025
Viewed by 1131
Abstract
The proliferation of Internet of Things (IoT) devices due to remarkable developments in mobile connectivity has caused a tremendous increase in the consumption of broadband spectrum in fifth-generation (5G) mobile access. In order to secure the continued growth of IoT, there is a need for efficient management of communication resources in 5G wireless access. Cognitive radio (CR) has been advanced to maximally utilize the available spectrum in radio communication networks. The integration of CR into IoT networks is a promising technology aimed at productive utilization of the spectrum, making more spectral bands available to IoT devices for communication. An important function of CR is spectrum sensing (SS), which enables maximum utilization of the spectrum in radio networks. Existing SS techniques demonstrate poor performance in noisy channel states and are not immune to the dynamic effects of wireless channels. This article presents a comprehensive review of various approaches commonly used for SS. Furthermore, multi-agent deep reinforcement learning (MADRL) is proposed for enhancing the accuracy of spectrum detection in erratic wireless channels. Finally, we highlight challenges that currently exist in SS in CRIoT networks and outline future research directions. Full article
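The classical energy detector that such reviews take as a baseline can be sketched as follows (illustrative only; the threshold here is an arbitrary assumption rather than a noise-calibrated one):

```python
import math
import random

def energy_detect(samples, threshold):
    """Classical energy detector: declare the band occupied when the
    average per-sample energy exceeds a threshold."""
    energy = sum(abs(s) ** 2 for s in samples) / len(samples)
    return energy > threshold, energy

random.seed(0)  # deterministic toy experiment
# Idle channel: unit-variance noise, average energy ~1.0
noise = [random.gauss(0.0, 1.0) for _ in range(4096)]
# Occupied channel: a tone buried in the same noise, average energy ~1.5
busy = [random.gauss(0.0, 1.0) + math.sin(0.2 * k) for k in range(4096)]

occ_idle, e_idle = energy_detect(noise, threshold=1.2)
occ_busy, e_busy = energy_detect(busy, threshold=1.2)
```

Its weakness under noise-power uncertainty is exactly the failure mode the article's learning-based alternatives target.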

38 pages, 4109 KB  
Article
End-to-End DAE–LDPC–OFDM Transceiver with Learned Belief Propagation Decoder for Robust and Power-Efficient Wireless Communication
by Mohaimen Mohammed and Mesut Çevik
Sensors 2025, 25(21), 6776; https://doi.org/10.3390/s25216776 - 5 Nov 2025
Viewed by 745
Abstract
This paper presents a Deep Autoencoder–LDPC–OFDM (DAE–LDPC–OFDM) transceiver architecture that integrates a learned belief propagation (BP) decoder to achieve robust, energy-efficient, and adaptive wireless communication. Unlike conventional modular systems that treat encoding, modulation, and decoding as independent stages, the proposed framework performs end-to-end joint optimization of all components, enabling dynamic adaptation to varying channel and noise conditions. The learned BP decoder introduces trainable parameters into the iterative message-passing process, allowing adaptive refinement of log-likelihood ratio (LLR) statistics and enhancing decoding accuracy across diverse SNR regimes. Extensive experimental results across multiple datasets and channel scenarios demonstrate the effectiveness of the proposed design. At 10 dB SNR, the DAE–LDPC–OFDM achieves a BER of 1.72% and BLER of 2.95%, outperforming state-of-the-art models such as Transformer–OFDM, CNN–OFDM, and GRU–OFDM by 25–30%, and surpassing traditional LDPC–OFDM systems by 38–42% across all tested datasets. The system also achieves a PAPR reduction of 26.6%, improving transmitter power amplifier efficiency, and maintains a low inference latency of 3.9 ms per frame, validating its suitability for real-time applications. Moreover, it maintains reliable performance under time-varying, interference-rich, and multipath fading channels, confirming its robustness in realistic wireless environments. The results establish the DAE–LDPC–OFDM as a high-performance, power-efficient, and scalable architecture capable of supporting the demands of 6G and beyond, delivering superior reliability, low-latency performance, and energy-efficient communication in next-generation intelligent networks. Full article
(This article belongs to the Special Issue AI-Driven Security and Privacy for IIoT Applications)

13 pages, 6355 KB  
Article
TranSIC-Net: An End-to-End Transformer Network for OFDM Symbol Demodulation with Validation on DroneID Signals
by Zhihong Wang and Zi-Xin Xu
Sensors 2025, 25(20), 6488; https://doi.org/10.3390/s25206488 - 21 Oct 2025
Viewed by 760
Abstract
Demodulating Orthogonal Frequency Division Multiplexing (OFDM) signals in complex wireless environments remains a fundamental challenge, especially when traditional receiver designs rely on explicit channel estimation under adverse conditions such as low signal-to-noise ratio (SNR) or carrier frequency offset (CFO). Motivated by practical challenges in decoding DroneID—a proprietary OFDM-like signaling format used by DJI drones with a nonstandard frame structure—we present TranSIC-Net, a Transformer-based end-to-end neural network that unifies channel estimation and symbol detection within a single architecture. Unlike conventional methods that treat these steps separately, TranSIC-Net implicitly learns channel dynamics from pilot patterns and exploits the attention mechanism to capture inter-subcarrier correlations. While initially developed to tackle the unique structure of DroneID, the model demonstrates strong generalizability: with minimal adaptation, it can be applied to a wide range of OFDM systems. Extensive evaluations on both synthetic OFDM waveforms and real-world unmanned aerial vehicle (UAV) signals show that TranSIC-Net consistently outperforms least-squares plus zero-forcing (LS+ZF) and leading deep learning baselines such as ProEsNet in terms of bit error rate (BER), estimation accuracy, and robustness—highlighting its effectiveness and flexibility in practical wireless communication scenarios. Full article
(This article belongs to the Section Communications)
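The LS+ZF baseline that TranSIC-Net is compared against can be sketched for a single flat subcarrier (illustrative only; noiseless toy channel):

```python
def ls_zf_demod(y, pilots_tx, pilots_rx):
    """Least-squares channel estimate from known pilots, followed by
    zero-forcing equalization: the classical per-subcarrier OFDM
    receiver baseline."""
    # LS estimate: average ratio of received to transmitted pilot symbols
    h_est = sum(r / t for r, t in zip(pilots_rx, pilots_tx)) / len(pilots_tx)
    return [yk / h_est for yk in y]  # ZF: divide out the channel

h = 0.6 - 0.8j                          # flat toy channel, one subcarrier
pilots_tx = [1 + 0j, -1 + 0j]
pilots_rx = [h * p for p in pilots_tx]  # pilots after the channel
rx = [h * s for s in [1 + 1j, -1 - 1j]] # data symbols after the channel
eq = ls_zf_demod(rx, pilots_tx, pilots_rx)
```

Under noise, CFO, or fast fading this two-step pipeline degrades, which is the motivation for learned end-to-end receivers.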

17 pages, 2166 KB  
Article
Blind Separation and Feature-Guided Modulation Recognition for Single-Channel Mixed Signals
by Zhiping Tan, Tianhui Fu, Xi Wu and Yixin Zhu
Electronics 2025, 14(20), 4103; https://doi.org/10.3390/electronics14204103 - 20 Oct 2025
Viewed by 562
Abstract
With increasingly scarce spectrum resources, frequency-domain signal overlap interference has become a critical issue, making multi-user modulation classification (MUMC) a significant challenge in wireless communications. Unlike single-user modulation classification (SUMC), MUMC suffers from feature degradation caused by signal aliasing, feature redundancy, and low inter-class discriminability. To address these challenges, this paper proposes a collaborative “separation–recognition” framework. The framework begins by separating overlapping signals via a band partitioning and FastICA module to alleviate feature degradation. For the recognition phase, we design a dual-branch network: one branch extracts prior knowledge features, including amplitude, phase, and frequency, from the I/Q sequence and models their temporal dependencies using a bidirectional LSTM; the other branch learns deep hierarchical representations directly from the raw signal through multi-scale convolutional layers. The features from both branches are then adaptively fused using a gated fusion module. Experimental results show that the proposed method achieves superior performance over several baseline models across various signal conditions, validating the efficacy of the dual-branch architecture and the overall framework. Full article
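The prior-knowledge branch's amplitude, phase, and frequency features can be sketched directly from a complex I/Q sequence (illustrative only, not the paper's implementation):

```python
import cmath
import math

def iq_features(iq):
    """Prior-knowledge features from a complex I/Q sequence:
    instantaneous amplitude, phase, and frequency (wrapped phase
    differences, in cycles per sample)."""
    amp = [abs(z) for z in iq]
    phase = [cmath.phase(z) for z in iq]
    freq = []
    for k in range(1, len(phase)):
        d = phase[k] - phase[k - 1]
        d = (d + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        freq.append(d / (2 * math.pi))               # cycles per sample
    return amp, phase, freq

# A pure tone at 0.05 cycles/sample: constant amplitude and frequency
tone = [cmath.exp(2j * math.pi * 0.05 * k) for k in range(8)]
amp, phase, freq = iq_features(tone)
```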

20 pages, 5744 KB  
Article
Decoupling Rainfall and Surface Runoff Effects Based on Spatio-Temporal Spectra of Wireless Channel State Information
by Hao Li, Yin Long and Tehseen Zia
Electronics 2025, 14(20), 4102; https://doi.org/10.3390/electronics14204102 - 20 Oct 2025
Viewed by 424
Abstract
Leveraging ubiquitous wireless signals for environmental sensing provides a highly promising pathway toward constructing low-cost and high-density flood monitoring systems. However, in real-world flood scenarios, the wireless channel is simultaneously affected by rainfall-induced signal attenuation and complex multipath effects caused by surface runoff (water accumulation). These two physical phenomena become intertwined in the received signals, resulting in severe feature ambiguity. This not only greatly limits the accuracy of environmental sensing but also hinders communication systems from performing effective channel compensation. How to disentangle these combined effects from a single wireless link represents a fundamental scientific challenge for achieving high-precision wireless environmental sensing and ensuring communication reliability under harsh conditions. To address this challenge, we propose a novel signal processing framework that aims to effectively decouple the effects of rainfall and surface runoff from Channel State Information (CSI) collected using commercial Wi-Fi devices. The core idea of our method lies in first constructing a two-dimensional CSI spatiotemporal spectrogram from continuously captured multicarrier CSI data. This spectrogram enables high-resolution visualization of the unique “fingerprints” of different physical effects—rainfall manifests as smooth background attenuation, whereas surface runoff appears as sparse high-frequency textures. Building upon this representation, we design and implement a Dual-Decoder Convolutional Autoencoder deep learning model. The model employs a shared encoder to learn the mixed CSI features, while two distinct decoder branches are responsible for reconstructing the global background component attributed to rainfall and the local texture component associated with surface runoff, respectively. 
Based on the decoupled signal components, we achieve simultaneous and highly accurate estimation of rainfall intensity (mean absolute error below 1.5 mm/h) and surface water accumulation (detection accuracy of 98%). Furthermore, when the decoupled and refined channel estimates are applied to a communication receiver for channel equalization, the Bit Error Rate (BER) is reduced by more than one order of magnitude compared to conventional equalization methods. Full article

19 pages, 1077 KB  
Article
Research on Optimization of RIS-Assisted Air-Ground Communication System Based on Reinforcement Learning
by Yuanyuan Yao, Xinyang Liu, Sai Huang and Xinwei Yue
Sensors 2025, 25(20), 6382; https://doi.org/10.3390/s25206382 - 16 Oct 2025
Viewed by 625
Abstract
In urban emergency communication scenarios, building obstructions can reduce the performance of base station (BS) communication networks. To address such issues, this paper proposes an air-ground wireless network enabled by an unmanned aerial vehicle (UAV) and assisted by reconfigurable intelligent surfaces (RIS). This system enhances the efficacy of UAV-enabled MISO networks. The UAV is treated as an intelligent agent that moves in 3D space and senses changes in the channel environment, while zero-forcing (ZF) precoding is adopted to eliminate interference among ground users. Meanwhile, joint design is performed for UAV movement, RIS phase shifts, and power allocation for users. We propose two deep reinforcement learning (DRL) algorithms, termed D3QN-WF and DDQN-WF, respectively. Simulation results indicate that D3QN-WF achieves a 15.9% higher sum rate and 50.1% greater throughput than the DDQN-WF baseline, while also demonstrating significantly faster convergence. Full article
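Zero-forcing precoding as used above inverts the channel matrix so that inter-user interference is nulled. A 2x2 toy sketch (illustrative only; the channel values are arbitrary):

```python
def zf_precoder_2x2(H):
    """Zero-forcing precoder for a 2-user, 2-antenna toy case: W is the
    inverse of the channel matrix H, so H @ W is the identity and the
    off-diagonal (inter-user interference) terms vanish."""
    (a, b), (c, d) = H
    det = a * d - b * c
    assert det != 0, "channel matrix must be invertible"
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = [[1 + 1j, 0.3], [0.2 - 0.1j, 0.8 + 0.5j]]
W = zf_precoder_2x2(H)
HW = matmul2(H, W)  # ~ identity: each user sees only its own signal
```

In practice ZF is computed as a pseudo-inverse for rectangular channels and is power-normalized; both refinements are omitted here.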

15 pages, 1507 KB  
Article
End-to-End Constellation Mapping and Demapping for Integrated Sensing and Communications
by Jiayong Yu, Jiahao Bai, Jingxuan Huang, Xingyi Wang, Jun Feng, Fanghao Xia and Zhong Zheng
Electronics 2025, 14(20), 4070; https://doi.org/10.3390/electronics14204070 - 16 Oct 2025
Viewed by 592
Abstract
Integrated sensing and communication (ISAC) is a transformative technology for sixth-generation (6G) wireless networks. In this paper, we investigate end-to-end constellation mapping and demapping in ISAC systems, leveraging OFDM-based waveforms and an adaptive DNN architecture for pulse-based transmission. Specifically, we propose an end-to-end autoencoder framework that optimizes the constellation through adaptive symbol distribution shaping via deep learning, enhancing communication reliability with symbol mapping and boosting sensing capabilities with an improved peak-to-sidelobe ratio (PSLR). The autoencoder consists of an autoencoder mapper (AE-Mapper) and an autoencoder demapper (AE-Demapper), jointly trained using a composite loss function to optimize constellation points and achieve flexible performance balance in communication and sensing. Simulation results demonstrate that the proposed DNN-based end-to-end design achieves dynamic balance between PSLR of the autocorrelation function (ACF) and bit error rate (BER). Full article
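The peak-to-sidelobe ratio of the autocorrelation function, the sensing metric optimized above, can be computed as follows (illustrative sketch; Barker-13 is used only as a well-known low-sidelobe example, not a constellation from the paper):

```python
import math

def pslr_db(seq):
    """Peak-to-sidelobe ratio (in dB) of the aperiodic autocorrelation
    of a complex symbol sequence; higher means sharper sensing peaks."""
    n = len(seq)
    acf = [abs(sum(seq[k] * seq[k + lag].conjugate() for k in range(n - lag)))
           for lag in range(n)]
    return 20 * math.log10(acf[0] / max(acf[1:]))

# Barker-13 (BPSK): all sidelobe magnitudes are 0 or 1, peak is 13,
# so its PSLR is exactly 20*log10(13) ~ 22.28 dB
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
pslr = pslr_db([complex(b) for b in barker13])
```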

24 pages, 1626 KB  
Article
Physical Layer Security Enhancement in IRS-Assisted Interweave CIoV Networks: A Heterogeneous Multi-Agent Mamba RainbowDQN Method
by Ruiquan Lin, Shengjie Xie, Wencheng Chen and Tao Xu
Sensors 2025, 25(20), 6287; https://doi.org/10.3390/s25206287 - 10 Oct 2025
Viewed by 564
Abstract
The Internet of Vehicles (IoV) relies on Vehicle-to-Everything (V2X) communications to enable cooperative perception among vehicles, infrastructures, and devices, where Vehicle-to-Infrastructure (V2I) links are crucial for reliable transmission. However, the openness of wireless channels exposes IoV to eavesdropping, threatening privacy and security. This paper investigates an Intelligent Reflecting Surface (IRS)-assisted interweave Cognitive IoV (CIoV) network to enhance physical layer security in V2I communications. A non-convex joint optimization problem involving spectrum allocation, transmit power for Vehicle Users (VUs), and IRS phase shifts is formulated. To address this challenge, a heterogeneous multi-agent (HMA) Mamba RainbowDQN algorithm is proposed, where homogeneous VUs and a heterogeneous secondary base station (SBS) act as distinct agents to simplify decision-making. Simulation results show that the proposed method significantly outperforms benchmark schemes, achieving a 13.29% improvement in secrecy rate and a 54.2% reduction in secrecy outage probability (SOP). These results confirm the effectiveness of integrating IRS and deep reinforcement learning (DRL) for secure and efficient V2I communications in CIoV networks. Full article
(This article belongs to the Section Sensor Networks)
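The secrecy-rate metric reported above is the legitimate link's capacity minus the eavesdropper's, floored at zero. A minimal sketch (illustrative; the SNR values are arbitrary assumptions):

```python
import math

def secrecy_rate(snr_user_db, snr_eve_db):
    """Secrecy rate (bits/s/Hz): legitimate-link Shannon capacity minus
    the eavesdropper's capacity, floored at zero."""
    c_user = math.log2(1 + 10 ** (snr_user_db / 10))
    c_eve = math.log2(1 + 10 ** (snr_eve_db / 10))
    return max(0.0, c_user - c_eve)

# A reflecting surface that adds ~3 dB to the legitimate link while
# leaving the eavesdropper unchanged raises the secrecy rate
base = secrecy_rate(10, 5)
with_irs = secrecy_rate(13, 5)
```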

22 pages, 3386 KB  
Article
Edge-AI Enabled Resource Allocation for Federated Learning in Cell-Free Massive MIMO-Based 6G Wireless Networks: A Joint Optimization Perspective
by Chen Yang and Quanrong Fang
Electronics 2025, 14(19), 3938; https://doi.org/10.3390/electronics14193938 - 4 Oct 2025
Cited by 1 | Viewed by 1074
Abstract
The advent of sixth-generation (6G) wireless networks and cell-free massive multiple-input multiple-output (MIMO) architectures underscores the need for efficient resource allocation to support federated learning (FL) at the network edge. Existing approaches often treat communication, computation, and learning in isolation, overlooking dynamic heterogeneity and fairness, which leads to degraded performance in large-scale deployments. To address this gap, we propose a joint optimization framework that integrates communication–computation co-design, fairness-aware aggregation, and a hybrid strategy combining convex relaxation with deep reinforcement learning. Extensive experiments on benchmark vision datasets and real-world wireless traces demonstrate that the framework achieves up to 23% higher accuracy, 18% lower latency, and 21% energy savings compared with state-of-the-art baselines. These findings advance joint optimization in federated learning (FL) and demonstrate scalability for 6G applications. Full article

17 pages, 6267 KB  
Article
Local and Remote Digital Pre-Distortion for 5G Power Amplifiers with Safe Deep Reinforcement Learning
by Christian Spano, Damiano Badini, Lorenzo Cazzella and Matteo Matteucci
Sensors 2025, 25(19), 6102; https://doi.org/10.3390/s25196102 - 3 Oct 2025
Cited by 1 | Viewed by 992
Abstract
The demand for higher data rates and energy efficiency in wireless communication systems drives power amplifiers (PAs) into nonlinear operation, causing signal distortions that hinder performance. Digital Pre-Distortion (DPD) addresses these distortions, but existing systems face challenges with complexity, adaptability, and resource limitations. This paper introduces DRL-DPD, a Deep Reinforcement Learning-based solution for DPD that aims to reduce computational burden, improve adaptation to dynamic environments, and minimize resource consumption. To ensure safety and regulatory compliance, we integrate an ad-hoc Safe Reinforcement Learning algorithm, CRE-DDPG (Cautious-Recoverable-Exploration Deep Deterministic Policy Gradient), which prevents ACLR measurements from falling below safety thresholds. Simulations and hardware experiments demonstrate the potential of DRL-DPD with CRE-DDPG to surpass current DPD limitations in both local and remote configurations, paving the way for more efficient communication systems, especially in the context of 5G and beyond. Full article
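Digital pre-distortion inverts the amplifier's nonlinearity so that the DPD-plus-PA cascade is linear. A toy memoryless sketch (illustrative only; the polynomial PA model and coefficients are assumptions, not the paper's DRL approach):

```python
def pa_model(x, a1=1.0, a3=-0.1):
    """Toy memoryless power-amplifier nonlinearity: an odd-order
    polynomial that compresses large inputs."""
    return a1 * x + a3 * x ** 3

def predistort(x, iters=20):
    """Find u with pa_model(u) ~= x by fixed-point iteration, i.e. the
    inverse nonlinearity a DPD block applies before the amplifier."""
    u = x
    for _ in range(iters):
        u -= pa_model(u) - x  # converges: the cubic correction is small
    return u

u = predistort(0.5)
linearized = pa_model(u)     # ~0.5: the DPD -> PA cascade is linear
uncorrected = pa_model(0.5)  # 0.4875: compression without DPD
```

Real DPD must also handle memory effects and complex baseband gain, which is where adaptive and learning-based schemes such as the one above come in.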
