Search Results (64)

Search Parameters:
Keywords = beamforming training

13 pages, 423 KiB  
Article
A Deep Learning-Driven Solution to Limited-Feedback MIMO Relaying Systems
by Kwadwo Boateng Ofori-Amanfo, Bridget Durowaa Antwi-Boasiako, Prince Anokye, Suho Shin and Kyoung-Jae Lee
Mathematics 2025, 13(14), 2246; https://doi.org/10.3390/math13142246 - 11 Jul 2025
Viewed by 371
Abstract
In this work, we investigate a new design strategy for a deep neural network (DNN)-based limited-feedback relay system that uses conventional filters to acquire training data, jointly addressing the issues of quantization and feedback. We aim to maximize the effective channel gain so as to reduce the symbol error rate (SER). By harnessing binary feedback information from the implemented DNNs together with efficient beamforming vectors, a novel approach to the resulting problem is presented. We compare our proposed system against a Grassmannian codebook benchmark and show that it achieves a lower SER.
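
As background for the limited-feedback setting studied here, the sketch below shows the basic codebook mechanism: the receiver picks the codeword that maximizes the effective channel gain and feeds back only its B-bit index. A random unit-norm codebook stands in for the Grassmannian benchmark; all sizes are illustrative.

```python
# Minimal sketch of the limited-feedback idea the paper builds on: the
# receiver selects the codebook beamforming vector maximizing the effective
# channel gain and feeds back only the index (B bits). The random codebook
# below is a stand-in for a Grassmannian codebook; sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
Nt, B = 4, 3                      # transmit antennas, feedback bits (assumed)
codebook = rng.standard_normal((2**B, Nt)) + 1j * rng.standard_normal((2**B, Nt))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)  # unit-norm beams

h = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)   # one channel draw

gains = np.abs(codebook @ h.conj()) ** 2   # effective channel gain per codeword
idx = int(np.argmax(gains))                # B-bit index fed back to transmitter
print(f"feedback index {idx:0{B}b}, gain {gains[idx]:.3f}")
```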

21 pages, 2797 KiB  
Article
Model-Driven Meta-Learning-Aided Fast Beam Prediction in Millimeter-Wave Communications
by Wenqin Lu, Xueqin Jiang, Yuwen Cao, Tomoaki Ohtsuki and Enjian Bai
Electronics 2025, 14(13), 2734; https://doi.org/10.3390/electronics14132734 - 7 Jul 2025
Viewed by 291
Abstract
Beamforming plays a key role in improving the spectrum utilization efficiency of multi-antenna systems. However, we observe that (i) conventional beam prediction solutions suffer from high model training overhead and computational latency and thus cannot adapt quickly to changing wireless environments, and (ii) deep-learning-based beamforming may face the risk of catastrophic forgetting in dynamically changing environments, which can significantly degrade system performance. Motivated by these challenges, we propose a continual-learning-inspired beam prediction model for fast beamforming adaptation in dynamic downlink millimeter-wave (mmWave) communications. More specifically, we develop a meta-experience replay (MER)-based beam prediction model that combines experience replay with optimization-based meta-learning. This approach optimizes the trade-off between transfer and interference in dynamic environments, enabling effective fast beamforming adaptation. Finally, the performance gains brought by the proposed model in dynamic communication environments are verified through simulations. The simulation results show that our proposed model not only maintains high performance on old tasks but also adapts quickly to new tasks.
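
A minimal sketch of the two ingredients named above, under toy assumptions (a small regression model, synthetic samples): a reservoir-sampled replay buffer combined with a Reptile-style meta update, one simple instantiation of optimization-based meta-learning. The paper's actual MER-based predictor is more elaborate.

```python
# Hedged sketch: experience replay + an optimization-based meta-learning
# (Reptile-style) update. The tiny model and random data are placeholders,
# not the paper's beam predictor.
import random
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
buffer, capacity, seen = [], 256, 0           # reservoir replay buffer
meta_lr, inner_lr = 0.1, 1e-2

def reservoir_add(sample):
    global seen
    seen += 1
    if len(buffer) < capacity:
        buffer.append(sample)
    else:
        j = random.randrange(seen)
        if j < capacity:
            buffer[j] = sample

for step in range(200):                       # stream of (feature, target) samples
    x, y = torch.randn(8), torch.randn(1)     # placeholder beam-prediction sample
    reservoir_add((x, y))
    before = [p.detach().clone() for p in model.parameters()]
    batch = random.sample(buffer, min(8, len(buffer))) + [(x, y)]
    for bx, by in batch:                      # inner SGD pass over replayed samples
        loss = nn.functional.mse_loss(model(bx), by)
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p in model.parameters():
                p -= inner_lr * p.grad
    with torch.no_grad():                     # Reptile meta-step toward new weights
        for p, b in zip(model.parameters(), before):
            p.copy_(b + meta_lr * (p - b))
```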

35 pages, 2010 KiB  
Article
Intelligent Transmission Control Scheme for 5G mmWave Networks Employing Hybrid Beamforming
by Hazem (Moh’d Said) Hatamleh, As’ad Mahmoud As’ad Alnaser, Roba Mahmoud Ali Aloglah, Tomader Jamil Bani Ata, Awad Mohamed Ramadan and Omar Radhi Aqeel Alzoubi
Future Internet 2025, 17(7), 277; https://doi.org/10.3390/fi17070277 - 24 Jun 2025
Viewed by 336
Abstract
Hybrid beamforming plays a critical role in the evolution of wireless communication technology, particularly for millimeter-wave (mmWave) multiple-input multiple-output (MIMO) communication, and several hybrid beamforming architectures have been investigated for this setting. The deployment of massive grant-free transmission in the mmWave band is required due to the growing demand for spectrum resources in upcoming massive machine-type communication applications. The development of 5G mmWave networks promises ultra-high data rates, reduced latency, and improved connectivity. Yet, due to severe path loss and directional communication requirements, there are substantial obstacles to transmission reliability and energy efficiency. To address these limitations, in this research we present an intelligent transmission control scheme tailored to 5G mmWave networks. Transmission control protocol (TCP) performance over mmWave links is enhanced at the protocol level by utilizing mmWave scalable (mmS)-TCP. To ensure that users receive the strongest average power, we propose a novel method, the row compression two-stage learning-based accurate multi-path processing network with a received signal strength indicator-based association strategy (RCTS-AMP-RSSI-AS), to estimate both the direct and indirect channels. To adapt to changing user scenarios and constantly maintain effective communication, we utilize the multi-user scenario-based MATD3 (Mu-MATD3) method. To improve performance, we introduce digital and analog beam training with long short-term memory (DAH-BT-LSTM). Finally, since optimizing network performance requires bottleneck-aware congestion reduction, low-latency congestion control schemes (LLCCS) are proposed. Together, the proposed methods improve the performance of 5G mmWave networks.
(This article belongs to the Special Issue Advances in Wireless and Mobile Networking—2nd Edition)
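
The beam-training component is only named in the abstract; purely as an illustration of LSTM-aided beam training (not the paper's DAH-BT-LSTM), the sketch below maps a short history of per-beam received-power measurements to a predicted best beam. All shapes and the synthetic data are assumptions.

```python
# Illustrative sketch (not the paper's architecture): an LSTM consumes a
# history of per-beam received-power measurements and predicts the best
# beam index for the next slot. Sizes and data are assumed.
import torch
import torch.nn as nn

n_beams, hist = 16, 8

class BeamLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_beams, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_beams)    # logits over candidate beams

    def forward(self, x):                     # x: (batch, hist, n_beams) powers
        out, _ = self.lstm(x)
        return self.head(out[:, -1])          # predict from last hidden state

model = BeamLSTM()
powers = torch.rand(32, hist, n_beams)        # synthetic RSSI history
logits = model(powers)
print(logits.argmax(dim=1)[:5])               # predicted beam indices
```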

20 pages, 5129 KiB  
Article
Multi-Band Analog Radio-over-Fiber Mobile Fronthaul System for Indoor Positioning, Beamforming, and Wireless Access
by Hang Yang, Wei Tian, Jianhua Li and Yang Chen
Sensors 2025, 25(7), 2338; https://doi.org/10.3390/s25072338 - 7 Apr 2025
Viewed by 640
Abstract
In response to the urgent demands of the Internet of Things for precise indoor target positioning and information interaction, this paper proposes a multi-band analog radio-over-fiber mobile fronthaul system. The objective is to obtain the target’s location in indoor environments while integrating remote beamforming capabilities to achieve wireless access to the targets. Vector signals centered at 3, 4, 5, and 6 GHz for indoor positioning and centered at 30 GHz for wireless access are generated centrally in the distributed unit (DU) and fiber-distributed to the active antenna unit (AAU) in the multi-band analog radio-over-fiber mobile fronthaul system. Target positioning is achieved by radiating electromagnetic waves indoors through four omnidirectional antennas in conjunction with a pre-trained neural network, while high-speed wireless communication is realized through a phased array antenna (PAA) comprising four antenna elements. Remote beamforming for the PAA is implemented through the integration of an optical true time delay pool in the multi-band analog radio-over-fiber mobile fronthaul system. This integration decouples the weight control of beamforming from the AAU, enabling centralized control of beam direction at the DU and thereby reducing the complexity and cost of the AAU. Simulation results show that the average accuracy of localization classification can reach 86.92%, and six discrete beam directions are achieved via the optical true time delay pool. In the optical transmission layer, when the received optical power is 10 dBm, the error vector magnitudes (EVMs) of vector signals in all frequency bands remain below 3%. In the wireless transmission layer, two beam directions were selected for verification. Once the beam is aligned with the target device at maximum gain and the received signal is properly processed, the EVM of millimeter-wave vector signals remains below 11%.
(This article belongs to the Section Communications)
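
The reason a true time delay pool steers the PAA without per-frequency recalibration can be seen from the array factor: delaying element n by tau_n = n·d·sin(theta)/c points the beam at theta for any carrier. A small sketch with assumed half-wavelength spacing:

```python
# Sketch of true-time-delay beam steering for a 4-element phased array:
# per-element delays tau_n = n * d * sin(theta) / c hold the beam direction
# across frequency, unlike narrowband phase shifters. Spacing is assumed.
import numpy as np

c, fc = 3e8, 30e9                 # speed of light, 30 GHz carrier (as in the paper)
d = c / fc / 2                    # half-wavelength element spacing (assumed)
n = np.arange(4)                  # 4 PAA elements

def array_factor(theta_steer_deg, theta_deg, f=fc):
    tau = n * d * np.sin(np.radians(theta_steer_deg)) / c    # TTD per element
    phase = 2 * np.pi * f * (n * d * np.sin(np.radians(theta_deg)) / c - tau)
    return np.abs(np.exp(1j * phase).sum()) / len(n)

print(array_factor(20, 20))             # ~1.0: beam aligned with steering angle
print(round(array_factor(20, -40), 3))  # low gain away from the main lobe
```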

35 pages, 2387 KiB  
Article
Multi-Channel Speech Enhancement Using Labelled Random Finite Sets and a Neural Beamformer in Cocktail Party Scenario
by Jayanta Datta, Ali Dehghan Firoozabadi, David Zabala-Blanco and Francisco R. Castillo-Soria
Appl. Sci. 2025, 15(6), 2944; https://doi.org/10.3390/app15062944 - 8 Mar 2025
Viewed by 1471
Abstract
In this research, a multi-channel target speech enhancement scheme is proposed that is based on a deep learning (DL) architecture and assisted by multi-source tracking using a labeled random finite set (RFS) framework. A neural network based on the minimum variance distortionless response (MVDR) beamformer is adopted, where a residual dense convolutional graph-U-Net is applied in a generative adversarial network (GAN) setting to model the beamformer for target speech enhancement under reverberant conditions involving multiple moving speech sources. The input dataset for this neural architecture is constructed by applying multi-source tracking with multi-sensor generalized labeled multi-Bernoulli (MS-GLMB) filtering, which belongs to the labeled RFS framework, to estimate the sources’ positions and the associated labels (corresponding to each source) at each time frame with high accuracy under undesirable factors like reverberation and background noise. The tracked sources’ positions and associated labels help to correctly discriminate the target source from the interferers across all time frames and to generate time–frequency (T-F) masks corresponding to the target source from the output of a time-varying MVDR beamformer. These T-F masks constitute the target label set used to train the proposed deep neural architecture to perform target speech enhancement. The exploitation of MS-GLMB filtering and a time-varying MVDR beamformer provides spatial information about the sources, in addition to spectral information, within the neural speech enhancement framework during the training phase. Moreover, the GAN framework takes advantage of adversarial optimization as an alternative to maximum likelihood (ML)-based frameworks, which further boosts target speech enhancement performance under reverberant conditions. Computer simulations demonstrate that the proposed approach outperforms existing state-of-the-art DL-based methodologies that do not incorporate the labeled RFS-based approach, as evidenced by the 75% ESTOI and PESQ of 2.70 achieved by the proposed approach, compared with the 46.74% ESTOI and PESQ of 1.84 achieved by Mask-MVDR with a self-attention mechanism at a reverberation time (RT60) of 550 ms.
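
The MVDR beamformer at the core of this architecture has the textbook closed form w = R⁻¹d / (dᴴR⁻¹d), with R the interference-plus-noise covariance and d the steering vector toward the target; a small sketch with a toy covariance:

```python
# The classical MVDR beamformer in its textbook form: w = R^{-1} d / (d^H R^{-1} d),
# with R the interference-plus-noise covariance and d the target steering vector.
# The covariance and steering vector below are toy values.
import numpy as np

def mvdr_weights(R, d):
    Rinv_d = np.linalg.solve(R, d)            # R^{-1} d without explicit inverse
    return Rinv_d / (d.conj() @ Rinv_d)       # distortionless toward d

M = 6                                          # microphones (assumed)
rng = np.random.default_rng(1)
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T + np.eye(M)                 # toy Hermitian PSD covariance
d = np.exp(-2j * np.pi * 0.25 * np.arange(M))  # toy steering vector
w = mvdr_weights(R, d)
print(np.abs(w.conj() @ d))                    # = 1: unit gain on target direction
```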

16 pages, 1589 KiB  
Article
A Two-Phase Deep Learning Approach to Link Quality Estimation for Multiple-Beam Transmission
by Mun-Suk Kim
Electronics 2024, 13(22), 4561; https://doi.org/10.3390/electronics13224561 - 20 Nov 2024
Viewed by 832
Abstract
In the multi-user multiple-input multiple-output (MU-MIMO) beamforming (BF) training defined by the IEEE 802.11ay standard, a single initiator transmits a significant number of action frames to multiple responders, so an inefficient configuration of the transmit antenna arrays when sending these action frames increases the signaling and latency overheads of MU-MIMO BF training. To configure appropriate transmit antenna arrays for transmitting action frames, the initiator needs to accurately estimate the signal-to-noise ratios (SNRs) measured at the responders for each configuration of the transmit antenna arrays. In this paper, we propose a two-phase deep learning approach that improves the accuracy of SNR estimation for multiple concurrent beams by reducing the measurement errors of the SNRs of individual single beams when each action frame is transmitted through multiple concurrent beams. Through simulations, we demonstrate that our proposed scheme enables more responders to successfully receive action frames during MU-MIMO BF training compared to existing schemes.
(This article belongs to the Special Issue Digital Signal Processing and Wireless Communication)
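
The abstract does not spell out the two phases, so the following is a hypothetical sketch of one plausible reading: a small network refines noisy single-beam SNR measurements (phase 1), and the refined gains are then combined into a multi-beam SNR estimate (phase 2). The linear-domain additive combining rule and all shapes are assumptions.

```python
# Hypothetical two-phase sketch (an assumed reading, not the paper's model):
# phase 1 learns a correction for noisy per-beam SNR measurements; phase 2
# combines refined single-beam gains into a multi-beam SNR estimate.
import torch
import torch.nn as nn

n_beams = 4
denoiser = nn.Sequential(nn.Linear(n_beams, 32), nn.ReLU(), nn.Linear(32, n_beams))

noisy_snr_db = 10 + 3 * torch.randn(1, n_beams)   # measured single-beam SNRs (dB)
refined_db = denoiser(noisy_snr_db)               # phase 1 (untrained: flow only)

# Phase 2: assumed additive combining of single-beam gains in the linear domain.
lin = 10 ** (refined_db / 10)
multi_beam_snr_db = 10 * torch.log10(lin.sum(dim=1))
print(multi_beam_snr_db)
```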

18 pages, 1548 KiB  
Article
Deep Learning-Based Low-Frequency Passive Acoustic Source Localization
by Arnav Joshi and Jean-Pierre Hickey
Appl. Sci. 2024, 14(21), 9893; https://doi.org/10.3390/app14219893 - 29 Oct 2024
Cited by 1 | Viewed by 1489
Abstract
This paper develops benchmark cases for low- and very-low-frequency passive acoustic source localization (ASL) using synthetic data. These cases can potentially be applied to the detection of turbulence-generated low-frequency acoustic emissions in the atmosphere. A deep learning approach is used as an alternative to conventional beamforming, which performs poorly under these conditions. The cases, which include two- and three-dimensional ASL, use a shallow and inexpensive convolutional neural network (CNN) with an appropriate input feature to optimize the source localization. CNNs are trained on a limited dataset to highlight the computational tractability and viability of the low-frequency ASL approach. Despite the modest training sets and computational expense, detection accuracies of at least 80% and far superior performance compared with beamforming are achieved, a result that can be improved with more data, training, and deeper networks. These benchmark cases offer well-defined and repeatable representative problems for comparison and further development of deep learning-based low-frequency ASL.
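
The "appropriate input feature" is left unspecified in the abstract; one common choice for passive localization, shown here purely as an illustrative assumption, is GCC-PHAT, whose peak lag gives the time difference of arrival between two sensors:

```python
# GCC-PHAT: a common input feature for learning-based acoustic localization
# (an illustrative choice here, not necessarily the paper's feature). The
# peak lag of the PHAT-weighted cross-correlation is the inter-sensor delay.
import numpy as np

def gcc_phat(x, y, fs):
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    G = X * np.conj(Y)
    G /= np.abs(G) + 1e-12                 # PHAT weighting: keep phase only
    cc = np.fft.irfft(G, n)
    max_lag = n // 2
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
    return (np.argmax(np.abs(cc)) - max_lag) / fs

rng = np.random.default_rng(0)
fs = 16000
s = rng.standard_normal(fs)                # broadband test signal
tau = gcc_phat(s, np.roll(s, 25), fs)
print(tau * fs)                            # -25: second sensor lags by 25 samples
```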

20 pages, 3003 KiB  
Article
Equipment Sounds’ Event Localization and Detection Using Synthetic Multi-Channel Audio Signal to Support Collision Hazard Prevention
by Kehinde Elelu, Tuyen Le and Chau Le
Buildings 2024, 14(11), 3347; https://doi.org/10.3390/buildings14113347 - 23 Oct 2024
Viewed by 1234
Abstract
Construction workplaces often face unforeseen collision hazards due to a decline in auditory situational awareness among on-foot workers, leading to severe injuries and fatalities. Previous studies that used auditory signals to prevent collision hazards focused on employing a classical beamforming approach to determine equipment sounds’ Direction of Arrival (DOA). No existing framework implements a neural network-based approach for both equipment sound classification and localization. This paper presents an innovative framework for sound classification and localization using multichannel sound datasets artificially synthesized in a virtual three-dimensional space. The simulation synthesized 10,000 multi-channel datasets from just fourteen single-sound-source audio recordings. The framework trains a two-stage convolutional recurrent neural network (CRNN), where the first stage learns multi-label sound event classes and the second stage estimates their DOA. The proposed framework achieves a low average DOA error of 30 degrees and a high F-score of 0.98, demonstrating accurate localization and classification of equipment near workers’ positions on site.
(This article belongs to the Special Issue Big Data Technologies in Construction Management)
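
A skeleton of a two-stage CRNN of the kind described, with all layer sizes and feature shapes assumed: convolutional blocks over multichannel spectrogram features, a recurrent layer, then a multi-label sound event head (stage 1) and a per-class DOA head (stage 2).

```python
# Skeleton (sizes assumed, not the paper's exact network) of a two-stage CRNN:
# conv blocks over multichannel spectrograms, a GRU, and two heads for sound
# event detection (stage 1) and per-class DOA regression (stage 2).
import torch
import torch.nn as nn

class SELDNet(nn.Module):
    def __init__(self, n_ch=4, n_classes=14, n_mels=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(n_ch, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((1, 4)),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((1, 4)),
        )
        self.gru = nn.GRU(64 * (n_mels // 16), 128, batch_first=True)
        self.sed_head = nn.Linear(128, n_classes)   # event activity logits
        self.doa_head = nn.Linear(128, n_classes)   # one azimuth per class

    def forward(self, x):                   # x: (batch, channels, frames, mels)
        z = self.conv(x)                    # -> (batch, 64, frames, mels / 16)
        z = z.permute(0, 2, 1, 3).flatten(2)        # (batch, frames, features)
        z, _ = self.gru(z)
        return self.sed_head(z), self.doa_head(z)

sed, doa = SELDNet()(torch.randn(2, 4, 100, 64))
print(sed.shape, doa.shape)                 # (2, 100, 14) each
```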

23 pages, 4654 KiB  
Article
Effective Acoustic Model-Based Beamforming Training for Static and Dynamic HRI Applications
by Alejandro Luzanto, Nicolás Bohmer, Rodrigo Mahu, Eduardo Alvarado, Richard M. Stern and Néstor Becerra Yoma
Sensors 2024, 24(20), 6644; https://doi.org/10.3390/s24206644 - 15 Oct 2024
Viewed by 2070
Abstract
Human–robot collaboration will play an important role in the fourth industrial revolution, in applications related to hostile environments, mining, industry, forestry, education, natural disasters, and defense. Effective collaboration requires robots to understand human intentions and tasks, which involves advanced user profiling. Voice-based communication, rich in complex information, is key to this. Beamforming, a technology that enhances speech signals, can help robots extract semantic, emotional, or health-related information from speech. This paper describes the implementation of a system that provides substantially improved signal-to-noise ratio (SNR) and speech recognition accuracy to a moving robotic platform for use in human–robot interaction (HRI) applications in static and dynamic contexts. This study focuses on training deep learning-based beamformers using acoustic model-based multi-style training with measured room impulse responses (RIRs). The results show that this approach outperforms training with simulated RIRs or matched measured RIRs, especially in dynamic conditions involving robot motion. The findings suggest that training with a broad range of measured RIRs is sufficient for effective HRI in various environments, making additional data recording or augmentation unnecessary. This research demonstrates that deep learning-based beamforming can significantly improve HRI performance, particularly in challenging acoustic environments, surpassing traditional beamforming methods.
(This article belongs to the Special Issue Advanced Sensors and AI Integration for Human–Robot Teaming)
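
Multi-style training data of the kind this study uses is typically synthesized by convolving clean speech with a room impulse response and adding noise at a target SNR. A minimal sketch, with a synthetic exponential-decay RIR standing in for the measured RIRs:

```python
# Sketch of multi-style training data synthesis: convolve clean speech with
# an RIR and add noise at a chosen SNR. The exponential-decay RIR here is
# synthetic, standing in for the measured RIRs used in the study.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
fs = 16000
clean = rng.standard_normal(2 * fs)            # stand-in for a speech utterance

rir = np.zeros(int(0.3 * fs))                  # toy 300 ms impulse response
rir[0] = 1.0
rir[1:] = rng.standard_normal(len(rir) - 1) * np.exp(
    -np.arange(1, len(rir)) / (0.05 * fs))     # decaying late reflections

def reverberate(x, h, snr_db):
    y = fftconvolve(x, h)[: len(x)]
    noise = rng.standard_normal(len(y))
    noise *= np.linalg.norm(y) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    return y + noise

train_example = reverberate(clean, rir, snr_db=10)
```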

11 pages, 1333 KiB  
Communication
Virtual Array-Based Signal Detection and Carrier Frequency Offset Estimation in a Multistatic Collaborative Passive Detection System
by Xiaomao Cao, Hong Ma, Hua Zhang and Jiang Jin
Remote Sens. 2024, 16(17), 3152; https://doi.org/10.3390/rs16173152 - 26 Aug 2024
Viewed by 974
Abstract
To tackle the problems of signal detection and carrier frequency synchronization faced by wireless communication among stations of a multistatic passive detection system in interference environments, an adaptive signal detection and carrier frequency offset (CFO) estimation method based on a virtual array is proposed in this paper. This is a data-aided method that utilizes a training sequence composed of three segments of sub-training sequences with different symbols. The method first uses spatial spectrum estimation to obtain coarse frequency estimates of the interference signals and the CFO from virtual array signals constructed from the first two sub-training sequences. Then, beamforming is conducted on the virtual array signals constructed from the third sub-training sequence to suppress the in-band interference and emphasize the expected signal. Finally, improved signal detection and CFO estimation performance is obtained with the beamformed signals. Simulation experiments show that a missed detection probability as low as 1 × 10⁻⁴, with a false detection probability of 1 × 10⁻³, can be obtained at a signal-to-interference ratio (SIR) of −10 dB and an Eb/N0 of 1 dB. Moreover, the proposed method simultaneously achieves a CFO estimation error lower than 3% with Eb/N0 as low as −5 dB under different SIRs. The simulation results validate the proposed method and demonstrate its promising application prospects in networked passive detection scenarios.
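
For background, the classical data-aided CFO estimator (not the virtual-array method proposed here) correlates two repeats of a known training segment; the CFO follows from the phase of that correlation, unambiguous for |f| < fs/(2N):

```python
# Background sketch: the classical repeated-training-segment (Moose-style)
# CFO estimator, shown for context rather than as the paper's method. With a
# training sequence repeating after N samples, the CFO follows from the phase
# of the correlation between the two halves.
import numpy as np

rng = np.random.default_rng(0)
fs, N = 1e6, 256                               # sample rate, repetition length
seg = np.exp(2j * np.pi * rng.random(N))       # unit-modulus training segment
tx = np.tile(seg, 2)                           # transmitted repeated sequence

cfo_true = 1234.0                              # Hz (toy value)
n = np.arange(2 * N)
rx = tx * np.exp(2j * np.pi * cfo_true * n / fs)
rx += 0.05 * (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N))

corr = np.sum(np.conj(rx[:N]) * rx[N:])        # correlate the two halves
cfo_hat = np.angle(corr) * fs / (2 * np.pi * N)
print(cfo_hat)                                 # ~1234 Hz
```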

14 pages, 527 KiB  
Article
CAWE-ACNN Algorithm for Coprime Sensor Array Adaptive Beamforming
by Fulai Liu, Wu Zhou, Dongbao Qin, Zhixin Liu, Huifang Wang and Ruiyan Du
Sensors 2024, 24(17), 5454; https://doi.org/10.3390/s24175454 - 23 Aug 2024
Cited by 1 | Viewed by 1149
Abstract
This paper presents a robust adaptive beamforming algorithm based on an attention convolutional neural network (ACNN) for coprime sensor arrays, named the CAWE-ACNN algorithm. In the proposed algorithm, an ACNN model with a spatial and channel attention unit is constructed to enhance the features contributing to beamforming weight vector estimation and thereby improve the signal-to-interference-plus-noise ratio (SINR) performance. Then, an interference-plus-noise covariance matrix reconstruction algorithm is used to obtain appropriate labels for the proposed ACNN model. Using the calculated labels and the sample signals received from the coprime sensor array, the ACNN is trained and capable of accurately and efficiently outputting the beamforming weight vector. The simulation results verify that the proposed algorithm achieves excellent SINR performance and high computational efficiency.
(This article belongs to the Special Issue Signal Detection and Processing of Sensor Arrays)
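
The labeling step relies on interference-plus-noise covariance (INC) matrix reconstruction, whose standard form integrates the Capon spatial spectrum over angles outside the target sector. A sketch for a uniform linear array, a simplification of the coprime geometry used in the paper:

```python
# Standard INC matrix reconstruction: integrate the Capon spatial spectrum
# over the angular sector away from the signal of interest. A uniform linear
# array steering model is used here as a simplification of a coprime array.
import numpy as np

def steering(theta_deg, M):
    return np.exp(-1j * np.pi * np.arange(M) * np.sin(np.radians(theta_deg)))

def reconstruct_inc(R_sample, M, signal_sector=(-10, 10), grid=1.0):
    Rinv = np.linalg.inv(R_sample)
    R_in = np.zeros((M, M), dtype=complex)
    for th in np.arange(-90, 90, grid):
        if signal_sector[0] <= th <= signal_sector[1]:
            continue                            # skip the target's sector
        a = steering(th, M)
        p = 1.0 / np.real(a.conj() @ Rinv @ a)  # Capon spectrum at angle th
        R_in += p * np.outer(a, a.conj()) * np.radians(grid)
    return R_in
```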

21 pages, 2816 KiB  
Article
Adaptive Hybrid Beamforming Codebook Design Using Multi-Agent Reinforcement Learning for Multiuser Multiple-Input–Multiple-Output Systems
by Manasjyoti Bhuyan, Kandarpa Kumar Sarma, Debashis Dev Misra, Koushik Guha and Jacopo Iannacci
Appl. Sci. 2024, 14(16), 7109; https://doi.org/10.3390/app14167109 - 13 Aug 2024
Cited by 1 | Viewed by 2734
Abstract
This paper presents a novel approach to designing beam codebooks for downlink multiuser hybrid multiple-input–multiple-output (MIMO) wireless communication systems, leveraging multi-agent reinforcement learning (MARL). The primary objective is to develop an environment-specific beam codebook composed of non-interfering beams, learned by cooperative agents within the MARL framework. Machine learning (ML)-based beam codebook design for downlink communications has been based on channel state information (CSI) feedback or on reference signal received power (RSRP) alone, consisting of an offline training and user clustering phase. In massive MIMO, the full CSI feedback is large and resource-intensive to process, making it challenging to implement efficiently, while RSRP alone at a stand-alone base station is not a good indicator of a receiver's position. Hence, in this work, uplink CSI estimated at the base station, along with feedback of RSRP and a binary acknowledgment of the accuracy of received data, is utilized to design the beamforming codebook at the base station. Simulations using a sub-array antenna and a ray-tracing channel model demonstrate the proposed system's ability to learn a topography-aware beam codebook of arbitrary beams serving multiple user groups simultaneously. The proposed method extends beyond mono-lobe and fixed-beam architectures by dynamically adapting arbitrarily shaped beams to avoid inter-beam interference, enhancing overall system performance. This work leverages MARL's potential for creating efficient beam codebooks for hybrid MIMO systems, paving the way for enhanced multiuser communication in future wireless networks.
(This article belongs to the Special Issue New Challenges in MIMO Communication Systems)
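
As a deliberately simplified stand-in for the MARL codebook design (illustrative only, not the paper's algorithm), the toy below uses independent stateless Q-learning: each agent owns one codebook entry and learns a direction that serves its user group while being penalized for inter-beam overlap. Rewards and sizes are assumptions.

```python
# Toy MARL flavor of the codebook problem (independent stateless Q-learning,
# a much simpler stand-in for the paper's method): each agent owns one
# codebook entry and learns a beam direction for its user group, penalized
# when two agents choose directions close enough to interfere.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_dirs, eps, lr = 3, 12, 0.2, 0.1
Q = np.zeros((n_agents, n_dirs))
user_dirs = np.array([1, 5, 9])                # each group's best direction

for episode in range(3000):
    acts = [rng.integers(n_dirs) if rng.random() < eps else int(np.argmax(Q[i]))
            for i in range(n_agents)]
    for i, a in enumerate(acts):
        reward = -abs(a - user_dirs[i])        # serve your own user group...
        if any(abs(a - b) <= 1 for j, b in enumerate(acts) if j != i):
            reward -= 5                        # ...but avoid inter-beam overlap
        Q[i, a] += lr * (reward - Q[i, a])     # stateless (bandit) update

print([int(np.argmax(Q[i])) for i in range(n_agents)])  # learned codebook
```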

24 pages, 42566 KiB  
Article
Deblurring of Beamformed Images in the Ocean Acoustic Waveguide Using Deep Learning-Based Deconvolution
by Zijie Zha, Xi Yan, Xiaobin Ping, Shilong Wang and Delin Wang
Remote Sens. 2024, 16(13), 2411; https://doi.org/10.3390/rs16132411 - 1 Jul 2024
Cited by 2 | Viewed by 1500
Abstract
A horizontal towed linear coherent hydrophone array is often employed to estimate the spatial intensity distribution of incident plane waves scattered from geological and biological features in an ocean acoustic waveguide using conventional beamforming. However, due to the physical limitations of the array aperture, the spatial resolution after conventional beamforming is often limited by the fat main lobe and high sidelobes. Here, we propose a method, originating in computer-vision deblurring and based on deep learning, to enhance the spatial resolution of beamformed images. The blurring introduced by conventional beamforming can be modeled as a convolution of the beam pattern, which acts as a point spread function (PSF), with the original spatial intensity distribution of the incident plane waves. A modified U-Net-like network is trained on a simulated dataset in which the instantaneous acoustic complex amplitude is assumed to follow circular complex Gaussian random (CCGR) statistics. Both synthetic data and experimental data collected from the South China Sea Experiment in 2021 are used to illustrate the effectiveness of this approach, which shows up to a 700% reduction in 3 dB width over conventional beamforming. A lower normalized mean square error (NMSE) is achieved compared with other deconvolution-based algorithms, such as the Richardson–Lucy algorithm and the approximate likelihood model-based deconvolution algorithm. The method is applicable to various acoustic imaging applications that employ linear coherent hydrophone arrays with one-dimensional conventional beamforming, such as ocean acoustic waveguide remote sensing (OAWRS).
(This article belongs to the Topic Advances in Underwater Acoustics and Aeroacoustics)
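
The Richardson–Lucy baseline mentioned above takes only a few lines in the 1D beamformed-intensity case; the Gaussian PSF (a fat main lobe) and two-source scene below are illustrative:

```python
# The Richardson-Lucy deconvolution baseline the paper compares against, in
# 1D: iteratively deblur a beamformed intensity given the beam pattern as
# the PSF. The Gaussian PSF and two-source scene are toy values.
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    est = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / (conv + 1e-12)
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

angles = np.linspace(-90, 90, 181)
scene = np.zeros_like(angles)
scene[60], scene[75] = 1.0, 0.6                         # two point sources
psf = np.exp(-0.5 * (np.linspace(-10, 10, 21) / 3) ** 2)  # fat main lobe
blurred = np.convolve(scene, psf / psf.sum(), mode="same")
deblurred = richardson_lucy(blurred, psf)
print(np.argsort(deblurred)[-2:])                       # peaks near 60 and 75
```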

12 pages, 12837 KiB  
Article
Improved Convolutional Neural Network for Wideband Space-Time Beamforming
by Ming Guo, Zixuan Shen, Yuee Zhou and Shenghui Li
Electronics 2024, 13(13), 2492; https://doi.org/10.3390/electronics13132492 - 26 Jun 2024
Viewed by 1852
Abstract
Wideband beamforming is an effective solution in millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems to compensate for severe path loss through beamforming gain. However, traditional adaptive wideband digital beamforming (AWDBF) algorithms suffer from serious performance degradation when signal snapshots are insufficient, and the training process of existing neural network-based wideband beamforming networks is slow and unstable. To address these issues, an AWDBF method based on a convolutional neural network (CNN) structure, the improved wideband beamforming prediction network (IWBPNet), is proposed. The method increases the network's feature extraction capability for array signals through deep convolutional layers, alleviating the problem of insufficient feature extraction. In addition, pooling layers are introduced into the IWBPNet to address the oversized fully connected layers of existing neural network-based wideband beamforming algorithms, which slow network training; the pooling operation also improves the network's generalization ability. Furthermore, the IWBPNet achieves good wideband beamforming performance with few signal snapshots, including beam pattern performance and output signal-to-interference-plus-noise ratio (SINR) performance. Simulation results show that the proposed algorithm outperforms the traditional wideband beamformer when signal snapshots are scarce. Compared with the neural network-based wideband beamforming algorithm, the training time of the IWBPNet is only 10.6% of that of the original neural network-based wideband beamformer, while the beamforming performance is slightly improved. Simulations and numerical analyses demonstrate the effectiveness and superiority of the proposed wideband beamformer.
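
A hedged sketch of the architectural point, not the IWBPNet itself: convolution plus pooling shrinks the tensor reaching the final dense layer that maps array snapshots to wideband weights, which is what cuts training time relative to a wide fully connected design. All sizes are assumed.

```python
# Sketch (sizes assumed, not the IWBPNet): conv + pooling layers shrink the
# feature tensor before the dense head that predicts wideband (tapped-delay-
# line) beamforming weights, keeping the fully connected layer small.
import torch
import torch.nn as nn

M, taps, snaps = 16, 8, 32                      # elements, TDL taps, snapshots

net = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * (M // 4) * (snaps // 4), 2 * M * taps),  # small FC head
)

x = torch.randn(1, 2, M, snaps)                 # real/imag array snapshots
w = net(x).reshape(1, 2, M, taps)               # predicted wideband weights
print(w.shape)
```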

17 pages, 494 KiB  
Article
Robust Offloading for Edge Computing-Assisted Sensing and Communication Systems: A Deep Reinforcement Learning Approach
by Li Shen, Bin Li and Xiaojie Zhu
Sensors 2024, 24(8), 2489; https://doi.org/10.3390/s24082489 - 12 Apr 2024
Cited by 2 | Viewed by 1556
Abstract
In this paper, we consider an integrated sensing, communication, and computation (ISCC) system to alleviate the problems of spectrum congestion and computation burden. Specifically, while serving communication users, a base station (BS) actively senses targets and collaborates seamlessly with the edge server to concurrently process the acquired sensing data for efficient target recognition. A significant challenge in edge computing systems arises from the inherent uncertainty in computation, mainly stemming from the unpredictable complexity of tasks. With this consideration, we address the computation uncertainty by formulating a robust communication and computing resource allocation problem in ISCC systems. The primary goal of the system is to minimize total energy consumption while adhering to perception and delay constraints. This is achieved through the optimization of transmit beamforming, the offloading ratio, and computing resource allocation, effectively managing the trade-off between local execution and edge computing. To overcome this challenge, we employ a Markov decision process (MDP) in conjunction with the proximal policy optimization (PPO) algorithm, establishing an adaptive learning strategy. The proposed algorithm stands out for its rapid training speed, ensuring compliance with the latency requirements of perception and computation. Simulation results highlight its robustness and effectiveness within ISCC systems compared to baseline approaches.
(This article belongs to the Special Issue Integrated Sensing and Communication)
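
The local-versus-edge trade-off that the PPO agent learns to manage can be made concrete with a toy grid search over the offloading ratio under a delay budget. All constants are illustrative assumptions; the paper's MDP formulation additionally handles the uncertain task complexity:

```python
# Toy version of the trade-off the PPO agent optimizes: split a task between
# local execution and edge offloading, then pick the ratio minimizing total
# energy under a delay budget. All constants are illustrative assumptions.
import numpy as np

C = 1e9                            # task complexity (CPU cycles)
f_loc, f_edge = 1e9, 1e10          # local / edge CPU frequency (cycles/s)
kappa = 1e-27                      # local CPU energy coefficient (J per cycle/Hz^2)
rate, p_tx = 2e6, 0.5              # uplink rate (bit/s), transmit power (W)
bits = 2e6                         # task data size to offload (bits)
T_max = 0.8                        # delay budget (s)

best = None
for rho in np.linspace(0, 1, 101):             # rho = offloaded fraction
    t_loc = (1 - rho) * C / f_loc
    t_off = rho * bits / rate + rho * C / f_edge
    if max(t_loc, t_off) > T_max:              # local and edge run in parallel
        continue
    e = kappa * (1 - rho) * C * f_loc**2 + p_tx * rho * bits / rate
    if best is None or e < best[1]:
        best = (rho, e)
print(best)                                    # (offload ratio, energy in J)
```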