Search Results (159)

Search Parameters:
Keywords = minimum mean square error (MMSE)

27 pages, 1073 KB  
Article
An MMSE-Optimized Pre-Rake Receiver with a Comparative Analysis of Channel Estimation Methods for Multipath Channels
by Aoba Morimoto, Jaesang Cha, Incheol Jeong and Chang-Jun Ahn
Electronics 2026, 15(7), 1540; https://doi.org/10.3390/electronics15071540 - 7 Apr 2026
Abstract
In Time Division Duplex (TDD) Direct-Sequence Code Division Multiple Access (DS/CDMA) architectures, Pre-Rake filtering serves as a powerful transmitter-side strategy to alleviate receiver hardware constraints by leveraging channel reciprocity. Nevertheless, rapid channel fluctuations induced by high Doppler spreads critically undermine this reciprocity assumption. This failure is primarily driven by the unavoidable latency between uplink reception and downlink transmission, leading to severe performance deterioration. To address these challenges and enhance system robustness in modern high-speed scenarios, we propose an improved hybrid transceiver architecture. This scheme integrates multiplexed Pre-Rake processing with a Matched Filter-based Rake receiver and employs a Minimum Mean Square Error (MMSE) equalizer to suppress the severe Inter-Symbol Interference (ISI) and Multi-User Interference (MUI). Furthermore, we conduct a comparative analysis of channel estimation methods tailored for a 10 Mbps high-speed transmission environment. Our investigation reveals that while complex quadratic interpolation is often prioritized in low-data-rate studies, simple averaging is sufficient and even superior in high-speed communications. This is because the shortened slot duration allows simple averaging to effectively track channel variations while avoiding the noise overfitting associated with higher-order interpolation. The simulation results demonstrate that the proposed MMSE-optimized architecture achieves superior Bit Error Rate (BER) performance, providing a practical and computationally efficient solution for next-generation mobile networks. Full article
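To illustrate the MMSE criterion this abstract builds on, here is a minimal sketch of a finite-length MMSE (Wiener) equalizer for a known multipath impulse response. This is not the authors' Pre-Rake architecture; the window model, tap count, and real-valued unit-power symbols are simplifying assumptions.

```python
import numpy as np

def mmse_equalizer(h, snr_linear, n_taps=15):
    """Finite-length MMSE equalizer for a known real multipath response h,
    assuming unit-power symbols. Window model: y = H s + n, where H is the
    n_taps x (n_taps + L - 1) channel convolution matrix; the equalizer
    targets the symbol at a mid-window decision delay d."""
    h = np.asarray(h, dtype=float)
    L = len(h)
    H = np.zeros((n_taps, n_taps + L - 1))
    for i in range(n_taps):
        H[i, i:i + L] = h                 # row i holds the shifted channel taps
    d = n_taps // 2                       # decision delay
    # Wiener solution: w = (H H^T + (1/SNR) I)^{-1} H e_d
    A = H @ H.T + np.eye(n_taps) / snr_linear
    w = np.linalg.solve(A, H[:, d])
    return w, d
```

At high SNR the combined response `np.convolve(w, h)` approaches a unit impulse at delay `d`, i.e., the residual ISI is driven toward zero.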
(This article belongs to the Section Microwave and Wireless Communications)

18 pages, 2493 KB  
Article
Deep Learning-Based Receiver for Low-Complexity 6G Partial LIS Architectures
by Mário Marques da Silva, Héctor Orrillo and Rui Dinis
Appl. Sci. 2026, 16(7), 3429; https://doi.org/10.3390/app16073429 - 1 Apr 2026
Abstract
The sixth generation (6G) of wireless networks demands extreme energy efficiency and massive connectivity, positioning large intelligent surfaces (LIS) as a pivotal technology. However, the practical deployment of LIS is constrained by the overwhelming computational complexity and power consumption required to process thousands of antenna elements. To address these challenges, this article proposes a deep learning-based receiver architecture that integrates the spatial efficiency of Partial LIS with advanced non-linear detection. By activating only a subset of antenna panels closest to the user terminal (Partial LIS), the system significantly reduces hardware overhead and Radio Frequency (RF) power consumption. To compensate for the resulting performance loss, the multi-user interference (MUI) generated by the linear combining stage, and the increased MUI inherent in a reduced-aperture environment, a specialized Multilayer Perceptron (MLP) network is implemented. Unlike traditional Zero-Forcing (ZF) or Minimum Mean Squared Error (MMSE) receivers, which require energy-intensive matrix inversions for each frequency component, the proposed neural-network-enabled receiver achieves near-optimal performance using low-complexity combining followed by intelligent learning-based interference suppression. Simulation results demonstrate that the proposed hybrid architecture provides a scalable, “green” solution for 6G uplink scenarios. Notably, the deep learning approach effectively suppresses the performance loss of the reduced aperture, achieving a Bit Error Rate (BER) comparable to traditional linear benchmarks while dramatically reducing the computational and hardware footprint. Full article
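The panel-activation step described above amounts to a simple nearest-subset selection. A toy sketch (the geometry, names, and distance criterion are assumptions, not the paper's algorithm):

```python
import numpy as np

def select_panels(panel_positions, user_position, n_active):
    """Activate only the n_active LIS panels closest to the user terminal,
    so the receiver processes a small sub-aperture instead of the full LIS."""
    dists = np.linalg.norm(panel_positions - user_position, axis=1)
    return np.sort(np.argsort(dists)[:n_active])
```

With the active-panel indices in hand, the receiver would only form and combine the channel columns for that sub-aperture, which is where the hardware and RF power savings come from.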
(This article belongs to the Special Issue Applications of Wireless and Mobile Communications, 2nd Edition)

21 pages, 6478 KB  
Article
Experimental Investigation of Distributed Array Adaptive Beamforming for Interference Suppression in UAV Swarms
by Rio King, Gregory Huff, Trevor Bois and Bailey Campbell
Drones 2026, 10(4), 253; https://doi.org/10.3390/drones10040253 - 1 Apr 2026
Abstract
This paper investigates the use of adaptive beamforming algorithms for communication systems and sensing networks using motion-dynamic distributed random arrays. These distributed arrays include swarms of unmanned aerial vehicles (UAVs) and are formed by unconnected antennas mounted on independent mobile platforms. The paper further examines the robustness of adaptive beamforming algorithms subject to non-idealities intrinsic to distributed random arrays, such as positional error, hardware noise variations, and non-uniform elements. A simulation framework developed to evaluate various beamforming algorithms in the presence of non-idealities demonstrates that minimum variance distortionless response (MVDR) beamforming is sensitive to nominal positional errors, while minimum mean squared error (MMSE) beamforming maintains interference suppression regardless of positional error and is robust to non-uniform elements. Experiments confirm that MMSE beamforming demonstrates interference suppression in real-world channels with heterogeneous hardware. These results establish adaptive mean-squared-error-based beamforming as a robust solution for distributed random arrays. Full article
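The MMSE beamformer's robustness comes from the fact that it fits weights to received training data rather than to an assumed array geometry. A minimal sample-MMSE sketch for a randomly placed (e.g., UAV-mounted) array — the geometry, angles, and training setup are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def steering(positions, angle_deg, wavelength=1.0):
    """Narrowband steering vector for elements at arbitrary 2-D positions."""
    u = np.array([np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))])
    return np.exp(2j * np.pi * (positions @ u) / wavelength)

def mmse_weights(X, d):
    """Sample-MMSE beamformer: minimize sum_n |w^H x_n - d_n|^2 over
    training snapshots X (elements x snapshots) with known symbols d."""
    return np.linalg.solve(X @ X.conj().T, X @ d.conj())

# Random element positions emulate a distributed array with positional "error".
pos = rng.uniform(0.0, 5.0, size=(8, 2))
a_sig = steering(pos, 20.0)                    # desired user direction
a_int = steering(pos, 70.0)                    # interferer direction
d = rng.choice([-1.0, 1.0], 400) + 0j          # known training symbols
s_i = rng.choice([-1.0, 1.0], 400) + 0j        # interfering stream (3x amplitude)
noise = 0.05 * (rng.standard_normal((8, 400)) + 1j * rng.standard_normal((8, 400)))
X = np.outer(a_sig, d) + 3.0 * np.outer(a_int, s_i) + noise

w = mmse_weights(X, d)
```

Because the weights are estimated from the snapshots themselves, the interferer is nulled without any calibrated knowledge of element positions: `|w^H a_int|` comes out far below `|w^H a_sig|`.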
(This article belongs to the Section Drone Communications)

23 pages, 2175 KB  
Article
Robust Long Short-Term Memory-Enabled Beamforming for Cell-Free Massive MIMOs in 6G Networks
by Tadele A. Abose and Thomas O. Olwal
Electronics 2026, 15(7), 1397; https://doi.org/10.3390/electronics15071397 - 27 Mar 2026
Abstract
This paper presents a performance evaluation of a long short-term memory (LSTM)-based precoder for cell-free (CF) massive multiple-input multiple-output (MIMO) systems in 6G networks operating under hardware impairments and imperfect channel state information (CSI). It also compares the proposed method with traditional Kalman, minimum mean square error (MMSE), and zero forcing (ZF) precoders. Simulations conducted at 2.4 GHz show that the LSTM-based scheme offers improved spectral efficiency (SE) and energy efficiency (EE) while remaining computationally feasible. Specifically, the LSTM precoder achieves an average per-user SE of 1.74 bps/Hz, representing gains of about 1.15% over Kalman, 3.45% over MMSE, 4.6% over ZF, and 5.75% over MRT. Under severe hardware impairments, it provides a 2.94% improvement over Kalman and a 5.88% improvement over MMSE. The total SE reaches 17.4 bps/Hz, increasing the overall system capacity by approximately 2.87% over Kalman, 4.02% over MMSE, 6.32% over ZF, and 8.05% over MRT when the number of users (K) is 10. The LSTM-based precoder also achieves the highest peak EE, indicating that its learning-driven adaptability yields higher SE for comparable power usage. Despite a slight increase in power consumption, its inference time remains shorter than both MMSE and ZF, offering a favorable balance between performance and computational complexity. Overall, the results demonstrate that a learning-driven, impairment-aware precoding approach provides significant advantages in terms of robustness and scalability for next-generation 6G CF massive MIMO networks, particularly in non-ideal hardware environments. Full article

18 pages, 4228 KB  
Article
Design Space Exploration on Blind Equalization Algorithms: Numerical Representation Analysis for SoC-FPGA
by David Marquez-Viloria, L. J. Morantes-Guzman, Neil Guerrero-Gonzalez and Marin B. Marinov
Appl. Sci. 2026, 16(6), 2777; https://doi.org/10.3390/app16062777 - 13 Mar 2026
Abstract
Field-Programmable Gate Arrays (FPGAs) have become an important platform for accelerating real-time communication systems, and System-on-Chip (SoC) devices provide the flexibility to design and optimize architectures that support high data rates, different modulation formats, and channel equalization schemes. Selecting the appropriate architecture can be guided through Design Space Exploration (DSE) using high-level synthesis tools, which enables the identification of numerical representations that balance performance with reduced hardware resource consumption. Despite their relevance, recent developments in communication systems often overlook the impact of numerical precision in Digital Signal Processing algorithms, particularly the trade-offs between floating- and fixed-point arithmetic when targeting hardware implementations. In this work, two widely used blind equalization algorithms, the Constant Modulus Algorithm (CMA) and the Multi-Modulus Algorithm (MMA), were implemented on a low-cost Ultra96 SoC-FPGA to analyze the effect of a fixed-point representation. A multi-objective DSE methodology was applied to minimize hardware utilization while maintaining reliable transmission performance. Resource consumption, latency, and throughput were measured across different binary formats using the Minimum Mean Square Error (MMSE) criterion. Parallelization techniques were incorporated to improve throughput. The DSE generated comprehensive performance surfaces quantifying latency, MMSE convergence, and FPGA resource utilization (DSP48E/FF/LUT/BRAM) across fixed-point formats, identifying configurations that achieve an optimal throughput of 4 MS/s. Although this throughput is naturally lower than the Gigabit speeds required in backbone optical networks, the results demonstrate the effectiveness of numerical representation optimization in resource-constrained SoC-FPGA devices, offering a practical approach for real-time Edge and IoT implementations where cost and hardware limitations are critical. Full article
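The core quantity swept in such a DSE is the fixed-point word/fraction split. A minimal software model of the quantization axis (the 16-bit word and the saturation behavior are illustrative assumptions, not the paper's chosen formats):

```python
import numpy as np

def to_fixed(x, word_bits=16, frac_bits=12):
    """Round x to two's-complement fixed point with the given word and
    fractional widths, saturating at the representable range."""
    scale = 2.0 ** frac_bits
    q = np.round(np.asarray(x, dtype=float) * scale)
    lo, hi = -2 ** (word_bits - 1), 2 ** (word_bits - 1) - 1
    return np.clip(q, lo, hi) / scale
```

Sweeping `frac_bits` and measuring the MSE against a floating-point reference reproduces, in miniature, one axis of the performance surfaces the abstract describes: more fractional bits means lower quantization error but more hardware.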

23 pages, 527 KB  
Article
Time-Domain Oversampling-Enabled Multi-NS Reception for MoCDMA
by Weidong Gao, Yuanhui Wang and Jun Li
Symmetry 2026, 18(2), 380; https://doi.org/10.3390/sym18020380 - 20 Feb 2026
Abstract
In molecular communication via diffusion (MCvD) uplinks where multiple nano-sensors report concurrently to a fusion center (FC), the long channel memory and the near–far imbalance jointly create strong multiple access interference (MAI) coupled with residual inter-symbol/inter-chip effects. This paper studies an oversampling-enabled time-domain reception for an uplink molecular code-division multiple-access (MoCDMA) system employing bipolar molecular signalling. By exploiting intra-chip oversampling at the FC, three linear detectors following the principles of maximum ratio combining (MRC), zero-forcing (ZF), and minimum mean-square error (MMSE) are developed and further enhanced through a feedback-assisted interference subtraction (FAIS) scheme that combines single-tap ISI feedback equalization with near-to-far successive MAI subtraction. Owing to the complementary structure of bipolar molecular emissions, the signal-dependent counting noise corresponding to the two molecule types can be jointly modeled in a symmetric and information-independent manner to support unified linear detection and FAIS processing. Numerical results demonstrate that oversampling effectively improves detection reliability, while increasing the molecular emission budget alone is insufficient to mitigate near–far effects. Moreover, FAIS provides significant performance gains, particularly for far NSs. Full article
(This article belongs to the Section Computer)

23 pages, 3214 KB  
Article
Enhanced GNSS Navigation Using a Centered Error Entropy Extended Kalman Filter in Non-Gaussian Noise Environments
by Yi Chang, Dah-Jing Jwo and Bo-Yang Lee
Sensors 2026, 26(4), 1148; https://doi.org/10.3390/s26041148 - 10 Feb 2026
Abstract
Global Navigation Satellite Systems (GNSSs) observables, such as those of the Global Positioning System (GPS), are frequently affected by multipath effects that cause unpredictable signal interference at the receiver, posing significant challenges for accurate state estimation in complex environments with non-Gaussian noise or outliers. The traditional extended Kalman filter (EKF), based on the minimum mean square error (MMSE) criterion, assumes Gaussian noise distributions and exhibits degraded performance under non-Gaussian conditions. To overcome this limitation, the minimum error entropy (MEE) criterion was proposed to reduce random uncertainty in estimation error distributions; however, due to its translation invariance property, MEE may inadvertently increase bias when errors contain systematic offsets, leading to poor convergence. In contrast, the maximum correntropy criterion (MCC) concentrates the error probability density function (PDF) around zero, enabling effective entropy adjustment even in the presence of bias and achieving superior error convergence. This paper presents the centered error entropy (CEE) extended Kalman filter (CEE-EKF) that integrates the complementary merits of both MEE and MCC approaches to overcome their individual limitations. Experimental validation in complex nonlinear GPS environments with non-Gaussian noise demonstrates that the CEE-EKF significantly outperforms individual algorithms in noise suppression, particularly exhibiting enhanced robustness and accuracy when handling outliers. These results offer an effective approach to enhancing the reliability of GPS navigation in challenging real-world environments, and the algorithm can be readily extended to other GNSS applications. Full article
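The correntropy idea underlying the MCC component can be illustrated with a toy location estimate: a Gaussian kernel concentrates the error PDF around zero, so outliers receive exponentially small weight. This is only a one-dimensional sketch of the principle, not the paper's CEE-EKF; the kernel bandwidth and fixed-point iteration are assumptions.

```python
import numpy as np

def correntropy_mean(x, sigma=1.0, iters=30):
    """Fixed-point iteration maximizing the correntropy
    sum_i exp(-(x_i - m)^2 / (2 sigma^2)) over the location m.
    Outliers get exponentially small weight, unlike the MMSE mean."""
    m = np.median(x)                          # robust starting point
    for _ in range(iters):
        w = np.exp(-((x - m) ** 2) / (2.0 * sigma ** 2))
        m = np.sum(w * x) / np.sum(w)
    return m
```

On multipath-style data (a cluster of inliers plus large outliers), the correntropy estimate stays near the inlier cluster while the plain MMSE-style mean is dragged far away, which is exactly the non-Gaussian robustness the abstract is after.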

22 pages, 462 KB  
Article
A Secure Spatial Multiplexing Transmission Scheme in MIMO Amplify-and-Forward Wiretap Relaying Systems Using Deliberate Precoder Randomization
by Kyunbyoung Ko and Changick Song
Sensors 2026, 26(3), 860; https://doi.org/10.3390/s26030860 - 28 Jan 2026
Abstract
Physical-layer security offers low probability of interception (LPI) in wireless communication systems. While prior methods such as the directional beamforming and secrecy coding schemes require knowledge of the eavesdropper (Eve)’s channel, passive eavesdropping limits their practicality. Artificial additive noise and artificial fast fading (AFF) schemes address the issue by degrading detection ability of a potential Eve without knowing its channel information. In particular, AFF achieves LPI by effectively shortening the coherence time of Eve’s channel using a random precoder while keeping the legitimate receiver (Bob)’s channel deterministic. In this paper, we propose a novel AFF design for spatial multiplexing multi-input multi-output (MIMO) amplify-and-forward (AF) relay systems. First, we formulate an optimization problem to achieve minimum mean squared error (MMSE) of Bob’s signals while guaranteeing LPI conditions from Eve, which is generally non-convex. To tackle the non-convexity of the problem, we apply a convex set approximation technique and thereby derive a simple closed-form design. Finally, we evaluated the performance of both Bob and Eve via computer simulations to demonstrate the effectiveness of our proposed design. Full article
(This article belongs to the Special Issue Advanced MIMO Antenna Technologies for Intelligent Sensing Networks)

33 pages, 11440 KB  
Article
A Vision-Assisted Acoustic Channel Modeling Framework for Smartphone Indoor Localization
by Can Xue, Huixin Zhuge and Zhi Wang
Sensors 2026, 26(2), 717; https://doi.org/10.3390/s26020717 - 21 Jan 2026
Abstract
Conventional acoustic time-of-arrival (TOA) estimation in complex indoor environments is highly susceptible to multipath reflections and occlusions, resulting in unstable measurements and limited physical interpretability. This paper presents a smartphone-based indoor localization method built on vision-assisted acoustic channel modeling, and develops a fusion anchor integrating a pan–tilt–zoom (PTZ) camera and a near-ultrasonic signal transmitter to explicitly perceive indoor geometry, surface materials, and occlusion patterns. First, vision-derived priors are constructed on the anchor side based on line-of-sight reachability, orientation consistency, and directional risk, and are converted into soft anchor weights to suppress the impact of occlusion and pointing mismatch. Second, planar geometry and material cues reconstructed from camera images are used to generate probabilistic room impulse response (RIR) priors that cover the direct path and first-order reflections, where environmental uncertainty is mapped into path-dependent arrival-time variances and prior probabilities. Finally, under the RIR prior constraints, a path-wise posterior distribution is built from matched-filter outputs, and an adaptive fusion strategy is applied to switch between maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators, yielding debiased TOA measurements with calibratable variances for downstream localization filters. Experiments in representative complex indoor scenarios demonstrate mean localization errors of 0.096 m and 0.115 m in static and dynamic tests, respectively, indicating improved accuracy and robustness over conventional TOA estimation. Full article
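The MAP/MMSE switch described above operates on a discretized TOA posterior, where the two estimators can genuinely disagree. A minimal sketch (the bimodal posterior below, mimicking a direct path plus a first-order reflection, is a made-up example, not the paper's RIR prior):

```python
import numpy as np

def map_and_mmse(tau_grid, posterior):
    """Point estimates from a discretized TOA posterior:
    MAP takes the posterior peak, MMSE takes the posterior mean."""
    p = np.asarray(posterior, dtype=float)
    p = p / p.sum()                           # normalize to a valid PMF
    return tau_grid[np.argmax(p)], float(np.sum(tau_grid * p))
```

On a bimodal posterior the MAP estimate locks onto the dominant (direct-path) peak, while the MMSE estimate is pulled toward the reflection — which is why an adaptive switch between the two, as in the paper, can be preferable to using either alone.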

32 pages, 1500 KB  
Article
Communication-Efficient Asynchronous Fusion for Multi-Radar Systems via State and Covariance Projection
by Wenhui Xue, Peng Chen, Chunguo Li, Zhenxin Cao and Shuqin Zhang
Electronics 2026, 15(2), 458; https://doi.org/10.3390/electronics15020458 - 21 Jan 2026
Abstract
Multi-radar systems can significantly improve tracking robustness and accuracy, but practical deployments are challenged by asynchronous sensing timestamps across distributed platforms and by limited communication bandwidth. This paper proposes a communication-efficient asynchronous track fusion framework based on state and covariance projection. Each radar performs local Kalman filtering and transmits only a compact track message consisting of the posterior state estimate, the associated error covariance, and a timestamp. At the fusion center, a causal reference time is chosen as the latest received timestamp, and all tracks are projected to this common time using a hybrid constant-acceleration (CA)/constant-velocity (CV) motion model with appropriately discretized process noise, followed by information-form (inverse-covariance) fusion. Under standard linear-Gaussian assumptions, the fusion rule is minimum mean square error (MMSE)-optimal when the projected estimation errors are approximately independent. We also analyze the computational complexity and the communication payload of the proposed procedure. Monte Carlo simulations with five heterogeneous radars and random inter-radar time offsets up to 37.5 ms over 100 runs show that the proposed fusion reduces the steady-state range root mean square error (RMSE) by about 66% and the radial-velocity RMSE by about 31% relative to the average single-radar tracker, while maintaining statistical consistency as verified by the normalized estimation error squared (NEES). These results indicate that projection-based track fusion provides an effective accuracy–communication trade-off for asynchronous multi-radar tracking. Full article
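The information-form (inverse-covariance) fusion rule the abstract invokes has a compact closed form for independent estimates. A minimal sketch (time projection to the common reference instant is omitted here for brevity):

```python
import numpy as np

def information_fusion(states, covs):
    """Inverse-covariance fusion of independent track estimates:
    P_f^{-1} = sum_i P_i^{-1},  x_f = P_f * sum_i P_i^{-1} x_i."""
    infos = [np.linalg.inv(P) for P in covs]
    P_f = np.linalg.inv(np.sum(infos, axis=0))
    x_f = P_f @ np.sum([I @ x for I, x in zip(infos, states)], axis=0)
    return x_f, P_f
```

For two equally uncertain estimates the fused state is their average and the fused covariance is halved — the MMSE-optimal combination under the linear-Gaussian, independent-error assumptions stated in the abstract.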
(This article belongs to the Special Issue Challenges and Opportunities in the Internet of Vehicles)

25 pages, 4082 KB  
Article
Statistical CSI-Based Downlink Precoding for Multi-Beam LEO Satellite Communications
by Feng Zhu, Yunfei Wang, Ziyu Xiang and Xiqi Gao
Aerospace 2026, 13(1), 60; https://doi.org/10.3390/aerospace13010060 - 7 Jan 2026
Abstract
With the rapid development of low-Earth-orbit (LEO) satellite communications, multi-beam precoding has emerged as a key technology for improving spectrum efficiency. However, the long propagation delay and large Doppler frequency offset pose significant challenges to existing precoding techniques. To address this issue, this paper investigates downlink precoding design for multi-beam LEO satellite communications. First, the downlink channel and signal models are established. Then, we reveal that traditional zero-forcing (ZF), regularized zero-forcing (RZF), and minimum mean square error (MMSE) precoding schemes all require the satellite transmitter to acquire the instantaneous channel state information (iCSI) of all users, which is challenging to obtain in satellite communication systems. Subsequently, we propose a downlink precoding design based on statistical channel state information (sCSI) and derive closed-form solutions for statistical-ZF, statistical-RZF, and statistical-MMSE precoding. Furthermore, we propose that sCSI can be computed using the positions of the satellite and users, which reduces the system overhead and complexity of sCSI acquisition. Monte Carlo simulations under the 3GPP non-terrestrial network (NTN) channel model are employed to verify the performance of the proposed method. The simulation results show that the proposed method achieves sum-rate performance comparable to that of iCSI-based schemes and the optimal transmission performance based on sum-rate maximization. In addition, the proposed method significantly reduces the computational complexity of the satellite payload and the system feedback overhead. Full article
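The RZF member of the precoder family discussed above has a well-known closed form. The sketch below uses a toy i.i.d. channel rather than the paper's NTN/sCSI model, and the per-user power normalization is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def rzf_precoder(H, alpha):
    """Regularized zero-forcing: W = H^H (H H^H + alpha I)^{-1},
    with columns normalized to unit transmit power per user."""
    K = H.shape[0]
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    return W / np.linalg.norm(W, axis=0, keepdims=True)

# K = 4 users, N = 16 transmit antennas/beams (toy channel, not the NTN model).
H = (rng.standard_normal((4, 16)) + 1j * rng.standard_normal((4, 16))) / np.sqrt(2)
W = rzf_precoder(H, alpha=1e-3)
G = H @ W    # effective user-coupling matrix; near-diagonal means low MUI
```

For small `alpha`, RZF approaches ZF (near-diagonal `G`); increasing `alpha` trades residual inter-user interference for noise robustness, and the MMSE precoder corresponds to choosing `alpha` from the noise level. The statistical variants in the paper replace the instantaneous `H` with quantities derived from sCSI.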
(This article belongs to the Section Astronautics & Space Science)

28 pages, 2832 KB  
Article
Unsupervised Neural Beamforming for Uplink MU-SIMO in 3GPP-Compliant Wireless Channels
by Cemil Vahapoglu, Timothy J. O’Shea, Wan Liu, Tamoghna Roy and Sennur Ulukus
Sensors 2026, 26(2), 366; https://doi.org/10.3390/s26020366 - 6 Jan 2026
Abstract
Beamforming is central to the physical layer of wireless communication systems, especially multi-antenna systems such as multiple input multiple output (MIMO) and massive MIMO, since it improves spectral efficiency and reduces interference. Traditional linear beamforming methods such as zero-forcing beamforming (ZFBF) and minimum mean square error (MMSE) beamforming provide closed-form solutions. Yet, their performance drops when they face non-ideal conditions such as imperfect channel state information (CSI), dynamic propagation environments, or high-dimensional system configurations, primarily due to static assumptions and computational limitations. These limitations have led to the rise of deep learning-based beamforming, where data-driven models derive beamforming solutions directly from CSI. By leveraging the representational capabilities of cutting-edge deep learning architectures, along with the increasing availability of data and computational resources, deep learning presents an adaptive and potentially scalable alternative to traditional methodologies. In this work, we unify and systematically compare our two unsupervised learning architectures for uplink receive beamforming: a simple neural network beamforming (NNBF) model, composed of convolutional and fully connected layers, and a transformer-based NNBF model that integrates grouped convolutions for feature extraction and transformer blocks to capture long-range channel dependencies. They are evaluated in a common multi-user single input multiple output (MU-SIMO) system model to maximize sum-rate across single-antenna user equipments (UEs) under 3GPP-compliant channel models, namely TDL-A and UMa. Furthermore, we present a FLOPs-based asymptotic computational complexity analysis for the NNBF architectures alongside the baseline methods, namely ZFBF and MMSE beamforming, explicitly characterizing inference-time scaling behavior. Experiments for the simple NNBF are performed under simplified assumptions such as stationary UEs and perfect CSI across varying antenna configurations in the TDL-A channel. The transformer-based NNBF, in contrast, is evaluated in more realistic conditions, including urban macro environments with imperfect CSI and diverse UE mobilities, coding rates, and modulation schemes. Results show that the transformer-based NNBF achieves superior performance under realistic conditions at the cost of increased computational complexity, while the simple NNBF delivers comparable or better performance than the baseline methods with significantly lower complexity under simplified assumptions. Full article
(This article belongs to the Special Issue Sensor Networks and Communication with AI)

20 pages, 2676 KB  
Article
Memory-Efficient Iterative Signal Detection for 6G Massive MIMO via Hybrid Quasi-Newton and Deep Q-Networks
by Adeb Salh, Mohammed A. Alhartomi, Ghasan Ali Hussain, Fares S. Almehmadi, Saeed Alzahrani, Ruwaybih Alsulami and Abdulrahman Amer
Electronics 2025, 14(24), 4832; https://doi.org/10.3390/electronics14244832 - 8 Dec 2025
Abstract
The advent of Sixth Generation (6G) wireless communication systems demands unprecedented data rates, ultra-low latency, and massive connectivity to support emerging applications such as extended reality, digital twins, and ubiquitous intelligent services. These stringent requirements call for the use of massive Multiple-Input Multiple-Output (m-MIMO) systems with hundreds or even thousands of antennas, which introduce substantial challenges for signal detection algorithms. Conventional linear detectors, especially linear Minimum Mean Square Error (MMSE) detectors, face prohibitive computational complexity due to high-dimensional matrix inversions, and their performance remains inherently restricted by the limitations of linear processing. This work addresses these limitations with an Iterative Signal Detection (ISD) algorithm that combines a Deep Q-Network (DQN) with Quasi-Newton optimization. The method incorporates Broyden-Net, a Quasi-Newton scheme that trains faster and with a smaller memory footprint than comparable models on spatially correlated channels, and uses the DQN to improve m-MIMO detection. The proposed techniques meet the computational-efficiency requirements of realistic 6G systems and outperform linear detectors. Simulation results show that the DQN-enhanced Quasi-Newton algorithm surpasses traditional algorithms: by combining reward design, limited-memory updates, and adaptive interference mitigation, it shortens convergence time by 60% and increases robustness to correlated fading. Full article
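The baseline problem the abstract targets — avoiding the explicit matrix inversion of linear MMSE detection — can be illustrated with a plain conjugate-gradient solver. This is only the generic iterative-detection idea, not the authors' DQN/Quasi-Newton scheme; the array sizes and noise level below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def mmse_detect_cg(H, y, noise_var, iters=50, tol=1e-10):
    """Linear MMSE detection x = (H^H H + noise_var I)^{-1} H^H y,
    solved by conjugate gradient instead of an explicit inversion
    (attractive when the Gram matrix is large, as in m-MIMO)."""
    A = H.conj().T @ H + noise_var * np.eye(H.shape[1])
    b = H.conj().T @ y
    x = np.zeros_like(b)
    r = b.copy()                              # residual b - A x
    p = r.copy()                              # search direction
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = A @ p
        step = rs / np.vdot(p, Ap).real       # A is Hermitian positive definite
        x = x + step * p
        r = r - step * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Each iteration costs only matrix-vector products, and for a well-conditioned system CG matches the direct-solve answer after a handful of iterations — the starting point that learned iterative detectors then try to accelerate further.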
(This article belongs to the Special Issue Advances in MIMO Communication)

23 pages, 1260 KB  
Article
On Deep Learning Hybrid Architectures for MIMO-OFDM Channel Estimation
by Inês Almeida, João Guerreiro and Rui Dinis
Electronics 2025, 14(23), 4692; https://doi.org/10.3390/electronics14234692 - 28 Nov 2025
Cited by 1
Abstract
Traditional estimation methods face challenges in adverse conditions in systems such as Multiple Input Multiple Output (MIMO) with Orthogonal Frequency Division Multiplexing (OFDM). To overcome those challenges, Deep Learning (DL) approaches have been proposed as an interesting alternative, thanks to their ability to capture channel features at modest complexity. This paper presents a hybrid approach that combines DL with traditional estimation methods such as Least Squares (LS) and Minimum Mean Square Error (MMSE), which we designate as DL-Enhanced. Our main innovation is a phase-preserving mechanism that maintains critical phase information frequently degraded in purely data-driven approaches. We evaluate the proposed technique in MIMO-OFDM systems with 3GPP Clustered Delay Line Model C (CDL-C) channels. Simulation results demonstrate that our method outperforms conventional techniques at high SNR levels, thanks to neural network-based feature extraction and adaptive processing. Full article
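The LS and (L)MMSE estimators that the hybrid method builds on differ in whether channel correlation is exploited. A minimal real-valued sketch — the exponential correlation model, pilot normalization, and parameter values are assumptions for illustration, not the paper's CDL-C setup:

```python
import numpy as np

rng = np.random.default_rng(3)

def lmmse_from_ls(h_ls, R_h, noise_var):
    """LMMSE channel estimate as a correlation-aware smoothing of the LS
    estimate: h_lmmse = R_h (R_h + noise_var I)^{-1} h_ls (unit-power pilots)."""
    N = R_h.shape[0]
    W = R_h @ np.linalg.inv(R_h + noise_var * np.eye(N))
    return W @ h_ls

# Exponentially correlated channel across N subcarriers (assumed model).
N, rho, noise_var = 32, 0.95, 0.5
R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
L = np.linalg.cholesky(R)

ls_err = lmmse_err = 0.0
for _ in range(300):
    h = L @ rng.standard_normal(N)                            # true channel, Cov = R
    h_ls = h + np.sqrt(noise_var) * rng.standard_normal(N)    # LS = noisy observation
    lmmse_err += np.mean((lmmse_from_ls(h_ls, R, noise_var) - h) ** 2)
    ls_err += np.mean((h_ls - h) ** 2)
```

Because LMMSE shrinks the noisy LS estimate along the channel's correlation structure, its MSE is markedly lower at moderate SNR; DL-enhanced schemes aim at this gain without requiring `R_h` and `noise_var` explicitly.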

11 pages, 3760 KB  
Article
Enhanced Optical Wireless Communications via Deep Neural Network Assisted Pre-Equalization for Faster-than-Nyquist Transmission
by Xindong Yue, Xingyu Zhang, Zhaoheng Wu, Yue Zhang, Huiqin Wang and Minghua Cao
Photonics 2025, 12(11), 1112; https://doi.org/10.3390/photonics12111112 - 11 Nov 2025
Abstract
Faster-than-Nyquist (FTN) technology is widely used in optical wireless communication (OWC) systems to improve data rates and spectrum efficiency. However, it introduces inter-symbol interference (ISI), which can affect communication reliability. To address this issue, we propose a pre-equalization algorithm based on a deep neural network (DNN). The performance analysis primarily focuses on the bit-error-rate (BER) under a Gamma-Gamma atmospheric turbulence channel with varying acceleration factors. Simulation results show that our scheme effectively reduces the degradation in BER caused by ISI. Additionally, we observe an inverse relationship between the BER performance and the atmospheric refractive index constants as well as transmission distance, while a direct proportionality exists with respect to the filter roll-off factor and laser wavelength. Furthermore, a comparison with conventional minimum mean square error (MMSE) and zero-forcing (ZF) algorithms highlights the superior performance of our proposal. Full article
