Search Results (261)

Search Parameters:
Keywords = receiver gain error

15 pages, 629 KB  
Article
Tiny Neural Receiver: Enabling On-Device Learning for Scalable and Adaptive 6G Devices
by Iñigo Bilbao, Eneko Iradier, Jon Montalban, Marta Fernández, Iñaki Eizmendi and Pablo Angueira
AI 2026, 7(4), 144; https://doi.org/10.3390/ai7040144 - 17 Apr 2026
Abstract
The evolution toward 6G communications requires integrating Tiny Machine Learning (TinyML) principles to enable intelligent, energy-efficient, and adaptable signal processing at the network edge. However, current receiver architectures face a fundamental trade-off: classical model-driven designs, while naturally efficient due to their basis in communication theory, lack the flexibility to adapt to varying channel conditions. Meanwhile, fully data-driven deep-learning-based approaches break the stringent resource constraints of TinyML. This paper introduces the tiny neural receiver (TNR), a pioneering architecture that bridges these paradigms by integrating model-based signal processing with lightweight neural optimization to overcome this challenge. The TNR’s primary contribution is its unique hybrid design, which combines the efficiency and interpretability of traditional theory-based receivers with the ability to adapt to different contexts using trainable neural components. This integration occurs within resource budgets that align with TinyML specifications. Experimental results show that the TNR achieves a 5 dB SNR reduction at a target block error rate of 10⁻⁴. The reported 5 dB SNR gain is a direct result of our resource-aware design framework, which selectively applies lightweight neural optimization to only the most impactful receiver blocks (channel estimation and decoding) to maximize gain without exceeding TinyML complexity limits. This achievement is further supported by an end-to-end training protocol that uses 15,000 iterations of over-the-air data to fine-tune these parameters for the specific static 3.5 GHz propagation channel and OFDM configuration evaluated. Furthermore, the TNR’s modular design enables flexible deployment across a range of 6G scenarios, from mobile broadband to mission-critical IoT. This establishes the TNR as a promising framework for AI-native 6G receivers. Full article

30 pages, 1086 KB  
Article
Complex-Valued Orthogonal Unitary Superposition Encoding for Robust Three-Qubit Quantum-Error-Correction-Based Image Transmission
by Udara Jayasinghe and Anil Fernando
Algorithms 2026, 19(4), 304; https://doi.org/10.3390/a19040304 - 13 Apr 2026
Abstract
Efficient and reliable transmission of compressed images over noisy channels remains a significant challenge due to the high sensitivity to noise. Quantum communication offers a promising solution by encoding classical information into quantum states; however, these states are still susceptible to noise and quantum decoherence. To address these limitations, we propose a complex-valued orthogonal unitary superposition (COUS) encoding integrated with a three-qubit quantum error correction (QEC) framework for robust and low-complexity quantum image transmission. The COUS encoding preserves both amplitude and phase information, enhancing reconstruction fidelity while maintaining practical scalability. In the proposed system, images are first compressed using either the joint photographic experts group (JPEG) standard or the high-efficiency image file (HEIF) standard and encoded into quantum states. Quantum channel coding is then applied to protect against quantum noise, followed by COUS encoding prior to transmission. At the receiver, the transmitted data undergoes COUS decoding, quantum error correction, quantum decoding, and source decoding to reconstruct the images. Performance improvements are observed across peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and universal quality index (UQI) metrics. Simulation results demonstrate that the proposed approach outperforms conventional Hadamard encoding-based three-qubit QEC schemes, achieving maximum channel signal-to-noise ratio (SNR) gains of up to 6 dB, and surpasses bandwidth-equivalent classical communication systems employing polar codes, achieving channel SNR gains of up to 12 dB. These results highlight the potential of the proposed method as a practical solution for high-fidelity quantum image communication, overcoming the limitations of existing approaches. Full article
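The three-qubit QEC layer referenced above builds on the repetition (bit-flip) code, whose correction logic can be illustrated classically: encode each logical bit three times and recover it by majority vote, which corrects any single flip. This is a minimal classical sketch of the redundancy principle, not the COUS encoder or a quantum simulation.

```python
def encode_bit(b):
    """Three-bit repetition (bit-flip) code: replicate the logical bit."""
    return [b, b, b]

def apply_bit_flip(codeword, i):
    """Model a single bit-flip error on position i."""
    flipped = codeword.copy()
    flipped[i] ^= 1
    return flipped

def decode_majority(codeword):
    """Majority vote corrects any single bit-flip."""
    return int(sum(codeword) >= 2)

# Every single-position error is corrected for both logical values:
for bit in (0, 1):
    for i in range(3):
        assert decode_majority(apply_bit_flip(encode_bit(bit), i)) == bit
```

The quantum three-qubit code applies the same redundancy idea with syndrome measurements instead of directly reading the qubits, but the majority-vote correction logic is the same.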

14 pages, 4605 KB  
Article
A K-Band Four-Channel Beamformer with Temperature Compensation Based on 65 nm CMOS Process
by Cetian Wang, Yanning Liu, Xuejie Liao, Fan Zhang, Chun Deng, Ying Liu, Wenxu Sun, He Guan and Deyun Zhou
Micromachines 2026, 17(4), 462; https://doi.org/10.3390/mi17040462 - 10 Apr 2026
Abstract
This paper presents a K-band four-channel phased array beamformer with temperature compensation in 65 nm CMOS for 5G and satellite communications. The beamformer includes a four-way power divider/combiner, four RF channels, and digital control circuits. Each RF channel comprises a receive chain, a transmit chain, and a pair of transmit/receive (TX/RX) single-pole double-throw (SPDT) switches. The receive chain consists of a low-noise amplifier (LNA), a six-bit reflective-type phase shifter (RTPS), a drive amplifier (DA), two temperature-compensation attenuators (TCAs), and a six-bit attenuator (ATT); the transmit chain integrates a power amplifier (PA), two TCAs, a six-bit RTPS, a DA, and a six-bit ATT. Measurements show the chip exhibits 0–4.5 dB gain, noise figure (NF) < 7.8 dB, root mean square (RMS) phase error < 3.5°, and RMS gain error < 0.4 dB in receive mode over 19–23 GHz. In transmit mode over 21–23 GHz, it provides a 6–10 dB gain range, RMS phase error < 3.4°, RMS gain error < 0.25 dB, and output power at the 1 dB compression point (OP1dB) > 6.5 dBm. In addition, the receive and transmit gain variations remain within 0.8 dB and 0.4 dB, respectively, as temperature ranges from −55 °C to 85 °C. With a compact footprint of 3.5 × 4.8 mm², the beamformer consumes 110 mW (receive) and 190 mW (transmit) of DC power per channel. Full article
(This article belongs to the Special Issue Recent Advancements in Microwave and Optoelectronics Devices)
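The RMS gain error quoted as a figure of merit for this beamformer is the root-mean-square deviation of the measured gain states from their ideal values. A minimal sketch (the attenuator gain values below are illustrative, not the paper's measurements):

```python
import math

def rms_error(measured, ideal):
    """Root-mean-square deviation between measured and ideal values (e.g. dB gains)."""
    return math.sqrt(sum((m - i) ** 2 for m, i in zip(measured, ideal)) / len(measured))

# Hypothetical 2-bit attenuator: ideal 1 dB steps vs. measured gain states
ideal = [0.0, -1.0, -2.0, -3.0]
measured = [0.0, -1.2, -1.9, -3.1]
print(round(rms_error(measured, ideal), 3))  # → 0.122
```

RMS phase error is computed the same way over the phase-shifter states, with degrees in place of dB.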

24 pages, 2013 KB  
Article
Capacity-Enhanced Li-Fi Transmission Using Autoencoder-Based Latent Representation: Performance Analysis Under Practical Optical Links
by Serin Kim, Yong-Yuk Won and Jiwon Park
Photonics 2026, 13(4), 356; https://doi.org/10.3390/photonics13040356 - 8 Apr 2026
Abstract
Visible light communication (VLC)-based Li-Fi systems suffer from limitations in transmission capacity expansion due to the restricted modulation bandwidth of LEDs. In this study, a latent representation-based NRZ-OOK Li-Fi transmission framework that exploits the statistical feature distribution of the latent space is proposed to improve transmission efficiency without expanding the physical bandwidth. An autoencoder is employed to transform input images into low-dimensional latent vectors, which are then quantized and modulated for transmission. At the receiver, hard decision and inverse quantization are performed, and the image is reconstructed through a trained decoder by leveraging the distribution characteristics of the latent representation. The effective transmission capacity gain Gcap is defined to quantify the amount of representable information relative to the original data under the same physical link resources according to the latent dimension, achieving up to a 49-fold data representation efficiency. The experimental results over practical optical links (0.5–1.5 m) showed that, in short-range conditions, larger latent dimensions maintained higher reconstruction PSNR, whereas under channel degradation conditions, smaller latent dimensions exhibited higher robustness, demonstrating a performance inversion phenomenon. Furthermore, it was confirmed that the dominant factor governing reconstruction performance shifts from the representational capability of the data to error accumulation characteristics depending on the channel condition. These results suggest that the latent representation-based transmission framework is an effective Li-Fi strategy that can simultaneously consider transmission efficiency and channel robustness through information representation optimization in bandwidth-limited environments. Full article
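The effective capacity gain Gcap defined above compares the bits needed to represent the original image with the bits actually transmitted for its quantized latent vector over the same physical link. A sketch under assumed dimensions (the 28 × 28 8-bit image and 16-dimensional, 8-bit-quantized latent are hypothetical choices, not the paper's configuration; with these numbers the ratio happens to equal 49):

```python
def capacity_gain(height, width, bits_per_pixel, latent_dim, bits_per_latent):
    """Effective capacity gain: original payload bits / transmitted latent bits."""
    original_bits = height * width * bits_per_pixel
    latent_bits = latent_dim * bits_per_latent
    return original_bits / latent_bits

# Hypothetical 28x28 8-bit image sent as a 16-dim latent with 8-bit quantization
print(capacity_gain(28, 28, 8, 16, 8))  # → 49.0
```

Shrinking the latent dimension raises Gcap but, as the abstract notes, also changes robustness: smaller latents accumulate fewer channel-induced errors but represent less detail.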

30 pages, 1323 KB  
Article
Circular Polarization-Based Quantum Encoding for Image Transmission over Error-Prone Channels
by Udara Jayasinghe and Anil Fernando
Signals 2026, 7(2), 37; https://doi.org/10.3390/signals7020037 - 8 Apr 2026
Abstract
Quantum image transmission over noisy communication channels remains a challenge due to the fragility of quantum states and their susceptibility to channel impairments. Existing quantum encoding schemes often exhibit limited noise resilience, while advanced approaches introduce computational and implementation complexity. To address these limitations, this paper proposes a circular polarization-based quantum encoding framework for image transmission over error-prone channels. In the proposed approach, source images are compressed and source-encoded using standard image coding formats, including the joint photographic experts group (JPEG) standard and the high-efficiency image file format (HEIF), and converted into classical bitstreams. The resulting bitstreams are protected using channel coding and mapped onto quantum states via circular polarization representations, where left- and right-hand circularly polarized states encode binary information. The encoded quantum states are transmitted over noisy quantum channels to model channel impairments. At the receiver, appropriate quantum decoding and channel decoding operations are applied to recover the classical bitstream, followed by source decoding to reconstruct the image. The performance of the proposed framework is evaluated using image quality metrics, including peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and universal quality index (UQI). Simulation results demonstrate that the proposed circular polarization-based encoding scheme outperforms existing quantum image encoding techniques, achieving channel SNR gains of 4 dB over state-of-the-art Hadamard-based encoding and 3 dB over frequency-domain quantum encoding methods under severe noise conditions. These results indicate that circular polarization-based quantum encoding provides improved noise robustness and reconstruction fidelity for practical quantum image transmission systems. Full article
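PSNR, one of the quality metrics used in this evaluation, is computed from the mean squared error between original and reconstructed pixels: PSNR = 10·log₁₀(MAX²/MSE). A minimal sketch with illustrative pixel values:

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * math.log10(max_val ** 2 / mse)

orig = [52, 55, 61, 59]
recon = [50, 55, 60, 60]
print(round(psnr(orig, recon), 2))  # → 46.37
```

A channel SNR gain of 4 dB, as reported, means the proposed encoding reaches the same reconstruction quality at 4 dB lower channel SNR than the Hadamard baseline.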

19 pages, 4570 KB  
Article
Adaptive Deletion of Gaussian Ellipsoids in 3D Gaussian Splatting
by Fei Zhang, Yinghui Wang, Bo Yi and Jiaxin Ma
Mathematics 2026, 14(7), 1197; https://doi.org/10.3390/math14071197 - 3 Apr 2026
Abstract
As a leading method for Novel View Synthesis (NVS), 3D Gaussian Splatting (3DGS) faces limitations. Fixed thresholds governing Gaussian scale and opacity lead to over-reconstruction or under-reconstruction, while the linear penalty used for handling outliers during optimization tends to introduce artifacts. Therefore, we propose Adaptive 3DGS featuring a dynamic deletion mechanism. Specifically, our method calculates coverage for each Gaussian based on its scale during removal. Gaussians with high coverage face stricter scale thresholds to reduce over-reconstruction, while those with lower coverage receive lenient thresholds to preserve details. Simultaneously, transparency-based contribution assessment is applied. Gaussians with low contribution meet stricter transparency thresholds to combat over-reconstruction, while high-contribution ones get lenient thresholds to mitigate under-reconstruction. During optimization, introducing Huber loss promotes quadratic growth for small errors, reducing smoothing to alleviate artifacts and better preserve details. Evaluation on standard datasets shows our method improves peak signal-to-noise ratio (PSNR) by 0.3 dB over 3DGS and 0.5 dB over MS-3DGS at 4× resolution, and it achieves a 0.1 dB gain over Mip-Splatting, confirming its effectiveness and robustness. Full article
(This article belongs to the Topic Intelligent Image Processing Technology)
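The Huber loss introduced during optimization grows quadratically for small residuals and only linearly beyond a threshold δ, which is what softens the penalty on outliers relative to a purely quadratic loss. The standard formulation (δ = 1 here is an illustrative default, not necessarily the paper's setting):

```python
def huber(residual, delta=1.0):
    """Huber loss: quadratic for |r| <= delta, linear (slope delta) beyond."""
    r = abs(residual)
    if r <= delta:
        return 0.5 * r * r
    return delta * (r - 0.5 * delta)

assert huber(0.5) == 0.125   # small error: quadratic region
assert huber(3.0) == 2.5     # large error: linear, far below the quadratic 4.5
```

The two branches meet with matching value and slope at |r| = δ, so the loss stays smooth enough for gradient-based optimization.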

23 pages, 1056 KB  
Article
Deep Learning-Driven Atomic Norm Optimization for Accurate Downlink Channel Estimation in FDD Systems
by Ke Xu, Sining Li, Changwei Huang, Dan Wu, Changning Wei, Dongjun Zhang, Richu Jin, Huilin Ren, Zhuoqiao Ji, Xinbo Chen and Weiqiang Wu
Electronics 2026, 15(7), 1461; https://doi.org/10.3390/electronics15071461 - 1 Apr 2026
Abstract
In this paper, we propose a downlink (DL) channel estimation scheme for frequency-division duplex (FDD) multi-antenna orthogonal frequency-division multiplexing (OFDM) systems, leveraging atomic norm minimization (ANM) and deep neural networks (DNN). Unlike time-division duplex (TDD) systems, where uplink (UL) and DL channels are reciprocal, FDD systems do not share this reciprocity, leading to increased channel training overhead. However, both theoretical analyses and empirical evidence reveal that key channel characteristics—such as angles of arrival and departure, path delays, and the number of propagation paths—exhibit partial reciprocity between UL and DL. Building on this insight, we design a DL channel estimation scheme that exploits frequency-independent UL parameters along with estimated DL channel gains. Our method integrates ANM with DNN to enhance estimation accuracy and efficiency. Specifically, ANM formulates the estimation problem while avoiding the off-grid errors inherent in traditional grid-based methods. To further mitigate performance degradation in clustered-path channels and reduce computational complexity, we introduce a DNN-based architecture that predicts channel parameters. The DNN captures hidden relationships between received pilot signals and frequency-independent channel parameters, enabling accurate estimation with linear time complexity. During training, ANM assists in serving users, ensuring reliable performance. Once the DNN is fully trained, it takes over to balance quality of service (QoS) and latency, providing an efficient and accurate solution for DL channel estimation in FDD-OFDM systems. Full article
(This article belongs to the Section Circuit and Signal Processing)

8 pages, 1159 KB  
Proceeding Paper
Integration of Deep Learning Methods into the Design of Microwave Transceiver Components for a 5G Mid-Band System
by Pedro Escudero-Villa, Santiago Huebla-Huilca and Jenny Paredes-Fierro
Eng. Proc. 2026, 124(1), 95; https://doi.org/10.3390/engproc2026124095 - 30 Mar 2026
Abstract
This study evaluates the application of deep learning techniques to the design of a microwave transmitter–receiver system operating in the 5G mid-band. The proposed architecture consists of four stages—signal generation, amplification, mixing, and filtering—each initially designed using conventional microwave methods and subsequently integrated into a complete transceiver. Simulation data were generated and component-specific convolutional neural networks (CNNs) were implemented in Python using TensorFlow/Keras. Across all models, an average error reduction exceeding 90% was achieved, with most networks converging after the third training cycle. System-level integration shows that the baseline design achieved a transmitted power of −32.637 dBm and a gain of 1.116 dB, while the deep learning-based design yielded −33.912 dBm and 0.738 dB. Additional analysis of S-parameters confirms acceptable impedance matching and a frequency response of around 3.5 GHz. These results illustrate that deep learning provides an effective complementary methodology for multi-component microwave system modeling and optimization in 5G applications. Full article
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)

24 pages, 3376 KB  
Article
EMDiC: Physics-Informed Conditional Diffusion Denoising for Frequency-Domain Electromagnetic Signals
by Zhenlin Du, Miaomiao Gao, Zhijie Qu and Xiaojuan Zhang
Appl. Sci. 2026, 16(7), 3249; https://doi.org/10.3390/app16073249 - 27 Mar 2026
Abstract
Frequency-domain electromagnetic (FDEM) measurements for shallow subsurface exploration are frequently corrupted by noise, which masks weak secondary-field responses and degrades interpretation. We propose an electromagnetic diffusion CNN (EMDiC) for 1D multi-frequency FDEM denoising, where denoising is formulated as conditional diffusion-based generation. EMDiC combines an analytic frequency–spatial encoder, a Feature-wise Linear Modulation (FiLM)-conditioned convolutional hourglass backbone, and a physics-informed composite loss built on velocity loss to improve waveform reconstruction under severe noise. A reproducible synthetic dataset is constructed through layered-earth forward modeling with concentric Transmitter–Receiver (TX–RX) geometry, multiple target categories, and mixed noise waveforms. On synthetic benchmarks covering multiple noise levels and material types, EMDiC achieves the best overall performance in Root Mean Square Error (RMSE), Signal-to-Noise Ratio (SNR), and Normalized cross-correlation (NCC) among 1D U-Net, diffusion-based variants, and representative neural baselines, with the clearest gains under medium-to-strong noise and for targets with pronounced induction responses. Ablation experiments verify the complementary contributions of electromagnetic positional encoding (EMPE), FiLM conditioning, and the composite loss. Field data validation with a self-developed GEM-3 system further shows that EMDiC improves cross-frequency coherence and suppresses oscillations while preserving the main response characteristics. Full article

37 pages, 1745 KB  
Article
Boundary-Aware Contrastive Learning for Log Anomaly Detection
by Fouad Ailabouni, Jesús-Ángel Román-Gallego, María-Luisa Pérez-Delgado and Laura Grande Pérez
Appl. Sci. 2026, 16(7), 3208; https://doi.org/10.3390/app16073208 - 26 Mar 2026
Abstract
Log anomaly detection in modern distributed systems is challenging. Anomalous behaviors are rare. Manual labeling is expensive. Session boundaries are often set by fixed heuristics before model training. This fixed-boundary assumption is problematic because segmentation errors propagate into representation learning and cannot be corrected during optimization. To address this, this paper proposes BASN (Boundary-Aware Sessionization Network), a boundary-aware contrastive learning framework that jointly learns session boundaries and anomaly representations using a differentiable soft-reset mechanism. BASN does not treat sessionization as a separate step. Instead, it predicts boundary probabilities from event semantics and temporal gaps, then modulates end-to-end session-state updates. The session representations are optimized with self-supervised contrastive learning, enabling effective zero-shot anomaly detection and few-shot adaptation. Experiments on four benchmark datasets (BGL, HDFS, OpenStack, SSH) show strong zero-shot performance (area under the receiver operating characteristic curve, AUROC 0.935–0.975) and boundary alignment with expert-validated proxy segmentation (boundary F1 0.825–0.877). Comparative gains over baselines are reported in the article after bibliography correction, baseline verification, and expanded statistical analysis. BASN is also computationally efficient, requiring less than 10 ms per session on a Graphics Processing Unit (GPU) and less than 45 ms on a Central Processing Unit (CPU). This is compatible with real-time inference needs in the evaluated settings. However, cross-system transfer AUROC (0.735–0.812) remains below in-domain performance. Domain-specific adaptation is still needed for deployment in environments that differ greatly from the training domain. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
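The differentiable soft-reset described above can be pictured as a convex blend between carrying the session state forward and resetting it to a fresh state, gated by the predicted boundary probability. The scalar sketch below is illustrative only; the names and scalar state are assumptions, not BASN's actual implementation:

```python
def soft_reset(state, update, boundary_prob, init_state=0.0):
    """Blend the updated session state with a fresh state, weighted by the
    predicted boundary probability (differentiable w.r.t. boundary_prob)."""
    return (1.0 - boundary_prob) * (state + update) + boundary_prob * init_state

s = 2.0
assert soft_reset(s, 1.0, 0.0) == 3.0   # no boundary: state accumulates
assert soft_reset(s, 1.0, 1.0) == 0.0   # certain boundary: hard reset
assert soft_reset(s, 1.0, 0.5) == 1.5   # soft boundary: partial reset
```

Because the gate is a probability rather than a hard decision, gradients flow through it, letting segmentation errors be corrected during training instead of being fixed by a pre-processing heuristic.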

15 pages, 453 KB  
Article
Healthcare Providers’ Perspectives on Generative Artificial Intelligence (GenAI) Adoption, Adaptation, Assimilation, and Use in the United States
by Obinna O. Oleribe, Marissa Brash, Adati Tarfa, Ricardo Izurieta and Simon D. Taylor-Robinson
Healthcare 2026, 14(6), 775; https://doi.org/10.3390/healthcare14060775 - 19 Mar 2026
Abstract
Background: Generative artificial intelligence (GenAI) is rapidly permeating healthcare; yet, U.S. clinicians still report mixed feelings about its reliability, impact on workflow, and ethical implications. Current data on provider sentiment are needed to guide safe, patient-centered AI implementation in healthcare. Objective: This study aimed to assess U.S. healthcare providers’ perceptions of generative AI adoption, perceived usefulness, training needs, barriers, and strategies for safe integration. Methods: A nationwide, IRB-approved, cross-sectional survey was administered to healthcare professionals using Qualtrics. A convenience sample of clinicians was recruited via professional listservs and e-mail invitations. The 20-page questionnaire captured demographics, GenAI exposure, organizational adoption status, perceived usefulness (5-point scale), barriers, and mitigation strategies. SPSS v27 and Microsoft Excel were used for statistical analysis. Results: Of 130 respondents, 109 completed the core survey (completion rate 83.8%). Participants were 38.5% physicians, 16.5% nurses, 12.8% allied professionals, and 32.2% other providers; 54.2% were women, and 64.8% were ≥50 years. Overall, 86.9% agreed that GenAI is useful in current patient care, rising to 92.9% when asked about future usefulness. Only 42.4% had received formal GenAI training, and just 23.2% reported that their organization had begun adopting AI. The top perceived benefits were improved documentation/clerking (57.0%) and error reduction (49.4%). Dominant barriers included limited AI knowledge (24.7%) and fear of job loss (16.9%). Despite concerns, 72% expressed willingness to support broader GenAI adoption, favoring human oversight (67.1%) and staff training (60.8%) as key safeguards. 
There were statistically significant differences in perceived AI usefulness by gender (χ² = 29.2; p < 0.001); in organizational adoption of AI (χ² = 31.6; p = 0.047) and where AI is most useful (χ² = 101.1; p < 0.001) by qualifications; and in support for AI adoption by age (χ² = 18.0; p = 0.02). Conclusions: U.S. clinicians in our survey viewed GenAI as useful but reported limited training and organizational infrastructure needed for confident use while also expressing concerns regarding data privacy and ethical risk. Education programs and transparent, provider-led implementation strategies may accelerate responsible GenAI assimilation while addressing ethical and workforce concerns. Also, health administrators should use the efficiency gains to improve provider–patient relationships and clinicians’ work–life balance while reducing clinician burnout rates. Full article
(This article belongs to the Section Artificial Intelligence in Healthcare)

21 pages, 4917 KB  
Article
Design and Performance Analysis of an RIS-Empowered RM-DCSK System for Wireless Powered Communication
by Fang Liu, Junjun Ma and Qihao Yu
Entropy 2026, 28(3), 300; https://doi.org/10.3390/e28030300 - 5 Mar 2026
Abstract
This paper proposes a reconfigurable intelligent surface (RIS)-empowered reference-modulated differential chaos shift keying (RM-DCSK) wireless powered communication (WPC) system. As a noncoherent chaotic communication scheme, the proposed system exploits the reference reuse property of RM-DCSK, where the reference signal simultaneously carries data information, thereby improving spectral efficiency while maintaining noncoherent and channel-estimation-free reception with low receiver circuit complexity. Furthermore, RIS is utilized to reconfigure the propagation environment and mitigate the path loss effect of WPC links. At the user equipment (UE), a harvest–store–use (HSU) energy harvesting and finite-buffer model is developed, and a threshold-based on/off transmission policy is adopted to enable sustainable uplink transmission. To quantify the gain of energy buffering and management, a bufferless baseline system is further established. Closed-form bit error rate (BER) expressions are obtained under multi-path Rayleigh fading channels for both the proposed RIS-RM-DCSK-WPC system and the bufferless baseline system. Finally, simulation results validate the analysis and demonstrate that the proposed system achieves superior BER performance compared with representative benchmarks, including existing RIS-aided DCSK-WPC, RM-DCSK-WPC, and bufferless RIS-RM-DCSK-WPC systems. Full article
(This article belongs to the Section Complexity)

15 pages, 5848 KB  
Article
A Software Defined Radio Implementation of Non-Orthogonal Multiple Access with Reliable Decoding via Error Correction
by Dipanjan Adhikary and Eirini Eleni Tsiropoulou
Future Internet 2026, 18(3), 128; https://doi.org/10.3390/fi18030128 - 2 Mar 2026
Abstract
Non-orthogonal multiple access (NOMA) has been identified as one of the key technologies for 6G capacity and latency gains. However, existing implementation challenges of the NOMA technique, related to carrier, timing, and phase offsets, successive interference cancellation (SIC) error propagation, packet loss dynamics, and host to software defined radios processing jitter, create obstacles in the practical implementation of NOMA. This paper bridges the gap between theory and hardware by introducing a complete two-user NOMA transmit–receive chain on a low-cost ADALM-Pluto software defined radio (SDR) platform. The proposed implementation integrates matched filtering, offset estimation and correction, SIC with waveform reconstruction and subtraction, and reliability reinforcement via rate-1/2 convolutional coding with Viterbi decoding. We have performed a complete validation of the proposed design in both downlink and uplink modes. We collected data regarding the packet-level and system-related metrics, such as end-to-end latency, bit error rate (BER), and success rate. Moreover, we demonstrate the implementation of the uplink NOMA without need for expensive GPS-disciplined oscillators by leveraging the Pluto Rev-C dual-transmit channels that share a common oscillator. We present detailed experimental results at 915 MHz with BPSK modulation for the downlink performance, and also show a full implementation of the uplink NOMA. We observe excellent reliability for the downlink setup and good reliability for the uplink system. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2026–2027)
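The power-domain superposition and SIC steps described in this abstract can be sketched in a few lines of NumPy. This is an illustrative baseband simulation, not the authors' ADALM-Pluto implementation: the power-allocation split, the 15 dB SNR, and perfect synchronization are all assumptions, and the matched filtering, offset correction, and convolutional coding stages are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def bpsk(bits):
    """Map bits {0, 1} to BPSK symbols {+1, -1}."""
    return 1.0 - 2.0 * bits

n = 1000
bits_far = rng.integers(0, 2, n)    # far (weak-channel) user: gets more power
bits_near = rng.integers(0, 2, n)   # near (strong-channel) user: gets less power

# Power-domain superposition (power split is an illustrative assumption).
p_far, p_near = 0.8, 0.2
tx = np.sqrt(p_far) * bpsk(bits_far) + np.sqrt(p_near) * bpsk(bits_near)

# Ideal AWGN channel at 15 dB SNR (no offsets, unlike the hardware setup).
noise_std = np.sqrt(10 ** (-15 / 10))
rx = tx + rng.normal(0.0, noise_std, n)

# SIC at the near user's receiver: detect the stronger far-user signal first,
# reconstruct its waveform, subtract it, then detect the near-user signal.
far_hat = (rx < 0).astype(int)
residual = rx - np.sqrt(p_far) * bpsk(far_hat)
near_hat = (residual < 0).astype(int)

ber_far = np.mean(far_hat != bits_far)
ber_near = np.mean(near_hat != bits_near)
print(f"BER far: {ber_far:.4f}, BER near: {ber_near:.4f}")
```

On the downlink the far user would simply decode its own high-power component directly, treating the near user's signal as noise; only the near user needs the SIC step shown here.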
19 pages, 1356 KB  
Article
Signal Detection Method for OTFS System Based on Adaptive Wavelet Convolutional Neural Network
by You Wu and Mengyao Zhou
Sensors 2026, 26(4), 1397; https://doi.org/10.3390/s26041397 - 23 Feb 2026
Abstract
In Orthogonal Time–Frequency Space (OTFS) systems, signal detection algorithms based on convolutional neural networks (CNNs) suffer from insufficient feature extraction and are limited by local mixing. Additionally, fixed convolution kernels struggle to match the sparsity and non-stationarity of OTFS signals in the delay-Doppler domain, resulting in slow convergence and high training costs. Rather than simply feeding more features into the existing CNN framework, we modify the network itself, replacing the fixed convolution kernels with wavelet convolution layers that have time–frequency-adaptive capabilities. This change allows the network to match the physical characteristics of OTFS signals in the delay-Doppler domain more intrinsically, achieving excellent detection performance while converging faster. Accordingly, this paper proposes a signal detection method using an adaptive wavelet convolutional neural network (AWCNN). The approach replaces the first convolutional layer of a standard CNN with an adaptive wavelet layer, which leverages the time–frequency localization properties of Sym4 wavelet kernels along with learnable scaling and translation factors, enhancing the network's ability to extract sparse features from OTFS signals. Additionally, the model takes both the original received signal and preliminary estimates from the message-passing (MP) algorithm as input features, enriching the dataset and further improving detection performance. Experimental results demonstrate that the AWCNN model converges faster than the CNN model and attains a comparable bit error rate (BER) at a low signal-to-noise ratio of 2 dB, without the need for pilot-assisted channel state information acquisition. Full article
(This article belongs to the Section Communications)
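As background to the delay-Doppler processing this abstract refers to, the following NumPy sketch shows how an OTFS delay-Doppler grid maps to the time-frequency grid via the inverse symplectic finite Fourier transform (ISFFT) and back via the SFFT. The axis convention and grid size are assumptions, and the Heisenberg/Wigner transforms and the physical channel are omitted; this is not the AWCNN detector from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 16, 8  # delay bins x Doppler bins (illustrative grid size)

# QPSK symbols placed on the delay-Doppler grid.
x_dd = (rng.choice([-1.0, 1.0], (M, N))
        + 1j * rng.choice([-1.0, 1.0], (M, N))) / np.sqrt(2)

def isfft(grid):
    """ISFFT (one common axis convention): IFFT along the delay axis, FFT
    along the Doppler axis, mapping delay-Doppler to time-frequency."""
    return np.fft.fft(np.fft.ifft(grid, axis=0), axis=1)

def sfft(grid):
    """SFFT, the exact inverse of the ISFFT above."""
    return np.fft.fft(np.fft.ifft(grid, axis=1), axis=0)

# Round trip through an ideal (distortion-free) channel: the Heisenberg and
# Wigner transforms that bracket the physical channel are omitted here.
x_tf = isfft(x_dd)
y_dd = sfft(x_tf)
roundtrip_err = np.max(np.abs(y_dd - x_dd))
print(f"max round-trip error: {roundtrip_err:.2e}")
```

A Doppler-dispersive channel would spread each symbol along the Doppler axis of this grid, which is exactly the sparse, non-stationary structure that adaptive time–frequency kernels are meant to exploit.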
25 pages, 896 KB  
Article
Sequential Deep Learning with Feature Compression and Optimal State Estimation for Indoor Visible Light Positioning
by Negasa Berhanu Fite, Getachew Mamo Wegari and Heidi Steendam
Photonics 2026, 13(2), 211; https://doi.org/10.3390/photonics13020211 - 23 Feb 2026
Abstract
Visible Light Positioning (VLP) is widely regarded as a promising technology for high-precision indoor localization due to its immunity to radio-frequency interference and compatibility with existing Light-Emitting Diode (LED) lighting infrastructure. Despite recent progress, current VLP systems remain fundamentally limited by nonlinear received signal strength (RSS) characteristics, unknown transmitter orientations, and dynamic indoor disturbances. Existing solutions typically address these challenges in isolation, resulting in limited robustness and scalability. This paper proposes SCENE-VLP (Sequential Deep Learning with Feature Compression and Optimal State Estimation), a structured positioning framework that integrates feature compression, temporal sequence modeling, and probabilistic state refinement within a unified estimation pipeline. Specifically, SCENE-VLP combines Principal Component Analysis (PCA) and Denoising Autoencoders (DAE) for linear and nonlinear observation conditioning, Gated Recurrent Units (GRU) for modeling temporal dependencies in RSS sequences, and Kalman-based filtering (KF/EKF) for recursive state-space refinement. The framework is formulated as a hierarchical approximation of the nonlinear observation model, linking data-driven measurement learning with Bayesian state estimation. A systematic ablation study across multiple scenarios, including same-dataset evaluation and cross-dataset generalization, demonstrates that each component provides complementary benefits. Feature compression reduces redundancy while preserving dominant signal structure; GRU significantly improves robustness over static regression; and recursive filtering consistently reduces positioning error compared to unfiltered predictions. While both KF and EKF improve performance, EKF provides incremental refinement under mild nonlinearities. Extensive simulations conducted on an indoor dataset collected from a realistic deployment with eight ceiling-mounted LEDs and a single photodetector (PD) show that SCENE-VLP achieves sub-decimeter localization accuracy, with P50 and P95 errors of 1.84 cm and 6.52 cm, respectively. Cross-scenario evaluation further confirms stable generalization and statistically consistent improvements. These results demonstrate that the structured integration of observation conditioning, temporal modeling, and Bayesian refinement yields measurable gains beyond partial pipeline configurations, establishing SCENE-VLP as a robust and scalable solution for next-generation indoor visible light positioning systems. Full article
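The recursive state-refinement step this abstract describes can be illustrated with a minimal constant-velocity Kalman filter over noisy 2D position estimates. Here the noisy measurements stand in for the GRU's per-step position predictions; the motion model, noise covariances, and trajectory are illustrative assumptions rather than the SCENE-VLP configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground truth: constant-velocity 2D motion (an assumed motion model).
steps, dt = 50, 0.1
vel = np.array([0.5, 0.3])                      # m/s
true_pos = np.arange(1, steps + 1)[:, None] * dt * vel

# Noisy per-step position estimates, standing in for the GRU's predictions.
meas = true_pos + rng.normal(0.0, 0.05, true_pos.shape)

# Constant-velocity Kalman filter on state [x, y, vx, vy].
F = np.eye(4); F[0, 2] = F[1, 3] = dt           # state transition
H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0   # observe position only
Q = 1e-4 * np.eye(4)                            # process noise (assumption)
R = 0.05 ** 2 * np.eye(2)                       # measurement noise
x, P = np.zeros(4), np.eye(4)

filtered = []
for z in meas:
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    x = x + K @ (z - H @ x)                       # update with measurement
    P = (np.eye(4) - K @ H) @ P
    filtered.append(x[:2])
filtered = np.array(filtered)

rmse_raw = np.sqrt(np.mean(np.sum((meas - true_pos) ** 2, axis=1)))
rmse_kf = np.sqrt(np.mean(np.sum((filtered - true_pos) ** 2, axis=1)))
print(f"RMSE raw: {rmse_raw:.3f} m, filtered: {rmse_kf:.3f} m")
```

The EKF variant mentioned in the paper generalizes the update step by linearizing a nonlinear observation model around the current state estimate, which is why it offers only incremental gains when the nonlinearity is mild.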