Search Results (208)

Search Parameters:
Keywords = discrete Fourier transform (DFT)

16 pages, 1206 KB  
Article
HASwinNet: A Swin Transformer-Based Denoising Framework with Hybrid Attention for mmWave MIMO Systems
by Xi Han, Houya Tu, Jiaxi Ying, Junqiao Chen and Zhiqiang Xing
Entropy 2026, 28(1), 124; https://doi.org/10.3390/e28010124 - 20 Jan 2026
Viewed by 144
Abstract
Millimeter-wave (mmWave) massive multiple-input, multiple-output (MIMO) systems are a cornerstone technology for integrated sensing and communication (ISAC) in sixth-generation (6G) mobile networks. These systems provide high-capacity backhaul while simultaneously enabling high-resolution environmental sensing. However, accurate channel estimation remains highly challenging due to intrinsic noise sensitivity and clustered sparse multipath structures. These challenges are particularly severe under limited pilot resources and low signal-to-noise ratio (SNR) conditions. To address these difficulties, this paper proposes HASwinNet, a deep learning (DL) framework designed for mmWave channel denoising. The framework integrates a hierarchical Swin Transformer encoder for structured representation learning. It further incorporates two complementary branches. The first branch performs sparse token extraction guided by angular-domain significance. The second branch focuses on angular-domain refinement by applying discrete Fourier transform (DFT), squeeze-and-excitation (SE), and inverse DFT (IDFT) operations. This generates a mask that highlights angularly coherent features. A decoder combines the outputs of both branches with a residual projection from the input to yield refined channel estimates. Additionally, we introduce an angular-domain perceptual loss during training. This enforces spectral consistency and preserves clustered multipath structures. Simulation results based on the Saleh–Valenzuela (S–V) channel model demonstrate that HASwinNet achieves significant improvements in normalized mean squared error (NMSE) and bit error rate (BER). It consistently outperforms convolutional neural network (CNN), long short-term memory (LSTM), and U-Net baselines. Furthermore, experiments with reduced pilot symbols confirm that HASwinNet effectively exploits angular sparsity. The model retains a consistent advantage over baselines even under pilot-limited conditions. 
These findings validate the scalability of HASwinNet for practical 6G mmWave backhaul applications. They also highlight its potential in ISAC scenarios where accurate channel recovery supports both communication and sensing. Full article

24 pages, 15172 KB  
Article
Real-Time Hand Gesture Recognition for IoT Devices Using FMCW mmWave Radar and Continuous Wavelet Transform
by Anna Ślesicka and Adam Kawalec
Electronics 2026, 15(2), 250; https://doi.org/10.3390/electronics15020250 - 6 Jan 2026
Viewed by 305
Abstract
This paper presents an intelligent framework for real-time hand gesture recognition using Frequency-Modulated Continuous-Wave (FMCW) mmWave radar and deep learning. Unlike traditional radar-based recognition methods that rely on Discrete Fourier Transform (DFT) signal representations and focus primarily on classifier optimization, the proposed system introduces a novel pre-processing stage based on the Continuous Wavelet Transform (CWT). The CWT enables the extraction of discriminative time–frequency features directly from raw radar signals, improving the interpretability and robustness of the learned representations. A lightweight convolutional neural network architecture is then designed to process the CWT maps for efficient classification on edge IoT devices. Experimental validation with data collected from 20 participants performing five standardized gestures demonstrates that the proposed framework achieves an accuracy of up to 99.87% using the Morlet wavelet, with strong generalization to unseen users (82–84% accuracy). The results confirm that the integration of CWT-based radar signal processing with deep learning forms a computationally efficient and accurate intelligent system for human–computer interaction in real-time IoT environments. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 4th Edition)
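As an illustration of the CWT pre-processing stage described above, the sketch below computes a Morlet time–frequency map with plain NumPy. The sampling rate, tone frequencies, and scale grid are made-up stand-ins, not values from the paper:

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0, fs=1.0):
    """Continuous wavelet transform with a complex Morlet wavelet.

    Returns a (len(scales), len(signal)) array of coefficients whose
    magnitude forms the time-frequency map fed to the classifier.
    """
    n = len(signal)
    out = np.empty((len(scales), n), dtype=complex)
    t = (np.arange(n) - n // 2) / fs
    for i, s in enumerate(scales):
        # scaled, L1-normalised Morlet mother wavelet
        psi = np.exp(1j * w0 * t / s) * np.exp(-((t / s) ** 2) / 2) / s
        out[i] = np.convolve(signal, np.conj(psi)[::-1], mode="same")
    return out

# toy "radar" signal: a 10 Hz tone followed by a 40 Hz tone
fs = 256.0
t = np.arange(1024) / fs
x = np.where(t < 2, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 40 * t))
freqs = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # centre frequencies (Hz)
scales = 6.0 / (2 * np.pi * freqs)                # Morlet scale-frequency map
cwt_map = np.abs(morlet_cwt(x, scales, fs=fs))
```

Each row of `cwt_map` tracks one frequency band, so the energy of the first tone appears in the 10 Hz row during the first half and moves to the 40 Hz row afterwards, which is the kind of discriminative structure the classifier consumes.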

25 pages, 1271 KB  
Article
Fast Algorithms for Small-Size Type VII Discrete Cosine Transform
by Marina Polyakova, Aleksandr Cariow and Mirosław Łazoryszczak
Electronics 2026, 15(1), 98; https://doi.org/10.3390/electronics15010098 - 24 Dec 2025
Viewed by 187
Abstract
This paper presents new fast algorithms for the type VII discrete cosine transform (DCT-VII) applied to input data sequences of lengths ranging from 3 to 8. Fast algorithms for small-sized trigonometric transforms enable the processing of small data blocks in image and video coding with low computational complexity. To process the information in image and video coding standards, the fast DCT-VII algorithms can be used, taking into account the relationships between the DCT-VII and the type II discrete cosine transform (DCT-II). Additionally, such algorithms can be used in other digital signal processing tasks as components for constructing algorithms for large-sized transforms, leading to reduced system complexity. Existing fast odd DCT algorithms have been designed using relationships among discrete cosine transforms (DCTs), discrete sine transforms (DSTs), and the discrete Fourier transform (DFT); among different types of DCTs and DSTs; and between the coefficients of the transform matrix. However, these algorithms require a relatively large number of multiplications and additions. The process of obtaining such algorithms is difficult to understand and implement. To overcome these shortcomings, this paper applies a structural approach to develop new fast DCT-VII algorithms. The process begins by expressing the DCT-VII as a matrix-vector multiplication, then reshaping the block structure of the DCT-VII matrix to align with matrix patterns known from the basic papers in which the structural approach was introduced. If the matrix block structure does not match any known pattern, rows and columns are reordered, and sign changes are applied as needed. If this is insufficient, the matrix is decomposed into the sum of two or more matrices, each analyzed separately and transformed similarly if required. As a result, factorizations of DCT-VII matrices for different input sequence lengths are obtained. 
Based on these factorizations, fast DCT-VII algorithms with reduced arithmetic complexity are constructed and presented with pseudocode. To illustrate the computational flow of the resulting algorithms and their modular design, which is suitable for VLSI implementation, data-flow graphs are provided. The new DCT-VII algorithms reduce the number of multiplications by approximately 66% compared to direct matrix-vector multiplication, although the number of additions decreases by only about 6%. Full article
(This article belongs to the Section Computer Science & Engineering)

15 pages, 854 KB  
Article
Research on Software Optimization for Discrete Fourier Test
by Xianwei Yang and Lan Wang
Axioms 2026, 15(1), 4; https://doi.org/10.3390/axioms15010004 - 22 Dec 2025
Viewed by 222
Abstract
Random sequences are critical to cryptographic technologies and applications. Randomness testing typically employs probabilistic statistical techniques for evaluating the randomness properties of such sequences. Both the National Institute of Standards and Technology (NIST, Gaithersburg, MD, U.S.) and the State Cryptography Administration (SCA, China) have issued guidelines for randomness testing, each of which includes the Discrete Fourier Transform (DFT) test as one of the mandatory assessments. This paper focuses on the efficient implementation of the DFT test and proposes a fast implementation approach that leverages FFTW (Fastest Fourier Transform in the West). Comprehensive experimental tests and performance evaluations were performed both before and after optimization of the algorithm. The results show that the optimized algorithm increases the speed of the DFT test for a single sample by a factor of 3.37. Full article
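The DFT test being optimized is compact enough to sketch. Below is a minimal NumPy version following the SP 800-22 definition; the paper's speedup comes from swapping the transform step for FFTW, whereas `numpy.fft` stands in here:

```python
import numpy as np
from math import erfc, log, sqrt

def dft_test(bits):
    """NIST SP 800-22 discrete Fourier transform (spectral) test.

    bits: sequence of 0/1 values.  Returns the p-value; small values
    (e.g. < 0.01) indicate the sequence is likely non-random.
    """
    n = len(bits)
    x = 2 * np.asarray(bits, dtype=float) - 1       # map {0,1} -> {-1,+1}
    mags = np.abs(np.fft.fft(x))[: n // 2]          # first n/2 DFT magnitudes
    t = sqrt(log(1 / 0.05) * n)                     # 95% peak threshold
    n0 = 0.95 * n / 2                               # expected peaks below t
    n1 = np.count_nonzero(mags < t)                 # observed peaks below t
    d = (n1 - n0) / sqrt(n * 0.95 * 0.05 / 4)
    return erfc(abs(d) / sqrt(2))

rng = np.random.default_rng(0)
p_random = dft_test(rng.integers(0, 2, 4096))       # should look random
p_constant = dft_test(np.ones(4096, dtype=int))     # clearly non-random
```

A constant sequence concentrates all spectral energy in one bin, so almost every remaining bin falls below the threshold and the test rejects it with a p-value near zero.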

36 pages, 7233 KB  
Article
Deep Learning for Tumor Segmentation and Multiclass Classification in Breast Ultrasound Images Using Pretrained Models
by K. E. ArunKumar, Matthew E. Wilson, Nathan E. Blake, Tylor J. Yost and Matthew Walker
Sensors 2025, 25(24), 7557; https://doi.org/10.3390/s25247557 - 12 Dec 2025
Viewed by 716
Abstract
Early detection of breast cancer commonly relies on imaging technologies such as ultrasound, mammography and MRI. Among these, breast ultrasound is widely used by radiologists to identify and assess lesions. In this study, we developed image segmentation techniques and multiclass classification artificial intelligence (AI) tools based on pretrained models to segment lesions and detect breast cancer. The proposed workflow includes both the development of segmentation models and of a series of classification models to classify ultrasound images as normal, benign or malignant. The pretrained models were trained and evaluated on the Breast Ultrasound Images (BUSI) dataset, a publicly available collection of grayscale breast ultrasound images with corresponding expert-annotated masks. For segmentation, images and ground-truth masks were used to train pretrained encoder (ResNet18, EfficientNet-B0 and MobileNetV2)–decoder (U-Net, U-Net++ and DeepLabV3) models, including the DeepLabV3 architecture integrated with a Frequency-Domain Feature Enhancement Module (FEM). The proposed FEM improves spatial and spectral feature representations using Discrete Fourier Transform (DFT), GroupNorm, dropout regularization and adaptive fusion. For classification, each image was assigned a label (normal, benign or malignant). Optuna, an open-source software framework, was used for hyperparameter optimization and for the testing of various pretrained models to determine the best encoder–decoder segmentation architecture. Five different pretrained models (ResNet18, DenseNet121, InceptionV3, MobileNetV3 and GoogleNet) were optimized for multiclass classification. DeepLabV3 outperformed other segmentation architectures, with consistent performance across training, validation and test images, with Dice Similarity Coefficient (DSC, a metric describing the overlap between predicted and true lesion regions) values of 0.87, 0.80 and 0.83 on training, validation and test sets, respectively.
ResNet18:DeepLabV3 achieved an Intersection over Union (IoU) score of 0.78 during training, while ResNet18:U-Net++ achieved the best Dice coefficient (0.83), IoU (0.71) and area under the curve (AUC, 0.91) scores on the test (unseen) dataset when compared to other models. However, the proposed ResNet18:FrequencyAwareDeepLabV3 (FADeepLabV3) achieved a DSC of 0.85 and an IoU of 0.72 on the test dataset, demonstrating improvements over standard DeepLabV3. Notably, the frequency-domain enhancement substantially improved the AUC from 0.90 to 0.98, indicating enhanced prediction confidence and clinical reliability. For classification, ResNet18 produced an F1 score (a measure combining precision and recall) of 0.95 and an accuracy of 0.90 on the training dataset, while InceptionV3 performed best on the test dataset, with an F1 score of 0.75 and accuracy of 0.83. We demonstrate a comprehensive approach to automating the segmentation and multiclass classification of breast ultrasound images as benign, malignant or normal using transfer learning models on an imbalanced ultrasound image dataset. Full article

33 pages, 12224 KB  
Article
Unsupervised Clustering of InSAR Time-Series Deformation in Mandalay Region from 2022 to 2025 Using Dynamic Time Warping and Longest Common Subsequence
by Jingyi Qin, Zhifang Zhao, Dingyi Zhou, Mengfan Yuan, Chaohai Liu, Xiaoyan Wei and Tin Aung Myint
Remote Sens. 2025, 17(23), 3920; https://doi.org/10.3390/rs17233920 - 3 Dec 2025
Viewed by 739
Abstract
Urban land subsidence poses a significant threat in rapidly urbanizing regions, threatening infrastructure integrity and sustainable development. This study focuses on Mandalay, Myanmar, and presents a novel clustering framework—Dynamic Time Warping and Trend-based Longest Common Subsequence with Agglomerative Hierarchical Clustering (DTLCS-AHC)—to classify spatiotemporal deformation patterns from Small Baseline Subset (SBAS) Interferometric Synthetic Aperture Radar (InSAR) time series derived from Sentinel-1A imagery covering January 2022 to March 2025. The method identifies four characteristic deformation regimes: stable uplift, stable subsidence, primary subsidence, and secondary subsidence. Time–frequency analysis employing Empirical Mode Decomposition (EMD) and Discrete Fourier Transform (DFT) reveals seasonal oscillations in stable areas. Notably, a transition from subsidence to uplift was detected in specific areas approximately seven months prior to the Mw 7.7 earthquake, but causal relationships require further validation. This study further establishes correlations between subsidence and both urban expansion and rainfall patterns. A physically informed conceptual model is developed through multi-source data integration, and cross-city validation in Yangon confirms the robustness and generalizability of the approach. This research provides a scalable technical framework for deformation monitoring and risk assessment in tropical, data-scarce urban environments. Full article
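The DTW half of the distance measure used for clustering is the classic dynamic program; a minimal sketch follows, with synthetic deformation series standing in for the InSAR time series:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D deformation series.

    Classic O(len(a) * len(b)) dynamic program; smaller values mean the
    two series follow more similar (time-warped) trajectories.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# illustrative monthly displacement series (mm), not real InSAR data
subsiding_a = -0.5 * np.arange(36)          # steady subsidence
subsiding_b = -0.5 * np.arange(36) + 0.3    # same trend, small offset
uplift = 0.4 * np.arange(36)                # opposite trend
```

Agglomerative hierarchical clustering then operates on the pairwise distance matrix, so the two subsiding series end up in one cluster and the uplifting series in another.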

14 pages, 829 KB  
Article
SPaRLoRA: Spectral-Phase Residual Initialization for LoRA in Low-Resource ASR
by Liang Lan, Wenyong Wang, Guanyu Zou, Jia Wang and Daliang Wang
Electronics 2025, 14(22), 4466; https://doi.org/10.3390/electronics14224466 - 16 Nov 2025
Viewed by 577
Abstract
Parameter-efficient fine-tuning (PEFT) methods like Low-Rank Adaptation (LoRA) are widely used to adapt large pre-trained models under limited resources, yet they often underperform full fine-tuning in low-resource automatic speech recognition (ASR). This gap stems partly from initialization strategies that ignore speech signals’ inherent spectral-phase structure. Unlike SVD/QR-based approaches (PiSSA, OLoRA) that construct mathematically optimal but signal-agnostic subspaces, we propose SPaRLoRA (Spectral-Phase Residual LoRA), which leverages Discrete Fourier Transform (DFT) bases to create speech-aware low-rank adapters. SPaRLoRA explicitly incorporates both magnitude and phase information by concatenating real and imaginary parts of DFT basis vectors, and applies residual correction to focus learning exclusively on components unexplained by the spectral subspace. Evaluated on a 200-h Sichuan dialect ASR benchmark, SPaRLoRA achieves a 2.1% relative character error rate reduction over standard LoRA, outperforming variants including DoRA, PiSSA, and OLoRA. Ablation studies confirm the individual and complementary benefits of spectral basis, phase awareness, and residual correction. Our work demonstrates that signal-structure-aware initialization significantly enhances parameter-efficient fine-tuning for low-resource ASR without architectural changes or added inference cost. Full article
(This article belongs to the Special Issue Multimodal Learning and Transfer Learning)
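The core idea, building a low-rank basis from the real and imaginary parts of DFT basis vectors, can be sketched as below. This is an illustration of the spectral-phase initialisation only; the function name and column-selection rule are our assumptions, and the full SPaRLoRA recipe (residual correction, scaling) is richer:

```python
import numpy as np

def spectral_phase_init(d, r):
    """Build a (d, r) low-rank basis from DFT vectors: the real and
    imaginary parts of the first r//2 non-DC Fourier basis columns.

    The real part carries the cosine (magnitude-like) component and the
    imaginary part the sine (phase-like) component of each frequency.
    """
    F = np.fft.fft(np.eye(d)) / np.sqrt(d)   # unitary DFT matrix
    cols = []
    for k in range(1, r // 2 + 1):           # skip the DC column
        cols.append(F[:, k].real)
        cols.append(F[:, k].imag)
    A = np.stack(cols, axis=1)
    return A / np.linalg.norm(A, axis=0, keepdims=True)  # unit-norm columns

A = spectral_phase_init(d=256, r=8)          # e.g. a rank-8 adapter basis
```

Because cosines and sines of distinct frequencies are mutually orthogonal, the resulting columns form an orthonormal rank-r subspace, unlike a random Gaussian LoRA initialisation.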

22 pages, 4147 KB  
Article
A Methodological Framework for Analyzing and Differentiating Daily Physical Activity Across Groups Using Digital Biomarkers from the Frequency Domain
by Ya-Ting Liang, Chuhsing Kate Hsiao, Amrita Chattopadhyay, Tzu-Pin Lu, Po-Hsiu Kuo and Charlotte Wang
Mathematics 2025, 13(22), 3616; https://doi.org/10.3390/math13223616 - 11 Nov 2025
Viewed by 504
Abstract
Human daily physical activity (PA), monitored via wearable devices, provides valuable information for real-time health assessment and disease prevention. However, analyzing time-domain PA data is challenging due to large data volumes and high inter- and intra-individual heterogeneity. Traditional PA analyses often rely on demographics, while advanced methods utilize time-domain summary statistics (e.g., L5, M10) or functional principal component analysis (FPCA). This study presents a data-efficient approach utilizing the Discrete Fourier Transform (DFT) to convert time-domain data into a compact set of frequency-domain variables. We demonstrate that incorporating these DFT-derived variables substantially improves model performance. Specifically, (1) a small subset of DFT variables effectively captures major PA levels with effective dimensionality reduction; (2) these variables retain known associations with factors like age, sex, and weekday/weekend status; and (3) they enhance the performance of classifiers. Mathematical and empirical analyses further confirm the reliability and interpretability of DFT-based features in dimension reduction. Across three mental health studies, these DFT-derived variables successfully capture key PA characteristics while retaining known associations and strengthening model performance. Overall, the proposed DFT-based framework offers a robust and scalable tool for analyzing accelerometer data, with broad applicability in health and behavioral research. Full article
(This article belongs to the Special Issue Advanced Methods and Applications in Medical Informatics)
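A minimal sketch of this kind of frequency-domain compression: the mean level plus the first few DFT magnitudes of a minute-level daily profile. The synthetic wearer profile and the choice of k = 6 coefficients are illustrative assumptions, not the study's settings:

```python
import numpy as np

def dft_features(activity, k=6):
    """Compress a daily activity profile into frequency-domain variables.

    Returns the mean level followed by the magnitudes of the first k
    non-DC DFT coefficients (1 cycle/day, 2 cycles/day, ...), normalised
    by series length -- a compact alternative to raw minute-level data.
    """
    x = np.asarray(activity, dtype=float)
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return np.concatenate(([spec[0]], spec[1 : k + 1]))

minutes = np.arange(1440)                    # one day at 1-minute epochs
rng = np.random.default_rng(1)
# synthetic wearer: daytime activity (one daily cycle) plus sensor noise
day = 50 + 40 * np.sin(2 * np.pi * minutes / 1440 - np.pi / 2) \
      + rng.normal(0, 5, 1440)
feats = dft_features(day)
```

For this profile the dominant feature is the 1 cycle/day coefficient, mirroring the circadian structure these variables are designed to capture.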

19 pages, 4452 KB  
Article
A New Low PAPR Modulation Scheme for 6G: Offset Rotation Interpolation Modulation
by Yu Xin, Jian Hua and Guanghui Yu
Electronics 2025, 14(20), 4031; https://doi.org/10.3390/electronics14204031 - 14 Oct 2025
Viewed by 670
Abstract
The article proposes a novel modulation scheme with a low peak-to-average power ratio (PAPR), referred to as offset rotation interpolation modulation (ORIM), which is particularly suitable for low-power consumption and enhanced coverage scenarios in the sixth generation (6G) of wireless communication. ORIM comprises three modulation schemes: I-QPSK, I-BPSK, and I-π/2 BPSK. They are derived from cyclic offsetting, phase rotation, and interpolation, and applied to QPSK, BPSK, and π/2 BPSK, respectively. Simulation results in discrete Fourier transform-spread-orthogonal frequency division multiplexing (DFT-s-OFDM) systems demonstrate that ORIM achieves a lower PAPR than the π/2-BPSK scheme specified in the 5G New Radio (NR) protocol, without incurring any performance degradation in terms of block error rate (BLER). Moreover, with the addition of frequency domain spectrum shaping (FDSS), I-π/2 BPSK demonstrates superior performance over π/2-BPSK in both PAPR and BLER metrics under TDL-A channel conditions. In addition, the complexity of modulation at the transmitting end or demodulation at the receiving end of ORIM is of the same order of magnitude as that of π/2 BPSK, thereby achieving a certain level of overall performance improvement. Full article
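PAPR comparisons of this kind can be reproduced in a few lines of NumPy. The sketch below measures a π/2-BPSK DFT-s-OFDM symbol against plain OFDM with QPSK, using localised subcarrier mapping and no oversampling or filtering; ORIM itself is not implemented here, and all block sizes are illustrative:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain waveform, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def dft_s_ofdm(symbols, n_fft=256):
    """DFT-spread OFDM: DFT-precode M data symbols, map them onto M
    contiguous subcarriers of an N-point IFFT (localised mapping)."""
    m = len(symbols)
    spread = np.fft.fft(symbols) / np.sqrt(m)      # DFT precoding
    grid = np.zeros(n_fft, dtype=complex)
    grid[:m] = spread
    return np.fft.ifft(grid) * np.sqrt(n_fft)

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 64)
# pi/2-BPSK: BPSK with a pi/2 phase rotation added per symbol
pi2_bpsk = (2 * bits - 1) * np.exp(1j * np.pi / 2 * np.arange(64))
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)

papr_pi2 = papr_db(dft_s_ofdm(pi2_bpsk))
papr_ofdm_qpsk = papr_db(np.fft.ifft(np.pad(qpsk, (0, 192))) * np.sqrt(256))
```

The single-carrier-like structure that DFT spreading restores is why the π/2-BPSK waveform exhibits a much lower PAPR than the OFDM baseline.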

21 pages, 1828 KB  
Article
Deep Learning-Based Eye-Writing Recognition with Improved Preprocessing and Data Augmentation Techniques
by Kota Suzuki, Abu Saleh Musa Miah and Jungpil Shin
Sensors 2025, 25(20), 6325; https://doi.org/10.3390/s25206325 - 13 Oct 2025
Viewed by 1003
Abstract
Eye-tracking technology enables communication for individuals with muscle control difficulties, making it a valuable assistive tool. Traditional systems rely on electrooculography (EOG) or infrared devices, which are accurate but costly and invasive. While vision-based systems offer a more accessible alternative, they have not been extensively explored for eye-writing recognition. Additionally, the natural instability of eye movements and variations in writing styles result in inconsistent signal lengths, which reduces recognition accuracy and limits the practical use of eye-writing systems. To address these challenges, we propose a novel vision-based eye-writing recognition approach that utilizes a webcam-captured dataset. A key contribution of our approach is the introduction of a Discrete Fourier Transform (DFT)-based length normalization method that standardizes the length of each eye-writing sample while preserving essential spectral characteristics. This ensures uniformity in input lengths and improves both efficiency and robustness. Moreover, we integrate a hybrid deep learning model that combines 1D Convolutional Neural Networks (CNN) and Temporal Convolutional Networks (TCN) to jointly capture spatial and temporal features of eye-writing. To further improve model robustness, we incorporate data augmentation and initial-point normalization techniques. The proposed system was evaluated using our new webcam-captured Arabic numbers dataset and two existing benchmark datasets, with leave-one-subject-out (LOSO) cross-validation. The model achieved accuracies of 97.68% on the new dataset, 94.48% on the Japanese Katakana dataset, and 98.70% on the EOG-captured Arabic numbers dataset—outperforming existing systems. This work provides an efficient eye-writing recognition system, featuring robust preprocessing techniques, a hybrid deep learning model, and a new webcam-captured dataset. Full article
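DFT-based length normalisation of the kind described amounts to Fourier-domain resampling (the same idea as `scipy.signal.resample`). A sketch under the assumption of a real-valued 1-D gaze coordinate stream; the target length of 128 is an illustrative choice:

```python
import numpy as np

def dft_length_normalize(x, target_len):
    """Resample a variable-length gaze trajectory to a fixed length by
    truncating or zero-padding its DFT spectrum, which keeps the
    low-frequency shape of the stroke while removing length variation."""
    x = np.asarray(x, dtype=float)
    spec = np.fft.rfft(x)
    out_bins = target_len // 2 + 1
    new_spec = np.zeros(out_bins, dtype=complex)
    k = min(len(spec), out_bins)
    new_spec[:k] = spec[:k]                  # keep shared low frequencies
    # rescale so amplitude is preserved under the length change
    return np.fft.irfft(new_spec, n=target_len) * (target_len / len(x))

# the same one-cycle stroke recorded at two different lengths
short = np.sin(np.linspace(0, 2 * np.pi, 80, endpoint=False))
longer = np.sin(np.linspace(0, 2 * np.pi, 300, endpoint=False))
a = dft_length_normalize(short, 128)
b = dft_length_normalize(longer, 128)
```

Both recordings normalise to near-identical 128-sample trajectories, which is exactly the uniformity the downstream CNN-TCN model needs.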

19 pages, 1661 KB  
Article
Joint Wavelet and Sine Transforms for Performance Enhancement of OFDM Communication Systems
by Khaled Ramadan, Ibrahim Aqeel and Emad S. Hassan
Mathematics 2025, 13(20), 3258; https://doi.org/10.3390/math13203258 - 11 Oct 2025
Viewed by 504
Abstract
This paper presents a modified Orthogonal Frequency Division Multiplexing (OFDM) system that combines the Discrete Wavelet Transform (DWT) with the Discrete Sine Transform (DST) to enhance data rate capacity over traditional Discrete Fourier Transform (DFT)-based OFDM systems. By applying the Inverse Discrete Wavelet Transform (IDWT) to the modulated Binary Phase Shift Keying (BPSK) bits, the constellation diagram reveals that half of the time-domain samples after single-level Haar IDWT are zeros, while the other half are real. The proposed system utilizes these 0.5N zero values, modulating them with the inverse DST (IDST) and assigning them as the imaginary part of the signal. Performance comparisons demonstrate that the Bit-Error-Rate (BER) of this hybrid DWT-DST configuration lies between that of BPSK and Quadrature Phase Shift Keying (QPSK) in a DWT-based system, while also achieving a data rate improvement of 0.5N. Additionally, simulation results indicate that the proposed approach demonstrates stable performance even in the presence of estimation errors, with less than 3.4% BER degradation for moderate errors, and consistently better robustness than QPSK-based systems while offering improved data rate efficiency over BPSK. This novel configuration highlights the potential for more efficient and reliable data transmission in OFDM systems, making it a promising alternative to conventional DWT- or DFT-based methods. Full article
(This article belongs to the Special Issue Computational Intelligence in Communication Networks)

12 pages, 844 KB  
Article
Enhance the Performance of Expectation Propagation Detection in Spatially Correlated Massive MIMO Channels via DFT Precoding
by Huaicheng Luo, Jia Tang, Zeliang Ou, Yitong Liu and Hongwen Yang
Entropy 2025, 27(10), 1030; https://doi.org/10.3390/e27101030 - 1 Oct 2025
Viewed by 634
Abstract
Expectation Propagation (EP) has emerged as a promising detection algorithm for large-scale multiple-input multiple-output (MIMO) systems owing to its excellent performance and practical complexity. However, transmit antenna correlation significantly degrades the performance of EP detection, especially when the number of transmit and receive antennas is equal and high-order modulation is adopted. Based on the fact that the eigenvector matrix of the channel transmit correlation matrix approaches asymptotically to a discrete Fourier transform (DFT) matrix, a DFT precoder is proposed to effectively eliminate transmit antenna correlation. Simulation results demonstrate that for high-order, high-dimensional massive MIMO systems with strong transmit antenna correlation, employing the proposed DFT precoding can significantly accelerate the convergence of the EP algorithm and reduce the detection error rate. Full article
(This article belongs to the Special Issue Next-Generation Multiple Access for Future Wireless Communications)
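The asymptotic fact the precoder relies on, namely that DFT vectors approximately diagonalise a Toeplitz transmit-correlation matrix, is easy to check numerically. The exponential correlation model, the size n = 64, and ρ = 0.7 below are illustrative choices, not the paper's simulation settings:

```python
import numpy as np

n, rho = 64, 0.7
# exponential transmit-correlation model: R[i, j] = rho ** |i - j|
R = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

F = np.fft.fft(np.eye(n)) / np.sqrt(n)     # unitary DFT matrix
D = F.conj().T @ R @ F                     # rotate R into the DFT basis

total = np.linalg.norm(D, "fro") ** 2      # invariant under the rotation
off_diag = total - np.sum(np.abs(np.diag(D)) ** 2)
leakage = off_diag / total                 # fraction of energy off-diagonal
```

Only a small fraction of the correlation energy survives off the diagonal, so applying the DFT as a precoder leaves the effective channel nearly decorrelated across transmit streams, which is what accelerates EP convergence.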

17 pages, 369 KB  
Article
AI-Assisted Dynamic Port and Waveform Switching for Enhancing UL Coverage in 5G NR
by Alejandro Villena-Rodríguez, Francisco J. Martín-Vega, Gerardo Gómez, Mari Carmen Aguayo-Torres, José Outes-Carnero, F. Yak Ng-Molina and Juan Ramiro-Moreno
Sensors 2025, 25(18), 5875; https://doi.org/10.3390/s25185875 - 19 Sep 2025
Viewed by 850
Abstract
The uplink of 5G networks allows selecting the transmit waveform between cyclic prefix orthogonal frequency division multiplexing (CP-OFDM) and discrete Fourier transform spread OFDM (DFT-S-OFDM) to cope with the diverse operational conditions of the power amplifiers (PAs) in different user equipment (UEs). CP-OFDM leads to higher throughput when the PAs are operating in their linear region, which is mostly the case for cell-interior users, whereas DFT-S-OFDM is more appealing when PAs exhibit non-linear behavior, which is associated with cell-edge users. Existing waveform selection solutions therefore rely on predefined signal-to-noise ratio (SNR) thresholds that are computed offline. However, the varying user and channel dynamics, as well as their interactions with power control, require an adaptable threshold selection mechanism. In this paper, we propose an intelligent waveform-switching mechanism based on deep reinforcement learning (DRL) that learns optimal switching thresholds for the current operational conditions. In this proposal, a learning agent aims at maximizing a function built using throughput percentiles available in real networks. These percentiles are weighted so as to improve the cell-edge users' service without dramatically reducing the cell average. Aggregated measurements of SNR and timing advance (TA), available in real networks, are used in the procedure. In addition, the solution accounts for the switching cost, i.e., the interruption of communication after every switch due to implementation constraints, a factor that has not been considered in existing solutions. Results show that our proposed scheme achieves remarkable gains in terms of throughput for cell-edge users without degrading the average throughput. Full article
(This article belongs to the Special Issue Future Wireless Communication Networks: 3rd Edition)
21 pages, 1557 KB  
Article
Spectral-Based Fault Detection Method in Marine Diesel Engine Operation
by Joško Radić, Matko Šarić and Ante Rubić
Sensors 2025, 25(18), 5669; https://doi.org/10.3390/s25185669 - 11 Sep 2025
Viewed by 694
Abstract
The development of autonomous vessels has recently attracted growing interest. As most vessels are powered by diesel engines, a method that detects engine malfunctions by analyzing signals from microphones placed near the engine and accelerometers mounted on the engine housing is appealing. This paper presents such a method, based on frequency-domain analysis using the discrete Fourier transform (DFT); the same procedure is applied to both acoustic and vibration data. The proposed method was tested on a six-cylinder marine diesel engine in which a fault was emulated by deactivating one cylinder. In controlled experiments across five rotational speeds, the method achieved an accuracy of approximately 98.3% when trained on 75 operating cycles and evaluated over 15 cycles. The average precision and recall across all sensors exceeded 97% and 96%, respectively. Because the algorithm treats microphone and accelerometer signals identically, implementation is simplified, and detection accuracy can be increased further by adding sensors. Full article
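The core idea of comparing a sensor frame's DFT magnitude spectrum against a healthy reference can be sketched as below. The healthy-reference comparison, the Euclidean distance measure, and the threshold are assumptions for illustration; the abstract does not specify the classifier.

```python
import cmath

def magnitude_spectrum(x):
    """One-sided DFT magnitude spectrum of a real-valued sensor frame,
    computed directly from the DFT definition (O(N^2), for clarity)."""
    n_samples = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_samples)
                    for n in range(n_samples)))
            for k in range(n_samples // 2)]

def spectral_distance(frame, reference):
    """Euclidean distance between the magnitude spectra of two frames
    of equal length (microphone or accelerometer alike)."""
    a, b = magnitude_spectrum(frame), magnitude_spectrum(reference)
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def is_faulty(frame, reference, threshold):
    """Flag a frame whose spectrum deviates from the healthy reference,
    e.g. when a deactivated cylinder shifts energy between harmonics."""
    return spectral_distance(frame, reference) > threshold
```

Because both sensor types pass through the same `magnitude_spectrum` path, adding sensors only means adding reference spectra, which matches the implementation simplicity claimed in the abstract.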
19 pages, 9297 KB  
Article
Vibration Control of Wheels in Distributed Drive Electric Vehicle Based on Electro-Mechanical Braking
by Yinggang Xu, Zheng Zhu, Zhaonan Li, Xiangyu Wang, Liang Li and Heng Wei
Machines 2025, 13(8), 730; https://doi.org/10.3390/machines13080730 - 17 Aug 2025
Viewed by 1038
Abstract
Electro-Mechanical Braking (EMB), a novel brake-by-wire technology, is rapidly being adopted in vehicle chassis systems. However, the integrated design of the EMB caliper increases the unsprung mass of Distributed Drive Electric Vehicles (DDEVs). Experimental results indicate that, when the Anti-lock Braking System (ABS) is activated, this added mass can induce high-frequency wheel oscillations. To address this issue, this study proposes an anti-oscillation control strategy tailored to EMB systems. First, a quarter-vehicle model incorporating the dynamics of the drive motor, suspension, and tire is established, enabling analysis of the system's resonant behavior. The Discrete Fourier Transform (DFT) is applied to the difference between wheel speed and vehicle speed to extract the dominant frequency components. Then, an Adaptive Braking Intensity Field Regulation (ABIFR) strategy and a Model Predictive and Logic Control (MP-LC) framework are developed. These methods modulate the amplitude and frequency of the braking-torque reductions executed by the ABS to suppress high-frequency wheel oscillations while ensuring sufficient braking force. Experimental validation on a real vehicle demonstrates that the proposed method increases the Mean Fully Developed Deceleration (MFDD) by 14.8% on low-adhesion surfaces and 15.2% on high-adhesion surfaces. Furthermore, the strategy significantly suppresses 12–13 Hz high-frequency oscillations, restoring normal ABS control cycles and enhancing both braking performance and ride comfort. Full article
(This article belongs to the Special Issue Advances in Dynamics and Control of Vehicles)
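The dominant-frequency extraction step, i.e. applying the DFT to the wheel-speed minus vehicle-speed signal and locating its peak bin, can be sketched as follows. The sampling rate and frame length in the usage note are illustrative assumptions, not parameters from the paper.

```python
import cmath

def dominant_frequency(x, fs):
    """Return the frequency (Hz) of the largest non-DC DFT bin of a
    real-valued frame x sampled at fs Hz, e.g. the wheel-speed minus
    vehicle-speed difference recorded during an ABS event."""
    n_samples = len(x)
    mags = [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_samples)
                    for n in range(n_samples)))
            for k in range(1, n_samples // 2)]  # skip the DC bin
    k_peak = 1 + mags.index(max(mags))          # bin index of the peak
    return k_peak * fs / n_samples              # bin index -> Hz
```

With, say, a 2 s frame sampled at 100 Hz, the bin spacing is 0.5 Hz, which is fine enough to isolate the 12–13 Hz oscillation band reported in the abstract and to trigger the torque-modulation strategies accordingly.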
