Search Results (661)

Search Parameters:
Keywords = fourier neural network

39 pages, 1839 KB  
Article
A Novel Hybrid Neural Network with Optimized Feature Selection for Spindle Thermal Error Prediction
by Lifeng Yin, Chenglong Li, Yaohan Peng, Hao Tang, Ningruo Wang and Huayue Chen
Appl. Syst. Innov. 2026, 9(2), 40; https://doi.org/10.3390/asi9020040 - 5 Feb 2026
Abstract
In modern intelligent manufacturing, spindle thermal errors are critical to machining accuracy. To address this, we propose a two-stage prediction framework. First, for feature selection, an enhanced Red-Billed Magpie Optimization algorithm (RBMO-X) optimizes the parameters of a hybrid convolutional neural network (DLTK). Concurrently, PSO-optimized HDBSCAN clustering combined with Pearson correlation selects optimal temperature-sensitive points. The DLTK network integrates LSTM, deformable convolution, Transformer, and Fourier KAN modules for robust spatiotemporal feature extraction. The experimental results demonstrate significant improvements. The proposed feature selection method improves the Silhouette index by 32.39% and increases BWP by 49.16%. Using the selected points reduces prediction RMSE by 31.89% compared to random selection. The final RBMO-X-DLTK model achieves an RMSE of 0.181 μm, an MAE of 0.128 μm, and an R² score of 0.9978, outperforming seven benchmark models (e.g., BP, LSTM, CNN-LSTM). In practical validation, the model enabled an average thermal error reduction of 89%. This integrated approach provides a robust and accurate solution for spindle thermal error prediction, demonstrating strong generalization capability.
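The paper's code is not reproduced here, but the Fourier KAN module it names is generally understood as replacing fixed activations with a learnable truncated Fourier series per input feature. A minimal sketch of such a layer, assuming illustrative sizes, initialization, and harmonic count (not the authors' implementation):

```python
import torch
import torch.nn as nn

class FourierKANLayer(nn.Module):
    """Maps each input feature through a learnable truncated Fourier series
    (a sketch of the generic Fourier KAN idea, not the paper's DLTK module)."""
    def __init__(self, in_dim, out_dim, grid_size=5):
        super().__init__()
        self.register_buffer("k", torch.arange(1, grid_size + 1).float())
        scale = 1.0 / (in_dim * grid_size) ** 0.5
        self.cos_coef = nn.Parameter(scale * torch.randn(out_dim, in_dim, grid_size))
        self.sin_coef = nn.Parameter(scale * torch.randn(out_dim, in_dim, grid_size))
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x):                      # x: (batch, in_dim)
        arg = x.unsqueeze(-1) * self.k         # (batch, in_dim, grid)
        c, s = torch.cos(arg), torch.sin(arg)
        out = torch.einsum("big,oig->bo", c, self.cos_coef) \
            + torch.einsum("big,oig->bo", s, self.sin_coef)
        return out + self.bias

y = FourierKANLayer(8, 4)(torch.randn(2, 8))   # -> shape (2, 4)
```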
7 pages, 950 KB  
Proceeding Paper
Fourier–Transformer Mixer Network for Efficient Video Scene Graph Prediction
by Daozheng Qu and Yanfei Ma
Eng. Proc. 2025, 120(1), 16; https://doi.org/10.3390/engproc2025120016 - 2 Feb 2026
Viewed by 73
Abstract
In video scene graph prediction, the aim is to capture structured object interactions that occur over time in dynamic visual content. While recent spatiotemporal attention-based models have improved performance, they often suffer from high computational costs and limited structural consistency across long sequences. Therefore, we developed a Fourier transformer mixer network (FTM-Net), a modular, frequency-aware architecture that integrates spatial and temporal modeling via spectral operations. It incorporates a resolution-invariant Fourier Mixer for global spatial encoding and a Fast Fourier Transform (FFT)-Net-based temporal encoder that efficiently represents long-range dependencies with less complexity. To improve structural integrity, we introduce a spectral consistency loss function that synchronizes high-frequency relational patterns between frames. Experiments conducted utilizing the Action Genome dataset demonstrate that FTM-Net surpasses previous methodologies in terms of both Recall@K and mean Recall@K while markedly decreasing parameter count and inference duration, providing an efficient, interpretable, and generalizable approach for structured video comprehension.
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)
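FTM-Net's exact Fourier Mixer is not published in this listing; an FNet-style block conveys the general mechanism of parameter-free spectral token mixing, which is resolution-invariant precisely because the FFT carries no weights tied to sequence length. A hedged sketch with illustrative dimensions:

```python
import torch
import torch.nn as nn

class FourierMixer(nn.Module):
    """Parameter-free token mixing: 2-D FFT over the sequence and feature
    axes, keeping the real part (FNet-style), followed by a channel MLP."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):                             # x: (batch, tokens, dim)
        x = x + torch.fft.fft2(x, dim=(-2, -1)).real  # spectral mixing
        return x + self.mlp(self.norm(x))             # channel mixing

out = FourierMixer(64)(torch.randn(2, 49, 64))        # -> (2, 49, 64)
```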
44 pages, 1721 KB  
Systematic Review
Vibration-Based Predictive Maintenance for Wind Turbines: A PRISMA-Guided Systematic Review on Methods, Applications, and Remaining Useful Life Prediction
by Carlos D. Constantino-Robles, Francisco Alberto Castillo Leonardo, Jessica Hernández Galván, Yoisdel Castillo Alvarez, Luis Angel Iturralde Carrera and Juvenal Rodríguez-Reséndiz
Appl. Mech. 2026, 7(1), 11; https://doi.org/10.3390/applmech7010011 - 26 Jan 2026
Viewed by 327
Abstract
This paper presents a systematic review conducted under the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework, analyzing 286 scientific articles focused on vibration-based predictive maintenance strategies for wind turbines within the context of advanced Prognostics and Health Management (PHM). The review combines international standards (ISO 10816, ISO 13373, and IEC 61400) with recent developments in sensing technologies, including piezoelectric accelerometers, microelectromechanical systems (MEMS), and fiber Bragg grating (FBG) sensors. Classical signal processing techniques, such as the Fast Fourier Transform (FFT) and wavelet-based methods, are identified as key preprocessing tools for feature extraction prior to the application of machine-learning-based diagnostic algorithms. Special emphasis is placed on machine learning and deep learning techniques, including Support Vector Machines (SVM), Random Forest (RF), Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), and autoencoders, as well as on hybrid digital twin architectures that enable accurate Remaining Useful Life (RUL) estimation and support autonomous decision-making processes. The bibliometric and case study analysis covering the period 2020–2025 reveals a strong shift toward multisource data fusion—integrating vibration, acoustic, temperature, and Supervisory Control and Data Acquisition (SCADA) data—and the adoption of cloud-based platforms for real-time monitoring, particularly in offshore wind farms where physical accessibility is constrained. The results indicate that vibration-based predictive maintenance strategies can reduce operation and maintenance costs by more than 20%, extend component service life by up to threefold, and achieve turbine availability levels between 95% and 98%. These outcomes confirm that vibration-driven PHM frameworks represent a fundamental pillar for the development of smart, sustainable, and resilient next-generation wind energy systems.
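As a concrete illustration of the FFT-based preprocessing the review identifies as a key step before machine-learning diagnosis, the sketch below extracts simple band-energy, peak, and RMS features from a vibration signal; the band edges and feature set are illustrative, not taken from any reviewed paper:

```python
import numpy as np

def vibration_features(signal, fs, bands=((10, 100), (100, 1000), (1000, 5000))):
    """FFT-based feature extraction typical of vibration PHM pipelines:
    band energies plus the dominant spectral line and RMS level."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    feats = {f"{lo}-{hi}Hz": float(np.sum(spec[(freqs >= lo) & (freqs < hi)] ** 2))
             for lo, hi in bands}
    feats["peak_hz"] = float(freqs[np.argmax(spec)])
    feats["rms"] = float(np.sqrt(np.mean(signal ** 2)))
    return feats

fs = 10_000
t = np.arange(0, 1, 1 / fs)
print(vibration_features(np.sin(2 * np.pi * 120 * t), fs))  # peak near 120 Hz
```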
22 pages, 4947 KB  
Article
CV-EEGNet: A Compact Complex-Valued Convolutional Network for End-to-End EEG-Based Emotion Recognition
by Wenhao Wang, Dongxia Yang, Yong Yang, Yuanlun Xie, Xiu Liu, Yue Yu and Kaibo Shi
Sensors 2026, 26(3), 807; https://doi.org/10.3390/s26030807 - 26 Jan 2026
Viewed by 245
Abstract
In electroencephalogram (EEG)-based emotion recognition tasks, existing end-to-end approaches predominantly rely on real-valued neural networks, which mainly operate in the time–amplitude domain. However, EEG signals are a type of wave, intrinsically including frequency, phase, and amplitude characteristics. Real-valued architectures may struggle to capture amplitude–phase coupling and spectral structures that are crucial for emotion decoding. To the best of our knowledge, this work is the first to introduce complex-valued neural networks for EEG-based emotion recognition, upon which we design a new end-to-end architecture named Complex-valued EEGNet (CV-EEGNet). Beginning with raw EEG signals, CV-EEGNet transforms them into complex-valued spectra via the Fast Fourier Transform, then sequentially applies complex-valued spectral, spatial, and depthwise-separable convolution modules to extract frequency structures, spatial topologies, and high-level semantic representations while preserving amplitude–phase relationships. Finally, a complex-valued, fully connected classifier generates complex logits, and the final emotion predictions are derived from their magnitudes. Experiments on the SEED (three-class) and SEED-IV (four-class) datasets validate the effectiveness of the proposed method, with t-SNE visualizations further confirming the discriminability of the learned representations. These results show the potential of complex-valued neural networks for raw-signal EEG emotion recognition.
(This article belongs to the Section Biomedical Sensors)
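CV-EEGNet's layers are not given here, but complex-valued convolution is conventionally realized with two real convolutions via (a+ib)(w_r+iw_i) = (aw_r − bw_i) + i(aw_i + bw_r). A minimal sketch of that building block together with the FFT front end and magnitude readout the abstract describes; all sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    """Complex conv via two real convs:
    (a+ib)*(w_r+i w_i) = (a*w_r - b*w_i) + i(a*w_i + b*w_r)."""
    def __init__(self, in_ch, out_ch, kernel):
        super().__init__()
        self.wr = nn.Conv1d(in_ch, out_ch, kernel, padding=kernel // 2)
        self.wi = nn.Conv1d(in_ch, out_ch, kernel, padding=kernel // 2)

    def forward(self, re, im):
        return self.wr(re) - self.wi(im), self.wr(im) + self.wi(re)

# Complex spectra from raw EEG, as in the paper's front end (toy shapes):
x = torch.randn(4, 32, 1000)                   # batch, EEG channels, samples
spec = torch.fft.rfft(x, dim=-1)               # complex-valued spectrum
re, im = ComplexConv1d(32, 16, 7)(spec.real, spec.imag)
logits = torch.sqrt(re ** 2 + im ** 2 + 1e-9).mean(dim=-1)  # magnitude readout, (4, 16)
```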
27 pages, 3203 KB  
Article
Machine Learning and Physics-Informed Neural Networks for Thermal Behavior Prediction in Porous TPMS Metals
by Mohammed Yahya and Mohamad Ziad Saghir
Fluids 2026, 11(2), 29; https://doi.org/10.3390/fluids11020029 - 23 Jan 2026
Viewed by 220
Abstract
Triply periodic minimal surface (TPMS) structures provide high surface-area-to-volume ratios and tunable conduction pathways, but predicting their thermal behavior across different metallic materials remains challenging because multi-material experimentation is costly and full-scale simulations require extremely fine meshes to resolve the complex geometry. This study develops a physics-informed neural network (PINN) that reconstructs steady-state temperature fields in TPMS Gyroid structures using only two experimentally measured materials, Aluminum and Silver, which were tested under identical heat flux and flow conditions. The model incorporates conductivity ratio physics, Fourier-based thermal scaling, and complete spatial temperature profiles directly into the learning process to maintain physical consistency. Validation using the complete Aluminum and Silver datasets confirms excellent agreement for Aluminum and strong accuracy for Silver despite its larger temperature gradients. Once trained, the PINN can generalize the learned behavior to nine additional metals using only their conductivity ratios, without requiring new experiments or numerical simulations. A detailed heat transfer analysis is also performed for Magnesium, a lightweight material that is increasingly considered for thermal management applications. Since no published TPMS measurements for Magnesium currently exist, the predicted Nusselt numbers obtained from the PINN-generated temperature fields represent the first model-based evaluation of its convective performance. The results demonstrate that the proposed PINN provides an efficient, accurate, and scalable surrogate model for predicting thermal behavior across multiple metallic TPMS structures and supports the design and selection of materials for advanced porous heat technologies.
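The toy sketch below illustrates only the conductivity-ratio idea behind this kind of generalization: for conduction-dominated steady heat transfer at fixed flux, Fourier's law gives dT ~ qL/k, so the temperature rise above the coolant scales roughly with a conductivity ratio. The profile, property values, and linear-scaling assumption are illustrative, not the paper's trained model:

```python
import numpy as np

# Fourier's law for steady conduction at fixed heat flux: dT ~ q*L/k, so the
# rise above coolant temperature rescales by k_ref/k_new (toy assumption).
k = {"Aluminum": 237.0, "Silver": 429.0, "Magnesium": 156.0}  # W/(m*K)

x = np.linspace(0.0, 1.0, 5)                  # normalized streamwise position
T_coolant = 20.0
dT_aluminum = 8.0 * np.exp(-2.0 * x)          # stand-in for a predicted rise field

for metal, k_m in k.items():
    dT = dT_aluminum * k["Aluminum"] / k_m    # conductivity-ratio rescaling
    print(metal, np.round(T_coolant + dT, 2))
```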
22 pages, 2756 KB  
Article
DACL-Net: A Dual-Branch Attention-Based CNN-LSTM Network for DOA Estimation
by Wenjie Xu and Shichao Yi
Sensors 2026, 26(2), 743; https://doi.org/10.3390/s26020743 - 22 Jan 2026
Viewed by 130
Abstract
While deep learning methods are increasingly applied in the field of DOA estimation, existing approaches generally feed the real and imaginary parts of the covariance matrix directly into neural networks without optimizing the input features, which prevents classical attention mechanisms from improving accuracy. This paper proposes a spatio-temporal fusion model named DACL-Net for DOA estimation. The spatial branch applies a two-dimensional Fourier transform (2D-FT) to the covariance matrix, causing angles to appear as peaks in the magnitude spectrum. This operation transforms the original covariance matrix into a dark image with bright spots, enabling the convolutional neural network (CNN) to focus on the bright-spot components via an attention module. Additionally, a spectrum attention mechanism (SAM) is introduced to enhance the extraction of temporal features in the time branch. The model learns simultaneously from two data branches and finally outputs DOA results through a linear layer. Simulation results demonstrate that DACL-Net outperforms existing algorithms in terms of accuracy, achieving an RMSE of 0.04° at an SNR of 0 dB.
(This article belongs to the Section Communications)
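The 2D-FT effect is easy to demonstrate: for a uniform linear array, the covariance entries contain exp(jπ(m−n)sinθ) terms, so a 2D FFT of the covariance matrix concentrates each source into a bright spot at spatial frequency ±sin(θ)/2. A toy numpy illustration; the array size, SNR, and angles are arbitrary choices:

```python
import numpy as np

np.random.seed(0)
M, snr_db, snapshots = 16, 10, 200
angles = np.deg2rad([-20.0, 35.0])                      # true DOAs
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))  # ULA steering
S = (np.random.randn(len(angles), snapshots)
     + 1j * np.random.randn(len(angles), snapshots)) / np.sqrt(2)
X = A @ S + 10 ** (-snr_db / 20) * (
    np.random.randn(M, snapshots) + 1j * np.random.randn(M, snapshots)) / np.sqrt(2)
R = X @ X.conj().T / snapshots                          # sample covariance

# The 2D-FT turns the steering structure into bright spots on a dark image.
img = np.abs(np.fft.fftshift(np.fft.fft2(R, s=(128, 128))))
u = np.fft.fftshift(np.fft.fftfreq(128))                # bin -> sin(theta)/2
row, col = np.unravel_index(np.argmax(img), img.shape)
print("strongest source near", np.rad2deg(np.arcsin(2 * u[col])), "deg")
```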
16 pages, 1206 KB  
Article
HASwinNet: A Swin Transformer-Based Denoising Framework with Hybrid Attention for mmWave MIMO Systems
by Xi Han, Houya Tu, Jiaxi Ying, Junqiao Chen and Zhiqiang Xing
Entropy 2026, 28(1), 124; https://doi.org/10.3390/e28010124 - 20 Jan 2026
Viewed by 231
Abstract
Millimeter-wave (mmWave) massive multiple-input, multiple-output (MIMO) systems are a cornerstone technology for integrated sensing and communication (ISAC) in sixth-generation (6G) mobile networks. These systems provide high-capacity backhaul while simultaneously enabling high-resolution environmental sensing. However, accurate channel estimation remains highly challenging due to intrinsic noise sensitivity and clustered sparse multipath structures. These challenges are particularly severe under limited pilot resources and low signal-to-noise ratio (SNR) conditions. To address these difficulties, this paper proposes HASwinNet, a deep learning (DL) framework designed for mmWave channel denoising. The framework integrates a hierarchical Swin Transformer encoder for structured representation learning. It further incorporates two complementary branches. The first branch performs sparse token extraction guided by angular-domain significance. The second branch focuses on angular-domain refinement by applying discrete Fourier transform (DFT), squeeze-and-excitation (SE), and inverse DFT (IDFT) operations. This generates a mask that highlights angularly coherent features. A decoder combines the outputs of both branches with a residual projection from the input to yield refined channel estimates. Additionally, we introduce an angular-domain perceptual loss during training. This enforces spectral consistency and preserves clustered multipath structures. Simulation results based on the Saleh–Valenzuela (S–V) channel model demonstrate that HASwinNet achieves significant improvements in normalized mean squared error (NMSE) and bit error rate (BER). It consistently outperforms convolutional neural network (CNN), long short-term memory (LSTM), and U-Net baselines. Furthermore, experiments with reduced pilot symbols confirm that HASwinNet effectively exploits angular sparsity. The model retains a consistent advantage over baselines even under pilot-limited conditions. These findings validate the scalability of HASwinNet for practical 6G mmWave backhaul applications. They also highlight its potential in ISAC scenarios where accurate channel recovery supports both communication and sensing.
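The angular-refinement branch (DFT → squeeze-and-excitation → IDFT) can be sketched compactly. The gating granularity, tensor layout, and dimensions below are assumptions for illustration, not HASwinNet's actual design:

```python
import torch
import torch.nn as nn

class AngularSEMask(nn.Module):
    """DFT over the antenna axis, squeeze-and-excitation gating of the
    angular bins, then IDFT back -- a sketch of the refinement branch."""
    def __init__(self, n_ant, reduction=4):
        super().__init__()
        self.se = nn.Sequential(nn.Linear(n_ant, n_ant // reduction), nn.ReLU(),
                                nn.Linear(n_ant // reduction, n_ant), nn.Sigmoid())

    def forward(self, h):                      # h: (batch, n_ant, subcarriers), complex
        a = torch.fft.fft(h, dim=1)            # angular domain
        w = self.se(a.abs().mean(dim=-1))      # one gate per angular bin
        a = a * w.unsqueeze(-1)                # emphasize coherent clusters
        return torch.fft.ifft(a, dim=1)        # back to the antenna domain

h = torch.randn(2, 32, 64, dtype=torch.cfloat)
out = AngularSEMask(32)(h)                     # -> (2, 32, 64), complex
```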
27 pages, 4802 KB  
Article
Fine-Grained Radar Hand Gesture Recognition Method Based on Variable-Channel DRSN
by Penghui Chen, Siben Li, Chenchen Yuan, Yujing Bai and Jun Wang
Electronics 2026, 15(2), 437; https://doi.org/10.3390/electronics15020437 - 19 Jan 2026
Viewed by 183
Abstract
With the ongoing miniaturization of smart devices, fine-grained hand gesture recognition using millimeter-wave radar has attracted increasing attention, yet practical deployment remains challenging in continuous-gesture segmentation, robust feature extraction, and reliable classification. This paper presents an end-to-end fine-grained gesture recognition framework based on frequency-modulated continuous-wave (FMCW) millimeter-wave radar, including gesture design, data acquisition, feature construction, and neural network-based classification. Ten gesture types are recorded (eight valid gestures and two return-to-neutral gestures); for classification, the two return-to-neutral gesture types are merged into a single invalid class, yielding a nine-class task. A sliding-window segmentation method is developed using short-time Fourier transform (STFT)-based Doppler-time representations, and a dataset of 4050 labeled samples is collected. Multiple signal classification (MUSIC)-based super-resolution estimation is adopted to construct range–time and angle–time representations, and instance-wise normalization is applied to Doppler and range features to mitigate inter-individual variability without test leakage. For recognition, a variable-channel deep residual shrinkage network (DRSN) is employed to improve robustness to noise, supporting single-, dual-, and triple-channel feature inputs. Results under both subject-dependent evaluation with repeated random splits and subject-independent leave-one-subject-out (LOSO) cross-validation show that the DRSN architecture consistently outperforms the RefineNet-based baseline, and the triple-channel configuration achieves the best performance (98.88% accuracy). Overall, the variable-channel design enables flexible feature selection to meet diverse application requirements.
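A minimal version of STFT-based sliding-window segmentation might look like the following; the window length, the energy gate, and the synthetic "gesture" burst are illustrative stand-ins for the paper's radar data:

```python
import numpy as np
from scipy.signal import stft

def segment_gestures(beat_signal, fs, win=0.128, thresh_db=-25.0):
    """Doppler-time map via STFT, then an energy gate that flags frames
    likely to contain hand motion (threshold is illustrative)."""
    f, t, Z = stft(beat_signal, fs=fs, nperseg=int(win * fs))
    power_db = 10 * np.log10(np.abs(Z) ** 2 + 1e-12)
    frame_energy = power_db.max(axis=0)            # strongest Doppler bin per frame
    active = frame_energy > frame_energy.max() + thresh_db
    return t, active

fs = 2000
t = np.arange(0, 2, 1 / fs)
sig = np.where((t > 0.5) & (t < 1.2),
               np.sin(2 * np.pi * 200 * t),        # synthetic "gesture" burst
               0.02 * np.random.randn(len(t)))     # noise floor elsewhere
times, active = segment_gestures(sig, fs)
print(times[active][[0, -1]])                      # roughly [0.5, 1.2] s
```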
26 pages, 544 KB  
Article
Physics-Aware Deep Learning Framework for Solar Irradiance Forecasting Using Fourier-Based Signal Decomposition
by Murad A. Yaghi and Huthaifa Al-Omari
Algorithms 2026, 19(1), 81; https://doi.org/10.3390/a19010081 - 17 Jan 2026
Viewed by 158
Abstract
Photovoltaic systems have long been challenging to integrate with electrical power grids due to the randomness of solar irradiance. Deep learning (DL) has the potential to forecast solar irradiance; however, black-box DL models typically offer no interpretation, nor can they easily distinguish between deterministic astronomical cycles and random meteorological variability. The objective of this study was to develop and apply a new physics-aware deep learning framework that identifies and utilizes physical attributes of solar irradiance via Fourier-based signal decomposition. The proposed method decomposes the time series into a polynomial trend, a Fourier-based seasonal component, and a stochastic residual, each of which is processed within a different neural network path. A wide variety of architectures were tested (recurrent neural network (RNN), long short-term memory (LSTM), gated recurrent unit (GRU), convolutional neural network (CNN)) at multiple historical window sizes and forecast horizons on a diverse dataset spanning three years. All of the architectures tested demonstrated improved accuracy and robustness when using the physics-aware decomposition compared with the alternative methods. Of the architectures tested, the GRU was the most accurate and performed well in the overall evaluation, with an RMSE of 78.63 W/m² and an R² value of 0.9281 for 15 min ahead forecasting. Additionally, the Fourier-based methodology reduced the maximum absolute error by approximately 15% to 20%, depending on the architecture used, thereby mitigating the largest forecasting errors during periods of unstable weather. Overall, this framework represents a viable option for physically interpretable and computationally efficient real-time solar forecasting that bridges physical modeling and data-driven intelligence.
(This article belongs to the Special Issue Artificial Intelligence in Sustainable Development)
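The trend/seasonal/residual split at the heart of the framework is straightforward to reproduce in outline: fit a polynomial trend, regress sin/cos harmonics of the known astronomical period on the detrended series, and keep the remainder as the stochastic residual. A sketch in which the polynomial degree, harmonic count, and toy series are assumptions:

```python
import numpy as np

def physics_aware_decompose(y, t, period, degree=2, n_harmonics=3):
    """Split a series into polynomial trend, Fourier seasonal component at a
    known period, and stochastic residual (generic sketch of the idea)."""
    trend = np.polyval(np.polyfit(t, y, degree), t)
    detrended = y - trend
    w = 2 * np.pi / period                       # fundamental angular frequency
    basis = np.column_stack([f(k * w * t)
                             for k in range(1, n_harmonics + 1)
                             for f in (np.sin, np.cos)])
    coef, *_ = np.linalg.lstsq(basis, detrended, rcond=None)
    seasonal = basis @ coef
    return trend, seasonal, detrended - seasonal

t = np.arange(0, 10, 0.01)
y = 0.5 * t + 3 * np.sin(2 * np.pi * t / 1.0) + 0.2 * np.random.randn(len(t))
trend, seasonal, resid = physics_aware_decompose(y, t, period=1.0)
print(resid.std())   # ~0.2 once trend and the daily cycle are explained
```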
15 pages, 1488 KB  
Article
Identification of the Geographical Origins of Matcha Using Three Spectroscopic Methods and Machine Learning
by Meryem Taskaya, Rikuto Akiyama, Mai Kanetsuna, Murat Yigit, Yvan Llave and Takashi Matsumoto
AgriEngineering 2026, 8(1), 21; https://doi.org/10.3390/agriengineering8010021 - 8 Jan 2026
Viewed by 338
Abstract
For high-value-added products such as matcha, scientific confirmation of the origin is essential for quality assurance and fraud prevention. In this study, three nondestructive analytical techniques, specifically fluorescence (FF), near-infrared (NIR), and Fourier transform infrared (FT-IR) spectroscopy, were combined with machine learning algorithms to accurately identify the origin of Japanese matcha. FF data were analyzed using convolutional neural networks (CNNs), whereas NIR and FT-IR spectral data were analyzed using k-nearest neighbors (KNN), random forest (RF), logistic regression (LR), and support vector machine (SVM) models. The FT-IR–RF model demonstrated the highest accuracy (99.0%), followed by the NIR–KNN (98.7%) and FF–CNN (95.7%) models. Functional group absorption in FT-IR, moisture and carbohydrates in NIR, and amino acid and polyphenol fluorescence in FF contributed to the identification. These findings indicate that selecting an algorithm appropriate for the characteristics of the spectroscopic data is effective for improving accuracy. This method can quickly and nondestructively identify the origin of matcha and is expected to be applicable to other teas and agricultural products. This new approach contributes to the verification of the authenticity of food and improvement in its traceability.
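A spectra-plus-random-forest pipeline of the kind reported (FT-IR–RF) reduces to a few lines with scikit-learn; the synthetic spectra and the region labels below are placeholders for the measured absorbance data, not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for FT-IR spectra: rows = samples, cols = wavenumber bins.
rng = np.random.default_rng(0)
n_per_region, n_bins = 30, 400
regions = ["Uji", "Nishio", "Yame"]           # illustrative origin labels
X, y = [], []
for i, region in enumerate(regions):
    peak = 80 + 100 * i                       # each origin gets a shifted band
    base = np.exp(-0.5 * ((np.arange(n_bins) - peak) / 15.0) ** 2)
    X.append(base + 0.05 * rng.standard_normal((n_per_region, n_bins)))
    y += [region] * n_per_region
X = np.vstack(X)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # ~1.0 on this easy toy data
```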
24 pages, 15172 KB  
Article
Real-Time Hand Gesture Recognition for IoT Devices Using FMCW mmWave Radar and Continuous Wavelet Transform
by Anna Ślesicka and Adam Kawalec
Electronics 2026, 15(2), 250; https://doi.org/10.3390/electronics15020250 - 6 Jan 2026
Viewed by 375
Abstract
This paper presents an intelligent framework for real-time hand gesture recognition using Frequency-Modulated Continuous-Wave (FMCW) mmWave radar and deep learning. Unlike traditional radar-based recognition methods that rely on Discrete Fourier Transform (DFT) signal representations and focus primarily on classifier optimization, the proposed system introduces a novel pre-processing stage based on the Continuous Wavelet Transform (CWT). The CWT enables the extraction of discriminative time–frequency features directly from raw radar signals, improving the interpretability and robustness of the learned representations. A lightweight convolutional neural network architecture is then designed to process the CWT maps for efficient classification on edge IoT devices. Experimental validation with data collected from 20 participants performing five standardized gestures demonstrates that the proposed framework achieves an accuracy of up to 99.87% using the Morlet wavelet, with strong generalization to unseen users (82–84% accuracy). The results confirm that the integration of CWT-based radar signal processing with deep learning forms a computationally efficient and accurate intelligent system for human–computer interaction in real-time IoT environments.
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 4th Edition)
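The CWT pre-processing stage can be sketched with PyWavelets; the scale range and the chirp test signal are illustrative, and "morl" stands in for the Morlet wavelet the authors found best:

```python
import numpy as np
import pywt

def cwt_map(beat_signal, fs, n_scales=64):
    """Continuous wavelet transform (Morlet) of a radar beat signal,
    producing the kind of time-frequency map a CNN can classify."""
    scales = np.geomspace(2, 128, n_scales)
    coeffs, freqs = pywt.cwt(beat_signal, scales, "morl",
                             sampling_period=1 / fs)
    return np.abs(coeffs), freqs               # (n_scales, n_samples) magnitudes

fs = 4000
t = np.arange(0, 0.5, 1 / fs)
chirp = np.sin(2 * np.pi * (50 + 400 * t) * t)  # frequency-rising test tone
tf_map, freqs = cwt_map(chirp, fs)
print(tf_map.shape, freqs.min(), freqs.max())
```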
23 pages, 1037 KB  
Article
Acoustic Side-Channel Vulnerabilities in Keyboard Input Explored Through Convolutional Neural Network Modeling: A Pilot Study
by Michał Rzemieniuk, Artur Niewiarowski and Wojciech Książek
Appl. Sci. 2026, 16(2), 563; https://doi.org/10.3390/app16020563 - 6 Jan 2026
Viewed by 376
Abstract
This paper presents the findings of a pilot study investigating the feasibility of recognizing keyboard keystroke sounds using Convolutional Neural Networks (CNNs) as a means of simulating an acoustic side-channel attack aimed at recovering typed text. A dedicated dataset of keyboard audio recordings was collected and preprocessed using signal-processing techniques, including Fourier-transform-based feature extraction and mel-spectrogram analysis. Data augmentation methods were applied to improve model robustness, and a CNN-based prediction architecture was developed and trained. A series of experiments was performed under multiple conditions, including controlled laboratory settings, scenarios with background noise interference, tests involving a different keyboard model, and evaluations following model quantization. The results indicate that CNN-based models can achieve high keystroke-prediction accuracy, demonstrating that this class of acoustic side-channel attacks is technically viable. Additionally, the study outlines potential mitigation strategies designed to reduce exposure to such threats. Overall, the findings highlight the need for increased awareness of acoustic side-channel vulnerabilities and underscore the importance of further research to more comprehensively understand, evaluate, and prevent attacks of this nature.
(This article belongs to the Special Issue Artificial Neural Network and Deep Learning in Cybersecurity)
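The Fourier-based feature extraction described corresponds to a standard log mel-spectrogram front end; a sketch using librosa, where the FFT size, hop length, and mel-band count are assumptions and the random clip stands in for a recorded keystroke:

```python
import numpy as np
import librosa

def keystroke_features(clip, sr, n_mels=64):
    """Log mel-spectrogram of a short keystroke clip -- the kind of
    Fourier-based input a keystroke-recognition CNN is trained on."""
    mel = librosa.feature.melspectrogram(y=clip, sr=sr, n_fft=1024,
                                         hop_length=256, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

sr = 44_100
clip = np.random.randn(sr // 4).astype(np.float32)  # stand-in for a 250 ms keystroke
print(keystroke_features(clip, sr).shape)           # (64, frames)
```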
20 pages, 2906 KB  
Article
Research on Oil and Gas Pipeline Leakage Detection Based on MSCNN-Transformer
by Yingtao Zhang, Wenhe Li, Yang Wu and Huili Wei
Appl. Sci. 2026, 16(1), 480; https://doi.org/10.3390/app16010480 - 2 Jan 2026
Viewed by 376
Abstract
Detecting oil and gas leakage is vital to the safe operation of pipelines. Existing working-condition recognition methods are limited in processing and capturing the characteristics of complex, multi-category leakage signals. To improve the accuracy of oil and gas pipeline leakage detection, a leakage-condition recognition method based on a multi-scale convolutional neural network–Transformer (MSCNN-Transformer) is proposed. Firstly, to capture the global information and nonlinear characteristics of the time-series signal, the short-time Fourier transform (STFT) is used to generate time–frequency images. Furthermore, to enrich the feature information across dimensions, the one-dimensional signal and the two-dimensional time–frequency image are sampled by multi-scale convolution, and global relationships are established by the multi-head attention mechanism of the Transformer module. Finally, the leakage signal is identified by fusing the features and applying a classifier. The experimental results show that the proposed method performs well on the GPLA-12 dataset, with a recognition accuracy of 96.02%, and offers clear advantages over other leakage-signal recognition methods.
(This article belongs to the Section Energy Science and Technology)
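The multi-scale convolutional sampling can be illustrated with parallel 1-D convolutions of different kernel sizes whose feature maps are concatenated before the Transformer stage; the channel counts and kernel sizes below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MultiScaleConv1d(nn.Module):
    """Parallel convolutions with different receptive fields over the raw
    pipeline signal; concatenated maps feed a downstream Transformer."""
    def __init__(self, in_ch, out_ch, kernels=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernels)
        self.act = nn.ReLU()

    def forward(self, x):                      # x: (batch, channels, samples)
        return self.act(torch.cat([b(x) for b in self.branches], dim=1))

feats = MultiScaleConv1d(1, 16)(torch.randn(8, 1, 2048))  # -> (8, 48, 2048)
```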
14 pages, 7836 KB  
Article
Optimization of Lensless Imaging Using Ray Tracing
by Samira Arabpou and Simon Thibault
Appl. Sci. 2026, 16(1), 275; https://doi.org/10.3390/app16010275 - 26 Dec 2025
Viewed by 391
Abstract
Lensless microscopy is a well-established imaging approach that replaces traditional lenses with phase modulators, enabling compact, low-cost, and computationally driven analysis of biological samples. In this work, we show how ray tracing simulations can be used to optimize lensless imaging systems for automated classification, particularly for detecting red blood cell (RBC) disease. Rather than improving the machine learning classification algorithm, our focus is on refining optical parameters such as element spacing and modulator type to maximize classification performance. We modeled a lensless microscope in Zemax OpticStudio (ray tracing) and compared the results against Fourier optics simulations. Despite not explicitly modeling diffraction, ray tracing produced classification results largely consistent with wave optics simulations, confirming its effectiveness for parameter optimization in lensless imaging setups used for classification tasks. Furthermore, to show the flexibility of the ray tracing model, we introduced a microlens array (MLA) as the phase modulator and performed the classification task on the generated patterns. These results establish ray tracing as an efficient tool for the optical design of lensless microscopy systems intended for machine-learning-based biomedical applications. The developed lensless microscopy model enables the generation of datasets for training neural networks.
(This article belongs to the Special Issue Current Updates on Optical Scattering)
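For reference, the Fourier-optics baseline that ray-tracing results are typically checked against is an angular-spectrum propagator; a compact sketch in which the aperture, wavelength, and propagation distance are arbitrary (the paper's actual simulation setup is not specified here):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field by distance z via the angular spectrum
    method: H = exp(i*2*pi*z/lambda * sqrt(1-(lam*fx)^2-(lam*fy)^2))."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0                           # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

aperture = np.zeros((512, 512), dtype=complex)
aperture[240:272, 240:272] = 1.0               # 32x32-pixel square pinhole
pattern = np.abs(angular_spectrum(aperture, 0.5e-6, 1e-6, 200e-6)) ** 2
print(pattern.shape, pattern.max())            # diffraction pattern at z
```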
24 pages, 1607 KB  
Article
A Biomechanics-Guided and Time–Frequency Collaborative Deep Learning Framework for Parkinsonian Gait Severity Assessment
by Wei Lin, Tianqi Zhou and Qiwen Yang
Mathematics 2026, 14(1), 89; https://doi.org/10.3390/math14010089 - 26 Dec 2025
Viewed by 204
Abstract
Parkinson’s Disease (PD) is a neurodegenerative disorder in which gait abnormalities serve as key indicators of motor impairment and disease progression. Although wearable sensor-based gait analysis has advanced, existing methods still face challenges in modeling multi-sensor spatial relationships, extracting adaptive multi-scale temporal features, and effectively integrating time–frequency information. To address these issues, this paper proposes a multi-sensor gait neural network that integrates biomechanical priors with time–frequency collaborative learning for the automatic assessment of PD gait severity. The framework consists of three core modules: (1) BGS-GAT (Biomechanics-Guided Graph Attention Network), which constructs a sensor graph based on plantar anatomy and explicitly models inter-regional force dependencies via graph attention; (2) AMS-Inception1D (Adaptive Multi-Scale Inception-1D), which employs dilated convolutions and channel attention to extract multi-scale temporal features adaptively; and (3) TF-Branch (Time–Frequency Branch), which applies Real-valued Fast Fourier Transform (RFFT) and frequency-domain convolution to capture rhythmic and high-frequency components, enabling complementary time–frequency representation. Experiments on the PhysioNet multi-channel foot pressure dataset demonstrate that the proposed model achieves 0.930 in accuracy and 0.925 in F1-score for four-class severity classification, outperforming state-of-the-art deep learning models.
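The TF-Branch's RFFT front end is simple to outline: transform the force signals, then convolve over the magnitude spectrum to pick up rhythmic components. A sketch with illustrative channel counts and kernel size, not the paper's exact layer configuration:

```python
import torch
import torch.nn as nn

class TFBranch(nn.Module):
    """Frequency path of a gait model: RFFT of the force signals, then a
    1-D convolution over the magnitude spectrum (sizes are illustrative)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size=5, padding=2)
        self.pool = nn.AdaptiveAvgPool1d(1)

    def forward(self, x):                          # x: (batch, sensors, samples)
        spec = torch.fft.rfft(x, dim=-1).abs()     # rhythmic / high-freq content
        return self.pool(torch.relu(self.conv(spec))).squeeze(-1)

feats = TFBranch(16, 32)(torch.randn(4, 16, 1000))  # -> (4, 32)
```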