Search Results (543)

Search Parameters:
Keywords = training fidelity

19 pages, 3116 KiB  
Article
Few-Shot Intelligent Anti-Jamming Access with Fast Convergence: A GAN-Enhanced Deep Reinforcement Learning Approach
by Tianxiao Wang, Yingtao Niu and Zhanyang Zhou
Appl. Sci. 2025, 15(15), 8654; https://doi.org/10.3390/app15158654 - 5 Aug 2025
Abstract
To address the small-sample training bottleneck and inadequate convergence efficiency of Deep Reinforcement Learning (DRL)-based communication anti-jamming methods in complex electromagnetic environments, this paper proposes a Generative Adversarial Network-enhanced Deep Q-Network (GA-DQN) anti-jamming method. The method constructs a Generative Adversarial Network (GAN) to learn the time–frequency distribution characteristics of short-period jamming and to generate high-fidelity mixed samples. It then screens qualified samples using the Pearson correlation coefficient to form a sample set, which is fed into the DQN model for pre-training to expand the experience replay buffer, effectively improving the convergence speed and decision accuracy of DQN. Simulation results show that, under periodic jamming, the proposed algorithm significantly reduces the number of interference occurrences in the early communication stage and improves the convergence speed to a certain extent compared with the DQN algorithm. Under dynamic and intelligent jamming, it significantly outperforms the DQN, Proximal Policy Optimization (PPO), and Q-learning (QL) algorithms. Full article
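A minimal sketch of the sample-screening step named in this abstract: GAN-generated time–frequency samples are kept only if their Pearson correlation with a real reference sample exceeds a threshold, and the survivors seed the DQN experience replay buffer. The shapes, the 0.8 threshold, and the flattening scheme are illustrative assumptions, not values from the paper.

```python
import numpy as np

def screen_generated_samples(generated, reference, threshold=0.8):
    """Keep generated time-frequency maps whose Pearson correlation
    with the real reference map exceeds `threshold`."""
    ref = reference.ravel()
    kept = [s for s in generated
            if np.corrcoef(s.ravel(), ref)[0, 1] >= threshold]
    return np.stack(kept) if kept else np.empty((0,) + reference.shape)

# Toy usage: 100 synthetic 32x32 spectrograms screened against one real one.
rng = np.random.default_rng(0)
real = rng.random((32, 32))
fake = 0.9 * real[None] + 0.1 * rng.random((100, 32, 32))
replay_seed = screen_generated_samples(fake, real)
print(replay_seed.shape)  # samples that would pre-train the DQN replay buffer
```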

31 pages, 8580 KiB  
Article
TSA-GRU: A Novel Hybrid Deep Learning Module for Learner Behavior Analytics in MOOCs
by Soundes Oumaima Boufaida, Abdelmadjid Benmachiche, Makhlouf Derdour, Majda Maatallah, Moustafa Sadek Kahil and Mohamed Chahine Ghanem
Future Internet 2025, 17(8), 355; https://doi.org/10.3390/fi17080355 - 5 Aug 2025
Abstract
E-Learning is an emerging dominant phenomenon in education, making the development of robust models that can accurately represent the dynamic behavior of learners in MOOCs even more critical. In this article, we propose the Temporal Sparse Attention-Gated Recurrent Unit (TSA-GRU), a novel deep learning framework that combines TSA with a sequential encoder based on the GRU. This hybrid model reconstructs student response times and learning trajectories with high fidelity by leveraging the temporal embeddings of instructional and feedback activities. By dynamically filtering noise from student interactions, TSA-GRU generates context-aware representations that seamlessly integrate both short-term fluctuations and long-term learning patterns. Empirical evaluation on the 2009–2010 ASSISTments dataset demonstrates that TSA-GRU achieved a test accuracy of 95.60% and a test loss of 0.0209, outperforming the Modular Sparse Attention-Gated Recurrent Unit (MSA-GRU), Bayesian Knowledge Tracing (BKT), Performance Factors Analysis (PFA), and TSA under the same experimental design. TSA-GRU converged within five training epochs; thus, while TSA-GRU demonstrates strong predictive performance for knowledge tracing tasks, these findings are specific to the evaluated dataset and should not be regarded as conclusive for all data. Further statistical validation through five-fold cross-validation, confidence intervals, and paired t-tests confirmed the robustness, consistency, and statistically significant superiority of TSA-GRU over the baseline model MSA-GRU. TSA-GRU’s scalability and capacity to incorporate a temporal dimension of knowledge make it well-positioned to analyze complex learner behaviors and plan interventions for adaptive learning in computerized learning systems. Full article
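A hedged PyTorch sketch of the idea named in this abstract: a GRU encoder whose hidden states are pooled by a temporal attention layer sparsified to its top-k scores. Layer sizes, k, and the single-output head are illustrative assumptions, not the paper's actual TSA-GRU architecture.

```python
import torch
import torch.nn as nn

class SparseAttnGRU(nn.Module):
    def __init__(self, in_dim, hidden=64, k=8):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)   # attention score per time step
        self.head = nn.Linear(hidden, 1)    # e.g., probability of a correct response
        self.k = k

    def forward(self, x):                    # x: (batch, time, in_dim)
        h, _ = self.gru(x)                   # (batch, time, hidden)
        s = self.score(h).squeeze(-1)        # (batch, time)
        k = min(self.k, s.size(1))
        topk = s.topk(k, dim=1)
        masked = torch.full_like(s, float("-inf")).scatter(1, topk.indices, topk.values)
        w = torch.softmax(masked, dim=1).unsqueeze(-1)   # sparse attention weights
        ctx = (w * h).sum(dim=1)             # attention-pooled context vector
        return torch.sigmoid(self.head(ctx))

print(SparseAttnGRU(in_dim=16)(torch.randn(4, 50, 16)).shape)  # torch.Size([4, 1])
```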

22 pages, 3270 KiB  
Article
Deep Point Cloud Facet Segmentation and Applications in Downsampling and Crop Organ Extraction
by Yixuan Wang, Chuang Huang and Dawei Li
Appl. Sci. 2025, 15(15), 8638; https://doi.org/10.3390/app15158638 - 4 Aug 2025
Abstract
To address the issues in existing 3D point cloud facet generation networks, specifically the tendency to produce a large number of empty facets and the uncertainty in facet count, this paper proposes a novel deep learning framework for robust facet segmentation. Based on the generated facet set, two exploratory applications are further developed. First, to overcome the bottleneck whereby inaccurate empty-facet detection impairs downsampling performance, a facet-abstracted downsampling method is introduced. By using a learned facet classifier to filter out and discard empty facets, retaining only non-empty surface facets, and fusing point coordinates and local features within each facet, the method achieves significant compression of point cloud data while preserving essential geometric information. Second, to address the insufficient precision of organ segmentation in crop point clouds, a facet-growth-based segmentation algorithm is designed. The network first predicts edge scores for the facets to determine seed facets. The facets are then iteratively expanded according to adjacent-facet similarity until a complete organ region is enclosed, thereby enhancing segmentation accuracy across semantic boundaries. Finally, the proposed facet segmentation network is trained and validated on a synthetic dataset. Experiments show that the proposed approach significantly outperforms traditional methods in both downsampling accuracy and instance segmentation performance. In various crop scenarios, it demonstrates excellent geometric fidelity and semantic consistency, as well as strong generalization ability and practical application potential, providing new ideas for in-depth applications of facet-level features in 3D point cloud analysis. Full article
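A small numpy sketch of the facet-abstracted downsampling idea described here: points are grouped by a predicted facet label, facets flagged as empty by the classifier are discarded, and each surviving facet is compressed to the centroid of its points. The facet labels and empty-facet mask are assumed inputs; in the paper they come from the learned segmentation network and facet classifier.

```python
import numpy as np

def facet_downsample(points, facet_ids, is_empty):
    """points: (N, 3); facet_ids: (N,) int labels; is_empty: dict facet_id -> bool."""
    centroids = []
    for fid in np.unique(facet_ids):
        if is_empty.get(int(fid), False):
            continue                              # drop empty facets entirely
        centroids.append(points[facet_ids == fid].mean(axis=0))
    return np.asarray(centroids)

pts = np.random.rand(1000, 3)
fids = np.random.randint(0, 50, size=1000)
empty = {fid: (fid % 10 == 0) for fid in range(50)}     # pretend 5 facets are empty
print(facet_downsample(pts, fids, empty).shape)          # roughly (45, 3)
```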

14 pages, 1728 KiB  
Article
Accelerating High-Frequency Circuit Optimization Using Machine Learning-Generated Inverse Maps for Enhanced Space Mapping
by Jorge Davalos-Guzman, Jose L. Chavez-Hurtado and Zabdiel Brito-Brito
Electronics 2025, 14(15), 3097; https://doi.org/10.3390/electronics14153097 - 3 Aug 2025
Abstract
The optimization of high-frequency circuits remains a computationally intensive task due to the need for repeated high-fidelity electromagnetic (EM) simulations. To address this challenge, we propose a novel integration of machine learning-generated inverse maps within the space mapping (SM) optimization framework to significantly accelerate circuit optimization while maintaining high accuracy. The proposed approach leverages Bayesian Neural Networks (BNNs) and surrogate modeling techniques to construct an inverse mapping function that directly predicts design parameters from target performance metrics, bypassing iterative forward simulations. The methodology was validated on a low-pass filter optimization scenario, where the inverse surrogate model was trained on electromagnetic simulations from COMSOL Multiphysics 2024 (r6.3) and optimized using the MATLAB R2024b (r24.2) trust-region algorithm. Experimental results demonstrate that our approach reduces the number of high-fidelity simulations by over 80% compared to conventional SM techniques while achieving high accuracy, with a mean absolute error (MAE) of 0.0262 (0.47%). Additionally, convergence efficiency was significantly improved, with the inverse surrogate model requiring only 31 coarse-model simulations, compared to 580 in traditional SM. These findings demonstrate that machine learning-driven inverse surrogate modeling significantly reduces computational overhead, accelerates optimization, and enhances the accuracy of high-frequency circuit design. This approach offers a promising alternative to traditional SM methods, paving the way for more efficient RF and microwave circuit design workflows. Full article
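A hedged sketch of the inverse-mapping idea: a regressor is fit from performance metrics (inputs) to design parameters (outputs), so a target specification can be mapped directly to a candidate geometry without iterating forward EM simulations. An sklearn MLP stands in for the paper's Bayesian Neural Network, and the toy coarse model below is a made-up analytic function, not a COMSOL simulation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
design = rng.uniform(0.5, 2.0, size=(500, 2))           # e.g., filter widths/gaps
response = np.c_[design[:, 0] + 0.3 * design[:, 1],     # toy cutoff-frequency proxy
                 design[:, 0] * design[:, 1]]            # toy insertion-loss proxy

# Fit the *inverse* map: performance metrics -> design parameters.
inverse_map = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
inverse_map.fit(response, design)

target_spec = np.array([[1.6, 1.2]])                     # desired performance
candidate = inverse_map.predict(target_spec)             # starting point for trust-region SM
print(candidate)
```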
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)

24 pages, 23817 KiB  
Article
Dual-Path Adversarial Denoising Network Based on UNet
by Jinchi Yu, Yu Zhou, Mingchen Sun and Dadong Wang
Sensors 2025, 25(15), 4751; https://doi.org/10.3390/s25154751 - 1 Aug 2025
Abstract
Digital image quality is crucial for reliable analysis in applications such as medical imaging, satellite remote sensing, and video surveillance. However, traditional denoising methods struggle to balance noise removal with detail preservation and lack adaptability to various types of noise. We propose a novel three-module architecture for image denoising, comprising a generator, a dual-path-UNet-based denoiser, and a discriminator. The generator creates synthetic noise patterns to augment training data, while the dual-path-UNet denoiser uses multiple receptive field modules to preserve fine details and dense feature fusion to maintain global structural integrity. The discriminator provides adversarial feedback to enhance denoising performance. This dual-path adversarial training mechanism addresses the limitations of traditional methods by simultaneously capturing both local details and global structures. Experiments on the SIDD, DND, and PolyU datasets demonstrate superior performance. We compare our architecture with the latest state-of-the-art GAN variants through comprehensive qualitative and quantitative evaluations. These results confirm the effectiveness of noise removal with minimal loss of critical image details. The proposed architecture enhances image denoising capabilities in complex noise scenarios, providing a robust solution for applications that require high image fidelity. By enhancing adaptability to various types of noise while maintaining structural integrity, this method provides a versatile tool for image processing tasks that require preserving detail. Full article
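A rough sketch of the data-augmentation role the generator plays in this abstract: training pairs are expanded by adding synthetic noise patterns to clean images so the denoiser sees a wider range of corruptions. Here the learned noise generator is replaced by simple Gaussian plus salt-and-pepper noise, purely for illustration.

```python
import numpy as np

def synthesize_noisy(clean, sigma=0.05, sp_fraction=0.01, rng=None):
    """Return a synthetically corrupted copy of `clean` (values in [0, 1])."""
    rng = rng if rng is not None else np.random.default_rng()
    noisy = clean + rng.normal(0.0, sigma, clean.shape)    # Gaussian noise
    mask = rng.random(clean.shape) < sp_fraction            # salt-and-pepper positions
    noisy[mask] = rng.integers(0, 2, int(mask.sum()))
    return np.clip(noisy, 0.0, 1.0)

clean = np.random.rand(64, 64)
pair = (clean, synthesize_noisy(clean))    # one (clean, noisy) training pair for the denoiser
```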
(This article belongs to the Section Sensing and Imaging)

20 pages, 586 KiB  
Article
Implementing High-Intensity Gait Training in Stroke Rehabilitation: A Real-World Pragmatic Approach
by Jennifer L. Moore, Pia Krøll, Håvard Hansen Berg, Merethe B. Sinnes, Roger Arntsen, Chris E. Henderson, T. George Hornby, Stein Arne Rimehaug, Ingvild Lilleheie and Anders Orpana
J. Clin. Med. 2025, 14(15), 5409; https://doi.org/10.3390/jcm14155409 - 31 Jul 2025
Abstract
Background: High-intensity gait training (HIT) is an evidence-based intervention recommended for stroke rehabilitation; however, its implementation in routine practice is inconsistent. This study examined the real-world implementation of HIT in an inpatient rehabilitation setting in Norway, focusing on fidelity, barriers, and knowledge translation (KT) strategies. Methods: Using the Knowledge-to-Action (KTA) framework, HIT was implemented in three phases: pre-implementation, implementation, and competency. Fidelity metrics and coverage were assessed in 99 participants post-stroke. Barriers and facilitators were documented and categorized using the Consolidated Framework for Implementation Research. Results: HIT was delivered with improved fidelity during the implementation and competency phases, reflected by increased stepping and heart rate metrics. A coverage rate of 52% was achieved. Barriers evolved over time, beginning with logistical and knowledge challenges and shifting toward decision-making complexity. The KT interventions, developed collaboratively by clinicians and external facilitators, supported implementation. Conclusions: Structured pre-implementation planning, clinician engagement, and external facilitation enabled high-fidelity HIT implementation in a real-world setting. Pragmatic, context-sensitive strategies were critical to overcoming evolving barriers. Future research should examine scalable, adaptive KT strategies that balance theoretical guidance with clinical feasibility to sustain evidence-based practice in rehabilitation. Full article

12 pages, 3315 KiB  
Article
NeRF-RE: An Improved Neural Radiance Field Model Based on Object Removal and Efficient Reconstruction
by Ziyang Li, Yongjian Huai, Qingkuo Meng and Shiquan Dong
Information 2025, 16(8), 654; https://doi.org/10.3390/info16080654 - 31 Jul 2025
Abstract
High-quality green gardens can markedly enhance the quality of life and mental well-being of their users. However, health and lifestyle constraints make it difficult for people to enjoy urban gardens, and traditional methods struggle to offer the high-fidelity experiences they need. This study introduces a 3D scene reconstruction and rendering strategy based on implicit neural representation through an efficient and removable neural radiance fields model (NeRF-RE). Leveraging neural radiance fields (NeRF), the model incorporates a multi-resolution hash grid and a proposal network to improve training efficiency and modeling accuracy, while integrating a segment-anything model to safeguard public privacy. The crabapple tree, extensively utilized in urban garden design across temperate regions of the Northern Hemisphere, serves as the test case: a dataset comprising 660 images of crabapple trees exhibiting three distinct geometric forms was collected to assess the NeRF-RE model’s performance. The results demonstrated that the ‘harvest gold’ crabapple scene had the highest reconstruction accuracy, with PSNR, LPIPS and SSIM of 24.80 dB, 0.34 and 0.74, respectively. Compared to the Mip-NeRF 360 model, the NeRF-RE model not only showed an up to 21-fold increase in training efficiency for the three types of crabapple trees, but also exhibited a less pronounced impact of dataset size on reconstruction accuracy. This study reconstructs real scenes with high fidelity using virtual reality technology, enabling people to enjoy the beauty of natural gardens at home and contributing to the publicity and promotion of urban landscapes. Full article
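A small sketch of how the reconstruction metrics quoted here (PSNR and SSIM) are typically computed between a rendered view and its ground-truth photograph; LPIPS requires a learned network and is omitted. The image pair below is synthetic, purely for illustration.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = np.random.rand(128, 128, 3)                                    # ground-truth photo
render = np.clip(gt + np.random.normal(0, 0.05, gt.shape), 0, 1)    # NeRF-style render

psnr = peak_signal_noise_ratio(gt, render, data_range=1.0)
ssim = structural_similarity(gt, render, channel_axis=-1, data_range=1.0)
print(f"PSNR {psnr:.2f} dB, SSIM {ssim:.3f}")
```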
(This article belongs to the Special Issue Extended Reality and Its Applications)

17 pages, 920 KiB  
Article
Enhancing Early GI Disease Detection with Spectral Visualization and Deep Learning
by Tsung-Jung Tsai, Kun-Hua Lee, Chu-Kuang Chou, Riya Karmakar, Arvind Mukundan, Tsung-Hsien Chen, Devansh Gupta, Gargi Ghosh, Tao-Yuan Liu and Hsiang-Chen Wang
Bioengineering 2025, 12(8), 828; https://doi.org/10.3390/bioengineering12080828 - 30 Jul 2025
Abstract
Timely and accurate diagnosis of gastrointestinal diseases (GIDs) remains a critical bottleneck in clinical endoscopy, particularly due to the limited contrast and sensitivity of conventional white light imaging (WLI) in detecting early-stage mucosal abnormalities. To overcome this, this research presents the Spectrum Aided Vision Enhancer (SAVE), an innovative, software-driven framework that transforms standard WLI into high-fidelity hyperspectral imaging (HSI) and simulated narrow-band imaging (NBI) without any hardware modification. SAVE leverages advanced spectral reconstruction techniques, including Macbeth Color Checker-based calibration, principal component analysis (PCA), and multivariate polynomial regression, achieving a root mean square error (RMSE) of 0.056 and a structural similarity index (SSIM) exceeding 90%. Deep learning models (ResNet-50, ResNet-101, EfficientNet-B2, EfficientNet-B5, and EfficientNetV2-B0) were trained and validated on the Kvasir v2 dataset (n = 6490) to assess diagnostic performance across six key GI conditions. Results demonstrated that SAVE-enhanced imagery consistently outperformed raw WLI across precision, recall, and F1-score metrics, with EfficientNet-B2 and EfficientNetV2-B0 achieving the highest classification accuracy. Notably, this performance gain was achieved without the need for specialized imaging hardware. These findings highlight SAVE as a transformative solution for augmenting GI diagnostics, with the potential to significantly improve early detection, streamline clinical workflows, and broaden access to advanced imaging, especially in resource-constrained settings. Full article
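A hedged sketch of the spectral-reconstruction pipeline named in this abstract: reflectance spectra are compressed with PCA, and a multivariate polynomial regression maps camera RGB values to the PCA coefficients so that a hyperspectral estimate can be recovered from ordinary white-light pixels. The synthetic data, polynomial degree, and component count are assumptions; the paper calibrates against a Macbeth Color Checker.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
spectra = rng.random((24, 31))             # 24 calibration patches x 31 spectral bands
rgb = spectra @ rng.random((31, 3))        # toy camera response

pca = PCA(n_components=6).fit(spectra)     # low-dimensional spectral basis
coeffs = pca.transform(spectra)

poly = PolynomialFeatures(degree=2)
reg = LinearRegression().fit(poly.fit_transform(rgb), coeffs)    # RGB -> PCA coefficients

est = pca.inverse_transform(reg.predict(poly.transform(rgb)))    # reconstructed spectra
print(f"toy RMSE: {np.sqrt(np.mean((est - spectra) ** 2)):.4f}")
```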

13 pages, 3685 KiB  
Article
A Controlled Variation Approach for Example-Based Explainable AI in Colorectal Polyp Classification
by Miguel Filipe Fontes, Alexandre Henrique Neto, João Dallyson Almeida and António Trigueiros Cunha
Appl. Sci. 2025, 15(15), 8467; https://doi.org/10.3390/app15158467 - 30 Jul 2025
Abstract
Medical imaging is vital for diagnosing and treating colorectal cancer (CRC), a leading cause of mortality. Classifying colorectal polyps and CRC precursors remains challenging due to operator variability and expertise dependence. Deep learning (DL) models show promise in polyp classification but face adoption barriers due to their ‘black box’ nature, limiting interpretability. This study presents an example-based explainable artificial intelligence (XAI) approach using Pix2Pix to generate synthetic polyp images with controlled size variations and LIME to explain classifier predictions visually. EfficientNet and Vision Transformer (ViT) models were trained on datasets of real and synthetic images, achieving strong baseline accuracies of 94% and 96%, respectively. Image quality was assessed using PSNR (18.04), SSIM (0.64), and FID (123.32), while classifier robustness was evaluated across polyp sizes. Results show that Pix2Pix effectively controls image attributes such as polyp size despite limitations in visual fidelity. LIME integration revealed classifier vulnerabilities, underscoring the value of complementary XAI techniques. This enhances DL model interpretability and deepens understanding of model behaviour. The findings contribute to developing explainable AI tools for polyp classification and CRC diagnosis. Future work will improve synthetic image quality and refine XAI methodologies for broader clinical use. Full article
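A minimal illustration of the LIME step described here, using the `lime` package's image explainer: a classifier's prediction on one polyp image is explained by perturbing superpixels and fitting a local surrogate. The classifier below is a stand-in returning random probabilities; in practice it would be the trained EfficientNet or ViT.

```python
import numpy as np
from lime import lime_image

def classifier_fn(batch):                  # stand-in for model.predict on a batch of images
    rng = np.random.default_rng(0)
    p = rng.random((len(batch), 2))
    return p / p.sum(axis=1, keepdims=True)

image = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)   # placeholder polyp image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, classifier_fn,
                                         top_labels=1, num_samples=200)
label = explanation.top_labels[0]
_, mask = explanation.get_image_and_mask(label, positive_only=True,
                                         num_features=5, hide_rest=False)
print(mask.shape)   # superpixel regions LIME weighs most heavily
```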

32 pages, 9710 KiB  
Article
Early Detection of ITSC Faults in PMSMs Using Transformer Model and Transient Time-Frequency Features
by Ádám Zsuga and Adrienn Dineva
Energies 2025, 18(15), 4048; https://doi.org/10.3390/en18154048 - 30 Jul 2025
Abstract
Inter-turn short-circuit (ITSC) faults in permanent magnet synchronous machines (PMSMs) present a significant reliability challenge in electric vehicle (EV) drivetrains, particularly under non-stationary operating conditions characterized by inverter-driven transients, variable loads, and magnetic saturation. Existing diagnostic approaches, including motor current signature analysis (MCSA) and wavelet-based methods, are primarily designed for steady-state conditions and rely on manual feature selection, limiting their applicability in real-time embedded systems. Furthermore, the lack of publicly available, high-fidelity datasets capturing the transient dynamics and nonlinear flux-linkage behaviors of PMSMs under fault conditions poses an additional barrier to developing data-driven diagnostic solutions. To address these challenges, this study introduces a simulation framework that generates a comprehensive dataset using finite element method (FEM) models, incorporating magnetic saturation effects and inverter-driven transients across diverse EV operating scenarios. Time-frequency features extracted via the Discrete Wavelet Transform (DWT) from stator current signals are used to train a Transformer model for automated ITSC fault detection. The Transformer model, leveraging self-attention mechanisms, captures both local transient patterns and long-range dependencies within the time-frequency feature space. Unlike recurrent models such as LSTMs or RNNs, this architecture operates without sequential processing, enabling efficient inference with a relatively low parameter count, which is advantageous for embedded applications. The proposed model achieves 97% validation accuracy on simulated data, demonstrating its potential for real-time PMSM fault detection. Additionally, the provided dataset and methodology facilitate reproducible research in ITSC diagnostics under realistic EV operating conditions. Full article
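A short sketch of the DWT feature-extraction stage described in this abstract: a stator current trace is decomposed into wavelet sub-bands and summarized by per-band energies, giving a compact time–frequency feature vector for the downstream Transformer. The wavelet choice, decomposition level, and toy signal are assumptions; the sketch uses the PyWavelets package.

```python
import numpy as np
import pywt

fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
current = np.sin(2 * np.pi * 50 * t) + 0.05 * np.random.randn(t.size)  # toy phase current

coeffs = pywt.wavedec(current, "db4", level=4)          # [cA4, cD4, cD3, cD2, cD1]
energies = np.array([np.sum(c ** 2) for c in coeffs])   # per-sub-band energy
features = energies / energies.sum()                    # normalized energy distribution
print(features)                                         # one feature vector per signal window
```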
(This article belongs to the Special Issue Application of Artificial Intelligence in Power and Energy Systems)

12 pages, 274 KiB  
Article
Transforming Communication and Non-Technical Skills in Intermediate Care Nurses Through Ultra-Realistic Clinical Simulation: A Cross-Sectional Study
by Mireia Adell-Lleixà, Francesc Riba-Porquet, Laia Grau-Castell, Lidia Sarrió-Colás, Marta Ginovart-Prieto, Elisa Mulet-Aloras and Silvia Reverté-Villarroya
Nurs. Rep. 2025, 15(8), 272; https://doi.org/10.3390/nursrep15080272 - 29 Jul 2025
Abstract
Background: Intermediate care units face growing complexity due to aging populations and chronic illnesses. Non-technical skills (NTSs) such as empathy and communication are crucial for quality care. We aimed to examine the relationship between communication skills, self-efficacy, and sense of coherence among intermediate care nurses. Methods: We conducted an observational, cross-sectional study with 60 intermediate care nurses from three units in a Catalan hospital, Spain. Participants engaged in high-fidelity simulation of geriatric end-of-life scenarios using an ultra-realistic manikin representing a geriatric patient at the end of life. NTSs were measured using validated tools: the Health Professionals Communication Skills Scale (HP-CSS), the General Self-Efficacy Scale, and the Sense of Coherence Questionnaire (OLQ-13). Sessions followed INACSL standards, including prebriefing, simulation, and debriefing phases. Results: Post-simulation outcomes revealed significant gains in interpersonal competencies, with men reporting higher assertiveness (p = 0.015) and greater satisfaction with both the simulation experience (p = 0.003) and the instructor (p = 0.008), underscoring gender-related perceptions in immersive training. Conclusions: Ultra-realistic clinical simulation is effective in enhancing NTSs among intermediate care nurses, contributing to improved care quality and clearer professional profiles in geriatric nursing. Full article
(This article belongs to the Special Issue Innovations in Simulation Based Education in Healthcare)
28 pages, 10432 KiB  
Review
Rapid CFD Prediction Based on Machine Learning Surrogate Model in Built Environment: A Review
by Rui Mao, Yuer Lan, Linfeng Liang, Tao Yu, Minhao Mu, Wenjun Leng and Zhengwei Long
Fluids 2025, 10(8), 193; https://doi.org/10.3390/fluids10080193 - 28 Jul 2025
Abstract
Computational Fluid Dynamics (CFD) is regarded as an important tool for analyzing the flow field, thermal environment, and air quality around the built environment. However, for built environment applications, the high computational cost of CFD hinders large-scale scenario simulation and efficient design optimization. In built environment research, surrogate modeling has become a key technology for bridging the need for high-fidelity CFD simulation with the need for rapid prediction, whereas the low-dimensional nature of traditional surrogate models cannot match the physical complexity and prediction needs of built flow fields. Therefore, combining machine learning (ML) with CFD to predict flow fields in built environments offers a promising way to increase simulation speed while maintaining reasonable accuracy. This review briefly surveys traditional surrogate models and focuses on ML-based surrogate models, especially the application of neural network architectures to rapid flow-field prediction in the built environment. The review indicates that ML accelerates the three core aspects of CFD, namely mesh preprocessing, numerical solving, and post-processing visualization, in order to achieve efficient coupled CFD simulation. Although ML surrogate models still face challenges such as data availability, multi-physics field coupling, and generalization capability, the emergence of physics-informed data enhancement techniques effectively alleviates these problems. Meanwhile, the integration of traditional methods with ML can further enhance the comprehensive performance of surrogate models. Notably, the online updating of trained ML models using transfer learning strategies deserves further research. These advances will provide an important basis for advancing efficient and accurate operational solutions in sustainable building design and operation. Full article
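For contrast with the neural surrogates this review focuses on, here is a compact sketch of a classic reduced-order surrogate: snapshot flow fields are compressed with POD (via SVD) and a regressor maps design parameters to POD coefficients, so new fields are predicted without running CFD. The snapshot data and dimensions are synthetic placeholders, not taken from any study cited in the review.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
params = rng.uniform(0, 1, size=(40, 2))        # e.g., inlet velocity, opening size
snapshots = np.stack([np.outer(np.sin(p[0] * np.arange(50)),
                               np.cos(p[1] * np.arange(30))).ravel()
                      for p in params])          # 40 "flow fields" of 1500 cells each

mean = snapshots.mean(axis=0)
_, _, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
modes = Vt[:8]                                   # first 8 POD modes
coeffs = (snapshots - mean) @ modes.T

surrogate = Ridge(alpha=1e-3).fit(params, coeffs)               # params -> POD coefficients
new_field = mean + surrogate.predict([[0.4, 0.7]]) @ modes      # predicted flow field
print(new_field.shape)                                          # (1, 1500)
```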
(This article belongs to the Special Issue Feature Reviews for Fluids 2025–2026)

27 pages, 30210 KiB  
Article
Research on a Rapid Three-Dimensional Compressor Flow Field Prediction Method Integrating U-Net and Physics-Informed Neural Networks
by Chen Wang and Hongbing Ma
Mathematics 2025, 13(15), 2396; https://doi.org/10.3390/math13152396 - 25 Jul 2025
Abstract
This paper presents a neural network model, PINN-AeroFlow-U, for reconstructing full-field aerodynamic quantities around three-dimensional compressor blades, including regions near the wall. This model is based on structured CFD training data and physics-informed loss functions and is proposed for direct 3D compressor flow prediction. It maps flow data from the physical domain to a uniform computational domain and employs a U-Net-based neural network capable of capturing the sharp local transitions induced by fluid acceleration near the blade leading edge, as well as learning flow features associated with internal boundaries (e.g., the wall boundary). The inputs to PINN-AeroFlow-U are the flow-field coordinate data from high-fidelity multi-geometry blade solutions, the 3D blade geometry, and the first-order metric coefficients obtained via mesh transformation. Its outputs include the pressure field, temperature field, and velocity vector field within the blade passage. To enhance physical interpretability, the network’s loss function incorporates both the Euler equations and gradient constraints. PINN-AeroFlow-U achieves prediction errors of 1.063% for the pressure field and 2.02% for the velocity field, demonstrating high accuracy. Full article
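A schematic sketch of the composite loss idea in this abstract: the data-fitting term is supplemented with a physics-motivated penalty. The real model enforces the Euler equations and metric-aware gradient constraints; the crude finite-difference divergence penalty below only illustrates how such a term is added to the loss.

```python
import torch

def physics_informed_loss(pred_u, pred_v, target_u, target_v, lam=0.1):
    """Data MSE plus a toy continuity-like residual on a 2-D velocity field."""
    data = torch.mean((pred_u - target_u) ** 2 + (pred_v - target_v) ** 2)
    du_dx = pred_u[:, :, 1:] - pred_u[:, :, :-1]          # finite differences in x
    dv_dy = pred_v[:, 1:, :] - pred_v[:, :-1, :]          # finite differences in y
    residual = torch.mean((du_dx[:, :-1, :] + dv_dy[:, :, :-1]) ** 2)
    return data + lam * residual

u, v = torch.randn(2, 32, 32), torch.randn(2, 32, 32)
print(physics_informed_loss(u, v, torch.zeros_like(u), torch.zeros_like(v)))
```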

19 pages, 6886 KiB  
Article
Nonparametric Prediction of Ship Maneuvering Motions Based on Interpretable NbeatsX Deep Learning Method
by Lijia Chen, Xinwei Zhou, Kezhong Liu, Yang Zhou and Hewei Tian
J. Mar. Sci. Eng. 2025, 13(8), 1417; https://doi.org/10.3390/jmse13081417 - 25 Jul 2025
Abstract
With the development of the shipbuilding industry, nonparametric prediction has become the mainstream method for predicting ship maneuvering motion. However, its lack of transparency and interpretability makes the prediction process challenging to track and understand. An interpretable deep learning framework based on the NbeatsX model is presented for nonparametric ship maneuvering motion prediction. Its three-tier fully connected architecture incorporates trend, seasonal, and exogenous constraints to decompose motion data, enhancing temporal and contextual learning while rendering the prediction process transparent. On the KVLCC2 zig-zag maneuver dataset, NbeatsX achieves NRMSEs of 0.01872, 0.01234, and 0.01661 for surge speed, sway speed, and yaw rate, with SMAPEs of 9.21%, 6.40%, and 7.66% and R2 values all above 0.995, yielding a more than 20% average error reduction compared with LS-SVM, LSTM, and LSTM–Attention and reducing total training time by about 15%. This method unifies high-fidelity forecasting with transparent decision tracing. It is an effective aid for ship maneuvering, offering more credible support for maritime navigation and safety decision-making, and it has substantial practical application potential. Full article
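For reference, a small sketch of the error metrics quoted here (NRMSE and SMAPE) as they are commonly defined; the paper may use a slightly different normalization, so treat these definitions as assumptions.

```python
import numpy as np

def nrmse(y_true, y_pred):
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())     # range-normalized RMSE

def smape(y_true, y_pred):
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return 100.0 * np.mean(np.abs(y_true - y_pred) / denom)

y = 1.5 + np.sin(np.linspace(0, 6, 200))            # e.g., a measured sway-speed trace
yhat = y + np.random.normal(0, 0.02, y.size)        # a model prediction
print(nrmse(y, yhat), smape(y, yhat))
```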
(This article belongs to the Section Ocean Engineering)

20 pages, 1354 KiB  
Article
On the Development of a Neural Network Architecture for Magnetometer-Based UXO Classification
by Piotr Ściegienka and Marcin Blachnik
Appl. Sci. 2025, 15(15), 8274; https://doi.org/10.3390/app15158274 - 25 Jul 2025
Abstract
The classification of Unexploded Ordnance (UXO) from magnetometer data is a critical but challenging task, frequently hindered by the scarcity of data required for training robust machine learning models. To address this, we leverage a high-fidelity digital twin to generate a comprehensive dataset of magnetometer signals from both UXO and non-UXO objects, incorporating complex remanent magnetization effects. In this study, we design and evaluate a custom Convolutional Neural Network (CNN) for UXO classification and compare it against classical machine learning baselines, including Random Forest and kNN. Our CNN model achieves a balanced accuracy of 84.65%, significantly outperforming traditional models, which exhibit performance collapse under slight distortions such as additive noise, drift, and time-warping. Additionally, we present a compact two-block CNN variant that retains competitive accuracy while reducing the number of learnable parameters by approximately 33%, making it suitable for real-time onboard classification in underwater vehicle missions. Through extensive ablation studies, we confirm that architectural components such as residual skip connections and element-wise batch normalization are crucial for achieving model stability and performance. The results also highlight practical implications for underwater vehicle survey design, emphasizing the need to mitigate signal drift and maintain constant survey speeds. This work not only provides a robust deep learning model for UXO classification, but also offers actionable suggestions for improving both model deployment and data acquisition protocols in the field. Full article
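An illustrative PyTorch sketch of the architectural ingredients this abstract credits for stability: a small 1-D convolutional block with a residual skip connection and batch normalization, applied to a magnetometer trace. Channel counts, kernel sizes, and the two-class head are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=5, padding=2)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=5, padding=2)
        self.bn2 = nn.BatchNorm1d(channels)

    def forward(self, x):
        h = torch.relu(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        return torch.relu(h + x)                   # residual skip connection

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3),    # raw magnetometer trace -> features
    ResBlock1d(16),
    ResBlock1d(16),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                              # UXO vs. non-UXO logits
)
print(model(torch.randn(8, 1, 256)).shape)         # torch.Size([8, 2])
```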
(This article belongs to the Section Marine Science and Engineering)