Search Results (37,715)

Search Parameters:
Keywords = neural network modeling

20 pages, 6455 KB  
Article
Lightweight Deep Learning Framework for Real-Time PRPD-Based Insulation Defect Classification in Medium-Voltage Cable Testing
by Paweł Kluge, Jacek Starzyński, Wojciech Kołtunowicz, Tomasz Bednarczyk and Łukasz Kolimas
Energies 2026, 19(9), 2029; https://doi.org/10.3390/en19092029 - 22 Apr 2026
Abstract
Partial discharge (PD) measurements are crucial for evaluating the condition of the insulation systems of medium-voltage (MV) cables and their accessories. However, identifying PD defect types from phase-resolved partial discharge (PRPD) patterns still largely relies on expert knowledge. In this paper, the authors critically evaluate lightweight deep neural network architectures for automated classification of insulation defects from PRPD patterns: YOLOv8n, the MobileNetV2–YOLO hybrid network, and a compact SqueezeNet-based model. PD measurements were performed in a controlled environment in a factory laboratory for MV power cables in order to better evaluate the capability of the investigated models. The results demonstrate that lightweight deep neural architectures can effectively classify PRPD patterns and be deployed in a real measurement environment. The proposed approach has been integrated with the OMICRON MPD Suite measurement system, enabling automated defect recognition and visualisation during routine testing of MV cables.
19 pages, 2352 KB  
Article
Interval Prediction of Remaining Useful Life Based on Uncertainty Quantification with Bayesian Convolutional Neural Networks Featuring Dual-Output Units
by Zhendong Qu, Jialong He, Yan Liu, Song Mao and Xiaowu Han
Sensors 2026, 26(9), 2592; https://doi.org/10.3390/s26092592 - 22 Apr 2026
Abstract
Existing remaining useful life (RUL) prediction methods do not fully account for the uncertainties caused by data scarcity and inherent noise, and they also suffer from low reliability of RUL point estimates. To tackle these challenges, this paper proposes a Bayesian convolutional neural network with dual-output units for RUL interval prediction. The network employs the negative log-likelihood as the loss function. Thanks to its dual-output structure, it not only provides point estimates but also quantifies the aleatoric uncertainty inherent in the data. During training, the CNN is reformulated using Bayesian principles, and the Bayes-by-backprop method is applied to train the network. This transformation converts model parameters from fixed values into random variables, so that the epistemic uncertainty caused by model inaccuracies and limited data can also be quantified. Experimental validation on the IEEE PHM Challenge 2012 dataset showed that the proposed method achieved higher prediction accuracy than state-of-the-art uncertainty-aware prediction approaches, indicating better applicability in engineering practice.
(This article belongs to the Special Issue Sensing Technologies in Industrial Defect Detection)
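The dual-output idea in the abstract above — one head predicting the RUL point estimate, a second predicting its variance, trained with a negative log-likelihood loss — can be sketched in a few lines of NumPy. This is an illustrative stand-in under a Gaussian assumption, not the authors' implementation:

```python
import numpy as np

def gaussian_nll(y_true, mu, log_var):
    """Negative log-likelihood of y_true under N(mu, exp(log_var)).

    A dual-output network predicts both mu (the RUL point estimate)
    and log_var (the log of the aleatoric variance); minimizing this
    loss fits both heads jointly. Constant terms are dropped.
    """
    return float(np.mean(0.5 * (log_var + (y_true - mu) ** 2 / np.exp(log_var))))

y = np.array([1.0, 2.0, 3.0])

# Exact predictions with unit variance (log_var = 0) give zero loss.
loss_perfect = gaussian_nll(y, y, np.zeros(3))

# A biased prediction with the same claimed variance raises the loss.
loss_noisy = gaussian_nll(y, y + 1.0, np.zeros(3))
```

In a real dual-output network both `mu` and `log_var` would come from the final layer, and predicting `log_var` rather than the variance itself keeps the variance positive without constraints.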
16 pages, 2149 KB  
Article
Pitot Tube Fault Warning Method Based on Fully Connected Neural Networks
by Hongyu Liu, Bijiang Lv, Yuexin Zhong, Ke Gao and Jie Chen
Appl. Sci. 2026, 16(9), 4104; https://doi.org/10.3390/app16094104 - 22 Apr 2026
Abstract
The pitot tube is the core sensor through which an aircraft obtains external atmospheric data, and its failure has a serious impact on flight safety. However, because its structure and principle are relatively simple, manufacturers have generally not adopted health-monitoring methods for it, owing to cost and complexity considerations. This paper develops a pitot tube fault warning method using a fully connected neural network (FCNN) based on data collected by the pitot tube itself. By selecting parameters and extracting fault features from flight record data, an FCNN-based fault warning model is constructed. The effectiveness of the proposed method is verified through fault warning experiments on actual flight record data, and it can provide a technical reference for pitot tube fault warning during future aircraft route operation.
(This article belongs to the Section Aerospace Science and Engineering)
26 pages, 2864 KB  
Article
FEM-Based Hybrid Compression Framework with Pipeline Implementation for Efficient Deep Neural Networks on Tiny ImageNet
by Areej Hamza, Amel Tuama and Asraf Mohamed Moubark
Big Data Cogn. Comput. 2026, 10(5), 131; https://doi.org/10.3390/bdcc10050131 - 22 Apr 2026
Abstract
The high accuracy achieved by deep learning techniques has made them indispensable in computer vision applications. However, their substantial memory demands and high computational complexity limit their deployment in resource-constrained environments. To address this challenge, this study introduces a Feature Enhancement Module (FEM) as part of a unified hybrid compression framework that combines mixed-precision quantization and structured pruning to improve model efficiency. Experimental results on the Tiny ImageNet dataset using ResNet50 and MobileNetV3 architectures demonstrate the strong adaptability and scalability of the proposed approach. Compared with state-of-the-art compression methods, the proposed FEM-based framework achieves up to 6% improvement in Top-1 accuracy, while reducing memory usage by 32.26% and improving inference speed by 66%. Furthermore, the ablation study demonstrates that incorporating the FEM module leads to up to 24% improvement over the baseline model, highlighting its effectiveness. The results further show that FEM effectively preserves inter-channel feature representation stability even under aggressive compression, making it well suited for real-time processing and practical Artificial Intelligence (AI) applications. By maintaining semantic richness while significantly reducing computational cost, the proposed method bridges the gap between high-performance deep models and lightweight, deployable solutions. Overall, the FEM-based hybrid compression framework establishes a scalable and architecture-independent foundation for sustainable deep learning in resource-limited environments.
17 pages, 2160 KB  
Article
Research on Coal and Rock Identification by Integrating Terahertz Time-Domain Spectroscopy and Multiple Machine Learning Algorithms
by Dongdong Ye, Lipeng Hu, Jianfei Xu, Yadong Yang, Zeping Liu, Sitong Li, Jiabao Li, Longhai Liu and Changpeng Li
Photonics 2026, 13(5), 409; https://doi.org/10.3390/photonics13050409 - 22 Apr 2026
Abstract
Aiming to address the problems of low accuracy in coal–rock identification during coal mining, which lead to energy waste and safety hazards, a high-precision coal–rock medium identification method combining terahertz time-domain spectroscopy technology and multiple machine learning algorithms is proposed. By preparing coal–rock samples with a gradient change in coal content, terahertz time-domain spectroscopy data of coal–rock mixed media are collected, and optical parameters such as the refractive index and absorption coefficient are extracted. Principal component analysis is used to reduce the dimensionality of the terahertz data, and machine learning algorithms such as support vector machines, least squares support vector machines, artificial neural networks, and random forests are adopted for classification and identification. The study found that terahertz waves are more sensitive to coal–rock media in the 0.7–1.3 THz frequency band, and that the refractive index and absorption coefficient of coal–rock mixed media are significantly positively correlated with coal content within the range of 0–30%. After feature extraction and K-fold cross-validation, the random forest model achieved a coal–rock classification accuracy of over 96% on the test set, significantly outperforming the other algorithms compared. The research verifies the efficiency and practicality of terahertz technology combined with multiple machine learning algorithms in coal–rock identification, providing a new method for fields such as mineral separation. Within its applicable range, this method breaks through, to a certain extent, the accuracy bottleneck of traditional coal–rock identification technologies, offering a new solution for real-time detection of coal–rock interfaces and promising to further reduce the risks of ineffective mining and roof accidents.
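The dimensionality-reduction step described above — principal component analysis on the terahertz spectra before classification — can be sketched with plain NumPy via the SVD. The mock data and component count below are illustrative, not the paper's:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project spectra onto their top principal components via SVD.

    X: (n_samples, n_features) matrix, e.g. absorption-coefficient
    spectra sampled over the 0.7-1.3 THz band. Returns the scores
    (n_samples, n_components) that feed the downstream classifier.
    """
    Xc = X - X.mean(axis=0)          # center each spectral feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # scores in the reduced space

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))       # 40 mock spectra, 100 frequency bins
Z = pca_reduce(X, 5)                 # 5 components, chosen arbitrarily here
```

The reduced scores `Z` would then be split for K-fold cross-validation and passed to the classifiers; in practice the number of components is chosen from the explained-variance ratio rather than fixed in advance.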
22 pages, 1877 KB  
Article
LTiT: A Deep Learning Model for Subway Section Passenger Flow Prediction Based on LSTM-TSSA-iTransformer
by Jie Liu, Yanzhan Chen, Yange Li and Fan Yu
Sensors 2026, 26(9), 2584; https://doi.org/10.3390/s26092584 - 22 Apr 2026
Abstract
As a vital part of the urban public transportation system, subway passenger flow prediction plays a crucial role in alleviating traffic congestion, improving transportation infrastructure, and optimizing the travel experience. Existing subway passenger flow prediction mainly focuses on short-term predictions of inbound/outbound passenger flow and origin–destination (O-D) demand. Subway section passenger flow prediction can reflect passenger fluctuations across different line segments more directly and offer robust support for management and resource allocation. We propose a subway section passenger flow generation model and a prediction method based on LTiT (LSTM-TSSA-iTransformer). The model is built on the overall architecture of the iTransformer encoder, and an LSTM (Long Short-Term Memory) network is employed to capture the temporal characteristics of subway section passenger flow. This is combined with TSSA (Token Statistics Self-Attention) to adaptively weight the information at key time points. The model's performance was evaluated by comparing its predictions with those of other models, including SARIMA (Seasonal Auto-Regressive Integrated Moving Average), BP neural networks, LightGBM (Light Gradient Boosting Machine), and LSTM. Experimental results show that the proposed model outperforms traditional baseline models in evaluation metrics such as R2, MAE, MSE, and MAPE. Finally, we further investigate the selection of input window length and prediction step size, and perform robustness analysis under different noise conditions.
(This article belongs to the Section Intelligent Sensors)
32 pages, 3351 KB  
Article
The TWC Sigma Model: A Nonlinear Correlation and Neural Network Approach for Spatial Source Detection
by Paolo Massimo Buscema, Marco Breda, Riccardo Petritoli, Giulia Massini and Guido Ferilli
J. Exp. Theor. Anal. 2026, 4(2), 16; https://doi.org/10.3390/jeta4020016 - 22 Apr 2026
Abstract
The TWC Sigma model, part of the Topological Weighted Centroid (TWC) family, is introduced as a spatial framework for source localization in systems where network information is incomplete or unavailable. Its architecture relies on two alternative approaches: one based on nonlinear correlation, capable of capturing complex spatial dependencies among observed signals, and another based on supervised neural networks, which use adaptive learning on a discretized spatial grid to estimate the probability of hidden source localization. In both cases, TWC Sigma provides a robust and consistent mechanism to estimate the probable positions of hidden sources using only spatial coordinates and signal intensity. Applications on both synthetic and real-world datasets—such as those collected by Minna-no Data Site on post-Fukushima radiocesium contamination—confirm the model’s ability to identify both primary and secondary emission zones with strong spatial coherence. These results highlight TWC Sigma as an efficient and interpretable model that can be used both independently and as a complementary tool to more complex network-based frameworks, offering rapid and reliable localization even in the presence of sparse, noisy, or heterogeneous data.
15 pages, 5064 KB  
Article
Physics-Guided Machine Learning with Flowing Material Balance Integration: A Novel Approach for Reliable Production Forecasting and Well Performance Analytics
by Eghbal Motaei, Tarek Ganat and Hai T. Nguyen
Energies 2026, 19(9), 2022; https://doi.org/10.3390/en19092022 - 22 Apr 2026
Abstract
Reliable production forecasting is a critical task for evaluating asset valuation and commercial performance in oil and gas reservoirs. Conventional short-term forecasting methods, such as Arps’ decline curve analysis, rely on simple mathematical curve fitting and often oversimplify reservoir performance. Long-term forecasting, on the other hand, requires complex multidisciplinary models that integrate geophysics, reservoir engineering, and production engineering, but these approaches are time-consuming and have high turnaround times. To bridge the gap between long- and short-term production forecasts, reduced-physics models such as Blasingame type curves have been developed, incorporating transient well behaviour derived from diffusivity equations and Darcy’s law. These models assume homogeneous, uniform reservoir properties, enabling faster results while honouring pressure performance. Despite their efficiency, however, they still face limitations in reliability, particularly when extended to long-term forecasts. This paper proposes a hybrid modelling approach that integrates flowing material balance (FMB) concepts into physics-informed neural networks (PiNNs) and machine learning models to improve the accuracy and reliability of production forecasting. The methodology introduces two hybrid strategies: neural network models enriched with an FMB-derived feature, and PiNNs. The first hybrid model uses the FMB-derived feature as an input to the neural networks. The second, the PiNN model, combines a data-driven loss function with a physics-based envelope that reflects the reservoir response. The primary loss function is the mean squared error, which minimizes the data misfit between predicted and observed production rates. The study validates both models through performance metrics such as RMSE, MAE, MAPE, and R2. Application to field data shows that integrating FMB into neural network models via the PiNN concept guides them to predict production rates more reliably over the full span of the tested period, which was the last year of unseen production data. Additionally, the proposed PiNN model can predict the well productivity index via hyper-tuning. The PiNN does not, however, improve on the metric performance of conventional neural networks, since it must also satisfy a material balance equation and therefore has fewer degrees of freedom.
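The composite objective behind such physics-informed training — an MSE data-misfit term plus a weighted physics penalty for violating a material-balance-style constraint — might look roughly like the sketch below. The `residual` term and the 0.1 weight are hypothetical placeholders, not the authors' formulation:

```python
import numpy as np

def pinn_loss(q_pred, q_obs, residual, lam=0.1):
    """Composite PiNN-style objective: data misfit plus a physics penalty.

    q_pred/q_obs: predicted vs observed production rates (MSE data term).
    residual: how far the prediction violates a material-balance-style
    constraint, as an abstract residual vector; `lam` weights the penalty.
    Both the residual and lam=0.1 are illustrative assumptions.
    """
    data_term = np.mean((q_pred - q_obs) ** 2)
    physics_term = np.mean(residual ** 2)
    return float(data_term + lam * physics_term)

q_obs = np.array([100.0, 95.0, 91.0])
loss_free = pinn_loss(q_obs, q_obs, np.zeros(3))                # perfect fit, constraint satisfied
loss_pen = pinn_loss(q_obs, q_obs, np.array([2.0, 2.0, 2.0]))   # same fit, constraint violated
```

The penalty term is what trades raw metric performance for physical consistency: a network minimizing `pinn_loss` cannot chase the data term alone, which matches the abstract's observation that the PiNN gives up some accuracy for reliability.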
34 pages, 1939 KB  
Article
AutoUAVFormer: Neural Architecture Search with Implicit Super-Resolution for Real-Time UAV Aerial Object Detection
by Li Pan, Huiyao Wan, Pazlat Nurmamat, Jie Chen, Long Sun, Yice Cao, Shuai Wang, Yingsong Li and Zhixiang Huang
Remote Sens. 2026, 18(9), 1268; https://doi.org/10.3390/rs18091268 - 22 Apr 2026
Abstract
The widespread deployment of unmanned aerial vehicles (UAVs) in civil and commercial airspace has raised significant safety concerns, driving the demand for reliable and real-time Anti-UAV visual detection systems. However, existing deep learning-based detectors face substantial challenges in complex low-altitude environments, including drastic scale variations, severe background clutter, and weak feature representation of small UAV targets. Moreover, handcrafted Transformer-based architectures often lack adaptability across diverse scenarios and struggle to balance detection accuracy with computational efficiency. To address these limitations, this paper proposes AutoUAVFormer, a super-resolution guided neural architecture search framework for Anti-UAV detection. In contrast to conventional manually designed approaches, AutoUAVFormer leverages joint optimization of a Transformer-based detection objective and a super-resolution reconstruction objective to automatically identify a task-specific optimal network architecture for detecting UAV targets. Specifically, a unified search space is formulated by jointly embedding Transformer hyperparameters and Feature Pyramid Network (FPN) structures, facilitating end-to-end co-optimization of multi-scale feature fusion and global context modeling. To efficiently locate architectures that balance accuracy and computational cost, a three-stage pipeline, combining supernetwork training with evolutionary search, is employed. Additionally, we design a super-resolution auxiliary branch that operates only during training to enhance the model’s ability to learn fine-grained textures and sharpen edge representations of small targets, without introducing any inference overhead. Extensive experiments on three challenging Anti-UAV detection benchmarks, namely DetFly, DUT Anti-UAV, and UAV Swarm, confirm the superiority of AutoUAVFormer over current state-of-the-art methods, with mAP@0.5 scores reaching 98.6%, 95.5%, and 89.9% on the respective datasets while sustaining real-time inference speed. These results demonstrate that AutoUAVFormer achieves strong generalization and maintains robust Anti-UAV detection performance under challenging low-altitude conditions.
13 pages, 3028 KB  
Article
A Neural Network Approach for the Simulation of Real Fluid Two-Phase Combustion Using a Multi-Species (H2/O2) Mechanism
by Bruno Delhom, Chaouki Habchi, Olivier Colin and Julien Bohbot
Fluids 2026, 11(5), 105; https://doi.org/10.3390/fluids11050105 - 22 Apr 2026
Abstract
Fully compressible two-phase flow configurations present many challenges for numerical modelling, requiring the development of Real Fluid Models (RFMs) able to simulate flows in subcritical, transcritical and supercritical regimes. Such an RFM has been recently developed at IFPEN based on physical properties lookup tables, mainly for binary and ternary chemical systems. This paper proposes an Artificial Neural Network (ANN) approach to overcome the limitations of lookup tables of thermodynamic properties and to apply RFM to multi-species combustion. A methodology for generating an optimized data set by combining a vapor–liquid equilibrium (VLE) thermodynamic solver and the in situ adaptive tabulation (ISAT) method is developed. It aims to improve the neural network training process for two-phase combustion simulations where many species are present. This ANN methodology has been implemented in the CONVERGE CFD solver and validated using a mixing layer (LOX/GH2) benchmark from the literature relevant to rocket conditions, and an academic gaseous (H2/O2) case relevant to hydrogen combustion. The results show that this ANN approach makes H2 combustion simulation possible when coupled to the RFM framework and using a 10-species kinetic mechanism.
70 pages, 5036 KB  
Review
A Review of Mathematical Reduced-Order Modeling of PCM-Based Latent Heat Storage Systems
by John Nico Omlang and Aldrin Calderon
Energies 2026, 19(9), 2017; https://doi.org/10.3390/en19092017 - 22 Apr 2026
Abstract
Phase change material (PCM)-based latent heat storage (LHS) systems help address the mismatch between renewable energy supply and thermal demand. However, their practical implementation is constrained by the strongly nonlinear and multiphysics nature of phase change, which makes high-fidelity simulations and real-time applications computationally expensive. This review examines mathematical reduced-order modeling (ROM) as an effective strategy to overcome this limitation by combining physics-based simplifications, projection methods, interpolation techniques, and data-driven models for PCM-based LHS systems. While physical simplifications (such as dimensional reduction and effective property approximations) represent an important first layer of model reduction, the primary focus of this work is on the mathematical ROM methodologies that operate on the governing equations after such physical simplifications have been applied. The review covers approaches including two-temperature non-equilibrium and analytical thermal-resistance models, Proper Orthogonal Decomposition (POD), CFD-derived look-up tables, kriging and ε-NTU grey/black-box metamodels, and machine-learning methods such as artificial neural networks and gradient-boosted regressors trained from CFD data. These ROM techniques have been applied to packed beds, PCM-integrated heat exchangers, finned enclosures, triplex-tube systems, and solar thermal components, achieving speed-ups from tens to over 80,000 times faster than full CFD simulations while maintaining prediction errors typically below 5% or within sub-Kelvin temperature deviations. A critical comparative analysis exposes the fundamental trade-off between interpretability, data dependence, and computational efficiency, leading to a practical decision-making framework that guides method selection for specific applications such as design optimization, real-time control, and system-level simulation. Remaining challenges—including accurate representation of phase change nonlinearity, moving phase boundaries, multi-timescale dynamics, generalization across geometries, experimental validation, and integration into industrial workflows—motivate a structured roadmap for future hybrid physics–machine learning developments, standardized validation protocols, and pathways toward industrial deployment.
(This article belongs to the Section D: Energy Storage and Application)
17 pages, 11454 KB  
Article
Informer-Based Precipitation Forecasting Using Ground Station Data in Guangxi, China
by Ting Zhang, Donghong Qin, Deyi Wang, Soung-Yue Liew and Huasheng Zhao
Atmosphere 2026, 17(5), 429; https://doi.org/10.3390/atmos17050429 - 22 Apr 2026
Abstract
Precipitation forecasting is essential for disaster prevention, water resource management, and socio-economic resilience. The field has evolved from numerical weather prediction (NWP) and optical-flow-based methods toward data-driven deep learning approaches that can exploit larger observational datasets and model complex nonlinear relationships. Against this background, this study evaluates multi-station temporal forecasting models within a single-year, station-based proof-of-concept benchmark under unified data conditions. We adapt the Transformer and Informer architectures to this meteorological setting, rigorously preprocess the AWS dataset to avoid data leakage, and select predictive variables using complementary linear and nonlinear relevance criteria. Model performance is assessed using continuous and categorical precipitation metrics, including the Critical Success Index (CSI). The results show that the Informer outperforms the recurrent neural network (RNN) baselines and achieves the lowest mean MAE and RMSE together with the highest mean CSI among the evaluated models while using substantially fewer parameters than the standard Transformer. However, its sample-wise absolute error distribution remains statistically comparable to that of the standard Transformer. Overall, this study establishes a single-year, station-based proof-of-concept benchmark for comparing architectures in very-short-term (1–5 h ahead) precipitation forecasting.
(This article belongs to the Special Issue Atmospheric Modeling with Artificial Intelligence Technologies)
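The Critical Success Index used above as a categorical verification metric has a simple closed form, CSI = hits / (hits + misses + false alarms); correct negatives are ignored, so the score rewards detecting rain events rather than predicting dry weather. The sketch below uses an illustrative 0.1 mm/h rain threshold, not a value from the study:

```python
def critical_success_index(obs, pred, threshold=0.1):
    """Critical Success Index for rain/no-rain verification.

    An event counts as 'rain' when the value meets `threshold`
    (mm/h, an illustrative cutoff). Correct negatives (dry/dry)
    do not enter the score at all.
    """
    hits = misses = false_alarms = 0
    for o, p in zip(obs, pred):
        o_rain, p_rain = o >= threshold, p >= threshold
        if o_rain and p_rain:
            hits += 1
        elif o_rain:
            misses += 1
        elif p_rain:
            false_alarms += 1
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")

obs  = [0.0, 0.5, 1.2, 0.0, 0.3]   # mock hourly observations
pred = [0.0, 0.4, 0.0, 0.2, 0.6]   # mock forecasts for the same hours
csi = critical_success_index(obs, pred)  # 2 hits, 1 miss, 1 false alarm
```

A perfect forecast gives CSI = 1; forecasting "never rain" against any rainy record gives CSI = 0, which is why CSI complements continuous metrics like MAE and RMSE.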
42 pages, 966 KB  
Article
Garbage In, Garbage Out? The Impact of Data Quality on the Performance of Financial Distress Prediction Models
by Veronika Labosova, Lucia Duricova, Katarina Kramarova and Marek Durica
Forecasting 2026, 8(3), 35; https://doi.org/10.3390/forecast8030035 - 22 Apr 2026
Abstract
Financial distress prediction remains a central topic in corporate finance and risk management, with extensive research devoted to improving classification accuracy through increasingly sophisticated statistical and machine learning techniques. Nevertheless, the influence of data preparation on predictive performance has received comparatively less systematic [...] Read more.
Financial distress prediction remains a central topic in corporate finance and risk management, with extensive research devoted to improving classification accuracy through increasingly sophisticated statistical and machine learning techniques. Nevertheless, the influence of data preparation on predictive performance has received comparatively less systematic attention. This study examines how an economically grounded data-preparation process affects the predictive performance of selected statistical and machine-learning models dedicated to predicting corporate financial distress. Using the chosen financial ratios, generally accepted indicators of corporate financial stability and economic performance, financial distress models are estimated on both raw, unprocessed input data and pre-processed data involving the exclusion of economically implausible accounting values, treatment of missing observations, and class balancing. In light of the above, the study adopts a structured methodological approach to assess the predictive performance of selected classification models, namely decision tree algorithms (CART, CHAID, and C5.0), artificial neural networks (ANNs), logistic regression (LR), and linear discriminant analysis (DA), using confusion-matrix–based evaluation and a comprehensive set of evaluation measures. The results suggest that the process of input data preparation is a critical factor, significantly improving the predictive performance of financial distress prediction models across most modelling techniques employed. The most pronounced gains are observed in decision tree models. ANNs also demonstrate marked improvement after input data preparation, whereas LR benefits more moderately, and linear DA remains limited despite preprocessing. 
The average gain in accuracy across all six modelling techniques, calculated as the difference between pre-processed and raw performance for each method and averaged across methods, was approximately 15.6 percentage points, and specificity improved by approximately 26.9 percentage points on average. This gain amounts to roughly half the performance variation attributable to algorithm choice, underscoring that data preparation is a primary determinant of model reliability alongside algorithm selection. A detailed step-level analysis further shows that missing-value imputation is the dominant driver of improvement for tree-based models, while class balancing contributes most for ANNs and logistic regression. The findings highlight that reliable financial distress prediction depends not only on technique selection but also on the consistency and economic plausibility of the input data, confirming the central role of structured data preparation in developing robust early-warning models.
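The abstract's headline numbers (accuracy and specificity gains) come from confusion-matrix-based evaluation measures. A minimal sketch of how those two measures are computed for a binary distressed/healthy classification is shown below; the label vectors are invented for illustration and are not the study's data.

```python
# Sketch: confusion-matrix-based evaluation measures of the kind used to
# compare financial distress models. Labels: 1 = distressed, 0 = healthy.

def confusion_counts(y_true, y_pred, positive=1):
    """Return (TP, TN, FP, FN) for a binary classification."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp, tn, fp, fn

def accuracy(tp, tn, fp, fn):
    """Share of all firms classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def specificity(tp, tn, fp, fn):
    """True-negative rate: share of healthy firms correctly classified."""
    return tn / (tn + fp)

# Illustrative evaluation on eight firms (not the paper's data).
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
tp, tn, fp, fn = confusion_counts(y_true, y_pred)
print(accuracy(tp, tn, fp, fn))     # 0.75
print(specificity(tp, tn, fp, fn))  # 0.8
```

A "percentage-point gain" in the abstract's sense is then simply the difference between these measures computed on pre-processed versus raw inputs for the same technique.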
23 pages, 1760 KB  
Article
Data-Driven Prediction and Inverse Design of Fluoride Glasses via Explainable GA-BP Neural Networks
by Runze Zhou, Xinqiang Yuan, Longfei Zhang, Chi Zhang, Hongxing Dong and Long Zhang
Materials 2026, 19(9), 1685; https://doi.org/10.3390/ma19091685 - 22 Apr 2026
Abstract
With the increasing application of novel glass materials in optics, traditional empirical and trial-and-error approaches to glass development are becoming insufficient to meet escalating performance demands. In this study, we propose a neural-network-based machine learning method for the design of advanced fluoride glass materials. Predictive models for density and refractive index were first developed from online fluoride glass datasets. SHapley Additive exPlanations (SHAP) analysis was then adopted to uncover the quantitative composition-property relationships. The trained model was subsequently employed for inverse design, identifying specific compositions that fulfil desired density and refractive-index targets. Finally, several recommended compositions were experimentally validated, and the measured density and refractive index matched the corresponding target values well, confirming the effectiveness of the proposed method for designing new fluoride glass materials.
(This article belongs to the Section Materials Simulation and Design)
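The inverse-design step described above, searching composition space for candidates whose predicted properties match a target, can be sketched as follows. This is a minimal illustration, not the authors' GA-BP implementation: the linear `predict` surrogate, its coefficients, the component names, and the random-search optimizer are all invented stand-ins for the paper's trained network and genetic-algorithm search.

```python
import random

# Sketch of inverse design: given a trained property predictor
# f(composition) -> (density, refractive index), search the composition
# simplex for the candidate closest to a target property pair.

def predict(comp):
    """Toy linear surrogate for a trained model; comp sums to 1.
    Components and coefficients are illustrative only."""
    density = 4.0 * comp[0] + 4.9 * comp[1] + 5.9 * comp[2]
    ref_index = 1.50 * comp[0] + 1.52 * comp[1] + 1.57 * comp[2]
    return density, ref_index

def random_composition(rng):
    """Uniform random 3-component composition (broken-stick sampling)."""
    cuts = sorted([rng.random(), rng.random()])
    return (cuts[0], cuts[1] - cuts[0], 1.0 - cuts[1])

def inverse_design(target, n_trials=20000, seed=0):
    """Random search for the composition whose predicted properties
    minimize squared error to the target (density, refractive index)."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_trials):
        comp = random_composition(rng)
        d, n = predict(comp)
        err = (d - target[0]) ** 2 + (n - target[1]) ** 2
        if err < best_err:
            best, best_err = comp, err
    return best, best_err

best, err = inverse_design(target=(4.8, 1.52))
```

In the paper's setting, `predict` would be the trained BP network and the random search would be replaced by a genetic algorithm, but the overall loop, propose compositions, score them against the target, keep the best, is the same.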
30 pages, 1435 KB  
Review
A Review of Machine Learning Modeling Approaches of Spatiotemporal Urbanization and Land Use Land Cover
by Farasath Hasan, Jian Liu and Xintao Liu
Smart Cities 2026, 9(5), 74; https://doi.org/10.3390/smartcities9050074 - 22 Apr 2026
Abstract
Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL), is transforming the modeling of complex spatiotemporal urban processes such as urban growth, sprawl, shrinkage, redevelopment, and Land Use/Land Cover Change (LULCC). However, despite rapid methodological innovation, applications remain fragmented, and there is limited synthesis of how AI-based models complement, extend, or supersede conventional approaches. This study addresses the gap through a systematic review of 6356 records, from which 120 articles were selected for detailed analysis. It investigates (i) how ML/DL techniques are embedded within spatiotemporal modeling frameworks; (ii) their use in simulating urbanization dynamics and land-use (LU) transitions; (iii) methodological and performance gains relative to traditional statistical and rule-based models; and (iv) emerging research frontiers and limitations. The review shows that LULCC dominates current applications, with Artificial Neural Networks (ANNs) as the most prevalent ML method, increasingly complemented by DL architectures. Across cases, AI is primarily used to learn non-linear transition dynamics, represent spatial and temporal dependencies, identify influential drivers, and improve classification performance and computational efficiency. Building on these insights, the paper synthesizes the roles of AI in spatiotemporal urban modeling and outlines forward-looking research directions to support more robust, transparent, and policy-relevant applications for urban sustainability.
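The "transition dynamics" that ML/DL models learn in LULCC work generalize a classical Markov-chain baseline: estimate per-class transition probabilities from two co-registered land-cover maps, then project class proportions forward. A minimal sketch of that baseline is below; the three land-cover classes and the toy maps are illustrative, not from any reviewed study.

```python
from collections import Counter

# Sketch: the Markov-chain LULCC baseline that ML-based models extend.
# From two aligned land-cover maps (time t0 and t1), estimate transition
# probabilities P[a][b] = Pr(class b at t1 | class a at t0), then project
# class proportions one step ahead.

CLASSES = ["urban", "agriculture", "forest"]

def transition_matrix(map_t0, map_t1):
    """Estimate transition probabilities from paired per-pixel labels."""
    counts = Counter(zip(map_t0, map_t1))   # (from, to) pair frequencies
    totals = Counter(map_t0)                # pixels per source class
    return {a: {b: counts[(a, b)] / totals[a] for b in CLASSES}
            for a in CLASSES}

def project(proportions, P):
    """One Markov step: p_{t+1}[b] = sum_a p_t[a] * P[a][b]."""
    return {b: sum(proportions[a] * P[a][b] for a in CLASSES)
            for b in CLASSES}

# Toy 10-pixel maps: one agricultural and one forest pixel urbanize.
map_t0 = ["urban"] * 2 + ["agriculture"] * 5 + ["forest"] * 3
map_t1 = ["urban"] * 3 + ["agriculture"] * 4 + ["forest"] * 2 + ["urban"]
P = transition_matrix(map_t0, map_t1)
p1 = project({"urban": 0.2, "agriculture": 0.5, "forest": 0.3}, P)
```

ANN- and DL-based approaches replace the fixed matrix `P` with a learned, non-linear function of spatial drivers (slope, distance to roads, neighborhood composition), which is what lets them capture the spatial dependencies the review highlights.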