Search Results (3,248)

Search Parameters:
Keywords = perceptrons

24 pages, 10878 KB  
Article
Exploring Kolmogorov–Arnold Networks for Unsupervised Anomaly Detection in Industrial Processes
by Enrique Luna-Villagómez and Vladimir Mahalec
Processes 2025, 13(11), 3672; https://doi.org/10.3390/pr13113672 (registering DOI) - 13 Nov 2025
Abstract
Designing reliable fault detection and diagnosis (FDD) systems remains difficult when only limited fault-free data are available. Kolmogorov–Arnold Networks (KANs) have recently been proposed as parameter-efficient alternatives to multilayer perceptrons, yet their effectiveness for unsupervised FDD has not been systematically established. This study presents a statistically grounded comparison of Kolmogorov–Arnold Autoencoders (KAN-AEs) against an orthogonal autoencoder and a PCA baseline using the Tennessee Eastman Process benchmark. Four KAN-AE variants (EfficientKAN-AE, FastKAN-AE, FourierKAN-AE, and WavKAN-AE) were trained on fault-free data subsets ranging from 625 to 250,000 samples and evaluated over 30 independent runs. Detection performance was assessed using Bayesian signed-rank tests to estimate posterior probabilities of model superiority across fault scenarios. The results show that WavKAN-AE and EfficientKAN-AE achieve approximately 90–92% fault detection rate with only 2500 samples. In contrast, the orthogonal autoencoder requires over 30,000 samples to reach comparable performance, while PCA remains markedly below this level regardless of data size. Under data-rich conditions, Bayesian tests show that the orthogonal autoencoder matches or slightly outperforms the KAN-AEs on the more challenging fault scenarios, while remaining computationally more efficient. These findings position KAN-AEs as compact, data-efficient tools for industrial fault detection when historical fault-free data are scarce. Full article
(This article belongs to the Special Issue AI-Driven Advanced Process Control for Smart Energy Systems)
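The detection scheme described here is reconstruction-based: an autoencoder trained only on fault-free data flags samples whose reconstruction error is unusually large. A minimal sketch follows, using a plain PyTorch MLP autoencoder as a stand-in for the KAN-AE variants; the layer sizes, the 52-variable input (typical of the Tennessee Eastman Process), and the quantile threshold are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Plain MLP autoencoder standing in for a KAN-AE (illustration only).
class Autoencoder(nn.Module):
    def __init__(self, n_vars=52, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_vars, 64), nn.ReLU(), nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, n_vars))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fit_on_fault_free(model, x_normal, epochs=50, lr=1e-3):
    """Train only on fault-free data so the model learns to reconstruct normal operation."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x_normal), x_normal)
        loss.backward()
        opt.step()
    return model

def detect(model, x_normal, x_test, quantile=0.99):
    """Flag samples whose reconstruction error exceeds a quantile of the normal errors."""
    with torch.no_grad():
        err_normal = ((model(x_normal) - x_normal) ** 2).mean(dim=1)
        err_test = ((model(x_test) - x_test) ** 2).mean(dim=1)
    threshold = torch.quantile(err_normal, quantile)
    return err_test > threshold  # boolean mask: True = suspected fault
```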
23 pages, 16307 KB  
Article
Improving EFDD with Neural Networks in Damping Identification for Structural Health Monitoring
by Yuanqi Zheng, Chin-Long Lee, Jia Guo, Renjie Shen, Feifei Sun, Jiaqi Yang and Alejandro Saenz Calad
Sensors 2025, 25(22), 6929; https://doi.org/10.3390/s25226929 (registering DOI) - 13 Nov 2025
Abstract
Damping has attracted increasing attention as an indicator for structural health monitoring (SHM), owing to its sensitivity to subtle damage that may not be reflected in natural frequencies. However, the practical application of damping-based SHM remains limited by the accuracy and robustness of damping identification methods. Enhanced Frequency Domain Decomposition (EFDD), a widely used operational modal analysis technique, offers efficiency and user-friendliness, but suffers from intrinsic deficiencies in damping identification due to bias introduced at several signal-processing stages. This study proposes to improve EFDD by integrating neural networks, replacing heuristic parameter choices with data-driven modules. Two strategies are explored: a step-wise embedding of neural modules into the EFDD workflow, and an end-to-end grid-weight framework that aggregates candidate damping estimates using a lightweight multilayer perceptron. Both approaches are validated through numerical simulations on synthetic response datasets. Their applicability was further validated through shaking-table experiments on an eight-storey steel frame and a five-storey steel–concrete hybrid structure. The proposed grid-weight EFDD demonstrated superior robustness and sensitivity in capturing early-stage damping variations, confirming its potential for practical SHM applications. The findings also revealed that the effectiveness of damping-based indicators is strongly influenced by the structural material system. This study highlights the feasibility of integrating neural network training into EFDD to replace human heuristics, thereby improving the reliability and interpretability of damping-based damage detection. Full article
(This article belongs to the Special Issue Intelligent Sensors and Artificial Intelligence in Building)
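The grid-weight idea, an MLP that aggregates candidate damping estimates into a single value, can be sketched roughly as follows; the candidate features, network width, and softmax weighting are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GridWeightAggregator(nn.Module):
    """Toy version of a grid-weight scheme: an MLP scores each candidate damping
    estimate from its descriptors, and the final estimate is the softmax-weighted sum."""
    def __init__(self, n_features=4):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, candidates, features):
        # candidates: (batch, n_candidates) damping ratios from the EFDD grid
        # features:   (batch, n_candidates, n_features) descriptors of each candidate
        scores = self.scorer(features).squeeze(-1)   # (batch, n_candidates)
        weights = torch.softmax(scores, dim=-1)
        return (weights * candidates).sum(dim=-1)    # weighted damping estimate
```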
17 pages, 1121 KB  
Article
TASA: Text-Anchored State–Space Alignment for Long-Tailed Image Classification
by Long Li, Tinglei Jia, Huaizhi Yue, Huize Cheng, Yongfeng Bu and Zhaoyang Zhang
J. Imaging 2025, 11(11), 410; https://doi.org/10.3390/jimaging11110410 - 13 Nov 2025
Abstract
Long-tailed image classification remains challenging for vision–language models. Head classes dominate training while tail classes are underrepresented and noisy, and short prompts with weak text supervision further amplify head bias. This paper presents TASA, an end-to-end framework that stabilizes textual supervision and enhances cross-modal fusion. A Semantic Distribution Modulation (SDM) module constructs class-specific text prototypes by cosine-weighted fusion of multiple LLM-generated descriptions with a canonical template, providing stable and diverse semantic anchors without training text parameters. A Dual-Space Cross-Modal Fusion (DCF) module incorporates selective-scan state–space blocks into both image and text branches, enabling bidirectional conditioning and efficient feature fusion through a lightweight multilayer perceptron. Together with a margin-aware alignment loss, TASA aligns images with class prototypes for classification without requiring paired image–text data or per-class prompt tuning. Experiments on CIFAR-10/100-LT, ImageNet-LT, and Places-LT demonstrate consistent improvements across many-, medium-, and few-shot groups. Ablation studies confirm that DCF yields the largest single-module gain, while SDM and DCF combined provide the most robust and balanced performance. These results highlight the effectiveness of integrating text-driven prototypes with state–space fusion for long-tailed classification. Full article
(This article belongs to the Section Image and Video Processing)
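A cosine-weighted fusion of the kind attributed to the SDM module might look like the sketch below (NumPy); the softmax normalization and the 50/50 blend with the template are assumptions, not TASA's actual formulation.

```python
import numpy as np

def cosine_weighted_prototype(template_emb, description_embs, blend=0.5):
    """Sketch of SDM-style prototype construction: several LLM-generated class
    descriptions are fused with weights given by their cosine similarity to a
    canonical template embedding, then blended with the template itself."""
    t = template_emb / np.linalg.norm(template_emb)
    d = description_embs / np.linalg.norm(description_embs, axis=1, keepdims=True)
    sims = d @ t                                   # cosine similarity to the template
    weights = np.exp(sims) / np.exp(sims).sum()    # softmax over descriptions (assumed)
    fused = weights @ description_embs             # cosine-weighted fusion
    prototype = blend * template_emb + (1.0 - blend) * fused
    return prototype / np.linalg.norm(prototype)
```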
25 pages, 2563 KB  
Article
LungVisionNet: A Hybrid Deep Learning Model for Chest X-Ray Classification—A Case Study at King Hussein Cancer Center (KHCC)
by Iyad Sultan, Hasan Gharaibeh, Azza Gharaibeh, Belal Lahham, Mais Al-Tarawneh, Rula Al-Qawabah and Ahmad Nasayreh
Technologies 2025, 13(11), 517; https://doi.org/10.3390/technologies13110517 - 12 Nov 2025
Abstract
Early diagnosis and rapid treatment of respiratory abnormalities such as pneumonia, TB, cancer, and other pulmonary problems depend on accurate and fast classification of chest X-ray images. Current manual diagnosis systems are subjective, labour-intensive, and error-prone, which leads to delayed diagnosis and insufficient treatment. To tackle this pressing healthcare issue, this work investigates several deep convolutional neural network (CNN) architectures, including VGG16, VGG19, ResNet50, InceptionV3, Xception, DenseNet121, NASNetMobile, and NASNet Large. LungVisionNet (LVNet), the hybrid model proposed here, combines a MobileNetV2 backbone with multilayer perceptron (MLP) layers. In a thorough evaluation on two publicly available datasets containing various chest abnormalities and normal cases, LungVisionNet outperformed the other models, achieving 96.91% accuracy, 97.59% recall, and a 97.01% F1-score, along with superior precision, specificity, and area under the curve (AUC). A comprehensive evaluation on an independent, real-world clinical dataset from King Hussein Cancer Centre (KHCC), on which the model achieved 95.3% accuracy, 95.3% precision, 78.8% recall, 99.1% specificity, and an 86.4% F1-score, confirmed its robustness, generalizability, and clinical usefulness. We also created a simple mobile application that lets doctors quickly classify and evaluate chest X-ray images in hospitals, thereby enhancing clinical integration, supporting fast decision-making, and improving patient outcomes. Full article
(This article belongs to the Section Assistive Technologies)
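A MobileNetV2-plus-MLP hybrid of the kind described can be sketched in Keras as below; the head sizes, dropout rate, frozen backbone, and input resolution are illustrative assumptions rather than LVNet's published configuration.

```python
import tensorflow as tf

def build_hybrid_classifier(n_classes, input_shape=(224, 224, 3)):
    """Sketch of a MobileNetV2 backbone with an MLP head (assumed layout)."""
    backbone = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    backbone.trainable = False  # start with the pretrained features frozen
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
    x = backbone(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(256, activation="relu")(x)   # MLP head
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```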
20 pages, 4838 KB  
Article
Real-Time Control of a Focus Tunable Lens for Presbyopia Correction Using Ciliary Muscle Biopotentials and Artificial Neural Networks
by Bishesh Sigdel, Sven Schumayer, Sebastian Kaltenstadler, Eberhart Zrenner, Volker Bucher, Albrecht Rothermel and Torsten Straßer
Bioengineering 2025, 12(11), 1228; https://doi.org/10.3390/bioengineering12111228 - 10 Nov 2025
Viewed by 109
Abstract
Ageing results in the progressive loss of near vision, known as presbyopia, which impacts individuals and society. Existing corrective methods offer only partial compensation and do not restore dynamic focusing at varying distances. This work presents a closed-loop correction system for presbyopia, employing biopotential signals from the ciliary muscle and an artificial neural network to predict the eye’s accommodative state in real time. Non-invasive contact lens electrodes collect biopotential data, which are preprocessed and classified using a multi-layer perceptron. The classifier output guides a control system that adjusts an external focus-tunable lens, enabling both accommodation and disaccommodation similar to a young eye. The system demonstrated an accuracy of 0.79, with F1-scores of 0.78 for prediction of accommodation and 0.77 for disaccommodation. Using the system in two presbyopic subjects, near visual acuity improved from 0.28 and 0.38 to 0.04 and −0.03 logMAR, while distance acuity remained stable. Despite challenges such as signal quality and individual variability, the findings demonstrate the feasibility of restoring near-natural accommodation in presbyopia using neuromuscular signals and adaptive lens control. Future research will focus on system validation, expanding the dataset, and pre-clinical testing in implantable devices. Full article
(This article belongs to the Special Issue Bioengineering Strategies for Ophthalmic Diseases)
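The classify-then-actuate loop could be sketched as follows, with a scikit-learn MLP standing in for the study's classifier; the labels, feature layout, and diopter values are hypothetical placeholders.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder lens powers selected by the predicted accommodative state.
LENS_POWER_DIOPTERS = {"accommodate": 2.5, "disaccommodate": 0.0}

def train_state_classifier(X_train, y_train):
    """X_train: preprocessed ciliary-muscle biopotential features per window;
    y_train: accommodative state labels ("accommodate" / "disaccommodate")."""
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32, 16),
                                      max_iter=2000, random_state=0))
    return clf.fit(X_train, y_train)

def control_step(clf, feature_window):
    """One closed-loop step: predict the state and return the lens setting to apply."""
    state = clf.predict(feature_window.reshape(1, -1))[0]
    return LENS_POWER_DIOPTERS[state]
```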
27 pages, 3207 KB  
Article
Interpolation and Machine Learning Methods for Sub-Hourly Missing Rainfall Data Imputation in a Data-Scarce Environment: One- and Two-Step Approaches
by Mohamed Boukdire, Çağrı Alperen İnan, Giada Varra, Renata Della Morte and Luca Cozzolino
Hydrology 2025, 12(11), 297; https://doi.org/10.3390/hydrology12110297 - 10 Nov 2025
Viewed by 92
Abstract
Complete sub-hourly rainfall datasets are critical for accurate flood modeling, real-time forecasting, and understanding of short-duration rainfall extremes. However, these datasets often contain missing values due to sensor or transmission failures. Recovering missing values (or filling these data gaps) at high temporal resolution is challenging due to the imbalance between rain and no-rain periods. In this study, we developed and tested two approaches for the imputation of missing 10-min rainfall data by means of machine learning (Multilayer Perceptron and Random Forest) and interpolation methods (Inverse Distance Weighting and Ordinary Kriging). The (a) direct approach operates on raw data to directly feed the imputation models, while the (b) two-step approach first classifies time steps as rain or no-rain with a Random Forest classifier and subsequently applies an imputation model to predicted rainfall depth instances classified as rain. Each approach was tested under three spatial scenarios: using all nearby stations, using stations within the same cluster, and using the three most highly correlated stations. An additional test involved the comparison of the results obtained using data from the imputed time interval only and data from a time window containing several time intervals before and after the imputed time interval. The methods were evaluated with reference to two different environments, mountainous and coastal, in Campania region (Southern Italy), under data-scarce conditions where rainfall depth is the only available variable. With reference to the application of the two-step approach, the Random Forest classifier shows a good performance both in the mountainous and in the coastal area, with an average weighted F1 score of 0.961 and 0.957, and an average Accuracy of 0.928 and 0.946, respectively. The highest performance in the regression step is obtained by the Random Forest in the mountainous area with an R2 of 0.541 and an RMSE of 0.109 mm, considering a spatial configuration including all stations. The comparison with the direct approach results shows that the two-step approach consistently improves accuracy across all scenarios, highlighting the benefits gained from breaking the data imputation process in stages where different physical conditions (in this case, rain and no-rain) are separately managed. Another important finding is that the use of time windows containing data lagged with respect to the imputed time interval allows capturing the atmospheric dynamics by connecting rainfall instances at different time levels and distant stations. Finally, the study confirms that machine learning models outperform spatial interpolation methods, thanks to their ability to manage data with complicated internal structure. Full article
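The two-step approach (rain/no-rain classification followed by regression on the predicted rain steps) can be sketched with scikit-learn as below; feature construction from neighbouring stations and lagged windows is assumed to happen upstream, and the hyperparameters are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def two_step_impute(X_train, rain_train, X_missing, rain_threshold=0.0):
    """Step 1: classify each missing time step as rain / no-rain.
    Step 2: regress rainfall depth only for the steps predicted as rain."""
    is_rain = (rain_train > rain_threshold).astype(int)

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X_train, is_rain)

    reg = RandomForestRegressor(n_estimators=300, random_state=0)
    reg.fit(X_train[is_rain == 1], rain_train[is_rain == 1])  # fit on rainy steps only

    imputed = np.zeros(len(X_missing))
    rain_mask = clf.predict(X_missing) == 1
    if rain_mask.any():
        imputed[rain_mask] = reg.predict(X_missing[rain_mask])
    return imputed
```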
17 pages, 4362 KB  
Article
Developing Statistical and Multilayer Perceptron Neural Network Models for a Concrete Dam Dynamic Behaviour Interpretation
by Andrés Mauricio Guzmán Sejas, Sérgio Pereira, Juan Mata and Álvaro Cunha
Infrastructures 2025, 10(11), 301; https://doi.org/10.3390/infrastructures10110301 - 9 Nov 2025
Viewed by 232
Abstract
This work focuses on the monitoring of the dynamic behaviour of concrete dams, with a specific emphasis on the Baixo Sabor dam as a case study. The main objective of the dynamic monitoring is to continuously observe the dam’s behaviour, ensuring it remains within expected patterns and issuing alerts if deviations occur. The monitoring process relies on on-site instruments and behaviour models that use pattern recognition, thereby avoiding explicit dependence on mechanical principles. The undertaken work aimed to develop, calibrate, and compare statistical and machine learning models to aid in interpreting the observed dynamic behaviour of a concrete dam. The methodology included several key steps: operational modal analysis of acceleration time series, characterisation of the temporal evolution of observed magnitudes and influential environmental and operational variables, construction and calibration of predictive models using both statistical and machine learning methods, and the comparison of their effectiveness. Both Multiple Linear Regression (MLR) and Multilayer Perceptron Neural Network (MLP-NN) models were developed and tested. This work emphasised the development of several MLP-NN architectures. MLP-NN models with one and two hidden layers, and with one or more outputs in the output layer, were implemented. The aim of this work is to assess the performance of MLP-NN models with different numbers of units in the output layer, in order to understand the advantages and disadvantages of having multiple models that characterise the observed behaviour of a single quantity or a single MLP-NN model that simultaneously learns and characterises the observed behaviour for multiple quantities. The results showed that while both MLR and MLP-NN models effectively captured and predicted the dam’s behaviour, the neural network slightly outperformed the regression model in prediction accuracy. However, the linear regression model is easier to interpret. In conclusion, both linear regression and neural network models are suitable for the analysis and interpretation of monitored dynamic behaviour, but there are advantages in adopting a single model that considers all quantities simultaneously. For large-scale projects like the Baixo Sabor dam, Multilayer Perceptron Neural Networks offer significant advantages in handling intricate data relationships, thus providing better insights into the dam’s dynamic behaviour. Full article
(This article belongs to the Special Issue Preserving Life Through Dams)
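The single-output versus multi-output comparison can be illustrated with scikit-learn MLPs as below; hidden-layer sizes and training settings are assumptions, not the calibrated models from the study.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_single_output_models(X, Y):
    """One MLP per column of Y: each monitored quantity is characterised separately."""
    return [make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                                       random_state=0)).fit(X, Y[:, j])
            for j in range(Y.shape[1])]

def fit_multi_output_model(X, Y):
    """A single MLP that learns all quantities simultaneously (Y is 2-D)."""
    return make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                                      random_state=0)).fit(X, Y)
```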
21 pages, 3188 KB  
Article
Aeromagnetic Compensation for UAVs Using Transformer Neural Networks
by Weiming Dai, Changcheng Yang and Shuai Zhou
Sensors 2025, 25(22), 6852; https://doi.org/10.3390/s25226852 - 9 Nov 2025
Viewed by 258
Abstract
In geophysics, aeromagnetic surveying based on unmanned aerial vehicles (UAVs) is a widely employed exploration technique that analyzes underground structures through data acquisition, processing, and inversion. This method is highly efficient and covers large areas, making it widely applicable in mineral exploration, oil and gas surveys, geological mapping, and engineering and environmental studies. However, during flight, interference from the aircraft’s engine, electronic systems, and metal structures introduces noise into the magnetic data. To ensure accuracy, mathematical models and calibration techniques are employed to eliminate these aircraft-induced magnetic interferences. This enhances measurement precision, ensuring the data faithfully reflect the magnetic characteristics of subsurface geological features. This study focuses on aeromagnetic data processing methods, conducting numerical simulations of magnetic interference for UAV aeromagnetic surveys with the Tolles–Lawson (T-L) model. Recognizing the temporal dependencies in aeromagnetic data, we propose a Transformer neural network algorithm for aeromagnetic compensation. The method is applied to both simulated and measured flight data, and its performance is compared with that of classical Multilayer Perceptron (MLP) neural networks. The results demonstrate that the Transformer neural network achieves better fitting capability and higher compensation accuracy. Full article
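A Transformer-based compensation model of the general kind described might be sketched as follows in PyTorch; the feature count (e.g., Tolles–Lawson terms), model width, and pooling over the window are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CompensationTransformer(nn.Module):
    """Sketch: a window of flight-attitude features (e.g., Tolles-Lawson terms) is
    encoded by a Transformer and mapped to the aircraft interference field."""
    def __init__(self, n_features=18, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):            # x: (batch, seq_len, n_features)
        h = self.encoder(self.embed(x))
        return self.head(h.mean(dim=1)).squeeze(-1)  # predicted interference (nT)
```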
27 pages, 6345 KB  
Article
A Deep Neural Network-Based Approach for Optimizing Ammonia–Hydrogen Combustion Mechanism
by Xiaoting Xu, Jie Zhong, Yuchen Hu, Ridong Zhang, Kaiqi Zhang, Yunliang Qi and Zhi Wang
Energies 2025, 18(22), 5877; https://doi.org/10.3390/en18225877 - 7 Nov 2025
Viewed by 236
Abstract
Ammonia is a highly promising zero-carbon fuel for engines. However, it exhibits high ignition energy, slow flame propagation, and severe pollutant emissions, so it is usually burned in combination with highly reactive fuels such as hydrogen. An accurate understanding and modeling of ammonia–hydrogen combustion is of fundamental and practical significance to its application. Deep Neural Networks (DNNs) demonstrate significant potential in autonomously learning the interactions between high-dimensional inputs. This study proposed a deep neural network-based method for optimizing chemical reaction mechanism parameters, producing an optimized mechanism file as the final output. The novelty lies in two aspects: first, it systematically compares three DNN structures (Multi-layer perceptron (MLP), Convolutional Neural Network, and Residual Regression Neural Network (ResNet)) with other machine learning models (generalized linear regression (GLR), support vector machine (SVM), random forest (RF)) to identify the most effective structure for mapping combustion-related variables; second, it develops a ResNet-based surrogate model for ammonia–hydrogen mechanism optimization. For the test set (20% of the total dataset), the ResNet outperformed all other ML models and empirical correlations, achieving a coefficient of determination (R2) of 0.9923 and root mean square error (RMSE) of 135. The surrogate model uses the trained ResNet to optimize mechanism parameters based on a Stagni mechanism by mapping the initial conditions to experimental IDT. The results show that the optimized mechanism improves the prediction accuracy on laminar flame speed (LFS) by approximately 36.6% compared to the original mechanism. This method, while initially applied to the optimization of an ammonia–hydrogen combustion mechanism, can potentially be adapted to optimize mechanisms for other fuels. Full article
(This article belongs to the Section I2: Energy and Combustion Science)
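A ResNet-style regression surrogate mapping initial conditions and perturbed rate parameters to ignition delay time could look roughly like the sketch below; the block width, depth, and target transformation are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One fully connected residual block: y = relu(x + MLP(x))."""
    def __init__(self, width):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(width, width), nn.ReLU(),
                                 nn.Linear(width, width))

    def forward(self, x):
        return torch.relu(x + self.net(x))

class IDTSurrogate(nn.Module):
    """Sketch of a residual regression network mapping combustion-related inputs
    (initial conditions, perturbed rate parameters) to ignition delay time."""
    def __init__(self, n_inputs, width=128, n_blocks=4):
        super().__init__()
        self.inp = nn.Linear(n_inputs, width)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(n_blocks)])
        self.out = nn.Linear(width, 1)

    def forward(self, x):
        return self.out(self.blocks(torch.relu(self.inp(x)))).squeeze(-1)
```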
23 pages, 2026 KB  
Article
EEG-Based Local–Global Dimensional Emotion Recognition Using Electrode Clusters, EEG Deformer, and Temporal Convolutional Network
by Hyoung-Gook Kim and Jin-Young Kim
Bioengineering 2025, 12(11), 1220; https://doi.org/10.3390/bioengineering12111220 - 7 Nov 2025
Viewed by 218
Abstract
Emotions are complex phenomena arising from cooperative interactions among multiple brain regions. Electroencephalography (EEG) provides a non-invasive means to observe such neural activity; however, as it captures only electrode-level signals from the scalp, accurately classifying dimensional emotions requires considering both local electrode activity and the global spatial distribution across the scalp. Motivated by this, we propose a brain-inspired EEG electrode-cluster-based framework for dimensional emotion classification. The model organizes EEG electrodes into nine clusters based on spatial and functional proximity, applying an EEG Deformer to each cluster to learn frequency characteristics, temporal dynamics, and local signal patterns. The features extracted from each cluster are then integrated using a bidirectional cross-attention (BCA) mechanism and a temporal convolutional network (TCN), effectively modeling long-term inter-cluster interactions and global signal dependencies. Finally, a multilayer perceptron (MLP) is used to classify valence and arousal levels. Experiments on three public EEG datasets demonstrate that the proposed model significantly outperforms existing EEG-based dimensional emotion recognition methods. Cluster-based learning, reflecting electrode proximity and signal distribution, effectively captures structural patterns at the electrode-cluster level, while inter-cluster information integration further captures global signal interactions, thereby enhancing the interpretability and physiological validity of EEG-based dimensional emotion analysis. This approach provides a scalable framework for future affective computing and brain–computer interface (BCI) applications. Full article
(This article belongs to the Section Biosignal Processing)
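A heavily reduced sketch of the cluster-based idea is given below: channels are grouped into electrode clusters, each cluster gets its own encoder, and the fused features feed an MLP classifier. Plain 1-D convolutions and concatenation stand in for the EEG Deformer, bidirectional cross-attention, and TCN modules, so this is only a structural illustration.

```python
import torch
import torch.nn as nn

class ClusterEmotionNet(nn.Module):
    """Structural sketch: per-cluster temporal encoders followed by an MLP head
    that predicts valence/arousal classes."""
    def __init__(self, cluster_channels, n_classes, hidden=32):
        super().__init__()
        self.cluster_idx = [torch.as_tensor(idx) for idx in cluster_channels]
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Conv1d(len(idx), hidden, kernel_size=7, padding=3),
                          nn.ReLU(), nn.AdaptiveAvgPool1d(1))
            for idx in cluster_channels])
        self.classifier = nn.Sequential(
            nn.Linear(hidden * len(cluster_channels), 64), nn.ReLU(),
            nn.Linear(64, n_classes))

    def forward(self, x):                      # x: (batch, channels, time)
        feats = [enc(x[:, idx, :]).flatten(1)  # one feature vector per cluster
                 for enc, idx in zip(self.encoders, self.cluster_idx)]
        return self.classifier(torch.cat(feats, dim=1))
```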
13 pages, 375 KB  
Article
Predicting Outcome and Duration of Mechanical Ventilation in Acute Hypoxemic Respiratory Failure: The PREMIER Study
by Jesús Villar, Jesús M. González-Martín, Cristina Fernández, Juan A. Soler, Marta Rey-Abalo, Juan M. Mora-Ordóñez, Ramón Ortiz-Díaz-Miguel, Lorena Fernández, Isabel Murcia, Denis Robaglia, José M. Añón, Carlos Ferrando, Dácil Parrilla, Ana M. Dominguez-Berrot, Pilar Cobeta, Domingo Martínez, Ana Amaro-Harpigny, David Andaluz-Ojeda, M. Mar Fernández, Estrella Gómez-Bentolila, Ewout W. Steyerberg, Luigi Camporota and Tamas Szakmany
J. Clin. Med. 2025, 14(22), 7903; https://doi.org/10.3390/jcm14227903 - 7 Nov 2025
Viewed by 306
Abstract
Objectives: The ability of clinicians to predict prolonged mechanical ventilation (MV) in patients with acute hypoxemic respiratory failure (AHRF) is inaccurate, mainly because of the competitive risk of mortality. We aimed to assess the performance of machine learning (ML) models for the early prediction of prolonged MV in a large cohort of patients with AHRF. Methods: We analyzed 996 ventilated AHRF patients with complete data at 48 h after diagnosis of AHRF from 1241 patients enrolled in a prospective, national epidemiological study, after excluding 245 patients ventilated for <2 days. To account for competing mortality, we used multinomial regression analysis (MNR) to model prolonged MV in three categories: (i) ICU survivors (regardless of MV duration), (ii) non-survivors ventilated for 2–7 days, (iii) non-survivors ventilated for >7 days. We performed 4 × 10-fold cross-validation to validate the performance of potent ML techniques [Multilayer Perceptron (MLP), Support Vector Machine (SVM), Random Forest (RF)] for predicting patient assignment. Results: All-cause ICU mortality was 32.8% (327/996). We identified 12 key predictors at 48 h of AHRF diagnosis: age, specific comorbidities, sequential organ failure assessment score, tidal volume, PEEP, plateau pressure, PaO2, pH, and number of organ failures. MLP showed the best predictive performance [AUC 0.86 (95%CI: 0.80–0.92) and 0.87 (0.80–0.93)], followed by MNR [AUC 0.83 (0.76–0.90) and 0.84 (0.77–0.91)], in distinguishing ICU survivors, with non-survivors ventilated 2–7 days and >7 days, respectively. Conclusions: Accounting for ICU mortality, MLP and MNR offered accurate patient-level predictions. Further work should integrate clinical and organizational factors to improve timely management and optimize outcomes. This study was initially registered on 3 February 2025 at ClinicalTrials.gov (NCT06815523). Full article
(This article belongs to the Special Issue Acute Hypoxemic Respiratory Failure: Progress, Challenges and Future)
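The evaluation protocol, a three-category outcome assessed with 4 × 10-fold cross-validation across MNR, MLP, SVM, and RF, can be sketched with scikit-learn as below; hyperparameters and the scoring metric are placeholders rather than the study's tuned settings.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def compare_models(X, y):
    """y holds three categories: ICU survivor, non-survivor ventilated 2-7 days,
    non-survivor ventilated >7 days; models are scored with 4 x 10-fold CV."""
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=4, random_state=0)
    models = {
        "MNR": LogisticRegression(max_iter=5000),          # multinomial by default
        "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000)),
        "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
        "RF":  RandomForestClassifier(n_estimators=500),
    }
    return {name: cross_val_score(m, X, y, cv=cv, scoring="roc_auc_ovr").mean()
            for name, m in models.items()}
```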
22 pages, 6324 KB  
Article
A Novel Approach for the Estimation of the Efficiency of Demulsification of Water-In-Crude Oil Emulsions
by Slavko Nešić, Olga Govedarica, Mirjana Jovičić, Julijana Žeravica, Sonja Stojanov, Cvijan Antić and Dragan Govedarica
Polymers 2025, 17(21), 2957; https://doi.org/10.3390/polym17212957 - 6 Nov 2025
Viewed by 411
Abstract
Undesirable water-in-crude oil emulsions in the oil and gas industry can lead to several issues, including equipment corrosion, high pressure drops in pipelines, high pumping costs, and increased total production costs. These emulsions are commonly treated with surface-active chemicals called demulsifiers, which can break an oil–water interface and enhance phase separation. This study introduces a novel approach based on neural networks to estimate demulsification efficiency and to aid in the selection of demulsifiers under field conditions. The influence of various types of demulsifiers, demulsifier concentration, time required for demulsification, temperature, and asphaltene content on the demulsification efficiency is analyzed. To improve model accuracy, a modified full-scale factorial design of experiments and a comparison of the response surface method with multilayer perceptron neural networks were conducted. The results demonstrated the advantages of using neural networks over the response surface methodology, such as a reduced settling time in separators, an improved crude oil dehydration and processing capacity, and a lower consumption of energy and utilities. The findings may enhance processing conditions and identify regions of higher demulsification efficiency. The neural network approach provided a more accurate prediction of the maximum demulsification efficiency compared to the response surface methodology. The automated multilayer perceptron neural network, with an architecture consisting of 3 input layers, 14 hidden layers, and 1 output layer, demonstrated the highest validation performance (R2 of 0.991932) by utilizing a logistic output activation function and a hyperbolic tangent activation function for the hidden layers. The identification of shifted optimal values of the time required for demulsification, demulsifier concentration, and asphaltene content, along with a sensitivity analysis, confirmed the advantages of automated neural networks over conventional methods. Full article
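An MLP with the reported activations (tanh hidden units, logistic output) can be sketched in Keras as below; the 3-14-1 layout is read here as 3 inputs, 14 hidden units, and 1 output, and the efficiency target is assumed to be scaled to [0, 1], both of which are interpretations rather than details confirmed by the abstract.

```python
import tensorflow as tf

def build_demulsification_model():
    """Sketch of a small MLP: tanh hidden layer, sigmoid (logistic) output."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(3,)),                      # e.g., concentration, time, asphaltene content (assumed)
        tf.keras.layers.Dense(14, activation="tanh"),    # hidden layer
        tf.keras.layers.Dense(1, activation="sigmoid"),  # efficiency scaled to [0, 1]
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model
```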
22 pages, 10772 KB  
Article
An Artificial Neural Network for Rapid Prediction of the 3D Transient Temperature Fields in Ship Hull Plate Line Heating Forming
by Zhe Yang, Hua Yuan, Zhenshuai Wei, Lichun Chang, Yao Zhao and Jiayi Liu
Materials 2025, 18(21), 5054; https://doi.org/10.3390/ma18215054 - 6 Nov 2025
Viewed by 283
Abstract
Line heating processes play a significant role in the fabrication of structural steel components, particularly in industries such as shipbuilding, aerospace, and automotive manufacturing, where dimensional accuracy and minimal defects are critical. Traditional methods, such as finite element method (FEM) simulations, offer high-fidelity predictions but are hindered by prohibitive computational latency and the need for case-specific re-meshing. This study presents a physics-aware, data-driven neural network that delivers fast, high-fidelity temperature predictions across a broad operating envelope. Each spatiotemporal point is mapped to a one-dimensional feature vector. This vector encodes thermophysical properties, boundary influence factors, heat-source variables, and timing variables. All geometric features are expressed in a path-aligned local coordinate frame, and the inputs are appropriately normalized and nondimensionalized. A lightweight multilayer perceptron (MLP) is trained on FEM-generated induction heating data for steel plates with varying thickness and randomized paths. On a hold-out test set, the model achieves MAE = 0.60 °C, RMSE = 1.27 °C, and R2 = 0.995, with a narrow bootstrapped 99.7% error interval (−0.203 to −0.063 °C). Two independent experiments on an integrated heating and mechanical rolling forming (IHMRF) platform show strong agreement with thermocouple measurements and demonstrate generalization to a plate size not seen during training. Inference is approximately five orders of magnitude (~10^5) faster than FEM, enabling near-real-time full-field reconstructions or targeted spatiotemporal queries. The approach supports rapid parameter optimization and advances intelligent line heating operations. Full article
(This article belongs to the Section Manufacturing Processes and Systems)
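The point-wise surrogate idea, a normalized 1-D feature vector per spatiotemporal query mapped to a temperature by a lightweight MLP, can be sketched with scikit-learn as below; the feature list and layer sizes are illustrative, not the paper's exact encoding.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def build_temperature_surrogate():
    """Normalize each point's feature vector (path-aligned coordinates, thickness,
    heat-source and timing variables), then regress temperature with a small MLP."""
    return make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                     max_iter=2000, random_state=0))

# Usage with hypothetical arrays:
# surrogate = build_temperature_surrogate()
# surrogate.fit(point_features_train, temperature_train)
# T_pred = surrogate.predict(point_features_query)   # near-real-time field queries
```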
21 pages, 3926 KB  
Article
Predicting the Strength of Heavy Concrete Exposed to Aggressive Environmental Influences by Machine Learning Methods
by Kirill P. Zubarev, Irina Razveeva, Alexey N. Beskopylny, Sergey A. Stel’makh, Evgenii M. Shcherban’, Levon R. Mailyan, Diana M. Shakhalieva, Andrei Chernil’nik and Nadezhda I. Nikora
Buildings 2025, 15(21), 3998; https://doi.org/10.3390/buildings15213998 - 5 Nov 2025
Viewed by 197
Abstract
Currently, intelligent algorithms are becoming a reliable alternative for data analysis in many areas of human activity. In materials science, machine learning methods are being effectively applied to the predictive modeling of building material properties. This is particularly interesting and relevant for predicting the strength properties of building materials under aggressive environmental conditions. In this study, machine learning methods (Linear Regression, K-Neighbors, Decision Tree, Random Forest, CatBoost, Support Vector Regression, and Multilayer Perceptron) were used to analyze the relationship between the strength properties of heavy concrete and the freeze–thaw cycle, the average area of damaged areas during this cycle, and the number of damaged areas. The Random Forest and CatBoost methods demonstrate the smallest errors: deviations from actual values are 0.27 MPa and 0.25 MPa, respectively, with an average absolute percentage error of less than 1%. The determination coefficient R2 for both models is greater than 0.99. High values of this statistical measure indicate that the implemented models adequately describe changes in the observed data. The theoretical and practical development of intelligent algorithms in materials science opens up vast opportunities for the development and production of materials that are more resistant to aggressive influences. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)
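The model comparison can be sketched with scikit-learn as below; CatBoost is omitted to stay within scikit-learn, and the default hyperparameters are placeholders rather than the study's tuned settings.

```python
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def compare_regressors(X, y):
    """X: (freeze-thaw cycle, mean damaged area, number of damaged areas); y: strength."""
    models = {
        "Linear": LinearRegression(),
        "KNN": make_pipeline(StandardScaler(), KNeighborsRegressor()),
        "Tree": DecisionTreeRegressor(random_state=0),
        "RF": RandomForestRegressor(n_estimators=500, random_state=0),
        "SVR": make_pipeline(StandardScaler(), SVR()),
        "MLP": make_pipeline(StandardScaler(), MLPRegressor(max_iter=5000, random_state=0)),
    }
    scoring = {"mae": "neg_mean_absolute_error", "r2": "r2"}
    return {name: cross_validate(m, X, y, cv=5, scoring=scoring)
            for name, m in models.items()}
```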
15 pages, 2028 KB  
Article
Parkinson’s Disease Classification Using Gray Matter MRI and Deep Learning: A Comparative Framework
by Haotian Li, Tong Liang, Runhong Yao and Takashi Kuremoto
Appl. Sci. 2025, 15(21), 11812; https://doi.org/10.3390/app152111812 - 5 Nov 2025
Viewed by 219
Abstract
In this study, we propose multiple deep learning models for classifying gray matter MRI images of healthy individuals, prodromal Parkinson’s disease (PD) subjects, and diagnosed PD patients. The two proposed models extend conventional deep learning architectures—MedicalNet3D and 3D ResNet18—by performing feature extraction separately for each class and inputting these features into distinct multilayer perceptron (MLP) classifiers constructed via fine-tuning. To mitigate the overfitting problem and improve generalizability, we introduce a training method based on group-wise feature fusion, in which subject IDs are separated to avoid data leakage during training. Through comparative experiments using the PPMI database, the effectiveness of the proposed approach was validated. Full article
(This article belongs to the Special Issue Pattern Recognition Applications of Neural Networks and Deep Learning)
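The subject-wise separation used to avoid data leakage can be sketched with scikit-learn's GroupShuffleSplit; variable names are placeholders and the study's exact splitting protocol may differ.

```python
from sklearn.model_selection import GroupShuffleSplit

def subject_wise_split(features, labels, subject_ids, test_size=0.2, seed=0):
    """All scans from the same subject ID end up entirely in either the training
    or the test set, preventing leakage across splits."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(splitter.split(features, labels, groups=subject_ids))
    return (features[train_idx], labels[train_idx],
            features[test_idx], labels[test_idx])
```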