Search Results (79)

Search Parameters:
Keywords = full multi-layer perceptron

24 pages, 6320 KB  
Article
Crashworthiness Optimization of Composite/Metal Hybrid Tubes with Triggering Holes
by Yan Ma, Zehui Huang, Hongbin Tang, Jianjiao Deng, Jingchun Wang, Shibin Wang, Zhiguo Zhang and Zhenjiang Wu
Designs 2026, 10(2), 44; https://doi.org/10.3390/designs10020044 - 10 Apr 2026
Viewed by 279
Abstract
Due to their high specific energy absorption (SEA), composite/metal hybrid multi-cell thin-walled tubes hold significant potential in the field of automotive passive safety. However, the material coupling effect that enhances SEA often elevates the initial peak crushing force (IPCF), reducing crushing force efficiency (CFE) and compromising occupant protection. To balance SEA and CFE, trigger holes were introduced as an induced deformation mechanism for hybrid tubes, reducing IPCF while preserving SEA; the optimized perforated configuration yielded higher CFE than the non-perforated counterpart. A high-fidelity finite element model of the hybrid tube was developed and experimentally validated, and the influences of the induced structural parameters on SEA and CFE were investigated. Given the strong nonlinear coupling between trigger parameters and crashworthiness, a multilayer perceptron surrogate model was constructed from 200 samples generated by optimal Latin hypercube sampling (20 reserved for validation). A Q-learning-enhanced particle swarm optimization (QL-PSO) algorithm was adopted for optimization, with reinforcement learning dynamically adjusting the PSO parameters to balance global exploration and local exploitation. Finite element simulations confirmed that the proposed method achieved a favorable SEA-CFE trade-off, improving SEA and CFE by 12.02% and 16.39%, respectively, and outperforming reported configurations. Compared with standard PSO, QL-PSO exhibited superior search efficiency and inverse-mapping accuracy, with 22% higher optimization efficiency and full compliance with the inverse design performance targets. This study provides valuable guidance for the design of thin-walled energy-absorbing structures in multi-material vehicle bodies.
(This article belongs to the Section Vehicle Engineering Design)
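As a rough illustration of the surrogate-plus-swarm idea in the abstract above, the sketch below minimizes a stand-in surrogate with a standard (non-Q-learning) particle swarm in NumPy. The toy quadratic objective, particle count, and coefficients are illustrative assumptions, not the authors' QL-PSO setup or trained MLP surrogate.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box bounds with a standard global-best particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))    # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest = x.copy()                                        # personal bests
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()                    # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Toy "surrogate" standing in for a trained MLP; its known minimum is at (1, 2).
surrogate = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
best, val = pso_minimize(surrogate, np.array([[-5.0, 5.0], [-5.0, 5.0]]))
```

The QL-PSO variant in the paper additionally adapts `w`, `c1`, and `c2` during the search via a learned policy; the fixed coefficients here are the simplest baseline.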

24 pages, 2538 KB  
Article
Baseline Neutrophil-to-Lymphocyte Ratio Stratifies Early Trichoscopic Response to Platelet-Rich Plasma–Based Regimens in Non-Scarring Alopecia: A Real-World Cohort with Internal Validation Using an Interpretable Neural Network
by Adelina Vrapcea, Sarmis-Marian Săndulescu, Eleonora Daniela Ciupeanu-Calugaru, Emil-Tiberius Traşcă, Dumitru Rădulescu, Patricia-Mihaela Rădulescu, Cristina Violeta Tutunaru, Sandra-Alice Buteica, Elena-Camelia Stănciulescu and Cătălina Gabriela Pisoschi
Life 2026, 16(4), 606; https://doi.org/10.3390/life16040606 - 6 Apr 2026
Viewed by 399
Abstract
Background/Objectives: Platelet-rich plasma (PRP)–based regimens are widely used in non-scarring alopecia, yet objective response is variable and clinic-ready predictors are lacking. We evaluated short-term trichoscopic outcomes in routine practice and tested whether baseline complete blood count–derived inflammatory status, quantified by the neutrophil-to-lymphocyte ratio (NLR), can stratify response under PRP-based therapy. Methods: We performed an ambispective observational cohort study (October 2024–October 2025) in an outpatient dermatology practice. The final analytic cohort included 129 patients allocated to four treatment groups: PRP alone (n = 54), PRP combined with microneedling-assisted Purasomes Hair & Scalp Complex (HCS50+, Dermoaroma; exosome-containing) (n = 33), PRP combined with microneedling-assisted Mesoaroma Hair Cocktail (scalp formulation; nutrient complex) (n = 24), and a nutrient complex alone (n = 18). Trichoscopy (FotoFinder ATBM; FotoFinder Systems GmbH, Bad Birnbach, Germany) was obtained at baseline (T1) and first follow-up (T2). Density response was defined as a ≥10% increase in total hair density and hair-cycle response as an anagen fraction increase ≥5 percentage points. Predictive analyses were prespecified and restricted to PRP-containing regimens, using logistic regression and a multilayer perceptron with repeated cross-validation for internal validation. Results: Across the full cohort (n = 129), total hair density and hair-cycle parameters improved from T1 to T2. In the PRP-containing subgroup (n = 111), baseline NLR strongly discriminated density responders (AUC 0.85, bootstrap 95% CI 0.77–0.91). In multivariable models, NLR remained independently associated with density response (OR 0.31 per 1-unit increase, 95% CI 0.20–0.48). Conclusions: In this cohort, baseline NLR was associated with discrimination of early trichoscopic response in PRP-based treatment of non-scarring alopecia. 
Using the Youden-derived cut-off (NLR = 2.202), patients with NLR > 2.202 had a higher risk of density non-response (72.1% vs. 4.7%), corresponding to a 15.49-fold increased failure risk in this cohort. These findings are exploratory and hypothesis-generating, and external validation and calibration are required before any routine clinical or decision-support use. Full article
(This article belongs to the Special Issue Innovative Approaches in Dermatological Therapies and Diagnostics)
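The Youden-derived cut-off mentioned above can be computed directly from scores and outcomes by maximizing J = sensitivity + specificity - 1 over candidate thresholds. A minimal sketch, assuming higher NLR indicates the positive (non-response) class, with made-up toy values rather than the study's cohort:

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return the threshold maximizing Youden's J = sensitivity + specificity - 1.
    Scores at or above the threshold are predicted positive."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    best_j, best_t = -1.0, None
    for t in np.unique(scores):
        pred = scores >= t
        sens = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        spec = (~pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Illustrative NLR values: hypothetical non-responders (label 1) score higher.
nlr = np.array([1.1, 1.5, 1.8, 2.0, 2.3, 2.6, 3.0, 3.4])
y   = np.array([0,   0,   0,   0,   1,   1,   1,   1])
cut, j = youden_cutoff(nlr, y)
```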

19 pages, 2757 KB  
Article
Data-Driven Modeling and Optimization of a Modified Ludzack–Ettinger Process Using ML and DL for Effluent Quality Prediction
by Fengshi Guo, Shiyu Sun, Mingcan Cui and Daeyeon Yang
Water 2026, 18(7), 863; https://doi.org/10.3390/w18070863 - 3 Apr 2026
Viewed by 422
Abstract
Accurate prediction and optimization of effluent quality are essential for the stable operation of wastewater treatment plants under increasing influent variability and stringent discharge regulations. This study presents an integrated data-driven framework that combines machine learning, deep learning, model interpretability, and optimization to enhance the performance of a full-scale Modified Ludzack–Ettinger (MLE) process. Three years of operational data from a municipal wastewater treatment plant were used to develop and compare random forest (RF), k-nearest neighbors (K-NN), multilayer perceptron (MLP), and deep neural network (DNN) models for the simultaneous prediction of effluent total organic carbon (TOC), biochemical oxygen demand (BOD), and total nitrogen (TN). Model performance was evaluated using the coefficient of determination (R2) and root mean square error (RMSE), and generalization capability was validated using independent field data. The results show that deep learning models, particularly DNN, outperform conventional machine learning approaches by effectively capturing complex nonlinear and multivariate process dynamics. To improve model interpretability, SHapley Additive exPlanations (SHAP) were applied to identify key operational variables affecting effluent quality. In addition, particle swarm optimization (PSO) was integrated with the trained models to determine optimal operating conditions that minimize effluent pollutant concentrations without requiring structural modifications. Overall, the proposed framework provides an interpretable and practical decision-support tool for proactive wastewater treatment plant operation, contributing to improved operational efficiency and environmental sustainability. Full article
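The R2 and RMSE criteria used above for model evaluation follow their standard definitions; a minimal NumPy sketch with illustrative effluent values (not the plant's data):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

obs  = [8.0, 10.0, 12.0, 9.0, 11.0]   # e.g. observed effluent TN, mg/L (toy values)
pred = [7.5, 10.5, 11.5, 9.5, 11.0]   # model predictions
```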

16 pages, 8167 KB  
Article
Cascaded Polynomial and MLP Regression for High-Precision Geometric Calibration of Ultraviolet Single-Photon Imaging System
by Wanhong Yan, Lingping He, Chen Tao, Tianqi Ma, Zhenwei Han, Sibo Yu and Bo Chen
Photonics 2026, 13(4), 330; https://doi.org/10.3390/photonics13040330 - 28 Mar 2026
Viewed by 386
Abstract
To meet the requirements of quantitative elemental analysis in the ultraviolet (UV) spectrum, a UV single-photon imaging system was developed, integrating a digital micromirror device (DMD) and a single photon-counting imaging detector, enabling high sensitivity, high resolution, and a wide dynamic range. However, intrinsic geometric distortion poses a significant challenge to accurate spectral calibration. A hybrid correction framework is proposed, cascading polynomial coarse correction with multilayer perceptron (MLP) fine regression, improving calibration accuracy. The method utilizes a full-field dot-array mask projected by the DMD to acquire distortion-reference image pairs. The polynomial model rapidly captures the dominant high-order distortion, while a lightweight MLP performs non-parametric fine regression of residual displacements, achieving a mean error of 0.84 pixels. This approach reduces the root mean square (RMS) error to 1.01 pixels, outperforming traditional direct linear transformation (5.35 pixels) and pure polynomial models (1.33 pixels), while the nonlinearity index decreases from 0.35° to 0.05°. In addition, the method demonstrates stable performance across multi-scale checkerboard patterns ranging from 128 to 280 pixels, with RMS errors remaining around the 1-pixel level. These results validate the high-precision distortion suppression and robust cross-scale performance of the proposed framework. By leveraging DMD-generated patterns for self-calibration, this method eliminates the need for external targets, offering a scalable solution for high-end spectrometer calibration. Full article
(This article belongs to the Section Lasers, Light Sources and Sensors)

10 pages, 2482 KB  
Proceeding Paper
A Clustering-Enhanced Explainable Approach Involving Convolutional Neural Networks for Predicting the Compressive Strength of Lightweight Aggregate Concrete
by Violeta Migallón, Héctor Penadés and José Penadés
Eng. Proc. 2026, 124(1), 77; https://doi.org/10.3390/engproc2026124077 - 11 Mar 2026
Viewed by 143
Abstract
Lightweight aggregate concrete (LWAC) is a practical alternative to conventional concrete in civil engineering, offering advantages such as reduced density, enhanced insulation properties, and improved seismic performance. However, segregation during compaction remains a limitation, as it can lead to non-uniform material distribution and reduced compressive strength. This study addresses this issue by combining non-destructive techniques with deep learning methods to predict the compressive strength of LWAC. We propose an explainable approach based on a convolutional recurrent neural network architecture, enhanced by unsupervised clustering and SHapley Additive exPlanations (SHAP), to improve interpretability. To optimize predictive performance, several aggregation strategies are evaluated at the recurrent layer before the dense layers, including full-sequence flattening, max pooling, average pooling, and an attention mechanism over the full sequence. Experimental results show that the proposed model outperforms conventional machine learning methods such as multilayer perceptron (MLP), random forest (RF), and support vector regression (SVR), as well as ensemble methods such as gradient boosting (GBR), XGBoost, and weighted average ensemble (WAE). Furthermore, when combined with unsupervised clustering, the model identifies latent behavioral patterns that are not observable through traditional evaluation techniques. This demonstrates the potential of integrating non-destructive testing with interpretable deep learning as a reliable approach for the structural assessment of LWAC. Full article
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)
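The four aggregation strategies compared at the recurrent layer (full-sequence flattening, max pooling, average pooling, and attention over the sequence) can be sketched over a generic (time x feature) output; the shapes and the attention scoring vector below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def aggregate(seq, mode="avg", w=None):
    """Collapse a (T, D) recurrent output sequence to a fixed-size vector.
    seq: T time steps x D features; w: (D,) scoring vector for attention."""
    if mode == "flatten":
        return seq.reshape(-1)        # full-sequence flattening -> (T*D,)
    if mode == "max":
        return seq.max(axis=0)        # max pooling over time -> (D,)
    if mode == "avg":
        return seq.mean(axis=0)       # average pooling over time -> (D,)
    if mode == "attention":           # softmax-weighted sum over time -> (D,)
        scores = seq @ w
        a = np.exp(scores - scores.max())
        a /= a.sum()
        return a @ seq
    raise ValueError(mode)

rng = np.random.default_rng(0)
seq = rng.normal(size=(6, 4))         # toy sequence: 6 time steps, 4 features
w = rng.normal(size=4)                # toy attention scoring vector
```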

21 pages, 4694 KB  
Article
Fourier-Feature Neural Surrogate for Hemodynamic Field Reconstruction in Stenotic and Bifurcating Flows
by Polydoros N. Papadopoulos and Vasilis N. Burganos
Mach. Learn. Knowl. Extr. 2026, 8(3), 59; https://doi.org/10.3390/make8030059 - 3 Mar 2026
Viewed by 520
Abstract
This work presents a fast neural surrogate capable of reconstructing fully three-dimensional hemodynamic velocity fields in stenotic and bifurcating microvascular geometries with satisfactory accuracy, avoiding repeated, computationally demanding computational fluid dynamics (CFD) simulations. A Fourier-augmented, coordinate-neural surrogate is presented and assessed for rapid computation of three-dimensional blood-flow fields in a sample geometry. The model is trained on detailed CFD data across a parameter set of stenosis severities that feed a direct mapping from spatial coordinates to velocity components. To mitigate spectral bias and improve accuracy in regions of steep gradients, the input space is embedded with random Fourier features and compared against a conventional multilayer perceptron (MLP) backbone. Predictive ability is assessed upon strict hold-out testing, during which certain arteriolar stenosis cases are excluded from training and treated with the Fourier surrogate. Direct comparison with CFD results reveals that the Fourier MLP achieves nearly CFD fidelity with the coefficient of determination R2 ≥ 0.994 and offers more than 80% reduction in the normalized errors as provided by conventional MLP, with the precise improvement depending on the severity of stenosis. Centerline velocity and cross-sectional profiles further show that the Fourier MLP reconstructs stenosis speed-up and radial profiles more reliably compared to conventional MLP. These results indicate that Fourier feature embedding provides a simple and effective route to robust full-field hemodynamic surrogates for efficient screening of stenosis configurations without resorting to repeated, heavily demanding CFD simulations. Full article
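Random Fourier feature embedding of input coordinates, used above to mitigate spectral bias, amounts to projecting coordinates through a random Gaussian matrix and taking sines and cosines before the MLP. A minimal sketch; the bandwidth scale and dimensions are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def fourier_features(x, B):
    """Embed coordinates x (N, d) as [cos(2*pi*xB), sin(2*pi*xB)] -> (N, 2m)."""
    proj = 2.0 * np.pi * x @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.normal(scale=3.0, size=(3, 64))   # scale controls the frequency bandwidth
coords = rng.uniform(size=(100, 3))       # (x, y, z) query points in the vessel
feats = fourier_features(coords, B)       # (100, 128) input to the surrogate MLP
```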

42 pages, 1422 KB  
Article
Exploring Handwriting-Based Biomarkers for Alzheimer’s Disease: Identifying Discriminative Features and Tasks to Enhance Diagnostic Accuracy
by Cansu Akyürek Anacur, Asuman Günay Yılmaz and Bekir Dizdaroğlu
Diagnostics 2026, 16(5), 697; https://doi.org/10.3390/diagnostics16050697 - 26 Feb 2026
Viewed by 406
Abstract
Background/Objectives: This study proposes a comprehensive classification framework for the automatic detection of Alzheimer’s disease using handwriting data. An enriched feature space is constructed by combining 18 baseline features extracted from raw handwriting signals with 30 additional features derived from established handwriting analysis studies, resulting in a total of 48 features. To enhance clinical practicality, a task reduction analysis is conducted by comparing the full dataset containing 25 handwriting tasks with a reduced dataset comprising 14 selected tasks. Methods: The proposed framework employs a two-stage evaluation strategy involving four feature selection methods (Random Forest Feature Importance, Extreme Gradient Boosting Feature Importance, L1 Regularization and Recursive Feature Elimination), three normalization techniques (Unnormalized, Min–Max and Z-Score), and five baseline machine learning classifiers (Random Forest, Logistic Regression, Multilayer Perceptron, XGBoost and Support Vector Machines). In the second stage, a dynamic ensemble learning strategy is introduced, where the most effective classifiers are adaptively selected for each cross-validation fold and integrated using soft and hard voting schemes. Results: The experimental results demonstrate that reducing the number of tasks leads to an improvement in average classification accuracy from 79.47% to 81.03%, while simultaneously decreasing training time and memory consumption by approximately 40% and 35%, respectively. The highest classification performance, achieving an accuracy of 94.20%, is obtained using the Hard Ensemble combined with L1-based feature selection. Conclusions: These findings highlight that the joint use of enriched feature representations, task reduction, and dynamic ensemble learning provides an effective and computationally efficient solution for handwriting-based Alzheimer’s disease detection. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
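The soft and hard voting schemes used in the ensemble stage above can be sketched directly over per-classifier outputs; the toy labels and probabilities below are illustrative, not the study's fold-level predictions:

```python
import numpy as np

def hard_vote(preds):
    """Majority vote over per-classifier label predictions, shape (n_clf, n_samples)."""
    preds = np.asarray(preds)
    return np.array([np.bincount(col).argmax() for col in preds.T])

def soft_vote(probas):
    """Argmax of averaged class probabilities, shape (n_clf, n_samples, n_classes)."""
    return np.mean(np.asarray(probas), axis=0).argmax(axis=1)

labels = [[1, 0, 1],                      # classifier A's predicted labels
          [1, 1, 0],                      # classifier B
          [1, 0, 0]]                      # classifier C
probas = [[[0.2, 0.8], [0.6, 0.4]],       # per-classifier class probabilities
          [[0.3, 0.7], [0.4, 0.6]],
          [[0.4, 0.6], [0.9, 0.1]]]
```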

19 pages, 5129 KB  
Article
High-Resolution Contact Localization and Three-Axis Force Estimation with a Sparse Strain-Node Tactile Interface Device
by Yanyan Wu, Hanhan Wu, Yifei Han, Yi Ding, Bosheng Cao and Chongkun Xia
Sensors 2026, 26(4), 1378; https://doi.org/10.3390/s26041378 - 22 Feb 2026
Viewed by 468
Abstract
High-resolution contact localization and three-axis force estimation are crucial for human–robot interaction and precision manipulation, yet the sensing area is limited by channel density and wiring cost. Sparse strain readout makes joint estimation of location and three-axis force challenging due to cross-axis coupling and nonlinear responses, while dense arrays or extensive calibration increase complexity. We present a sparse strain-node tactile interface device (SSTID) whose three-module layout is optimized via particle swarm optimization to maximize informative response overlap, enabling contact localization (x,y) and three-axis force (Fx,Fy,Fz) estimation using only nine strain channels. We further propose a strain-node contact-state decoding framework (SCDF) implemented with a lightweight multilayer perceptron and trained via a two-stage sim-to-real strategy, including FEM pretraining followed by few-shot real-data adaptation. Experiments demonstrate accurate contact-state decoding with full-workspace characterization, supporting low-cost and scalable deployment of sparse tactile interfaces. Full article

21 pages, 1305 KB  
Article
Cross-Learner Spectral Subset Optimisation: PLS–Ensemble Feature Selection with Weighted Borda Count for Grapevine Cultivar Discrimination
by Kyle Loggenberg, Albert Strever and Zahn Münch
Geomatics 2026, 6(1), 12; https://doi.org/10.3390/geomatics6010012 - 28 Jan 2026
Viewed by 440
Abstract
The mapping of vineyard cultivars presents a substantial challenge in digital agriculture due to the crop’s high intra-class heterogeneity and low inter-class variability. High-dimensional spectral datasets, such as hyperspectral or spectrometry data, can overcome these difficulties. However, research has yet to fully address the need for optimal spectral feature subsets tailored for grapevine cultivar discrimination, while few studies have systematically examined waveband subsets that transfer effectively across different learning algorithms. This study sets out to address these gaps by introducing a Partial Least Squares (PLS)-based ensemble feature selection framework with Weighted Borda Count aggregation for cultivar discrimination. Using in-field spectrometry data, collected for six cultivars, and 18 PLS-based feature selection methods spanning filter, wrapper, and hybrid approaches, the PLS–ensemble identified 100 wavebands most relevant for cultivar discrimination, reducing dimensionality by ~95%. The efficacy and transferability of this subset were evaluated using five classification algorithms: Oblique Random Forest (oRF), Multinomial Logistic Regression (Multinom), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and a 1D Convolutional Neural Network (CNN). For oRF, Multinom, SVM, and MLP, the PLS–ensemble subset improved accuracy by 0.3–12% compared with using all wavebands. The subset was not optimal for the 1D-CNN, where accuracy decreased by up to 5.7%. Additionally, this study investigated waveband binning to transform narrow hyperspectral bands into broadband spectral features. Using feature multicollinearity and wavelength position, the 100 selected wavebands were condensed into 10 broadband features, which improved accuracy over both the full dataset and the original subset, delivering gains of 4.5–19.1%. The SVM model with this 10-feature subset outperformed all other models (F1: 1.00; BACC: 0.98; MCC: 0.78; AUC: 0.95). Full article
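The Weighted Borda Count aggregation above assigns position-based points to each item, scaled by each ranker's weight, then sorts by total score. A minimal sketch with hypothetical waveband rankings and weights (not the study's 18 PLS-based selectors):

```python
def weighted_borda(rankings, weights):
    """Aggregate best-first ranked lists into one consensus ranking.
    An item at position i among n items earns (n - i) points,
    scaled by that ranker's weight."""
    scores = {}
    for ranking, w in zip(rankings, weights):
        n = len(ranking)
        for i, item in enumerate(ranking):
            scores[item] = scores.get(item, 0.0) + w * (n - i)
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical selectors ranking four wavebands (nm), weighted by reliability.
rankings = [[705, 540, 680, 900],
            [540, 705, 900, 680],
            [705, 900, 540, 680]]
consensus = weighted_borda(rankings, weights=[0.5, 0.3, 0.2])
```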

38 pages, 9992 KB  
Article
Learning-Based Multi-Objective Optimization of Parametric Stadium-Type Tiered-Seating Configurations
by Metin Arel and Fikret Bademci
Mathematics 2026, 14(3), 410; https://doi.org/10.3390/math14030410 - 24 Jan 2026
Viewed by 552
Abstract
Parametric tiered-seating design can be framed as a constrained multi-objective optimization problem in which a low-dimensional decision vector is evaluated by a deterministic operator with sequential feasibility rejection and visibility constraints. This study introduces an oracle-preserving, learning-assisted screening workflow, where a multi-output multilayer perceptron (MLP) is used only to prioritize candidates for evaluation. Here, multi-output denotes a single network trained to predict the full objective vector jointly. Candidates are sampled within bounded decision ranges and evaluated by an operator that propagates section-coupled geometric state and enforces hard clearance thresholds through a Vertical Sightline System (VSS), i.e., a deterministic row-wise sightline/clearance evaluator that enforces hard clearance thresholds. The oracle-evaluated set is reduced to its mixed-direction Pareto-efficient subset and filtered by feature-space proximity to a fixed validation reference using nearest-neighbor distances in standardized 11-dimensional features, yielding a robustness-oriented pool. A compact shortlist is derived via TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution; used here strictly as a post-Pareto decision-support ranking rule), and preference uncertainty is assessed by Monte Carlo weight sampling from a symmetric Dirichlet distribution. In an archived run under a fixed oracle budget, 1235 feasible designs are evaluated, producing 934 evaluated Pareto solutions; proximity filtering retains 187 robust candidates and TOPSIS reports a traceable top-30 shortlist. Stability is supported by concentrated top-k frequencies under weight perturbations and by audits under single-feature-drop ablations and tested rounding precisions. Overall, the workflow enables reproducible multi-objective screening and reporting for feasibility-dominated seating design. Full article
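TOPSIS, used above strictly as a post-Pareto ranking rule, scores each alternative by its relative closeness to an ideal point. A minimal NumPy sketch; the decision matrix, weights, and benefit/cost flags are illustrative assumptions, not the study's 11-dimensional feature space:

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) by closeness to the ideal solution.
    benefit[j] is True if criterion j is to be maximized, False if minimized."""
    X = np.asarray(X, float)
    norm = X / np.linalg.norm(X, axis=0)            # vector-normalize each criterion
    V = norm * np.asarray(weights, float)           # apply criterion weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)       # distance to ideal point
    d_neg = np.linalg.norm(V - worst, axis=1)       # distance to anti-ideal point
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness), closeness        # best-first index order

# Three toy designs scored on two benefit criteria and one cost criterion.
X = [[0.9, 0.7, 3.0],
     [0.6, 0.9, 2.0],
     [0.8, 0.8, 5.0]]
order, c = topsis(X, weights=[0.4, 0.4, 0.2], benefit=[True, True, False])
```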

33 pages, 19776 KB  
Article
Multiparametric Vibration Diagnostics of Machine Tools Within a Digital Twin Framework Using Machine Learning
by Andrey Kurkin, Yuri Kabaldin, Maksim Zhelonkin, Sergey Mancerov, Maksim Anosov and Dmitriy Shatagin
Appl. Sci. 2026, 16(2), 982; https://doi.org/10.3390/app16020982 - 18 Jan 2026
Cited by 1 | Viewed by 647
Abstract
In the context of the digital transformation of industrial production, the need for intelligent maintenance and repair systems capable of ensuring reliable operation of machine-tool equipment without operator involvement is growing. This study reviews the current state and future development of diagnostic and condition-monitoring systems for metalworking machine tools, covering international standards and existing vibration-diagnostics solutions from domestic and international vendors. Particular attention is paid to non-intrusive vibration diagnostics, digital twins, multiparametric analysis methods, and neural network approaches to failure prediction. The architecture of the developed system is presented; its concept complies fully with Russian and international vibration-diagnostics standards. At its core, the comprehensive digital twin relies on machine learning methods. The proposed architecture is a predictive-maintenance system built on interconnected digital twin realizations: the dynamic machine passport of a unit, operational data, and a comprehensive digital twin of the machine-tool equipment. Neuromorphic computing on a hardware platform is considered a promising element for local condition classification and emergency protection. At the current development stage, the operating principle and integration into the control loop have been demonstrated, and the system is entering laboratory testing. It demonstrates capabilities for comprehensive assessment of the equipment's technical condition from multiparametric data, short-term vibration trend forecasting using a Long Short-Term Memory network, and state classification using a Multilayer Perceptron model. The results of testing the system on a turning machining center are analyzed.
(This article belongs to the Special Issue Vibration-Based Diagnostics and Condition Monitoring)

22 pages, 5031 KB  
Article
Data-Driven Prediction of Stress–Strain Fields Around Interacting Mining Excavations in Jointed Rock: A Comparative Study of Surrogate Models
by Anatoliy Protosenya and Alexey Ivanov
Mining 2026, 6(1), 4; https://doi.org/10.3390/mining6010004 - 16 Jan 2026
Viewed by 539
Abstract
Assessing the stress–strain state around interacting mining excavations using the finite element method (FEM) is computationally expensive for parametric studies. This study evaluates tabular machine-learning surrogate models for the rapid prediction of full stress–strain fields in fractured rock masses treated as an equivalent continuum. A dataset of 1000 parametric FEM simulations using the elastoplastic generalized Hoek–Brown constitutive model was generated to train Random Forest, LightGBM, CatBoost, and Multilayer Perceptron (MLP) models based on geometric features. The results show that the best models achieve R2 scores of 0.96–0.97 for stress components and 0.99 for total displacements. LightGBM and CatBoost provide the optimal balance between accuracy and computational cost, offering speed-ups of 15 to 70 times compared to FEM. While Random Forest yields slightly higher accuracy, it is resource-intensive. Conversely, MLP is the fastest but less accurate. These findings demonstrate that data-driven surrogates can effectively replace repeated FEM simulations, enabling efficient parametric analysis and intelligent design optimization for mine workings. Full article

25 pages, 4355 KB  
Article
Integrating Regressive and Probabilistic Streamflow Forecasting via a Hybrid Hydrological Forecasting System: Application to the Paraíba do Sul River Basin
by Gutemberg Borges França, Vinicius Albuquerque de Almeida, Mônica Carneiro Alves Senna, Enio Pereira de Souza, Madson Tavares Silva, Thaís Regina Benevides Trigueiro Aranha, Maurício Soares da Silva, Afonso Augusto Magalhães de Araujo, Manoel Valdonel de Almeida, Haroldo Fraga de Campos Velho, Mauricio Nogueira Frota, Juliana Aparecida Anochi, Emanuel Alexander Moreno Aldana and Lude Quieto Viana
Water 2026, 18(2), 210; https://doi.org/10.3390/w18020210 - 13 Jan 2026
Cited by 1 | Viewed by 586
Abstract
This study introduces the Hybrid Hydrological Forecast System (HHFS), a dual-stage, data-driven framework for monthly streamflow forecasting at the Santa Branca outlet in the upper Paraíba do Sul River Basin, Brazil. The system combines two nonlinear regressors, Multi-Layer Perceptron (MLP) and extreme Gradient Boosting (XGB), calibrated through a structured four-step evolutionary procedure in GA1 (hydrological weighting, dual-regime Ridge fusion, rolling bias correction, and monthly mean–variance adjustment) and a hydro-adaptive probabilistic optimization in GA2. SHAP-based analysis provides physical interpretability of the learned relations. The regressive stage (GA1) generates a bias-corrected and climatologically consistent central forecast. After the full four-step optimization, GA1 achieves robust generalization skill during the independent test period (2020–2023), yielding NSE = 0.77 ± 0.05, KGE = 0.85 ± 0.05, R2 = 0.77 ± 0.05, and RMSE = 20.2 ± 3.1 m3 s−1, representing a major improvement over raw MLP/XGB outputs (NSE ≈ 0.5). Time-series, scatter, and seasonal diagnostics confirm accurate reproduction of wet- and dry-season dynamics, absence of low-frequency drift, and preservation of seasonal variance. The probabilistic stage (GA2) constructs a hydro-adaptive prediction interval whose width (max-min streamflow) and asymmetry evolve with seasonal hydrological regimes. The optimized configuration achieves comparative coverage COV = 0.86 ± 0.00, hit rate p = 0.96 ± 0.04, and relative width r = 2.40 ± 0.15, correctly expanding uncertainty during wet-season peaks and contracting during dry-season recessions. SHAP analysis reveals a coherent predictor hierarchy dominated by streamflow persistence, precipitation structure, temperature extremes, and evapotranspiration, jointly explaining most of the predictive variance. 
By combining regressive precision, probabilistic realism, and interpretability within a unified evolutionary architecture, the HHFS provides a transparent, physically grounded, and operationally robust tool for reservoir management, drought monitoring, and hydro-climatic early-warning systems in data-limited regions. Full article
(This article belongs to the Special Issue Climate Modeling and Impacts of Climate Change on Hydrological Cycle)
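The regressive stage described above (two member models, Ridge fusion of their forecasts, then rolling bias correction against past residuals) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: GradientBoostingRegressor stands in for XGBoost, the fusion is single-regime rather than dual-regime, and the genetic-algorithm calibration and mean–variance adjustment steps are omitted.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic monthly streamflow-like series driven by persistence and "precipitation"
n = 240
precip = rng.gamma(2.0, 50.0, n)
flow = np.zeros(n)
for t in range(1, n):
    flow[t] = 0.7 * flow[t - 1] + 0.3 * precip[t] + rng.normal(0, 5)

X = np.column_stack([np.roll(flow, 1), precip])[1:]   # lag-1 flow + precipitation
y = flow[1:]
split = 180
Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]

# Stage 1: two nonlinear member regressors (GBR stands in for XGB here)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(Xtr, ytr)
gbr = GradientBoostingRegressor(random_state=0).fit(Xtr, ytr)

# Stage 2: Ridge fusion of the member forecasts into one central forecast
Ztr = np.column_stack([mlp.predict(Xtr), gbr.predict(Xtr)])
Zte = np.column_stack([mlp.predict(Xte), gbr.predict(Xte)])
pred = Ridge(alpha=1.0).fit(Ztr, ytr).predict(Zte)

# Stage 3: rolling bias correction — add the trailing mean of *past* residuals,
# so each month is corrected using only errors observed before it
window = 12
resid = yte - pred
corrected = pred.copy()
for t in range(window, len(pred)):
    corrected[t] += resid[t - window:t].mean()
```

The key operational detail is that the rolling correction at month `t` only uses residuals from months before `t`, so no future information leaks into the forecast.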
20 pages, 415 KB  
Article
Cross-Cancer Transfer Learning for Gastric Cancer Risk Prediction from Electronic Health Records
by Daeyoung Hong, Jiung Kim and Jiyong Jung
Diagnostics 2025, 15(24), 3175; https://doi.org/10.3390/diagnostics15243175 - 12 Dec 2025
Viewed by 764
Abstract
Background: Timely identification of individuals at elevated risk for gastric cancer (GC) within routine care could enable earlier endoscopy and referral. We posit that cancers within the gastrointestinal/hepatopancreatobiliary spectrum share signals that can be leveraged via transfer learning on electronic health record (EHR) variables. Methods: We developed a cross-cancer transfer learning framework (TransferGC) on structured EHR data from the MIMIC-IV database (508 GC cases in the target cohort) that pretrains on non-gastric gastrointestinal/hepatopancreatobiliary cancers (colorectal, esophageal, liver, pancreatic) and then adapts to GC using only structured variables. We compared transfer variants against strong non-transfer baselines (logistic regression, XGBoost, and an architecturally matched multilayer perceptron), with area under the receiver operating characteristic curve (AUROC) and average precision (AP) as primary endpoints and F1 and sensitivity/specificity as secondary endpoints. Results: In the full-label setting, Transfer achieved AUROC 0.854 and AP 0.600, outperforming logistic regression (LR) and extreme gradient boosting (XGB), and improving over the scratch multilayer perceptron (MLP) in AUROC (+0.024) and F1 (+0.027), while AP was essentially tied (Transfer 0.600 vs. MLP 0.603). As GC labels were reduced, Transfer maintained the strongest overall performance. Conclusions: Cross-cancer transfer on structured EHR data suggests a sample-efficient route to GC risk modeling under label scarcity. However, because all models were developed and evaluated using a single-center inpatient dataset, external validation on multi-center and outpatient cohorts will be essential to establish generalizability before deployment. If confirmed in future studies, the proposed framework could be integrated into EHR-based triage and clinical decision support workflows to flag patients at elevated GC risk for timely endoscopy and specialist referral. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
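The pretrain-then-adapt pattern described in the abstract can be sketched in a few lines. This is a toy illustration on synthetic tabular data, not the TransferGC framework itself: scikit-learn's `warm_start` option stands in for a proper fine-tuning loop, and the feature and cohort construction are invented for the example.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def make_cohort(n, shift):
    # Stand-in for structured EHR features: a shared signal (columns 0-1)
    # plus a cohort-specific component (column 2, weighted by `shift`)
    X = rng.normal(0, 1, (n, 10))
    logits = X[:, 0] + 0.8 * X[:, 1] + shift * X[:, 2]
    y = (logits + rng.normal(0, 1, n) > 0).astype(int)
    return X, y

# Large "source" cohort (other GI/HPB cancers) and a small labeled target cohort
Xs, ys = make_cohort(5000, shift=0.2)
Xt, yt = make_cohort(300, shift=0.5)
Xt_test, yt_test = make_cohort(1000, shift=0.5)

# Pretrain on the source task, then continue training on the scarce target
# labels: warm_start=True makes the second fit() resume from the pretrained
# weights instead of reinitializing
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=200,
                    warm_start=True, random_state=0)
clf.fit(Xs, ys)            # pretraining stage
clf.set_params(max_iter=50)
clf.fit(Xt, yt)            # adaptation stage on the small target cohort

auroc = roc_auc_score(yt_test, clf.predict_proba(Xt_test)[:, 1])
```

Because both stages are binary classification over the same feature layout, the same network can serve both tasks; the transfer benefit comes from the shared signal learned during pretraining surviving into the adapted model.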
32 pages, 6175 KB  
Article
Comprehensive Image-Based Validation Framework for Particle Motion in DEM Models Under Field-like Conditions
by Kuře Jiří and Kuřetová Barbora
Technologies 2025, 13(12), 570; https://doi.org/10.3390/technologies13120570 - 5 Dec 2025
Viewed by 710
Abstract
Accurate numerical prediction of particle–tool interaction requires validation methods that closely reflect the complexity of real operating conditions. This study introduces a comprehensive methodology for validating the motion of particulate material modeled using the Discrete Element Method (DEM) under field-like conditions, with experimental measurements conducted directly during agricultural processing. The proposed framework integrates image analysis with manual extraction of experimental particle trajectories, providing an efficient, flexible, and cost-effective validation approach. A multilayer perceptron artificial neural network (ANN) trained on 94,939 calibration samples was employed to transform pixel coordinates from two synchronized cameras into 3D spatial positions. To the best of the authors' knowledge, this represents the first application of an ANN-based trajectory reconstruction method under laboratory soil-channel conditions that replicate field-representative geometry and operating velocities. Experiments were conducted in a laboratory soil channel using a full-scale agricultural chisel operating at 1.0 and 1.5 m·s⁻¹, corresponding to realistic tillage velocities. The ANN achieved excellent accuracy (R² = 0.9994, 0.9993, and 0.9988 for the X-, Y-, and Z-axes; average deviation 2.7 mm), and the subsequent comparison with DEM simulations resulted in an average nRMSE of 4.7% for 1 m·s⁻¹ and 9.41% for 1.5 m·s⁻¹. The results confirm that the proposed methodology enables precise reconstruction of particle trajectories and provides a robust framework for the validation and calibration of DEM models under conditions closely approximating real field environments. Full article
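The core regression task here — mapping the pixel coordinates of two synchronized cameras to a 3D position — can be sketched with a small MLP. This is a minimal sketch under invented assumptions, not the authors' calibration setup: the two cameras are idealized pinhole models with made-up intrinsics and a horizontal baseline, and the point cloud is synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Synthetic 3D particle positions inside a small working volume (metres)
P = rng.uniform([-0.5, -0.5, 1.0], [0.5, 0.5, 2.0], (4000, 3))

def project(P, f, cx, cy, tx):
    # Toy pinhole camera shifted by tx along X; returns pixel coords (u, v)
    Xc = P + np.array([tx, 0.0, 0.0])
    return np.column_stack([f * Xc[:, 0] / Xc[:, 2] + cx,
                            f * Xc[:, 1] / Xc[:, 2] + cy])

# Two synchronized views: the network input is the 4-vector (u1, v1, u2, v2)
uv = np.hstack([project(P, 800, 640, 360, 0.0),
                project(P, 800, 640, 360, -0.3)])
uv += rng.normal(0, 0.5, uv.shape)   # sub-pixel measurement noise

split = 3000
net = make_pipeline(
    StandardScaler(),  # pixel coordinates span hundreds of units, so scale first
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
net.fit(uv[:split], P[:split])
r2 = r2_score(P[split:], net.predict(uv[split:]))
```

Depth is recoverable here because the horizontal disparity between the two views varies inversely with distance, a relation the network learns implicitly from the calibration samples instead of requiring an explicit stereo triangulation.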