Search Results (522)

Search Parameters:
Keywords = surrogate data analysis

22 pages, 4040 KB  
Article
Data-Driven Design of Epoxy–Granite Machine Foundations: Bayesian Optimization for Enhanced Compressive Strength and Vibration Damping
by Mohammed Y. Abdellah, Osama M. Irfan and Hanafy M. Omar
Polymers 2026, 18(4), 532; https://doi.org/10.3390/polym18040532 - 21 Feb 2026
Abstract
Epoxy–granite (EG) composites, comprising granite quarry waste and low-cost epoxy, present a sustainable alternative to cast iron for machine tool foundations. This study develops a data-driven simulation framework to enhance the mechanical properties of epoxy–granite systems by integrating published experimental data with Gaussian Process Regression (GPR) surrogate modeling and Bayesian optimization (BO). The objective is to maximize compressive strength and vibration damping—both critical factors for machining accuracy and dynamic stability. Experimental results from composites with 12–25 wt% epoxy and varied aggregate gradations demonstrate compressive strengths up to 76.8 MPa and flexural strengths reaching 35.4 MPa. The peak damping ratio of 0.0202 was observed at intermediate epoxy content. Mixtures enriched with fine particles also exhibited enhanced fracture toughness and low water absorption, outperforming cementitious concretes, polymer concretes, and natural granite. To address the limitations of experimental coverage, a GPR-based simulation model was employed to explore the four-dimensional design space defined by epoxy content and aggregate fractions. Integrated with BO under realistic manufacturing constraints, the framework identifies optimal formulations comprising 22–26 wt% epoxy and 55–70% fine aggregates. These compositions yield predicted compressive strengths of 78–85 MPa and damping ratios approaching 0.022, indicating significant improvement in overall mechanical properties. Bayesian Weibull analysis further quantifies reliability, revealing shape parameters α ≈ 2.4–2.9, which indicate consistent performance with moderate variability. This work presents the first reported application of an integrated GPR-BO-Bayesian Weibull simulation framework to epoxy–granite composites, enabling simultaneous optimization of conflicting objectives and probabilistic reliability assessment of key mechanical properties. The approach reduces experimental effort by over 70% and supports the circular economy through valorization of granite waste in high-value manufacturing. Nonetheless, predictive uncertainty remains high in under-sampled regions (e.g., damping with n = 2). Future experimental validation—comprising at least 10–15 data points across varied epoxy ratios and gradations—is essential to corroborate the predicted optimum. Full article
(This article belongs to the Section Artificial Intelligence in Polymer Science)
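
The abstract above describes a Gaussian Process Regression surrogate coupled with Bayesian optimization. As a rough illustration of that pattern (not the authors' code), the sketch below fits a GPR to a toy two-variable mixture dataset and proposes the next experiment by expected improvement; all data, bounds, and variable names are invented placeholders.

```python
# Minimal sketch of the GPR-surrogate + Bayesian-optimization pattern described
# in the abstract. All data and bounds are illustrative placeholders, not the
# paper's dataset. Requires numpy, scipy and scikit-learn.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Toy design space: [epoxy wt%, fine-aggregate %] -> compressive strength (MPa)
X = rng.uniform([12, 30], [26, 70], size=(15, 2))               # hypothetical experiments
y = 40 + 1.2 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 2, 15)   # stand-in response

gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gpr.fit(X, y)

def expected_improvement(candidates, model, y_best, xi=0.01):
    """Standard EI acquisition: larger means a more promising next experiment."""
    mu, sigma = model.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Propose the next mixture to test from a random candidate pool
candidates = rng.uniform([12, 30], [26, 70], size=(2000, 2))
ei = expected_improvement(candidates, gpr, y.max())
x_next = candidates[np.argmax(ei)]
print("Suggested next mix (epoxy wt%, fines %):", np.round(x_next, 1))
```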

22 pages, 675 KB  
Review
Cross-Platform Transcriptomic Analysis of 40 Human and Rodent Skeletal Muscle Exerkines
by Hash Brown Taha, Nathan Robbins, Firas-Shah Zoha, Shirley Zhu, Nandhana Vivek and Aleksander Bogoniewski
Muscles 2026, 5(1), 15; https://doi.org/10.3390/muscles5010015 - 13 Feb 2026
Abstract
Animal and human studies show that exercise induces organism-wide molecular adaptations that are partly mediated by exerkines which are secreted factors that enable inter-organ communication between tissues such as skeletal muscle, adipose tissue, liver and the brain. However, the tissue-specific responsiveness of individual exerkines and how these responses differ across species, exercise conditions and sexes remain poorly understood. To address this gap, we systematically analyzed skeletal muscle transcriptomic responses of 40 exerkines using three publicly available datasets including MetaMEx, Extrameta and the MoTrPAC 6-month-old rat training dataset. We reviewed exerkine-specific regulation in humans, mice and rats across acute and chronic exercise and inactivity. We determined conserved, non-conserved, and discordant exerkines across species and whether they were dependent on exercise modality or sex. Our review reveals substantial heterogeneity in skeletal muscle transcriptomic exerkine regulation with only a small subset showing conserved changes across species. Additionally, a key limitation is that our analysis was limited to transcriptomic data and may not reflect protein-level abundance, secretion, or uptake by recipient tissues. Therefore, we highlight a need for multi species and multi condition approaches when selecting exerkines as biomarkers or surrogate therapeutic targets. Full article

21 pages, 859 KB  
Article
Predicting the Unpredictable: Prognostic Role of Systemic Inflammatory Indices and Tumor Biology of Neoadjuvant Chemotherapy Response in Gastric and Gastroesophageal Junction Cancer—Insights from a Systematic Review and Real-World Experience
by Sibel Oyucu Orhan, Bedrettin Orhan, Yağmur Çakır, Seda Sali, Burcu Caner, Birol Ocak, Ahmet Bilgehan Şahin, Adem Deligönül, Erdem Çubukçu and Türkkan Evrensel
J. Clin. Med. 2026, 15(4), 1484; https://doi.org/10.3390/jcm15041484 - 13 Feb 2026
Abstract
Background/Objectives: Perioperative chemotherapy is the standard treatment for locally advanced gastric and gastroesophageal junction adenocarcinoma; however, substantial uncertainty remains regarding the optimal management of non-responding patients and the prognostic relevance of biological and inflammatory biomarkers. This study aimed to determine, using real-world data integrated with a comprehensive literature review, whether long-term survival is driven primarily by the choice of chemotherapy regimen or by the tumor’s intrinsic biological aggressiveness and the host’s systemic inflammatory response. Methods: A retrospective analysis was performed of 43 patients with locally advanced gastric cancer who received neoadjuvant chemotherapy. Survival outcomes were stratified by regimen (FLOT versus non-FLOT) and analyzed using Kaplan–Meier methods. The prognostic value of clinicopathological features and systemic inflammatory indices was assessed using multivariate Cox regression models to identify independent predictors of mortality. Results: Although FLOT showed a trend toward improved overall survival (OS) (median not reached vs. 18.9 months), this difference did not reach statistical significance. Univariate analysis linked lymphovascular invasion (LVI) (HR = 4.17; p = 0.003), pan-cytokeratin (panCK) (HR = 2.44; p = 0.032), and monocyte-to-lymphocyte ratio (MLR) (HR = 1.73; p = 0.027) with survival. To minimize overfitting, two multivariate models were constructed. The first confirmed LVI (HR = 7.32; p < 0.001) and panCK (HR = 4.30; p = 0.006) as independent prognostic markers. The second identified MLR (HR = 1.65; p = 0.033) and panCK (HR = 2.42; p = 0.034) as independent adverse factors. Conclusions: Our findings suggest a paradigm shift in prognostic assessment for locally advanced gastric cancer: therapeutic success appears to depend more on underlying tumor biology and the immune microenvironment than on any specific neoadjuvant regimen. High MLR and LVI serve as strong surrogate markers of a biologically aggressive, chemotherapy-resistant phenotype. Consequently, future clinical strategies should move beyond a “one-size-fits-all” chemotherapy approach and prioritize these biomarkers for risk stratification and personalization of multimodal therapy. Full article
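
For readers unfamiliar with the survival methods named above, the following is a minimal sketch of a Kaplan–Meier comparison and a multivariate Cox model using the lifelines library; the synthetic cohort, column names, and covariates are hypothetical stand-ins, not the study's dataset.

```python
# Illustrative sketch of the survival workflow outlined in the abstract
# (Kaplan-Meier by regimen, multivariate Cox regression). The synthetic
# dataframe and column names are placeholders, not the study's data.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "flot": rng.integers(0, 2, n),        # 1 = FLOT, 0 = non-FLOT regimen
    "lvi": rng.integers(0, 2, n),         # lymphovascular invasion
    "panck": rng.integers(0, 2, n),       # pan-cytokeratin positivity
    "mlr": rng.uniform(0.1, 0.8, n),      # monocyte-to-lymphocyte ratio
})
df["os_months"] = rng.exponential(24 / (1 + df["lvi"] + df["mlr"]), n)
df["death"] = rng.integers(0, 2, n)       # event indicator

# Kaplan-Meier curves stratified by neoadjuvant regimen
kmf = KaplanMeierFitter()
for regimen, grp in df.groupby("flot"):
    kmf.fit(grp["os_months"], event_observed=grp["death"], label=f"FLOT={regimen}")
    kmf.plot_survival_function()

# Multivariate Cox model with the candidate prognostic factors
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "lvi", "panck", "mlr"]],
        duration_col="os_months", event_col="death")
cph.print_summary()                       # hazard ratios and p-values for LVI, panCK, MLR
```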

16 pages, 2615 KB  
Article
Multi-Point Stretch Forming Springback Prediction and Parameter Sensitivity Analysis Based on GWO-CatBoost
by Xue Chen, Dongmei Wang, Chi Zhang, Renwei Wang, Changliang Zhang and Yueteng Zhou
Appl. Sci. 2026, 16(4), 1790; https://doi.org/10.3390/app16041790 - 11 Feb 2026
Abstract
Springback control in Multi-Point Stretch Forming (MPSF) is significantly hindered by the computational intensity of Finite Element Analysis (FEA) and the limited predictive robustness of traditional regression methods. This study develops a hybrid GWO-CatBoost model acting as a data-driven surrogate for MPSF simulations by integrating the Grey Wolf Optimizer (GWO) with the CatBoost algorithm for high-precision springback forecasting. An FEA model of the MPSF process was initially validated through experimental comparison under a representative working condition to assess modeling accuracy. A comprehensive dataset comprising 1200 scenarios was generated via a full factorial design, incorporating key variables: curvature radius, sheet thickness, cushion thickness, and pre-stretching rate. In this study, the GWO was employed to perform automated hyperparameter tuning for CatBoost by optimizing the learning rate, tree depth, and number of iterations, thereby enabling accurate modeling of the complex nonlinear relationship between process inputs and numerical springback values. Numerical evaluations demonstrate that the GWO-CatBoost model outperforms GWO-XGBoost and GWO-Random Forest benchmarks, achieving a Coefficient of Determination (R2) of 0.9293, a root mean square error (RMSE) of 0.0274 mm and mean absolute error (MAE) of 0.0189 mm. Sensitivity analysis identifies sheet thickness as the dominant factor (46% contribution), with cushion thickness as the secondary driver (23%). This predictive framework serves as a computationally efficient auxiliary surrogate, designed to assist iterative finite element analyses and support process optimization in the manufacture of complex-curved panels. Full article
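
A compact sketch of the GWO-CatBoost idea described above: a minimal Grey Wolf Optimizer tunes CatBoost's learning rate, depth, and iteration count against validation RMSE. The dataset, bounds, and wolf/iteration counts are placeholders, and the GWO here is a bare-bones textbook variant rather than the authors' implementation.

```python
# Grey Wolf Optimizer searching CatBoost hyperparameters (learning rate, depth,
# iterations) that minimize validation RMSE. Data and bounds are placeholders.
import numpy as np
from catboost import CatBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(size=(1200, 4))            # stand-in for the 1200-case FEA dataset
y = X @ np.array([0.4, 0.9, 0.2, 0.1]) + rng.normal(0, 0.02, 1200)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

lo, hi = np.array([0.01, 3, 100]), np.array([0.3, 10, 800])   # [lr, depth, iterations]

def rmse(theta):
    model = CatBoostRegressor(learning_rate=float(theta[0]), depth=int(theta[1]),
                              iterations=int(theta[2]), verbose=0, random_seed=0)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_va, model.predict(X_va)) ** 0.5

# Minimal Grey Wolf Optimizer: positions are pulled toward the alpha/beta/delta wolves
n_wolves, n_iter = 6, 10
wolves = rng.uniform(lo, hi, size=(n_wolves, 3))
scores = np.array([rmse(w) for w in wolves])
for t in range(n_iter):
    a = 2 - 2 * t / n_iter                          # encircling coefficient decays to 0
    leaders = wolves[np.argsort(scores)[:3]]        # alpha, beta, delta (best three)
    for i in range(n_wolves):
        steps = []
        for leader in leaders:
            A = a * (2 * rng.uniform(size=3) - 1)
            C = 2 * rng.uniform(size=3)
            steps.append(leader - A * np.abs(C * leader - wolves[i]))
        wolves[i] = np.clip(np.mean(steps, axis=0), lo, hi)
        scores[i] = rmse(wolves[i])

best = wolves[np.argmin(scores)]
print("Best (lr, depth, iterations):", np.round(best, 3), "val RMSE:", scores.min())
```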

36 pages, 4683 KB  
Review
Machine Learning for Satellite Solar-Induced Fluorescence: Retrieval, Reconstruction, Downscaling, and Applications
by Jochem Verrelst, Yuxin Zhang, Miguel Morata, Emma De Clerck and Leizhen Liu
Remote Sens. 2026, 18(4), 553; https://doi.org/10.3390/rs18040553 - 9 Feb 2026
Abstract
Satellite-observed solar-induced chlorophyll fluorescence (SIF) provides a direct radiative link between solar radiation, photosystem de-excitation and vegetation photosynthetic activity. As multiple satellite missions now deliver global SIF products, machine learning (ML) has become a key tool for: (i) flexible nonlinear SIF retrieval, (ii) spatial reconstruction and downscaling of SIF fields, (iii) full-spectrum SIF reconstruction beyond narrow absorption windows, and (iv) data-driven analysis of the SIF–gross primary production (GPP) relationship. In addition, ML methods are increasingly used for: (v) uncertainty quantification (UQ) along the SIF information chain, and (vi) emulation (i.e., surrogate modelling) of radiative transfer models (RTMs) to accelerate computationally demanding SIF workflows. This review provides a conceptual and methodological survey of recent ML applications across the satellite SIF processing chain, summarises emerging products and methods, and highlights open challenges in uncertainty treatment, spectral reconstruction, and hybrid RTM–ML approaches. Particular emphasis is placed on the upcoming ESA FLEX mission, planned for launch in 2026, which will deliver multi-band SIF observations optimised for photosynthesis monitoring. While FLEX Level-2 (L2) operational processing will be based on physically grounded retrieval algorithms developed within ESA projects, ML is expected to play an important role in scientific exploitation and in the development of higher-level products (L3/L4), supporting high-resolution, uncertainty-aware SIF and GPP products and helping to bridge scales from leaf to ecosystem. Full article
(This article belongs to the Special Issue Remote Sensing and Modelling of Terrestrial Ecosystems Functioning)

30 pages, 9131 KB  
Article
Multi-Objective Optimization Design of High-Power Permanent Magnet Synchronous Motor Based on Surrogate Model
by Zhihao Zhu, Xiang Li, Yingzhi Lin, Hao Wu, Junhui Chen, Niannian Zhang, Thomas Wu, Bo Lin and Suyan Wang
Sustainability 2026, 18(3), 1705; https://doi.org/10.3390/su18031705 - 6 Feb 2026
Abstract
Energy scarcity has evolved into one of the most pressing challenges confronting the global community today. Fuel-driven loaders suffer from drawbacks such as high fuel consumption, low energy conversion efficiency, and heavy pollution, which not only aggravate atmospheric environmental pollution but also exacerbate the global energy crisis, directly undermining sustainable development goals. In contrast, permanent magnet synchronous motors (PMSMs) have become the preferred choice for the electrification of loaders owing to their exceptional torque density, strong overload capacity, and high reliability. However, during the optimal design of high-power interior permanent magnet synchronous motors (IPMSMs), traditional methods encounter issues with inadequate optimization efficiency and excessive computational expenses, thus hindering the large-scale deployment of power systems for eco-friendly loaders. Therefore, this paper takes a 125 kW, 3000 rpm IPMSM as the research object and proposes a multi-objective optimization strategy integrating a high-precision surrogate model with modern intelligent algorithms. This approach not only enhances motor performance but also cuts down computational overhead, which holds considerable significance for reducing industrial carbon emissions and driving the sustainable development of the manufacturing industry. Taking the key performance of IPMSM as the optimization objective and the related structural parameters as the optimization variables, the multi-performance characteristic index, interaction effect and comprehensive sensitivity of the variables are calculated and analyzed by fuzzy Taguchi experiment, and the hierarchical dimension reduction in the variables is completed. The Multicriteria Optimal-Latin Hypercube Sampling (MO-LHS) method is adopted to construct the sample data space, and a back-propagation neural network (BPNN) surrogate model is used to predict and fit the motor performance. The second-generation non-dominated sorting genetic algorithm (NSGA-II) is employed for iterative optimization, and the optimized motor dimension parameters are obtained through the Pareto optimal solution. Finally, through finite element analysis (FEA) and experiments, the rated torques obtained are 417.6 N·m and 425.1 N·m, respectively, with an error not exceeding 1.8%. This verifies the correctness and effectiveness of the proposed multi-objective optimization method based on the surrogate model. Full article
(This article belongs to the Section Energy Sustainability)
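
The pipeline described above (Latin hypercube sampling, neural-network surrogate, NSGA-II) can be sketched roughly as follows, with scipy's Latin hypercube, scikit-learn's MLPRegressor standing in for the paper's BPNN, and pymoo's NSGA-II; the "FEA" function, bounds, and objectives are synthetic placeholders.

```python
# Surrogate-assisted NSGA-II pattern: LHS design of experiments, a neural-network
# surrogate fitted to expensive evaluations, then NSGA-II over the surrogate.
# Requires numpy, scipy, scikit-learn, pymoo; everything numerical is a placeholder.
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPRegressor
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

lb, ub = np.zeros(4), np.ones(4)                     # 4 normalized design variables

def fea_stub(x):
    """Stand-in for the expensive FEA: returns (-torque, torque ripple)."""
    return np.array([-(x[0] + 0.5 * x[1]), 0.2 * x[2] ** 2 + 0.1 * x[3]])

# 1) Latin hypercube design of experiments
X = qmc.scale(qmc.LatinHypercube(d=4, seed=0).random(200), lb, ub)
Y = np.array([fea_stub(x) for x in X])

# 2) Neural-network surrogate (stand-in for the paper's BPNN)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, Y)

# 3) NSGA-II searches the cheap surrogate instead of the FEA model
class MotorProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=4, n_obj=2, xl=lb, xu=ub)
    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = surrogate.predict(x.reshape(1, -1))[0]

res = minimize(MotorProblem(), NSGA2(pop_size=60), ("n_gen", 40),
               seed=1, verbose=False)
print("Pareto set size:", len(res.F))                # candidate designs to verify by FEA
```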

29 pages, 4499 KB  
Article
Surrogate-Assisted Many-Objective Optimization of Injection Molding: Effects of Objective Selection and Sampling Density
by T. Marques, J. B. Melo, A. J. Pontes and A. Gaspar-Cunha
Appl. Sci. 2026, 16(3), 1578; https://doi.org/10.3390/app16031578 - 4 Feb 2026
Abstract
In injection molding, advanced numerical modeling tools, such as Moldex3D, can significantly improve product development by optimizing part functionality, structural integrity, and material efficiency. However, the complex and nonlinear interdependencies between the several decision variables and objectives, considering the various operational phases, constitute a challenge to the inherent complexity of injection molding processes. This complexity often exceeds the capacity of conventional optimization methods, necessitating more sophisticated analytical approaches. Consequently, this research aims to evaluate the potential of integrating intelligent algorithms, specifically the selection of objectives using Principal Component Analysis and Mutual Information/Clustering, metamodels using Artificial Neural Networks, and optimization using Multi-Objective Evolutionary Algorithms, to manage and solve complex, real-world injection molding problems effectively. Using surrogate modeling to reduce computational costs, the study systematically investigates multiple methodological approaches, algorithmic configurations, and parameter-tuning strategies to enhance the robustness and reliability of predictive and optimization outcomes. The research results highlight the significant potential of data-mining methodologies, demonstrating their ability to capture and model complex relationships among variables accurately and to optimize conflicting objectives efficiently. In due course, the enhanced capabilities provided by these integrated data-mining techniques result in substantial improvements in mold design, process efficiency, product quality, and overall economic viability within the injection molding industry. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
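
One of the steps named above, objective selection via Principal Component Analysis, can be illustrated with a short sketch: PCA on a matrix of sampled objective values indicates how many effectively independent objective directions are present. The objective matrix below is synthetic.

```python
# PCA-based objective selection: keep only as many objectives as there are
# significant principal directions in the sampled objective matrix.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 500 simulated moldings x 6 candidate objectives, deliberately built so that
# several objectives are strongly correlated with each other
base = rng.normal(size=(500, 3))
objectives = np.hstack([base,
                        base[:, :2] + 0.05 * rng.normal(size=(500, 2)),
                        rng.normal(size=(500, 1))])

pca = PCA().fit(objectives)
cum = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cum, 0.95) + 1)    # components needed for 95% of the variance
print("Independent objective directions:", k)
print("Loadings of the leading components:\n", np.round(pca.components_[:k], 2))
```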

20 pages, 3855 KB  
Article
A Meta-Optimization Framework Based on Hybrid Neuro-Regression for Quality-Oriented Laser Transmission Welding of PMMA–Metal Joints
by Nilay Kucukdogan
Appl. Sci. 2026, 16(3), 1563; https://doi.org/10.3390/app16031563 - 4 Feb 2026
Abstract
This study presents an integrated modeling and optimization framework for laser transmission welding (LTW) of transparent polymethyl methacrylate (PMMA) joints using single- and multi-core copper wires as energy absorbers. The highly nonlinear relationships between laser power, welding speed, and spot diameter and the resulting shear force and weld width were modeled using a hybrid neuro-regression strategy combining data-driven learning with physically interpretable analytical formulations. A wide range of candidate mathematical models were systematically evaluated based on training and testing performance, residual behavior, and physical consistency. The results demonstrate that models exhibiting near-perfect training accuracy frequently suffered from severe overfitting and poor generalization, whereas intermediate-complexity formulations provided a more reliable balance between accuracy and robustness. Comparative analysis further showed that multi-core absorbers consistently produced higher shear strength and more uniform weld seams than single-core configurations. The selected robust models were subsequently integrated into a two-level ensemble meta-optimization framework employing Differential Evolution, Nelder–Mead, Random Search, and Simulated Annealing algorithms under multiple design scenarios. The meta-optimization process successfully eliminated model- and algorithm-dependent extreme solutions and identified stable consensus parameter regions. For the multi-core system, an optimal combination of 30 W laser power, 20 mm/s welding speed, and 0.7 mm spot diameter was obtained, achieving improved mechanical performance while remaining within experimentally validated limits. The proposed framework provides a physically grounded and reliable strategy for surrogate-based optimization of nonlinear welding processes. Full article
(This article belongs to the Section Materials Science and Engineering)
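
The two-level meta-optimization idea, running several independent optimizers on the same fitted response model and checking for a consensus optimum, is sketched below with scipy's Differential Evolution and restarted Nelder–Mead searches; the quadratic "shear force" model and bounds are placeholders for the paper's hybrid neuro-regression models.

```python
# Ensemble meta-optimization sketch: optimize the same fitted response model with
# several independent algorithms and look for a consensus optimum. The quadratic
# model is a placeholder; bounds mimic (power W, speed mm/s, spot diameter mm).
import numpy as np
from scipy.optimize import differential_evolution, minimize

def neg_shear_force(x):
    p, v, d = x                                   # laser power, welding speed, spot diameter
    return -(5.0 * p - 0.07 * p**2 - 2.0 * (v - 20)**2 - 40.0 * (d - 0.7)**2)

bounds = [(15, 35), (10, 30), (0.4, 1.0)]

solutions = []
de = differential_evolution(neg_shear_force, bounds, seed=0)
solutions.append(("DE", de.x))
for start in np.random.default_rng(0).uniform([15, 10, 0.4], [35, 30, 1.0], (5, 3)):
    nm = minimize(neg_shear_force, start, method="Nelder-Mead",
                  bounds=bounds)                  # random restarts stand in for random search
    solutions.append(("NM", nm.x))

for name, x in solutions:
    print(name, np.round(x, 2))                   # agreement across runs indicates a stable optimum
```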

35 pages, 2881 KB  
Review
Systematic Mapping of Artificial Intelligence Applications in Finite-Element-Based Structural Engineering
by Villem Vaktskjold, Lars Olav Toppe, Marcin Luczkowski, Anders Rønnquist and David Morin
Buildings 2026, 16(3), 644; https://doi.org/10.3390/buildings16030644 - 3 Feb 2026
Abstract
This study systematically maps how artificial intelligence (AI) has been applied within finite-element (FE)-based structural engineering. A corpus of 5995 unique English-language publications was compiled and classified by discipline, with 3345 relevant papers further categorized by application group. A representative subset of 372 studies underwent detailed full-text classification across seven analytical dimensions covering AI methods, element formulations, materials, and structural objects. The analysis reveals rapid growth after 2015, including a pronounced expansion of surrogate modeling and data-driven prediction methods. The disciplinary composition of the literature has also evolved, with structural engineering studies becoming more prominent in recent years relative to earlier decades. Optimization & Design remains the largest application area across the full dataset, while Structural Performance Prediction and FEM Acceleration/Surrogate Modeling show the fastest growth, reflecting increasing emphasis on predictive, solver-efficient, and hybrid physics–data approaches. These findings indicate a maturing field in which AI is increasingly embedded across all stages of FE-based analysis and design. This study provides a structured overview of methodological patterns, identifies emerging hybrid strategies, and highlights opportunities for future research and industrial integration. Full article

24 pages, 4359 KB  
Article
GPU-Accelerated Data-Driven Surrogates for Transient Simulation of Tileable Piezoelectric Microactuators
by John Scumniotales, Jason Clark and Daniel Tran
Actuators 2026, 15(2), 94; https://doi.org/10.3390/act15020094 - 2 Feb 2026
Abstract
Finite element analysis (FEA) remains the gold standard for simulating piezoelectric microactuators because it resolves coupled electromechanical fields with high fidelity. However, transient FEA becomes prohibitively expensive when thousands of actuators must be simulated. This work presents a data-driven surrogate modeling framework for tileable, PZT-5H microactuators enabling fast, dynamic, and parallel predictions of actuator displacement over multi-step horizons from short displacement history windows, augmented with the corresponding prescribed voltage and traction samples over that same history window. High-fidelity COMSOL simulations are used to generate a dataset aiming to encompass the full operational envelope of our actuator under stochastically sampled and procedurally generated input waveform families. From these families, we construct a supervised learning dataset of time histories, displacement, and applied loads. From this, we train a recurrent sequence-to-sequence neural network that predicts a multi-step open-loop displacement rollout conditioned on the most recent electromechanical history. The resulting model can be leveraged to perform batched inference for millions of actuators on GPU hardware, opening up a wide range of new applications such as reinforcement learning via digital twins, scalable design and simulation for piezoelectric artificial-muscle systems, and accelerated optimization. Full article
(This article belongs to the Section Miniaturized and Micro Actuators)
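
A recurrent sequence-to-sequence surrogate of the kind described above can be sketched in PyTorch: a GRU encoder reads a short (displacement, voltage, traction) history window and a GRU decoder rolls out a multi-step displacement prediction. Layer sizes, horizons, and the random training tensors are placeholders, not the paper's model.

```python
# Sketch of a recurrent sequence-to-sequence displacement surrogate: encode a
# short electromechanical history, then roll out an open-loop multi-step forecast.
import torch
import torch.nn as nn

class ActuatorSurrogate(nn.Module):
    def __init__(self, in_dim=3, hidden=64, horizon=32):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(in_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, history):                       # history: (batch, T_hist, 3)
        _, h = self.encoder(history)                  # h: (1, batch, hidden)
        y = history[:, -1:, :1]                       # seed with the last displacement sample
        outputs = []
        for _ in range(self.horizon):                 # open-loop rollout
            out, h = self.decoder(y, h)
            y = self.head(out)                        # next displacement step
            outputs.append(y)
        return torch.cat(outputs, dim=1)              # (batch, horizon, 1)

model = ActuatorSurrogate()
history = torch.randn(8, 50, 3)                       # toy batch of history windows
target = torch.randn(8, 32, 1)
loss = nn.functional.mse_loss(model(history), target)
loss.backward()                                       # batched GPU inference would use model.to("cuda")
print("one training step, loss =", float(loss))
```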

22 pages, 1267 KB  
Article
Application of a Hybrid Explainable ML–MCDM Approach for the Performance Optimisation of Self-Compacting Concrete Containing Crumb Rubber and Calcium Carbide Residue
by Musa Adamu, Shrirang Madhukar Choudhari, Ashwin Raut, Yasser E. Ibrahim and Sylvia Kelechi
J. Compos. Sci. 2026, 10(2), 76; https://doi.org/10.3390/jcs10020076 - 2 Feb 2026
Abstract
The combined incorporation of crumb rubber (CR) and calcium carbide residue (CCR) in self-compacting concrete (SCC) induces competing and nonlinear effects on its fresh and hardened properties, making the simultaneous optimisation of workability, strength, durability, and stability challenging. CR reduces density and enhances deformability and flow stability but adversely affects strength, whereas CCR improves particle packing, cohesiveness, and early-age strength up to an optimal replacement level. To systematically address these trade-offs, this study proposes an integrated multi-criteria decision-making (MCDM)–explainable machine learning–global optimisation framework for sustainable SCC mix design. A composite performance score encompassing fresh, mechanical, durability, and thermal indicators is constructed using a weighted MCDM scheme and learned through surrogate machine-learning models. Three learners—glmnet, ranger, and xgboost—are tuned using v-fold cross-validation, with xgboost demonstrating the highest predictive fidelity. Given the limited experimental dataset, bootstrap out-of-bag validation is employed to ensure methodological robustness. Model-agnostic interpretability, including permutation importance, SHAP analysis, and partial-dependence plots, provides physical transparency and reveals that CR and CCR exert strong yet opposing influences on the composite response, with CCR partially compensating for CR-induced strength losses through enhanced cohesiveness. Differential Evolution (DEoptim) applied to the trained surrogate identifies optimal material proportions within a continuous design space, favouring mixes with 5–10% CCR and limited CR content. Among the evaluated mixes, 0% CR–5% CCR delivers the best overall performance, while 20% CR–5% CCR offers a balanced strength–ductility compromise. Overall, the proposed framework provides a transparent, interpretable, and scalable data-driven pathway for optimising SCC incorporating circular materials under competing performance requirements. Full article
(This article belongs to the Special Issue Sustainable Cementitious Composites)
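
The study itself works in R (glmnet, ranger, xgboost, DEoptim); a rough Python analogue of the surrogate-plus-evolutionary-optimization step is sketched below, fitting a gradient-boosted surrogate of a weighted composite score over (CR %, CCR %) and running scipy's differential evolution on it. All data and weights are invented.

```python
# Python analogue of the surrogate + DEoptim step: fit a gradient-boosted
# surrogate of a composite performance score and optimize it with differential
# evolution. Data, weights and bounds are invented placeholders.
import numpy as np
from xgboost import XGBRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
cr = rng.uniform(0, 30, 80)                  # crumb rubber replacement (%)
ccr = rng.uniform(0, 15, 80)                 # calcium carbide residue (%)
# Invented composite score: CCR helps up to a mid-range dose, CR trades strength away
score = 1 - 0.02 * cr - 0.01 * (ccr - 7.5) ** 2 + rng.normal(0, 0.02, 80)

X = np.column_stack([cr, ccr])
surrogate = XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
surrogate.fit(X, score)

def neg_score(x):
    return -float(surrogate.predict(np.asarray(x).reshape(1, -1))[0])

result = differential_evolution(neg_score, bounds=[(0, 30), (0, 15)], seed=1)
print("Surrogate-optimal (CR %, CCR %):", np.round(result.x, 1))
```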

23 pages, 3101 KB  
Article
Inverse Thermal Process Design for Interlayer Temperature Control in Wire-Directed Energy Deposition Using Physics-Informed Neural Networks
by Fuad Hasan, Abderrachid Hamrani, Tyler Dolmetsch, Somnath Somadder, Md Munim Rayhan, Arvind Agarwal and Dwayne McDaniel
J. Manuf. Mater. Process. 2026, 10(2), 52; https://doi.org/10.3390/jmmp10020052 - 1 Feb 2026
Abstract
Wire-directed energy deposition (W-DED) produces steep thermal gradients and rapid heating-cooling cycles due to the moving heat source, where modest variations in process parameters significantly alter heat input per unit length and therefore the full thermal history. This sensitivity makes process tuning by trial-and-error or repeated FE sweeps expensive, motivating inverse analysis. This work proposes an inverse thermal process design framework that couples single-track experiments, a calibrated finite element (FE) thermal model, and a parametric physics-informed neural network (PINN) surrogate. By using experimentally calibrated heat-loss physics to define the training constraints, the PINN learns a parameterized thermal response from physics alone (no temperature data in the PINN loss), enabling inverse design without repeated FE runs. Thermocouple measurements are used to calibrate the convection film coefficient and emissivity in the FE model, and those parameters are used to train a parametric PINN over continuous ranges of arc power (1.5–3.0 kW) and travel speed (0.005–0.015 m/s) without using temperature data in the loss function. The trained PINN model was validated against the calibrated FE model at 3 probe locations with different power and travel speed combinations. Across these benchmark conditions, the mean absolute errors are between 6.5–17.4 °C, with cooling-tail errors ranging from 1.8–12.1 °C. The trained surrogate is then embedded in a sampling-based inverse optimization loop to identify power-speed combinations that achieve prescribed interlayer temperatures at a fixed dwell time. For target interlayer temperatures of 100, 130, and 160 °C with a 10 s dwell time, the optimized solutions remain within 3.3–5.6 °C of the target according to the PINN, while FE verification is within 4.0–6.6 °C. The results demonstrate that a physics-only parametric PINN surrogate enables inverse thermal process design without repeated FE runs while establishing a single-track baseline for extension to multi-track and multi-layer builds. Full article
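
The sampling-based inverse loop described above can be sketched as follows: sample (arc power, travel speed) over the stated ranges, query a surrogate for the interlayer temperature at the fixed dwell, and keep the combination closest to each target. The surrogate here is a simple analytic stand-in, not the paper's physics-informed neural network.

```python
# Sampling-based inverse thermal design: sample (power, speed), query a surrogate
# for the interlayer temperature, keep the combination closest to the target.
import numpy as np

def surrogate_interlayer_temp(power_kw, speed_m_s):
    """Placeholder surrogate: temperature rises with heat input per unit length."""
    heat_input = power_kw * 1000.0 / speed_m_s          # J/m
    return 25.0 + 4.5e-4 * heat_input                   # degrees C (illustrative only)

rng = np.random.default_rng(0)
power = rng.uniform(1.5, 3.0, 20000)                    # kW, range from the abstract
speed = rng.uniform(0.005, 0.015, 20000)                # m/s, range from the abstract

for target in (100.0, 130.0, 160.0):                    # target interlayer temperatures
    temps = surrogate_interlayer_temp(power, speed)
    best = np.argmin(np.abs(temps - target))
    print(f"target {target:.0f} C -> power {power[best]:.2f} kW, "
          f"speed {speed[best] * 1000:.1f} mm/s, predicted {temps[best]:.1f} C")
```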

22 pages, 6391 KB  
Article
A Multimodal Machine Learning Framework for Optimizing Coated Cutting Tool Performance in CNC Turning Operations
by Paschalis Charalampous
Machines 2026, 14(2), 161; https://doi.org/10.3390/machines14020161 - 1 Feb 2026
Abstract
The present study introduces a comprehensive machine-learning framework for modeling, interpretation and optimization of the CNC turning procedure employing coated cutting inserts. The primary novelty of this work lies in the integrated pipeline that leverages a multimodal experimental dataset in order to simultaneously model surface roughness and residual stresses, as well as to interpret these predictions within a unified optimization scheme. Particularly, a deep learning model was developed incorporating a convolutional encoder for analyzing time-series signals and a static encoder for the investigated machining parameters. This fused representation enabled accurate multi-task predictions, capturing the thermo-mechanical interactions that govern surface integrity. Additionally, to ensure interpretability, a surrogate meta-model based on the deep model’s predictions was established and evaluated via Shapley Additive Explanations. This analysis quantified the relative influence of each cutting parameter, linking data-driven insights to contact-mechanical principles. Furthermore, a multi-objective optimization scheme was implemented to derive Pareto optimal trade-offs among the examined parameters that could enhance the machining efficiency. Overall, the integration of deep learning, interpretable modeling and optimization established a coherent framework for data-driven decision making in turning, highlighting the importance of model transparency in advancing intelligent manufacturing systems. Full article
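
The fused architecture described above, a convolutional encoder for time-series signals plus a static encoder for machining parameters feeding two task heads, might look roughly like the PyTorch sketch below; channel counts, sequence length, and the dummy batch are placeholders.

```python
# Two-branch multimodal model: 1-D convolutional encoder for machining signals,
# an MLP for static cutting parameters, and two regression heads (roughness, stress).
import torch
import torch.nn as nn

class TurningModel(nn.Module):
    def __init__(self, n_signal_channels=3, n_params=4):
        super().__init__()
        self.signal_encoder = nn.Sequential(
            nn.Conv1d(n_signal_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())            # -> (batch, 32)
        self.param_encoder = nn.Sequential(
            nn.Linear(n_params, 16), nn.ReLU())               # -> (batch, 16)
        self.roughness_head = nn.Linear(32 + 16, 1)
        self.stress_head = nn.Linear(32 + 16, 1)

    def forward(self, signals, params):                       # signals: (batch, channels, time)
        fused = torch.cat([self.signal_encoder(signals),
                           self.param_encoder(params)], dim=1)
        return self.roughness_head(fused), self.stress_head(fused)

model = TurningModel()
ra, sigma = model(torch.randn(8, 3, 2048), torch.randn(8, 4))
print(ra.shape, sigma.shape)                                  # torch.Size([8, 1]) twice
```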

17 pages, 3309 KB  
Article
Semantic Segmentation for Walkability Assessment in Southeast Asian Streetscapes
by Yunkyung Choi, Darren Ho Di Xiang and Samuel Chng
Sustainability 2026, 18(3), 1355; https://doi.org/10.3390/su18031355 - 29 Jan 2026
Abstract
Walkable urban environments are increasingly recognized as essential for sustainable mobility, public health, and social well-being. While macro-scale indicators of walkability are widely used, growing evidence highlights the importance of street-level physical conditions experienced at eye level. Advances in computer vision and street view imagery (SVI) offer new opportunities to quantify such streetscape characteristics, yet the applicability of existing semantic segmentation models in developing urban contexts remains underexplored. This study evaluates the suitability of five state-of-the-art semantic segmentation models for streetscape analysis using crowdsourced SVI from Phnom Penh, Cambodia. Through a comparative analysis, Oneformer was identified as the most suitable semantic segmentation model, uniquely successful in identifying street vendors through surrogate semantic class (base) and street furniture. A rigorous quantitative validation using manually annotated images confirmed the model’s reliability, achieving an mIoU of 65.7% within the complex urban fabric of Phnom Penh. This performance stems from OneFormer’s unified task-conditioned framework, which integrates semantic, instance, and panoptic information within a single query. Such an architecture ensures enhanced boundary stability and semantic coherence by consolidating visual noise into meaningful units, making it particularly robust for processing the irregular street elements typical of Southeast Asian cities. Applying the selected model revealed pronounced spatial variation in streetscape composition across three neighborhoods, reflecting distinct development stages and levels of informality. These findings suggest that carefully selected pretrained models can yield analytically useful representations of streetscape conditions in data-constrained settings, supporting more context-sensitive and inclusive urban analysis in rapidly developing cities. Full article
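
The reported mIoU of 65.7% is the standard mean intersection-over-union between predicted and manually annotated label maps; a short sketch of that computation on random placeholder label maps follows.

```python
# Mean IoU: per-class intersection over union between predicted and annotated
# label maps, averaged over classes present. Label maps here are random placeholders.
import numpy as np

def mean_iou(pred, gt, n_classes):
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

rng = np.random.default_rng(0)
gt = rng.integers(0, 20, size=(512, 512))          # stand-in annotated segmentation
pred = np.where(rng.random((512, 512)) < 0.7, gt,  # ~70% of pixels predicted correctly
                rng.integers(0, 20, size=(512, 512)))
print("mIoU:", round(mean_iou(pred, gt, 20), 3))
```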

24 pages, 873 KB  
Article
Multi-Scale Digital Twin Framework with Physics-Informed Neural Networks for Real-Time Optimization and Predictive Control of Amine-Based Carbon Capture: Development, Experimental Validation, and Techno-Economic Assessment
by Mansour Almuwallad
Processes 2026, 14(3), 462; https://doi.org/10.3390/pr14030462 - 28 Jan 2026
Abstract
Carbon capture and storage (CCS) is essential for achieving net-zero emissions, yet amine-based capture systems face significant challenges including high energy penalties (20–30% of power plant output) and operational costs ($50–120/tonne CO2). This study develops and validates a novel multi-scale Digital Twin (DT) framework integrating Physics-Informed Neural Networks (PINNs) to address these challenges through real-time optimization. The framework combines molecular dynamics, process simulation, computational fluid dynamics, and deep learning to enable real-time predictive control. A key innovation is the sequential training algorithm with domain decomposition, specifically designed to handle the nonlinear transport equations governing CO2 absorption with enhanced convergence properties. The algorithm achieves prediction errors below 1% for key process variables (R2 > 0.98) when validated against CFD simulations across 500 test cases. Experimental validation against pilot-scale absorber data (12 m packing, 30 wt% MEA) confirms good agreement with measured profiles, including temperature (RMSE = 1.2 K), CO2 loading (RMSE = 0.015 mol/mol), and capture efficiency (RMSE = 0.6%). The trained surrogate enables computational speedups of up to four orders of magnitude, supporting real-time inference with response times below 100 ms suitable for closed-loop control. Under the conditions studied, the framework demonstrates reboiler duty reductions of 18.5% and operational cost reductions of approximately 31%. Sensitivity analysis identifies liquid-to-gas ratio and MEA concentration as the most influential parameters, with mechanistic explanations linking these to mass transfer enhancement and reaction kinetics. Techno-economic assessment indicates favorable investment metrics, though results depend on site-specific factors. The framework architecture is designed for extensibility to alternative solvent systems, with future work planned for industrial-scale validation and uncertainty quantification through Bayesian approaches. Full article
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)
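
A minimal physics-informed loss of the kind underpinning the framework above can be sketched with a 1-D advection–diffusion–reaction equation standing in for the CO2 transport equations; the paper's sequential training with domain decomposition is not reproduced, and all coefficients are placeholders.

```python
# Minimal PINN-loss sketch: a small network c(x, t) is penalized on the residual
# of a 1-D advection-diffusion-reaction equation. Physics-only loss, no measured
# data; coefficients and collocation sampling are placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
u_vel, diff, k_rxn = 0.1, 1e-3, 0.5           # placeholder velocity, diffusivity, rate

def pde_residual(x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    c = net(torch.cat([x, t], dim=1))
    grad = lambda y, v: torch.autograd.grad(y, v, grad_outputs=torch.ones_like(y),
                                            create_graph=True)[0]
    c_t, c_x = grad(c, t), grad(c, x)
    c_xx = grad(c_x, x)
    return c_t + u_vel * c_x - diff * c_xx + k_rxn * c   # vanishes for an exact solution

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    x = torch.rand(256, 1); t = torch.rand(256, 1)       # random collocation points
    loss = pde_residual(x, t).pow(2).mean()
    # boundary and initial conditions would add further penalty terms here
    opt.zero_grad(); loss.backward(); opt.step()
print("final residual loss:", float(loss))
```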
