Search Results (121)

Search Parameters:
Keywords = deterministic and stochastic learning

32 pages, 31110 KB  
Article
Explicit Features Versus Implicit Spatial Relations in Geomorphometry: A Comparative Analysis for DEM Error Correction in Complex Geomorphological Regions
by Shuyu Zhou, Mingli Xie, Nengpan Ju, Changyun Feng, Qinghua Lin and Zihao Shu
Sensors 2026, 26(6), 1995; https://doi.org/10.3390/s26061995 - 23 Mar 2026
Abstract
Global Digital Elevation Models (DEMs) exhibit systematic biases constrained by acquisition geometry and surface penetration. This study aims to evaluate whether the increasing complexity of geometric deep learning (e.g., Graph Neural Networks, GNNs) is justified by performance gains over established feature engineering paradigms (e.g., XGBoost) under the constraints of sparse altimetry supervision. We established a rigorous comparative framework across four mainstream products—ALOS World 3D, Copernicus DEM, SRTM GL1, and TanDEM-X—using Sichuan Province, China, as a representative natural laboratory. Our results reveal a fundamental scale mismatch (where the ~485 m average spacing of sampled altimetry footprints dwarfs the local terrain resolution): despite their topological complexity, Hybrid GNN models fail to establish a statistically significant accuracy advantage over the systematically optimized XGBoost baseline, demonstrating RMSE parity. Mechanistically, we uncover a critical divergence in decision logic: XGBoost relies on a stable “Physics Skeleton” consistently dominated by deterministic features (terrain aspect and vegetation density), whereas GNNs exhibit severe “Attribution Stochasticity” (ρ = 0.63–0.77). The GNN component acts as a residual-dependent latent feature learner rather than discovering universal topological laws. We conclude that for geospatial regression tasks relying on sparse supervision, “Physics Trumps Geometry.” A “Feature-First” paradigm that prioritizes robust, domain-knowledge-based physical descriptors outweighs the indeterminate complexity of “Black Box” architectures. This study underscores the imperative of prioritizing explanatory stability over marginal accuracy gains to foster trusted Geo-AI. Full article
(This article belongs to the Section Remote Sensors)
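The "Attribution Stochasticity" the abstract quantifies with a rank correlation ρ can be illustrated with a small sketch (not the authors' code): compute the Spearman correlation between feature-importance vectors obtained from independently trained models, where low ρ across runs means the model's explanation is unstable. The importance values below are invented for illustration.

```python
# Hedged sketch: attribution stability as Spearman rank correlation of
# feature-importance vectors across independent training runs.

def rankdata(values):
    """Assign 1-based ranks; ties broken by position (adequate for a sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

def spearman(a, b):
    """Spearman rho between two equal-length importance vectors (no ties)."""
    ra, rb = rankdata(a), rankdata(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

# A stable learner ranks features identically across two runs:
stable_run1 = [0.40, 0.25, 0.20, 0.10, 0.05]
stable_run2 = [0.38, 0.27, 0.19, 0.11, 0.05]
print(spearman(stable_run1, stable_run2))  # 1.0 (identical ranking)
```

A ρ persistently below 1 across retrainings, as reported for the GNNs, indicates that the model's decision logic is not reproducible run to run.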

28 pages, 12029 KB  
Article
Investigation of Anticipation in Motor Control Using Kinematic and Kinetic Metrics in a Leader-Follower Task
by İrem Eşme, Ali Emre Turgut and Kutluk Bilge Arıkan
Appl. Sci. 2026, 16(6), 2840; https://doi.org/10.3390/app16062840 - 16 Mar 2026
Abstract
Anticipation allows individuals to prepare actions by predicting upcoming events, yet its influence on motor learning and its practical relevance for rehabilitation remain unclear. This study investigates how anticipation mechanisms shape motor learning and skill acquisition in a virtual leader–follower task and explores their potential for adaptive training. Forty-nine healthy adults performed a joystick-controlled tracking task in virtual reality, following a dynamic leader that was always visible (Control), became invisible at regular intervals (Deterministic Anticipation), or disappeared randomly (Stochastic Anticipation) to elicit anticipatory behavior. Kinematic and kinetic metrics and time-series analysis were used to evaluate synchrony, smoothness, and coordination. Performance improved from baseline to retention, with no distinct differences in final performance between the groups. However, slope-based analyses found that anticipation-based training accelerated learning, especially in the novice subgroup (baseline score < 35), with marked improvements in metrics such as score pause duration, temporal lag, and spatial error. Although participants reached similar final performance levels across protocols, the rate and pattern of learning differed across training protocols. Anticipation accelerates early-stage improvements, with the strongest effects observed in novice participants. The paradigm provides a high-resolution framework for adaptive motor training and assessment. Full article
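The "slope-based analyses" mentioned above can be sketched as fitting a least-squares line to per-trial scores and comparing slopes between training protocols; a steeper slope means faster learning even when final scores converge. The scores below are hypothetical, not data from the study.

```python
# Hedged sketch: compare learning rates as OLS slopes of score vs. trial.

def ols_slope(y):
    """Least-squares slope of y against trial index 0..n-1."""
    n = len(y)
    mx = (n - 1) / 2
    my = sum(y) / n
    num = sum((x - mx) * (v - my) for x, v in zip(range(n), y))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

anticipation_group = [30, 38, 45, 51, 56]   # hypothetical per-trial scores
control_group      = [30, 33, 36, 39, 42]

assert ols_slope(anticipation_group) > ols_slope(control_group)
print(ols_slope(control_group))  # 3.0 points per trial
```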

28 pages, 1600 KB  
Article
A Data-Driven Deep Reinforcement Learning Framework for Real-Time Economic Dispatch of Microgrids Under Renewable Uncertainty
by Biao Dong, Shijie Cui and Xiaohui Wang
Energies 2026, 19(6), 1481; https://doi.org/10.3390/en19061481 - 16 Mar 2026
Abstract
The real-time economic dispatch of microgrids (MGs) is challenged by the high penetration of renewable energy and the resulting source–load uncertainties. Conventional optimization-based scheduling methods rely heavily on accurate probabilistic models and often suffer from high computational burdens, which limits their real-time applicability. To address these challenges, a data-driven deep reinforcement learning (DRL) framework is proposed for real-time microgrid energy management. The MG dispatch problem is formulated as a Markov decision process (MDP), and a Deep Deterministic Policy Gradient (DDPG) algorithm is adopted to efficiently handle the high-dimensional continuous action space of distributed generators and energy storage systems (ESS). The system state incorporates renewable generation, load demand, electricity price, and ESS operational conditions, while the reward function is designed as the negative of the operational cost with penalty terms for constraint violations. A continuous-action policy network is developed to directly generate control commands without action discretization, enabling smooth and flexible scheduling. Simulation studies are conducted on an extended European low-voltage microgrid test system under both deterministic and stochastic operating scenarios. The proposed approach is compared with model-based methods (MPC and MINLP) and representative DRL algorithms (SAC and PPO). The results show that the proposed DDPG-based strategy achieves competitive economic performance, fast convergence, and good adaptability to different initial ESS conditions. In stochastic environments, the proposed method maintains operating costs close to the optimal MINLP reference while significantly reducing the online computational time. These findings demonstrate that the proposed framework provides an efficient and practical solution for the real-time economic dispatch of microgrids with high renewable penetration. Full article
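The reward design the abstract describes, the negative of operational cost with penalty terms for constraint violations, can be sketched as follows. This is an assumed minimal form, not the paper's implementation; the state-of-charge bounds and penalty weight are illustrative.

```python
# Hedged sketch of a dispatch reward: -(operating cost) minus a penalty
# for violating energy-storage state-of-charge (SOC) limits.

def dispatch_reward(gen_cost, grid_cost, soc, soc_min=0.2, soc_max=0.9,
                    penalty_weight=100.0):
    """Reward = -(operational cost) - penalty for SOC constraint violation."""
    violation = max(0.0, soc_min - soc) + max(0.0, soc - soc_max)
    return -(gen_cost + grid_cost) - penalty_weight * violation

# Feasible state: reward is just the negative cost.
print(dispatch_reward(gen_cost=12.0, grid_cost=3.0, soc=0.5))       # -15.0
# Infeasible state (SOC below the floor) is penalized further.
print(round(dispatch_reward(gen_cost=12.0, grid_cost=3.0, soc=0.1), 6))  # -25.0
```

A DDPG actor would output continuous setpoints (generator power, ESS charge rate) and be trained to maximize this signal, which is why no action discretization is needed.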

23 pages, 1158 KB  
Article
A Hybrid Model Reduction Method for Dual-Continuum Model with Random Inputs
by Lingling Ma
Computation 2026, 14(3), 69; https://doi.org/10.3390/computation14030069 - 13 Mar 2026
Abstract
In this paper, a hybrid model reduction method for solving flows in fractured media is proposed. The approach integrates the Generalized Multiscale Finite Element Method (GMsFEM) with a novel variable-separation (VS) technique. Compared with many widely used variable-separation methods, the proposed model reduction method shares their merits but has lower computation complexity and higher efficiency. Within this framework, we can get the low-rank variable-separation expansion of dual-continuum model solutions in a systematic enrichment manner. No iteration is performed at each enrichment step. The expansion is constructed using two sets of basis functions: stochastic basis functions and deterministic physical basis functions, both derived from offline, model-oriented computations. To efficiently construct the stochastic basis functions, the original model is used to learn stochastic information. Meanwhile, the deterministic physical basis functions are trained using solutions obtained by applying an uncoupled GMsFEM to the dual-continuum system at a select number of optimal samples. Once these bases are established, the online evaluation for each new random sample becomes highly efficient, allowing for the computation of a large number of stochastic realizations at minimal cost. To demonstrate the performance of the proposed method, two numerical examples for dual-continuum models with random inputs are presented. The results confirm that the hybrid model reduction method is both efficient and achieves high approximation accuracy. Full article
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
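The low-rank variable-separation expansion described above, deterministic physical basis functions paired with stochastic coefficients, can be illustrated with a toy example. Here an SVD of a snapshot matrix stands in for the paper's enrichment procedure; the rank-2 test field is invented.

```python
# Hedged sketch: separate u(x, xi) into spatial modes phi_k(x) and
# stochastic coefficients zeta_k(xi) by factoring a snapshot matrix.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
xis = rng.uniform(0.5, 2.0, size=20)            # random input samples
# Rank-2 ground truth: u(x, xi) = xi*sin(pi x) + xi^2 * x
snapshots = np.array([xi * np.sin(np.pi * x) + xi**2 * x for xi in xis]).T

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2
phi = U[:, :r]                                   # deterministic spatial basis
zeta = np.diag(s[:r]) @ Vt[:r, :]                # stochastic coefficients
approx = phi @ zeta

rel_err = np.linalg.norm(approx - snapshots) / np.linalg.norm(snapshots)
print(rel_err < 1e-10)  # True: a rank-2 field is captured exactly
```

Once such a basis is fixed offline, evaluating a new random sample reduces to computing its coefficients, which is what makes the online stage cheap.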

21 pages, 4680 KB  
Article
Hierarchical Thermocline-Aware Navigation for Underwater Gliders via Multi-Objective Path Planning and Reinforcement Learning
by Zizhao Song, Mingsong Bao and Tingting Guo
J. Mar. Sci. Eng. 2026, 14(5), 498; https://doi.org/10.3390/jmse14050498 - 6 Mar 2026
Abstract
Navigation planning and execution for underwater gliders operating in thermocline-affected environments is challenging due to the coupled influence of energy constraints, spatially distributed environmental disturbances, and limited control authority. Spatially varying thermocline structures act as structured environmental disturbances that degrade motion efficiency and tracking accuracy, and therefore must be explicitly considered in both path planning and control design. This paper proposes a hierarchical control-oriented decision framework for underwater glider navigation in thermocline regions. At the planning layer, a thermocline-aware multi-objective optimization problem is formulated to regulate the trade-off between navigation efficiency and cumulative environmental disturbance, characterized by total path length and cumulative thermocline exposure, respectively. A multi-objective artificial bee colony (MOABC) algorithm is employed to generate a set of Pareto-optimal reference trajectories that explicitly reveal this trade-off. At the execution layer, pitch angle regulation is formulated as a stochastic tracking control problem under environmental uncertainty. A Markov Decision Process (MDP) is constructed to model the coupled effects of pitch control on energy consumption and trajectory deviation, and a deep deterministic policy gradient (DDPG) algorithm is adopted to synthesize a feedback control policy for adaptive pitch regulation during path execution. Simulation results demonstrate that the proposed framework effectively reduces cumulative thermocline exposure and overall energy consumption while maintaining improved trajectory consistency compared with representative benchmark methods. These results indicate that integrating multi-objective planning with learning-based control provides an effective control-oriented solution for constrained underwater glider navigation in thermally stratified environments. Full article
(This article belongs to the Section Ocean Engineering)
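The planning layer's output is a set of Pareto-optimal trajectories over two objectives to minimize: total path length and cumulative thermocline exposure. A minimal dominance filter (a generic sketch, not the MOABC algorithm itself) illustrates what "Pareto-optimal" means here; the candidate values are invented.

```python
# Hedged sketch: keep only non-dominated (length, exposure) candidates,
# both objectives minimized.

def pareto_front(points):
    """Return points not dominated by any other point (assumes no duplicates)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# (path length, cumulative thermocline exposure) for five candidates:
candidates = [(10.0, 5.0), (12.0, 3.0), (11.0, 6.0), (15.0, 2.0), (16.0, 4.0)]
print(pareto_front(candidates))  # [(10.0, 5.0), (12.0, 3.0), (15.0, 2.0)]
```

Each surviving point represents a different, equally defensible trade-off between efficiency and disturbance, from which the execution layer picks a reference.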

18 pages, 1714 KB  
Article
A Novel Transformer Architecture for Scalable Perovskite Thin-Film Detection
by Mengke Li, Hongling Li, Yuyu Shi and Yanfang Meng
Micromachines 2026, 17(3), 314; https://doi.org/10.3390/mi17030314 - 28 Feb 2026
Abstract
The further development of scalable fabrication for perovskite solar cells has been considerably constrained by strong process variability and the lack of a reliable real-time predictive mechanism during the thin-film formation process. Existing machine learning-based methods are incapable of capturing the inherent multi-stage kinetic characteristics and uncertainties of the perovskite crystallization process, as they rely on deterministic point prediction models and flatten time-series signals into static features, which necessitates more advanced modeling strategies. To address these challenges, an in situ process monitoring and predictive modeling framework based on a lightweight probabilistic Transformer is proposed for the scalable preparation of perovskite thin films. The strategically designed inputs, consisting of time-resolved photoluminescence (PL) and diffuse reflectance imaging signals acquired during the vacuum quenching process, enable the model to directly learn the conditional probability distribution of the final device performance metrics. Rather than producing a single predicted value, this method enables the explicit quantification of prediction uncertainty, providing statistical support for uncertainty-aware process assessment. Leveraging its advantages over feed-forward neural networks and traditional tree-based machine learning methods, the proposed Transformer architecture effectively captures the staged and non-stationary kinetic features of thin-film formation. Consequently, it exhibits higher robustness and superior uncertainty calibration capability during the early-stage prediction phase. The results demonstrate that the probabilistic Transformer-based modeling paradigm provides a viable pathway toward uncertainty-aware, data-driven process evaluation in perovskite manufacturing. This framework extends its application beyond perovskite photovoltaic device fabrication, providing a generalizable modeling strategy for real-time predictive assessment in the preparation of other complex materials governed by irreversible stochastic dynamics. Full article
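One common way (an assumption here, not necessarily the paper's output head) to learn a conditional probability distribution rather than a point value is to have the model emit a mean and a standard deviation trained under the Gaussian negative log-likelihood, so every prediction carries its own uncertainty estimate:

```python
# Hedged sketch: Gaussian NLL loss for a probabilistic (mu, sigma) head.
import math

def gaussian_nll(y, mu, sigma):
    """Per-sample Gaussian negative log-likelihood of target y."""
    return 0.5 * math.log(2 * math.pi * sigma**2) + (y - mu) ** 2 / (2 * sigma**2)

# A confident, accurate prediction scores better (lower NLL) than a
# vague one...
print(gaussian_nll(y=1.0, mu=1.0, sigma=0.1) < gaussian_nll(y=1.0, mu=1.0, sigma=1.0))  # True
# ...but when the prediction is wrong, overconfidence is punished hard.
print(gaussian_nll(y=2.0, mu=1.0, sigma=0.1) > gaussian_nll(y=2.0, mu=1.0, sigma=1.0))  # True
```

This asymmetry is exactly what "uncertainty calibration" measures: a well-calibrated model widens sigma when its mean is likely to be off.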

35 pages, 1715 KB  
Review
Optimization Strategies for Large-Scale PV Integration in Smart Distribution Networks: A Review
by Stefania Conti, Antonino Laudani, Santi A. Rizzo, Nunzio Salerno, Gian Giuseppe Soma, Giuseppe M. Tina and Cristina Ventura
Energies 2026, 19(5), 1191; https://doi.org/10.3390/en19051191 - 27 Feb 2026
Abstract
The large-scale integration of photovoltaic systems into modern distribution networks requires advanced forecasting and optimisation tools to address variability, uncertainty, and increasingly complex operational conditions. This review examines 160 peer-reviewed studies published primarily between 2018 and 2026 and provides a unified, system-level perspective that links photovoltaic power forecasting, photovoltaic optimisation, and energy storage system management within the broader context of Smart Grid operation. The analysis covers forecasting techniques across all temporal horizons, compares deterministic, stochastic, metaheuristic, and hybrid optimisation approaches, and reviews siting, sizing, and operational strategies for both PV units and Energy Storage Systems, including their effects on hosting capacity, reactive power control, and network flexibility. A key contribution of this work is the consolidation of planning- and operation-oriented methods into a coherent framework that clarifies how forecasting accuracy influences Distributed Energy Resources optimisation and system-level performance. The review also highlights emerging trends, such as reinforcement learning for real-time Energy Storage Systems control, surrogate-assisted multi-objective optimisation, data-driven hosting capacity evaluation, and explainable AI for grid transparency, as essential enablers for flexible, resilient, and sustainable distribution networks. Open challenges include uncertainty modelling, real-world validation of optimisation tools, interoperability with flexibility markets, and the development of scalable and adaptive optimisation frameworks for next-generation smart grids. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)

30 pages, 1444 KB  
Review
Uncertainty-Aware Planning of EV Charging Infrastructure and Renewable Integration in Distribution Networks: A Review
by Sasmita Tripathy, Edwin Boima Fahnbulleh, Sriparna Roy Ghatak, Fernando Lopes and Parimal Acharjee
Energies 2026, 19(5), 1131; https://doi.org/10.3390/en19051131 - 24 Feb 2026
Abstract
Transitioning from internal combustion engines to electric vehicles (EVs) is critical for fighting climate change. This requires widespread adoption of Electric Vehicle Charging Stations (EVCSs). Integrating EVCSs and renewable energy sources (RESs) into distribution networks (DNs) is vital for a sustainable transportation system while enhancing power generation in an environmentally friendly manner. This review explores challenges and opportunities of EVCS and RES integration, concentrating on EV charging-demand uncertainty modeling, forecasting algorithms, planning techniques, and the impacts on DN. It discusses forecasting algorithms in terms of learning-based and non-learning-based methods. EVCS planning algorithms are also discussed, involving deterministic and stochastic methods. The technical, environmental, reliability, and economic impacts of EVCS-RES on DNs are discussed. It explores optimization strategies to minimize these impacts, incorporating them as objective functions. Additionally, the survey examines the methods of incorporating EVs and RES in DN, optimizing EVCS allocation while addressing EVCS impacts on voltage regulation, power loss, and network reliability. The importance of energy management systems and advanced forecasting techniques in balancing power fluctuation and improving efficiency is emphasized. Finally, it identifies open problems and future directions for forecasting and optimizing EVCS-RES integration in the networks. These findings are highly relevant for designing resilient and efficient modern power systems that leverage RES and EVCS in the grids. Full article
(This article belongs to the Section F2: Distributed Energy System)

22 pages, 2714 KB  
Article
DeepChance-OPT: A Robust Decision-Making Framework for Dynamic Grasping in Precision Assembly
by Tong Wei and Haibo Jin
Information 2026, 17(2), 187; https://doi.org/10.3390/info17020187 - 12 Feb 2026
Abstract
Achieving safe and efficient sequential decision-making in dynamic and uncertain environments is a core challenge in intelligent manufacturing and robotic systems. During operation, systems are often subject to coupled multi-source uncertainties—such as stochastic disturbances, model mismatch, and environmental shifts—rendering traditional approaches based on deterministic models or post hoc safety verification incapable of simultaneously ensuring performance and safety. In particular, the non-differentiability of constraint satisfaction probabilities in chance-constrained decision-making severely impedes its integration with data-driven learning paradigms. To address these challenges, this paper proposes DeepChance-OPT (Deep Chance Optimization), an end-to-end differentiable disturbance-rejection decision framework tailored for dynamic grasping tasks in precision assembly. The framework first encodes historical observations and control sequences into a low-dimensional latent representation to extract key dynamic features relevant to decision-making. Subsequently, it models the temporal propagation of uncertainty in this latent space to predict the probability distribution of future states. Furthermore, via a differentiable chance-constrained mechanism, the risk of constraint violation is transformed into a continuous and differentiable penalty term, which is jointly optimized with the task performance objective to achieve synergistic improvement in both safety and efficiency. The entire framework is trained and executed under a unified end-to-end architecture, enabling closed-loop online sequential decision-making. Experiments on a precision silicon carbide wafer grasping task demonstrate that DeepChance-OPT achieves real-time performance (average decision latency < 4 ms) while reducing the constraint violation rate to 2.3%, significantly outperforming both traditional optimization and purely data-driven baselines. Under composite uncertainty scenarios—including parameter perturbations, measurement noise, and external disturbances—the success rate remains stably above 87.5%, fully validating the effectiveness of the proposed framework for robust, safe, and efficient decision-making in complex dynamic environments. This work provides a new paradigm for intelligent disturbance-rejection decision-making in high-precision manufacturing, offering both theoretical rigor and engineering practicality. Full article
(This article belongs to the Special Issue Data-Driven Decision-Making in Intelligent Systems)
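One standard way (assumed here, not taken from the paper) to make a chance constraint differentiable is to model the constraint margin g as Gaussian: the violation probability P(g < 0) = Φ(−μ/σ) is then a smooth function of the predicted mean and standard deviation and can serve directly as a penalty term.

```python
# Hedged sketch: smooth violation-probability penalty via the normal CDF.
import math

def violation_probability(mu, sigma):
    """P(g < 0) for margin g ~ N(mu, sigma^2), via the standard normal CDF."""
    return 0.5 * math.erfc(mu / (sigma * math.sqrt(2)))

def penalized_objective(cost, mu, sigma, weight=10.0):
    """Task cost plus a differentiable chance-constraint penalty."""
    return cost + weight * violation_probability(mu, sigma)

# A large positive margin makes violation nearly impossible...
print(violation_probability(mu=3.0, sigma=1.0) < 0.01)     # True
# ...while a zero margin yields a 50% violation risk.
print(round(violation_probability(mu=0.0, sigma=1.0), 2))  # 0.5
```

Because the penalty varies smoothly with (μ, σ), it can be minimized jointly with the performance objective by gradient descent, which is what enables end-to-end training.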

29 pages, 11326 KB  
Article
Constrained Soft Actor–Critic for Joint Computation Offloading and Resource Allocation in UAV-Assisted Edge Computing
by Nawazish Muhammad Alvi, Waqas Muhammad Alvi, Xiaolong Zhou, Jun Li and Yifei Wei
Sensors 2026, 26(4), 1149; https://doi.org/10.3390/s26041149 - 10 Feb 2026
Abstract
Unmanned Aerial Vehicle (UAV)-assisted edge computing supports latency-sensitive applications by offloading computational tasks to ground-based servers. However, determining optimal resource allocation under strict latency constraints and stochastic channel conditions remains challenging. This paper addresses the joint computation partitioning and power allocation problem for UAV-assisted edge computing systems. We formulate the problem as a Constrained Markov Decision Process (CMDP) that explicitly models latency constraints, rather than relying on implicit reward shaping. To solve this CMDP, we propose Constrained Soft Actor–Critic (C-SAC), a deep reinforcement learning algorithm that combines maximum-entropy policy optimization with Lagrangian dual methods. C-SAC employs a dedicated constraint critic network to estimate long-term constraint violations and an adaptive Lagrange multiplier that automatically balances energy efficiency against latency satisfaction without manual tuning. Extensive experiments demonstrate that C-SAC achieves an 18.9% constraint violation rate, a 60.6-percentage-point improvement over unconstrained Soft Actor–Critic (79.5%) and a 22.4-percentage-point improvement over deterministic TD3-Lagrangian (41.3%). The learned policies exhibit strong channel-adaptive behavior with a correlation coefficient of 0.894 between the local computation ratio and channel quality, despite the absence of explicit channel modeling in the reward function. Ablation studies confirm that both adaptive mechanisms are essential, while sensitivity analyses show that C-SAC maintains robust performance with violation rates varying by less than 2 percentage points even as channel variability triples. These results establish constrained reinforcement learning as an effective approach for reliable UAV edge computing under stringent quality-of-service requirements. Full article
(This article belongs to the Special Issue Communications and Networking Based on Artificial Intelligence)
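The adaptive Lagrange multiplier described above typically follows a projected dual-ascent rule (an assumed generic form, not the authors' exact update): the multiplier grows when the estimated constraint cost exceeds its budget and decays toward zero, never below it, when the policy is safely within budget.

```python
# Hedged sketch: projected gradient ascent on the dual variable lambda.

def update_multiplier(lmbda, estimated_violation_cost, budget, lr=0.1):
    """lambda <- max(0, lambda + lr * (constraint cost - budget))."""
    return max(0.0, lmbda + lr * (estimated_violation_cost - budget))

lmbda = 1.0
# Constraint over budget -> multiplier increases, penalizing violations more.
lmbda = update_multiplier(lmbda, estimated_violation_cost=0.5, budget=0.2)
print(round(lmbda, 2))  # 1.03
# Constraint under budget -> multiplier decays.
lmbda = update_multiplier(lmbda, estimated_violation_cost=0.0, budget=0.2)
print(round(lmbda, 2))  # 1.01
```

This closed loop is what removes the manual penalty-weight tuning the abstract mentions: the trade-off between energy efficiency and latency satisfaction is found automatically.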

18 pages, 2627 KB  
Article
Application of Machine Learning Techniques in the Prediction of Surface Geometry
by Aneta Gądek-Moszczak, Dominik Nowakowski and Norbert Radek
Materials 2026, 19(4), 661; https://doi.org/10.3390/ma19040661 - 9 Feb 2026
Abstract
The article presents an attempt by the authors to generate a digital representation of the analyzed surface layer of WC-Co-Al2O3 coating deposited by the ESD method. The WC-Co-Al2O3 surface layer is superhard and abrasion-resistant, significantly increasing the exploitation time of working elements. The authors aim to develop a method for generating series of digital surfaces with similar geometry parameters based on data collected through profilometric analysis. The study therefore turns to the advanced integration of machine learning (ML) techniques with classical statistical approaches for modeling and predicting stochastic processes. While traditional models such as ARMA/ARIMA and hidden Markov models (HMMs) offer mathematical rigor, they often impose assumptions of stationarity and linearity, which limits their application to complex, noisy data. This paper proposes a model for surface geometry generation based on experimental data that combines recurrent neural networks (RNNs) and Monte Carlo simulation. Additionally, the study reviews emerging methods, including generative adversarial networks (GANs) for stochastic simulation and expectation-maximization (EM) algorithms for parameter estimation. An empirical case study on WC-Co-Al2O3 surface geometries demonstrates the effectiveness of ML–stochastic hybrids in capturing both deterministic structures and random fluctuations. The findings underscore not only the benefits but also the limitations of such models, including high computational demands and interpretability challenges, while proposing future research directions toward physics-informed ML and explainable AI. Full article
(This article belongs to the Special Issue Advances in Surface Engineering: Functional Films and Coatings)
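The Monte Carlo side of such a hybrid can be sketched in miniature: generate many stochastic surface profiles (a simple AR(1) process stands in here for the paper's RNN generator, an assumption for illustration) and evaluate a roughness parameter such as Ra against the profilometric target.

```python
# Hedged sketch: Monte Carlo generation of stochastic height profiles
# and arithmetic mean roughness (Ra) evaluation.
import random

def generate_profile(n=500, phi=0.8, noise=0.1, seed=0):
    """AR(1) height profile: z[t] = phi*z[t-1] + noise*eps, eps ~ N(0,1)."""
    rng = random.Random(seed)
    z, out = 0.0, []
    for _ in range(n):
        z = phi * z + noise * rng.gauss(0, 1)
        out.append(z)
    return out

def ra(profile):
    """Arithmetic mean roughness: mean absolute deviation from the mean line."""
    m = sum(profile) / len(profile)
    return sum(abs(v - m) for v in profile) / len(profile)

profiles = [generate_profile(seed=s) for s in range(20)]
ras = [ra(p) for p in profiles]
print(all(r > 0 for r in ras))  # True: every realization has nonzero roughness
```

In the hybrid scheme, the generator's parameters would be fitted so the distribution of Ra (and other geometry parameters) over realizations matches the measured surfaces.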

23 pages, 4747 KB  
Article
Neural Network Regression Structural Hyperparameter Selection Using Reconstruction Error Minimization (REM)
by Soosan Beheshti, Mahdi Shamsi, Miaosen Zhou, Yashar Naderahmadian and Younes Sadat-Nejad
Electronics 2026, 15(4), 723; https://doi.org/10.3390/electronics15040723 - 8 Feb 2026
Abstract
Structural hyperparameter selection (HPS) in neural network (NN) regression faces two critical, computationally expensive barriers: the mandatory splitting of datasets for validation, which significantly impairs sample efficiency, and the inability of conventional metrics (like Data MSE) to decouple true modeling error from detrimental output noise, leading to suboptimal architectural complexity and overfitting. To resolve these systemic limitations, we propose the Reconstruction Error Minimization for Hyperparameter Selection (REM-HPS) framework, a novel, non-Bayesian approach grounded in statistical learning theory. REM-HPS fundamentally shifts the optimization objective by minimizing the Reconstruction Mean Squared Error (MSE), which precisely isolates and measures the model’s intrinsic ability to recover the underlying noise-free function. Since this target error is typically inaccessible, the framework employs the observable Data MSE (validation error) to construct a reliable, probabilistic estimate, yielding a deterministic and noise-aware selection criterion. REM-HPS utilizes a deterministic structural hyperparameter selection criterion that removes randomness due to validation data splitting, while remaining compatible with standard stochastic training procedures. This strategy allows for the use of the entire dataset for training, eliminating the need for explicit data splitting or the introduction of tuning-intensive regularization hyperparameters. Rigorous empirical validation demonstrates that REM-HPS consistently selects significantly more compact architectures (minimal complexity) while achieving superior generalizability and estimation accuracy, particularly across varied Signal-to-Noise Ratios and data regimes. By providing an efficient and optimal selection metric, REM-HPS offers a transformative, resource-efficient alternative to structural HPS in modern data-driven systems. Full article
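The decomposition motivating this framework can be checked numerically: on noisy targets y = f(x) + e, the observable Data MSE of a predictor approaches its Reconstruction MSE (error against the noise-free f) plus the irreducible noise variance. The simulation below is an illustrative sketch under that assumption, not the REM-HPS estimator itself.

```python
# Hedged sketch: Data MSE ~= Reconstruction MSE + noise variance.
import random

rng = random.Random(1)
n, noise_std = 200_000, 0.5
f = [float(i % 10) for i in range(n)]          # noise-free target
y = [v + rng.gauss(0, noise_std) for v in f]   # observed noisy target
pred = [v + 0.2 for v in f]                    # biased model: recon MSE = 0.04

data_mse = sum((p - t) ** 2 for p, t in zip(pred, y)) / n
recon_mse = sum((p - t) ** 2 for p, t in zip(pred, f)) / n

print(round(recon_mse, 3))                                 # 0.04
print(abs(data_mse - (recon_mse + noise_std**2)) < 0.01)   # True
```

Because the noise term is common to all candidate architectures, ranking models by an estimate of Reconstruction MSE, rather than raw Data MSE, avoids rewarding architectures that fit the noise.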

39 pages, 2558 KB  
Article
An Enhanced Projection-Iterative-Methods-Based Optimizer for Complex Constrained Engineering Design Problems
by Xuemei Zhu, Han Peng, Haoyu Cai, Yu Liu, Shirong Li and Wei Peng
Computation 2026, 14(2), 45; https://doi.org/10.3390/computation14020045 - 6 Feb 2026
Abstract
This paper proposes an Enhanced Projection-Iterative-Methods-based Optimizer (EPIMO) to overcome the limitations of its predecessor, the Projection-Iterative-Methods-based Optimizer (PIMO), including deterministic parameter decay, insufficient diversity maintenance, and static exploration–exploitation balance. The enhancements incorporate three core strategies: (1) an adaptive decay strategy that introduces stochastic perturbations into the step-size evolution; (2) a mirror opposition-based learning strategy to actively inject structured population diversity; and (3) an adaptive adjustment mechanism for the Lévy flight parameter β to enable phase-sensitive optimization behavior. The effectiveness of EPIMO is validated through a multi-stage experimental framework. Systematic evaluations on the CEC 2017 and CEC 2022 benchmark suites, alongside four classical engineering optimization problems (Himmelblau function, step-cone pulley design, hydrostatic thrust bearing design, and three-bar truss design), demonstrate its comprehensive superiority. The Wilcoxon rank-sum test confirms statistically significant performance improvements over its predecessor (PIMO) and a range of state-of-the-art and classical algorithms. EPIMO exhibits exceptional performance in convergence accuracy, stability, robustness, and constraint-handling capability, establishing it as a highly reliable and efficient metaheuristic optimizer. This research contributes a systematic, adaptive enhancement framework for projection-based metaheuristics, which can be generalized to improve other swarm intelligence systems when facing complex, constrained, and high-dimensional engineering optimization tasks. Full article
(This article belongs to the Section Computational Engineering)
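The three enhancement strategies can be sketched as small helpers; the exact update rules are not given in the abstract, so the linear schedules and noise model below are assumptions chosen only to show the shape of each idea.

```python
import numpy as np

rng = np.random.default_rng(42)

def mirror_opposition(pop: np.ndarray, lb: float, ub: float) -> np.ndarray:
    """Mirror opposition-based learning: reflect candidates across the
    midpoint of the search bounds to inject structured diversity."""
    return lb + ub - pop

def adaptive_decay(step0: float, t: int, t_max: int, noise: float = 0.05) -> float:
    """Step-size decay with a stochastic perturbation (assumed form):
    a linear schedule jittered by Gaussian noise, clipped at zero."""
    base = step0 * (1.0 - t / t_max)
    return max(base + rng.normal(0.0, noise * step0), 0.0)

def adaptive_beta(t: int, t_max: int, beta_lo: float = 1.0, beta_hi: float = 1.9) -> float:
    """Phase-sensitive Lévy exponent (assumed schedule): a small beta early
    favours long exploratory jumps, a large beta late favours refinement."""
    return beta_lo + (beta_hi - beta_lo) * t / t_max
```

A full optimizer would apply `mirror_opposition` to a fraction of the population each generation and draw Lévy steps with the current `adaptive_beta` value.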

21 pages, 6750 KB  
Article
Machine Learning-Based Energy Consumption and Carbon Footprint Forecasting in Urban Rail Transit Systems
by Sertaç Savaş and Kamber Külahcı
Appl. Sci. 2026, 16(3), 1369; https://doi.org/10.3390/app16031369 - 29 Jan 2026
Cited by 1 | Viewed by 328
Abstract
In the fight against global climate change, the transportation sector is of critical importance because it is one of the major causes of total greenhouse gas emissions worldwide. Although urban rail transit systems offer a lower carbon footprint compared to road transportation, accurately forecasting the energy consumption of these systems is vital for sustainable urban planning, energy supply management, and the development of carbon balancing strategies. In this study, forecasting models are designed using five different machine learning (ML) algorithms, and their performances in predicting the energy consumption and carbon footprint of urban rail transit systems are comprehensively compared. For five distribution-center substations, 10 years of monthly energy consumption data and the total carbon footprint data of these substations are used. Support Vector Regression (SVR), Extreme Gradient Boosting (XGBoost), Long Short-Term Memory (LSTM), Adaptive Neuro-Fuzzy Inference System (ANFIS), and Nonlinear Autoregressive Neural Network (NAR-NN) models are developed to forecast these data. Model hyperparameters are optimized using a 20-iteration Random Search algorithm, and the stochastic models are run 10 times with the optimized parameters. Results reveal that the SVR model consistently exhibits the highest forecasting performance across all datasets. For carbon footprint forecasting, the SVR model yields the best results, with an R² of 0.942 and a MAPE of 3.51%. The ensemble method XGBoost demonstrates the second-best performance (R² = 0.648). Accordingly, while deterministic traditional ML models exhibit superior performance, the neural network-based stochastic models, such as LSTM, ANFIS, and NAR-NN, show insufficient generalization capability under limited data conditions. These findings indicate that, in small- and medium-scale time-series forecasting problems, traditional machine learning methods are more effective than neural network-based methods that require large datasets. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
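A minimal sketch of the shared plumbing behind such a comparison: turning a monthly series into a supervised lag-feature problem and computing the two reported metrics (R² and MAPE). The 12-lag window and the placeholder series are assumptions for illustration, not the study's actual configuration.

```python
import numpy as np

def make_lag_features(series: np.ndarray, n_lags: int = 12):
    """Turn a monthly series into a supervised problem: each row holds the
    previous n_lags values, the target is the following month."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

def r2_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: 1 minus residual over total variance."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

series = np.arange(1.0, 121.0)    # placeholder for 10 years of monthly readings
X, y = make_lag_features(series)
```

Any of the five models (SVR, XGBoost, LSTM, ANFIS, NAR-NN) would then be fit on `X`, `y` and scored with `r2_score` and `mape` on a held-out tail of the series.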

26 pages, 1707 KB  
Article
Axiom Generation for Automated Ontology Construction from Texts Through Schema Mapping
by Tsitsi Zengeya, Jean Vincent Fonou-Dombeu and Mandlenkosi Gwetu
Mach. Learn. Knowl. Extr. 2026, 8(2), 29; https://doi.org/10.3390/make8020029 - 26 Jan 2026
Viewed by 764
Abstract
Ontology learning from unstructured text has become a critical task for knowledge-driven applications in Big Data and Artificial Intelligence. While significant advances have been made in the automatic extraction of concepts and relations using neural and Transformer-based models, the generation of formal Description Logic axioms required for constructing logically consistent and computationally tractable ontologies remains largely underexplored. This paper puts forward a novel pipeline for automated axiom generation through schema mapping. Our paper introduces three key innovations: a deterministic mapping framework that guarantees logical consistency (unlike stochastic Large Language Models); guaranteed formal consistency verified by OWL reasoners (unaddressed by prior statistical methods); and a transparent, scalable bridge from neural extractions to symbolic logic, eliminating manual post-processing. Technically, the pipeline builds upon the outputs of a Transformer-based fusion model for joint concept and relation extraction. We then map lexical relational phrases to formal ontological properties through a lemmatization-based schema alignment step. Entity typing and hierarchical induction are then employed to infer class structures, as well as domain and range constraints. Using RDFLib and structured data processing, we transform the extracted triples into both assertional (ABox) and terminological (TBox) axioms expressed in Description Logic. Experimental evaluation on benchmark datasets (Conll04 and NYT) demonstrates the efficacy of the approach, with expert validation showing high acceptance rates (>95%) and reasoners confirming zero inconsistencies. The pipeline thus establishes a reliable, scalable foundation for automated ontology learning, advancing the field from extraction to formally verifiable knowledge base construction. Full article
(This article belongs to the Section Data)
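The schema-mapping step can be sketched as a deterministic function from extracted triples to Description Logic axioms. The phrase-to-property table and entity types below are invented examples (the real pipeline derives them via lemmatization and entity typing), and plain OWL functional-syntax strings stand in for the RDFLib serialization the paper uses.

```python
# Hypothetical lemma-to-property table and entity-type lookup; the actual
# pipeline builds these from lemmatization-based schema alignment.
PHRASE_TO_PROPERTY = {
    "work for": "worksFor",
    "be located in": "locatedIn",
}
ENTITY_TYPE = {"Alice": "Person", "Acme": "Organization", "Oslo": "City"}

def triples_to_axioms(triples):
    """Map extracted (head, relation phrase, tail) triples to TBox axioms
    (domain/range constraints) and ABox assertions, deterministically."""
    tbox, abox = set(), []
    for head, phrase, tail in triples:
        prop = PHRASE_TO_PROPERTY.get(phrase)
        if prop is None:
            continue  # unmapped phrases are dropped rather than guessed
        tbox.add(f"ObjectPropertyDomain(:{prop} :{ENTITY_TYPE[head]})")
        tbox.add(f"ObjectPropertyRange(:{prop} :{ENTITY_TYPE[tail]})")
        abox.append(f"ObjectPropertyAssertion(:{prop} :{head} :{tail})")
    return sorted(tbox), abox

tbox, abox = triples_to_axioms([
    ("Alice", "work for", "Acme"),
    ("Acme", "be located in", "Oslo"),
])
```

Because the mapping is a lookup rather than a generative model, the same triples always yield the same axioms, which is what makes downstream reasoner-based consistency checking meaningful.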
