




Search Results (8,017)

Search Parameters:
Keywords = nonlinear algorithms

28 pages, 12486 KB  
Article
Sustainability-Focused Evaluation of Self-Compacting Concrete: Integrating Explainable Machine Learning and Mix Design Optimization
by Abdulaziz Aldawish and Sivakumar Kulasegaram
Appl. Sci. 2026, 16(3), 1460; https://doi.org/10.3390/app16031460 (registering DOI) - 31 Jan 2026
Abstract
Self-compacting concrete (SCC) offers significant advantages in construction due to its superior workability; however, optimizing SCC mixture design remains challenging because of complex nonlinear material interactions and increasing sustainability requirements. This study proposes an integrated, sustainability-oriented computational framework that combines machine learning (ML), SHapley Additive exPlanations (SHAP), and multi-objective optimization to improve SCC mixture design. A large and heterogeneous publicly available global SCC dataset, originally compiled from 156 independent peer-reviewed studies and further enhanced through a structured three-stage data augmentation strategy, was used to develop robust predictive models for key fresh-state properties. An optimized XGBoost model demonstrated strong predictive accuracy and generalization capability, achieving coefficients of determination of R2=0.835 for slump flow and R2=0.828 for T50 time, with reliable performance on independent industrial SCC datasets. SHAP-based interpretability analysis identified the water-to-binder ratio and superplasticizer dosage as the dominant factors governing fresh-state behavior, providing physically meaningful insights into mixture performance. A cradle-to-gate life cycle assessment was integrated within a multi-objective genetic algorithm to simultaneously minimize embodied CO2 emissions and material costs while satisfying workability constraints. The resulting Pareto-optimal mixtures achieved up to 3.9% reduction in embodied CO2 emissions compared to conventional SCC designs without compromising performance. External validation using independent industrial data confirms the practical reliability and transferability of the proposed framework. Overall, this study presents an interpretable and scalable AI-driven approach for the sustainable optimization of SCC mixture design. Full article
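The multi-objective step in the abstract above reduces, at its core, to non-dominated (Pareto) filtering over candidate mixes. A minimal sketch; the (embodied CO2, material cost) tuples below are hypothetical illustration values, not data from the study:

```python
def dominates(q, p):
    """True if candidate q is at least as good as p in every objective
    (minimization) and strictly better in at least one."""
    return all(qi <= pi for qi, pi in zip(q, p)) and any(qi < pi for qi, pi in zip(q, p))

def pareto_front(points):
    """Keep only the non-dominated candidates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (embodied CO2, material cost) pairs for candidate SCC mixes.
mixes = [(310, 95), (295, 110), (320, 90), (315, 100), (330, 120)]
front = pareto_front(mixes)
```

In a genetic algorithm such as the one the study uses, this filter is applied to each generation; here it simply drops the two candidates that are worse on both objectives.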
17 pages, 2806 KB  
Article
Daily Runoff Forecasting in the Middle Yangtze River Using a Long Short-Term Memory Network Optimized by the Sparrow Search Algorithm
by Qi Zhang, Yaoyao Dong, Chesheng Zhan, Yueling Wang, Hongyan Wang and Hongxia Zou
Water 2026, 18(3), 364; https://doi.org/10.3390/w18030364 (registering DOI) - 31 Jan 2026
Abstract
To address the challenge of predicting runoff processes in the middle reaches of the Yangtze River under the influence of complex river–lake relationships and human disturbances, this paper proposes a coupled model based on the Sparrow Search Algorithm-optimized Long Short-Term Memory neural network (SSA-LSTM) for daily runoff forecasting at the Jiujiang Hydrological Station. The input data were preprocessed through feature selection and sequence decomposition. Subsequently, the Sparrow Search Algorithm (SSA) was utilized to perform automated optimization of key hyperparameters of the Long Short-Term Memory (LSTM) model, thereby enhancing the model’s adaptability under complex hydrological conditions. Experimental results based on multi-station hydrological and meteorological data of the middle reaches of the Yangtze River from 2009 to 2016 show that the SSA-LSTM achieves a Nash–Sutcliffe Efficiency (NSE) of 0.98 during the testing period (2016). The Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) are reduced by 49.3% and 51.3%, respectively, compared to the standard LSTM. A comprehensive evaluation across different flow levels, utilizing Taylor diagrams and error distribution analysis, further confirms the model’s robustness. The model demonstrates robust performance across different flow regimes: compared to the standard LSTM model, SSA-LSTM improves the NSE from 0.45 to 0.88 in high-flow scenarios, exhibiting excellent capabilities in peak flow prediction and flood process characterization. In low-flow scenarios, the NSE is improved from −0.77 to 0.72, indicating more reliable prediction of baseflow mechanisms. The study demonstrates that SSA-LSTM can effectively capture hydrological nonlinear characteristics under strong river–lake backwater and human disturbances, providing a high-precision and high-efficiency data-driven method for runoff prediction in complex basins. Full article
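The Nash–Sutcliffe Efficiency used as the headline score above has a short closed form; a minimal reference implementation (illustrative values only, not the study's data):

```python
def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 matches the skill
    of always predicting the observed mean, negative values are worse."""
    mean_obs = sum(observed) / len(observed)
    ss_err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_err / ss_tot
```

This scale explains why the low-flow figures quoted above matter: an NSE of −0.77 means the model was worse than predicting the mean flow, while 0.72 is a substantial fit.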
30 pages, 1904 KB  
Review
Motion-Induced Errors in Buoy-Based Wind Measurements: Mechanisms, Compensation Methods, and Future Perspectives for Offshore Applications
by Dandan Cao, Sijian Wang and Guansuo Wang
Sensors 2026, 26(3), 920; https://doi.org/10.3390/s26030920 (registering DOI) - 31 Jan 2026
Abstract
Accurate measurement of sea-surface winds is critical for climate science, physical oceanography, and the rapidly expanding offshore wind energy sector. Buoy-based platforms—moored meteorological buoys, drifters, and floating LiDAR systems (FLS)—provide practical alternatives to fixed offshore structures, especially in deep water where bottom-founded installations are economically prohibitive. Yet these floating platforms are subject to continuous pitch, roll, heave, and yaw motions forced by wind, waves, and currents. Such six-degree-of-freedom dynamics introduce multiple error pathways into the measured wind signal. This paper synthesizes the current understanding of motion-induced measurement errors and the techniques developed to compensate for them. We identify four principal error mechanisms: (1) geometric biases caused by sensor tilt, which can underestimate horizontal wind speed by 0.4–3.4% depending on inclination angle; (2) contamination of the measured signal by platform translational and rotational velocities; (3) artificial inflation of turbulence intensity by 15–50% due to spectral overlap between wave-frequency buoy motions and atmospheric turbulence; and (4) beam misalignment and range-gate distortion specific to scanning LiDAR systems. Compensation strategies have progressed through four recognizable stages: fundamental coordinate-transformation and velocity-subtraction algorithms developed in the 1990s; Kalman-filter-based multi-sensor fusion emerging in the 2000s; Response Amplitude Operator modeling tailored to FLS platforms in the 2010s; and data-driven machine-learning approaches under active development today. Despite this progress, key challenges persist. Sensor reliability degrades under extreme sea states precisely when accurate data are most needed. The coupling between high-frequency platform vibrations and turbulence remains poorly characterized. No unified validation framework or benchmark dataset yet exists to compare methods across platforms and environments. We conclude by outlining research priorities: end-to-end deep-learning architectures for nonlinear error correction, adaptive algorithms capable of all-sea-state operation, standardized evaluation protocols with open datasets, and tighter integration of intelligent software with next-generation low-power sensors and actively stabilized platforms. Full article
(This article belongs to the Section Industrial Sensors)
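The first error mechanism, geometric tilt bias, follows from projecting the horizontal wind onto a tilted sensor axis. A quick check that a pure cosine projection reproduces the 0.4–3.4% range quoted in the abstract (a simplification: real tilt errors also depend on flow distortion and the vertical wind component):

```python
import math

def tilt_speed_bias(theta_deg):
    """Fractional underestimate of horizontal wind speed when the sensor
    axis is inclined theta degrees from vertical, assuming a pure
    geometric (cosine) projection of the horizontal component."""
    return 1.0 - math.cos(math.radians(theta_deg))
```

For inclinations of roughly 5° to 15°, this gives about 0.4% to 3.4%, consistent with the range cited.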
18 pages, 955 KB  
Article
Parameter Calculation of Coal Mine Gas Drainage Networks Based on PSO–Newton Iterative Algorithm
by Xiaolin Li, Zhiyu Cheng and Tongqiang Xia
Appl. Sci. 2026, 16(3), 1443; https://doi.org/10.3390/app16031443 - 30 Jan 2026
Abstract
Comprehensive monitoring of gas extraction parameters is crucial for the safe production of coal mines. However, it is a challenge to collect the overall gas drainage network parameters with limited sensors due to technical and economic constraints. To address this issue, a nonlinear model of the gas confluence structure is constructed based on the conservation of mass, the conservation of energy, and gas state properties. Considering exogenous variables such as the frictional loss correction coefficient (α) and the air leakage resistance coefficient (β), as well as the iterative structure of drainage networks, a hybrid PSO–Newton algorithm framework is designed. This framework realizes iterative solutions for multi-confluence structures by combining global optimization (PSO) and local nonlinear solving (Newton’s method). A case study using historical monitoring data from the 11306 working face of S Coal Mine was conducted to evaluate the proposed algorithm at both the branch and drill-field scales. The results show that key parameters such as gas flow velocity, concentration, and density align with actual observation trends, with most deviations within 10%, verifying the accuracy and effectiveness of the algorithm. A deviation comparison between the standalone Newton’s method and the PSO–Newton algorithm further demonstrates the stability of the latter. By enabling the derivation of comprehensive network parameters from limited monitoring data, this study provides strong support for the intelligent management of coal mine gas extraction. Full article
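The hybrid idea, PSO for a global initial guess followed by Newton refinement for fast local convergence, can be sketched on a scalar residual. The toy equation and all tuning constants below are illustrative, not the paper's network model:

```python
import random

def pso_newton(f, fprime, bounds, n_particles=12, pso_iters=30, newton_iters=25):
    """Solve f(x) = 0: PSO globally minimizes f(x)**2, Newton refines locally."""
    lo, hi = bounds
    rng = random.Random(0)                      # fixed seed for repeatability
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]
    gbest = min(xs, key=lambda x: f(x) ** 2)
    for _ in range(pso_iters):                  # global stage (PSO)
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = 0.6 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i]) + 1.5 * r2 * (gbest - xs[i])
            xs[i] += vs[i]
            if f(xs[i]) ** 2 < f(pbest[i]) ** 2:
                pbest[i] = xs[i]
            if f(xs[i]) ** 2 < f(gbest) ** 2:
                gbest = xs[i]
    x = gbest                                   # local stage (Newton)
    for _ in range(newton_iters):
        fx, fpx = f(x), fprime(x)
        if abs(fx) < 1e-14 or abs(fpx) < 1e-14:
            break
        x -= fx / fpx
    return x
```

For example, `pso_newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, (0.0, 5.0))` converges to the real root near 2.0946; the PSO stage supplies a starting point close enough that Newton's quadratic convergence takes over, which is the same division of labor the paper describes for its confluence equations.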
24 pages, 5198 KB  
Article
Industrial Process Control Based on Reinforcement Learning: Taking Tin Smelting Parameter Optimization as an Example
by Yingli Liu, Zheng Xiong, Haibin Yuan, Hang Yan and Ling Yang
Appl. Sci. 2026, 16(3), 1429; https://doi.org/10.3390/app16031429 - 30 Jan 2026
Abstract
To address the issues of parameter setting, reliance on human experience, and the limitations of traditional model-driven control methods in handling complex nonlinear dynamics in the tin smelting industrial process, this paper proposes a data-driven control approach based on improved deep reinforcement learning (RL). Aiming to reduce the tin entrainment rate in smelting slag and CO emissions in exhaust gas, we construct a data-driven environment model with an 8-dimensional state space (including furnace temperature, pressure, gas composition, etc.) and an 8-dimensional action space (including lance parameters such as material flow, oxygen content, backpressure, etc.). We innovatively design a Dual-Action Discriminative Deep Deterministic Policy Gradient (DADDPG) algorithm. This method employs an online Actor network to simultaneously generate deterministic and exploratory random actions, with the Critic network selecting high-value actions for execution, consistently enhancing policy exploration efficiency. Combined with a composite reward function (integrating real-time Sn/CO content, their variations, and continuous penalty mechanisms for safety constraints), the approach achieves multi-objective dynamic optimization. Experiments based on real tin smelting production line data validate the environment model, with results demonstrating that the tin content in slag is reduced to between 3.5% and 4%, and CO content in exhaust gas is decreased to between 2000 and 2700 ppm. Full article
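The dual-action mechanism described above, where the actor emits both a deterministic action and a noise-perturbed exploratory one and the critic's value estimate chooses between them, reduces to a few lines. `actor` and `critic` here are stand-in callables, not the paper's neural networks:

```python
import random

def dual_action_select(actor, critic, state, sigma=0.1, rng=None):
    """DADDPG-style action choice (sketch): execute whichever of the
    deterministic and exploratory actions the critic values more."""
    rng = rng or random.Random(0)
    a_det = actor(state)                                  # deterministic action
    a_exp = [a + rng.gauss(0.0, sigma) for a in a_det]    # exploratory action
    return a_det if critic(state, a_det) >= critic(state, a_exp) else a_exp
```

With a critic that penalizes action magnitude, for instance, the deterministic zero action is selected; when exploration noise happens to land on a higher-value action, the exploratory one wins, which is the exploration-efficiency gain the abstract claims.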
12 pages, 2261 KB  
Article
Fractional Modeling of Coupled Heat and Moisture Transfer with Gas-Pressure-Driven Flow in Raw Cotton
by Normakhmad Ravshanov and Istam Shadmanov
Processes 2026, 14(3), 481; https://doi.org/10.3390/pr14030481 - 29 Jan 2026
Abstract
This study introduces a multidimensional mathematical model and a robust numerical algorithm with second-order accuracy for modeling the complex coupled processes of heat and moisture transfer with gas-pressure-driven flow, based on time-fractional differential equations (with Caputo derivatives of order 0 < α ≤ 1), which capture the memory effects and anomalous diffusion inherent in heterogeneous porous media. The proposed model integrates conductive and convective heat transfer; moisture diffusion and phase change; and pressure dynamics within the pore space, together with their bidirectional couplings. It also incorporates environmental interactions through boundary conditions for heat and moisture exchange with the ambient air; internal heat and moisture release; transient influx of solar radiation; and material heterogeneity, where all transport coefficients are spatially variable functions. To solve this nonlinear and coupled system, we developed a high-order, stable finite-difference scheme. The numerical algorithm employs an alternating-direction implicit approach, which ensures computational efficiency while maintaining numerical stability. We demonstrate the algorithm’s capability through numerical simulations that monitor and predict the spatiotemporal evolution of the coupled temperature, moisture content, and pressure fields. The results reveal how heterogeneity, diurnal solar radiation, and internal sources create localized hot spots, moisture accumulation zones, and pressure gradients that significantly influence the overall dynamics of storage and drying processes. Full article
(This article belongs to the Section Process Control and Monitoring)
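A standard building block for such time-fractional models is the L1 discretization of the Caputo derivative of order 0 < α < 1. A minimal sketch: the weights b_k = (k+1)^(1−α) − k^(1−α) come from assuming the solution is piecewise linear on the grid, which is how the scheme is usually derived (this is a generic textbook scheme, not necessarily the exact discretization the paper uses):

```python
import math

def caputo_l1(u, dt, alpha):
    """L1 finite-difference approximation of the Caputo derivative of
    order alpha in (0, 1) at the last grid point, from samples u[0..n]
    on a uniform grid with step dt."""
    n = len(u) - 1
    b = [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(n)]
    s = sum(b[k] * (u[n - k] - u[n - k - 1]) for k in range(n))
    return s / (dt ** alpha * math.gamma(2 - alpha))
```

For u(t) = t the scheme is exact: the Caputo derivative of order α is t^(1−α)/Γ(2−α), which the telescoping sum reproduces up to rounding.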
14 pages, 1104 KB  
Article
MAGE (Multimodal AI-Enhanced Gastrectomy Evaluation): Comparative Analysis of Machine Learning Models for Postoperative Complications in Central European Gastric Cancer Population
by Wojciech Górski, Marcin Kubiak, Amir Nour Mohammadi, Maksymilian Podleśny, Gian Luca Baiocchi, Manuele Gaioni, Santo Vincent Grasso, Andrew Gumbs, Timothy M. Pawlik, Bartłomiej Drop, Albert Chomątowski, Zuzanna Pelc, Katarzyna Sędłak, Michał Woś and Karol Rawicz-Pruszyński
Cancers 2026, 18(3), 443; https://doi.org/10.3390/cancers18030443 - 29 Jan 2026
Abstract
Introduction: By leveraging dedicated datasets and predictive modeling, machine-learning (ML) algorithms can estimate the probability of both short- and long-term outcomes after surgery. The aim of this study was to evaluate the ability of ML-based models to predict postoperative complications in patients with gastric cancer (GC) undergoing multimodal therapy. In particular, we aimed to develop a free, publicly accessible online calculator based on preoperative variables. Materials and Methods: Patients with histologically confirmed locally advanced (cT2-4N0-3M0) GC who underwent multimodal treatment with curative intent between 2013 and 2023 were included in the study. An ML model evaluation pipeline with stratified 5-fold cross-validation was used. Results: A total of 368 patients were included in the final analytic cohort. Among the five algorithm classes under 5-fold cross-validation, the Area Under the Receiver Operating Characteristic Curve (ROC AUC) was 0.9719, 0.9652, 0.9796, 0.8339 and 0.7581 for XGBoost, CatBoost, Random Forest, SVM and Logistic Regression, respectively. Macro F1 was 0.8714, 0.5094, 0.8820, 0.8714 and 0.4579 for XGBoost, SVM, Random Forest, CatBoost and Logistic Regression, respectively. Overall accuracy was 0.8897, 0.5980, 0.8885, 0.8750 and 0.5466 for XGBoost, SVM, Random Forest, CatBoost and Logistic Regression models, respectively. Conclusions: In this Central and Eastern European cohort of patients with locally advanced GC, ML models using non-linear decision rules, particularly Random Forest and XGBoost, substantially outperformed conventional linear approaches in predicting the severity of postoperative complications. Prospective external validation is needed to clarify the model’s clinical utility and its potential role in perioperative decision support. Full article
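The macro F1 reported above is the unweighted mean of per-class F1 scores, which is why it can diverge sharply from overall accuracy on imbalanced cohorts. A minimal reference implementation (the label vectors below are hypothetical, not the study's data):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

A classifier that always predicts the majority class, for example, scores well on accuracy but collapses on macro F1 because the minority class contributes an F1 of zero.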
24 pages, 7789 KB  
Article
Real-Time Acceleration Estimation for Low-Thrust Spacecraft Using a Dual-Layer Filter and an Interacting Multiple Model
by Zipeng Wu, Peng Zhang and Fanghua Jiang
Aerospace 2026, 13(2), 130; https://doi.org/10.3390/aerospace13020130 - 29 Jan 2026
Abstract
Orbit determination for non-cooperative targets represents a significant focus of research within the domain of space situational awareness. In contrast to cooperative targets, non-cooperative targets do not provide their orbital parameters, necessitating the use of observation data for accurate orbit determination. The increasing prevalence of low-cost, low-thrust spacecraft has heightened the demand for advancements in real-time orbit determination and parameter estimation for low-thrust maneuvers. This paper presents a novel dual-layer filter approach designed to facilitate real-time acceleration estimation for non-cooperative targets. Initially, the method employs a square-root cubature Kalman filter (SRCKF) to handle the nonlinearity of the system and a Jerk model to address the challenges in acceleration modeling, thereby yielding a preliminary estimation of the acceleration produced by the thruster of the non-cooperative target. Subsequently, a specialized filtering structure is established for the estimated acceleration, and two filtering frameworks are integrated into a dual-layer filter model via the cubature transform, significantly enhancing the estimation accuracy of acceleration parameters. Finally, to adapt to the potential on/off states of the thrusters, the Interacting Multiple Model (IMM) algorithm is employed to bolster the robustness of the proposed solution. Simulation results validate the effectiveness of the proposed method in achieving real-time orbit determination and acceleration estimation. Full article
(This article belongs to the Special Issue Precise Orbit Determination of the Spacecraft)
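At the core of any such filter is the measurement update. The paper's square-root cubature formulation is beyond a short sketch, but the scalar linear Kalman update it generalizes looks like this (a stand-in assuming a scalar state and measurement, not the paper's filter):

```python
def kalman_update(x, P, z, H=1.0, R=1.0):
    """Scalar Kalman measurement update: fold measurement z (model
    z = H*x + v, noise variance R) into estimate x with variance P."""
    y = z - H * x          # innovation
    S = H * P * H + R      # innovation variance
    K = P * H / S          # Kalman gain
    return x + K * y, (1.0 - K * H) * P
```

Starting from x = 0 with variance 1 and observing z = 2 with unit noise, the update moves the estimate to 1.0 and halves the variance; cubature filters replace the linear H with sampled "cubature points" to handle the nonlinear dynamics the abstract describes.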
22 pages, 1360 KB  
Article
A Data-Driven Approach to Estimating Passenger Boarding in Bus Networks
by Gustavo Bongiovi, Teresa Galvão Dias, Jose Nauri Junior and Marta Campos Ferreira
Appl. Sci. 2026, 16(3), 1384; https://doi.org/10.3390/app16031384 - 29 Jan 2026
Abstract
This study explores the application of multiple predictive algorithms under general versus route-specialized modeling strategies to estimate passenger boarding demand in public bus transportation systems. Accurate estimation of boarding patterns is essential for optimizing service planning, improving passenger comfort, and enhancing operational efficiency. This research evaluates a range of predictive models to identify the most effective techniques for forecasting demand across different routes and times. Two modeling strategies were implemented: a generalistic approach and a specialized one. The latter was designed to capture route-specific characteristics and variability. A real-world case study from a medium-sized metropolitan region in Brazil was used to assess model performance. Results indicate that ensemble-tree-based models, particularly XGBoost, achieved the highest accuracy and robustness in handling nonlinear relationships and complex interactions within the data. Compared to the generalistic approach, the specialized approach demonstrated superior adaptability and precision, making it especially suitable for long-term and strategic planning applications. It reduced the average RMSE by 19.46% (from 13.84 to 11.15) and the MAE by 17.36% (from 9.60 to 7.93), while increasing the average R² from 0.289 to 0.344. However, these gains came with higher computational demands and an increased mean forecast bias (from 0.002 to 0.560), indicating a need for bias correction before operational deployment. The findings highlight the practical value of predictive modeling for transit authorities, enabling data-driven decision making in fleet allocation, route planning, and service frequency adjustment. Moreover, accurate demand forecasting contributes to cost reduction, improved passenger satisfaction, and environmental sustainability through optimized operations. Full article
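The RMSE, MAE, R², and mean forecast-bias figures quoted above are standard regression metrics; a compact reference implementation, with bias defined here as mean(prediction − actual):

```python
import math

def regression_report(y_true, y_pred):
    """RMSE, MAE, R-squared, and mean forecast bias for a prediction series."""
    n = len(y_true)
    errs = [p - t for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mae = sum(abs(e) for e in errs) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errs)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    bias = sum(errs) / n          # positive bias means systematic over-forecasting
    return {"rmse": rmse, "mae": mae, "r2": r2, "bias": bias}
```

Note how a constant over-forecast shows up equally in RMSE, MAE, and bias but drives R² negative, which is why the abstract flags bias correction as a prerequisite for deployment.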
20 pages, 480 KB  
Systematic Review
Mathematical and Algorithmic Advances in Machine Learning for Statistical Process Control: A Systematic Review
by Yulong Qiao, Tingting Han, Zixing Wu, Ge Jin, Qian Zhang and Qin Xu
Entropy 2026, 28(2), 151; https://doi.org/10.3390/e28020151 - 29 Jan 2026
Abstract
Integrating machine learning (ML) with Statistical Process Control (SPC) is important for Industry 4.0 environments. Contemporary manufacturing data exhibit high-dimensionality, autocorrelation, non-stationarity, and class imbalance, which challenge classical SPC assumptions. This systematic review, conducted following the PRISMA 2020 guidelines, provides a problem-driven synthesis that links these data challenges to corresponding methodological families in ML-based SPC. Specifically, we review approaches for (1) high-dimensional and redundant data (dimensionality reduction and feature selection), (2) autocorrelated and dynamic processes (time-series and state-space models), and (3) data scarcity and imbalance (cost-sensitive learning, generative modeling, and transfer learning). Nonlinearity is treated as a cross-cutting property within each category. For each, we outline the mathematical rationale of representative algorithms and illustrate their use with industrial examples. We also summarize open issues in interpretability, thresholding, and real-time deployment. This review offers structured guidance for selecting ML techniques suited to complex manufacturing data and for designing reliable online monitoring pipelines. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
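The classical SPC machinery these ML methods extend is worth keeping in view; the EWMA control chart is a common baseline. A minimal monitor with time-varying limits, assuming the process data are already standardized (in-control mean 0, unit variance):

```python
def ewma_chart(x, lam=0.2, L=3.0):
    """EWMA monitoring statistic z_t = lam*x_t + (1-lam)*z_{t-1} with
    time-varying L-sigma control limits; returns (z, limit, alarm) per point."""
    z, out = 0.0, []
    for t, xi in enumerate(x, start=1):
        z = lam * xi + (1 - lam) * z
        var = (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * t))  # Var(z_t)
        limit = L * var ** 0.5
        out.append((z, limit, abs(z) > limit))
    return out
```

On an in-control stretch the statistic hugs zero; after a sustained 3-sigma mean shift it crosses the limit within a couple of samples, which is the small-shift sensitivity that makes EWMA the usual comparator for ML-based monitors.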
14 pages, 1426 KB  
Article
Optimization of Multi-Layer Neural Network-Based Cooling Load Prediction for Office Buildings Through Data Preprocessing and Algorithm Variations
by Namchul Seong, Daeung Danny Kim and Goopyo Hong
Buildings 2026, 16(3), 566; https://doi.org/10.3390/buildings16030566 - 29 Jan 2026
Abstract
Accurate forecasting of cooling loads is essential for the effective operation of Building Energy Management Systems (BEMSs) and the reduction of building-sector carbon emissions. Although Artificial Neural Networks (ANNs), particularly Multi-Layer Perceptrons (MLPs), have shown strong capability in modeling nonlinear thermal dynamics, their reliability in practice is often limited by inappropriate training algorithm selection and poor data quality, including missing values and numerical distortions. To address these limitations, this study conducts a comprehensive empirical investigation into the effects of training algorithms and systematic data preprocessing strategies on cooling load prediction performance using an MLP model. Through benchmarking ten distinct training algorithms under identical conditions, the Levenberg–Marquardt (LM) algorithm was identified as achieving the lowest prediction error when integrated data preprocessing was applied. In particular, the application of data preprocessing reduced the CvRMSE from 18.56% to 6.03% during the testing period. Furthermore, the proposed framework effectively mitigated zero-load prediction errors during non-cooling periods and improved prediction accuracy under high-load operating conditions. These results provide practical and quantitative guidance for developing robust data-driven forecasting models applicable to real-time building energy optimization. Full article
(This article belongs to the Special Issue Built Environment and Building Energy for Decarbonization)
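The CvRMSE metric quoted above normalizes RMSE by the mean measured load and reports it in percent (the convention follows ASHRAE Guideline 14; degrees-of-freedom adjustments in the denominator vary between sources). A one-liner for reference:

```python
import math

def cv_rmse(measured, predicted):
    """Coefficient of variation of the RMSE, in percent: RMSE normalized
    by the mean of the measured series."""
    n = len(measured)
    rmse = math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)
    return 100.0 * rmse / (sum(measured) / n)
```

Because it is scale-free, the 18.56% to 6.03% improvement reported above is comparable across buildings with very different absolute cooling loads.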
36 pages, 488 KB  
Article
Analysis of Implicit Neutral-Tempered Caputo Fractional Volterra–Fredholm Integro-Differential Equations Involving Retarded and Advanced Arguments
by Abdulrahman A. Sharif and Muath Awadalla
Mathematics 2026, 14(3), 470; https://doi.org/10.3390/math14030470 - 29 Jan 2026
Abstract
This paper investigates a class of implicit neutral fractional integro-differential equations of Volterra–Fredholm type. The equations incorporate a tempered fractional derivative in the Caputo sense, along with both retarded (delay) and advanced arguments. The problem is formulated on a time domain segmented into past, present, and future intervals and includes nonlinear mixed integral operators. Using Banach’s contraction mapping principle and Schauder’s fixed point theorem, we establish sufficient conditions for the existence and uniqueness of solutions within the space of continuous functions. The study is then extended to general Banach spaces by employing Darbo’s fixed point theorem combined with the Kuratowski measure of noncompactness. Ulam–Hyers–Rassias stability is also analyzed under appropriate conditions. To demonstrate the practical applicability of the theoretical framework, explicit examples with specific nonlinear functions and integral kernels are provided. Furthermore, detailed numerical simulations are conducted using MATLAB-based specialized algorithms, illustrating solution convergence and behavior in both finite-dimensional and Banach space contexts. Full article
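The contraction-mapping machinery behind these existence proofs is also constructive: Picard (fixed-point) iteration converges geometrically whenever the map is a contraction. A scalar illustration, with the cosine map (a textbook contraction) standing in for the paper's integral operator:

```python
import math

def picard_iterate(g, x0, tol=1e-13, max_iter=1000):
    """Banach fixed-point iteration x_{k+1} = g(x_k); converges geometrically
    when g is a contraction (Lipschitz constant < 1) near the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

Starting from x0 = 1, the iteration converges to the Dottie number, the unique solution of x = cos(x); the same successive-approximation scheme underlies the MATLAB simulations the abstract mentions, though those operate on the full integro-differential system.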
20 pages, 874 KB  
Article
An Adaptive Scheme for Neuron Center Selection to Design an Efficient Radial Basis Neural Network Using PSO
by Arshad Afzal
Mathematics 2026, 14(3), 469; https://doi.org/10.3390/math14030469 - 29 Jan 2026
Abstract
An adaptive and efficient particle swarm optimization (PSO)-based learning algorithm to determine neuron centers in the hidden layer of a radial basis neural network (RBNN) is developed in the present work for regression problems. The proposed PSO–RBNN algorithm searches the entire input domain space to discover optimal neuron centers by solving an optimization problem and aims to overcome the limitation of center selection from the training data. The network is built in a sequential manner using optimal neuron centers until some specified criterion is met, and therefore, it exploits the concept of neuron significance during the learning process. The Gaussian function with a constant spread (also known as width) is chosen as the radial basis function for each neuron. To illustrate the effectiveness of the PSO–RBNN algorithm over the orthogonal least squares (OLS) method (a popular learning algorithm under a similar category, which selects the neuron center from training data), numerical simulations for different types of nonlinear problems of varying dimensions and complexities are conducted. Finally, a comparison with multiple existing algorithms for network design is made using available data. The results show that the RBNN architecture developed with the proposed learning algorithm exhibits superior convergence, displays good generalization ability, and requires a smaller number of neurons, resulting in an efficient and compact network architecture. Full article
(This article belongs to the Section E: Applied Mathematics)
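A fixed-center Gaussian RBNN with least-squares output weights is the baseline that PSO-based center selection improves on. A minimal numpy sketch with centers on a uniform grid (rather than PSO-selected) and a hypothetical 1-D regression target:

```python
import numpy as np

def rbf_design(X, centers, spread):
    """Design matrix of Gaussian basis responses, one column per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * spread ** 2))

def fit_rbnn(X, y, centers, spread):
    """Solve for the output-layer weights by linear least squares."""
    Phi = rbf_design(X, centers, spread)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

# Hypothetical target: recover sin(x) on [0, 2*pi] with 10 grid centers.
X = np.linspace(0.0, 2.0 * np.pi, 50)[:, None]
centers = np.linspace(0.0, 2.0 * np.pi, 10)[:, None]
w = fit_rbnn(X, np.sin(X[:, 0]), centers, spread=1.0)
pred = rbf_design(X, centers, 1.0) @ w
```

Because the hidden layer is fixed, only a linear solve is needed for the weights; the PSO scheme in the paper instead treats the center locations themselves as decision variables, trading this simplicity for a more compact network.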
21 pages, 6750 KB  
Article
Machine Learning-Based Energy Consumption and Carbon Footprint Forecasting in Urban Rail Transit Systems
by Sertaç Savaş and Kamber Külahcı
Appl. Sci. 2026, 16(3), 1369; https://doi.org/10.3390/app16031369 - 29 Jan 2026
Viewed by 49
Abstract
In the fight against global climate change, the transportation sector is critically important because it is one of the major contributors to total greenhouse gas emissions worldwide. Although urban rail transit systems have a lower carbon footprint than road transportation, accurately forecasting their energy consumption is vital for sustainable urban planning, energy supply management, and the development of carbon balancing strategies. In this study, forecasting models are designed using five different machine learning (ML) algorithms, and their performance in predicting the energy consumption and carbon footprint of urban rail transit systems is comprehensively compared. For five distribution-center substations, 10 years of monthly energy consumption data and the corresponding total carbon footprint data are used. Support Vector Regression (SVR), Extreme Gradient Boosting (XGBoost), Long Short-Term Memory (LSTM), Adaptive Neuro-Fuzzy Inference System (ANFIS), and Nonlinear Autoregressive Neural Network (NAR-NN) models are developed to forecast these data. Model hyperparameters are optimized using a 20-iteration Random Search algorithm, and the stochastic models are run 10 times with the optimized parameters. The results show that the SVR model consistently achieves the highest forecasting performance across all datasets. For carbon footprint forecasting, SVR yields the best results, with an R2 of 0.942 and a MAPE of 3.51%, while the ensemble method XGBoost achieves the second-best performance (R2=0.648). Accordingly, the deterministic traditional ML models perform best, whereas the neural network-based stochastic models (LSTM, ANFIS, and NAR-NN) show insufficient generalization capability under limited data conditions. These findings indicate that, in small- and medium-scale time-series forecasting problems, traditional machine learning methods are more effective than neural network-based methods that require large datasets. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
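The pipeline described above, lagged monthly features plus a 20-iteration random search over hyperparameters scored chronologically, can be sketched as follows. For a dependency-light illustration this substitutes kernel ridge regression for SVR and an invented seasonal synthetic series for the substation data; the lag count, search ranges, and split fractions are all assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented synthetic series standing in for 10 years of monthly substation
# consumption: a base level, yearly seasonality, and noise.
t = np.arange(120)
series = 100 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)

# Lagged features: predict month t from the previous 12 months.
LAGS = 12
X = np.column_stack([series[i:i - LAGS] for i in range(LAGS)])
y = series[LAGS:]
n = len(y)
i1, i2 = int(0.6 * n), int(0.8 * n)          # chronological train/val/test split
X_tr, y_tr = X[:i1], y[:i1]
X_va, y_va = X[i1:i2], y[i1:i2]
X_te, y_te = X[i2:], y[i2:]

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_predict(gamma, lam, Xq):
    # Kernel ridge regression: alpha = (K + lam * I)^(-1) y
    K = rbf_kernel(X_tr, X_tr, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(y_tr)), y_tr)
    return rbf_kernel(Xq, X_tr, gamma) @ alpha

# 20-iteration random search over hyperparameters, scored on the validation set.
best_params, best_val = None, np.inf
for _ in range(20):
    gamma = 10 ** rng.uniform(-5, -2)        # assumed log-uniform search ranges
    lam = 10 ** rng.uniform(-3, 1)
    val_mape = np.mean(np.abs((y_va - krr_predict(gamma, lam, X_va)) / y_va))
    if val_mape < best_val:
        best_params, best_val = (gamma, lam), val_mape

# Report R2 and MAPE on the held-out test months, as in the study.
pred = krr_predict(*best_params, X_te)
r2 = 1 - ((y_te - pred) ** 2).sum() / ((y_te - y_te.mean()) ** 2).sum()
mape = np.mean(np.abs((y_te - pred) / y_te))
```

Scoring candidates on a validation block that follows the training block keeps the search honest for time series, where a shuffled cross-validation split would leak future information.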
20 pages, 2691 KB  
Article
Improved Load Frequency Control Design for Interconnected Power Systems
by Van Nguyen Ngoc Thanh, De Huynh Tan, Hoai Duong Minh and Van Van Huynh
Energies 2026, 19(3), 702; https://doi.org/10.3390/en19030702 - 29 Jan 2026
Viewed by 39
Abstract
Managing frequency stability in modern interconnected power systems is a critical challenge, particularly under continuous load variations and increasing system complexity. In response, this study introduces an Improved Grey Wolf Optimizer (IGWO)-based Proportional–Integral–Derivative (PID) controller for effective Load Frequency Control (LFC). The proposed method is tested on interconnected power systems integrating thermal (reheat and non-reheat) and hydropower plants. The simulations cover continuous load variations and nonlinearity cases, with a generation rate constraint (GRC) block added to the model to closely mimic real-world operating conditions. The findings demonstrate that the IGWO-PID controller outperforms a Particle Swarm Optimization (PSO)-tuned controller, achieving faster stabilization, smaller frequency deviations, and robust performance. These results highlight the controller’s adaptability and scalability, offering a reliable approach to maintaining stability and operational efficiency in interconnected power systems. Full article
(This article belongs to the Special Issue Modeling, Simulation and Optimization of Power Systems: 2nd Edition)
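Metaheuristic PID tuning for LFC in the spirit of the abstract above can be sketched briefly: a standard Grey Wolf Optimizer (not the paper's improved IGWO variant) minimizes the ITAE of the frequency deviation on a single-area toy model. The plant parameters, gain bounds, and disturbance size are invented for illustration; the actual study uses multi-area thermal/hydro models with GRC.

```python
import numpy as np

rng = np.random.default_rng(2)

# Single-area toy model (hypothetical parameters): swing equation plus a
# first-order turbine, disturbed by a step load change.
M, D, Tt = 0.2, 0.8, 0.3       # inertia, damping, turbine time constant
DT, STEPS = 0.01, 800          # 8 s simulation horizon, Euler integration
DPL = 0.1                      # step load disturbance (p.u.)

def itae(gains):
    """Integral of time-weighted absolute frequency error for a PID gain set."""
    kp, ki, kd = gains
    df = pm = integ = prev_err = 0.0
    cost = 0.0
    for k in range(STEPS):
        err = -df                           # drive frequency deviation to zero
        integ += err * DT
        deriv = (err - prev_err) / DT
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        pm += DT * (u - pm) / Tt            # turbine dynamics
        df += DT * (pm - DPL - D * df) / M  # swing equation
        if not np.isfinite(df) or abs(df) > 10:
            return 1e6                      # penalize unstable gain sets
        cost += (k * DT) * abs(df) * DT     # ITAE criterion
    return cost

def gwo(obj, dim=3, wolves=12, iters=30, lo=0.0, hi=5.0):
    """Standard GWO: wolves move toward the alpha/beta/delta leaders."""
    pos = rng.uniform(lo, hi, (wolves, dim))
    for it in range(iters):
        fit = np.array([obj(p) for p in pos])
        alpha, beta, delta = pos[fit.argsort()[:3]]
        a = 2 - 2 * it / iters              # linearly decreasing coefficient
        for i in range(wolves):
            step = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                step += leader - A * np.abs(C * leader - pos[i])
            pos[i] = np.clip(step / 3, lo, hi)
    fit = np.array([obj(p) for p in pos])
    return pos[fit.argmin()], fit.min()

gains, cost = gwo(itae)
```

Swapping in a different objective (ISE, or a weighted sum of area control errors) or a multi-area model changes only the `itae` function; the optimizer loop is unchanged, which is what makes this family of tuners easy to extend.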