Mathematics, Volume 14, Issue 3 (February-1 2026) – 194 articles

Cover Story: Determining how uniaxial waves behave in an initially undisturbed semi-infinite medium is a fundamental topic in linear viscoelasticity and is covered in most standard rheology textbooks. It also serves as an essential preliminary step in studying wave propagation in linear dispersive media with dissipative effects, such as seismic wave transmission within the Earth.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, but PDF is the official version of record. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 2397 KB  
Article
Numerical Solutions via Shifted Pell Polynomials for Third-Order Rosenau–Hyman and Gilson–Pickering Equations
by Mohamed A. Abdelkawy, Waleed Mohamed Abd-Elhameed, Seham S. Alzahrani, Ahmed Gamal Atta and Anjan Biswas
Mathematics 2026, 14(3), 582; https://doi.org/10.3390/math14030582 - 6 Feb 2026
Abstract
This paper introduces a collocation algorithm for numerically solving the third-order Gilson–Pickering equation (GPE) and the classical Rosenau–Hyman equation (RHE). We employ newly developed shifted Pell polynomials as basis functions. Novel formulas for these polynomials are devised and utilized in constructing the proposed algorithm. Specifically, we establish a new power form and its inversion formula, along with an explicit formula for derivatives of the shifted Pell polynomials, from which the operational matrices of derivatives (OMDs) are derived. These matrices facilitate the conversion of nonlinear dispersive models into systems of algebraic equations, efficiently solved using Newton’s iterative technique. The error analysis of the shifted Pell expansion is discussed in depth. Several numerical examples, including the RHE, its fourth-order variant, and the Fornberg–Whitham equation, are provided to demonstrate the method’s performance and accuracy. Comparative results are also reported. Full article
(This article belongs to the Special Issue Soliton Theory and Integrable Systems in Mathematical Physics)
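As a quick illustration of the basis this method rests on: Pell polynomials obey the three-term recurrence P_{k+1}(t) = 2t·P_k(t) + P_{k-1}(t), and a "shifted" variant is obtained by an affine change of variable onto the working interval. The sketch below uses the shift t = 2x − 1 onto [0, 1] as an illustrative assumption; the paper's own shift, normalization, and derivative formulas are not reproduced here.

```python
def shifted_pell(n, x):
    """Evaluate the first n+1 shifted Pell polynomials at x in [0, 1].

    Classical Pell polynomials satisfy P_1(t) = 1, P_2(t) = 2t and
    P_{k+1}(t) = 2t * P_k(t) + P_{k-1}(t).  The shift t = 2x - 1 used
    here is an assumed normalisation for illustration only.
    """
    t = 2.0 * x - 1.0
    vals = [1.0, 2.0 * t]              # P_1*, P_2*
    for _ in range(2, n + 1):
        vals.append(2.0 * t * vals[-1] + vals[-2])
    return vals[: n + 1]
```

In a collocation scheme, the unknown solution is expanded in such a basis and the recurrence (together with derivative formulas) yields the operational matrices that turn the PDE into algebraic equations.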
31 pages, 4208 KB  
Article
Study on Lateral Abutment Stress and Damage Range of Coal Seam Under the Coupling of Coal-Rock Structure
by Wenrui He, Dongdong Chen and Hengzhong Zhu
Mathematics 2026, 14(3), 581; https://doi.org/10.3390/math14030581 - 6 Feb 2026
Abstract
The lateral abutment stress and damage range of the coal seam are prerequisites for the layout of gob-side entries and surrounding rock control. They are influenced by the structure and mechanical properties of the coal seam and the overlying strata. To address this issue, this study establishes a mechanical analysis model for the lateral abutment stress and damage range under coupled conditions between the coal seam and overlying strata. This model systematically investigates the influence of various factors, including the fracture height and break angle of the overlying strata, the rotation angle and subsidence of key blocks, the burial depth and thickness of the coal seam, as well as the cohesion and internal friction angle of the coal mass. The study reveals that the weight and overburden load of the triangular hanging roof zone, along with the subsidence and rotation of the key blocks, are the key factors influencing the lateral abutment stress and damage range. Meanwhile, the reliability of the mechanical model has been substantiated through a combination of numerical simulation and in situ monitoring results. Full article
(This article belongs to the Special Issue Mathematics Applied in Rock Mechanics and Mining Science)
56 pages, 4119 KB  
Article
Reliability in Robotics and Intelligent Systems: Mathematical Modeling and Algorithmic Innovations
by Madina Issametova, Nikita V. Martyushev, Boris V. Malozyomov, Anton Y. Demin, Alexander V. Pogrebnoy, Elizaveta E. Kuleshova and Denis V. Valuev
Mathematics 2026, 14(3), 580; https://doi.org/10.3390/math14030580 - 6 Feb 2026
Abstract
The rapid development of digital manufacturing and robotic systems places increased demands on the accuracy and reliability of industrial manipulators. Traditional time-based reliability metrics do not reflect the robot’s ability to consistently achieve the desired position and orientation within process tolerances or the probability of the end-effector falling into a given area of permissible poses. The proposed framework integrates a deterministic kinematic model, a stochastic representation of Denavit–Hartenberg parameters and control variables, analytical methods for estimating probabilities, and numerical modeling using the Monte Carlo method. The methodology has been tested on the widely used industrial robot FANUC LR Mate 200iD/7L. The results demonstrate a significant dependence of geometric reliability on the kinematic configuration of the manipulator, with maximum reliability in compact poses and a significant reduction in elongated configurations near singularities. Comprehensive validation was carried out, including numerical experiments on a planar prototype, high-precision physical measurements on a real robot and analysis of operational data, which confirmed the adequacy of the proposed model. The developed approach provides a powerful tool for designing, optimizing and predicting the reliability of robotic cells in high-precision automation environments. Full article
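The Monte Carlo side of such a geometric-reliability analysis can be sketched in a few lines: perturb the kinematic parameters stochastically, push each sample through the forward kinematics, and count how often the end-effector stays within tolerance. The toy below uses a 2-link planar arm with assumed link lengths, noise level, and tolerance, not the FANUC model or the paper's stochastic DH formulation.

```python
import math
import random

def pose_reliability(theta1, theta2, l1=0.3, l2=0.25,
                     sigma=0.002, tol=0.0015, n=20000, seed=0):
    """Monte Carlo estimate of geometric reliability for a 2-link planar
    arm: the probability that the end-effector lands within `tol` metres
    of its nominal position when the joint angles carry Gaussian noise.
    All dimensions and noise levels are illustrative assumptions.
    """
    rng = random.Random(seed)
    # Nominal end-effector position (forward kinematics).
    x0 = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y0 = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    hits = 0
    for _ in range(n):
        a1 = theta1 + rng.gauss(0.0, sigma)
        a2 = theta2 + rng.gauss(0.0, sigma)
        x = l1 * math.cos(a1) + l2 * math.cos(a1 + a2)
        y = l1 * math.sin(a1) + l2 * math.sin(a1 + a2)
        if math.hypot(x - x0, y - y0) <= tol:
            hits += 1
    return hits / n
```

Even this toy reproduces the qualitative finding quoted in the abstract: a folded (compact) pose scores higher reliability than a fully stretched one, because the stretched configuration maximises the lever arm through which joint noise acts.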
23 pages, 5683 KB  
Article
Optimizing RTAB-Map Viewability to Reduce Cognitive Workload in VR Teleoperation: A User-Centric Approach
by Hojin Yoon, Haegyeom Choi, Jaehoon Jeong and Donghun Lee
Mathematics 2026, 14(3), 579; https://doi.org/10.3390/math14030579 - 6 Feb 2026
Abstract
In industrial environments, providing intuitive spatial information via 3D maps is essential for maximizing the efficiency of teleoperation. However, existing SLAM algorithms generating 3D maps predominantly focus on improving robot localization accuracy, often neglecting the optimization of viewability required for human operators to clearly perceive object depth and structure in virtual environments. To address this, this study proposes a methodology to optimize the viewability of RTAB-Map-based 3D maps using the Taguchi method, aiming to enhance VR teleoperation efficiency and reduce cognitive workload. We identified eight key parameters that critically affect visual quality and utilized an L18 orthogonal array to derive an optimal combination that controls point cloud density and noise levels. Experimental results from a target object picking task demonstrated that the optimized 3D map reduced task completion time by approximately 9 s compared to the RGB image condition, achieving efficiency levels approaching those of the physical-world baseline. Furthermore, evaluations using NASA-TLX confirmed that intuitive visual feedback minimized situational awareness errors and substantially alleviated cognitive workload. This study suggests a new direction for constructing high-efficiency teleoperation interfaces from a Human–Robot Interaction perspective by expanding SLAM optimization criteria from geometric precision to user-centric visual quality. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Systems)
20 pages, 785 KB  
Article
Pre-Disaster Preparedness in Food Bank Supply Chains Addressing Supply Uncertainty and Different Objectives
by Adrian F. Rivera, Neale R. Smith and Angel Ruiz
Mathematics 2026, 14(3), 578; https://doi.org/10.3390/math14030578 - 6 Feb 2026
Abstract
A critical decision in pre-disaster humanitarian logistics planning is determining the amount of aid to preposition to ensure timely and effective emergency response. To support managers in this process, we propose four mathematical formulations designed to optimize food prepositioning and subsequent distribution while minimizing unmet demand under supply uncertainty. Two formulations adopt the cardinality-constrained approach: one focuses on minimizing unmet demand, and the other incorporates equity in meeting demand. The remaining two formulations are scenario-based, addressing the same objectives with and without equity considerations. To compare the variations in the solutions generated by the proposed formulations and gain a deeper understanding of their behavior and performance, the formulations are applied to synthetic instances. To assist managers in selecting the model that best aligns with their objectives, we provide a summary of the advantages and disadvantages of each formulation. Our results show that considering supply uncertainty has important implications for the total costs, and that having adequate storage capacity may help mitigate the problems caused by this uncertainty. Full article
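The scenario-based evaluation of a prepositioning decision can be made concrete with a toy expected-unmet-demand calculator: each scenario carries a probability, a demand, and a fraction of the prepositioned supply that survives the disaster. This is only an evaluator in the spirit of the scenario-based formulations; the paper's models jointly optimise prepositioning and distribution, with and without equity terms.

```python
def expected_unmet(preposition, scenarios):
    """Expected unmet demand of a prepositioning level under supply
    uncertainty.  Each scenario is a tuple
    (probability, demand, fraction of prepositioned supply surviving).
    A toy sketch, not the paper's optimisation models.
    """
    return sum(p * max(0.0, demand - preposition * survive)
               for p, demand, survive in scenarios)
```

For example, prepositioning 100 units against two equally likely scenarios, one with full supply survival and one where half the stock is lost, leaves an expected shortfall of 25 units, which is the kind of quantity the formulations minimise.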
29 pages, 401 KB  
Article
Dependency-Constrained Cascading Rescheduling: Network Evolution and Long-Term Adaptation
by TzeHoung Lee and Xue-Ming Yuan
Mathematics 2026, 14(3), 577; https://doi.org/10.3390/math14030577 - 5 Feb 2026
Abstract
Traditional scheduling theory optimizes initial task assignments under static assumptions, yet operational systems face repeated disruptions requiring both immediate rescheduling and long-term structural adaptation. Existing approaches treat each disruption independently, failing to capture how organizations learn and evolve through repeated challenges. This paper presents a unified framework bridging cascading rescheduling with network evolution, formally modeling how dependency structures adapt over time to improve resilience. The framework consists of three integrated components: (1) immediate rescheduling algorithms with provable complexity bounds—O(n) for tree-structured dependencies, fixed-parameter tractable for bounded treewidth—enabling real-time response; (2) five adaptation strategies (redundancy, buffering, decoupling, reshuffling, and control) with convergence guarantees showing exponential improvement rate O(e^{(σλ)t}); and (3) computable resilience metrics quantifying organizational capacity to absorb disruptions. Comprehensive validation through 5200 simulated weeks (52 weeks × 100 replications) demonstrates substantial performance improvements. Redundancy-based adaptation achieves 109% resilience improvement and 66% disruption reduction compared to non-adaptive baselines (p<0.001, Cohen’s d>1.8). The framework is implemented as Orange3 visual programming widgets, achieving 92% user acceptance among non-technical practitioners with 7-month payback periods. While the framework is domain-agnostic and applicable to any operational network with dependency constraints, validation focuses on healthcare scheduling contexts where disruption patterns are well documented. The approach demonstrates that organizations can systematically build resilience through principled adaptation rather than reactive responses, with quantifiable performance improvements and accessible implementation tools. Full article
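Why is rescheduling O(n) on tree-structured dependencies? Because each task's earliest start depends only on its parent, one pass in topological (here, BFS) order suffices. The sketch below illustrates that idea on a toy model where each task starts when its single predecessor finishes; the paper's actual algorithms and data model are not reproduced.

```python
from collections import deque

def reschedule_tree(duration, parent, delayed, extra):
    """Recompute earliest start times on a tree-structured dependency
    graph in O(n): task i starts when parent[i] finishes, and task
    `delayed` gains `extra` time units.  Illustrative toy only.
    """
    n = len(duration)
    dur = list(duration)
    dur[delayed] += extra
    children = [[] for _ in range(n)]
    roots = []
    for i, p in enumerate(parent):
        (roots if p is None else children[p]).append(i)
    start = [0] * n
    q = deque(roots)
    while q:                       # BFS = a single pass over the n tasks
        u = q.popleft()
        for v in children[u]:
            start[v] = start[u] + dur[u]
            q.append(v)
    return start
```

A disruption to one task thus cascades to its subtree in a single linear sweep, which is what makes real-time response feasible.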
18 pages, 389 KB  
Article
Asymptotic Stability of Time-Varying Nonlinear Cascade Systems with Delay via Lyapunov–Razumikhin Approach
by Natalia Sedova and Olga Druzhinina
Mathematics 2026, 14(3), 576; https://doi.org/10.3390/math14030576 - 5 Feb 2026
Abstract
This paper addresses nonlinear time-varying cascade systems governed by differential equations with finite delay. Several sufficient conditions for asymptotic stability are derived, based on differing assumptions regarding the isolated subsystems and their interconnection. The cascade structure enables the treatment of a broad class of systems while simplifying stability analysis compared to conventional approaches. Moreover, it allows the stabilization problem to be decoupled: under suitable conditions, the asymptotic stability of the overall cascade system follows from the stability properties of its individual subsystems. These properties are typically verified using the direct Lyapunov method. In contrast to existing results, the theorems presented herein apply to an extended class of systems and impose relaxed conditions on the Lyapunov functions employed to establish uniform asymptotic stability. Additionally, new results are provided on semiglobal exponential stability and (non-uniform) asymptotic stability for time-varying cascade systems with delay. Collectively, these contributions broaden the applicability of the direct Lyapunov method to delayed cascade systems. Full article
(This article belongs to the Special Issue Research on Delay Differential Equations and Their Applications)
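For orientation, the classical Razumikhin-type condition that this line of work refines can be stated in its generic textbook form (the paper's actual hypotheses are relaxed variants of it, tailored to cascade structure):

```latex
% Generic Razumikhin-type sufficient condition for uniform asymptotic
% stability of \dot{x}(t) = f(t, x_t) with delay h: for some q > 1 and
% some positive definite function w,
\dot{V}\bigl(t, x(t)\bigr) \le -\,w\bigl(\lvert x(t)\rvert\bigr)
\quad \text{whenever} \quad
V\bigl(t+s,\, x(t+s)\bigr) \le q\, V\bigl(t, x(t)\bigr),
\qquad s \in [-h, 0].
```

That is, the Lyapunov function need only decrease when its current value dominates its recent history, which is what allows delayed interconnection terms to be handled subsystem by subsystem.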
20 pages, 3823 KB  
Article
DA-TransResUNet: Residual U-Net Liver Segmentation Model Integrating Dual Attention of Spatial and Channel with Transformer
by Kunzhan Wang, Xinyue Lu, Jing Li and Yang Lu
Mathematics 2026, 14(3), 575; https://doi.org/10.3390/math14030575 - 5 Feb 2026
Abstract
Precise medical image segmentation plays a vital role in disease diagnosis and clinical treatment. Although U-Net-based architectures and their Transformer-enhanced variants have achieved remarkable progress in automatic segmentation tasks, they still face challenges in complex medical imaging scenarios, particularly around simultaneously modeling fine-grained local details and capturing long-range global contextual information, which limits segmentation accuracy and structural consistency. To address these challenges, this paper proposes a novel medical image segmentation framework termed DA-TransResUNet. Built upon a ResUNet backbone, the proposed network integrates residual learning, Transformer-based encoding, and a dual-attention (DA) mechanism in a unified manner. Residual blocks facilitate stable optimization and progressive feature refinement in deep networks, while the Transformer module effectively models long-range dependencies to enhance global context representation. Meanwhile, the proposed DA-Block jointly exploits local and global features as well as spatial and channel-wise dependencies, leading to more discriminative feature representations. Furthermore, embedding DA-Blocks into both the feature embedding stage and skip connections strengthens information interaction between the encoder and decoder, thereby improving overall segmentation performance. Experimental results on the LiTS2017 dataset and Sliver07 dataset demonstrate that the proposed method achieves incremental improvement in liver segmentation. In particular, on the LiTS2017 dataset, DA-TransResUNet achieves a Dice score of 97.39%, a VOE of 5.08%, and an RVD of −0.74%, validating its effectiveness for liver segmentation. Full article
32 pages, 5567 KB  
Article
Optimized Image Segmentation Model for Pellet Microstructure Incorporating KL Divergence Constraints
by Yuwen Ai, Xia Li, Aimin Yang, Yunjie Bai and Xuezhi Wu
Mathematics 2026, 14(3), 574; https://doi.org/10.3390/math14030574 - 5 Feb 2026
Abstract
Accurate segmentation of pellet microstructure images is crucial for evaluating their metallurgical performance and optimizing production processes. To address the challenges posed by complex structures, blurred boundaries, and fine-grained textures of hematite and magnetite in pellet micrographs, this study proposes a hybrid intelligently optimized VGG16-U-Net semantic segmentation model. The model incorporates an improved SPC-SA channel self-attention mechanism in the encoder to enhance deep feature representation, while a simplified SAN and SAW module is integrated into the decoder to strengthen its response to key mineral regions. Additionally, a hybrid loss strategy is employed with KL regularization for training optimization. Experimental results show that the model achieves an mIoU of 85.58%, an mPA of 91.54%, and an overall accuracy of 93.58%. Compared with the baseline models, the proposed method achieves improved performance to some extent. Full article
(This article belongs to the Special Issue Mathematical Methods for Image Processing and Computer Vision)
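The "hybrid loss with KL regularization" pattern can be sketched per pixel as cross-entropy plus a weighted KL-divergence term pulling the predicted class distribution toward a reference distribution. The weighting `lam` and the choice of reference below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def hybrid_loss(pred, target, ref, lam=0.1, eps=1e-12):
    """Generic hybrid segmentation loss for one pixel: cross-entropy on
    the true class plus lam * KL(pred || ref), where `ref` could be,
    e.g., class priors.  An assumed generic form, for illustration.

    pred   -- list of predicted per-class probabilities
    target -- index of the true class
    ref    -- reference distribution of the same length as pred
    """
    ce = -math.log(pred[target] + eps)
    kl = sum(p * math.log((p + eps) / (r + eps))
             for p, r in zip(pred, ref))
    return ce + lam * kl
```

The KL term vanishes when the prediction matches the reference and grows as the two distributions diverge, so it acts as a soft constraint on top of the pixel-wise classification objective.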
22 pages, 1680 KB  
Article
Application of Machine Learning to Cluster Analysis of Diabetes Mortality at the Municipality Level in Mexico According to Sociodemographic Factors
by Nelva N. Almanza-Ortega, Carlos Fernando Moreno-Calderon, Sandra Silvia Roblero-Aguilar, Rodolfo Pazos-Rangel, Joaquín Pérez-Ortega, Vanesa Landero-Nájera and Víctor Augusto Castellanos-Escamilla
Mathematics 2026, 14(3), 573; https://doi.org/10.3390/math14030573 - 5 Feb 2026
Abstract
In recent years, mortality due to diabetes has increased worldwide. In particular, diabetes is the second leading cause of mortality in Mexico, with a heterogeneous distribution of mortality rates at the municipality level. The objective of this study is the analysis of clusters of municipalities with similar values for sociodemographic indices and diabetes mortality. To this end, we present an application developed using a data science methodology and the fuzzy c-means machine learning algorithm. For this research, 4,604,360 death certificates from 2019 to 2023 were assessed, among other official data. The analysis identified two key indicators related to diabetes mortality: the percentage of the population in poverty and the population density. The main results of this research are as follows: a direct correlation was found between population density and mortality, and an inverse correlation was found between population in poverty and mortality. Over the study interval, the cluster with the lowest mortality showed a year-on-year increase in its mortality rate. Finally, we consider that the tendencies found can be useful to public health authorities for optimizing the distribution of resources for treating diabetes and reducing diabetes-related mortality. Full article
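The membership update at the heart of fuzzy c-means assigns each point a degree of belonging to every cluster, inversely related to its distance from each center. A 1-D toy version is sketched below for illustration; the study clusters municipalities on multiple sociodemographic indicators.

```python
def fcm_memberships(points, centers, m=2.0):
    """One membership-update step of fuzzy c-means: u[i][k] is inversely
    related to the distance from point i to center k, with fuzzifier m.
    1-D data for brevity; real use is multi-dimensional.
    """
    u = []
    for x in points:
        d = [abs(x - c) or 1e-12 for c in centers]   # guard zero distance
        row = []
        for k in range(len(centers)):
            s = sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                    for j in range(len(centers)))
            row.append(1.0 / s)
        u.append(row)
    return u
```

Unlike hard k-means, each row of memberships sums to one, so a municipality near a cluster boundary contributes partially to both clusters rather than being forced into one.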
17 pages, 2767 KB  
Article
Implicit Neural Representation for Dense Event-Based Imaging Velocimetry
by Jia Ai, Junjie Li, Zuobing Chen and Yong Lee
Mathematics 2026, 14(3), 572; https://doi.org/10.3390/math14030572 - 5 Feb 2026
Abstract
This paper presents an Implicit Neural Representation method for Event-Based Imaging Velocimetry (INR-VG) to reconstruct dense velocity fields from sparse event streams. The core idea is to learn a mapping (multilayer perceptron) from spatial coordinates to flow velocities, v(x)=f(x;θ), which thereby enables dense velocity measurements at any desired spatial resolution. The neural network is optimized through test-time optimization by minimizing the alignment error between warped voxel grids of events. Extensive evaluations on synthetic datasets and real-world flows demonstrate that INR-VG achieves high accuracy (errors as low as 0.05 px/ms) and maintains robustness in challenging conditions where existing methods typically fail, including low event rates and large displacements, significantly outperforming optical-flow-based baselines. To the best of our knowledge, this work represents a successful application of implicit neural representations to event-based imaging velocimetry (EBIV), establishing a new paradigm for dense and robust event-based flow measurement. The implementation and experimental details are publicly available to support reproducibility and future research. Full article
(This article belongs to the Special Issue Applied Mathematics in Fluid Mechanics and Flows)
25 pages, 51444 KB  
Article
Local Contrast Enhancement in Digital Images Using a Tunable Modified Hyperbolic Tangent Transformation
by Camilo E. Echeverry and Manuel G. Forero
Mathematics 2026, 14(3), 571; https://doi.org/10.3390/math14030571 - 5 Feb 2026
Abstract
Low contrast is a frequent challenge in image analysis, especially within medical imaging and highly saturated scenes. To address this issue, we present a nonlinear transformation for local contrast enhancement in digital images. Our method adapts the hyperbolic tangent function using two parameters: one to select the intensity range for modification and another to control the degree of enhancement. This approach outperforms conventional histogram-based techniques such as histogram equalization and specification in local contrast enhancement, without increasing computational cost, and produces smooth, artifact-free results in user-defined regions of interest. In addition, the proposed method was compared with CLAHE in MRIs, showing that, unlike CLAHE, the proposed method does not enhance the noise present in the background of the image. Furthermore, in deep learning contexts where dataset size is often limited, our method could serve as an effective data augmentation tool—generating varied contrast images while preserving anatomical structures, which improves neural network training for brain tumor detection in magnetic resonance imaging. The ability to manipulate local contrast may offer a pathway toward better interpretability of convolutional neural networks, as targeted contrast adjustments allow researchers to probe model sensitivity and enhance the explainability of classification and detection mechanisms. Full article
(This article belongs to the Special Issue Data Mining and Algorithms Applied in Image Processing)
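A tunable hyperbolic-tangent intensity mapping of the kind described can be sketched with two parameters, one selecting the intensity range to modify and one controlling the enhancement strength. The specific form below is a generic sigmoid remapping, not the authors' exact parameterisation.

```python
import math

def tanh_contrast(r, center=128.0, gain=0.03):
    """Map an 8-bit intensity r through a tunable tanh curve:
    `center` selects the intensity range to enhance and `gain`
    controls the degree of enhancement.  Assumed generic form.
    """
    s = 0.5 * (1.0 + math.tanh(gain * (r - center)))
    return 255.0 * s
```

Intensities near `center` are spread apart while the extremes compress smoothly toward 0 and 255, which raises local contrast without the hard clipping or noise amplification the abstract attributes to histogram-based methods.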
17 pages, 281 KB  
Article
On Aggregations of Algebraic Objects in Data Modeling
by Ivana Štajner-Papuga and Andreja Tepavčević
Mathematics 2026, 14(3), 570; https://doi.org/10.3390/math14030570 - 5 Feb 2026
Abstract
Aggregation operators are mathematical functions that, subject to certain constraints in the form of imposed properties, provide a representative value for a whole set of input values or structures. The main focus of this paper is the question of when, and which, properties of the input structures are preserved during the aggregation process, with particular emphasis on fuzzy groups and especially Ω-groups. This paper proposes a new technique for the aggregation of special fuzzy groups in a lattice-valued framework based on the aggregation of lattice-valued weak equivalence relations. The main result gives necessary and sufficient conditions for aggregation operators on complete lattices to aggregate Ω-groups. Full article
(This article belongs to the Section A: Algebra and Logic)
24 pages, 1924 KB  
Article
Simultaneous Confidence Intervals for Pairwise Differences of Means in Zero-Inflated Rayleigh Distributions with an Application to Road Accident Fatalities Data
by Warisa Thangjai, Sa-Aat Niwitpong, Narudee Smithpreecha and Arunee Wongkhao
Mathematics 2026, 14(3), 569; https://doi.org/10.3390/math14030569 - 5 Feb 2026
Abstract
This paper develops simultaneous confidence intervals (SCIs) for pairwise differences of means with zero-inflated Rayleigh (ZIR) distributions, a flexible framework for modeling positively skewed data with excess zeros. Closed-form expressions for the ZIR mean are derived, and several competing interval estimation procedures are investigated, including generalized confidence interval (GCI), parametric bootstrap (PB), method of variance estimates recovery (MOVER), delta-method normal approximation, and highest posterior density (HPD) intervals. The finite-sample performance of the proposed SCIs is examined via extensive Monte Carlo simulations, focusing on empirical coverage probabilities (CPs) and average interval lengths (ALs) over a broad range of parameter configurations and zero-inflation levels. A real data application to road accident fatality counts demonstrates the practical utility of the proposed methodology. The results show that the HPD method consistently achieves the most favorable balance between coverage accuracy and interval efficiency. Overall, this study advances reliable simultaneous inference for zero-inflated models commonly encountered in environmental, biomedical, and reliability studies. Full article
(This article belongs to the Special Issue Statistical Inference: Methods and Applications)
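The closed-form ZIR mean the intervals are built around follows directly from the mixture structure: with probability δ the observation is an exact zero, otherwise it is Rayleigh(σ), whose mean is σ√(π/2). A one-line sketch:

```python
import math

def zir_mean(delta, sigma):
    """Mean of a zero-inflated Rayleigh variable: zero with probability
    delta, otherwise Rayleigh(sigma) with mean sigma * sqrt(pi / 2)."""
    return (1.0 - delta) * sigma * math.sqrt(math.pi / 2.0)
```

Pairwise differences of such means across groups are the targets of the simultaneous intervals compared in the paper.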
17 pages, 2265 KB  
Article
Current Transformer Error Compensation Under Core Saturation Conditions Based on Machine Learning Algorithms
by Ismoil Odinaev, Svetlana Beryozkina, Andrey Pazderin, Mihail Senyuk, Murodbek Safaraliev and Pavel Dubrovin
Mathematics 2026, 14(3), 568; https://doi.org/10.3390/math14030568 - 5 Feb 2026
Abstract
To provide information support for relay protection and emergency automation algorithms, electromagnetic measuring voltage and current transformers are most often used. As practice shows, the magnetic core of a current transformer can saturate during transient processes, which negatively impacts the proper functioning of protection systems. This paper proposes a machine-learning-based methodology for restoring the current transformers’ secondary current. The restoration task is reduced to clustering and regression problems: the current data are first clustered into groups according to the depth of core saturation and the shape of the current distortion, and the current is then restored by solving a regression problem. Considering the performance requirements of the protection system, the following machine learning algorithms were selected for current recovery: Decision Tree, Random Forest, XGBoost, and Support Vector Machine for regression. The results of computational experiments show that the optimal number of clusters is four. Among the current restoration algorithms, XGBoost proved to be the most suitable: averaged over 17,240 test saturation modes, its error was 4%, and the time delay for restoring one saturation mode was 0.0067 ms. Full article
(This article belongs to the Special Issue Mathematical Applications in Electrical Engineering, 2nd Edition)
14 pages, 8558 KB  
Article
FDEA-Net: Enhancing X-Ray Fracture Detection via Detail-Boosted and Rotation-Aware Feature Encoding
by Xiaohan Yu, Meng Wang and Chao He
Mathematics 2026, 14(3), 567; https://doi.org/10.3390/math14030567 - 5 Feb 2026
Abstract
X-ray imaging is the most widely used modality for fracture diagnosis in clinical practice due to its efficiency and accessibility. However, automated X-ray fracture detection faces two major challenges. First, fracture regions often contain subtle and low-contrast crack patterns, making it difficult for models to capture essential fine details. Second, fractures exhibit strong directional variability, while conventional detection frameworks have limited capacity to model rotation changes. To address these issues, we propose FDEA-Net, an enhanced detection framework tailored for fracture analysis. It integrates two lightweight improvement modules. The Fracture Detail Enhancer (FDE) strengthens high-frequency textures and fine-grained structural cues that are closely associated with fracture lines. The Rotation Aware Encoder (RAE) encodes rotation-sensitive representations, improving recognition under diverse fracture orientations. Experiments on a large-scale X-ray fracture dataset show clear performance gains, achieving an mAP50 of 0.742 and an F1-score of 0.738. These findings verify the effectiveness of combining detail enhancement with rotation-aware feature modeling. FDEA-Net provides an efficient and generalizable solution for reliable detection of subtle fractures in medical imaging. Full article
31 pages, 2074 KB  
Article
A Multi-Model Dynamic Selection Framework Using Deep Contextual Bandits for Urban Traffic Flow Prediction in Large-Scale Road Networks
by Silai Chen, Shengfeng Mao, Zongcheng Zhang, Xiaoyuan Zhang, Yunxia Wu, Yangsheng Jiang and Zhihong Yao
Mathematics 2026, 14(3), 566; https://doi.org/10.3390/math14030566 - 4 Feb 2026
Abstract
To address the challenge of model selection in large-scale traffic flow prediction tasks, this paper proposes a dynamic multi-model selection framework based on Deep Contextual Bandits (DCB). Centered on the optimal combination of sub-models, the framework leverages contextual information of road segments to select dynamically among candidate predictors, achieving more efficient and accurate traffic flow prediction. Several mechanisms are introduced to improve strategy learning and convergence, including a baseline network, experience replay, double-model estimation, and prioritized experience sampling. A clustering-based strategy is further designed to reduce the search space and enhance generalization and transferability. Experiments on real-world traffic datasets demonstrate that the proposed framework significantly outperforms traditional static fusion methods, reinforcement learning (RL) baselines, and mainstream spatiotemporal prediction models. In particular, the framework yields a 1.0% improvement in R² and a 3.2% reduction in MAE compared to state-of-the-art baselines, while reducing inference time by 43.1%. Moreover, the proposed framework shows strong capability in adaptive model selection under varying contexts, with ablation studies confirming the effectiveness of its key components. Full article
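The selection loop this abstract describes — observe a road segment's context, pick one of several candidate predictors, and update the policy from the realized accuracy — can be sketched with a minimal tabular contextual bandit. This is an illustrative epsilon-greedy simplification, not the paper's DCB architecture (which uses deep networks, a baseline network, experience replay, and double estimation); the two-context, two-arm setup and the reward function below are hypothetical.

```python
import random

def run_bandit(rounds=2000, epsilon=0.1, seed=1):
    """Tabular contextual bandit: for each observed context, pick the
    predictor (arm) with the highest estimated reward, exploring a random
    arm with probability epsilon, and update the estimate incrementally."""
    rng = random.Random(seed)
    contexts, arms = (0, 1), (0, 1)
    q = {(c, a): 0.0 for c in contexts for a in arms}  # reward estimates
    n = {(c, a): 0 for c in contexts for a in arms}    # pull counts

    def reward(c, a):
        # Hypothetical environment: the arm matching the context predicts best.
        return 1.0 if a == c else 0.2

    for _ in range(rounds):
        c = rng.choice(contexts)
        if rng.random() < epsilon:
            a = rng.choice(arms)                        # explore
        else:
            a = max(arms, key=lambda arm: q[(c, arm)])  # exploit
        r = reward(c, a)
        n[(c, a)] += 1
        q[(c, a)] += (r - q[(c, a)]) / n[(c, a)]        # incremental mean
    return q
```

After enough rounds, the greedy choice for each context converges to the arm that actually predicts best in that context.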
14 pages, 1049 KB  
Article
Fractional Fuzzy Force-Position Control of Constrained Robots
by Aldo Jonathan Muñoz-Vázquez, Mohamed Gharib, Juan Diego Sánchez-Torres and Anh-Tu Nguyen
Mathematics 2026, 14(3), 565; https://doi.org/10.3390/math14030565 - 4 Feb 2026
Abstract
Modern robotic tasks often require interaction with the surrounding elements in the workspace. In some high-precision tasks, it is essential to stabilize the contact force on a smooth yet rigid surface, which can be modeled as a unilateral constraint. This challenge becomes increasingly complex in the presence of disturbances. This study addresses these issues using a robust fuzzy force-position controller that combines the approximation capabilities of fuzzy inference systems with the nonlocal properties of fractional operators. The proposed approach extends the error integration to include proportional-integral-derivative (PID) components of the position error, along with the integral of the contact force error. This formulation leverages the orthogonality between force and velocity subspaces to achieve accurate force-position stabilization. Additionally, an adaptive mechanism enhances closed-loop performance and robustness. The effectiveness of the proposed controller is validated through analytical derivations and simulations, thereby demonstrating its reliability in constrained environments. Full article
19 pages, 609 KB  
Article
Regime-Switching Fischer–Margrabe Options Pricing with Liquidity Risk and Stochastic Volatility
by Priya Mittal, Dharmaraja Selvamuthu and Guglielmo D’Amico
Mathematics 2026, 14(3), 564; https://doi.org/10.3390/math14030564 - 4 Feb 2026
Abstract
This article presents a model for pricing an exchange option considering stochastic volatility and liquidity risk. The impact of liquidity risk on an asset price is considered by utilizing a liquidity discount process that is influenced by both market and asset-specific liquidity. Girsanov’s theorem is applied to transform from the real-world probability measure to equivalent probability measures, such as the risk-neutral probability measure. The Feynman–Kac theorem is applied to transform the exchange option pricing formula into the vanilla option pricing formula. The analytical expression is derived through the characteristic function approach. The accuracy of the proposed formula is validated through comparisons with Monte Carlo simulation, where the relative error remains below 0.93% across different values of S(0) and τ. Furthermore, numerical experiments highlight that incorporating liquidity risk leads to higher option prices. As the maturity increases from 0.1 to 2.0, the percentage gap between the option prices increases from 1.65% to 20.2%. Finally, sensitivity analysis is conducted to examine the influence of various parameters and to demonstrate the impact of stochastic volatility and liquidity in exchange option valuation. Full article
(This article belongs to the Section E5: Financial Mathematics)
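For reference, the constant-volatility exchange-option benchmark that models like this one extend admits a closed form (the classical Margrabe formula). The sketch below implements that benchmark only — not the paper's stochastic-volatility, liquidity-adjusted pricing formula — and the parameter values in the usage note are made up.

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def margrabe(s1, s2, sigma1, sigma2, rho, tau):
    """Classical Margrabe price of the option to exchange asset 2 for asset 1.

    s1, s2       : current prices of the two assets
    sigma1, 2    : their (constant) volatilities
    rho          : correlation between the two driving Brownian motions
    tau          : time to maturity in years
    """
    # Effective volatility of the ratio S1/S2.
    sigma = sqrt(sigma1**2 + sigma2**2 - 2.0 * rho * sigma1 * sigma2)
    d1 = (log(s1 / s2) + 0.5 * sigma**2 * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return s1 * norm_cdf(d1) - s2 * norm_cdf(d2)
```

For example, `margrabe(110, 100, 0.2, 0.3, 0.5, 1.0)` exceeds the immediate-exercise value of 10, reflecting the time value of the exchange right.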
19 pages, 432 KB  
Article
A Reaction–Diffusion System of General Gene Expression with Delays
by Xiaoqin P. Wu and Liancheng Wang
Mathematics 2026, 14(3), 563; https://doi.org/10.3390/math14030563 - 4 Feb 2026
Abstract
In this paper, a complete analysis is presented to study a reaction–diffusion system of general gene expression with two time delays and with Neumann boundary conditions. The global existence of a unique strong solution and the existence of an attractor are established. Using delays as bifurcation parameters, we obtain critical values so that the Hopf bifurcation occurs at a unique equilibrium point. Numerical simulations are provided to illustrate both the stability of the equilibrium point and the emergence of bifurcations. For steady-state solutions, the Maximum Principle is used to obtain the bounds of positive solutions. The conditions for the system to have constant solutions are also investigated. Full article
15 pages, 282 KB  
Article
Rigidity and Conformal Characterizations of Noncompact Gradient Schouten Solitons
by Ali H. Alkhaldi, Fatemah Mofarreh, Huda M. Alshanbari and Akram Ali
Mathematics 2026, 14(3), 562; https://doi.org/10.3390/math14030562 - 4 Feb 2026
Abstract
This paper studies the conformal geometry of complete gradient Schouten solitons (GSSs) admitting closed conformal vector fields (CVFs). We establish rigidity and characterization results for nonparallel, homothetic closed CVFs under the assumption that the gradient of the scalar curvature is parallel to the CVF. It is shown that such manifolds are isometric to Euclidean space. Moreover, complete noncompact GSSs with constant scalar curvature are locally conformally flat in dimension four and have harmonic Weyl curvature in higher dimensions. Finally, we prove that these manifolds are totally umbilical if and only if their scalar curvature is constant, and they form warped products with space forms. Full article
31 pages, 648 KB  
Article
On Model Improvement Algorithms—Generalised Linear Models and Neural Networks
by Manuel L. Esquível, Nadezhda P. Krasii and Raquel Medeiros Gaspar
Mathematics 2026, 14(3), 561; https://doi.org/10.3390/math14030561 - 4 Feb 2026
Abstract
We propose a generic approach to stochastic model improvement by first introducing an archetypal algorithm based on error minimisation and establishing two results on the weak convergence of the probability laws associated with the models under improvement. We then present two concrete instances of this approach: Generalised Linear Models and classical multivariate models assessed using a neural network. In both cases, we illustrate the methodology using economic, financial, and social data related to the determination of government bond coupon rates prior to primary market auctions. For each application, we derive weak convergence results that specify conditions under which model improvement occurs, in the sense of convergence in law of the probability distributions associated with successive models. These results ensure the convergence of the proposed archetypal algorithm and provide a probabilistic foundation for systematic model improvement. Full article
19 pages, 281 KB  
Article
Nonlinear ξ-Bi-Skew Lie Triple Derivations on *-Algebras
by Yimeng Yue, Fenhong Li, Liang Kong and Fugang Chao
Mathematics 2026, 14(3), 560; https://doi.org/10.3390/math14030560 - 4 Feb 2026
Abstract
Let A be a unital associative *-algebra over the complex field C containing a nontrivial projection, and let ξ be a nonzero scalar. In this paper, we give a characterization of ξ-bi-skew Lie triple derivations on A. As applications, we apply this result to prime *-algebras, factor von Neumann algebras, and standard operator algebras. Full article
(This article belongs to the Section A: Algebra and Logic)
21 pages, 1036 KB  
Article
An Attention-Based Learning Approach for Joint Optimization of Storage Selection and Order Picking Paths in Mobile Shelving Systems
by Jiawei Zhang, Li Wang, Pinyan Lai, Ye Shao and Sixiang Zhao
Mathematics 2026, 14(3), 559; https://doi.org/10.3390/math14030559 - 4 Feb 2026
Abstract
This research introduces an advanced attention-driven model designed to optimize mobile shelf warehouse order-picking. Our model incorporates an enhanced masking mechanism and a context-aware decoder, streamlining the order-picking process. In essence, our model presents an attention-model-based heuristic solution to the long-standing problem of order-picking optimization, leveraging the latest attention-based deep learning techniques. The attention model is combined with Apriori and the Adaptive Large Neighborhood Search (ALNS) algorithm to solve the bilevel combinatorial optimization model for mobile shelves. Compared to existing methods, our innovative model shows superior performance, offering significant potential in warehousing solutions. Full article
54 pages, 5162 KB  
Article
Mathematical Framework for Airport as Cognitive Digital Twin of Aviation Ecosystem
by Igor Kabashkin and Arturs Saveljevs
Mathematics 2026, 14(3), 558; https://doi.org/10.3390/math14030558 - 4 Feb 2026
Abstract
Airport digital transformation is commonly approached through technological integration and data-driven optimization, yet such perspectives provide limited insight into system-level reasoning and governance. This paper introduces the cognitive airport paradigm (CAP) as a mathematically grounded framework that models the airport as a domain-specific cognitive digital twin within a complex aviation ecosystem. Methodologically, the study follows a conceptual–analytical and design-science research approach, combining system analysis, conceptual modeling, ontology engineering, and formal mathematical representation of cognitive transitions and governance constraints. CAP represents airport cognition as an explicit state space characterized by cognitive maturity, governance integrity, and semantic stability. Analytical reasoning, adaptive learning, and orchestration mechanisms are formalized through instrument dominance profiles and cognitive performance functionals, enabling analytical comparison of airport configurations and identification of cognitive regimes. The results include (i) a formalization of airports as cognitive digital twins with measurable cognitive and governance properties; (ii) quantitative indices such as the cognitive readiness index, governance integrity index, and ethical alignment coefficient supporting structured evaluation of airport cognitive maturity; and (iii) illustrative expert-based parameterizations and a geometric interpretation in a cognitive simplex demonstrating that governance-oriented orchestration stabilizes airport cognition under increasing system complexity. Airport development is interpreted as continuous cognitive evolution rather than discrete stages of digitalization. The paper further proposes a cognitive roadmap for guiding airport evolution through structured cognitive rebalancing. The framework contributes to the theoretical foundations of cognitive digital twins and is transferable to other safety-critical and institutionally governed socio-technical systems. Full article
28 pages, 4176 KB  
Article
Evaluating the Financial Performance of CSR Strategies and Sustainable Operations in Mexican Companies: An Explainable Machine Learning Approach
by Laura Elena Jiménez-Casillas, Román Rodríguez-Aguilar, Marisol Velázquez-Salazar and Santiago García-Álvarez
Mathematics 2026, 14(3), 557; https://doi.org/10.3390/math14030557 - 4 Feb 2026
Abstract
Research on how corporate social responsibility (CSR) practices linked to sustainable operations (SO) affect corporate financial performance (FP) is still limited. This study presents a novel methodological proposal to measure the individual impact of such practices on the profitability of companies listed on the Mexican Stock Exchange. The method consists of a Random Forest (RF) model complemented by Explainable Machine Learning (XML) techniques, namely Individual Conditional Expectation (ICE), Partial Dependence Plots (PDPs), and SHapley Additive exPlanations (SHAP), to calculate the individualized marginal effect on the return on assets (RoA), return on equity (RoE), and return on invested capital (ROIC) for each company, explained by the environmental, social, and governance scores provided by Bloomberg (Bloomberg Finance, L.P., New York, NY, USA) together with firm-level variables such as market capitalization, debt-to-equity ratio, sales growth, and years since listing. The novelty of this model lies in the application of RF and XML, which offers a comprehensive and interpretable perspective on the CSR–FP relationship, and in the use of lagged explanatory variables to avoid endogeneity problems, overcoming the limitations of traditional analyses. The results indicate that environmental scores exhibit the most consistent contribution to FP, whereas social and governance effects are highly metric-dependent. The SHAP analysis reveals substantial heterogeneity in the drivers of firm FP, highlighting the relevance of XML methods. Full article
27 pages, 4702 KB  
Article
Comparative Mathematical Evaluation of Models in the Meta-Analysis of Proportions: Evidence from Neck, Shoulder, and Back Pain in the Population of Computer Vision Syndrome
by Vanja Dimitrijević, Bojan Rašković, Miroslav Popović, Patrik Drid and Borislav Obradović
Mathematics 2026, 14(3), 556; https://doi.org/10.3390/math14030556 - 3 Feb 2026
Abstract
Meta-analysis of proportions requires a rigorous transformation model due to the inherent mathematical constraints of proportional data (boundedness and non-constant variance). This study compared four transformation models for proportions (Untransformed, Freeman–Tukey, Logit, and Arcsine) to determine the most reliable and numerically stable estimator for pooled prevalence. A rigorous comparative evaluation was performed using 35 empirical studies on Computer Vision Syndrome (CVS)-related musculoskeletal pain prevalence. The analysis employed frequentist methods, Monte Carlo simulations (10,000 iterations) to test CI coverage, and Bayesian sensitivity analysis. Key findings were validated using the Generalized Linear Mixed Model (GLMM), representing the one-step methodological standard. Pooled prevalence estimates were highly consistent (0.467 to 0.483). Extreme heterogeneity (I² ≈ 98–99%) persisted across all models, with τ² values exceeding 1.0 specifically in the Logit and GLMM frameworks. Mixed-effects meta-regression confirmed that this heterogeneity was independent of study size (p = 0.692 to 0.755), with the moderator explaining virtually none of the variance (R² of 0% to 0.2%). This confirms that the high variance is an inherent feature of the dataset rather than a statistical artifact. Simulations revealed a critical trade-off: while the Untransformed model provided minimal bias, its CI coverage failed significantly in small-sample boundary scenarios (N = 50, p = 0.01, coverage: 39.36%). Under these conditions, the PFT (Freeman–Tukey) transformation was most robust (98.51% coverage), while the Logit model also maintained high coverage accuracy (91.07%) despite its variance inflation. We conclude that model selection should be context-dependent: the Untransformed model is recommended for well-powered datasets, whereas the PFT transformation is essential for small samples to ensure valid inferential precision. Full article
(This article belongs to the Special Issue Dynamic Model and Analysis of Biology and Epidemiology)
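The logit pooling evaluated in this study can be illustrated with a minimal fixed-effect inverse-variance sketch: each study's proportion is mapped to the logit scale, weighted by the inverse of its delta-method variance, and the pooled estimate is back-transformed. The study itself uses random-effects models, GLMMs, and simulation, none of which are reproduced here, and the event counts below are made up.

```python
import math

def logit_pool(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.

    events : per-study event counts (must be strictly between 0 and n)
    totals : per-study sample sizes
    Returns the pooled proportion after inverse-logit back-transformation.
    """
    weights, estimates = [], []
    for e, n in zip(events, totals):
        p = e / n
        y = math.log(p / (1.0 - p))        # logit transform
        var = 1.0 / e + 1.0 / (n - e)      # delta-method variance of the logit
        weights.append(1.0 / var)
        estimates.append(y)
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))  # inverse logit
```

The pooled value always lies strictly inside (0, 1), which is the point of transforming: the boundedness constraint is respected by construction.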
26 pages, 416 KB  
Article
Computational Linearization of Nonlinear Kolmogorov Equations via the Generalized Newton Method in LSV Models
by Mehran Paziresh, Karim Ivaz and Mariyan Milev
Mathematics 2026, 14(3), 555; https://doi.org/10.3390/math14030555 - 3 Feb 2026
Abstract
This study develops a generalized Newton method to address the nonlinear Kolmogorov forward equation (KFE) under the local stochastic volatility (LSV) framework. Analytical convergence conditions are derived via the geometric series theorem, and empirical validation is conducted using 15 years of monthly crude oil spot price data (2011–2025), with the parameters set using maximum likelihood estimation. Sensitivity analyses confirm the stable convergence of the iterative scheme under realistic scenarios while also identifying parameter ranges that may lead to divergence. These findings demonstrate that the proposed methodology provides a tractable and accurate approach to probability density estimation in commodity markets, with a clear potential for extension to multi-dimensional settings and richer datasets. Full article
(This article belongs to the Special Issue Stochastic Simulation: Theory and Applications)
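Newton-type linearization, as applied here to the nonlinear KFE, replaces the nonlinear problem at each step with a linear one built from the derivative at the current iterate. In the scalar case the iteration reduces to the classical update below; this is a generic textbook sketch, not the paper's operator-level generalized Newton scheme.

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Classical Newton iteration x_{k+1} = x_k - f(x_k) / f'(x_k).

    Stops once |f(x)| falls below tol or max_iter steps are exhausted.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / fprime(x)  # linearize at x and solve the linear model
    return x
```

For instance, solving x³ − 2 = 0 from x0 = 1 converges quadratically to the cube root of 2; the convergence conditions derived in the paper play the same role as the usual requirement that the derivative be well-behaved near the root.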
17 pages, 8681 KB  
Article
Balanced Grey Wolf Optimizer Algorithm for Backpropagation Neural Networks
by Jiashuo Chen, Hao Zhu, Tanjile Shu, Chengkun Cao, Yuanwang Deng and Qing Cheng
Mathematics 2026, 14(3), 554; https://doi.org/10.3390/math14030554 - 3 Feb 2026
Abstract
Backpropagation Neural Networks (BPNNs) are widely used in fault diagnosis and parameter prediction due to their simple structure and strong universal approximation capabilities. However, BPNNs suffer from slow convergence and susceptibility to poor local minima under basic gradient descent settings. To address these issues, this paper proposes a Balanced Grey Wolf Optimizer (BGWO) as an alternative to gradient descent for training BPNNs. A novel stochastic position update formula and a novel nonlinear convergence factor are introduced to balance the local exploitation and global exploration of the traditional Grey Wolf Optimizer, and the optimal convergence coefficient is determined empirically. Test results on six benchmark functions demonstrate that BGWO achieves better objective function values under fixed iteration settings. Based on BGWO, this paper constructs a training method for BPNNs. Finally, three public datasets are used to compare the BPNN trained with BGWO (BGWO-BPNN), the BPNN trained with Levenberg–Marquardt, and the traditional BPNN, using the relative error and mean absolute percentage error of their predictions; the Wilcoxon test is also performed. Under the experimental settings of this paper, BGWO-BPNN achieves superior predictive performance, demonstrating clear advantages of the proposed approach. Full article
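For context, the baseline Grey Wolf Optimizer that BGWO modifies moves each wolf toward positions suggested by the three best wolves (alpha, beta, delta), with a convergence factor a decaying linearly from 2 to 0. The sketch below is that plain baseline on a toy objective; it does not reproduce the paper's stochastic position update or nonlinear convergence factor, and the population size and bounds are arbitrary choices.

```python
import random

def gwo(obj, dim, lo, hi, n_wolves=20, iters=200, seed=0):
    """Baseline Grey Wolf Optimizer: each wolf moves toward the average of
    the positions suggested by the alpha, beta, and delta (three best) wolves."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    best = list(min(wolves, key=obj))
    for t in range(iters):
        wolves.sort(key=obj)
        leaders = [list(w) for w in wolves[:3]]   # alpha, beta, delta
        a = 2.0 - 2.0 * t / iters                 # linear convergence factor
        for i in range(n_wolves):
            new = []
            for d in range(dim):
                x = 0.0
                for leader in leaders:
                    r1, r2 = rng.random(), rng.random()
                    A, C = 2.0 * a * r1 - a, 2.0 * r2
                    # Move relative to this leader's suggested position.
                    x += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                new.append(min(hi, max(lo, x / 3.0)))  # average, then clamp
            wolves[i] = new
        cand = min(wolves, key=obj)
        if obj(cand) < obj(best):
            best = list(cand)                     # elitism: keep best-so-far
    return best
```

Early on (a near 2) the |A| > 1 cases push wolves away from the leaders, giving global exploration; as a shrinks, the pack contracts around the leaders, giving local exploitation — the balance BGWO's modified update and nonlinear factor aim to improve.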
38 pages, 18189 KB  
Article
An Improved SAO Used for Global Optimization and Economic Power Load Forecasting
by Lang Zhou, Yaochun Shao, HaoXiang Zhou and Yangjian Yang
Mathematics 2026, 14(3), 553; https://doi.org/10.3390/math14030553 - 3 Feb 2026
Abstract
Short-term electricity load forecasting has become increasingly challenging due to growing demand volatility, nonlinear load patterns, and the dynamic penetration of renewable energy sources. Conventional forecasting models often suffer from sensitivity to hyperparameter settings and limited capability in capturing long-term temporal dependencies. To address these issues, this paper proposes a hybrid forecasting framework that integrates an Improved Snow Ablation Optimizer (ISAO) with a Dilated Bidirectional Gated Recurrent Unit (Dilated BiGRU). The proposed ISAO enhances the original Snow Ablation Optimizer through three key strategies to improve performance in high-dimensional optimization problems: (i) a subgroup cooperative mechanism to alleviate cross-dimensional interference, (ii) a learning-automata-based adaptive dimension assignment strategy to dynamically allocate optimization resources, and (iii) a t-distribution-based adaptive step size mechanism to balance global exploration and local exploitation. Extensive experiments on the CEC2017 benchmark suite demonstrate that ISAO achieves superior convergence speed and optimization accuracy, with average rankings of 1.60, 1.77, and 2.03 on 30-, 50-, and 100-dimensional problems, respectively, significantly outperforming the original SAO and several state-of-the-art metaheuristic algorithms. Building upon this optimization capability, ISAO is employed to automatically tune the key hyperparameters of the Dilated BiGRU model. Experiments conducted on the Kaggle electricity load dataset show that the proposed ISAO-Dilated BiGRU model achieves MAE, MAPE, and RMSE values of 20.003, 1.711%, and 25.926, respectively, corresponding to reductions of 16.6%, 15.6%, and 17.7% compared with the baseline model, along with an R² of 0.97841. Comparative results against RNN, LSTM, Random Forest, and the original Dilated BiGRU confirm the robustness and superior long-term dependency modeling capability of the proposed framework. Overall, the proposed ISAO effectively enhances hyperparameter optimization quality and significantly improves the predictive accuracy and stability of the Dilated BiGRU model, providing a reliable and practical solution for short-term electricity load forecasting in modern power systems. Full article
(This article belongs to the Special Issue Artificial Intelligence and Optimization in Engineering Applications)