Mathematics, Volume 13, Issue 24 (December-2 2025) – 149 articles

Cover Story: Understanding the structure of matter at Planck scales points to fundamental limitations of the geometric frameworks developed so far. This challenge motivates the search for new mathematical structures capable of describing physics beyond classical geometry. Ternary algebras have recently attracted growing interest in theoretical physics, particularly due to the intrinsic ternary structure of the quark model. Inspired by this connection, the present work proposes the notion of a ternary Lie algebra at cube roots of unity and realizes it through an associative ternary multiplication of cubic matrices. This approach may open new avenues for the algebraic description of non-classical geometries and contribute to a deeper understanding of the mathematical foundations of space–time at the Planck scale.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
47 pages, 5622 KB  
Review
Grey Clustering Methods and Applications: A Bibliometric-Enhanced Review
by Gabriel Dumitrescu, Andra Sandu, Mihnea Panait and Camelia Delcea
Mathematics 2025, 13(24), 4040; https://doi.org/10.3390/math13244040 - 18 Dec 2025
Abstract
Grey systems theory has provided a paradigm shift in how numbers and their mathematics are perceived. By including various levels of knowledge associated with the variables, the theory has succeeded in modelling systems characterised by incomplete or partially known information. Among the methods offered by grey systems theory, the grey clustering approach offers a distinct perspective on clustering methodology by allowing researchers to define degrees of importance for the variables included in the analysis. Despite its expanding use across disciplines, a comprehensive synthesis of grey clustering research is lacking. In this context, this study aims to provide a comprehensive and structured overview of the research field associated with grey clustering and its applications. By using a PRISMA approach, a dataset containing papers related to grey clustering is extracted from the Clarivate Web of Science database, analysed through bibliometric tools, and further enhanced with thematic maps and topic discovery through Latent Dirichlet Allocation (LDA) and BERTopic analyses. The final dataset includes 318 articles, and their examination allows for a detailed assessment of publication trends, thematic structures, and methodological directions. The annual scientific production showcased an increase of 10.78%, while the thematic analysis revealed key themes related to performance management, risk assessment, evaluation models for enhancing organisational performance, urban and regional planning, civil engineering, industrial engineering and automation, and risk evaluation for health-related issues. Additionally, a detailed review of the most-cited papers has been performed to highlight the role of grey clustering in various research fields. Full article

17 pages, 910 KB  
Article
BER-Constrained Power Allocation for Uplink NOMA Systems with One-Bit ADCs
by Tae-Kyoung Kim
Mathematics 2025, 13(24), 4039; https://doi.org/10.3390/math13244039 - 18 Dec 2025
Abstract
This study investigates bit error rate (BER)-constrained power allocation for uplink non-orthogonal multiple access (NOMA) systems in which a base station employs one-bit analog-to-digital converters. Although one-bit quantization significantly reduces hardware costs and receiver power consumption, it also introduces severe nonlinear distortions that degrade detection performance. To address this challenge, a pairwise error probability expression is first derived for the one-bit quantized uplink NOMA model, from which an analytical upper bound on the BER is obtained. Based on this characterization, a fairness-driven max–min power allocation strategy is formulated to minimize the BER of the worst-performing user. A closed-form solution for the optimal power allocation is obtained under binary phase-shift keying (BPSK) signaling. Simulation results verify the tightness of the analytical BER bound and demonstrate that the proposed power allocation scheme provides noticeable BER improvements that compensate for the performance degradation caused by one-bit quantization. Full article
(This article belongs to the Special Issue Computational Methods in Wireless Communications with Applications)
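The fairness-driven max–min idea in this abstract can be sketched numerically. The BER proxy below is the standard BPSK-over-AWGN expression, not the one-bit-quantized bound derived in the paper; the channel gains, noise level, and power budget are illustrative assumptions.

```python
# Toy max-min power allocation for two uplink users: choose the power
# split that minimises the BER of the worst-performing user.
import math

def q_func(x):
    # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_bpsk(power, gain, noise=1.0):
    # BPSK-over-AWGN BER, used here as a stand-in for the paper's bound
    return q_func(math.sqrt(2 * power * gain / noise))

g1, g2 = 2.0, 0.5   # channel gains (assumed)
budget = 10.0       # total transmit power (assumed)

best_split, best_worst = None, 1.0
for i in range(1, 1000):
    p1 = budget * i / 1000
    worst = max(ber_bpsk(p1, g1), ber_bpsk(budget - p1, g2))
    if worst < best_worst:
        best_worst, best_split = worst, p1

print(f"p1={best_split:.2f}, p2={budget - best_split:.2f}, worst BER={best_worst:.3e}")
```

At the optimum the two users' BERs equalize (here p1·g1 = p2·g2, giving p1 = 2), which is the hallmark of a max–min fair allocation; the paper obtains this point in closed form for its quantized model.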

24 pages, 2524 KB  
Article
Exact and Heuristic Algorithms for Convex Polygon Decomposition
by Johana Milena Martínez Contreras, Germán Fernando Pantoja Benavides, Astrid Xiomara Rodríguez, John Willmer Escobar and David Álvarez-Martínez
Mathematics 2025, 13(24), 4038; https://doi.org/10.3390/math13244038 - 18 Dec 2025
Abstract
Convex decomposition plays a central role in computational geometry and is a key preprocessing step in applications such as robotic motion planning, 2D packing, pattern recognition, and manufacturing. This work revisits the minimum convex decomposition problem and proposes both an exact mathematical model and an efficient heuristic algorithm capable of handling simple polygons as well as polygons with holes. The methodology incorporates a visibility-preserving bridge transformation that converts holed polygons into equivalent simple instances, enabling the extension of classical decomposition schemes to more general topologies. In addition, a convex-union post-processing phase is implemented to reduce the number of convex parts obtained by either method. The performance of the proposed approach is evaluated on benchmark instances from the literature and on a new dataset of polygons with holes introduced in this work. The exact model consistently produces optimal decompositions for small and medium instances, while the heuristic achieves near-optimal solutions with significantly reduced computation times. The union phase further decreases the number of resulting convex pieces in most cases. All codes, datasets, and results are publicly released to facilitate reproducibility and comparison with future methods. Full article
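A triangulation is a trivially valid, though far from minimal, convex decomposition of a simple polygon, and it is a common baseline for the problem this paper studies. The ear-clipping sketch below illustrates that baseline only; the paper's exact model and heuristic aim to minimise the number of convex parts, which triangulation does not.

```python
# Ear-clipping triangulation of a simple polygon (counter-clockwise
# vertex order assumed). Each clipped ear is a convex piece.

def cross(o, a, b):
    # z-component of (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, a, b, c):
    # p inside (or on the boundary of) CCW triangle abc
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def ear_clip(poly):
    verts = list(poly)
    tris = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            a, b, c = verts[i - 1], verts[i], verts[(i + 1) % n]
            if cross(a, b, c) <= 0:      # reflex or collinear: not an ear
                continue
            if any(in_triangle(p, a, b, c)
                   for p in verts if p not in (a, b, c)):
                continue                 # another vertex blocks the ear
            tris.append((a, b, c))
            del verts[i]
            break
    tris.append(tuple(verts))
    return tris

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(ear_clip(square))   # two triangles
```

A post-processing pass that merges adjacent triangles whose union stays convex, as in the paper's convex-union phase, would then reduce the piece count.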

15 pages, 760 KB  
Article
Nonparametric Functions Estimation Using Biased Data
by Abdel-Salam G. Abdel-Salam and Ibrahim A. Ahmad
Mathematics 2025, 13(24), 4037; https://doi.org/10.3390/math13244037 - 18 Dec 2025
Abstract
Biased or weighted sampling frequently arises in reliability testing, biomedical survival analysis, and quality-control studies, where the observed data deviate systematically from the target population. This paper develops a unified framework for nonparametric estimation of probability density distribution, hazard rate, and regression functions when the data are subject to biased sampling. The proposed weighted kernel estimators adjust for biasing functions w(x), enabling asymptotically unbiased estimation under general sampling distortions. Comprehensive theoretical results are provided, including bias-variance decompositions, optimal bandwidth orders, and mean-squared error properties. Extensive numerical simulations and a real-data application to the Channing House dataset demonstrate the practical advantages and robustness of the proposed estimators compared with naïve approaches. The results confirm the method’s theoretical validity and its broad applicability in survival and reliability studies involving biased data. Full article
(This article belongs to the Special Issue Statistical Theory and Application, 2nd Edition)
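The bias-correction idea in this abstract can be sketched for the classic length-biased case w(x) = x: each kernel term is reweighted by 1/w(X_i) and rescaled by an estimate of the weighting constant. The sample, bandwidth, and weight function below are illustrative assumptions, not the paper's Channing House analysis.

```python
# Bias-corrected kernel density estimate from length-biased data.
# Observed data follow g(x) ~ w(x) f(x) / E[w(X)]; dividing each kernel
# by w(X_i) and rescaling by the harmonic-mean estimate of E[w(X)]
# recovers the target density f.
import numpy as np

rng = np.random.default_rng(0)
# Length-biased sample from Exp(1): observed density proportional to x e^{-x}
biased = rng.gamma(shape=2.0, scale=1.0, size=5000)

def debiased_kde(x, sample, w, h=0.2):
    inv_w = 1.0 / w(sample)
    mu_w = len(sample) / inv_w.sum()     # estimate of E[w(X)] under the target law
    kernels = np.exp(-0.5 * ((x - sample[:, None]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return mu_w * (kernels * inv_w[:, None]).sum(axis=0) / len(sample)

grid = np.linspace(0.5, 3.0, 6)
est = debiased_kde(grid, biased, w=lambda s: s)
print(np.round(est, 3))   # should track the target density e^{-x} on the grid
```

A naive KDE on the same sample would instead estimate the biased density x·e^{-x}, which is the distortion the weighted estimator removes.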

22 pages, 335 KB  
Article
Properties and Application of Incomplete Orthogonalization in the Directions of Gradient Difference in Optimization Methods
by Vladimir Krutikov, Elena Tovbis, Svetlana Gutova, Ivan Rozhnov and Lev Kazakovtsev
Mathematics 2025, 13(24), 4036; https://doi.org/10.3390/math13244036 - 18 Dec 2025
Abstract
This paper considers the problem of unconstrained minimization of smooth functions. Despite the high efficiency of quasi-Newton methods such as BFGS, their performance degrades in ill-conditioned problems with unstable or rapidly varying Hessians—for example, in functions with curved ravine structures. This necessitates alternative approaches that rely not on second-derivative approximations but on the topological properties of level surfaces. As a new methodological framework, we propose using a procedure of incomplete orthogonalization in the directions of gradient differences, implemented through the iterative least-squares method (ILSM). Two new methods are constructed based on this approach: a gradient method with the ILSM metric (HY_g) and a modification of the Hestenes–Stiefel conjugate gradient method with the same metric (HY_XS). Both methods are shown to have linear convergence on strongly convex functions and finite convergence on quadratic functions. A numerical experiment was conducted on a set of test functions. The results show that the proposed methods significantly outperform BFGS (by a factor of 2 for HY_g and 3.5 for HY_XS in iteration count) when solving ill-posed problems with varying Hessians or complex level topologies, while providing comparable or better performance even in high-dimensional problems. This confirms the potential of using topology-based metrics alongside classical quasi-Newton strategies. Full article
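For reference, the classical Hestenes–Stiefel conjugate gradient method that HY_XS modifies can be sketched on a convex quadratic, where it enjoys the finite convergence mentioned in the abstract. The matrix, right-hand side, and starting point are illustrative assumptions.

```python
# Classical Hestenes-Stiefel conjugate gradient on the quadratic
# f(x) = 0.5 x^T A x - b^T x with exact line search; this is the
# textbook baseline, not the paper's ILSM-metric variant.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.zeros(2)
g = A @ x - b                        # gradient of the quadratic
d = -g
for _ in range(10):
    if np.linalg.norm(g) < 1e-10:
        break
    alpha = -(g @ d) / (d @ A @ d)   # exact line search along d
    x_new = x + alpha * d
    g_new = A @ x_new - b
    y = g_new - g
    beta = (g_new @ y) / (d @ y)     # Hestenes-Stiefel coefficient
    d = -g_new + beta * d
    x, g = x_new, g_new

print(x)   # solves A x = b in at most dim(x) steps on a quadratic
```

On a quadratic in n variables this iteration terminates in at most n steps; the ILSM metric in the paper is designed to retain good behaviour when the Hessian varies along the trajectory.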
29 pages, 15877 KB  
Article
Fracture Evolution in Rocks with a Hole and Symmetric Edge Cracks Under Biaxial Compression: An Experimental and Numerical Study
by Daobing Zhang, Linhai Zeng, Shurong Guo, Zhiping Chen, Jiahua Zhang, Xianyong Jiang, Futian Zhang and Anmin Jiang
Mathematics 2025, 13(24), 4035; https://doi.org/10.3390/math13244035 - 18 Dec 2025
Cited by 1
Abstract
This study employs physical experiments and the RFPA3D numerical method to investigate the fracture evolution of rocks containing a central hole with symmetrically arranged double cracks (seven inclination angles β) under biaxial compression. The results demonstrate that peak stress and strain exhibit nonlinear increases with rising β. Tensile–shear failure dominates at lower angles (β = 0–60°), characterized by secondary crack initiation at defect tips and wing/anti-wing crack development at intermediate angles (β = 45–60°). At higher angles (β = 75–90°), shear failure prevails, governed by crack propagation along hole walls. When β exceeds 45°, enhanced normal stress on crack planes suppresses mode II propagation and secondary crack formation. Elevated lateral pressures (15–20 MPa) significantly alter failure patterns by redirecting the maximum principal stress, causing cracks to align parallel to this orientation and driving anti-wing cracks toward specimen boundaries. Three-dimensional analysis reveals critical differences between internal and surface fracture propagation, highlighting how penetrating cracks around the hole crucially impact stability. This study provides valuable insights into complex fracture mechanisms in defective rock masses, offering practical guidance for stability assessment in underground mining operations where such composite defects commonly occur. Full article

25 pages, 919 KB  
Article
A CVaR-Based Black–Litterman Model with Macroeconomic Cycle Views for Optimal Asset Allocation of Pension Funds
by Yungao Wu and Yuqin Sun
Mathematics 2025, 13(24), 4034; https://doi.org/10.3390/math13244034 - 18 Dec 2025
Abstract
As a form of long-term asset allocation, pension fund investment necessitates accurate estimation of both asset returns and associated risks over extended time horizons. However, long-term asset returns are significantly influenced by macroeconomic factors, whereas variance-based risk measures cannot account for the directional nature of deviations from expected returns. To address these issues, we propose a novel CVaR-based Black–Litterman model incorporating macroeconomic cycle views (CVaR-BL-MCV) for optimal asset allocation of pension funds. This approach integrates macroeconomic cycle dynamics to quantify their impact on asset returns and utilizes Conditional Value-at-Risk (CVaR) as a coherent measure of downside risk. We employ a Markov-switching model to identify and forecast the phases of economic and monetary cycles. By analyzing the economic cycle with PMI and CPI, economic conditions are categorized into three distinct phases: stable, transitional, and overheating. Similarly, by analyzing the monetary cycle with M2 and SHIBOR, monetary conditions are classified into expansionary and contractionary phases. Based on historical asset return data across these cycles, view matrices are constructed for each cycle state. CVaR is used as the risk measure, and the posterior distribution of the Black–Litterman (BL) model is derived via generalized least squares (GLS), thereby extending the traditional BL framework to a CVaR-based approach. The experimental results demonstrate that the proposed CVaR-BL-MCV model outperforms the benchmark models. When the risk aversion coefficient is 1, 1.5, and 3, the Sharpe ratio of pension asset allocation using the CVaR-BL-MCV model is 21.7%, 18.4%, and 20.5% higher than that of the benchmark models, respectively. Moreover, the BL model incorporating CVaR improves the Sharpe ratio of pension asset allocation by an average of 19.7%, while the BL model with MCV achieves an average improvement of 14.4%. Full article
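The downside-risk measure at the core of this model can be illustrated with a historical estimate: CVaR at level α is the mean loss in the worst (1 − α) tail of the return sample. The simulated return sample below is an illustrative assumption, not pension-fund data.

```python
# Historical Conditional Value-at-Risk: average loss beyond the
# alpha-quantile Value-at-Risk threshold.
import numpy as np

def cvar(returns, alpha=0.95):
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, alpha)   # Value-at-Risk threshold
    tail = losses[losses >= var]
    return tail.mean()                 # mean loss in the worst (1 - alpha) tail

rng = np.random.default_rng(1)
rets = rng.normal(loc=0.005, scale=0.02, size=10000)
print(f"95% CVaR: {cvar(rets):.4f}")
```

Unlike variance, this measure responds only to losses, which is why the paper adopts it in place of the symmetric risk term of the traditional Black–Litterman framework.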

13 pages, 256 KB  
Article
Approximation by Overactivated and Spiked Convolutions as Positive Linear Operators
by George A. Anastassiou
Mathematics 2025, 13(24), 4033; https://doi.org/10.3390/math13244033 - 18 Dec 2025
Abstract
In this work, the author studied the quantitative approximation to the unit operator by three kinds of overactivated and spiked convolution-type operators. These operators have as a kernel a cusp coming from a constructed S-shaped finite-length arc, serving as a new activation function of compact support. This is derived from the composition of two general sigmoid activation functions defined on the whole real line. Our operators are positive linear ones and are treated as such. Initially we establish the basic convergence, then we move on to simultaneous and iterated approximations, all via inequalities involving the modulus of continuity of the approximated univariate function. Full article
23 pages, 4868 KB  
Article
Enhancing Predictive Accuracy in Medical Data Through Oversampling and Interpolation Techniques
by Alma Rocío Sagaceta-Mejía, Pedro Pablo González-Pérez, Julián Fresán-Figueroa and Máximo Eduardo Sánchez-Gutiérrez
Mathematics 2025, 13(24), 4032; https://doi.org/10.3390/math13244032 - 18 Dec 2025
Abstract
Class imbalance is a major challenge in supervised classification, often leading to biased predictions and limited generalization. This issue is particularly pronounced in medical diagnostics, where datasets typically contain far more negative than positive cases. In this study, we compare two oversampling strategies: the Synthetic Minority Oversampling Technique (SMOTE) and the Conditional Tabular Generative Adversarial Network (ctGAN). Using the benchmark Pima Indians Diabetes dataset, we generated balanced datasets through both methods and trained a multilayer perceptron classifier. Performance was evaluated with accuracy, precision, sensitivity, and F1 Score. The results show that both SMOTE and ctGAN improve classification on imbalanced data, with SMOTE consistently achieving superior sensitivity and F1 Score. These findings highlight the importance of selecting appropriate augmentation strategies to enhance the reliability and clinical usefulness of machine learning models in medical diagnostics. Full article
(This article belongs to the Special Issue Data Mining and Machine Learning with Applications, 2nd Edition)
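The SMOTE idea compared in this study is simple to sketch: each synthetic minority sample lies on the segment between a minority point and one of its nearest minority neighbours. The minimal implementation below conveys the interpolation step only; the study itself used the standard SMOTE and ctGAN tools on the Pima Indians Diabetes dataset, and the toy points here are assumptions.

```python
# SMOTE-style oversampling: interpolate between a minority sample and a
# randomly chosen one of its k nearest minority neighbours.
import numpy as np

def smote_like(minority, n_new, k=3, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(minority, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        dists = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(dists)[1:k + 1]   # k nearest neighbours, excluding X[i]
        j = rng.choice(nn)
        lam = rng.random()                # interpolation factor in [0, 1)
        synthetic.append(X[i] + lam * (X[j] - X[i]))
    return np.array(synthetic)

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new = smote_like(minority, n_new=5)
print(new.shape)   # (5, 2)
```

Because every synthetic point is a convex combination of two minority points, the new samples stay inside the minority region, in contrast to GAN-based generators such as ctGAN, which learn the distribution and can sample beyond observed pairs.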

11 pages, 265 KB  
Article
Unique Existence and Reconstruction of the Solution of Inverse Spectral Problem for Differential Pencil
by Wei Lyu and Zhaoying Wei
Mathematics 2025, 13(24), 4031; https://doi.org/10.3390/math13244031 - 18 Dec 2025
Abstract
In this paper, the half-inverse spectral problem for energy-dependent Sturm–Liouville problems (that is, differential pencils), defined on interval [0,π] with the potential functions p,q being a priori known on the subinterval [0,π/2], is considered. We provide a method for the unique reconstruction of the two potential functions on [π/2,π] and the boundary condition at x=π by using one full spectrum. Consequently, based on the reconstruction method, we also provide a necessary and sufficient condition under which the existence of the quadratic pencil of differential operators is unique. Full article
25 pages, 4692 KB  
Article
Hybrid Microgrid Power Management via a CNN–LSTM Centralized Controller Tuned with Imperialist Competitive Algorithm
by Parastou Behgouy and Abbas Ugurenver
Mathematics 2025, 13(24), 4030; https://doi.org/10.3390/math13244030 - 18 Dec 2025
Abstract
Hybrid microgrids struggle to manage power because of the variability of renewable sources, storage, and load demand. This paper proposes a centralized controller employing hybrid deep learning and evolutionary optimization to overcome these issues. Solar panels, a BESS, EVs, dynamic loads, steady loads, and a switchable main-grid connection make up the hybrid microgrid. To capture spatial and temporal patterns, the centralized controller uses a deep learning model with a CNN–LSTM architecture. The imperialist competitive algorithm (ICA) optimizes the neural network hyperparameters for more accurate controller outputs. The controller manages grid switching, voltage source converter power, and the EV reference current. R2 values of 0.9602, 0.9512, and 0.9618 show reliable controller output predictions. Validation scenarios include a typical test case, low sunshine, and no initial EV or BESS charge. Its constant power flow, uncertainty management, and adaptability make this controller better than alternatives. Even with intermittent energy and limited storage capacity, the ICA-optimized hybrid deep learning controller stabilized the smart grid. Full article
(This article belongs to the Special Issue Deep Neural Networks: Theory, Algorithms and Applications)

14 pages, 1927 KB  
Article
Drilling Tool Attitude Dynamic Measurement Algorithm Based on Composite Inertial Measurement Unit
by Lingda Hu, Lu Wang, Yutong Zu, Yin Qing and Yuanbiao Hu
Mathematics 2025, 13(24), 4029; https://doi.org/10.3390/math13244029 - 18 Dec 2025
Abstract
Drilling tool attitude parameters are crucial for achieving precise directional drilling and trajectory control. Navigation systems based on redundant micro-electro-mechanical systems inertial measurement units (MEMS-IMU) significantly improve the reliability and accuracy of drilling tool attitude measurements. To achieve redundant arrangement of MEMS-IMUs, this paper proposes uniformly arranging MEMS-IMUs on a hollow hexagonal prism carrier, taking into account the actual structure of the drilling tool. However, under dynamic conditions, when updating drilling tool attitude using the strapdown inertial navigation system (SINS), the nonlinear errors of the MEMS-IMU accumulate over time, leading to distortion in the attitude calculation results. Therefore, this paper proposes a composite inertial measurement unit (CIMU) attitude measurement method. A virtual inertial measurement unit (VIMU) is generated through multi-IMU data fusion. Furthermore, the geometric constraints between each IMU and the VIMU, combined with Kalman filtering, are used to achieve real-time suppression of attitude errors, thereby improving the accuracy of the drilling tool attitude calculation results. Experimental results show that, compared with conventional data fusion methods, the CIMU algorithm reduces the overall drilling tool attitude error level by 40–70%. Full article
(This article belongs to the Special Issue Low-Quality Multimodal Data Fusion: Methodologies and Applications)
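The redundant-IMU idea above can be sketched in two steps: fuse several noisy rate readings into a virtual measurement, then smooth it with a scalar Kalman filter. The noise levels, constant-rate model, and sensor count below are illustrative assumptions; the paper's CIMU additionally exploits geometric constraints between each IMU and the virtual unit on the hexagonal carrier.

```python
# Virtual-IMU fusion plus a scalar Kalman filter for a constant
# rotation rate (toy model of the attitude-error suppression step).
import numpy as np

rng = np.random.default_rng(2)
true_rate = 1.0                      # deg/s, constant for the sketch
n_imu, n_steps = 6, 200
readings = true_rate + rng.normal(0, 0.5, size=(n_steps, n_imu))

# Virtual IMU: average across the redundant sensors at each step,
# reducing the measurement noise std by sqrt(n_imu)
virtual = readings.mean(axis=1)

# Scalar Kalman filter with a random-walk state model
x_hat, p = 0.0, 1.0
q, r = 1e-4, 0.25 / n_imu            # process / measurement variance (assumed)
for z in virtual:
    p += q                           # predict
    k = p / (p + r)                  # Kalman gain
    x_hat += k * (z - x_hat)         # measurement update
    p *= (1 - k)

print(f"estimated rate: {x_hat:.3f} (true {true_rate})")
```

The same predict/update structure carries over to the full attitude problem, where the state is the SINS attitude error and the geometric IMU-to-VIMU constraints supply the measurements.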

25 pages, 12450 KB  
Article
Novel Tensor Decomposition-Based Approach for Cell-Type Deconvolution in Visium Datasets with Reference scRNA-Seq Data Containing Multiple Minor Cell Types
by Y.-H. Taguchi and Turki Turki
Mathematics 2025, 13(24), 4028; https://doi.org/10.3390/math13244028 - 18 Dec 2025
Abstract
Conventional cell-type deconvolution methods, such as Robust Cell-Type Decomposition (RCTD), SPOTlight, Spatial Cellular Estimator for Tumors (SpaCET), and cell2location, often encounter limitations when applied to Visium datasets that include reference profiles with multiple minor cell types. This highlights the necessity for more advanced computational approaches to resolve such challenges. To address this issue, we have employed and refined tensor decomposition (TD)-based unsupervised feature extraction (FE) to integrate multiple Visium datasets, providing a robust platform for spatial gene expression profiling (spatial transcriptomics). Notably, TD-based unsupervised FE successfully retrieves singular value vectors that correspond with spatial distribution; neighboring spots are assigned vectors with comparable values. Additionally, TD-based unsupervised FE demonstrates successful inference of cell-type fractions within individual Visium spots, enabling effective deconvolution even when referencing single-cell RNA-seq datasets containing several minor cell types—a scenario where conventional methods, such as RCTD, SPOTlight, SpaCET, and cell2location, typically prove ineffective. The findings of this study suggest that TD-based unsupervised FE has broad applicability for diverse deconvolution tasks. Full article
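A generic instance of the tensor-decomposition machinery behind such methods is the higher-order SVD, computed from the leading singular vectors of each mode unfolding. The random tensor and ranks below are illustrative assumptions, not a Visium dataset or the paper's specific TD formulation.

```python
# Tucker-style higher-order SVD (HOSVD) via mode unfoldings.
import numpy as np

def unfold(T, mode):
    # matricize: move the chosen mode to the front and flatten the rest
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # factor matrices: leading left singular vectors of each unfolding
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    # core tensor: contract each mode with the matching factor transpose
    core = T
    for m, Um in enumerate(U):
        core = np.moveaxis(np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, U

rng = np.random.default_rng(3)
T = rng.normal(size=(10, 12, 8))       # e.g. spots x genes x samples (assumed)
core, U = hosvd(T, ranks=(4, 4, 4))
print(core.shape)                      # (4, 4, 4)
```

The columns of each factor matrix play the role of the singular value vectors the abstract describes: selecting those that vary coherently across neighbouring spots yields the spatial profiles used for deconvolution.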

27 pages, 11161 KB  
Article
CFD Simulation of a High Shear Mixer for Industrial AdBlue® Production
by Ludovic F. Ascenção, Isabel S. O. Barbosa, Adélio M. S. Cavadas and Ricardo J. Santos
Mathematics 2025, 13(24), 4027; https://doi.org/10.3390/math13244027 - 18 Dec 2025
Abstract
The increasing global demand for cleaner transportation has intensified the importance of efficient AdBlue® (AUS32) production, a key chemical in selective catalytic reduction (SCR) systems that reduces nitrogen oxides (NOx) emissions from diesel engines. This work presents a computational fluid dynamics (CFD) simulation study of the urea–water mixing process within a high shear mixer (HSM), aiming to enhance the sustainability of AdBlue® manufacturing. The model evaluates the hydrodynamic characteristics critical to optimising the dissolution of urea pellets in deionised water, which conventionally requires significant preheating. Experimental validation was conducted by comparing pressure drop simulation results with operational data from an active industrial facility in the United Kingdom. This study thus validates the CFD model against an industrial two-stage rotor–stator mixer under real operating conditions. The computational framework combines a refined mesh with the k–ω SST turbulence model to resolve flow structures and capture near-wall effects and shear stress transport in complex flow domains. The results reveal opportunities for process optimisation, particularly in reducing thermal energy input without compromising solubility, thus offering a more sustainable pathway for AdBlue® production. The main contribution of this work is to close existing gaps in industrial practice and to propose and computationally validate strategies to improve the numerical design of HSMs for solid dissolution. Full article
(This article belongs to the Special Issue Computational Fluid Dynamics with Applications)

25 pages, 8166 KB  
Article
T-GARNet: A Transformer and Multi-Scale Gaussian Kernel Connectivity Network with Alpha-Rényi Regularization for EEG-Based ADHD Detection
by Danna Valentina Salazar-Dubois, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Mathematics 2025, 13(24), 4026; https://doi.org/10.3390/math13244026 - 18 Dec 2025
Abstract
Attention-Deficit/Hyperactivity Disorder (ADHD) is a highly prevalent neurodevelopmental condition that is typically identified through behavioral assessments and subjective clinical reports. However, electroencephalography (EEG) offers a cost-effective and non-invasive alternative for capturing neural activity patterns closely associated with this disorder. Despite this potential, EEG-based ADHD classification remains challenged by overfitting, dependence on extensive preprocessing, and limited interpretability. Here, we propose a novel neural architecture that integrates transformer-based temporal attention with Gaussian mixture functional connectivity modeling and a cross-entropy loss regularized through α-Rényi mutual information, termed T-GARNet. The multi-scale Gaussian kernel functional connectivity leverages parallel Gaussian kernels to identify complex spatial dependencies, which are further stabilized and regularized by the α-Rényi term. This design enables direct modeling of long-range temporal dependencies from raw EEG while enhancing spatial interpretability and reducing feature redundancy. We evaluate T-GARNet on a publicly available ADHD EEG dataset using both leave-one-subject-out (LOSO) and stratified group k-fold cross-validation (SGKF-CV), where groups correspond to control and ADHD, and compare its performance against classical and modern state-of-the-art methods. Results show that T-GARNet achieves competitive or superior performance (82.10% accuracy), particularly under the more challenging SGKF-CV setting, while producing interpretable spatial attention patterns consistent with ADHD-related neurophysiological findings. These results underscore T-GARNet’s potential as a robust and explainable framework for objective EEG-based ADHD detection. Full article

23 pages, 3237 KB  
Article
Bifurcation Analysis and Soliton Behavior of New Combined Kairat-II-X Differential Equation Using Analytical Methods
by Jun Zhang, Haifa Bin Jebreen and Rzayeva Nuray
Mathematics 2025, 13(24), 4025; https://doi.org/10.3390/math13244025 - 18 Dec 2025
Viewed by 321
Abstract
The exact analytical solutions of a new combined Kairat-II-X differential equation are presented. The model is investigated by combining the enhanced modified extended tanh function method and the modified tan(ϕ/2)-expansion method. A wide range of solitary wave solutions with unknown coefficients is then extracted in a variety of shapes, including dark, bright, bell-shaped, kink-type, combined, and complex solitons, as well as exponential, hyperbolic, and trigonometric function solutions. To offer physical insight, some of the identified solutions are presented in figures: 3D, 2D, and 2D-density profiles of the obtained outcomes are illustrated in order to examine their dynamics for the chosen parameter values. Based on these findings, we can assert that the suggested computational approaches are efficient, dynamic, well-structured, and valuable for tackling complex nonlinear problems in several fields, including symbolic computations. Bifurcation and sensitivity analyses are employed to comprehend the dynamical system. We expect these findings to be very beneficial in improving our understanding of the waves that manifest in solids. Full article
30 pages, 1939 KB  
Article
Integrating Machine Learning and Scenario Modelling for Robust Population Forecasting Under Crisis and Data Scarcity
by Michael Politis, Nicholas Christakis, Zoi Dorothea Pana and Dimitris Drikakis
Mathematics 2025, 13(24), 4024; https://doi.org/10.3390/math13244024 - 18 Dec 2025
Viewed by 314
Abstract
This study introduces a new ensemble framework for demographic forecasting that systematically incorporates stylised crisis scenarios into rate and population projections. While scenario reasoning is common in qualitative foresight, its quantitative application in demography remains underdeveloped. Our method combines autoregressive lags, global predictors, and robust regression with a trend-anchoring mechanism, enabling stable projections from short official time series (15–20 years in length). Scenario shocks are operationalised through binary event flags for pandemics, refugee inflows, and financial crises, which influence fertility, mortality, and migration models before translating into cohort and population trajectories. Results demonstrate that shocks with strong historical precedent, such as Germany’s migration surges, are convincingly reproduced and leave enduring effects on projected populations. Conversely, weaker or non-recurrent shocks, typical of Norway and Portugal, produce muted scenario effects, with baseline momentum dominating long-term outcomes. At the national level, total population aggregates mitigate temporary shocks, while cohort-level projections reveal more pronounced divergences. Limitations include the short length of the training series, the attenuation of scenario signals when shocks do not surpass historical peaks, and the loss of granularity due to age grouping. Nevertheless, the framework shows how robust statistical ensembles can extend demographic forecasting beyond simple trend extrapolation, providing a formal and transparent quantitative tool for stress-testing population futures under both crisis and stability. Full article
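The idea of entering binary event flags as regressors alongside a trend can be sketched with toy data; the coefficients, the shock year, and the ordinary least-squares fit below are invented for illustration and are not the paper's robust-regression ensemble.

```python
import numpy as np

# Toy demographic rate series with a one-period "pandemic" shock flag.
years = np.arange(2000, 2020)
rng = np.random.default_rng(1)
rate = 10.0 - 0.1 * (years - 2000) + rng.normal(0, 0.05, years.size)
flag = (years == 2015).astype(float)   # binary event flag
rate = rate - 1.5 * flag               # the shock depresses the rate

# Design matrix: intercept, linear trend (the anchor), event flag.
X = np.column_stack([np.ones(years.size),
                     (years - 2000).astype(float),
                     flag])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
print(beta)   # ≈ [10.0, -0.1, -1.5]: trend and shock effect recovered
```

The fitted flag coefficient isolates the shock's effect from the baseline trend, which is the mechanism that lets scenario flags propagate into fertility, mortality, or migration projections.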
30 pages, 1050 KB  
Article
Reconstruction of the Initial Data and Source Functions for a Wave Equation with Nonlocal Boundary Condition
by Sid Ahmed Ould Beinane, Nura Alotaibi, Ghaziyah Alsahli and Asim Ilyas
Mathematics 2025, 13(24), 4023; https://doi.org/10.3390/math13244023 - 18 Dec 2025
Viewed by 207
Abstract
This paper addresses multiple inverse source problems linked to the wave equation under nonlocal boundary conditions. A bi-orthogonal functional framework is adopted to represent the solutions through series expansions. The analysis establishes that these problems are ill-posed in the Hadamard sense. Three main reconstruction tasks are considered: identification of an unknown space-dependent source, recovery of the initial data, and estimation of a time-varying source term. For each case, suitable additional conditions are introduced to ensure the uniqueness of the unknown quantities. Existence and uniqueness theorems are proved under specific smoothness requirements. Finally, the theoretical developments are validated through numerical computations. Full article
18 pages, 2067 KB  
Article
Dual-Branch Network for Video Anomaly Detection Based on Feature Fusion
by Minggao Huang, Jing Li, Zhanming Sun and Jianwen Hu
Mathematics 2025, 13(24), 4022; https://doi.org/10.3390/math13244022 - 18 Dec 2025
Viewed by 364
Abstract
Anomaly detection is a critical task in video surveillance, with significant applications in the management and prevention of criminal activities. Traditional convolutional neural networks often struggle with motion modeling and multi-scale feature fusion due to their localized field of view. To address these limitations, this work proposes a Dual-Branch Interactive Feature Fusion Network (DBIFF-Net). DBIFF-Net integrates a CNN branch and a Swin Transformer branch to extract multi-scale features, and an interactive fusion module fuses these features efficiently through skip connections. A temporal shift module is then employed to exploit dependencies between video frames, thereby improving the identification of anomalous events. Finally, channel attention is employed in the decoder to help restore complex object features in the video. System performance is evaluated on three standard benchmark datasets: DBIFF-Net achieves an area under the receiver operating characteristic curve (AUC) of 97.7%, 84.5%, and 73.8% on the UCSD Ped2, CUHK Avenue, and ShanghaiTech Campus datasets, respectively. Extensive experiments demonstrate that DBIFF-Net outperforms most state-of-the-art methods, validating the effectiveness of our method. Full article
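A temporal shift module in the TSM style moves a slice of feature channels one frame along the time axis in each direction, at zero parameter cost. The sketch below is a generic numpy illustration of that idea; the `(T, C, H, W)` layout and the `fold_div` parameter are my assumptions, not the paper's exact configuration.

```python
import numpy as np

def temporal_shift(x, fold_div=4):
    """TSM-style temporal shift on (T, C, H, W) features.

    A 1/fold_div slice of channels is shifted one frame backward in
    time, another slice one frame forward; the rest stay in place.
    Vacated positions are zero-filled.
    """
    out = np.zeros_like(x)
    fold = x.shape[1] // fold_div
    out[:-1, :fold] = x[1:, :fold]                   # pull future frames in
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]   # push past frames forward
    out[:, 2 * fold:] = x[:, 2 * fold:]              # untouched channels
    return out

feat = np.arange(2 * 4, dtype=float).reshape(2, 4, 1, 1)  # T=2, C=4
shifted = temporal_shift(feat)
```

After the shift, each frame's features mix information from its neighbors, which is what lets a purely spatial backbone pick up inter-frame dependencies.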
24 pages, 749 KB  
Article
Solution Methods for the Dynamic Generalized Quadratic Assignment Problem
by Yugesh Dhungel and Alan McKendall
Mathematics 2025, 13(24), 4021; https://doi.org/10.3390/math13244021 - 17 Dec 2025
Viewed by 274
Abstract
In this paper, the generalized quadratic assignment problem (GQAP) is extended to consider multiple time periods and is called the dynamic GQAP (DGQAP). This problem considers assigning a set of facilities to a set of locations for multiple periods in the planning horizon such that the sum of the transportation, assignment, and reassignment costs is minimized. The facilities may have different space requirements (i.e., unequal areas), and the capacities of the locations may vary during a multi-period planning horizon. Also, multiple facilities may be assigned to each location during each period without violating the capacities of the locations. This research was motivated by the problem of assigning multiple facilities (e.g., equipment) to locations during outages at electric power plants. This paper presents mathematical models, construction algorithms, and two simulated annealing (SA) heuristics for solving the DGQAP. The first SA heuristic (SAI) is a direct adaptation of SA to the DGQAP, and the second (SAII) augments SAI with a look-ahead/look-back search strategy. In computational experiments, the proposed heuristics are first compared to an exact method on a generated data set of smaller instances (data set 1) and then compared to each other on a generated data set of larger instances (data set 2). For data set 1, the proposed heuristics outperformed a commercial solver (CPLEX) in terms of solution quality and computational time; SAI obtained the best solutions for all instances, while SAII obtained the best solution for all but one instance. For data set 2, SAII obtained the best solution for nineteen of the twenty-four instances, while SAI obtained five of the best solutions. The results highlight the effectiveness and efficiency of the proposed heuristics, particularly SAII, for solving the DGQAP. Full article
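The general simulated annealing mechanism underlying such heuristics can be sketched briefly. This is a generic skeleton with geometric cooling, not the authors' SAI/SAII; the toy cost function and single-element move operator are invented stand-ins for the DGQAP's transportation and reassignment costs and neighborhood moves.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, alpha=0.95,
                        iters=2000, seed=0):
    """Generic SA skeleton: accept worse moves with prob exp(-delta/t)."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # geometric cooling schedule
    return best, fbest

# Toy instance: assign 5 facilities to 3 locations; the separable cost
# is minimized when facility i goes to location i % 3.
cost = lambda a: sum((a[i] - i % 3) ** 2 for i in range(5))

def neighbor(a, rng):
    b = list(a)
    b[rng.randrange(5)] = rng.randrange(3)  # reassign one facility
    return tuple(b)

best, fbest = simulated_annealing(cost, neighbor, (0,) * 5)
print(best, fbest)
```

The look-ahead/look-back strategy of SAII would refine the `neighbor` step by evaluating moves across adjacent periods before committing; that logic is specific to the paper and omitted here.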
14 pages, 407 KB  
Article
General Vertex-Distinguishing Total Colorings of Complete Bipartite Graphs
by Xiang’en Chen and Ting Li
Mathematics 2025, 13(24), 4020; https://doi.org/10.3390/math13244020 - 17 Dec 2025
Viewed by 194
Abstract
Let G be a simple graph. A general total coloring f of G refers to a coloring of the vertices and edges of G. Let C(x) be the set of colors of the vertex x and of the edges incident with x under f. For a general total coloring f of G in which k colors are available, if C(u) ≠ C(v) for any two distinct vertices u and v in V(G), then f is called a k-general vertex-distinguishing total coloring of G, or a k-GVDTC of G for short. The minimum number of colors required for a GVDTC of G is denoted by χ_gvt(G) and is called the general vertex-distinguishing total chromatic number, or the GVDT chromatic number of G for short. GVDTCs of complete bipartite graphs are studied in this paper. Full article
(This article belongs to the Section E: Applied Mathematics)
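The palette condition C(u) ≠ C(v) lends itself to a direct brute-force check. The sketch below, with helper names of my choosing, verifies a 2-GVDTC of the star K_{1,2} (the path on three vertices); it merely illustrates the definition, not the paper's constructions for general complete bipartite graphs.

```python
from itertools import combinations

def is_gvdtc(vertices, edges, vcol, ecol):
    """Check a general total coloring for vertex-distinguishability:
    C(x) = {color of x} ∪ {colors of edges incident with x}
    must differ for every pair of vertices."""
    def palette(x):
        return frozenset([vcol[x]] + [ecol[e] for e in edges if x in e])
    return all(palette(u) != palette(v)
               for u, v in combinations(vertices, 2))

# Star K_{1,2}: center 0 joined to leaves 1 and 2.
V = [0, 1, 2]
E = [(0, 1), (0, 2)]
ok = is_gvdtc(V, E,
              vcol={0: 1, 1: 1, 2: 2},
              ecol={(0, 1): 1, (0, 2): 2})
print(ok)   # True: the palettes are {1, 2}, {1}, and {2}
```

Since a valid 2-coloring exists and one color clearly cannot distinguish three vertices, χ_gvt(K_{1,2}) = 2 for this tiny case.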
25 pages, 331 KB  
Article
Killing Vector Fields of Invariant Metrics on Five-Dimensional Solvable Lie Groups
by Gerard Thompson
Mathematics 2025, 13(24), 4019; https://doi.org/10.3390/math13244019 - 17 Dec 2025
Viewed by 185
Abstract
In this paper we study the existence of Killing vector fields for right-invariant metrics on five-dimensional Lie groups. We begin by providing some explanation of the classification lists of the low-dimensional Lie algebras. Then we review some of the known results about Killing vector fields on Lie groups. We take as our invariant metric the sum of the squares of the right-invariant Maurer–Cartan one-forms, starting from a coordinate representation. A number of such metrics are uncovered that have one or more extra Killing vector fields, besides the left-invariant vector fields that are automatically Killing for a right-invariant metric. In each case the corresponding Lie algebra of Killing vector fields is found and identified to the extent possible on a standard list. The computations are facilitated by use of the symbolic manipulation package MAPLE. Full article
(This article belongs to the Section B: Geometry and Topology)
19 pages, 27291 KB  
Article
Robust Financial Fraud Detection via Causal Intervention and Multi-View Contrastive Learning on Dynamic Hypergraphs
by Xiong Luo
Mathematics 2025, 13(24), 4018; https://doi.org/10.3390/math13244018 - 17 Dec 2025
Viewed by 425
Abstract
Financial fraud detection is critical to modern economic security, yet remains challenging due to collusive group behavior, temporal drift, and severe class imbalance. Most existing graph neural network (GNN) detectors rely on pairwise edges and correlation-driven learning, which limits their ability to represent high-order group interactions and makes them vulnerable to spurious environmental cues (e.g., hubs or temporal bursts) that correlate with labels but are not necessarily causal. We propose Causal-DHG, a dynamic hypergraph framework that integrates hypergraph modeling, causal intervention, and multi-view contrastive learning. First, we construct label-agnostic hyperedges from publicly available metadata to capture high-order group structures. Second, a Multi-Head Spatio-Temporal Hypergraph Attention encoder models group-wise dependencies and their temporal evolution. Third, a Causal Disentanglement Module decomposes representations into causal and environment-related factors using HSIC regularization, and a dictionary-based backdoor adjustment approximates the interventional prediction P(Y | do(C)) to suppress spurious correlations. Finally, we employ self-supervised multi-view contrastive learning with mild hypergraph augmentations to leverage unlabeled data and stabilize training. Experiments on YelpChi, Amazon, and DGraph-Fin show consistent gains in AUC/F1 over strong baselines such as CARE-GNN and PC-GNN, together with improved robustness under feature and structural perturbations. Full article
26 pages, 624 KB  
Article
Two-Stage Analysis for Supply Chain Disruptions Considering the Trade-Off Between Profit Maximization and Adaptability
by Tomohiro Hayashida, Ichiro Nishizaki, Shinya Sekizaki and Keigo Tsukuda
Mathematics 2025, 13(24), 4017; https://doi.org/10.3390/math13244017 - 17 Dec 2025
Viewed by 389
Abstract
Considering the trade-off between profit maximization and adaptability to supply chain disruptions, we examine herein the decision-making for configuration and distribution plans in a supply chain. Supply chain disruptions are caused by facility accidents and disasters. In this work, we investigate an optimal configuration and distribution plan in the supply chain with disruptions, including the opening of additional facilities while maintaining the optimum supply amounts to customers in the profit maximization plan when no such disruptions occur. Assuming the existence of uncertainties in demands and supplies, we formulate a two-stage model with a simple recourse, in which decisions on the supply chain configuration are made at the first stage. Decisions on the distribution are made at the second stage after the demands and supplies are realized. For such a configuration and distribution in the supply chain, we propose TSA-SCD (Two-Stage Analysis for Supply Chain Disruptions), a novel decision-making framework considering the trade-off between profit maximization and adaptability to supply chain disruptions. Accordingly, we perform numerical experiments with different degrees of disruptions to verify the effectiveness of the proposed decision method. Full article
34 pages, 4345 KB  
Article
A System Development Lifecycle Approach for the Development of Decision Support Systems for Operating Rooms Planning and Scheduling Using Mathematical Programming, Heuristics, and Discrete Event Simulation
by Justin Britt, Ahmed Azab and Mohammed Fazle Baki
Mathematics 2025, 13(24), 4016; https://doi.org/10.3390/math13244016 - 17 Dec 2025
Viewed by 324
Abstract
This paper describes an approach for developing decision support systems (DSS) for strategic and tactical operating room (OR) planning and scheduling problems. These problems involve assigning amounts of time and specific time blocks in the ORs to surgical specialties and/or surgeons. A four-phase iterative software development lifecycle (SDLC) approach is used to develop a DSS that has a graphical user interface, a data management system, and optimization and simulation systems that incorporate mathematical programming models, solution methods, and discrete event simulation models. Results from the computational experience show that the plans generated by the DSS utilize at least 78% of the available OR time on average and use the downstream recovery ward (RW) beds in a balanced way that never exceeds the number of available beds. Full article
(This article belongs to the Section E: Applied Mathematics)
18 pages, 653 KB  
Review
Chaos in Control Systems: A Review of Suppression and Induction Strategies with Industrial Applications
by Asad Shafique, Georgii Kolev, Oleg Bayazitov, Yulia Bobrova and Ekaterina Kopets
Mathematics 2025, 13(24), 4015; https://doi.org/10.3390/math13244015 - 17 Dec 2025
Viewed by 521
Abstract
In control systems, chaos is a naturally dualistic phenomenon: it can be both a beneficial resource to be exploited and a detrimental behavior to be avoided. The study examines two opposing paradigms: positive chaotic control, which aims to enhance performance, and negative chaos management, which aims to stabilize a system. Sophisticated suppression methods, including adaptive neural networks, sliding mode control, and model predictive control, can decrease convergence times. Controlled chaotic dynamics have significantly impacted the domain of embedded control systems. Specialized controller designs include fractal-based systems and hybrid switching systems that offer better control of chaotic behavior in many situations. The paper highlights the key issues related to chaos-based systems, such as real-time implementation requirements, parameter sensitivity, and safety. Recent research suggests an increasing interdependence between artificial intelligence, quantum computing, and sustainable technology. The synthesis shows that chaos control, initially a theoretical concept, has evolved into an engineering field with significant industrial impact. It also offers distinctive ideas for the design and improvement of complex control systems. Full article
17 pages, 1742 KB  
Article
Hessian-Enhanced Likelihood Optimization for Gravitational Wave Parameter Estimation: A Second-Order Approach to Machine Learning-Based Inference
by Zhuopeng Peng and Fan Zhang
Mathematics 2025, 13(24), 4014; https://doi.org/10.3390/math13244014 - 17 Dec 2025
Viewed by 368
Abstract
We introduce a new method for estimating gravitational wave parameters. This approach uses a second-order likelihood optimization framework built into a machine learning system (JimGW). Current methods often rely on first-order approximations, which can miss important details, while our method incorporates the full Hessian matrix of the likelihood function. This allows us to better capture the shape of the parameter space for gravitational waves. Our theoretical framework demonstrates that the trace of the Hessian matrix, when properly normalized, provides a coordinate-invariant measure of the local likelihood geometry that significantly enhances parameter recovery accuracy for gravitational wave sources. We test our second-order method on data from three gravitational wave events. Taking GW150914 as an example, the results show large gains in precision for parameter estimation, with accuracy improvements exceeding 93% across all inferred parameters compared to standard first-order implementations. We use the Jensen–Shannon divergence (JSD) to compare the resulting posterior distributions; the JSD values range from 0.366 to 0.948 and correlate directly with improved parameter recovery, as validated through injection studies. The method remains computationally efficient, with only a 20% increase in runtime, while producing seven times more effective samples. Our results show that machine learning methods using only first-order information can lead to systematic errors in gravitational wave parameter estimation. The incorporation of second-order corrections emerges not as an optional refinement but as a necessary component for achieving theoretically optimal inference. This also matters for ongoing gravitational wave analyses, future detector networks, and the broader application of machine learning methods in precision scientific measurement. Full article
(This article belongs to the Special Issue Optimization Theory, Algorithms and Applications)
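As a toy illustration of the central quantity, the trace of a likelihood Hessian can be estimated coordinate-wise with central second differences. The quadratic surrogate log-likelihood below is my own stand-in, not the JimGW pipeline, and `hessian_trace` is a name of my choosing.

```python
import numpy as np

def hessian_trace(f, x, h=1e-4):
    """Trace of the Hessian of a scalar function f at x, via central
    second differences along each coordinate axis."""
    x = np.asarray(x, dtype=float)
    tr = 0.0
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        tr += (f(x + e) - 2.0 * f(x) + f(x - e)) / h**2
    return tr

# Gaussian log-likelihood surrogate: -0.5 * x^T A x has Hessian -A.
A = np.diag([1.0, 4.0, 9.0])
loglike = lambda x: -0.5 * x @ A @ x
tr = hessian_trace(loglike, np.zeros(3))
print(tr)   # ≈ -(1 + 4 + 9) = -14
```

For a quadratic the estimate is exact up to round-off; for a real likelihood surface the same stencil gives the local curvature information that first-order methods discard.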
1 page, 135 KB  
Correction
Correction: Anastassiou, G.A. Composition of Activation Functions and the Reduction to Finite Domain. Mathematics 2025, 13, 3177
by George A. Anastassiou
Mathematics 2025, 13(24), 4013; https://doi.org/10.3390/math13244013 - 17 Dec 2025
Viewed by 154
Abstract
There was an error in the original publication [...] Full article
18 pages, 3588 KB  
Article
CE-FPN-YOLO: A Contrast-Enhanced Feature Pyramid for Detecting Concealed Small Objects in X-Ray Baggage Images
by Qianxiang Cheng, Zhanchuan Cai, Yi Lin, Jiayao Li and Ting Lan
Mathematics 2025, 13(24), 4012; https://doi.org/10.3390/math13244012 - 16 Dec 2025
Viewed by 972
Abstract
Accurate detection of concealed items in X-ray baggage images is critical for public safety in high-security environments such as airports and railway stations. However, small objects with low material contrast, such as plastic lighters, remain challenging to identify due to background clutter, overlapping contents, and weak edge features. In this paper, we propose a novel architecture called the Contrast-Enhanced Feature Pyramid Network (CE-FPN), designed to be integrated into the YOLO detection framework. CE-FPN introduces a contrast-guided multi-branch fusion module that enhances small-object representations by emphasizing texture boundaries and improving semantic consistency across feature levels. When incorporated into YOLO, the proposed CE-FPN significantly boosts detection accuracy on the HiXray dataset, achieving up to a +10.1% improvement in mAP@50 for the nonmetallic lighter class and an overall +1.6% gain, while maintaining low computational overhead. In addition, the model attains a mAP@50 of 84.0% under low-resolution settings and 87.1% under high-resolution settings, further demonstrating its robustness across different input qualities. These results demonstrate that CE-FPN effectively enhances YOLO’s capability in detecting small and concealed objects, making it a promising solution for real-world security inspection applications. Full article
10 pages, 244 KB  
Article
On p-Hardy–Rogers and p-Zamfirescu Contractions in Complete Metric Spaces: Existence and Uniqueness Results
by Zouaoui Bekri, Nicola Fabiano, Mohammed Ahmed Alomair and Abdulaziz Khalid Alsharidi
Mathematics 2025, 13(24), 4011; https://doi.org/10.3390/math13244011 - 16 Dec 2025
Viewed by 234
Abstract
In this paper, we introduce and investigate two generalized forms of classical contraction mappings, namely the p-Hardy–Rogers and p-Zamfirescu contractions. By incorporating the integer parameter p ≥ 1, these new definitions extend the traditional Hardy–Rogers and Zamfirescu conditions to the iterated mappings ħ^p. We establish fixed-point theorems ensuring both existence and uniqueness of fixed points for continuous self-maps on complete metric spaces that satisfy these p-contractive conditions. The proofs are constructed via geometric estimates on the iterates and by transferring the fixed point from the p-th iterate ħ^p to the original mapping ħ. Our results unify and broaden several well-known fixed-point theorems reported in previous studies, including those of Banach, Hardy–Rogers, and Zamfirescu as special cases. Full article
(This article belongs to the Section C: Mathematical Analysis)
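The transfer step, from a fixed point of the iterate ħ^p back to ħ itself, can be illustrated numerically with Picard iteration. The example below uses ħ = cos and p = 2; it is a generic demonstration of the principle, not the paper's proof technique.

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=10_000):
    """Picard iteration x_{n+1} = g(x_n) for a contraction g."""
    x = x0
    for _ in range(max_iter):
        y = g(x)
        if abs(y - x) < tol:
            return y
        x = y
    raise RuntimeError("did not converge")

h = math.cos                    # plays the role of ħ
h2 = lambda x: h(h(x))          # the iterate ħ^p with p = 2

xstar = fixed_point(h2, 0.0)    # fixed point of ħ^2 ...
assert abs(h(xstar) - xstar) < 1e-9   # ... is also fixed by ħ itself
print(xstar)                    # ≈ 0.7390851332 (the Dottie number)
```

Here ħ^2 has a unique fixed point, so ħ maps it to another fixed point of ħ^2, forcing ħ(x*) = x*; this is the uniqueness-based transfer argument in miniature.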