Mathematics, Volume 13, Issue 4 (February-2 2025) – 140 articles

Cover Story (view full-size image): Existing control chart pattern recognition (CCPR) methods for process monitoring were established under the normality assumption for quality variables. In today’s manufacturing, such monitoring not only allows for prompt corrections but also saves time and observation costs. It is challenging to accumulate enough sample resources to implement a CCPR method for process monitoring in the initial stage. Simulating all the in-control and out-of-control scenarios as the initial samples to implement CCPR methods can be a solution. The process can then be continually monitored using CCPR methods, and sample resources can be accumulated over time until a large sample has been achieved. Next, these CCPR methods must be retrained for process control based on the large sample. This study contributes new CCPR methods in this area. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF form, click on the "PDF Full-text" link and open it with the free Adobe Reader.
16 pages, 288 KiB  
Article
Donsker-Type Theorem for Numerical Schemes of Backward Stochastic Differential Equations
by Yi Guo and Naiqi Liu
Mathematics 2025, 13(4), 684; https://doi.org/10.3390/math13040684 - 19 Feb 2025
Viewed by 278
Abstract
This article studies the theoretical properties of the numerical scheme for backward stochastic differential equations, extending the relevant results of Briand et al. under more general assumptions. More precisely, the Brownian motion is approximated using the sum of a sequence of martingale differences or a sequence of i.i.d. Gaussian variables instead of the i.i.d. Bernoulli sequence. We cope with an adaptation problem of Y^n by defining a new process Ŷ^n; we can then obtain the Donsker-type theorem for numerical solutions using a method similar to that of Briand et al. Full article
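As an illustrative aside to the above abstract (not taken from the paper), the Donsker-type scaling behind such approximations can be simulated in a few lines: a scaled partial-sum process built from i.i.d. Gaussian increments, one of the schemes the authors allow, converges in law to Brownian motion.

```python
import numpy as np

# Minimal sketch: approximate Brownian motion on [0, T] by scaled partial
# sums of i.i.d. N(0, 1) increments (Donsker-type scaling); martingale
# differences would serve equally well in the paper's setting.
rng = np.random.default_rng(0)
n, T = 1000, 1.0
increments = rng.standard_normal(n)
t = np.linspace(0.0, T, n + 1)
W = np.concatenate(([0.0], np.cumsum(increments))) * np.sqrt(T / n)
# (t, W) is a piecewise-linear path whose law tends to that of Brownian
# motion on [0, T] as n grows.
```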
16 pages, 3007 KiB  
Article
Multilayer Neurolearning of Measurement-Information-Poor Hydraulic Robotic Manipulators with Disturbance Compensation
by Guichao Yang and Zhiying Shi
Mathematics 2025, 13(4), 683; https://doi.org/10.3390/math13040683 - 19 Feb 2025
Viewed by 302
Abstract
In order to further improve the tracking performance of multiple-degree-of-freedom serial electro-hydraulic robotic manipulators, a high-performance multilayer neurocontroller is proposed. In detail, multilayer neural networks are employed to approximate the smooth and non-smooth state-dependent modeling uncertainties. Meanwhile, extended state observers are utilized to estimate matched and unmatched time-varying disturbances. Moreover, these estimated values are incorporated into the synthesized controller to compensate for the modeling uncertainties. Significantly, the proposed controller avoids the “explosion of complexity” and is suitable for scenarios in which the joint angular velocities are not measurable. Additionally, sensor measurement noises can be reduced and input saturation nonlinearity is handled. Full article
19 pages, 319 KiB  
Article
σ-Martingales: Foundations, Properties, and a New Proof of the Ansel–Stricker Lemma
by Moritz Sohns
Mathematics 2025, 13(4), 682; https://doi.org/10.3390/math13040682 - 19 Feb 2025
Viewed by 311
Abstract
σ-martingales generalize local martingales through localizing sequences of predictable sets, which are essential in stochastic analysis and financial mathematics, particularly for arbitrage-free markets and portfolio theory. In this work, we present a new approach to σ-martingales that avoids using semimartingale characteristics. We develop all fundamental properties, provide illustrative examples, and establish the core structure of σ-martingales in a new, straightforward manner. This approach culminates in a new proof of the Ansel–Stricker lemma, which states that one-sided bounded σ-martingales are local martingales. This result, referenced in nearly every publication on mathematical finance, traditionally relies on the original French-language proof. We use this result to prove a generalization, which is essential for defining the general semimartingale model in mathematical finance. Full article
(This article belongs to the Special Issue Advances in Probability Theory and Stochastic Analysis)
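As background for the abstract’s phrase “localizing sequences of predictable sets”, the standard definition can be stated as follows (a common textbook formulation, not necessarily the paper’s exact conventions): a semimartingale X is a σ-martingale if there exist predictable sets D_n such that

```latex
\[
  D_n \subseteq D_{n+1}, \qquad
  \bigcup_{n \ge 1} D_n = \Omega \times \mathbb{R}_+, \qquad
  \mathbf{1}_{D_n} \cdot X \ \text{is a uniformly integrable martingale for each } n,
\]
```

where $\mathbf{1}_{D_n} \cdot X$ denotes the stochastic integral of the indicator of $D_n$ with respect to $X$.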
18 pages, 318 KiB  
Article
Dimension-Independent Convergence Rate for Adagrad with Heavy-Ball Momentum
by Kyunghun Nam and Sejun Park
Mathematics 2025, 13(4), 681; https://doi.org/10.3390/math13040681 - 19 Feb 2025
Viewed by 223
Abstract
In this study, we analyze the convergence rate of Adagrad with momentum for non-convex optimization problems. We establish the first dimension-independent convergence rate under the (L₀, L₁)-smoothness assumption, which is a generalization of the standard L-smoothness. We show the O(1/T) convergence rate under bounded noise in stochastic gradients, where the bound can scale with the current optimality gap and gradient norm. Full article
(This article belongs to the Special Issue Advanced Optimization Methods and Applications, 3rd Edition)
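For readers unfamiliar with the method being analyzed, a generic Adagrad step with heavy-ball momentum looks as follows (an illustrative Python sketch; the exact variant analyzed in the paper may differ in details such as where the momentum is applied):

```python
import numpy as np

def adagrad_hb_step(x, grad, acc, mom, lr=0.1, beta=0.9, eps=1e-8):
    """One Adagrad step with heavy-ball momentum (illustrative sketch).

    acc accumulates coordinate-wise squared gradients (the Adagrad part);
    mom is the heavy-ball momentum buffer.
    """
    acc = acc + grad**2
    precond = grad / (np.sqrt(acc) + eps)  # Adagrad-preconditioned gradient
    mom = beta * mom + precond             # heavy-ball averaging
    x = x - lr * mom
    return x, acc, mom
```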
34 pages, 3941 KiB  
Article
Smoothing the Subjective Financial Risk Tolerance: Volatility and Market Implications
by Wookjae Heo and Eunchan Kim
Mathematics 2025, 13(4), 680; https://doi.org/10.3390/math13040680 - 19 Feb 2025
Viewed by 255
Abstract
This study explores smoothing techniques to refine financial risk tolerance (FRT) data for the improved prediction of financial market indicators, including the Volatility Index and the S&P 500 ETF. Raw FRT data often contain noise and volatility, obscuring their relationship with market dynamics. Seven smoothing methods, including exponential smoothing, ARIMA, and the Kalman filter, were applied to derive smoothed mean and standard deviation values. Machine learning models, including support vector machines and neural networks, were used to assess predictive performance. The results demonstrate that smoothed FRT data significantly enhance prediction accuracy, with the smoothed standard deviation offering a more explicit representation of investor risk tolerance fluctuations. These findings highlight the value of smoothing techniques in behavioral finance, providing more reliable insights into market volatility and investor behavior. Smoothed FRT data hold potential for portfolio optimization, risk assessment, and financial decision-making, paving the way for more robust applications in financial modeling. Full article
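As an example of the kind of preprocessing compared in the study, simple exponential smoothing, one of the seven methods listed, can be sketched as follows (illustrative only; the smoothing parameter is a placeholder):

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.
    Higher alpha tracks the raw FRT series more closely; lower alpha smooths more."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed
```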
30 pages, 4158 KiB  
Article
Optimizing Automated Negotiation: Integrating Opponent Modeling with Reinforcement Learning for Strategy Enhancement
by Ya Zhang, Jinghua Wu and Ruiyang Cao
Mathematics 2025, 13(4), 679; https://doi.org/10.3390/math13040679 - 19 Feb 2025
Viewed by 225
Abstract
Agent-based automated negotiation aims to enhance decision-making processes by predefining negotiation rules, strategies, and objectives to achieve mutually acceptable agreements. However, most existing research primarily focuses on modeling the formal negotiation phase, while neglecting the critical role of opponent analysis during the pre-negotiation stage. Additionally, the impact of opponent selection and classification on strategy formulation is often overlooked. To address these gaps, we propose a novel automated negotiation framework that enables the agent to use reinforcement learning, enhanced by opponent modeling, for strategy optimization during the negotiation stage. Firstly, we analyze the node and network topology characteristics within an agent-based relational network to uncover the potential strength and types of relationships between negotiating parties. Then, these analysis results are used to inform strategy adjustments through reinforcement learning, where different negotiation strategies are selected based on the opponent’s profile. Specifically, agents’ expectations are adjusted according to relationship strength, ensuring that the expectations of negotiating parties are accurately represented across varying levels of relationship strength. Meanwhile, the relationship classification results are used to adjust the discount factor within a Q-learning negotiation algorithm. Finally, we conducted a series of experiments, and comparative analysis demonstrates that our proposed model outperforms existing negotiation frameworks in terms of negotiation efficiency, utility, and fairness. Full article
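The abstract’s coupling of opponent modeling to the learning rule can be pictured with the standard tabular Q-learning update, in which the discount factor is the quantity the authors adjust from the relationship classification. The mapping from relationship class to gamma below is invented for illustration; the paper’s exact rule is not reproduced here.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update on a 2D array Q[state, action]."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    return Q

# Hypothetical wiring: choose gamma from the opponent's relationship class.
GAMMA_BY_CLASS = {"strong": 0.95, "moderate": 0.85, "weak": 0.70}  # placeholder values
```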
9 pages, 1115 KiB  
Article
Reflexivity and Duplicability in Set Theory
by Vincenzo Manca
Mathematics 2025, 13(4), 678; https://doi.org/10.3390/math13040678 - 19 Feb 2025
Viewed by 249
Abstract
Set reflexivity and duplicability are considered by showing, with different proofs, their equivalence with Dedekind infiniteness. Then, an easy derivation of the Schröder–Bernstein theorem is presented; this fundamental result in the theory of cardinal numbers is usually based on arguments that are not very intuitive. Full article
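For reference, the Schröder–Bernstein theorem derived in the paper states (standard formulation):

```latex
\[
  \text{If there exist injections } f : A \hookrightarrow B
  \ \text{and}\ g : B \hookrightarrow A,
  \ \text{then there exists a bijection } h : A \to B .
\]
```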
24 pages, 4982 KiB  
Article
An Improved Salp Swarm Algorithm for Solving a Multi-Temperature Joint Distribution Route Optimization Problem
by Yimei Chang, Jiaqi Yu, Yang Wang and Xiaoling Xie
Mathematics 2025, 13(4), 677; https://doi.org/10.3390/math13040677 - 19 Feb 2025
Viewed by 268
Abstract
In order to address the diverse and personalized needs of consumers for fresh products, as well as to enhance the efficiency and safety of fresh product delivery, this paper proposes an integer programming model aimed at minimizing total distribution costs. The model takes into account the cold storage multi-temperature joint distribution mode, carbon emission costs, and practical constraints associated with the distribution of fresh products. To solve this model, an improved salp swarm algorithm (SSA) is developed. The feasibility and effectiveness of both the proposed model and the algorithm are demonstrated using the R110 instance from the Solomon benchmark set. Research findings indicate that, compared to traditional single-product temperature distribution modes, the multi-temperature joint distribution mode reduces total distribution costs and vehicle counts by 45.4% and 72.2%, respectively. Furthermore, total distribution costs increase with rising unit carbon tax prices; however, the rate of growth gradually diminishes. Additionally, a reduction in vehicle load capacity leads to a continuous rise in total delivery costs after a certain turning point. Compared to conventional SSAs and genetic algorithms, the proposed algorithm demonstrates superior performance in generating optimal multi-temperature joint distribution route schemes for fresh products. Full article
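For context, the baseline salp swarm update that the paper improves upon follows the standard leader–follower scheme (a sketch of the original SSA of Mirjalili et al., not of the improved algorithm):

```python
import numpy as np

def ssa_iteration(X, food, lb, ub, it, max_it, rng):
    """One standard SSA iteration: X holds the population (rows = salps),
    food is the best solution found so far; lb/ub are per-dimension bounds."""
    c1 = 2.0 * np.exp(-((4.0 * it / max_it) ** 2))  # exploration/exploitation schedule
    c2 = rng.random(X.shape[1])
    c3 = rng.random(X.shape[1])
    step = c1 * ((ub - lb) * c2 + lb)
    X[0] = np.where(c3 >= 0.5, food + step, food - step)  # leader salp
    for i in range(1, X.shape[0]):                        # follower chain
        X[i] = 0.5 * (X[i] + X[i - 1])
    return np.clip(X, lb, ub)
```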
12 pages, 376 KiB  
Article
Toeplitz Determinants for Inverse of Analytic Functions
by Sarem H. Hadi, Yahea Hashem Saleem, Alina Alb Lupaş, Khalid M. K. Alshammari and Abdullah Alatawi
Mathematics 2025, 13(4), 676; https://doi.org/10.3390/math13040676 - 19 Feb 2025
Viewed by 231
Abstract
Estimated bounds for Carathéodory functions in the complex domain are applied to demonstrate sharp limits for the inverses of analytic functions. Determining these values is considered a more difficult task than finding the values for the analytic functions themselves. The challenge lies in finding sharp estimates for the functionals. While some recent studies have made progress in calculating the sharp boundary values of Hankel determinants associated with inverse functions, the Toeplitz determinant is yet to be addressed. Our research aims to estimate the determinants of the Toeplitz matrix, which is also linked to inverse functions. We also focus on computing these determinants for familiar classes of analytic functions (pre-starlike, starlike, convex, symmetric-starlike) while investigating coefficient values. The study also improves the estimation of the determinants for the pre-starlike class presented by Li and Gou. Full article
37 pages, 13135 KiB  
Article
A Novel Improved Binary Optimization Algorithm and Its Application in FS Problems
by Boyuan Wu and Jia Luo
Mathematics 2025, 13(4), 675; https://doi.org/10.3390/math13040675 - 18 Feb 2025
Viewed by 324
Abstract
With the rapid advancement of artificial intelligence (AI) technology, vast amounts of data for training AI algorithms have become indispensable. However, in the realm of big data technology, the high feature dimensions of the data frequently give rise to overfitting during training, thereby diminishing model accuracy. To enhance model prediction accuracy, feature selection (FS) methods have arisen with the goal of eliminating redundant features within datasets. In this paper, a highly efficient FS method with advanced FS performance, called EMEPO, is proposed. It combines three learning strategies on the basis of the Parrot Optimizer (PO) to better ensure FS performance. Firstly, a novel exploitation strategy is introduced, which integrates randomness, optimality, and Levy flight to enhance the algorithm’s local exploitation capabilities, reduce execution time in solving FS problems, and enhance classification accuracy. Secondly, a multi-population evolutionary strategy is introduced, which takes into account the diversity of individuals based on fitness values to optimize the balance between the exploration and exploitation stages of the algorithm, ultimately improving the algorithm’s capability to explore the FS solution space globally. Finally, a unique exploration strategy is introduced, focusing on individual diversity learning to boost population diversity in solving FS problems. This approach improves the algorithm’s capacity to avoid locally suboptimal feature subsets. The EMEPO-based FS method is tested on 23 FS datasets spanning low-, medium-, and high-dimensional data. The results show exceptional performance in classification accuracy, feature reduction, execution efficiency, convergence speed, and stability, indicating the high promise of the EMEPO-based FS method as an effective and efficient approach to feature selection. Full article
(This article belongs to the Special Issue Advances in Optimization Algorithms and Its Applications)
17 pages, 726 KiB  
Article
Optimal Control Problem and Its Solution in Class of Feasible Control Functions by Advanced Model of Control Object
by Askhat Diveev and Elena Sofronova
Mathematics 2025, 13(4), 674; https://doi.org/10.3390/math13040674 - 18 Feb 2025
Viewed by 272
Abstract
This paper is devoted to the solution of the optimal control problem. The obtained control should be optimal in terms of quality criteria and, at the same time, feasible when implemented in the control object. To solve the optimal control problem in the class of feasible control functions, an advanced mathematical model of the control object is used. Firstly, a universal stabilisation system for motion along any trajectory from some class is developed via symbolic regression. Then, the obtained stabilisation system is inserted into the right-hand side of the control object model in place of the control vector. A reference model with a free control vector in its right-hand side is added to the model; thus, the advanced mathematical model of the control object is obtained. After this, the optimal control problem is solved with the advanced mathematical model of the control object. The optimal control problem is stated in the classical form, in which the control is a function of time. Here, the control function is sought for the reference model. The preliminary design of the universal stabilisation system for some class of trajectories allows the optimal control problem for the control object to be solved in a reasonable time frame. The proposed methodology is computationally tested on a model of the spatial motion of a quadcopter and on a group of two-wheeled mobile robots with a differential drive. The results of the experiments show that the universal stabilisation system ensures the stabilisation of the motion of the objects along optimal trajectories, which are not known beforehand but are obtained as a result of solving the problem with the advanced model. Full article
20 pages, 7369 KiB  
Article
Predicting T Cell Mitochondria Hijacking from Tumor Single-Cell RNA Sequencing Data with MitoR
by Anna Jiang, Chengshang Lyu and Yue Zhao
Mathematics 2025, 13(4), 673; https://doi.org/10.3390/math13040673 - 18 Feb 2025
Viewed by 330
Abstract
T cells play a crucial role in the immune system by identifying and eliminating tumor cells. Malignant cancer cells can hijack mitochondria (MT) from nearby T cells, affecting their metabolism and weakening their immune functions. This phenomenon, observed through co-culture systems and fluorescent labeling, has been further explored with the development of the MERCI algorithm, which predicts T cell MT hijacking in cancer cells using single-cell RNA (scRNA) sequencing data. However, MERCI is limited by its reliance on a linear model and its inability to handle data sparsity. To address these challenges, we introduce MitoR, a computational algorithm using a Poisson–Gamma mixture model to predict T cell MT hijacking from tumor scRNA data. In performance comparisons, MitoR outperformed MERCI on the gold-standard benchmark datasets scRNA-bench1 (top AUROC: 0.761, top accuracy: 0.769) and scRNA-bench2 (top AUROC: 0.730, top accuracy: 0.733). Additionally, MitoR showed an average 4.14% increase in AUROC and an average 3.86% increase in accuracy over MERCI across all rank strategies and simulated datasets. Finally, MitoR revealed T cell MT hijacking events in two real-world tumor datasets (basal cell carcinoma and esophageal squamous-cell carcinoma), highlighting their role in tumor immune evasion. Full article
(This article belongs to the Special Issue Mathematical Models and Computer Science Applied to Biology)
27 pages, 577 KiB  
Article
Approximate Description of Indefinable Granules Based on Classical and Three-Way Concept Lattices
by Hongwei Wang, Huilai Zhi and Yinan Li
Mathematics 2025, 13(4), 672; https://doi.org/10.3390/math13040672 - 18 Feb 2025
Viewed by 261
Abstract
Granule description is a fundamental problem in granular computing. However, how to describe indefinable granules is still an open, interesting, and important problem. The main objective of this paper is to give a preliminary solution to this problem. Before proceeding, the framework of approximate description is introduced. That is, any indefinable granule is characterized by an ordered pair of formulas, which form an interval set, where the first formula is the β-prior approximate optimal description and the second formula is the α-prior approximate optimal description. More concretely, given an indefinable granule, by exploring the description of its lower approximate granule, its β-prior approximate optimal description is obtained. Likewise, by consulting the description of its upper approximate granule, its α-prior approximate optimal description can also be derived. Following this idea, the descriptions of indefinable granules are investigated. Firstly, ∧-approximate descriptions of indefinable granules are investigated based on the classical concept lattice, and (∧,∨)-approximate descriptions of indefinable granules are given via object pictorial diagrams. And then, it is revealed from some examples that the classical concept lattice is no longer effective and negative attributes must be taken into consideration. Therefore, a three-way concept lattice is adopted instead of the classical concept lattice to study (∧,¬)-approximate descriptions and (∧,∨,¬)-approximate descriptions of indefinable granules. Finally, some discussions are presented to show the differences and similarities between our study and existing ones. Full article
(This article belongs to the Special Issue Recent Advances and Prospects in Formal Concept Analysis (FCA))
18 pages, 3015 KiB  
Article
Improved Hadamard Decomposition and Its Application in Data Compression in New-Type Power Systems
by Zhi Ding, Tianyao Ji and Mengshi Li
Mathematics 2025, 13(4), 671; https://doi.org/10.3390/math13040671 - 18 Feb 2025
Viewed by 272
Abstract
The proliferation of renewable energy sources, flexible loads, and advanced measurement devices in new-type power systems has led to an unprecedented surge in power signal data, posing significant challenges for data management and analysis. This paper presents an improved Hadamard decomposition framework for efficient power signal compression, specifically targeting voltage and current signals which constitute foundational measurements in power systems. First, we establish theoretical guarantees for decomposition uniqueness through orthogonality and non-negativity constraints, thereby ensuring consistent and reproducible signal reconstruction, which is critical for power system applications. Second, we develop an enhanced gradient descent algorithm incorporating adaptive regularization and early stopping mechanisms, achieving superior convergence performance in optimizing the Hadamard approximation. The experimental results with simulated and field data demonstrate that the proposed scheme significantly reduces data volume while maintaining critical features in the restored data. In addition, compared with other existing compression methods, this scheme exhibits remarkable advantages in compression efficiency and reconstruction accuracy, particularly in capturing transient characteristics critical for power quality analysis. Full article
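One common formulation of Hadamard decomposition approximates a matrix as the elementwise product of two low-rank factors, M ≈ (U1 V1ᵀ) ∘ (U2 V2ᵀ). The following plain gradient step on the squared Frobenius error is a generic sketch of that objective only; the paper’s enhanced algorithm adds adaptive regularization and early stopping, which are not reproduced here.

```python
import numpy as np

def hadamard_gd_step(M, U1, V1, U2, V2, lr=1e-3):
    """One gradient-descent step for min ||M - (U1 @ V1.T) * (U2 @ V2.T)||_F^2."""
    P, Q = U1 @ V1.T, U2 @ V2.T
    R = M - P * Q                        # residual; '*' is the Hadamard product
    gP, gQ = -2.0 * R * Q, -2.0 * R * P  # dL/dP and dL/dQ by the chain rule
    return (U1 - lr * gP @ V1, V1 - lr * gP.T @ U1,
            U2 - lr * gQ @ V2, V2 - lr * gQ.T @ U2)
```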
26 pages, 2326 KiB  
Article
A Probabilistic Linguistic Large-Group Emergency Decision-Making Method Based on the Louvain Algorithm and Group Pressure Model
by Zhiying Wang, Hanjie Liu and Ruohan Ma
Mathematics 2025, 13(4), 670; https://doi.org/10.3390/math13040670 - 18 Feb 2025
Viewed by 273
Abstract
To tackle preference conflicts and uncertainty in large-group emergency decision-making (LGEDM), this study proposes a probabilistic linguistic LGEDM method integrating the Louvain algorithm and group pressure model. First, expert weights are determined based on a social trust network, and the Louvain algorithm is employed for expert clustering, reducing the complexity of large-scale decision information. Second, a group pressure model is introduced to dynamically adjust expert preferences, enhancing consensus and decision consistency. Third, probabilistic linguistic term sets (PLTSs) are utilized to represent fuzzy and uncertain information, while attribute weights are determined by incorporating both subjective and objective factors, ensuring scientific rigor in decision-making. Finally, an improved TODIM (an acronym in Portuguese for Interactive and Multicriteria Decision-Making) method is adopted to account for the loss aversion behavior of decision-makers (DMs), enabling a more accurate characterization of psychological decision-making traits. The experimental results demonstrate that the proposed method outperforms existing approaches in terms of decision efficiency, group consensus, and result robustness, offering effective support for emergency decision-making in crisis situations. Full article
16 pages, 470 KiB  
Article
Distributed Estimation for ℓ0-Constrained Quantile Regression Using Iterative Hard Thresholding
by Zhihe Zhao and Heng Lian
Mathematics 2025, 13(4), 669; https://doi.org/10.3390/math13040669 - 18 Feb 2025
Viewed by 248
Abstract
Distributed frameworks for statistical estimation and inference have become a critical toolkit for analyzing massive data efficiently. In this paper, we present distributed estimation for high-dimensional quantile regression with an ℓ0 constraint using iterative hard thresholding (IHT). We propose a communication-efficient distributed estimator which is linearly convergent to the true parameter up to the statistical precision of the model, despite the fact that the check loss minimization problem with an ℓ0 constraint is neither strongly smooth nor convex. The distributed estimator we develop can achieve the same convergence rate as the estimator based on the whole data set under suitable assumptions. In our simulations, we illustrate the convergence of the estimators under different settings and also demonstrate the accuracy of nonzero parameter identification. Full article
(This article belongs to the Section D1: Probability and Statistics)
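The core iteration the abstract refers to can be sketched in centralized form: a (sub)gradient step on the check loss followed by hard thresholding to the s largest coefficients. This is an illustrative single-machine sketch (s, the step size, and the iteration count are placeholders); the paper’s contribution is the distributed, communication-efficient version.

```python
import numpy as np

def iht_quantile(X, y, s, tau=0.5, lr=0.01, iters=500):
    """l0-constrained quantile regression via iterative hard thresholding.
    Assumes s < number of columns of X."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        resid = y - X @ beta
        # Subgradient of the check loss rho_tau(r) = r * (tau - 1{r < 0})
        grad = -X.T @ (tau - (resid < 0).astype(float)) / n
        beta -= lr * grad
        idx = np.argsort(np.abs(beta))  # ascending by magnitude
        beta[idx[: p - s]] = 0.0        # hard threshold: keep the s largest
    return beta
```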
46 pages, 9513 KiB  
Article
Multi-Strategy Improved Binary Secretarial Bird Optimization Algorithm for Feature Selection
by Fuqiang Chen, Shitong Ye, Jianfeng Wang and Jia Luo
Mathematics 2025, 13(4), 668; https://doi.org/10.3390/math13040668 - 18 Feb 2025
Viewed by 282
Abstract
With the rapid development of large model technology, data storage and collection are very important for improving the accuracy of model training, and Feature Selection (FS) methods can greatly eliminate redundant features in the data warehouse and improve the interpretability of the model, which makes FS particularly important in the field of large model training. To better reduce redundant features in data warehouses, this paper proposes an enhanced Secretarial Bird Optimization Algorithm (SBOA), called BSFSBOA, which combines three learning strategies. First, for the problem of insufficient population diversity in SBOA, the best-rand exploration strategy is proposed, which utilizes the randomness and optimality of random and optimal individuals to effectively improve the population diversity of the algorithm. Second, to address the imbalance in the exploration/exploitation phase of SBOA, the segmented balance strategy is proposed, which improves the balance by segmenting the individuals in the population, targeting individuals of different natures with different degrees of exploration and exploitation, and thereby improves the quality of the FS subset. Finally, for the problem of insufficient exploitation performance of SBOA, a four-role exploitation strategy is proposed, which strengthens the effective exploitation ability of the algorithm and enhances the classification accuracy of the FS subset through different degrees of guidance from the four roles of individuals in the population. Subsequently, the proposed BSFSBOA-based FS method is applied to 36 FS problems involving low-, medium-, and high-dimensional data. The experimental results show that, compared to SBOA, BSFSBOA improves classification accuracy by more than 60%, ranks first in feature subset size, and obtains the lowest runtime, confirming that the BSFSBOA-based FS method is a robust FS method with efficient solution performance, high stability, and high practicality. Full article
(This article belongs to the Special Issue Optimization Theory, Algorithms and Applications)
19 pages, 325 KiB  
Article
Existence and Uniqueness of Fixed-Point Results in Non-Solid C*-Algebra-Valued Bipolar b-Metric Spaces
by Annel Thembinkosi Bokodisa and Maggie Aphane
Mathematics 2025, 13(4), 667; https://doi.org/10.3390/math13040667 - 18 Feb 2025
Viewed by 234
Abstract
In this monograph, motivated by the work of Aphane, Gaba, and Xu, we explore fixed-point theory within the framework of C*-algebra-valued bipolar b-metric spaces, characterized by a non-solid positive cone. We define and analyze (FHGH)-contractions, utilizing positive monotone functions to extend classical contraction principles. Key contributions include the existence and uniqueness of fixed points for mappings satisfying generalized contraction conditions. The interplay between the non-solidness of the cone, the C*-algebra structure, and the completeness of the space is central to our results. We apply our results to establish the uniqueness of solutions to Fredholm integral equations and differential equations, and we extend the Ulam–Hyers stability problem to non-solid cones. This work advances the theory of metric spaces over Banach algebras, providing foundational insights with applications in operator theory and quantum mechanics. Full article
34 pages, 942 KiB  
Article
Discrete Information Acquisition in Financial Markets
by Jingrui Pan, Shancun Liu, Qiang Zhang and Yaodong Yang
Mathematics 2025, 13(4), 666; https://doi.org/10.3390/math13040666 - 18 Feb 2025
Viewed by 269
Abstract
We study investors’ information acquisition strategies under arbitrary and discrete sets of information precision and derive conditions for the existence of equilibria. When investors face an information choice from general precision sets, despite their homogeneity, the information market can exhibit asymmetric corner equilibria, where some investors acquire low-precision information and others acquire high-precision information. Conversely, in the case of high-precision sets, there is a symmetric and unique interior equilibrium in which all informed agents opt for the same precision level. Furthermore, the impact of information technologies on price informativeness is uncertain: an improvement in information quality tends to reduce price informativeness because more investors free ride on prices, whereas a reduction in information costs enhances price informativeness by encouraging more investors to acquire information. Our analysis has implications for the prevailing trend of robo-advising and the herding behavior of analysts. Full article
54 pages, 1295 KiB  
Review
Selective Reviews of Bandit Problems in AI via a Statistical View
by Pengjie Zhou, Haoyu Wei and Huiming Zhang
Mathematics 2025, 13(4), 665; https://doi.org/10.3390/math13040665 - 18 Feb 2025
Viewed by 274
Abstract
Reinforcement Learning (RL) is a widely researched area in artificial intelligence that focuses on teaching agents decision-making through interactions with their environment. A key subset includes multi-armed bandit (MAB) and stochastic continuum-armed bandit (SCAB) problems, which model sequential decision-making under uncertainty. This review outlines the foundational models and assumptions of bandit problems, explores non-asymptotic theoretical tools like concentration inequalities and minimax regret bounds, and compares frequentist and Bayesian algorithms for managing exploration–exploitation trade-offs. Additionally, we explore K-armed contextual bandits and SCAB, focusing on their methodologies and regret analyses. We also examine the connections between SCAB problems and functional data analysis. Finally, we highlight recent advances and ongoing challenges in the field. Full article
(This article belongs to the Special Issue Advances in Statistical AI and Causal Inference)
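As one concrete example of the frequentist index policies covered by the review, UCB1 balances exploration and exploitation by adding a confidence radius to each arm’s empirical mean (an illustrative sketch of the classic algorithm):

```python
import numpy as np

def ucb1(pull, n_arms, horizon):
    """UCB1: play each arm once, then pick argmax of mean + sqrt(2 ln t / n_i).
    `pull(a)` returns a stochastic reward for arm a."""
    counts = np.ones(n_arms)
    means = np.array([pull(a) for a in range(n_arms)], dtype=float)
    for t in range(n_arms + 1, horizon + 1):
        a = int(np.argmax(means + np.sqrt(2.0 * np.log(t) / counts)))
        reward = pull(a)
        counts[a] += 1
        means[a] += (reward - means[a]) / counts[a]  # incremental mean update
    return means, counts
```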
29 pages, 1462 KiB  
Review
PID vs. Model-Based Control for the Double Integrator Plus Dead-Time Model: Noise Attenuation and Robustness Aspects
by Mikulas Huba, Pavol Bistak, Damir Vrancic and Mingwei Sun
Mathematics 2025, 13(4), 664; https://doi.org/10.3390/math13040664 - 18 Feb 2025
Cited by 1 | Viewed by 298
Abstract
One of the most important contributions of modern control theory from the 1960s was the separation of the dynamics of state-space controller design from the dynamics of state reconstruction. However, because modern control theory predates the mass spread of digital controllers and was predominantly focused on analog solutions that avoided modeling dead-time elements, it cannot effectively cover all aspects that emerged with the development of programmable devices and embedded systems. The same historical limitations also characterized the development of proportional-integral-derivative (PID) controllers, which began several decades earlier. Although they were used to control time-delayed systems, these solutions, which are most commonly used in practice today, can also be regarded as simplified disturbance observers that avoid the direct use of dead-time models. Using the example of controlling systems described by a double integrator plus dead-time (DIPDT) model, this article presents a novel controller design that significantly improves control performance compared to conventional PID controllers. The new control structure is a combination of a generalized state-space controller, interpreted as a higher-order derivative controller, and a predictive disturbance observer that uses the inversion of the double integrator dynamics and dead-time models. It enables the elimination of the windup effect that is typical for PID control and extends the separation of the dynamics of setpoint tracking from the dynamics of state and disturbance reconstruction to time-delayed processes as well. The presented solution offers several orders of magnitude lower amplification of measurement noise compared to traditional PID control. On the other hand, it offers high robustness and a stable transient response despite the unstable internal feedback of processes like the magnetic levitation system. The improvements achieved are so substantial that they call into question the classical solutions with PID controllers, at least for DIPDT models. In addition to the comparison with PID control, the relationship with traditional state-space controllers, which today form the basis of active disturbance rejection control (ADRC), is also discussed and examined for processes including dead time. Full article
(This article belongs to the Section C2: Dynamical Systems)
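For comparison with the model-based structures discussed above, the conventional discrete PID law that serves as the baseline can be written in a few lines (a textbook sketch, not the paper’s higher-order derivative controller):

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One step of a textbook discrete PID controller.
    state = (integral, previous_error); returns (control, new_state)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)
```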
16 pages, 326 KiB  
Article
Modified Information Criterion for Testing Changes in the Inverse Gaussian Degradation Process
by Jiahua Qiao, Xia Cai and Meiqi Zhang
Mathematics 2025, 13(4), 663; https://doi.org/10.3390/math13040663 - 18 Feb 2025
Viewed by 226
Abstract
The Inverse Gaussian process is a useful stochastic process for modeling the monotonous degradation process of a certain component. Because degradation processes often exhibit multi-stage characteristics owing to internal degradation mechanisms and external environmental factors, a change-point Inverse Gaussian process is studied in this paper. A modified information criterion method is applied to test for the existence of the change point and to estimate its location. A reliability function is derived based on the proposed method. Simulations are conducted to show the performance of the proposed method. As a result, the procedure outperforms the existing procedure with regard to test power and consistency. Finally, the procedure is applied to hydraulic piston pump data to demonstrate its practical application. Full article
(This article belongs to the Special Issue Reliability Analysis and Statistical Computing)
13 pages, 215 KiB  
Article
The Krasnoselskii–Mann Method for Approximation of Coincidence Points of Set-Valued Mappings
by Alexander J. Zaslavski
Mathematics 2025, 13(4), 662; https://doi.org/10.3390/math13040662 - 18 Feb 2025
Viewed by 219
Abstract
In the present paper, we use the Krasnoselskii–Mann method in order to obtain approximate coincidence points of set-valued mappings in metric spaces with a hyperbolic structure. Full article
(This article belongs to the Special Issue Applied Functional Analysis and Applications: 2nd Edition)
25 pages, 7252 KiB  
Article
An Efficient Target-to-Area Classification Strategy with a PIP-Based KNN Algorithm for Epidemic Management
by Jong-Shin Chen, Ruo-Wei Hung and Cheng-Ying Yang
Mathematics 2025, 13(4), 661; https://doi.org/10.3390/math13040661 - 17 Feb 2025
Viewed by 296
Abstract
During a widespread epidemic, a large portion of the population faces an increased risk of contracting infectious diseases such as COVID-19, monkeypox, and pneumonia. These outbreaks often trigger cascading effects, significantly impacting society and healthcare systems. To contain the spread, the Centers for Disease Control and Prevention (CDC) must monitor infected individuals (targets) and their geographical locations (areas) as a basis for allocating medical resources. This scenario is a Target-to-Area (TTA) problem. Previous research introduced the Point-In-Polygon (PIP) technique to address multi-target and single-area TTA problems. PIP technology relies on an area’s boundary points to determine whether a target is within that region. However, when dealing with multi-target, multi-area TTA problems, PIP alone may have limitations. The K-Nearest Neighbors (KNN) algorithm presents a promising alternative, but its classification accuracy depends on the availability of sufficient samples, i.e., known targets and their corresponding geographical areas. When sample data are limited, the effectiveness of KNN is constrained, potentially delaying the CDC’s ability to track and manage outbreaks. For this problem, this study proposes an improved approach that integrates PIP and KNN technologies while introducing area boundary points as additional samples. This enhancement aims to improve classification accuracy and mitigate the impact of insufficient sample data on epidemic tracking and management. Full article
(This article belongs to the Special Issue Graph Theory: Advanced Algorithms and Applications, 2nd Edition)
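The PIP primitive that the strategy combines with KNN is typically implemented by ray casting: count how many polygon edges a horizontal ray from the query point crosses; an odd count means the point lies inside (an illustrative sketch of the standard algorithm, not necessarily the paper’s exact implementation):

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting point-in-polygon test.
    polygon is a list of (x, y) boundary points in order."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):  # edge straddles the ray's height
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:        # crossing lies to the right of the point
                inside = not inside
    return inside
```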
18 pages, 1973 KiB  
Article
EHAFF-NET: Enhanced Hybrid Attention and Feature Fusion for Pedestrian ReID
by Jun Yang, Yan Wang, Haizhen Xie, Jiayue Chen, Shulong Sun and Xiaolan Zhang
Mathematics 2025, 13(4), 660; https://doi.org/10.3390/math13040660 - 17 Feb 2025
Viewed by 323
Abstract
This study addresses the cross-scenario challenges in pedestrian re-identification for public safety, including perspective differences, lighting variations, occlusions, and vague feature expressions. We propose a pedestrian re-identification method called EHAFF-NET, which integrates an enhanced hybrid attention mechanism and multi-branch feature fusion. We introduce the Enhanced Hybrid Attention Module (EHAM), which combines channel and spatial attention mechanisms. The channel attention mechanism uses self-attention to capture long-range dependencies and extracts multi-scale local features with convolutional kernels and channel shuffling. The spatial attention mechanism aggregates features using global average and max pooling to enhance spatial representation. To tackle issues like perspective differences, lighting changes, and occlusions, we incorporate the Multi-Branch Feature Integration module. The global branch captures overall information with global average pooling, while the local branch integrates features from different layers via the Diverse-Depth Feature Integration Module (DDFIM) to extract multi-scale semantic information. It also extracts features based on human proportions, balancing high-level semantics and low-level details. Experiments show that our model achieves a mAP of 92.5% and an R1 of 94.7% on the Market-1501 dataset, a mAP of 85.4% and an R1 of 88.6% on the DukeMTMC-reID dataset, and a mAP of 49.1% and an R1 of 73.8% on the MSMT17 dataset, demonstrating significant accuracy advantages over several advanced models. Full article
31 pages, 2778 KiB  
Article
Mining High-Efficiency Itemsets with Negative Utilities
by Irfan Yildirim
Mathematics 2025, 13(4), 659; https://doi.org/10.3390/math13040659 - 17 Feb 2025
Viewed by 243
Abstract
High-efficiency itemset mining has recently emerged as a new problem in itemset mining. An itemset is classified as a high-efficiency itemset if its utility-to-investment ratio meets or exceeds a specified efficiency threshold. The goal is to discover all high-efficiency itemsets in a given database. However, solving the problem is computationally complex, due to the large search space involved. To effectively address this problem, several algorithms have been proposed that assume that databases contain only positive utilities. However, real-world databases often contain negative utilities. When the existing algorithms are applied to such databases, they fail to discover the complete set of itemsets, due to their limitations in handling negative utilities. This study proposes a novel algorithm, MHEINU (mining high-efficiency itemset with negative utilities), designed to correctly mine a complete set of high-efficiency itemsets from databases that also contain negative utilities. MHEINU introduces two upper-bounds to efficiently and safely reduce the search space. Additionally, it features a list-based data structure to streamline the mining process and minimize costly database scans. Experimental results on various datasets containing negative utilities showed that MHEINU effectively discovered the complete set of high-efficiency itemsets, performing well in terms of runtime, number of join operations, and memory usage. Additionally, MHEINU demonstrated good scalability, making it suitable for large-scale datasets. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
19 pages, 2250 KiB  
Article
Short-Term Prediction of Traffic Flow Based on the Comprehensive Cloud Model
by Jianhua Dong
Mathematics 2025, 13(4), 658; https://doi.org/10.3390/math13040658 - 17 Feb 2025
Viewed by 329
Abstract
Short-term traffic flow prediction plays a crucial role in transportation systems by describing the time evolution of traffic flow over short periods, such as seconds, minutes, or hours. It helps people make informed decisions about their routes to avoid congested areas and enables traffic management departments to quickly adjust road capacities and implement effective traffic management strategies. In recent years, numerous studies have been conducted in this area. However, there is a significant gap in research regarding the uncertainty of short-term traffic flow, which negatively impacts the accuracy and robustness of traffic flow prediction models. In this paper, we propose a novel comprehensive entropy-cloud model that includes two algorithms: the Fused Cloud Model Inference based on DS Evidence Theory (FCMI-DS) and the Cloud Model Inference and Prediction based on Compensation Mechanism (CMICM). These algorithms are designed to address the short-term traffic flow prediction problem. By utilizing the cloud model of historical flow data to guide future short-term predictions, our approach improves prediction accuracy and stability. Additionally, we provide relevant mathematical proofs to support our methodology. Full article
29 pages, 872 KiB  
Article
Cumulative Sum Schemes for Monitoring the Ratio of Two Correlated Normal Variables in Short Production Runs with Fixed and Variable Sampling Interval Strategies: Application in Wheat Seed Processing
by Wei Yang, Xueting Ji, Hongxing Cai and Jiujun Zhang
Mathematics 2025, 13(4), 657; https://doi.org/10.3390/math13040657 - 17 Feb 2025
Viewed by 251
Abstract
Short-run production is frequently used in manufacturing due to technological advancements, and it is integral to Agriculture 4.0. In addition, monitoring multiple variables in short production runs (SPR) is often essential. For example, balancing the ratio of coating components of wheat seeds is crucial for the growth and yield of wheat. Therefore, in this paper, two one-sided cumulative sum (CUSUM) schemes for monitoring the ratio of two correlated normal variables in SPR are proposed. Furthermore, performance metrics are evaluated, and the effects of various parameters on the schemes are analyzed through Monte Carlo simulations. To improve the detection efficiency of the proposed schemes, a variable sampling interval (VSI) strategy is considered. The performance of these schemes under different sampling intervals is simulated. The results indicate that the monitoring performances of the schemes utilizing the VSI strategy surpass those of both the scheme without the VSI strategy and the Shewhart scheme. A comprehensive sensitivity analysis was conducted on the VSI strategy scheme to ensure its robustness. The analysis examined the effects of parameter variations, data contamination, and data correlation on the scheme’s performance. The proposed schemes were applied to an experiment monitoring the nutrient composition ratio of wheat seed coating, and the results show that the schemes achieved the anticipated monitoring performance and possess practical application value. Full article
(This article belongs to the Special Issue Stochastic Processes and Its Applications)
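The one-sided CUSUM recursion underlying the proposed schemes accumulates deviations above a reference value k and signals when the cumulative sum exceeds a control limit h (a generic sketch; the paper’s statistics are built from the ratio of two correlated normal variables, and k and h below are placeholders):

```python
def upper_cusum(stats, k=0.5, h=5.0):
    """One-sided (upper) CUSUM: C_t = max(0, C_{t-1} + z_t - k).
    Returns the indices at which the scheme signals (C_t > h)."""
    c, alarms = 0.0, []
    for t, z in enumerate(stats):
        c = max(0.0, c + z - k)
        if c > h:
            alarms.append(t)
            c = 0.0  # restart monitoring after a signal
    return alarms
```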
16 pages, 4632 KiB  
Article
Interval Uncertainty Analysis for Wheel–Rail Contact Load Identification Based on First-Order Polynomial Chaos Expansion
by Shengwen Yin, Haotian Xiao and Lei Cao
Mathematics 2025, 13(4), 656; https://doi.org/10.3390/math13040656 - 17 Feb 2025
Viewed by 158
Abstract
Traditional methods for identifying wheel–rail contact loads are based on deterministic models, in which uncertainties such as material inhomogeneity and geometric tolerance are not considered. For wheel–rail contact load analysis with uncertainties, a novel method named the Interval First-Order Polynomial Chaos Expansion method (IFOPCE) is proposed to propagate the uncertainty in wheel–rail contact systems. In IFOPCE, polynomial chaos expansion (PCE) is first utilized to approximate the relationship between strain responses, wheel–rail loads, and uncertain variables. The expansion coefficients are calculated using Latin Hypercube Sampling (LHS). To efficiently decouple the wheel–rail loads, the relationship between load and strain is established based on the first-order PCE. By using IFOPCE, the variation range of wheel–rail contact loads can be effectively obtained. Numerical examples show that IFOPCE achieves high computational accuracy and that the uncertainties have a great effect on the identification of wheel–rail loads. Full article
27 pages, 5252 KiB  
Article
Mathematical Modeling and Clustering Framework for Cyber Threat Analysis Across Industries
by Fahim Sufi and Musleh Alsulami
Mathematics 2025, 13(4), 655; https://doi.org/10.3390/math13040655 - 17 Feb 2025
Cited by 1 | Viewed by 317
Abstract
The escalating prevalence of cyber threats across industries underscores the urgent need for robust analytical frameworks to understand their clustering, prevalence, and distribution. This study addresses the challenge of quantifying and analyzing relationships between 95 distinct cyberattack types and 29 industry sectors, leveraging a dataset of 9261 entries filtered from over 1 million news articles. Existing approaches often fail to capture nuanced patterns across such complex datasets, justifying the need for innovative methodologies. We present a rigorous mathematical framework integrating chi-square tests, Bayesian inference, Gaussian Mixture Models (GMMs), and Spectral Clustering. This framework identifies key patterns, such as 1150 Zero-Day Exploits clustered in the IT and Telecommunications sector, 732 Advanced Persistent Threats (APTs) in Government and Public Administration, and Malware with a posterior probability of 0.287 dominating the Healthcare sector. Temporal analyses reveal periodic spikes, such as in Zero-Day Exploits, and a persistent presence of Social Engineering Attacks, with 1397 occurrences across industries. These findings are quantified using significance scores (mean: 3.25 ± 0.7) and posterior probabilities, providing evidence for industry-specific vulnerabilities. This research offers actionable insights for policymakers, cybersecurity professionals, and organizational decision makers by equipping them with a data-driven understanding of sector-specific risks. The mathematical formulations are replicable and scalable, enabling organizations to allocate resources effectively and develop proactive defenses against emerging threats. By bridging mathematical theory and real-world cybersecurity challenges, this study delivers impactful contributions toward safeguarding critical infrastructure and digital assets. Full article
(This article belongs to the Special Issue Analytical Frameworks and Methods for Cybersecurity, 2nd Edition)
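Of the methods in the framework, the Gaussian Mixture Model step is the most directly reproducible; below is a minimal scikit-learn sketch (the feature matrix is synthetic, standing in for the engineered attack-by-industry features, and the component count is a placeholder):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.random((200, 4))  # placeholder features, e.g. counts and significance scores

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)       # hard cluster assignments
posterior = gmm.predict_proba(X)  # posterior probability of each component per row
```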