AppliedMath, Volume 6, Issue 1 (January 2026) – 18 articles

Cover Story: The human brain is a paradigmatic complex system, whose structure and dynamics emerge across multiple spatial and temporal scales. Fractal geometry offers a powerful mathematical framework to quantify this complexity beyond classical Euclidean descriptions. This review brings together the theoretical foundations, methodological advances, and neuroscientific applications of fractal dimension analysis. We highlight how fractal measures sensitively capture structural and functional brain organization across health, development, aging, neurodegenerative disease, and altered states of consciousness. By linking neural integration and differentiation with multiscale organization, fractal dimension emerges as a unifying descriptor of brain complexity and a promising quantitative biomarker at the interface of mathematics, physics, and clinical neuroscience.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF, click the "PDF Full-text" link and use the free Adobe Reader to open it.
32 pages, 399 KB  
Article
Recovering Einstein’s Mature View of Gravitation: A Dynamical Reconstruction Grounded in the Equivalence Principle
by Jaume de Haro and Emilio Elizalde
AppliedMath 2026, 6(1), 18; https://doi.org/10.3390/appliedmath6010018 - 21 Jan 2026
Abstract
The historical and conceptual foundations of General Relativity are revisited, putting the main focus on the physical meaning of the invariant ds², the Equivalence Principle, and the precise interpretation of spacetime geometry. It is argued that Albert Einstein initially sought a dynamical formulation in which ds² encoded the gravitational effects, without invoking curvature as a physical entity. The now more familiar geometrical interpretation—identifying gravitation with spacetime curvature—gradually emerged through his collaboration with Marcel Grossmann and the adoption of the Ricci tensor in 1915. Nonetheless, in his 1920 Leiden lecture, Einstein explicitly reinterpreted spacetime geometry as the state of a physical medium—an “ether” endowed with metrical properties but devoid of mechanical substance—thereby actually rejecting geometry as an independent ontological reality. Building upon this mature view, gravitation is reconstructed from the Weak Equivalence Principle, understood as the exact compensation between inertial and gravitational forces acting on a body in a uniform gravitational field. From this fundamental principle, together with an extension of Fermat's Principle to massive objects, the invariant ds² is obtained, first in the static case, where the gravitational potential modifies the flow of proper time. Then, by applying the Lorentz transformation to this static invariant, its general form is derived for the case of matter in motion. The resulting invariant reproduces the relativistic form of Newton's second law in proper time and coincides with the weak-field limit of General Relativity in the harmonic gauge. This approach restores the operational meaning of Einstein's theory: spacetime geometry represents dynamical relations between physical measurements, rather than the substance of spacetime itself.
By deriving the gravitational modification of the invariant directly from the Weak Equivalence Principle, Fermat's Principle, and Lorentz invariance, this formulation clarifies the physical origin of the metric structure and resolves long-standing conceptual issues—such as the recurrent hole argument—while recovering all the empirical successes of General Relativity within a coherent and sound Machian framework. Full article
(This article belongs to the Section Deterministic Mathematics)
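The weak-field limit invoked at the end of the abstract has a standard textbook form, reproduced here for orientation only (it is not quoted from the paper): with Newtonian potential φ, in the harmonic gauge,

```latex
ds^2 = \left(1 + \frac{2\varphi}{c^2}\right) c^2\,dt^2
     - \left(1 - \frac{2\varphi}{c^2}\right)\left(dx^2 + dy^2 + dz^2\right).
```

In the static case this gives dτ ≈ (1 + φ/c²) dt for a clock at rest, which is the sense in which the potential modifies the flow of proper time.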
22 pages, 1859 KB  
Article
Assessing Cost Efficiency Thresholds in Fragmented Agriculture: A Gamma-Based Model of the Trade-Off Between Unit and Total Parcel Costs
by Elena Sánchez Arnau, Antonia Ferrer Sapena, Maria Carmen Cárcel-Mas, Claudia Sánchez Arnau and Enrique A. Sánchez Pérez
AppliedMath 2026, 6(1), 17; https://doi.org/10.3390/appliedmath6010017 - 20 Jan 2026
Abstract
Parcel size strongly influences agricultural production costs, and combining spatial and economic information within a mathematical setting helps to clarify this relationship. In this study, we introduce a Gamma-based stochastic framework to integrate actual parcel size distributions into cost estimates, an approach that, to our knowledge, has not been applied in this context. Using a representative traditional orchard system as a case study, parcel sizes (characterized by strong right skewness) are modelled with a Gamma distribution; for highly fragmented landscapes, a truncated Gamma on (0.01,1] ha yields a mean parcel area of about 0.255 ha. Results show that parcel-size heterogeneity substantially affects expected per-parcel costs; for example, calibrating ploughing at 800 EUR/ha leads to an average of ∼160 EUR/parcel, whereas intensive vegetable harvesting at 5000 EUR/ha reaches ∼2100 EUR/parcel. In our simulation, in which the main parameters have been roughly fixed with the aim of showing the methodology, results are given on an expected costs scale relative to parcel area and operation intensity. Overall, the framework shows how parcel-size distributions condition cost estimates and provides a transferable basis for comparative analyses, while acknowledging limitations related to the area-only specification. Full article
(This article belongs to the Section Probabilistic & Statistical Mathematics)
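The cost logic the abstract describes (expected per-parcel cost = unit cost per hectare times expected parcel area under a truncated Gamma law) can be sketched as follows. The shape and scale values below are illustrative assumptions, not the paper's calibration, so the resulting mean area will not match the reported 0.255 ha exactly:

```python
import numpy as np

# Sketch of the Gamma-based cost logic (shape/scale are assumptions,
# not the paper's fitted parameters).
rng = np.random.default_rng(0)

# Right-skewed parcel areas (ha), truncated to (0.01, 1] as in the abstract.
raw = rng.gamma(shape=0.9, scale=0.4, size=200_000)
areas = raw[(raw > 0.01) & (raw <= 1.0)]
mean_area = areas.mean()

# Expected per-parcel cost = unit cost (EUR/ha) x expected parcel area (ha).
cost_plough = 800.0 * mean_area     # ploughing, EUR/parcel
cost_harvest = 5000.0 * mean_area   # intensive harvesting, EUR/parcel

print(f"mean parcel area: {mean_area:.3f} ha")
print(f"ploughing:  ~{cost_plough:.0f} EUR/parcel")
print(f"harvesting: ~{cost_harvest:.0f} EUR/parcel")
```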
27 pages, 1619 KB  
Article
Uncertainty-Aware Multimodal Fusion and Bayesian Decision-Making for DSS
by Vesna Antoska Knights, Marija Prchkovska, Luka Krašnjak and Jasenka Gajdoš Kljusurić
AppliedMath 2026, 6(1), 16; https://doi.org/10.3390/appliedmath6010016 - 20 Jan 2026
Abstract
Uncertainty-aware decision-making increasingly relies on multimodal sensing pipelines that must fuse correlated measurements, propagate uncertainty, and trigger reliable control actions. This study develops a unified mathematical framework for multimodal data fusion and Bayesian decision-making under uncertainty. The approach integrates adaptive Covariance Intersection (aCI) for correlation-robust sensor fusion, a Gaussian state–space backbone with Kalman filtering, heteroskedastic Bayesian regression with full posterior sampling via an affine-invariant MCMC sampler, and a Bayesian likelihood-ratio test (LRT) coupled to a risk-sensitive proportional–derivative (PD) control law. Theoretical guarantees are provided by bounding the state covariance under stability conditions, establishing convexity of the aCI weight optimization on the simplex, and deriving a Bayes-risk-optimal decision threshold for the LRT under symmetric Gaussian likelihoods. A proof-of-concept agro-environmental decision-support application is considered, where heterogeneous data streams (IoT soil sensors, meteorological stations, and drone-derived vegetation indices) are fused to generate early-warning alarms for crop stress and to adapt irrigation and fertilization inputs. The proposed pipeline reduces predictive variance and sharpens posterior credible intervals (up to 34% narrower 95% intervals and 44% lower NLL/Brier score under heteroskedastic modeling), while a Bayesian uncertainty-aware controller achieves 14.2% lower water usage and 35.5% fewer false stress alarms compared to a rule-based strategy. The framework is mathematically grounded yet domain-independent, providing a probabilistic pipeline that propagates uncertainty from raw multimodal data to operational control actions, and can be transferred beyond agriculture to robotics, signal processing, and environmental monitoring applications. Full article
(This article belongs to the Section Probabilistic & Statistical Mathematics)
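A minimal sketch of plain Covariance Intersection, the correlation-robust fusion rule underlying the paper's adaptive variant (the adaptation itself is not reproduced here). The weight ω is chosen by a grid scan over [0, 1], which suffices because, as the abstract notes, the weight optimization is convex:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two estimates with unknown cross-correlation (plain CI;
    the paper's adaptive variant, aCI, is not reproduced here)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
        x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
        if best is None or np.trace(P) < best[2]:
            best = (x, P, np.trace(P))
    return best[0], best[1]

x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.2, 0.1]), np.diag([4.0, 1.0])
xf, Pf = covariance_intersection(x1, P1, x2, P2)
```

Because the scan includes the endpoints ω = 0 and ω = 1, the fused covariance is never worse (in trace) than the better of the two inputs.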
11 pages, 291 KB  
Article
A Maple Implementation for Deterministically Certifying Isolated Simple Zeros of Over-Determined Polynomial Systems with Interval Arithmetic and Its Applications
by Xiaojie Dou, Jin-San Cheng and Junyi Wen
AppliedMath 2026, 6(1), 15; https://doi.org/10.3390/appliedmath6010015 - 19 Jan 2026
Abstract
This paper presents a Maple implementation of an interval verification method for identifying isolated simple zeros in square polynomial systems. Compared to the known MATLAB (R2019b) implementation, the Maple-based approach achieves significantly higher numerical accuracy. The implementation enables polynomial evaluation at specific points to yield results with very small absolute values—sufficiently precise to reach error bounds computed through theoretical formulations for moderate-sized systems. This advancement allows the deterministic certification of isolated simple zeros in over-determined polynomial systems containing approximately 10,000 complex zeros. As a practical demonstration, the method is further applied to rigorously verify isolated multiple zeros in smaller-scale polynomial systems. Full article
(This article belongs to the Section Computational and Numerical Mathematics)
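The certification principle can be illustrated in one dimension with an interval Newton step: if N(X) = m − f(m)/f′(X) lands strictly inside X and f′(X) excludes zero, then X contains exactly one simple zero of f. This float-only toy omits the directed (outward) rounding that a rigorous implementation such as the paper's requires:

```python
import math

# Float-only illustration of interval certification of a simple zero
# (no directed rounding, unlike a rigorous Maple/interval implementation).

def interval_newton_step(f, fprime_interval, lo, hi):
    """One interval Newton step N(X) = m - f(m)/F'(X) on X = [lo, hi]."""
    m = 0.5 * (lo + hi)
    dlo, dhi = fprime_interval(lo, hi)
    assert dlo > 0 or dhi < 0, "derivative interval must exclude zero"
    q = [f(m) / dlo, f(m) / dhi]
    return m - max(q), m - min(q)

f = lambda x: x * x - 2.0                  # simple zero at sqrt(2)
fp = lambda lo, hi: (2.0 * lo, 2.0 * hi)   # f'(X) = 2X on X = [lo, hi], lo > 0

lo, hi = 1.3, 1.5
nlo, nhi = interval_newton_step(f, fp, lo, hi)

# N(X) strictly inside X certifies a unique simple zero of f in X.
certified = lo < nlo and nhi < hi
```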
19 pages, 515 KB  
Article
On Families of Elliptic Curves E_{p,q}: y² = x³ − pqx That Intersect the Same Line L_{a,b}: y = (a/b)x of Rational Slope
by Eldar Sultanow, Anja Jeschke, Amir Darwish Tfiha, Madjid Tehrani and William J. Buchanan
AppliedMath 2026, 6(1), 14; https://doi.org/10.3390/appliedmath6010014 - 14 Jan 2026
Abstract
We investigate a special family of elliptic curves, namely E_{p,q}: y² = x³ − pqx, where p < q are odd primes. We study sufficient conditions on p and q for the corresponding elliptic curve to have non-trivial rational points. The number of sufficient conditions reduces to six. These six conditions relate to Polignac's conjecture, the prime gap problem, the twin prime conjecture, and results of Green and Sawhney and of Friedlander and Iwaniec. Additionally, we analyze the structure of the six sufficient conditions on p and q by visualizing them graphically for p, q ≤ 6997. The visualizations exhibit arc structures, quasi-linear arc segments, tile structures, and sparsely populated structures. Full article
(This article belongs to the Section Deterministic Mathematics)
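The geometry behind the title can be made concrete: substituting y = (a/b)x into y² = x³ − pqx leaves x² − (a/b)²x − pq = 0, so a non-trivial rational intersection exists exactly when (a/b)⁴ + 4pq is a rational square. A small brute-force sketch over integer slopes (b = 1, a simplifying assumption; the paper allows general rational slopes):

```python
from fractions import Fraction
from math import isqrt

def rational_point(p, q, a_max=30):
    """Search integer slopes a (line y = a*x) for a non-trivial rational
    point on E_{p,q}: y^2 = x^3 - p*q*x.  Substituting y = a*x leaves
    x^2 - a^2*x - p*q = 0, so a^4 + 4*p*q must be a perfect square."""
    for a in range(1, a_max + 1):
        disc = a**4 + 4 * p * q
        r = isqrt(disc)
        if r * r == disc:
            x = Fraction(a * a + r, 2)   # larger quadratic root
            return a, x, a * x
    return None

p, q = 5, 13
a, x, y = rational_point(p, q)
assert y * y == x**3 - p * q * x         # the point really lies on the curve
```

For p = 5, q = 13 the search finds slope a = 8, since 8⁴ + 4·65 = 66², giving the rational point (65, 520).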
16 pages, 421 KB  
Article
A Note on Tutz’s Pairwise Separation Estimator
by Alexander Robitzsch
AppliedMath 2026, 6(1), 13; https://doi.org/10.3390/appliedmath6010013 - 13 Jan 2026
Abstract
The Rasch model has the desirable property that item parameter estimation can be separated from person parameter estimation. This implies that no assumptions about the ability distribution are required when estimating item difficulties. Pairwise estimation approaches in the Rasch model exploit this principle by estimating item difficulties solely from sample proportions of respondents who answer item i correctly and item j incorrectly. A recent contribution by Tutz introduced the pairwise separation estimator (TPSE) for the more general class of homogeneous monotone (HM) models, extending the idea of pairwise estimation to this broader setting. The present article examines the asymptotic behavior of the TPSE within the Rasch model as a special case of the HM framework. Both analytical derivations and a numerical illustration show that the TPSE yields asymptotically biased item parameter estimates, rendering the estimator inconsistent even for a large number of items. Consequently, the TPSE cannot be recommended for empirical applications. Full article
(This article belongs to the Section Probabilistic & Statistical Mathematics)
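The pairwise principle the abstract refers to can be seen in miniature: under the Rasch model, the log ratio of the pair counts n_ij (item i correct, item j incorrect) and n_ji converges to the difficulty difference b_j − b_i, so difficulties are recoverable, up to an additive constant, from pair proportions alone. The synthetic, exactly model-conforming counts below illustrate that principle; this is not an implementation of Tutz's TPSE:

```python
import math

# Pairwise Rasch logic in miniature: with n_ij the count of respondents
# answering item i correctly and item j incorrectly, the Rasch model gives
#   log(n_ij / n_ji) -> b_j - b_i   (b = item difficulty),
# so difficulties are recoverable (up to an additive constant) from pair
# counts alone.  The counts here are synthetic and exactly model-conforming.

b_true = [-1.0, 0.0, 0.5, 1.5]
J = len(b_true)

def n(i, j):
    """Expected pair counts; any positive scaling constant works."""
    return 1000.0 * math.exp((b_true[j] - b_true[i]) / 2.0)

# Average the pairwise log-ratios (the j = i term is zero).
b_hat = [sum(math.log(n(j, i) / n(i, j)) for j in range(J)) / J
         for i in range(J)]
```

The recovered values equal the true difficulties shifted by a common constant, so all pairwise differences are reproduced exactly.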
19 pages, 984 KB  
Article
Enhanced Moving Object Detection in Dynamic Video Environments Using a Truncated Mean and Stationary Wavelet Transform
by Oussama Boufares, Mohamed Boussif and Noureddine Aloui
AppliedMath 2026, 6(1), 12; https://doi.org/10.3390/appliedmath6010012 - 12 Jan 2026
Abstract
In this paper, we present a novel method for background estimation and updating in video sequences, utilizing an innovative approach that combines an intelligent truncated mean, the stationary wavelet transform (SWT), and advanced thresholding techniques. This method aims to significantly enhance the accuracy of moving object detection by mitigating the impact of outliers and adapting background estimation to dynamic scene conditions. The proposed approach begins with a robust initial background estimation, followed by moving object detection through frame subtraction and gamma correction. Segmentation is then performed using SWT, coupled with adaptive thresholding methods, including hard and soft thresholding. These techniques work in tandem to effectively reduce noise while preserving critical details. Finally, the background is selectively updated to integrate new information from static regions while excluding moving objects, ensuring a precise and robust detection system. Experimental evaluation on the CDnet 2014 and SBI 2015 datasets demonstrates that the proposed method improves the F1 score by 12.5 percentage points (from 0.7511 to 0.8765), reduces false positives by up to 65%, and achieves higher PSNR values compared to GMM_Zivk, SuBSENSE, and SC_SOBS. These results confirm the robustness of the hybrid approach based on truncated mean and SWT in dynamic and challenging environments. Full article
(This article belongs to the Section Computational and Numerical Mathematics)
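The truncated-mean idea in the abstract reduces, per pixel, to sorting the values observed across frames, discarding a fraction at both extremes, and averaging the rest, which keeps transient moving objects out of the background estimate. A toy sketch (the SWT segmentation stage of the paper is omitted; the scene, trim fraction, and threshold are illustrative assumptions):

```python
import numpy as np

# Toy truncated-mean background estimate: per pixel, sort values across
# frames, discard the extremes, and average.  (The paper couples this with
# SWT-based segmentation; that stage is omitted here.)

rng = np.random.default_rng(1)
T, H, W = 30, 8, 8
frames = np.full((T, H, W), 100.0) + rng.normal(0.0, 2.0, (T, H, W))
frames[5:9, 2:5, 2:5] = 250.0            # a transient moving object (outliers)

def truncated_mean_background(frames, trim=0.2):
    s = np.sort(frames, axis=0)
    k = int(trim * frames.shape[0])
    return s[k:frames.shape[0] - k].mean(axis=0)

bg = truncated_mean_background(frames)
fg_mask = np.abs(frames[6] - bg) > 25.0  # frame subtraction + threshold
```

Because the object is present in only 4 of 30 frames, its values fall inside the trimmed extremes and the background stays near the true static level.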
33 pages, 10634 KB  
Article
Examining the Nature and Dimensions of Artificial Intelligence Incidents: A Machine Learning Text Analytics Approach
by Wullianallur Raghupathi, Jie Ren and Tanush Kulkarni
AppliedMath 2026, 6(1), 11; https://doi.org/10.3390/appliedmath6010011 - 9 Jan 2026
Abstract
As artificial intelligence systems proliferate across critical societal domains, understanding the nature, patterns, and evolution of AI-related harms has become essential for effective governance. Despite growing incident repositories, systematic computational analysis of AI incident discourse remains limited, with prior research constrained by small samples, single-method approaches, and absence of temporal analysis spanning major capability advances. This study addresses these gaps through a comprehensive multi-method text analysis of 3494 AI incident records from the OECD AI Policy Observatory, spanning January 2014 through October 2024. Six complementary analytical approaches were applied: Latent Dirichlet Allocation (LDA) and Non-negative Matrix Factorization (NMF) topic modeling to discover thematic structures; K-Means and BERTopic clustering for pattern identification; VADER sentiment analysis for emotional framing assessment; and LIWC psycholinguistic profiling for cognitive and communicative dimension analysis. Cross-method comparison quantified categorization robustness across all four clustering and topic modeling approaches. Key findings reveal dramatic temporal shifts and systematic risk patterns. Incident reporting increased 4.6-fold following ChatGPT’s November 2022 release (from 12.0 to 95.9 monthly incidents), accompanied by vocabulary transformation from embodied AI terminology (facial recognition, autonomous vehicles) toward generative AI discourse (ChatGPT, hallucination, jailbreak). Six robust thematic categories emerged consistently across methods: autonomous vehicles (84–89% cross-method alignment), facial recognition (66–68%), deepfakes, ChatGPT/generative AI, social media platforms, and algorithmic bias. Risk concentration is pronounced: 49.7% of incidents fall within two harm categories (system safety 29.1%, physical harms 20.6%); private sector actors account for 70.3%; and 48% occur in the United States.
Sentiment analysis reveals physical safety incidents receive notably negative framing (autonomous vehicles: −0.077; child safety: −0.326), while policy and generative AI coverage trend positive (+0.586 to +0.633). These findings have direct governance implications. The thematic concentration supports sector-specific regulatory frameworks—mandatory audit trails for hiring algorithms, simulation testing for autonomous vehicles, transparency requirements for recommender systems, accuracy standards for facial recognition, and output labeling for generative AI. Cross-method validation demonstrates which incident categories are robust enough for standardized regulatory classification versus those requiring context-dependent treatment. The rapid emergence of generative AI incidents underscores the need for governance mechanisms responsive to capability advances within months rather than years. Full article
(This article belongs to the Section Computational and Numerical Mathematics)
21 pages, 20689 KB  
Article
Spatial Prediction of Forest Fire Risk in Guangdong Province Using Multi-Source Geospatial Data and Sparrow Search Algorithm-Optimized XGBoost
by Huiying Wang, Chengwei Yu and Jiahuan Wang
AppliedMath 2026, 6(1), 10; https://doi.org/10.3390/appliedmath6010010 - 6 Jan 2026
Abstract
Forest fires pose escalating threats to ecological security and public safety in Guangdong Province. This study presents a novel machine learning framework for fire occurrence prediction by synergistically integrating multi-source geospatial data. Utilizing Moderate-resolution Imaging Spectroradiometer (MODIS) active fire detections from 2014 to 2023, we quantified historical fire patterns and incorporated four categories of predisposing factors: meteorological variables, topographic attributes, vegetation characteristics, and anthropogenic activities. Spatiotemporal clustering dynamics were characterized via kernel density estimation and spatial autocorrelation analysis. An XGBoost classifier, hyperparameter-optimized through the Sparrow Search Algorithm (SSA), achieved a predictive accuracy of 90.4%, with performance evaluated through precision, recall, and F1-score. Risk zoning maps generated from predicted probabilities were validated against independent fire records from 2019 to 2024. Results reveal pronounced spatial heterogeneity, with high-risk zones concentrated in northern and western mountainous areas, constituting 29% of the provincial territory. Critical driving factors include slope gradient, proximity to roads and rivers, temperature, population density, and elevation. This robust predictive framework furnishes a scientific foundation for spatially-explicit fire prevention strategies and optimized resource allocation in key high-risk jurisdictions, notably Qingyuan, Shaoguan, Zhanjiang, and Zhaoqing. Full article
20 pages, 8576 KB  
Article
Dynamical System Stability Criteria Based on the Frobenius Norm
by Dragana Cvetković and Ernest Šanca
AppliedMath 2026, 6(1), 9; https://doi.org/10.3390/appliedmath6010009 - 5 Jan 2026
Abstract
It is well known that the location of the Jacobian matrix spectrum in the open left half of the complex plane implies the local asymptotic stability of a nonlinear dynamical system, but it is equally well known that for large matrices, computing the eigenvalues just to check their position is computationally prohibitive. Instead, it is recommended to check whether a given matrix belongs to the H-matrix class and has negative diagonal entries. Since confirming the H-matrix property is computationally costly, the preference is to work with its subclasses, which are defined by simpler conditions. In this paper, we develop and investigate a new subclass of H-matrices via the Frobenius matrix norm, which generalizes the recently introduced classes. We support its significance with real-life examples and clarify its relationship to some well-known block H-matrices based on the Euclidean matrix norm. The main novelty of this paper is a simple tool for the situation in which a fast and inexpensive answer about the stability of a dynamical system is required and the system matrix has a natural block structure: we check whether this structure, along with the additional condition of negative diagonal elements, ensures stability. This is especially important when the matrix does not belong to any previously known H-matrix subclass. Full article
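The kind of cheap sufficient condition the paper generalizes can be illustrated with the simplest H-matrix subclass, strictly diagonally dominant (SDD) matrices: SDD plus negative diagonal entries places every Gershgorin disc in the open left half-plane, certifying stability without any eigenvalue computation. (The paper's Frobenius-norm subclass and its block versions are not reproduced here.)

```python
import numpy as np

def sdd_stable(A):
    """Strict diagonal dominance + negative diagonal => every Gershgorin
    disc lies in the open left half-plane => asymptotic stability.
    SDD is the simplest H-matrix subclass; the Frobenius-norm class in
    the paper above generalizes this kind of cheap sufficient test."""
    A = np.asarray(A, dtype=float)
    d = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - d
    return bool(np.all(np.diag(A) < 0) and np.all(d > off))

A = np.array([[-5.0, 1.0, 2.0],
              [0.5, -3.0, 1.0],
              [1.0, 1.0, -4.0]])

assert sdd_stable(A)
# Cross-check against the (expensive) spectral criterion:
assert np.all(np.linalg.eigvals(A).real < 0)
```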
25 pages, 4375 KB  
Article
Conceptual Proposal for a Computational Platform to Assist in the Learning and Cognitive Development Process of Children with Autism Spectrum Disorder: A Solution Based on a Multicriteria Structure
by David de Oliveira Costa, Cleyton Mário de Oliveira Rodrigues, Ana Claudia Souza, Carlo Marcelo Revoredo da Silva, Andrei Bonamigo, Miguel Ângelo Lellis Moreira, Marcos dos Santos, Carlos Francisco Simões Gomes and Daniel Augusto de Moura Pereira
AppliedMath 2026, 6(1), 8; https://doi.org/10.3390/appliedmath6010008 - 4 Jan 2026
Abstract
This study proposes a structured multicriteria approach to assist professionals in the selection of appropriate computing platforms for children diagnosed with Autism Spectrum Disorder, particularly those between 4 and 6 years of age. Recognizing the learning limitations and reduced attention span typical of this group, the study addresses a gap in the current selection process, which is often based on professional experience rather than objective and measurable criteria. A Systematic Literature Review (SLR), protocol analysis, and problem-structuring methods identified essential evaluation criteria that incorporated key dimensions of development and behavior. These include personalization and adaptation, interactivity and engagement, monitoring and feedback, communication and language, cognitive and social development, usability and accessibility, and security and privacy. Based on these dimensions, a multicriteria method was applied to rank the alternatives represented by the technologies in question. The proposed framework enables a rigorous and axiomatic comparison of platforms based on structured criteria aligned with established intervention protocols, such as ABA, DIR/Floortime, JASPER, and SCERTS. The results validate the model’s effectiveness in highlighting the most appropriate technological tools for this audience. Although the scope is limited to children aged 4 to 6 years, the proposed methodology can be adapted for use with broader age groups. This work contributes to inclusive education by providing a replicable, justifiable framework for selecting digital learning tools that may influence clinical recommendations and family engagement. Full article
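Since the abstract does not spell out which multicriteria method was applied, a simple weighted-sum (SAW) scoring over the seven criteria it lists conveys the ranking idea. All platform names, scores, and weights below are hypothetical:

```python
# Minimal weighted-sum (SAW) ranking sketch over the seven criteria listed
# in the abstract.  Platforms, scores (0-10), and weights are hypothetical;
# the paper's actual multicriteria method may differ.

criteria = ["personalization", "interactivity", "monitoring", "communication",
            "cognitive_social", "usability", "security"]
weights = [0.20, 0.15, 0.10, 0.15, 0.20, 0.10, 0.10]   # sum to 1

platforms = {
    "PlatformA": [8, 7, 6, 9, 8, 7, 6],
    "PlatformB": [6, 9, 7, 6, 7, 9, 8],
    "PlatformC": [5, 6, 8, 5, 6, 6, 9],
}

# Weighted score per platform, then rank from best to worst.
scores = {name: sum(w * s for w, s in zip(weights, vals))
          for name, vals in platforms.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```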
16 pages, 1235 KB  
Review
Foundations and Clinical Applications of Fractal Dimension in Neuroscience: Concepts and Perspectives
by Francisco J. Esteban and Eva Vargas
AppliedMath 2026, 6(1), 7; https://doi.org/10.3390/appliedmath6010007 - 4 Jan 2026
Abstract
Fractal geometry offers a mathematical framework to quantify the complexity of brain structure and function. The fractal dimension (FD) captures self-similarity and irregularity across spatial and temporal scales, surpassing the limits of traditional Euclidean metrics. In neuroscience, FD serves as a key descriptor of the brain’s hierarchical organization—from dendritic arborization and cortical folding to neural dynamics measured by diverse neuroimaging techniques. This review summarizes theoretical foundations and methodological advances in FD estimation, including the box-counting approach for imaging, and Higuchi’s and Katz’s algorithms for electrophysiological data, addressing reliability and reproducibility issues. In addition, we illustrate how fractal analysis characterizes brain complexity in health and disease. Clinical applications include detecting white matter alterations in multiple sclerosis, atypical maturation in intrauterine growth restriction, reduced cortical complexity in Alzheimer’s disease, and altered neuroimaging patterns in schizophrenia. Emerging evidence highlights FD’s potential for distinguishing consciousness states and quantifying neural integration and differentiation. Bridging mathematics, physics, and neuroscience, fractal analysis provides a quantitative lens on the brain’s multiscale organization and pathological deviations. FD thus stands as both a theoretical descriptor and a translational biomarker whose standardization could advance precision diagnostics and understanding of neural dynamics. Full article
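Of the estimators named in the review, Higuchi's algorithm for time series is compact enough to sketch: average normalized curve lengths L(k) at lags k and read the fractal dimension off the slope of log L(k) against log(1/k). A quick sanity check on a straight line (FD = 1) and on white noise (FD near 2):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension: mean normalized curve length L(k) at
    lag k, then the slope of log L(k) versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks, Ls = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            sub = x[m::k]
            if len(sub) < 2:
                continue
            # rescale each subseries length to the full series extent
            norm = (N - 1) / ((len(sub) - 1) * k)
            lengths.append(np.abs(np.diff(sub)).sum() * norm / k)
        ks.append(k)
        Ls.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(ks)), np.log(Ls), 1)
    return slope

fd_line = higuchi_fd(np.linspace(0.0, 1.0, 1000))   # a line is 1-dimensional
fd_noise = higuchi_fd(np.random.default_rng(0).standard_normal(2000))
```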
13 pages, 1432 KB  
Article
Online Hyperparameter Tuning in Bayesian Optimization for Material Parameter Identification: An Application in Strain-Hardening Plasticity for Automotive Structural Steel
by Teng Long, Leyu Wang, Cing-Dao Kan and James D. Lee
AppliedMath 2026, 6(1), 6; https://doi.org/10.3390/appliedmath6010006 - 3 Jan 2026
Abstract
Effective identification of strain-hardening parameters is essential for predictive plasticity models used in automotive applications. However, the performance of Bayesian optimization depends strongly on the kernel hyperparameters of the Gaussian-process surrogate, which are often kept fixed. In this work, we propose a likelihood-based online hyperparameter strategy within Bayesian optimization to identify strain-hardening parameters in plasticity. Specifically, we used the rational polynomial strain-hardening scheme for the plasticity model to fit the force vs. displacement response of automotive structural steel in tension. An in-house Bayesian optimization framework was first developed, and an online hyperparameter tuning algorithm was then incorporated to advance the optimization scheme. The optimization histories obtained with fixed and online-tuned hyperparameters were compared. For the same number of iterations, online hyperparameter adaptation reduced the final residual by approximately 20.4%, 24.0%, and 3.8% for Specimens 1–3, respectively, demonstrating that the proposed strategy improves both the efficiency and the quality of strain-hardening parameter identification. The strategy may readily be extended to other materials and identification problems where improved optimization efficiency is needed. Full article
(This article belongs to the Special Issue Optimization and Machine Learning)
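The online-tuning idea can be sketched in one dimension: re-select the Gaussian-process lengthscale by marginal likelihood at every Bayesian-optimization iteration instead of fixing it up front. The toy objective, the candidate lengthscale grid, and the lower-confidence-bound acquisition below are illustrative assumptions, not the paper's in-house setup:

```python
import numpy as np

# Sketch of Bayesian optimization with online (per-iteration) kernel
# lengthscale selection by marginal likelihood.  Objective, acquisition,
# and lengthscale candidates are illustrative assumptions.

def rbf(A, B, ls):
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls ** 2)

def neg_log_marglik(X, y, ls, noise=1e-6):
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

def gp_posterior(X, y, Xs, ls, noise=1e-6):
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    Ks = rbf(X, Xs, ls)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))
    return mu, np.maximum(var, 1e-12)

f = lambda x: (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)     # toy objective
grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.0, 0.5, 1.0])                            # initial design
y = f(X)

for _ in range(10):
    # "online" step: re-select the lengthscale at every iteration
    ls = min((0.05, 0.1, 0.2, 0.5), key=lambda l: neg_log_marglik(X, y, l))
    mu, var = gp_posterior(X, y, grid, ls)
    x_next = grid[np.argmin(mu - 2.0 * np.sqrt(var))]    # LCB acquisition
    X, y = np.append(X, x_next), np.append(y, f(x_next))

best = X[np.argmin(y)]
```

With a fixed, badly chosen lengthscale the surrogate either over-smooths or overfits; refitting it each iteration is the single change that distinguishes the online scheme.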
18 pages, 345 KB  
Article
Generalized Interval-Valued Convexity in Fractal Geometry
by Muhammad Zakria Javed, Muhammad Uzair Awan, Dafang Zhao, Awais Gul Khan and Lorentz Jäntschi
AppliedMath 2026, 6(1), 5; https://doi.org/10.3390/appliedmath6010005 - 3 Jan 2026
Abstract
The main goal of this study is to explain the idea of generalized interval-valued (I.V) convexity on a fractal set. We first define the basic operations for a generalized interval of ℝ^s with 0 < s ≤ 1. Then, we extend the idea of (I.V) Riemann integration to (I.V) local fractal integration, which sets the stage for further research. This is followed by proofs of new (I.V) Jensen-, Hermite–Hadamard-, Pachpatte-, and Fejér-type inequalities associated with the generalized class of (I.V) convexity defined over the fractal domain. We furnish validation through visual and comparative approaches. Our outcomes refine many existing results. In fractal settings, this is the first paper to study (I.V) convexity and set-valued versions of Hermite–Hadamard-type containments. Full article
(This article belongs to the Special Issue Advances in Intelligent Control for Solving Optimization Problems)
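For readers unfamiliar with interval-valued analysis, the following sketch illustrates the classical (non-fractal, s = 1) ingredients the abstract builds on: basic interval operations and a numerical Hermite–Hadamard-type containment for an interval-valued map with convex lower and concave upper endpoint functions. The map F, the endpoints, and the discretization are our own illustrative choices, not the paper's.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Minkowski sum of intervals
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, c):
        # Scalar multiplication (endpoints swap for negative scalars)
        return Interval(c * self.lo, c * self.hi) if c >= 0 else Interval(c * self.hi, c * self.lo)

    def contains(self, other):
        """Set inclusion: other is a subset of self."""
        return self.lo <= other.lo and other.hi <= self.hi

def interval_mean(F, a, b, n=2000):
    """Midpoint-rule approximation of the interval-valued integral mean of F on [a, b]."""
    h = (b - a) / n
    lo = sum(F(a + (i + 0.5) * h).lo for i in range(n)) * h / (b - a)
    hi = sum(F(a + (i + 0.5) * h).hi for i in range(n)) * h / (b - a)
    return Interval(lo, hi)

# Illustrative I.V map: x^2 is convex (lower endpoint), 2 - x^2 is concave (upper endpoint)
F = lambda x: Interval(x * x, 2.0 - x * x)
```

For this F on [0, 1], the Hermite–Hadamard-type containment holds in the inclusion order: F at the midpoint contains the integral mean, which in turn contains the average of the endpoint values.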

15 pages, 5974 KB  
Article
Advanced Computational Insights into Coronary Artery Disease Drugs: A Machine Learning and Topological Analysis
by Neveen Ali Eshtewy, Shahid Zaman and Shumaila Noreen
AppliedMath 2026, 6(1), 4; https://doi.org/10.3390/appliedmath6010004 - 2 Jan 2026
Viewed by 313
Abstract
Machine learning (ML) is a powerful tool in drug design, enabling the rapid analysis of large and complex molecular graphs that represent the structural and chemical properties of medications. It enhances the precision and speed with which molecular interactions are predicted, drug candidates are refined, and potential therapeutic targets are identified. When combined with graph theory, ML allows for the prediction of structural properties, molecular behaviour, and the performance of chemical compounds. This integration promotes drug development, reduces costs, and increases the likelihood of producing effective medicines. In this study, we focus on the efficacy of medications used in the treatment of coronary artery disease (CAD) using graph-theoretical methodologies, such as topological indices. We computed several degree-based topological descriptors from chemical graphs, capturing essential connectivity and structural properties. These variables were incorporated into a machine learning framework to develop predictive models that identify structural factors influencing medication performance. Our study explores a dataset of known CAD drugs using supervised learning techniques to estimate their potential efficacy and support improved molecular design. The findings highlight the utility of graph-theoretical descriptors in enhancing prediction accuracy and providing insights into fundamental structural elements related to drug efficacy. Furthermore, this work emphasises the synergy between chemical graph theory and machine learning in accelerating drug development for CAD, offering a scalable and interpretable framework for future pharmaceutical applications. Full article
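As a hedged illustration of the degree-based descriptors mentioned above, the sketch below computes the first and second Zagreb indices and the Randić index from a plain adjacency list. The 6-cycle used here is a toy stand-in for a molecular skeleton, not one of the CAD drug molecules analysed in the paper.

```python
import math

def degree_indices(adj):
    """Degree-based topological indices of an undirected graph.

    adj: dict mapping vertex -> iterable of neighbours.
    Returns (first Zagreb M1, second Zagreb M2, Randic index R):
      M1 = sum over vertices of deg(v)^2
      M2 = sum over edges of deg(u) * deg(v)
      R  = sum over edges of 1 / sqrt(deg(u) * deg(v))
    """
    deg = {v: len(adj[v]) for v in adj}
    # Collect each undirected edge exactly once
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    M1 = sum(d * d for d in deg.values())
    M2 = 0.0
    R = 0.0
    for e in edges:
        u, v = tuple(e)
        M2 += deg[u] * deg[v]
        R += 1.0 / math.sqrt(deg[u] * deg[v])
    return M1, M2, R

# Toy graph: the cycle C6 (every vertex has degree 2, six edges)
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
```

For C6 all degrees equal 2, so M1 = 6·4 = 24, M2 = 6·4 = 24, and R = 6·(1/2) = 3; such descriptors would then feed the supervised-learning models the abstract describes.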

17 pages, 3085 KB  
Article
Mind the Gap: A Solution to the Robustness Problem of Turing Patterns Through Patterning Mode Isolation
by Thomas E. Woolley
AppliedMath 2026, 6(1), 3; https://doi.org/10.3390/appliedmath6010003 - 2 Jan 2026
Viewed by 279
Abstract
Turing patterns, characterised by spatial self-organisation in reaction–diffusion systems, exhibit sensitivity to initial conditions. This sensitivity, known as the robustness problem, results in different final patterns emerging even from small initial perturbations. In this paper, we introduce a mechanism of pattern mode isolation, where we investigate parameter regimes that promote the isolation of bifurcation branches, thereby delineating the conditions under which distinct pattern modes emerge and evolve independently. Pattern mode isolation can enhance the predictability of Turing pattern mode transitions and improve the robustness and reproducibility of the patterning outputs. Full article
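A minimal sketch of the linear-stability calculation underlying mode isolation: for a two-species reaction–diffusion system on [0, L] with no-flux modes k_n = nπ/L, mode n grows when the largest eigenvalue of J − k²D has positive real part, and isolation means choosing parameters so that exactly one mode lies in the unstable band. The Jacobian entries and diffusion coefficients below are hypothetical values chosen to satisfy the Turing conditions (tr J < 0, det J > 0, instability only with unequal diffusion), not values from the paper.

```python
import math

# Hypothetical 2x2 reaction Jacobian at the homogeneous steady state
a, b, c, d = 1.0, -1.0, 2.0, -1.5
# Unequal diffusion coefficients (large ratio drives the Turing instability)
Du, Dv = 1.0, 10.0

def growth_rate(k2):
    """Largest real part among eigenvalues of J - k^2 * diag(Du, Dv)."""
    tr = (a - Du * k2) + (d - Dv * k2)
    det = (a - Du * k2) * (d - Dv * k2) - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        return 0.5 * (tr + math.sqrt(disc))
    return 0.5 * tr  # complex pair: common real part is tr/2

def unstable_modes(L, nmax=20):
    """Indices n of domain modes k_n = n*pi/L with positive linear growth rate."""
    return [n for n in range(1, nmax + 1)
            if growth_rate((n * math.pi / L) ** 2) > 0.0]
```

With these illustrative numbers, a domain of length L = 5 admits only the isolated n = 1 mode, whereas L = 12 admits three competing modes (n = 1, 2, 3), which is exactly the degeneracy that mode isolation seeks to avoid.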

19 pages, 770 KB  
Article
Dynamic Behavior of a Delayed Model with One Core Enterprise and Four Satellite Enterprises
by Chunhua Feng
AppliedMath 2026, 6(1), 2; https://doi.org/10.3390/appliedmath6010002 - 1 Jan 2026
Viewed by 143
Abstract
In this paper, an economic competition–cooperation model with one core enterprise and four satellite enterprises is examined, mathematically extending smaller models from the literature. After a change of variables, the unique positive solution of the original system corresponds to the trivial equilibrium of a linearized system. The instability of this linearized solution implies the instability of the positive solution of the original system, which, together with the boundedness of solutions, forces the system to have a periodic solution. Some sufficient conditions guaranteeing periodic oscillation of the solutions of this model are provided, and computer simulations support the present criteria. Full article
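The paper's five-enterprise delayed model is not reproduced here; as a generic, hedged illustration of the mechanism the abstract invokes — a delay destabilizes the positive equilibrium and boundedness then forces sustained oscillation — the sketch below integrates Hutchinson's delayed logistic equation, a classic stand-in whose equilibrium x* = 1 loses stability once r·τ exceeds π/2.

```python
def simulate_hutchinson(r=1.0, tau=2.0, x0=0.5, t_end=100.0, dt=0.005):
    """Forward-Euler integration of x'(t) = r x(t) (1 - x(t - tau))
    with constant history x = x0 on [-tau, 0]."""
    lag = round(tau / dt)        # delay measured in time steps
    n = round(t_end / dt)
    xs = [x0] * (lag + 1)
    for _ in range(n):
        x_now = xs[-1]
        x_del = xs[-1 - lag]     # delayed state x(t - tau)
        xs.append(x_now + dt * r * x_now * (1.0 - x_del))
    return xs

def crossings(xs, level=1.0):
    """Count how often the trajectory crosses the equilibrium level."""
    return sum(1 for p, q in zip(xs, xs[1:]) if (p - level) * (q - level) < 0)
```

With r·τ = 2 > π/2, the trajectory stays positive and bounded yet repeatedly crosses the equilibrium, i.e., it settles into a sustained oscillation rather than converging.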

23 pages, 795 KB  
Article
Survey of Quasiasymptotic Behavior of Distributions in Relation to the Properties of Their Fractional Transforms
by Sanja Atanasova, Slavica Gajić, Smiljana Jakšić and Snježana Maksimović
AppliedMath 2026, 6(1), 1; https://doi.org/10.3390/appliedmath6010001 - 31 Dec 2025
Viewed by 173
Abstract
Fractional transforms have emerged as powerful analytical tools that bridge the time, frequency, and scale domains by introducing a fractional-order parameter into the kernel of classical transforms. This survey provides an overview of the mathematical foundations and distributional frameworks of several key fractional transforms, with emphasis on their formulation within appropriate spaces of generalized functions. Particular attention is devoted to the quasiasymptotic behavior of distributions in relation to the asymptotic properties of their corresponding fractional transforms. We demonstrate how individual transforms map illustrative signals into their corresponding domains and identify the values of the parameter α for which they produce the best results. Full article
