AppliedMath, Volume 5, Issue 2 (June 2025) – 22 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Articles appear in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
25 pages, 2242 KiB  
Article
Next-Gen Video Watermarking with Augmented Payload: Integrating KAZE and DWT for Superior Robustness and High Transparency
by Himanshu Agarwal, Shweta Agarwal, Farooq Husain and Rajeev Kumar
AppliedMath 2025, 5(2), 53; https://doi.org/10.3390/appliedmath5020053 - 6 May 2025
Abstract
Background: The issue of digital piracy is increasingly prevalent, with its proliferation further fueled by the widespread use of social media outlets such as WhatsApp, Snapchat, Instagram, Pinterest, and X. These platforms have become hotspots for the unauthorized sharing of copyrighted materials without due recognition of the original creators. Current techniques for digital watermarking are inadequate; they frequently choose less-than-ideal locations for embedding watermarks, which often compromises critical relationships within the data. Purpose: This research aims to tackle the growing problem of digital piracy, which represents a major risk to rights holders in various sectors, most notably entertainment. The goal is to devise a robust watermarking approach that effectively safeguards intellectual property rights and guarantees rightful earnings for content creators. Approach: To address these issues, this study presents an innovative technique for digital video watermarking. Utilizing the 2D-DWT along with the KAZE feature detection algorithm, which incorporates the Accelerated Segment Test with Zero Eigenvalue, the method scrutinizes and pinpoints data points that exhibit circular symmetry. The KAZE algorithm pinpoints a quintet of stable features within the brightness component of video frames to act as central embedding sites. The chief embedding site is selected by identifying the point of greatest intensity on a specific arc segment of a circle's edge, while three other sites are chosen based on principles of circular symmetry. Following these procedures, the proposed method subjects videos to several robustness tests that simulate potential disturbances. The efficacy of the proposed approach is quantified using established objective metrics that confirm strong correlation and outstanding visual fidelity in watermarked videos. Moreover, statistical validation through t-tests corroborates the effectiveness of the watermarking strategy in maintaining integrity under various types of attack, fortifying confidence in its practical deployment. Full article
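As a rough illustration of the transform-domain embedding idea described above (not the paper's actual 2D-DWT/KAZE pipeline; the one-dimensional Haar transform, the quantization step, and all parameter values here are illustrative assumptions), a minimal sketch of hiding one bit in a DWT approximation coefficient via quantization-index modulation:

```python
def haar_dwt(x):
    """One-level 1D Haar DWT: returns (approximation, detail) coefficients."""
    s2 = 2 ** 0.5
    a = [(x[2 * i] + x[2 * i + 1]) / s2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / s2 for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt (exact up to float rounding)."""
    s2 = 2 ** 0.5
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / s2, (ai - di) / s2]
    return out

def embed_bit(x, bit, step=8.0):
    """Embed one bit in the first approximation coefficient via QIM."""
    a, d = haar_dwt(x)
    q = round(a[0] / step) * step                 # nearest quantizer lattice point
    a[0] = q + (step / 4 if bit else -step / 4)   # offset encodes the bit
    return haar_idwt(a, d)

def extract_bit(x, step=8.0):
    """Recover the embedded bit from the quantizer offset."""
    a, _ = haar_dwt(x)
    return (a[0] % step) < step / 2
```

In the paper's setting the transform is two-dimensional, the embedding sites are KAZE-selected, and robustness is evaluated against simulated attacks; this sketch only shows why a quantized transform coefficient survives the inverse transform and can be read back.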

47 pages, 8140 KiB  
Article
How Babies Learn to Move: An Applied Riemannian Geometry Theory of the Development of Visually-Guided Movement Synergies
by Peter D. Neilson and Megan D. Neilson
AppliedMath 2025, 5(2), 52; https://doi.org/10.3390/appliedmath5020052 - 6 May 2025
Abstract
Planning a multi-joint minimum-effort coordinated human movement to achieve a visual goal is computationally difficult: (i) The number of anatomical elemental movements of the human body greatly exceeds the number of degrees of freedom specified by visual goals; and (ii) the mass–inertia mechanical load about each elemental movement varies not only with the posture of the body but also with the mechanical interactions between the body and the environment. Given these complications, the amount of nonlinear dynamical computation needed to plan visually-guided movement is far too large for it to be carried out within the reaction time needed to initiate an appropriate response. Consequently, we propose that, as part of motor and visual development, starting with bootstrapping by fetal and neonatal pattern-generator movements and continuing adaptively from infancy to adulthood, most of the computation is carried out in advance and stored in a motor association memory network. From there it can be quickly retrieved by a selection process that provides the appropriate movement synergy compatible with the particular visual goal. We use theorems of Riemannian geometry to describe the large amount of nonlinear dynamical data that have to be pre-computed and stored for retrieval. Based on that geometry, we argue that the logical mathematical sequence for the acquisition of these data parallels the natural development of visually-guided human movement. Full article

13 pages, 412 KiB  
Article
Phillips Loops, Economic Relaxation, and Inflation Dynamics
by Raymond J. Hawkins
AppliedMath 2025, 5(2), 51; https://doi.org/10.3390/appliedmath5020051 - 1 May 2025
Abstract
We show how the dynamics of inflation as represented by the Phillips curve follow from a response formalism suggested by Phillips and motivated by the Keynesian notion that it takes time for an economy to respond to an economic shock. The resulting expressions for the Phillips curve are isomorphic to those of anelasticity—a result that provides a straightforward and parsimonious approach to macroeconomic-model construction. Our approach unifies the forms of the Phillips curve used to account for its time dependence, expands the possible microeconomic explanations of this time dependence, and broadens the reach of this formalism in economics. Full article
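The "economic relaxation" mechanism invoked here is, at its simplest, a first-order lag of inflation toward an equilibrium value. A minimal Euler-integration sketch (function name, parameters, and values are illustrative, not taken from the paper):

```python
def relax(pi0, pi_target, tau, dt=0.01, t_end=50.0):
    """Explicit-Euler integration of d(pi)/dt = (pi_target - pi) / tau:
    inflation pi relaxes toward its equilibrium with time constant tau."""
    pi = pi0
    for _ in range(int(t_end / dt)):
        pi += dt * (pi_target - pi) / tau
    return pi
```

The anelastic analogy described in the abstract effectively generalizes this single relaxation time to a spectrum of relaxation processes.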

37 pages, 6457 KiB  
Article
A Two-Echelon Supply Chain Inventory Model for Perishable Products with a Shifting Production Rate, Stock-Dependent Demand Rate, and Imperfect Quality Raw Material
by Kapya Tshinangi, Olufemi Adetunji and Sarma Yadavalli
AppliedMath 2025, 5(2), 50; https://doi.org/10.3390/appliedmath5020050 - 30 Apr 2025
Abstract
This model extends the classical economic production quantity (EPQ) model to address the complexities within a two-echelon supply chain system. The model integrates the cost of raw materials necessary for production and takes into account the presence of imperfect quality items within the acquired raw materials. Upon receipt of the raw material, a thorough screening process is conducted to identify imperfect quality items. Combining imperfect raw materials with the concept of a shifting production rate, two different inventory models for deteriorating products are formulated under imperfect production with demand dependent on the stock level. In the first model, the imperfect raw materials are sold at a discounted price at the end of the screening period, whereas in the second one, imperfect items are kept in stock until the end of the inventory cycle and then returned to the supplier. Numerical analysis reveals that selling imperfect raw materials yields a favourable financial outcome, with an optimal inventory level I₁ = 11,774 units, an optimal cycle time T = 2140 h, and a total profit per hour of USD 183, while keeping the imperfect raw materials to return them to the supplier results in a loss of USD 4.44 × 10³ per hour, indicating an unfavourable financial outcome, with the optimal inventory level I₁ and optimal cycle time T of 26,349 units and 4702.6 h, respectively. The findings show the importance of selling imperfect raw materials rather than returning them and provide valuable insights for inventory management in systems with deteriorating products and imperfect production processes. Sensitivity analysis further demonstrates the robustness of the model. This study contributes to satisfying the need for inventory models that consider the procurement of imperfect raw materials, stock-dependent demand, and deteriorating products, along with shifts in production rates in a multi-echelon supply chain. Full article

22 pages, 1418 KiB  
Perspective
Statistics Reform: Practitioner’s Perspective
by Hening Huang
AppliedMath 2025, 5(2), 49; https://doi.org/10.3390/appliedmath5020049 - 17 Apr 2025
Abstract
It is widely believed that one of the main causes of the replication crisis in scientific research is some of the most commonly used statistical methods, such as null hypothesis significance testing (NHST). This has prompted many scientists to call for statistics reform. As a practitioner in hydraulics and measurement science, the author extensively used statistical methods in environmental engineering and hydrological survey projects. The author strongly concurs with the need for statistics reform. This paper offers a practitioner’s perspective on statistics reform. In the author’s view, some statistical methods are good and should withstand statistics reform, while others are flawed and should be abandoned and removed from textbooks and software packages. This paper focuses on the following two methods derived from the t-distribution: the two-sample t-test and the t-interval method for calculating measurement uncertainty. We demonstrate why both methods should be abandoned. We recommend using “advanced estimation statistics” in place of the two-sample t-test and an unbiased estimation method in place of the t-interval method. Two examples are presented to illustrate the recommended approaches. Full article
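For readers unfamiliar with estimation statistics: instead of a two-sample t-test's p-value, one reports the effect size together with an interval estimate. A minimal sketch using a Welch-style standard error with a normal-approximation interval (the paper's recommended "advanced estimation statistics" and unbiased estimation methods are more refined than this, and a t-quantile would replace z for small samples):

```python
import math
import statistics

def mean_diff_ci(a, b, z=1.96):
    """Point estimate and approximate 95% CI for mean(a) - mean(b),
    using a Welch-style (unpooled) standard error."""
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    return diff, (diff - z * se, diff + z * se)
```

Reporting the interval keeps the magnitude and uncertainty of the effect in view, rather than reducing the comparison to a binary significance verdict.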

11 pages, 233 KiB  
Review
Why We Do Not Need Dark Energy to Explain Cosmological Acceleration
by Felix M. Lev
AppliedMath 2025, 5(2), 48; https://doi.org/10.3390/appliedmath5020048 - 17 Apr 2025
Abstract
It has been shown that at the present stage of the evolution of the universe, cosmological acceleration is an inevitable kinematical consequence of quantum theory in the semiclassical approximation. Quantum theory does not involve such classical concepts as Minkowski or de Sitter spaces. In classical theory, when choosing Minkowski space, a vacuum catastrophe occurs, while when choosing de Sitter space, the value of the cosmological constant can be arbitrary. On the contrary, in quantum theory there are no uncertainties, in view of the following: (1) the de Sitter algebra is the most general ten-dimensional Lie algebra; (2) the Poincare algebra is a special degenerate case of the de Sitter algebra in the limit R→∞, where R is the contraction parameter for the transition from the de Sitter to the Poincare algebra and has nothing to do with the radius of de Sitter space; (3) R is fundamental to the same extent as c and ℏ: c is the contraction parameter for the transition from the Poincare to the Galilean algebra and ℏ is the contraction parameter for the transition from quantum to classical theory; (4) as a consequence, the question of why the quantities (c, ℏ, R) have the values they actually have does not arise. The solution to the problem of cosmological acceleration follows from the results on irreducible representations of the de Sitter algebra. This solution is free of uncertainties and does not involve dark energy, quintessence, or other exotic mechanisms whose physical meaning is a mystery. Full article
24 pages, 1330 KiB  
Article
Mathematical Models of Epidemics with Infection Time
by Benito Chen-Charpentier
AppliedMath 2025, 5(2), 47; https://doi.org/10.3390/appliedmath5020047 - 11 Apr 2025
Abstract
After an infectious contact, there is a time lapse before the individual actually becomes infected. In this paper, we present different ways of incorporating this incubation or infection time into epidemic mathematical models. For simplicity, we consider the Susceptible–Infective–Susceptible and the Susceptible–Infective–Recovered–Susceptible models with no demographic effects, so we can concentrate on the infection process. We study the different methods from a modeling point of view to determine their biological validity and find that not all methods presented in the literature are valid. Specifically, the infection part of the model should only move individuals from one compartment to another but should not change the total population. We consider models with no delay, discrete delay, distributed delay, exposed populations, and fractional derivatives. We analyze the methods that are realistic and find their steady solutions, stability, and bifurcations. We also investigate the effect of the duration of the infection time on the solutions. Numerical simulations are conducted, and guidelines on how to choose a method are presented. Full article
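As a baseline for the delay variants discussed above, the no-delay SIS model can be integrated in a few lines; its endemic equilibrium I* = 1 − γ/β (for a population normalized to 1) gives a built-in check. The parameter values in this sketch are illustrative, not taken from the paper:

```python
def simulate_sis(beta, gamma, i0=0.01, dt=0.01, t_end=200.0):
    """Explicit-Euler SIS model with no delay:
    dI/dt = beta*(1 - I)*I - gamma*I, total population normalized to 1."""
    i = i0
    for _ in range(int(t_end / dt)):
        i += dt * (beta * (1.0 - i) * i - gamma * i)
    return i
```

When β > γ the infection settles at the endemic level 1 − γ/β; when β < γ it dies out, which is the threshold behavior the delayed variants must also reproduce.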

21 pages, 302 KiB  
Article
Mixed Cost Function and State Constraints Optimal Control Problems
by Hugo Leiva, Guido Tapia-Riera, Jhoana P. Romero-Leiton and Cosme Duque
AppliedMath 2025, 5(2), 46; https://doi.org/10.3390/appliedmath5020046 - 10 Apr 2025
Abstract
In this paper, we analyze an optimal control problem with a mixed cost function, which combines a terminal cost at the final state and an integral term involving the state and control variables. The problem includes both state and control constraints, which adds complexity to the analysis. We establish a necessary optimality condition in the form of the maximum principle, where the adjoint equation is an integral equation involving Riemann–Stieltjes integrals with respect to a Borel measure. Our approach is based on the Dubovitskii–Milyutin theory, which employs conic approximations to efficiently manage state constraints. To illustrate the applicability of our results, we consider two examples related to epidemiological models, specifically the SIR model. These examples demonstrate how the developed framework can inform optimal control strategies to mitigate disease spread. Furthermore, we explore the implications of our findings in broader contexts, emphasizing how mixed cost functions manifest in various applied settings. Incorporating state constraints requires advanced mathematical techniques, and our approach provides a structured way to address them. The integral nature of the adjoint equation highlights the role of measure-theoretic tools in optimal control. Through our examples, we demonstrate practical applications of the proposed methodology, reinforcing its usefulness in real-life situations. By extending the Dubovitskii–Milyutin framework, we contribute to a deeper understanding of constrained control problems and their solutions. Full article
23 pages, 2154 KiB  
Article
A Hybrid PI–Fuzzy Control Scheme for a Drum Drying Process
by Gisela Ortíz-Yescas, Fidel Meléndez-Vázquez, Luis Alberto Quezada-Téllez, Arturo Torres-Mendoza, Alejandro Morales-Peñaloza, Guillermo Fernández-Anaya and Jorge Eduardo Macías-Díaz
AppliedMath 2025, 5(2), 45; https://doi.org/10.3390/appliedmath5020045 - 10 Apr 2025
Abstract
The drying process is widely used in the food industry for its ability to remove water, provide microbial stability, and reduce spoilage reactions, as well as storage and transportation costs. In particular, rotary drum drying becomes important when applied to liquid and pasty foods because of the need to maintain well-defined product moisture characteristics. This drying process is characterized by many nonlinearities; therefore, different strategies for controlling it have been proposed. This work focuses on the design of a hybrid PI–fuzzy control scheme for the rotary drum drying process; the idea is to use the advantages of fuzzy logic to obtain a robust monitoring and control system. A pilot plant rotary drum dryer was used to tune the PI control part. Then, the proposed scheme was programmed and tested at the simulation level, comparing it with a classical PI control algorithm. Full article
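The classical PI part of such a scheme can be sketched in discrete time. The fuzzy supervisory layer is omitted here, a generic first-order plant stands in for the dryer, and all gains and time constants are illustrative assumptions:

```python
def simulate_pi(setpoint=1.0, kp=2.0, ki=0.5, plant_tau=1.0, dt=0.01, t_end=50.0):
    """Discrete PI controller driving a first-order plant dy/dt = (u - y)/tau."""
    y, integral = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral    # PI control law
        y += dt * (u - y) / plant_tau     # plant response (Euler step)
    return y
```

With ki = 0 the loop settles at kp/(1 + kp) of the setpoint; the integral term removes that steady-state error, which is the property the fuzzy layer then preserves while adapting the gains.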

24 pages, 970 KiB  
Article
A Note on a Random Walk on the L-Lattice and Relative First-Passage-Time Problems
by Serena Spina
AppliedMath 2025, 5(2), 44; https://doi.org/10.3390/appliedmath5020044 - 9 Apr 2025
Abstract
We analyze a discrete-time random walk on the vertices of an unbounded two-dimensional L-lattice. We determine the probability generating function, and we prove the independence of the coordinates. In particular, we find a relation of each component with a one-dimensional biased random walk with a time change. Therefore, the transition probabilities and the main moments of the random walk can be obtained. The asymptotic behavior of the process is studied, both in the classical sense and using large deviations theory. We investigate first-passage-time problems of the random walk through certain straight lines, and we determine the related probabilities in closed form, together with other features of interest. Finally, we develop a simulation approach to study the first-exit problem of the process through ellipses. Full article
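The reduction to a one-dimensional biased walk suggests a quick Monte Carlo sanity check: for a ±1 walk with right-step probability p > 1/2, the mean first-passage time to level a is a/(2p − 1). This seeded sketch is a stand-in illustration, not the L-lattice process itself, and the parameter values are arbitrary:

```python
import random

def first_passage_time(level, p_right=0.6, rng=None):
    """Number of steps for a biased +/-1 walk started at 0 to first reach
    `level` > 0 (finite almost surely when p_right > 0.5)."""
    rng = rng or random
    pos, steps = 0, 0
    while pos < level:
        pos += 1 if rng.random() < p_right else -1
        steps += 1
    return steps

rng = random.Random(42)                      # fixed seed for reproducibility
times = [first_passage_time(5, 0.6, rng) for _ in range(2000)]
mean_fpt = sum(times) / len(times)           # theory: 5 / (2*0.6 - 1) = 25
```

The same simulation skeleton, with the walk replaced by the lattice process and the stopping line by an ellipse, is the kind of first-exit experiment the abstract describes.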
(This article belongs to the Special Issue The Impact of Stochastic Perturbations)

23 pages, 556 KiB  
Article
A Nonlinear Ordinary Differential Equation Model for a Society Strongly Dependent on Non-Renewable Resources
by Marino Badiale and Isabella Cravero
AppliedMath 2025, 5(2), 43; https://doi.org/10.3390/appliedmath5020043 - 8 Apr 2025
Abstract
In this work, we investigate an ODE system designed to describe the interplay between human society and the environment, with a strong focus on the role of non-renewable resources. Specifically, our model captures how the depletion and replenishment of non-renewable resources (along with renewable ones) and dependence on wealth drive population and resource dynamics in a society. We prove that the solutions of the system remain in the non-negative cone and are bounded, implying that indefinite unbounded growth in wealth or population cannot occur within this model. Next, we compute and classify all equilibrium points, exploring which equilibria can be stable and physically relevant. In particular, we show that, depending on parameter regimes, the system may admit a stable equilibrium with positive levels of population, renewable resources, non-renewable resources, and wealth, suggesting a possible sustainable long-term outcome for a heavily resource-dependent society. Full article

23 pages, 934 KiB  
Article
Incorporating Local Communities into Sustainability Reporting: A Grey Systems-Based Analysis of Brazilian Companies
by Elcio Rodrigues Damaceno, Jefferson de Souza Pinto, Tiago F. A. C. Sigahi, Gustavo Hermínio Salati Marcondes de Moraes, Walter Leal Filho and Rosley Anholon
AppliedMath 2025, 5(2), 42; https://doi.org/10.3390/appliedmath5020042 - 8 Apr 2025
Abstract
This paper aims to evaluate the maturity of Brazilian companies regarding the inclusion of local communities in sustainability reporting. The analysis was based on sustainability reports from a sample of 26 companies listed on the Brazilian stock exchange sustainability index. The study employs a mixed-methods approach and includes the following sequential steps: a literature review and content analysis of sustainability reporting standards to identify critical success factors; application of the CRITIC method to define weights for decision criteria; and analysis of corporate practices related to the inclusion of local communities in the sustainability reports of Brazilian companies to determine maturity levels using the Grey Fixed Weighted Clustering method and the Kernel technique. The findings reveal that transparency, comprehensive assessment, and accountability are the most critical factors of sustainability reporting maturity regarding local communities. The analysis shows that companies in the energy sector perform better and can serve as a benchmark for companies in other sectors, such as manufacturing, in which most companies present low maturity. Key corporate practices for improving engagement with local communities are identified and discussed, aiming to enhance corporate social responsibility and sustainability reporting. This study advances the understanding of corporate sustainability by highlighting the role of businesses in fostering socio-economic development through the inclusion of local communities in sustainability reporting. It extends theoretical discussions on corporate social responsibility by emphasizing transparency, accountability, and comprehensive assessment as critical factors for sustainability reporting. Practically, the findings provide insights for companies seeking to enhance engagement with local communities, offering a benchmark for industries with lower maturity levels. By demonstrating how sustainability reporting can serve as a strategic tool for social impact, the study reinforces the broader role of businesses in sustainable development. Full article

19 pages, 1447 KiB  
Review
Solving Boundary Value Problems for a Class of Differential Equations Based on Elastic Transformation and Similar Construction Methods
by Jinfeng Liu, Pengshe Zheng and Jiajia Xie
AppliedMath 2025, 5(2), 41; https://doi.org/10.3390/appliedmath5020041 - 6 Apr 2025
Abstract
To address the boundary value problem associated with a class of third-order nonlinear differential equations with variable coefficients, this study integrates three key methods: the elastic transformation method (ETM), the similar construction method (SCM), and the elastic inverse transformation method (EITM). Firstly, ETM is employed to transform the original high-order nonlinear differential equations into the Tschebycheff equation, successfully reducing the order of the problem. Subsequently, SCM is applied to determine the general solution of the Tschebycheff equation under boundary conditions, thereby ensuring a structured and systematic approach. Ultimately, the EITM is used to reconstruct the solution of the original third-order nonlinear differential equation. The accuracy of the obtained solution is further validated by analyzing the corresponding solution curves. The synergy of these methods introduces a novel approach to solving nonlinear differential equations and extends the application of Tschebycheff equations in nonlinear systems. Full article

22 pages, 3340 KiB  
Article
Mathematical Modelling of Cancer Treatments, Resistance, and Optimization
by Tahmineh Azizi
AppliedMath 2025, 5(2), 40; https://doi.org/10.3390/appliedmath5020040 - 4 Apr 2025
Abstract
Mathematical modeling plays a crucial role in the advancement of cancer treatments, offering a sophisticated framework for analyzing and optimizing therapeutic strategies. This approach employs mathematical and computational techniques to simulate diverse aspects of cancer therapy, including the effectiveness of various treatment modalities such as chemotherapy, radiation therapy, targeted therapy, and immunotherapy. By incorporating factors such as drug pharmacokinetics, tumor biology, and patient-specific characteristics, these models facilitate predictions of treatment responses and outcomes. Furthermore, mathematical models elucidate the mechanisms behind cancer treatment resistance, including genetic mutations and microenvironmental changes, thereby guiding researchers in designing strategies to mitigate or overcome resistance. The application of optimization techniques allows for the development of personalized treatment regimens that maximize therapeutic efficacy while minimizing adverse effects, taking into account patient-related variables such as tumor size and genetic profiles. This study elaborates on the key applications of mathematical modeling in oncology, encompassing the simulation of various cancer treatment modalities, the elucidation of resistance mechanisms, and the optimization of personalized treatment regimens. By integrating mathematical insights with experimental data and clinical observations, mathematical modeling emerges as a powerful tool in oncology, contributing to the development of more effective and personalized cancer therapies that improve patient outcomes. Full article
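As one concrete instance of the class of models surveyed, logistic tumor growth with a constant-rate treatment term has the closed-form equilibrium K(1 − c/r) for kill rate c < r, and eradication for c ≥ r. A minimal Euler sketch; the model form and all parameter values are illustrative assumptions, not the paper's:

```python
def tumor_trajectory(kill_rate, r=0.5, K=1000.0, t0=100.0, dt=0.01, t_end=100.0):
    """Logistic tumor growth with a constant-rate therapy term:
    dT/dt = r*T*(1 - T/K) - kill_rate*T, integrated by explicit Euler."""
    T = t0
    for _ in range(int(t_end / dt)):
        T += dt * (r * T * (1.0 - T / K) - kill_rate * T)
    return T
```

Sweeping `kill_rate` in such a model is the simplest version of the dose-optimization question: the smallest rate that drives the long-run tumor burden below a target while limiting toxicity.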

24 pages, 2888 KiB  
Article
AI-Assisted Game Theory Approaches to Bid Pricing Under Uncertainty in Construction
by Joas Serugga
AppliedMath 2025, 5(2), 39; https://doi.org/10.3390/appliedmath5020039 - 3 Apr 2025
Abstract
The construction industry is inherently marked by high levels of uncertainty driven by its complex processes, relating to the bidding environment, resource availability, and complex project requirements. Accurate bid pricing under such uncertainty remains a critical challenge for contractors seeking a competitive advantage while managing risk exposure. This exploratory study integrates artificial intelligence (AI) into game theory models in an AI-assisted framework for bid pricing in construction. The proposed model addresses uncertainties from external market factors and adversarial behaviours in competitive bidding scenarios by leveraging AI's predictive capabilities and game theory's strategic decision-making principles, integrating extreme gradient boosting (XGBoost) with hyperparameter tuning and Random Forest classifiers. The key findings show an increase of 5–10% in bid prices during high-inflation periods, with a high model accuracy of 87% and a precision of 88.4%. Through analysis, the AI can classify bidders as conservative (70%) or aggressive (30%), demonstrating the potential of this integrated approach to improve bid accuracy (cost estimates are generally within 10% of actual bid prices), optimise risk-sharing strategies, and enhance decision making in dynamic and competitive environments. The research extends the current body of knowledge through an integrated AI–game-theoretic model with the potential to reshape bid-pricing strategies in construction under uncertainty. Full article
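Stripped of the machine-learning layer, the game-theoretic core of bid pricing is an expected-profit trade-off: a higher bid earns more when it wins but wins less often. A stylized one-rival, lowest-bid-wins sketch in which the rival's bid is uniform on an interval; every number here is hypothetical and unrelated to the paper's data:

```python
def expected_profit(bid, cost=100.0, rival_low=100.0, rival_high=200.0):
    """Expected profit against one rival whose bid is uniform on
    [rival_low, rival_high]; the lower bid wins the contract."""
    if bid <= rival_low:
        win_prob = 1.0
    elif bid >= rival_high:
        win_prob = 0.0
    else:
        win_prob = (rival_high - bid) / (rival_high - rival_low)
    return (bid - cost) * win_prob

bids = [100.0 + 0.5 * i for i in range(201)]   # grid over [100, 200]
best_bid = max(bids, key=expected_profit)      # analytic optimum here: 150
```

An AI layer like the paper's replaces the assumed uniform rival distribution with a learned, covariate-dependent one (inflation, bidder type), but the maximization step has the same shape.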

25 pages, 746 KiB  
Article
Convergence Analysis of Jarratt-like Methods for Solving Nonlinear Equations for Thrice-Differentiable Operators
by Indra Bate, Kedarnath Senapati, Santhosh George, Ioannis K. Argyros and Michael I. Argyros
AppliedMath 2025, 5(2), 38; https://doi.org/10.3390/appliedmath5020038 - 3 Apr 2025
Abstract
The main goal of this paper is to study Jarratt-like iterative methods and obtain their order of convergence under weaker conditions. Generally, establishing pth-order convergence using the Taylor series expansion technique requires at least (p+1)-times differentiability of the involved operator. However, we obtain the fourth- and sixth-order convergence of Jarratt-like methods using derivatives only up to the third order. An upper bound for the asymptotic error constant (AEC) and a convergence ball are provided. The convergence analysis is developed in the more general setting of Banach spaces and relies on Lipschitz-type conditions, which are required to control the derivative. The results obtained are examined using numerical examples, and some dynamical system concepts are discussed for a better understanding of convergence ideas. Full article
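For orientation, the classic fourth-order Jarratt iteration for a scalar equation f(x) = 0 looks as follows. The paper studies Jarratt-like variants in Banach spaces under Lipschitz-type conditions on the third derivative; this sketch only shows the shape of the iteration, which uses f and f′ but no higher derivatives:

```python
def jarratt(f, fprime, x0, tol=1e-12, max_iter=20):
    """Classic fourth-order Jarratt iteration for a scalar equation f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dx = fx / fprime(x)
        y = x - (2.0 / 3.0) * dx                  # auxiliary evaluation point
        fpy, fpx = fprime(y), fprime(x)
        # fourth-order correction built from two derivative evaluations
        x = x - (1.0 - 1.5 * (fpy - fpx) / (3.0 * fpy - fpx)) * dx
    return x
```

Each step needs one evaluation of f and two of f′, which is what makes the fourth order attractive compared with two Newton steps.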

14 pages, 1326 KiB  
Article
Maximizing Tax Revenue for Profit Maximizing Monopolist with the Cobb-Douglas Production Function and Linear Demand as a Bilevel Programming Problem
by Zrinka Lukač, Krunoslav Puljić and Vedran Kojić
AppliedMath 2025, 5(2), 37; https://doi.org/10.3390/appliedmath5020037 - 3 Apr 2025
Viewed by 161
Abstract
Optimal taxation and profit maximization are two important problems that are naturally related, since companies operate under a given tax system. In the literature, however, these two problems are usually considered separately, either by studying optimal taxation or by studying profit maximization. This paper links the two problems by formulating a bilevel model in which the government acts as the leader and a profit-maximizing monopolist acts as the follower. The exact form of the tax revenue function, as well as the optimal tax amount and optimal input levels, is derived for returns to scale equal to 0.5 and 1. Several illustrative numerical examples and accompanying graphical representations are given for decreasing, constant, and increasing returns to scale. Full article
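The leader-follower structure can be sketched with a deliberately simplified instance: assumed linear demand p(q) = a - b q, a constant marginal cost c (standing in for the constant-returns case), and a per-unit tax t. The follower's best response then has the closed form q*(t) = (a - c - t)/(2b), and the leader searches over t to maximize revenue t·q*(t). None of these functional forms or numbers are from the paper.

```python
# Toy bilevel tax-revenue sketch under assumed linear demand p = a - b*q,
# constant marginal cost c, and a per-unit tax t (parameters illustrative).
a, b, c = 100.0, 1.0, 20.0

def follower_output(t):
    # Monopolist maximizes (a - b*q - c - t) * q  ->  q*(t) = (a - c - t) / (2b).
    return max((a - c - t) / (2.0 * b), 0.0)

# Leader: grid-search the per-unit tax t to maximize revenue t * q*(t).
best_t, best_rev = max(
    ((t / 100.0, (t / 100.0) * follower_output(t / 100.0)) for t in range(0, 8001)),
    key=lambda pair: pair[1],
)
print(best_t, best_rev)
```

For this instance the revenue t(a - c - t)/(2b) is concave with analytic maximizer t* = (a - c)/2 = 40, so the grid search should recover t = 40 with revenue 800, matching the closed form.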

27 pages, 7104 KiB  
Article
Crypto Asset Markets vs. Financial Markets: Event Identification, Latest Insights and Analyses
by Eleni Koutrouli, Polychronis Manousopoulos, John Theal and Laura Tresso
AppliedMath 2025, 5(2), 36; https://doi.org/10.3390/appliedmath5020036 - 2 Apr 2025
Viewed by 693
Abstract
As crypto assets become more widely adopted, crypto asset markets and traditional financial markets may become increasingly interconnected. The close linkages between these markets have potentially important implications for price formation, contagion, risk management and regulatory frameworks. In this study, we assess the correlation between traditional financial markets and selected crypto assets, study factors that may impact the price of crypto assets and identify potentially significant events that may have an impact on Bitcoin and Ethereum price dynamics. For the latter analyses, we adopt a Bayesian model averaging approach to identify change points in the Bitcoin and Ethereum daily price time series. We then use the dates and probabilities of these change points to link them to specific events, finding that nearly all of the change points can be associated with known historical crypto asset-related events. The events can be classified into broader geopolitical developments, regulatory announcements and idiosyncratic events specific to either Bitcoin or Ethereum. Full article
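The change-point idea can be illustrated in miniature. The sketch below is not the paper's Bayesian model averaging approach: it computes the posterior over a single change in the mean of a Gaussian series with unit variance and a uniform prior over locations, on a synthetic "return" series invented for demonstration.

```python
# Minimal single-change-point illustration (not the paper's BMA method):
# posterior over the location of one mean shift in a Gaussian series with
# unit variance and a uniform prior over candidate locations.
import numpy as np

rng = np.random.default_rng(1)
true_cp = 120
series = np.concatenate([rng.normal(0.0, 1.0, true_cp),
                         rng.normal(2.0, 1.0, 200 - true_cp)])

def changepoint_posterior(x):
    n = len(x)
    log_post = np.full(n, -np.inf)
    for k in range(10, n - 10):          # keep both segments non-trivial
        left, right = x[:k], x[k:]
        # Gaussian log-likelihood with each segment's MLE mean, unit variance.
        ll = -0.5 * np.sum((left - left.mean()) ** 2)
        ll += -0.5 * np.sum((right - right.mean()) ** 2)
        log_post[k] = ll                 # uniform prior: posterior ∝ likelihood
    log_post -= log_post.max()           # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

post = changepoint_posterior(series)
print(int(post.argmax()))
```

The posterior probabilities, not just the MAP location, are what would be carried forward when linking change points to candidate events, as the abstract describes.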

27 pages, 2044 KiB  
Article
Robust Optimization for the Location Selection of Emergency Life Supplies Distribution Centers Based on Demand Information Uncertainty: A Case Study of Setting Transfer Points
by Dafu Fan, Qiong Zhou, Guangrong Li and Yonghui Qin
AppliedMath 2025, 5(2), 35; https://doi.org/10.3390/appliedmath5020035 - 1 Apr 2025
Viewed by 210
Abstract
Following natural and man-made disasters, a critical challenge in emergency response is establishing an emergency living supplies distribution center that minimizes service costs while ensuring rapid and efficient delivery of essential goods to affected populations, thereby safeguarding their lives and material well-being. This study addresses the challenge by developing an optimization function that minimizes the total service cost of locating such distribution centers, using connection points as a foundation. Adopting a robust optimization approach that takes bounded intervals, together with the constraint conditions, as the uncertainty set for demand, the optimization function is transformed into a robust equivalent model via the duality principle. A tabu search method implemented in MATLAB R2015b is employed to analyze the data and obtain the optimal solution. Case study analysis demonstrates that the minimum total service cost rises as the robustness level and disturbance parameters increase. Furthermore, the model with connection points consistently outperforms the model without them, highlighting the efficacy of the proposed approach. Full article
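The tabu search component can be sketched on a toy p-median-style instance: choose p candidate sites so that total distance to demand points is minimal, with a short-term memory forbidding the reversal of recent swaps. The instance data and parameters are illustrative, not from the case study, and the sketch omits the connection points and robust demand intervals.

```python
# Compact tabu search sketch for a toy facility-location problem
# (illustrative instance; not the paper's robust model or data).
import itertools
import random

random.seed(0)
n_sites, n_demand, p = 8, 20, 3
sites = [(random.random(), random.random()) for _ in range(n_sites)]
demand = [(random.random(), random.random()) for _ in range(n_demand)]

def cost(chosen):
    # Each demand point is served by its nearest chosen site.
    return sum(min((dx - sx) ** 2 + (dy - sy) ** 2
                   for sx, sy in (sites[i] for i in chosen))
               for dx, dy in demand)

def tabu_search(iters=100, tenure=5):
    current = set(range(p))
    best, best_cost = set(current), cost(current)
    tabu = []
    for _ in range(iters):
        # Best non-tabu swap: drop one chosen site, add one unchosen site.
        moves = [(i, j) for i in current for j in range(n_sites)
                 if j not in current and (i, j) not in tabu]
        i, j = min(moves, key=lambda m: cost(current - {m[0]} | {m[1]}))
        current = current - {i} | {j}
        tabu.append((j, i))              # forbid the reverse swap for a while
        tabu = tabu[-tenure:]
        if cost(current) < best_cost:
            best, best_cost = set(current), cost(current)
    return best, best_cost

best, best_cost = tabu_search()
# Small instance, so brute force gives the true optimum for comparison.
optimum = min(cost(set(c)) for c in itertools.combinations(range(n_sites), p))
print(best_cost, optimum)
```

The tabu list is what lets the search accept non-improving swaps without immediately cycling back, which is the feature that distinguishes it from plain local search.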

15 pages, 525 KiB  
Article
Modified Lagrange Interpolating Polynomial (MLIP) Method: A Straightforward Procedure to Improve Function Approximation
by Uriel A. Filobello-Nino, Hector Vazquez-Leal, Mario A. Sandoval-Hernandez, Jose A. Dominguez-Chavez, Alejandro Salinas-Castro, Victor M. Jimenez-Fernandez, Jesus Huerta-Chua, Claudio Hoyos-Reyes, Norberto Carrillo-Ramon and Javier Flores-Mendez
AppliedMath 2025, 5(2), 34; https://doi.org/10.3390/appliedmath5020034 - 27 Mar 2025
Viewed by 176
Abstract
This work presents the modified Lagrange interpolating polynomial (MLIP) method, a straightforward procedure for deriving accurate analytical approximations of a given function. The method multiplies one of the terms of a Lagrange interpolating polynomial by an exponential function with several parameters. These parameters are adjusted so that the proposed approximation passes through several points of the target function while also matching its derivative at several points, which makes the method versatile. Lagrange interpolating polynomials (LIPs) suffer from introducing oscillatory terms and are therefore expected to approximate the derivative of a given function poorly. One of the relevant contributions of MLIPs is that their approximations contain fewer oscillatory terms than those obtained by LIPs passing through the same points of the function to be represented; consequently, better approximations are expected from MLIPs. A comparison of the results obtained by MLIPs with those from other methods reported in the literature highlights the method's potential as a useful tool for obtaining accurate analytical approximations when interpolating a set of points. This work is also expected to help break the paradigm that an effective modification of a known method has to be lengthy and complex. Full article
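The baseline the method modifies is the standard Lagrange interpolating polynomial, sketched below. The MLIP step itself, multiplying one Lagrange term by a parameterized exponential fitted to derivative values, is omitted here because it depends on choices detailed in the paper.

```python
# Standard Lagrange interpolating polynomial (the LIP baseline the MLIP
# method modifies); the exponential-factor modification is not shown.
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        basis = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                basis *= (x - xm) / (xj - xm)
        total += yj * basis
    return total

xs = [0.0, 1.0, 2.0, 3.0]
ys = [float(np.sin(v)) for v in xs]
print(lagrange_eval(xs, ys, 1.5))   # approximates sin(1.5)
```

By construction the polynomial reproduces every node exactly; the oscillation the abstract refers to shows up between nodes and, more severely, in the polynomial's derivative.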

32 pages, 2630 KiB  
Article
Autonomous Drifting like Professional Racing Drivers: A Survey
by Yang Liu, Fulong Ma, Xiaodong Mei, Bohuan Xue, Jin Wu and Chengxi Zhang
AppliedMath 2025, 5(2), 33; https://doi.org/10.3390/appliedmath5020033 - 26 Mar 2025
Viewed by 325
Abstract
Autonomous drifting is an advanced technique that enhances vehicle maneuverability beyond conventional driving limits. This survey provides a comprehensive, systematic review of autonomous drifting research published between 2005 and early 2025, analyzing approximately 80 peer-reviewed studies. We employed a modified PRISMA approach to categorize and evaluate research across two main methodological frameworks: dynamical model-based approaches and deep learning techniques. Our analysis reveals that while dynamical methods offer precise control when accurately modeled, they often struggle with generalization to unknown environments. In contrast, deep learning approaches demonstrate better adaptability but face challenges in safety verification and sample efficiency. We comprehensively examine experimental platforms used in the field—from high-fidelity simulators to full-scale vehicles—along with their sensor configurations and computational requirements. This review uniquely identifies critical research gaps, including real-time performance limitations, environmental generalization challenges, safety validation concerns, and integration issues with broader autonomous systems. Our findings suggest that hybrid approaches combining model-based knowledge with data-driven learning may offer the most promising path forward for robust autonomous drifting capabilities in diverse applications ranging from motorsports to emergency collision avoidance in production vehicles. Full article
(This article belongs to the Special Issue Applied Mathematics in Robotics: Theory, Methods and Applications)

16 pages, 293 KiB  
Article
Failed Skew Zero Forcing Numbers of Path Powers and Circulant Graphs
by Aidan Johnson, Andrew Vick, Rigoberto Flórez and Darren A. Narayan
AppliedMath 2025, 5(2), 32; https://doi.org/10.3390/appliedmath5020032 - 24 Mar 2025
Viewed by 186
Abstract
For a graph G, the zero forcing number of G, Z(G), is defined to be the minimum cardinality of a set S of vertices for which repeated applications of the forcing rule result in all vertices being in S. The forcing rule is as follows: if a vertex v is in S and exactly one neighbor u of v is not in S, then u is added to S in the subsequent iteration. The failed zero forcing number of a graph is then the maximum size of a set of vertices that does not force all of the vertices in the graph. A similar type of forcing, called skew zero forcing, uses the same rule (if a vertex v has exactly one neighbor u not in S, then u is added to S in the next iteration) but drops the requirement that v itself be in S; the key difference is that vertices not in S can force other vertices. The failed skew zero forcing number of a graph is denoted by F(G). At its core, the problem we consider is how to identify the tipping point at which information or infection will spread through a network or a population. The graphs we consider model computers/routers or people arranged in a linear or circular formation with varying proximities for contagion. Here, we present new results on the failed skew zero forcing numbers of path powers and circulant graphs, and we find that for these families the failed skew zero forcing numbers form interesting sequences with increasing n. Full article
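The two forcing rules described above differ only in who may force, which makes them easy to simulate. The sketch below computes the forcing closure of a starting set on a small path graph; it illustrates the definitions, not the paper's results on path powers and circulants.

```python
# Forcing closure under the standard and skew zero forcing rules,
# for a graph given as an adjacency dict (illustrative examples only).
def forcing_closure(adj, start, skew=False):
    """Repeatedly apply the (skew) zero forcing rule until nothing changes."""
    S = set(start)
    changed = True
    while changed:
        changed = False
        for v in adj:
            # Standard rule: only vertices already in S may force.
            # Skew rule: any vertex may force, in or out of S.
            if not skew and v not in S:
                continue
            outside = [u for u in adj[v] if u not in S]
            if len(outside) == 1:        # exactly one unfilled neighbor: force it
                S.add(outside[0])
                changed = True
    return S

# Paths: P5 = 0-1-2-3-4 and P4 = 0-1-2-3.
path5 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

print(forcing_closure(path5, {0}))               # an endpoint forces all of P5
print(forcing_closure(path5, {2}))               # {2} is a failed forcing set
print(forcing_closure(path4, set(), skew=True))  # skew: the empty set forces P4
```

The third example shows the skew rule's extra power: a degree-one vertex forces its neighbor even when neither is in S, so on P4 even the empty set forces the whole graph.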
