Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

22 pages, 365 KB  
Article
Applications of Shapley Value to Financial Decision-Making and Risk Management
by Sunday Timileyin Ayodeji, Olamide Ayodele and Kayode Oshinubi
AppliedMath 2025, 5(2), 59; https://doi.org/10.3390/appliedmath5020059 - 22 May 2025
Cited by 1 | Viewed by 3721
Abstract
We investigate the application of the Shapley value in addressing risk-related challenges, focusing on two primary areas. The first area explores the role of the Shapley value in the financial sector, specifically in managing portfolio risk. By conceptualizing a portfolio of assets as a cooperative game, we analyze the contribution of individual securities to the reduction in overall portfolio risk. The second area addresses emergency facility logistics, where the Shapley value is utilized to optimize the selection of potential facility locations and mitigate the risks associated with the storage and transportation of hazardous materials. Using Markowitz’s mean-variance framework, the Shapley value facilitates a fair and efficient allocation of risk across portfolio assets, identifying both risk-increasing and risk-reducing assets. Through numerical experiments, we demonstrate that the Shapley value offers valuable insights into the equitable distribution of financial resources and the strategic placement of facilities to manage systemic risks. These findings highlight the practical advantages of integrating game-theoretic approaches into risk management strategies to enhance fairness, efficiency, and the robustness of decision-making processes. Full article
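The portfolio game described above can be made concrete in a few lines. The sketch below computes exact Shapley values for a three-asset variance game; the covariance matrix is hypothetical, and the characteristic function v(S), the variance of the summed positions in S, is one common choice rather than necessarily the article's exact formulation.

```python
from itertools import permutations

# Toy 3-asset covariance matrix (hypothetical numbers, for illustration only).
cov = [[0.04, 0.01, -0.02],
       [0.01, 0.09, 0.00],
       [-0.02, 0.00, 0.16]]
n_assets = 3

def portfolio_variance(coalition):
    """Characteristic function v(S): variance of the summed positions in S."""
    return sum(cov[i][j] for i in coalition for j in coalition)

def shapley_values():
    """Average marginal contribution of each asset over all join orders."""
    phi = [0.0] * n_assets
    perms = list(permutations(range(n_assets)))
    for order in perms:
        coalition = []
        for i in order:
            before = portfolio_variance(coalition)
            coalition.append(i)
            phi[i] += portfolio_variance(coalition) - before
    return [p / len(perms) for p in phi]

phi = shapley_values()
```

By the efficiency property the values sum to the full-portfolio variance (0.27 here), and asset 0's share (0.03, below its stand-alone variance of 0.04) reflects its risk-reducing negative covariance with asset 2.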

23 pages, 2154 KB  
Article
A Hybrid PI–Fuzzy Control Scheme for a Drum Drying Process
by Gisela Ortíz-Yescas, Fidel Meléndez-Vázquez, Luis Alberto Quezada-Téllez, Arturo Torres-Mendoza, Alejandro Morales-Peñaloza, Guillermo Fernández-Anaya and Jorge Eduardo Macías-Díaz
AppliedMath 2025, 5(2), 45; https://doi.org/10.3390/appliedmath5020045 - 10 Apr 2025
Cited by 1 | Viewed by 1164
Abstract
The drying process is widely used in the food industry for its ability to remove water, provide microbial stability, and reduce spoilage reactions, as well as storage and transportation costs. In particular, rotary drum drying becomes important when it is applied to liquid and pasty foods, where defined product-moisture characteristics must be maintained. This drying process is characterized by the existence of many nonlinearities; therefore, different strategies for controlling this process have been proposed. This work focuses on the design of a hybrid PI–fuzzy control scheme for the rotary drum drying process; the idea is to use the advantages of fuzzy logic to obtain a robust monitoring and control system. A pilot plant rotary drum dryer was used to tune the PI control part. Then, the proposed scheme was programmed and tested at the simulation level, comparing it with a classical PI control algorithm. Full article
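The core idea of blending a PI loop with fuzzy logic can be sketched on a toy plant. Everything below is an assumption for illustration: the plant is a simple first-order system y' = -y + u, and a single membership function stands in for a full fuzzy rule base, far simpler than the scheme tuned on the pilot dryer.

```python
# Discrete PI loop on a toy first-order plant y' = -y + u, with a crude
# fuzzy-style gain schedule: the proportional gain is boosted while the
# membership of "error is large" is high. All numbers are illustrative.
kp, ki, dt = 2.0, 1.0, 0.05
setpoint, y, integral = 1.0, 0.0, 0.0

def fuzzy_boost(e):
    """Single triangular membership standing in for a full fuzzy rule base."""
    large = min(1.0, abs(e))        # membership of "error is large" in [0, 1]
    return 1.0 + 0.5 * large        # blend nominal and boosted gain

for _ in range(400):                # simulate 20 time units by forward Euler
    e = setpoint - y
    integral += e * dt
    u = fuzzy_boost(e) * kp * e + ki * integral
    y += dt * (-y + u)
```

The integral term removes the steady-state offset while the fuzzy boost speeds up the initial transient; in a real scheme the rule base would also shape the integral action.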

23 pages, 934 KB  
Article
Incorporating Local Communities into Sustainability Reporting: A Grey Systems-Based Analysis of Brazilian Companies
by Elcio Rodrigues Damaceno, Jefferson de Souza Pinto, Tiago F. A. C. Sigahi, Gustavo Hermínio Salati Marcondes de Moraes, Walter Leal Filho and Rosley Anholon
AppliedMath 2025, 5(2), 42; https://doi.org/10.3390/appliedmath5020042 - 8 Apr 2025
Cited by 2 | Viewed by 1376
Abstract
This paper aims to evaluate the maturity of Brazilian companies regarding the inclusion of local communities in sustainability reporting. The analysis was based on sustainability reports from a sample of 26 companies listed on the Brazilian stock exchange sustainability index. The study employs a mixed-methods approach and includes the following sequential steps: literature review and content analysis of sustainability reporting standards to identify critical success factors; application of the CRITIC method to define weights for decision criteria; analysis of corporate practices related to the inclusion of local communities in sustainability reports performed by Brazilian companies to determine maturity levels using the Grey Fixed Weighted Clustering method and the Kernel technique. The findings reveal that transparency, comprehensive assessment, and accountability are the most critical factors of sustainability reporting maturity regarding local communities. The analysis shows that companies in the energy sector perform better and can serve as a benchmark for companies in other sectors, such as manufacturing, in which most companies present low maturity. Key corporate practices are identified and discussed for improving engagement with local communities aiming to enhance corporate social responsibility and sustainability reporting. This study advances the understanding of corporate sustainability by highlighting the role of businesses in fostering socio-economic development through the inclusion of local communities in sustainability reporting. It extends theoretical discussions on corporate social responsibility by emphasizing transparency, accountability, and comprehensive assessment as critical factors for sustainability reporting. Practically, the findings provide insights for companies seeking to enhance engagement with local communities, offering a benchmark for industries with lower maturity levels. 
By demonstrating how sustainability reporting can serve as a strategic tool for social impact, the study reinforces the broader role of businesses in sustainable development. Full article
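The CRITIC step mentioned above weighs each criterion by its contrast (standard deviation) and its conflict with the other criteria (one minus pairwise correlation). A self-contained sketch on hypothetical company scores follows; the usual normalization step is omitted only because the toy data already lie in [0, 1].

```python
import math

# Hypothetical scores of 4 companies on 3 reporting criteria (rows = companies).
X = [[0.8, 0.6, 0.7],
     [0.5, 0.9, 0.4],
     [0.6, 0.7, 0.9],
     [0.9, 0.5, 0.6]]

def std(v):
    mu = sum(v) / len(v)
    return math.sqrt(sum((x - mu) ** 2 for x in v) / len(v))

def corr(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)
    return cov / (std(u) * std(v))

def critic_weights(m):
    """CRITIC: weight_j proportional to std_j * sum_k (1 - corr_jk)."""
    ncols = len(m[0])
    cols = [[row[j] for row in m] for j in range(ncols)]
    info = [std(cols[j]) * sum(1 - corr(cols[j], cols[k])
                               for k in range(ncols) if k != j)
            for j in range(ncols)]
    total = sum(info)
    return [c / total for c in info]

w = critic_weights(X)
```

A criterion that is both dispersed and weakly (or negatively) correlated with the others carries more information and therefore receives a larger weight.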

15 pages, 525 KB  
Article
Modified Lagrange Interpolating Polynomial (MLIP) Method: A Straightforward Procedure to Improve Function Approximation
by Uriel A. Filobello-Nino, Hector Vazquez-Leal, Mario A. Sandoval-Hernandez, Jose A. Dominguez-Chavez, Alejandro Salinas-Castro, Victor M. Jimenez-Fernandez, Jesus Huerta-Chua, Claudio Hoyos-Reyes, Norberto Carrillo-Ramon and Javier Flores-Mendez
AppliedMath 2025, 5(2), 34; https://doi.org/10.3390/appliedmath5020034 - 27 Mar 2025
Cited by 1 | Viewed by 1840
Abstract
This work presents the modified Lagrange interpolating polynomial (MLIP) method, which aims to provide a straightforward procedure for deriving accurate analytical approximations of a given function. The method introduces an exponential function with several parameters that multiplies one of the terms of a Lagrange interpolating polynomial. These parameters adjust their values to ensure that the proposed approximation passes through several points of the target function while also adopting the correct values of its derivative at several points, which demonstrates the method’s versatility. Lagrange interpolating polynomials (LIPs) present the problem of introducing oscillatory terms and are, therefore, expected to provide poor approximations for the derivative of a given function. We will see that one of the relevant contributions of MLIPs is that their approximations contain fewer oscillatory terms compared to those obtained by LIPs when both approximations pass through the same points of the function to be represented; consequently, better MLIP approximations are expected. A comparison of the results obtained by MLIPs with those from other methods reported in the literature highlights the method’s potential as a useful tool for obtaining accurate analytical approximations when interpolating a set of points. It is expected that this work will help break the paradigm that an effective modification of a known method has to be lengthy and complex. Full article
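For reference, the classical Lagrange interpolating polynomial that the MLIP method modifies fits in a few lines; the MLIP itself additionally multiplies one of the terms by a parameterized exponential, which is not reproduced here.

```python
def lagrange(points):
    """Return the Lagrange interpolating polynomial through the given points."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]

    def p(x):
        total = 0.0
        for i in range(len(xs)):
            term = ys[i]
            for j in range(len(xs)):
                if j != i:
                    term *= (x - xs[j]) / (xs[i] - xs[j])
            total += term
        return total

    return p

# Three samples of x**2 + 1; the degree-2 interpolant recovers it exactly,
# so the value at x = 1.5 is 1.5**2 + 1 = 3.25.
f = lagrange([(0.0, 1.0), (1.0, 2.0), (2.0, 5.0)])
```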

15 pages, 2697 KB  
Article
Exploring the Influence of Oblateness on Asymptotic Orbits in the Hill Three-Body Problem
by Vassilis S. Kalantonis
AppliedMath 2025, 5(1), 30; https://doi.org/10.3390/appliedmath5010030 - 17 Mar 2025
Cited by 3 | Viewed by 1566
Abstract
We examine the modified Hill three-body problem by incorporating the oblateness of the primary body and focus on its asymptotic orbits. Specifically, we analyze and characterize homoclinic and heteroclinic connections associated with the collinear equilibrium points. By systematically varying the oblateness parameter, we determine conditions for the existence and location of these orbits. Our results confirm the presence of both homoclinic orbits, where trajectories asymptotically connect an equilibrium point to itself, and heteroclinic orbits, which establish connections between two distinct equilibrium points, via their stable and unstable invariant manifolds, which are computed both analytically and numerically. To achieve precise computations, we employ differential correction techniques and leverage the system’s inherent symmetries. Numerical calculations are carried out for orbit multiplicities up to twelve, ensuring a comprehensive exploration of the dynamical properties. Full article

18 pages, 285 KB  
Article
Option Pricing with Given Risk Constraints and Its Application to Life Insurance Contracts
by Betty Guo and Alexander Melnikov
AppliedMath 2025, 5(1), 25; https://doi.org/10.3390/appliedmath5010025 - 4 Mar 2025
Cited by 1 | Viewed by 1145
Abstract
This paper presents a method for hedging in markets of two-factor diffusion and jump diffusion models under the restriction of a specified probability of success. In addition, a method for hedging with a given shortfall amount is developed. A maximal perfect hedging set is constructed for options involving the exchange of one asset for another. The developed method is applied to the pricing of equity-linked life insurance contracts, such as “pure endowments with a guarantee”. Traditional pricing approaches for hedging options often yield minimal returns for investors. By accepting a predefined level of risk, investors can achieve higher returns. In light of this, this paper proposes risk management strategies applicable to these hybrid financial and insurance products. Full article
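The exchange options mentioned above have a classical perfect-hedge benchmark, Margrabe's formula, which the sketch below implements for the option to exchange one lognormal asset for another. The parameters are hypothetical, and the paper's quantile-hedging construction, which accepts a given shortfall probability in return for a cheaper hedge, is not reproduced here.

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def margrabe(s1, s2, vol1, vol2, rho, t):
    """Margrabe price of the option to exchange asset 2 for asset 1 at time t."""
    sigma = sqrt(vol1 ** 2 + vol2 ** 2 - 2.0 * rho * vol1 * vol2)
    d1 = (log(s1 / s2) + 0.5 * sigma ** 2 * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s1 * norm_cdf(d1) - s2 * norm_cdf(d2)

price = margrabe(100.0, 100.0, 0.2, 0.3, 0.5, 1.0)
```

Because only the ratio of the two assets matters, no interest rate appears; the option value always exceeds the intrinsic value s1 - s2 when volatility is positive.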
15 pages, 438 KB  
Article
Modeling and Mathematical Analysis of Liquidity Risk Contagion in the Banking System Using an Optimal Control Approach
by Said Fahim, Hamza Mourad and Mohamed Lahby
AppliedMath 2025, 5(1), 20; https://doi.org/10.3390/appliedmath5010020 - 27 Feb 2025
Cited by 3 | Viewed by 2813
Abstract
The study of contagion dynamics is a well-established domain within epidemiology, where the spread of infectious diseases is modeled and analyzed. In recent years, similar methodologies have been applied to the financial sector to better understand and predict the propagation of risks within banking systems. This paper examines the application of contagion models to assessing liquidity risk in the banking sector, leveraging optimal control theory to evaluate potential interventions by central banks. Using data from the largest European banks, we simulate the impact of central bank measures on liquidity risk. By employing optimal control techniques, we construct a model capable of simulating various scenarios to evaluate the effectiveness of policy interventions in mitigating financial contagion. Our approach provides a robust framework for analyzing the systemic risk propagation within the banking network, offering qualitative insights into the contagion mechanisms and their implications for the financial and macroeconomic landscape. The model simulates three distinct scenarios, each representing varying levels of intervention and market conditions. The results demonstrate the model’s ability to capture the intricate interactions among major European banks, reflecting the complex realities of the financial system. These findings emphasize the critical role of central bank policies in maintaining financial stability and underscore the necessity of coordinated international efforts to manage systemic risks. This analysis contributes to a broader understanding of financial contagion, offering valuable insights for policymakers and financial institutions aiming to strengthen their resilience against future crises. The data used for the parameters are historical and may not reflect recent changes in the banking system. The model could also be improved by incorporating non-financial factors, such as the behaviors of market actors. For future research, several improvements are possible. One improvement would be to make the bank interactions more dynamic so that they better reflect rapid market changes. It would also be interesting to add financial crisis scenarios to test the system’s resilience. Using more up-to-date data and incorporating new regulations would help refine the model. Finally, it would be relevant to examine the impact of external events, such as geopolitical crises, on the propagation of systemic risk. In conclusion, while the model is useful, there are several avenues for improving it and making it more suitable for current realities. Full article

16 pages, 291 KB  
Article
Deterministic Asynchronous Threshold-Based Opinion Dynamics in Signed Weighted Graphs
by Miriam Di Ianni
AppliedMath 2025, 5(1), 16; https://doi.org/10.3390/appliedmath5010016 - 9 Feb 2025
Cited by 1 | Viewed by 1395
Abstract
Among the many (mostly randomized) models proposed in recent decades to study how the opinions of a set of individuals interconnected by pairwise relations evolve, this paper introduces a novel deterministic model able to encompass individual choices, the strength and sign of relations, and asynchronism. In particular, asynchronism has until now been considered only in randomized settings. The paper then studies in which cases the behavior of the resulting dynamical network is predictable, that is, in which cases the number of opinion configurations encountered before the dynamical network enters a loop is polynomially bounded by the network size. Full article
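The ingredients named above (signed weights, thresholds, asynchronous activation) can be sketched as follows. The three-node graph, the weights, and the fixed round-robin activation schedule are illustrative choices, not the paper's construction.

```python
# Signed weighted adjacency: edges[(i, j)] = weight (negative = antagonistic).
edges = {(0, 1): 1.0, (1, 0): 1.0,
         (1, 2): -0.5, (2, 1): -0.5,
         (0, 2): 0.8, (2, 0): 0.8}
opinions = {0: 1, 1: -1, 2: -1}     # binary opinions, encoded as +1 / -1

def update(node):
    """Threshold rule: adopt the sign of the weighted neighborhood influence."""
    influence = sum(w * opinions[j] for (i, j), w in edges.items() if i == node)
    if influence != 0:
        opinions[node] = 1 if influence > 0 else -1

# Asynchronous dynamics: activate one node at a time in round-robin order and
# record configurations to detect when the system enters a loop.
seen = []
for step in range(20):
    update(step % 3)
    config = tuple(opinions[k] for k in sorted(opinions))
    if config in seen:
        break
    seen.append(config)
```

On this tiny instance the dynamics reach a fixed point after a single activation, so the number of distinct configurations visited is trivially small; the paper's question is when such polynomial bounds hold in general.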

24 pages, 30802 KB  
Article
Effect of Calcium on the Characteristics of Action Potential Under Different Electrical Stimuli
by Xuan Qiao and Wei Yao
AppliedMath 2024, 4(4), 1358-1381; https://doi.org/10.3390/appliedmath4040072 - 1 Nov 2024
Cited by 2 | Viewed by 11312
Abstract
This study investigates the role of calcium ions in the release of action potentials by comparing two models based on the Hodgkin–Huxley (HH) framework: the standard HH model and an HH + Ca model that incorporates calcium ion channels. Purkinje cells’ responses to four types of electrical current stimuli—constant direct current, step current, square wave current, and sine current—were simulated to analyze the impact of calcium on action potential characteristics. The results indicate that, under the constant direct current stimulation, the action potential firing frequency of both models increased with the escalating current intensity, while the delay time of the first action potential decreased. However, when the current intensity exceeded a specific threshold, the peak amplitude of the action potential gradually diminished. The HH + Ca model exhibited a longer delay in the first action potential compared to the HH model but maintained an action potential release under stronger currents. In response to the step current, both models showed an increased action potential frequency with a higher current, but the HH + Ca model generated subthreshold oscillations under weak currents. With the square wave current, the action potential frequency increased, though the HH + Ca model experienced suppression under high-frequency weak currents. Under the sine current, the action potential frequency rose, with the HH + Ca model showing less depression near the sine peak due to calcium’s role in modulating membrane potential. These findings suggest that calcium ions contribute to a more stable action potential release under varying stimuli. Full article

19 pages, 425 KB  
Article
Train Neural Networks with a Hybrid Method That Incorporates a Novel Simulated Annealing Procedure
by Ioannis G. Tsoulos, Vasileios Charilogis and Dimitrios Tsalikakis
AppliedMath 2024, 4(3), 1143-1161; https://doi.org/10.3390/appliedmath4030061 - 6 Sep 2024
Cited by 3 | Viewed by 2350
Abstract
In this paper, an innovative hybrid technique is proposed for the efficient training of artificial neural networks, which are used both in class learning problems and in data fitting problems. This hybrid technique combines the well-tested technique of Genetic Algorithms with an innovative variant of Simulated Annealing, in order to achieve high learning rates for the neural networks. This variant was applied periodically to randomly selected chromosomes from the population of the Genetic Algorithm in order to reduce the training error associated with these chromosomes. The proposed method was tested on a wide series of classification and data fitting problems from the relevant literature and the results were compared against other methods. The comparison with other neural network training techniques as well as the statistical comparison revealed that the proposed method is significantly superior, as it managed to significantly reduce the neural network training error in the majority of the used datasets. Full article
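The hybrid structure described above can be caricatured on a toy objective. Everything here is an assumption for illustration: the sphere function stands in for a network's training error, the GA is a bare elitist loop, and the annealing step is the textbook Metropolis rule rather than the paper's novel variant.

```python
import math
import random

random.seed(1)

def error(x):                  # stand-in for a network training error (assumed)
    return sum(v * v for v in x)

def mutate(x, scale):
    return [v + random.gauss(0.0, scale) for v in x]

def anneal(x, steps=300, t0=1.0):
    """Textbook Metropolis annealing of one chromosome, tracking the best."""
    cur = best = x
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9        # linear cooling schedule
        cand = mutate(cur, 0.1)
        delta = error(cand) - error(cur)
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur = cand
        if error(cur) < error(best):
            best = cur
    return best

# Bare elitist GA; every fifth generation one randomly chosen chromosome is
# refined by the annealing step, mirroring the periodic refinement above.
pop = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(10)]
for gen in range(30):
    pop.sort(key=error)
    pop = pop[:5] + [mutate(random.choice(pop[:5]), 0.3) for _ in range(5)]
    if gen % 5 == 0:
        i = random.randrange(len(pop))
        pop[i] = anneal(pop[i])

best = anneal(min(pop, key=error))
```

The annealing call never returns a worse chromosome than it received (it tracks the best point visited), which is one simple way to guarantee the refinement step reduces the training error of the selected chromosome.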

15 pages, 331 KB  
Review
Two P or Not Two P: Mendel Random Variables in Combining Fake and Genuine p-Values
by M. Fátima Brilhante, M. Ivette Gomes, Sandra Mendonça, Dinis Pestana and Rui Santos
AppliedMath 2024, 4(3), 1128-1142; https://doi.org/10.3390/appliedmath4030060 - 5 Sep 2024
Cited by 1 | Viewed by 1938
Abstract
The classical tests for combining p-values use suitable statistics T(P1, …, Pn), which are based on the assumption that the observed p-values are genuine, i.e., under null hypotheses, are observations from independent and identically distributed Uniform(0,1) random variables P1, …, Pn. However, the phenomenon known as publication bias, which generally results from the publication of studies that reject null hypotheses of no effect or no difference, can tempt researchers to replicate their experiments, generally no more than once, with the aim of obtaining “better” p-values and reporting the smallest of the two observed p-values, to increase the chances of their work being published. However, when such “fake p-values” exist, they tamper with the statistic T(P1, …, Pn) because they are observations from a Beta(1,2) distribution. If present, the right model for the random variables Pk is described as a tilted Uniform distribution, also called a Mendel distribution, since it was underlying Fisher’s critique of Mendel’s work. Therefore, methods for combining genuine p-values are reviewed, and it is shown how quantiles of classical combining test statistics, allowing a small number of fake p-values, can be used to make an informed decision when jointly combining fake (from Two P) and genuine (from not Two P) p-values. Full article
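Two facts from the abstract can be checked numerically: a "fake" p-value reported as the smaller of two genuine ones follows a Beta(1,2) law (density 2(1 - p), mean 1/3 instead of 1/2), and Fisher's classical combining statistic has a closed-form chi-square tail for even degrees of freedom. The four p-values below are made up for illustration.

```python
import math
import random

random.seed(0)

def fisher_statistic(pvals):
    """Fisher's combined statistic: -2 * sum(ln p_i) ~ chi-square(2n) under H0."""
    return -2.0 * sum(math.log(p) for p in pvals)

def chi2_sf_even(x, dof):
    """Chi-square survival function, closed form for even degrees of freedom."""
    m = dof // 2
    return math.exp(-x / 2.0) * sum((x / 2.0) ** k / math.factorial(k)
                                    for k in range(m))

# Empirical mean of min(U1, U2): should be near 1/3, the Beta(1,2) mean.
fake_mean = sum(min(random.random(), random.random())
                for _ in range(100000)) / 100000

genuine = [0.21, 0.47, 0.08, 0.66]       # made-up p-values for illustration
p_combined = chi2_sf_even(fisher_statistic(genuine), 2 * len(genuine))
```

Substituting fake p-values into the same statistic shifts it toward rejection, which is exactly why the review studies quantiles that tolerate a small number of Beta(1,2) components.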
12 pages, 1611 KB  
Article
Some Comments about the p-Generalized Negative Binomial (NBp) Model
by Daniel A. Griffith
AppliedMath 2024, 4(2), 731-742; https://doi.org/10.3390/appliedmath4020039 - 11 Jun 2024
Cited by 1 | Viewed by 1801
Abstract
This paper describes various selected properties and features of negative binomial (NB) random variables, with special reference to the NB2 (i.e., p = 2) specification and some generalizations to NBp (i.e., p ≥ 2) specifications. It presents new results (e.g., the NBp moment-generating function) with regard to the relationship between a sample mean and its accompanying variance, as well as spatial statistical/econometric numerical and empirical examples, whose parameters are estimated by maximum likelihood or the method of moments. Finally, it highlights the Moran eigenvector spatial filtering methodology within the context of generalized linear modeling, demonstrating it in terms of spatial negative binomial regression. Its overall conclusion bolsters important findings the literature already reports, with a newly recognized empirical example of an NB3 phenomenon. Full article
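The mean-variance relationship at the heart of the NBp specification is simple enough to state directly: the variance grows as the p-th power of the mean. A one-line sketch (the dispersion value alpha = 0.5 is arbitrary):

```python
def nbp_variance(mu, alpha, p):
    """NBp mean-variance relationship: variance = mu + alpha * mu**p."""
    return mu + alpha * mu ** p

# NB2 is the familiar quadratic case; at the same mean, an NB3 phenomenon
# implies substantially heavier overdispersion.
v2 = nbp_variance(10.0, 0.5, 2)
v3 = nbp_variance(10.0, 0.5, 3)
```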

22 pages, 907 KB  
Article
Introducing a Parallel Genetic Algorithm for Global Optimization Problems
by Vasileios Charilogis and Ioannis G. Tsoulos
AppliedMath 2024, 4(2), 709-730; https://doi.org/10.3390/appliedmath4020038 - 10 Jun 2024
Cited by 5 | Viewed by 5001
Abstract
The topic of efficiently finding the global minimum of multidimensional functions is widely applicable to numerous problems in the modern world. Many algorithms have been proposed to address these problems, among which genetic algorithms and their variants are particularly notable. Their popularity is due to their exceptional performance in solving optimization problems and their adaptability to various types of problems. However, genetic algorithms require significant computational resources and time, prompting the need for parallel techniques. Moving in this research direction, a new global optimization method is presented here that exploits the use of parallel computing techniques in genetic algorithms. This innovative method employs autonomous parallel computing units that periodically share the optimal solutions they discover. Increasing the number of computational threads, coupled with solution exchange techniques, can significantly reduce the number of calls to the objective function, thus saving computational power. Also, a stopping rule is proposed that takes advantage of the parallel computational environment. The proposed method was tested on a broad array of benchmark functions from the relevant literature and compared with other global optimization techniques regarding its efficiency. Full article
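The island structure with periodic sharing of best solutions can be mimicked sequentially in a short sketch. The objective, operators, island count, and sharing schedule below are toy choices for illustration, not the proposed method's actual components.

```python
import random

random.seed(2)

def objective(x):                      # toy multidimensional test function
    return sum(v * v for v in x)

def evolve(island):
    """One generation: elitist selection plus Gaussian mutation."""
    island.sort(key=objective)
    elite = island[: len(island) // 2]
    children = [[v + random.gauss(0.0, 0.2) for v in random.choice(elite)]
                for _ in range(len(island) - len(elite))]
    return elite + children

def global_best(islands):
    return min((min(isl, key=objective) for isl in islands), key=objective)

# Four "parallel" units simulated sequentially; every fifth generation each
# unit receives the best solution found so far by any unit (the sharing step).
islands = [[[random.uniform(-3, 3) for _ in range(4)] for _ in range(12)]
           for _ in range(4)]
for gen in range(40):
    islands = [evolve(isl) for isl in islands]
    if gen % 5 == 4:
        best = global_best(islands)
        for isl in islands:
            isl[-1] = list(best)

best = global_best(islands)
```

In a genuinely parallel implementation each island would run on its own thread and only the small sharing step would require synchronization, which is where the reported savings in objective-function calls come from.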
(This article belongs to the Special Issue Optimization and Machine Learning)

24 pages, 1233 KB  
Article
SIRS Epidemic Models with Delays, Partial and Temporary Immunity and Vaccination
by Benito Chen-Charpentier
AppliedMath 2024, 4(2), 666-689; https://doi.org/10.3390/appliedmath4020036 - 28 May 2024
Cited by 4 | Viewed by 6583
Abstract
The basic reproduction, or reproductive number, is a useful index that indicates whether or not there will be an epidemic. However, it is also very important to determine whether an epidemic will eventually decrease and disappear or persist as an endemic. Different infectious diseases have different behaviors, and the mathematical models used to simulate them should capture the most important processes; however, the models also involve simplifications. Influenza epidemics are usually short-lived and can be modeled with ordinary differential equations without considering demographics. Delays such as the infection time can change the behavior of the solutions. The same is true if there is permanent or temporary immunity, or complete or partial immunity. Vaccination, isolation and the use of antivirals can also change the outcome. In this paper, we introduce several new models and use them to find the effects of all the above factors, paying special attention to whether the model can represent an infectious process that eventually disappears. We determine the equilibrium solutions and establish the stability of the disease-free equilibrium using various methods. We also show that many models of influenza or other epidemics with a short duration do not have solutions with a disappearing epidemic. The main objective of the paper is to introduce different ways of modeling immunity in epidemic models. Several scenarios with different immunities are studied since a person may not be re-infected because he/she has total or partial immunity or because there were no close contacts. We show that some relatively small changes, such as in the vaccination rate, can significantly change the dynamics; for example, the existence and number of the disease-free equilibria. We also illustrate that while introducing delays makes the models more realistic, the dynamics have the same qualitative behavior. Full article
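A minimal forward-Euler sketch of a basic SIRS model without demographics, matching the short-duration setting discussed above; the rates are illustrative, not taken from the paper. With b/r > 1 and a positive immunity-loss rate g, the infection persists rather than disappearing, one of the behaviors the paper examines.

```python
# Forward-Euler integration of a basic SIRS model without demographics:
#   S' = -b*S*I + g*R,   I' = b*S*I - r*I,   R' = r*I - g*R
# Rates below are illustrative, not taken from the paper.
b, r, g = 0.5, 0.2, 0.05      # transmission, recovery, loss-of-immunity rates
S, I, R = 0.99, 0.01, 0.0
dt = 0.1
for _ in range(5000):         # integrate to t = 500
    dS = -b * S * I + g * R
    dI = b * S * I - r * I
    dR = r * I - g * R
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR

R0 = b / r                    # basic reproduction number of this model
```

For these rates R0 = 2.5 and the trajectory settles at the endemic equilibrium, approximately S = r/b = 0.4 and I = 0.12, instead of the epidemic dying out; setting g = 0 recovers the plain SIR case in which I eventually vanishes.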

33 pages, 468 KB  
Article
The Reliability Inference for Multicomponent Stress–Strength Model under the Burr X Distribution
by Yuhlong Lio, Ding-Geng Chen, Tzong-Ru Tsai and Liang Wang
AppliedMath 2024, 4(1), 394-426; https://doi.org/10.3390/appliedmath4010021 - 17 Mar 2024
Cited by 3 | Viewed by 2459
Abstract
The reliability of the multicomponent stress–strength system was investigated under the two-parameter Burr X distribution model. Based on the structure of the system, the type II censored sample of strength and random sample of stress were obtained for the study. The maximum likelihood estimators were established by utilizing the type II censored Burr X distributed strength and complete random stress data sets collected from the multicomponent system. Two related approximate confidence intervals were achieved by utilizing the delta method under the asymptotic normal distribution theory and parametric bootstrap procedure. Meanwhile, point and confidence interval estimators based on alternative generalized pivotal quantities were derived. Furthermore, a likelihood ratio test to infer the equality of both scalar parameters is provided. Finally, a practical example is provided for illustration. Full article
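A quick Monte Carlo illustration of the multicomponent stress-strength quantity: the probability that at least s of k Burr X strengths exceed one common Burr X stress. The parameters are illustrative, the common inner scale is an assumption made for convenience, and the article's actual estimators (maximum likelihood under type II censoring, bootstrap, pivotal quantities) are not reproduced.

```python
import math
import random

random.seed(3)

def burr_x_sample(alpha, lam):
    """Inverse-CDF draw from Burr X: F(x) = (1 - exp(-(lam*x)**2))**alpha."""
    u = random.random()
    return math.sqrt(-math.log(1.0 - u ** (1.0 / alpha))) / lam

def reliability_mc(s, k, a_strength, a_stress, lam, n=20000):
    """P(at least s of k component strengths exceed one common stress draw)."""
    hits = 0
    for _ in range(n):
        stress = burr_x_sample(a_stress, lam)
        surviving = sum(burr_x_sample(a_strength, lam) > stress
                        for _ in range(k))
        if surviving >= s:
            hits += 1
    return hits / n

r = reliability_mc(s=2, k=3, a_strength=2.0, a_stress=1.0, lam=1.0)
```

For these particular parameters a direct integral calculation gives 24/35, roughly 0.686, which the simulation reproduces to Monte Carlo accuracy.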

14 pages, 417 KB  
Article
Decompositions of the λ-Fold Complete Mixed Graph into Mixed 6-Stars
by Robert Gardner and Kazeem Kosebinu
AppliedMath 2024, 4(1), 211-224; https://doi.org/10.3390/appliedmath4010011 - 5 Feb 2024
Cited by 1 | Viewed by 1430
Abstract
Graph and digraph decompositions are a fundamental part of design theory. Probably the best known decompositions are related to decomposing the complete graph into 3-cycles (which correspond to Steiner triple systems), and decomposing the complete digraph into orientations of a 3-cycle (the two possible orientations of a 3-cycle correspond to directed triple systems and Mendelsohn triple systems). Decompositions of the λ-fold complete graph and the λ-fold complete digraph have been explored, giving generalizations of decompositions of complete simple graphs and digraphs. Decompositions of the complete mixed graph (which contains an edge and two distinct arcs between every two vertices) have also been explored in recent years. Since the complete mixed graph has twice as many arcs as edges, an isomorphic decomposition of a complete mixed graph into copies of a sub-mixed graph must involve a sub-mixed graph with twice as many arcs as edges. A partial orientation of a 6-star with two edges and four arcs is an example of such a mixed graph; there are five such mixed stars. In this paper, we give necessary and sufficient conditions for a decomposition of the λ-fold complete mixed graph into each of these five mixed stars for all λ>1. Full article
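The counting argument behind the necessary conditions is easy to sketch: the λ-fold complete mixed graph on n vertices has λn(n-1)/2 edges and λn(n-1) arcs, and each mixed 6-star copy uses 2 edges and 4 arcs, so the edge count must split into pairs, i.e. 4 must divide λn(n-1). (The paper's full necessary and sufficient conditions are finer than this simple divisibility check.)

```python
def mixed_graph_counts(n, lam):
    """Edge and arc counts of the lam-fold complete mixed graph on n vertices."""
    edges = lam * n * (n - 1) // 2
    arcs = lam * n * (n - 1)
    return edges, arcs

def necessary_condition(n, lam):
    """Each mixed 6-star copy uses 2 edges and 4 arcs, so the number of copies
    is edges/2 = lam*n*(n-1)/4; a decomposition requires 4 | lam*n*(n-1).
    (The arc count then splits automatically, since arcs = 4 * copies.)"""
    return (lam * n * (n - 1)) % 4 == 0
```

For example, n = 8 with λ = 1 passes the check (56 is divisible by 4) while n = 6 with λ = 1 fails it (30 is not).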
15 pages, 434 KB  
Article
Life at the Landau Pole
by Paul Romatschke
AppliedMath 2024, 4(1), 55-69; https://doi.org/10.3390/appliedmath4010003 - 1 Jan 2024
Cited by 13 | Viewed by 3316
Abstract
If a quantum field theory has a Landau pole, the theory is usually called ‘sick’ and dismissed as a candidate for an interacting UV-complete theory. In a recent study of the interacting 4d O(N) model at large N, it was shown that observables remain well-defined and finite at the Landau pole. In this work, I investigate both relevant and irrelevant deformations of the said model at the Landau pole, finding that physical observables remain unaffected. Apparently, the Landau pole in this theory is benign. As a phenomenological application, I compare the O(N) model to QCD by identifying Λ_MS̄ with the Landau pole in the O(N) model. Full article
35 pages, 2277 KB  
Article
Measuring the Risk of Vulnerabilities Exploitation
by Maria de Fátima Brilhante, Dinis Pestana, Pedro Pestana and Maria Luísa Rocha
AppliedMath 2024, 4(1), 20-54; https://doi.org/10.3390/appliedmath4010002 - 24 Dec 2023
Cited by 5 | Viewed by 3693
Abstract
Modeling the vulnerability lifecycle and exploitation frequency is at the core of network security evaluation. Pareto, Weibull, and log-normal models have been widely used to model exploit and patch availability dates, the time to compromise a system, the time between compromises, and exploitation volumes. Random samples (systematic and simple random sampling) of the time from publication to update of cybervulnerabilities disclosed in 2021 and in 2022 are analyzed to evaluate the goodness of fit of the traditional Pareto and log-normal laws. As censoring and thinning almost surely occur, other heavy-tailed distributions in the domain of attraction of extreme value or geo-extreme value laws are investigated as suitable alternatives. Goodness-of-fit tests, the Akaike information criterion (AIC), and the Vuong test support the statistical choice of the log-logistic, a geo-max stable law in the domain of attraction of the Fréchet model of maxima, with hyperexponential and general extreme value fittings as runners-up. Evidence that the data come from a mixture of differently stretched populations affects vulnerability scoring systems, specifically the Common Vulnerability Scoring System (CVSS). Full article
19 pages, 9516 KB  
Article
Modeling and Visualizing the Dynamic Spread of Epidemic Diseases—The COVID-19 Case
by Loukas Zachilas and Christos Benos
AppliedMath 2024, 4(1), 1-19; https://doi.org/10.3390/appliedmath4010001 - 20 Dec 2023
Viewed by 2109
Abstract
Our aim is to provide insight into the procedures and dynamics that drive the spread of contagious diseases through populations. Our simulation tool can increase our understanding of the spatial parameters that affect the diffusion of a virus. SIR models are based on the hypothesis that populations are “well mixed”. Our model is an attempt to focus on the effects of the specific distribution of the initially infected individuals through the population and to provide insights that account for the stochasticity of the transmission process. For this purpose, we represent the population as a square lattice of nodes. Each node represents an individual that may or may not carry the virus. Nodes that carry the virus can only transfer it to susceptible neighboring nodes. This important revision of the common SIR model provides a very realistic property: the same number of initially infected individuals can lead to multiple paths, depending on their initial distribution in the lattice. This property yields better predictions and probable scenarios from which to construct a probability function and appropriate confidence intervals. Finally, this structure permits realistic visualizations of the results, helping us understand the process of contagion and spread of a disease and the effects of any measures applied, especially mobility restrictions among countries and regions. Full article
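The lattice transmission rule described above (infected nodes passing the virus only to susceptible neighbors) can be sketched as follows. This is a minimal illustration, not the authors' code; the grid size, infection probability, and fixed recovery time are assumed values:

```python
import random

S, I, R = 0, 1, 2  # susceptible, infected, recovered

def step(grid, timer, p_infect, recovery_time, rng):
    """One synchronous update: each infected node may infect its
    4-neighbours; infected nodes recover after recovery_time steps."""
    n = len(grid)
    new_grid = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == I:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] == S:
                        if rng.random() < p_infect:
                            new_grid[ni][nj] = I
                timer[i][j] += 1
                if timer[i][j] >= recovery_time:
                    new_grid[i][j] = R
    return new_grid

def simulate(n=20, seeds=((10, 10),), p_infect=0.3, recovery_time=5,
             steps=50, seed=0):
    """Run the lattice model; return per-step (S, I, R) counts."""
    rng = random.Random(seed)
    grid = [[S] * n for _ in range(n)]
    timer = [[0] * n for _ in range(n)]
    for i, j in seeds:
        grid[i][j] = I
    history = []
    for _ in range(steps):
        flat = [c for row in grid for c in row]
        history.append((flat.count(S), flat.count(I), flat.count(R)))
        grid = step(grid, timer, p_infect, recovery_time, rng)
    return history
```

Running `simulate` with the same number of seeds placed at different lattice positions produces different epidemic paths, which is exactly the property the abstract highlights.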
30 pages, 1226 KB  
Article
Max-C and Min-D Projection Auto-Associative Fuzzy Morphological Memories: Theory and an Application for Face Recognition
by Alex Santana dos Santos and Marcos Eduardo Valle
AppliedMath 2023, 3(4), 989-1018; https://doi.org/10.3390/appliedmath3040050 - 8 Dec 2023
Cited by 1 | Viewed by 1699
Abstract
Max-C and min-D projection auto-associative fuzzy morphological memories (max-C and min-D PAFMMs) are two-layer feedforward fuzzy morphological neural networks designed to store and retrieve finite fuzzy sets. This paper addresses the main features of these auto-associative memories: unlimited absolute storage capacity, fast retrieval of stored items, few spurious memories, and excellent tolerance to either dilative or erosive noise. Particular attention is given to the so-called Zadeh’s PAFMM, which exhibits the most significant noise tolerance among the max-C and min-D PAFMMs besides performing no floating-point arithmetic operations. Computational experiments reveal that Zadeh’s max-C PAFMM, combined with a noise masking strategy, yields a fast and robust classifier with strong potential for face recognition tasks. Full article
48 pages, 663 KB  
Review
Interval Quadratic Equations: A Review
by Isaac Elishakoff and Nicolas Yvain
AppliedMath 2023, 3(4), 909-956; https://doi.org/10.3390/appliedmath3040048 - 1 Dec 2023
Viewed by 2655
Abstract
In this study, we tackle the subject of interval quadratic equations, aiming to accurately determine the root enclosures of quadratic equations whose coefficients are interval variables. This study focuses on interval quadratic equations in which only one coefficient is treated as an interval variable. The four methods reviewed here to solve this problem are: (i) the method of classic interval analysis used by Elishakoff and Daphnis, (ii) the direct method based on minimizations and maximizations, also used by the same authors, (iii) the method of quantifier elimination used by Ioakimidis, and (iv) the interval parametrization method suggested by Elishakoff and Miglis, again based on minimizations and maximizations. We also compare the results yielded by all these methods, using the computer algebra system Mathematica for the computer evaluations (including quantifier eliminations), in order to conclude which method is the most efficient for solving problems involving interval quadratic equations. Full article
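The direct method based on minimizations and maximizations can be illustrated with a crude numerical sketch: sample the interval coefficient and track the extremes of each real root. The sampling resolution and the choice of which coefficient varies are assumptions for illustration, not the reviewed authors' exact computations:

```python
import math

def real_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 (a != 0), sorted; [] if complex."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return sorted(((-b - r) / (2 * a), (-b + r) / (2 * a)))

def root_enclosures(a, b_interval, c, samples=10001):
    """Crude direct method: sweep b over its interval and record the
    minimum and maximum attained by each real root."""
    lo, hi = b_interval
    lower_env = [math.inf, math.inf]
    upper_env = [-math.inf, -math.inf]
    for k in range(samples):
        b = lo + (hi - lo) * k / (samples - 1)
        roots = real_roots(a, b, c)
        if len(roots) == 2:
            for i in (0, 1):
                lower_env[i] = min(lower_env[i], roots[i])
                upper_env[i] = max(upper_env[i], roots[i])
    return list(zip(lower_env, upper_env))
```

For x² + bx + 2 with b ∈ [3, 4], this sweep recovers the enclosure [−2 − √2, −2] for the smaller root and [−1, −2 + √2] for the larger one; the exact methods reviewed in the article obtain such enclosures analytically rather than by sampling.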
28 pages, 8158 KB  
Article
Some Comments about Zero and Non-Zero Eigenvalues from Connected Undirected Planar Graph Adjacency Matrices
by Daniel A. Griffith
AppliedMath 2023, 3(4), 771-798; https://doi.org/10.3390/appliedmath3040042 - 1 Nov 2023
Cited by 2 | Viewed by 4276
Abstract
Two linear algebra problems call for solutions, creating the themes pursued in this paper. The first problem interfaces with graph theory via binary 0-1 adjacency matrices and their Laplacian counterparts. More contemporary spatial statistics/econometrics applications motivate the second problem, which involves approximating the eigenvalues of massively large versions of these two aforementioned matrices. The solutions proposed in this paper are essentially a reformulated multiple linear regression analysis for the first problem and a matrix inertia refinement adapted to existing work for the second problem. Full article
12 pages, 315 KB  
Article
Terracini Loci for Maps
by Edoardo Ballico
AppliedMath 2023, 3(3), 690-701; https://doi.org/10.3390/appliedmath3030036 - 17 Sep 2023
Viewed by 1900
Abstract
Let X be a smooth projective variety and f: X → P^r a morphism birational onto its image. We define the Terracini loci of the map f. Most results are only for the case dim X = 1. With this new and more flexible definition, it is possible to prove strong nonemptiness results with the full classification of all exceptional cases. We also consider Terracini loci with restricted support (solutions not intersecting a closed set B ⊂ X or solutions containing a prescribed p ∈ X). Our definitions work both for the Zariski and the Euclidean topology, and we suggest extensions to the case of real varieties. We also define Terracini loci for joins of two or more subvarieties of the same projective space. The proofs use algebro-geometric tools. Full article
15 pages, 306 KB  
Article
Optimal Statistical Analyses of Bell Experiments
by Richard D. Gill
AppliedMath 2023, 3(2), 446-460; https://doi.org/10.3390/appliedmath3020023 - 16 May 2023
Viewed by 3956
Abstract
We show how both smaller and more reliable p-values can be computed in Bell-type experiments by using statistical deviations from no-signalling equalities to reduce statistical noise in the estimation of Bell’s S or Eberhard’s J. Further improvement was obtained by using the Wilks likelihood ratio test based on the four tetranomially distributed vectors of counts of the four different outcome combinations, one 4-vector for each of the four setting combinations. The methodology was illustrated by application to the loophole-free Bell experiments of 2015 and 2016 performed in Delft and Munich, at NIST, and in Vienna, respectively, and also to the earlier (1998) Innsbruck experiment of Weihs et al. and the recent (2022) Munich experiment of Zhang et al., which investigates the use of a loophole-free Bell experiment as part of a protocol for device-independent quantum key distribution (DIQKD). Full article
11 pages, 423 KB  
Article
Analytical Approximation of the Jackknife Linking Error in Item Response Models Utilizing a Taylor Expansion of the Log-Likelihood Function
by Alexander Robitzsch
AppliedMath 2023, 3(1), 49-59; https://doi.org/10.3390/appliedmath3010004 - 5 Jan 2023
Cited by 6 | Viewed by 2715
Abstract
Linking errors in item response models quantify the dependence of means, standard deviations, or other distribution parameters on the chosen items. The jackknife approach is frequently employed to compute the linking error. However, the jackknife linking error can be computationally tedious if many items are involved. In this article, we provide an analytical approximation of the jackknife linking error. The newly proposed approach turns out to be computationally much less demanding. Moreover, the new linking error approach performed satisfactorily for datasets with at least 20 items. Full article
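The delete-one-item jackknife that the article approximates analytically can be sketched generically. The toy estimator and data below are illustrative assumptions, not the item response model itself; the formula is the standard jackknife standard error over leave-one-item-out estimates:

```python
import math

def jackknife_linking_error(item_stats, estimator):
    """Delete-one-item jackknife: recompute the linking estimate with
    each item removed, then aggregate the leave-one-out deviations as
    sqrt((I - 1) / I * sum((theta_i - theta_bar)**2))."""
    I = len(item_stats)
    loo = [estimator(item_stats[:i] + item_stats[i + 1:]) for i in range(I)]
    loo_mean = sum(loo) / I
    return math.sqrt((I - 1) / I * sum((t - loo_mean) ** 2 for t in loo))
```

The computational burden the article addresses is visible here: the estimator is re-run once per item, which is exactly what an analytical approximation avoids.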