Axioms doi: 10.3390/axioms13030205
Authors: Fatmah B. Jamjoom, Fadwa M. Algamdei
The objective of our study is to generalize the results on product states of the tensor product of two JC-algebras to infinite tensor products of JC-algebras. We also characterize the tracial product states of the tensor product of two JC-algebras and of infinite tensor products of JC-algebras.
Axioms doi: 10.3390/axioms13030204
Authors: Asifa Tassaddiq, Amna Kalsoom, Maliha Rashid, Kainat Sehr, Dalal Khalid Almutairi
Iterative procedures have proved to be a milestone in the generation of fractals. This paper presents a novel approach for generating and visualizing fractals, specifically Mandelbrot and Julia sets, by utilizing complex polynomials of the form Q_C(p)=ap^n+mp+c, where n≥2. It establishes escape criteria that play a vital role in generating these sets and provides escape time results using different iterative schemes. In addition, the study includes the visualization of graphical images of Julia and Mandelbrot sets, revealing distinct patterns. Furthermore, the study explores the impact of the parameters on the variation of the dynamics, color, and appearance of the fractals.
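The escape-time idea behind such fractal images can be sketched in a few lines. The snippet below iterates p ↦ a·p^n + m·p + c and counts the iterations until the orbit leaves a disk; the coefficient values, the escape radius and the plain Picard iteration are illustrative assumptions, not the paper's actual escape criteria or iterative schemes.

```python
# Escape-time sketch for the iteration p -> a*p**n + m*p + c (plain Picard
# iteration; the paper studies several schemes -- this is the simplest one).
# The escape radius below is a heuristic, not the paper's derived criterion.

def escape_time(c, a=1.0, m=0.1, n=2, max_iter=100, radius=4.0):
    """Number of iterations before |p| exceeds `radius` (Mandelbrot-style:
    the orbit starts at p = 0 and c varies over the plane)."""
    p = 0j
    for k in range(max_iter):
        p = a * p**n + m * p + c
        if abs(p) > radius:
            return k
    return max_iter  # treated as non-escaping

# Points far from the set escape quickly; bounded orbits use the full budget.
print(escape_time(2 + 2j))  # escapes almost immediately
print(escape_time(0j))      # stays bounded for all max_iter iterations
```

Coloring each point of a grid of c-values by its escape time produces the familiar Mandelbrot-type images; fixing c and varying the starting point gives the Julia-type images.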
Axioms doi: 10.3390/axioms13030203
Authors: Jack C. Straton
This paper shows that certain 3F4 hypergeometric functions can be expanded in sums of pair products of 2F3 functions, which reduce in special cases to 2F3 functions expanded in sums of pair products of 1F2 functions. This expands the class of hypergeometric functions having summation theorems beyond those expressible as pair-products of generalized Whittaker functions, 2F1 functions, and 3F2 functions into the realm of pFq functions where p<q for both the summand and terms in the series. In addition to its intrinsic value, this result has a specific application in calculating the response of the atoms to laser stimulation in the Strong Field Approximation.
Axioms doi: 10.3390/axioms13030202
Authors: Ahmed Bakhet, Shahid Hussain, Mohamed Niyaz, Mohammed Zakarya, Ghada AlNemer
In this paper, we study a two-variable extension of matrix Bessel polynomials. We begin by introducing the matrix Bessel polynomials of two variables and derive specific differential formulas and recurrence relations associated with them. Additionally, we present integral formulas for the extended matrix Bessel polynomials. Lastly, we introduce the Laplace–Carson transform for the two-variable matrix Bessel polynomial analogue.
Axioms doi: 10.3390/axioms13030201
Authors: Lazhar Bougoffa, Smail Bougouffa, Ammar Khanfer
This article provides a detailed exploration of the SIR epidemic model, starting with its meticulous formulation. The study employs a novel approach called the upper and lower bounds technique to approximate the solution to the SIR model, providing insights into the dynamic interplay between susceptible S, infected I, and recovered R populations. A new parametric solution to this model is presented. Applying the Adomian decomposition method (ADM) allows highly accurate approximate solutions to be attained for the SIR epidemic model. To validate the accuracy and robustness of the proposed approach, a numerical exploration is conducted over a diverse range of experimental parameters. This numerical analysis provides valuable insights into the sensitivity and responsiveness of the SIR epidemic model under varying conditions, contributing to the broader understanding of infectious disease dynamics. The interplay between theoretical formulation and numerical exploration establishes a comprehensive framework for studying the SIR model, with implications for refining our ability to predict and manage the spread of infectious diseases.
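For readers who want a concrete reference solution of the kind used to validate such approximations, the classical SIR system dS/dt = −βSI, dI/dt = βSI − γI, dR/dt = γI can be integrated with a fixed-step fourth-order Runge–Kutta scheme. The parameters and initial data below are illustrative; this is not the paper's upper-and-lower-bounds or ADM construction.

```python
# Fixed-step RK4 reference integration of the classical SIR system.
# beta (transmission rate), gamma (recovery rate) and the initial
# populations below are illustrative values only.

def sir_rk4(beta, gamma, s0, i0, r0, t_end, steps):
    def f(y):
        s, i, r = y
        return (-beta * s * i, beta * s * i - gamma * i, gamma * i)
    h = t_end / steps
    y = (s0, i0, r0)
    for _ in range(steps):
        k1 = f(y)
        k2 = f(tuple(y[j] + 0.5 * h * k1[j] for j in range(3)))
        k3 = f(tuple(y[j] + 0.5 * h * k2[j] for j in range(3)))
        k4 = f(tuple(y[j] + h * k3[j] for j in range(3)))
        y = tuple(y[j] + h / 6 * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j])
                  for j in range(3))
    return y

s, i, r = sir_rk4(beta=0.5, gamma=0.1, s0=0.99, i0=0.01, r0=0.0,
                  t_end=50.0, steps=500)
print(s, i, r)  # S + I + R stays (numerically) equal to the total population
```

A useful sanity check on any SIR solver is conservation: the three compartments must sum to the initial total population at every step.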
Axioms doi: 10.3390/axioms13030200
Authors: Xinya Li, Yan Sun, Jinfeng Qi, Danzhu Wang
This study investigates a green multimodal routing problem with a soft time window. The objective of the routing is to minimize the total costs of accomplishing the multimodal transportation of a batch of goods. To improve the feasibility of optimization, this study formulates the routing problem in an uncertain environment where the capacities and carbon emission factors of the travel and transfer processes in the multimodal network are considered fuzzy. Using triangular fuzzy numbers to describe the uncertainty, this study proposes a fuzzy nonlinear programming model for the specific routing problem. To make the problem solvable, this study adopts the fuzzy chance-constrained programming approach based on the possibility measure to remove the fuzziness of the proposed model. Furthermore, we use linear inequality constraints to reformulate the nonlinear equality constraints represented by continuous piecewise linear functions, linearizing the nonlinear programming model to improve the computational efficiency of problem solving. After this model processing, mathematical programming software can be used to run exact solution algorithms on the specific routing problem. A numerical experiment is given to show the feasibility of the proposed model. Its sensitivity analysis further clarifies that raising the confidence level of the chance constraints, which increases the possibility that the multimodal route planned in advance satisfies the real-time capacity constraints in actual transportation (i.e., the reliability of the routing), also increases both the total costs and carbon emissions of the route. The numerical experiment also finds that charging for carbon emissions is not always effective in reducing emissions. In this case, a bi-objective analysis reveals the conflict between lowering transportation costs and reducing carbon emissions in routing optimization.
The sensitivity of the Pareto solutions with respect to the confidence level reveals that reliability, economy, and environmental sustainability conflict with each other. Based on the findings of this study, the customer and the multimodal transport operator can use the proposed model to organize efficient multimodal transportation that balances these objectives.
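The possibility-measure step that removes the fuzziness can be illustrated on a single triangular fuzzy capacity. In the sketch below, the capacity (a, b, c) and the confidence level α are made-up numbers, and the crisp equivalent d ≤ c − α(c − b) is the standard textbook form of such a chance constraint, not the paper's full model.

```python
# Possibility that a triangular fuzzy quantity (a, b, c) -- e.g. an uncertain
# capacity -- is at least a crisp demand d, and the crisp equivalent of the
# chance constraint Pos(capacity >= d) >= alpha.

def possibility_geq(tfn, d):
    a, b, c = tfn
    if d <= b:
        return 1.0
    if d >= c:
        return 0.0
    return (c - d) / (c - b)

def max_feasible_demand(tfn, alpha):
    """Largest d with Pos(capacity >= d) >= alpha, i.e. d <= c - alpha*(c - b)."""
    a, b, c = tfn
    return c - alpha * (c - b)

cap = (80.0, 100.0, 120.0)          # pessimistic / most likely / optimistic
print(possibility_geq(cap, 110.0))  # 0.5
print(max_feasible_demand(cap, 0.9))
```

Note that raising α shrinks the feasible demand, which mirrors the abstract's observation that demanding higher reliability of the route increases its costs.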
Axioms doi: 10.3390/axioms13030199
Authors: Jong Il Baek, S. E. Abbas, Kul Hur, Ismail Ibedou
Based on an equivalence relation R on X, the equivalence class [x] of a point and the equivalence class [A] of a subset represent neighborhoods of x and A, respectively. These neighborhoods play the main role in defining separation axioms, metric spaces, proximity relations and uniformity structures on an approximation space (X,R), depending on the lower and upper approximations of rough sets. The properties and possible implications of these definitions are studied. The generated approximation topology τR on X is equivalent to the topologies generated by the metric d, the proximity δ and the uniformity U on X. Separated metric spaces, separated proximity spaces and separated uniform spaces are defined, and it is proven that each of them is associated exactly with the discrete topology τR on X.
Axioms doi: 10.3390/axioms13030198
Authors: Kuen-Suan Chen, Tsung-Hua Hsieh, Chia-Pao Chang, Kai-Chao Yao, Tsun-Hung Huang
The Performance Evaluation Matrix (PEM) is an excellent decision-making tool for assessment and resource management. The Satisfaction Index and the Importance Index are two important evaluation indicators in constructing a PEM. Managers can decide whether a service item needs to be improved based on its Satisfaction Index. When resources are limited, managers can determine the priority of improving service items based on the Importance Index. To avoid the risk of misjudgment caused by sampling error and to meet the need for rapid decision-making in enterprises, this study proposes a fuzzy test built on the confidence intervals of the above two key indicators to decide whether essential service items should be improved and to determine the priority of improvement. Since the fuzzy test is relatively complex, this study further derives fuzzy evaluation values and fuzzy evaluation critical values of service items following fuzzy testing rules. In addition, evaluation rules are established to facilitate industrial applications. The approach can be completed with any common word processing software, so it is convenient to apply and easy to manage. Finally, an application example is presented to illustrate the applicability of the proposed approach.
Axioms doi: 10.3390/axioms13030197
Authors: Raquel Pinto, Marcos Spreafico, Carlos Vela
In general, the problem of building optimal convolutional codes under a given criterion is hard, especially when field size restrictions are applied. In this paper, we confront the challenge of constructing an optimal 2D convolutional code for communication over an erasure channel. We propose a general construction method for these codes. Specifically, we provide a construction that is optimal with respect to the decoding method presented in the literature.
Axioms doi: 10.3390/axioms13030196
Authors: Nusrat Raza, Mohammed Fadel, Wei-Shih Du
In this paper, we introduce and study new features of 2-variable (p,q)-Hermite polynomials, such as the (p,q)-diffusion equation, the (p,q)-differential formula and integral representations. In addition, we establish some summation models and their (p,q)-derivatives. Some concluding remarks and nontrivial examples are also provided.
Axioms doi: 10.3390/axioms13030195
Authors: Mohammed Mohammed, Fortuné Massamba, Ion Mihai, Abd Elmotaleb A. M. A. Elamin, M. Saif Aldien
In the present article, we study submanifolds tangent to the Reeb vector field in trans-Sasakian manifolds. We prove Chen’s first inequality and the Chen–Ricci inequality, respectively, for such submanifolds in trans-Sasakian manifolds which admit a semi-symmetric non-metric connection. Moreover, a generalized Euler inequality for special contact slant submanifolds in trans-Sasakian manifolds endowed with a semi-symmetric non-metric connection is obtained.
Axioms doi: 10.3390/axioms13030194
Authors: Wantao Ning, Hao Li
For S⊆V(G), κG(S) denotes the maximum number k of edge-disjoint trees T1,T2,…,Tk in G such that V(Ti)∩V(Tj)=S for any i,j∈{1,2,…,k} with i≠j. For an integer 2≤r≤|V(G)|, the generalized r-connectivity of G is defined as κr(G)=min{κG(S) | S⊆V(G) and |S|=r}. In fact, κ2(G) is the traditional connectivity of G; hence, the generalized r-connectivity is an extension of traditional connectivity. The exchanged folded hypercube EFH(s,t), where s≥1 and t≥1 are positive integers, is a variant of the hypercube. In this paper, we show that κ3(EFH(s,t))=s+1 for 3≤s≤t.
Axioms doi: 10.3390/axioms13030193
Authors: María Ángeles Moreno-Frías, José Carlos Rosales
In this work, we introduce the concept of a ratio-covariety: a family R of numerical semigroups that has a minimum, denoted by min(R), is closed under intersection, and satisfies that if S∈R and S≠min(R), then S\{r(S)}∈R, where r(S) denotes the ratio of S. The notion of ratio-covariety allows us to: (1) describe an algorithmic procedure to compute R; (2) prove the existence of the smallest element of R that contains a given set of positive integers; and (3) describe the smallest ratio-covariety that contains a finite set of numerical semigroups. In addition, we apply these results to the study of the ratio-covariety R(F,m)={S | S is a numerical semigroup with Frobenius number F and multiplicity m}.
Axioms doi: 10.3390/axioms13030192
Authors: Roger Arnau, Jose M. Calabuig, Álvaro González, Enrique A. Sánchez Pérez
Index spaces serve as valuable metric models for studying properties relevant to various applications, such as social science or economics. These properties are represented by real Lipschitz functions that describe the degree of association with each element within the underlying metric space. After determining the index value within a given sample subset, the classic McShane and Whitney formulas allow a Lipschitz regression procedure to be performed to extend the index values over the entire metric space. To improve the adaptability of the metric model to specific scenarios, this paper introduces the concept of a composition metric, which involves composing a metric with an increasing, positive and subadditive function ϕ. The results presented here extend well-established results for Lipschitz indices on metric spaces to composition metrics. In addition, we establish the corresponding approximation properties that facilitate the use of this functional structure. To illustrate the power and simplicity of this mathematical framework, we provide a concrete application involving the modeling of livability indices in North American cities.
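The McShane and Whitney formulas mentioned above have a direct one-line form: given index values f(s) on a sample S and a Lipschitz constant L, the extensions are max over s of f(s) − L·d(x,s) and min over s of f(s) + L·d(x,s), respectively. The sketch below applies them over a composition metric φ∘d with φ(t) = √t, which is increasing and subadditive; the sample points and index values are invented for illustration.

```python
import math

# McShane (lower) and Whitney (upper) Lipschitz extensions of an index known
# on a finite sample, computed over the composition metric sqrt(|x - y|).
# The sample and the Lipschitz constant L below are illustrative only.

def comp_dist(x, y, phi=math.sqrt):
    return phi(abs(x - y))

def mcshane(x, sample, L):
    return max(v - L * comp_dist(x, s) for s, v in sample)

def whitney(x, sample, L):
    return min(v + L * comp_dist(x, s) for s, v in sample)

sample = [(0.0, 0.0), (1.0, 0.5), (4.0, 1.0)]  # (point, index value) pairs
L = 1.0  # a Lipschitz constant for this sample w.r.t. comp_dist

for x in (0.0, 2.0):
    lo, hi = mcshane(x, sample, L), whitney(x, sample, L)
    print(x, lo, hi)
```

Both formulas reproduce the sampled values exactly at the sample points, and the McShane extension never exceeds the Whitney extension anywhere, so the pair brackets every Lipschitz extension of the sampled index.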
Axioms doi: 10.3390/axioms13030191
Authors: Huiling Niu, Abdoulaye Ali Youssouf, Binhua Feng
In this paper, we consider blow-up solutions for the fourth-order nonlinear Schrödinger equation with mixed dispersions. We study the dynamical properties of blow-up solutions for this equation, including the H˙γc-concentration and limiting profiles, which extend and improve the existing results in the literature.
Axioms doi: 10.3390/axioms13030190
Authors: Yang Li, Yingmei Xu, Qianhai Xu, Yu Zhang
New high-order weak schemes are proposed and simplified to solve stochastic differential equations with Markovian switching driven by pure jumps (PJ-SDEwMs). Using Malliavin calculus theory, it is rigorously proven that the new numerical schemes can achieve a high-order convergence rate. Some numerical experiments are provided to show the efficiency and accuracy.
Axioms doi: 10.3390/axioms13030189
Authors: Danica Fatić, Dragan Djurčić, Ljubiša D. R. Kočinac
This paper deals with translational regular and rapid variations. By using a new method of proving the Galambos–Bojanić–Seneta type theorems, we prove two theorems of this type for translationally regularly varying and translationally rapidly varying functions and sequences, important objects in the asymptotic analysis of divergent processes. Also, we introduce and study the index functions for translationally regularly varying functions and sequences. For example, we prove that the index function of a translationally regularly varying function is also in the same class of functions.
Axioms doi: 10.3390/axioms13030188
Authors: Zhongtian Dong, Marçal Comajoan Cara, Gopal Ramesh Dahale, Roy T. Forestano, Sergei Gleyzer, Daniel Justice, Kyoungchul Kong, Tom Magorsch, Konstantin T. Matchev, Katia Matcheva, Eyup B. Unlu
This paper presents a comparative analysis of the performance of Equivariant Quantum Neural Networks (EQNNs) and Quantum Neural Networks (QNNs), juxtaposed against their classical counterparts: Equivariant Neural Networks (ENNs) and Deep Neural Networks (DNNs). We evaluate the performance of each network with three two-dimensional toy examples for a binary classification task, focusing on model complexity (measured by the number of parameters) and the size of the training dataset. Our results show that the Z2×Z2 EQNN and the QNN provide superior performance for smaller parameter sets and modest training data samples.
Axioms doi: 10.3390/axioms13030187
Authors: Eyup B. Unlu, Marçal Comajoan Cara, Gopal Ramesh Dahale, Zhongtian Dong, Roy T. Forestano, Sergei Gleyzer, Daniel Justice, Kyoungchul Kong, Tom Magorsch, Konstantin T. Matchev, Katia Matcheva
Models based on vision transformer architectures are considered state-of-the-art when it comes to image classification tasks. However, they require extensive computational resources both for training and deployment. The problem is exacerbated as the amount and complexity of the data increases. Quantum-based vision transformer models could potentially alleviate this issue by reducing the training and operating time while maintaining the same predictive power. Although current quantum computers are not yet able to perform high-dimensional tasks, they do offer one of the most efficient solutions for the future. In this work, we construct several variations of a quantum hybrid vision transformer for a classification problem in high-energy physics (distinguishing photons and electrons in the electromagnetic calorimeter). We test them against classical vision transformer architectures. Our findings indicate that the hybrid models can achieve comparable performance to their classical analogs with a similar number of parameters.
Axioms doi: 10.3390/axioms13030186
Authors: Nanbin Cao, Yue Zhang, Xia Liu
This paper studies a particular type of planar Filippov system that consists of two discontinuity boundaries separating the phase plane into three disjoint regions with different dynamics. This type of system has wide applications in various subjects. As an illustration, a plant disease model and an avian-only model are presented, and their bifurcation scenarios are investigated. By means of the regularization approach, the blowing up method, and the singular perturbation theory, we provide a different way to analyze the dynamics of this type of Filippov system. In particular, the boundary equilibrium bifurcations of such systems are studied. As a consequence, the nonsmooth fold bifurcation becomes a saddle-node bifurcation, while the persistence bifurcation disappears after regularization.
Axioms doi: 10.3390/axioms13030185
Authors: Sydney Day, Zhidong Xiao, Ehtzaz Chaudhry, Matthew Hooker, Xiaoqiang Zhu, Jian Chang, Andrés Iglesias, Lihua You, Jianjun Zhang
How to create realistic shapes by interpolating two known shapes for facial blendshapes has not been investigated in the existing literature. In this paper, we propose a physics-based mathematical model and its analytical solutions to obtain more realistic facial shape changes. To this end, we first introduce the internal force of elastic beam bending into the equation of motion and integrate it with the constraints of two known shapes to develop a physics-based mathematical model represented by dynamic partial differential equations (PDEs). Second, we propose a unified mathematical expression of the external force represented by linear and various nonlinear time-dependent Fourier series, introduce it into the mathematical model to create linear and various nonlinear dynamic deformations of the curves defining a human face model, and derive analytical solutions of the model. Third, we evaluate the realism of the obtained analytical solutions in interpolating two known shapes by comparing the shape changes they produce, and those produced by geometric linear interpolation, with ground-truth shape changes. Among the linear, quadratic, and cubic PDE-based interpolations, quadratic PDE-based interpolation creates the most realistic shape changes, which are more realistic than those obtained with geometric linear interpolation. Finally, we use quadratic PDE-based interpolation to develop a facial blendshape method and demonstrate that the proposed approach is more efficient than numerical physics-based facial blendshapes.
Axioms doi: 10.3390/axioms13030184
Authors: Jorge De Andrés-Sánchez
A highly relevant topic in the actuarial literature is so-called “claim reserving” or “loss reserving”, which involves estimating reserves to be provisioned for pending claims, as they can be deferred over various periods. This explains the proliferation of methods that aim to estimate these reserves and their variability. Regression methods are widely used in this setting. If we model error terms as random variables, the variability of provisions can consequently be modelled stochastically. The use of fuzzy regression methods also allows modelling uncertainty for reserve values using tools from the theory of fuzzy subsets. This study follows this second approach and proposes projecting claim reserves using a generalization of fuzzy numbers (FNs), so-called intuitionistic fuzzy numbers (IFNs), through the use of intuitionistic fuzzy regression. While FNs allow epistemic uncertainty to be considered in variable estimation, IFNs add bipolarity to the analysis by incorporating both positive and negative information regarding actuarial variables. Our analysis is grounded in the ANOVA two-way framework, which is adapted to the use of intuitionistic regression. Similarly, we compare our results with those obtained using deterministic and stochastic chain-ladder methods and those obtained using two-way statistical ANOVA.
Axioms doi: 10.3390/axioms13030183
Authors: Yanlin Li, Meraj Ali Khan, MD Aquib, Ibrahim Al-Dayel, Maged Zakaria Youssef
In this article, we study isotropic submanifolds in locally metallic product space forms. Firstly, we establish the Chen–Ricci inequality for such submanifolds and determine the conditions under which the inequality becomes equality. Additionally, we explore the minimality of Lagrangian submanifolds in locally metallic product space forms, and we apply the result to create a classification theorem for isotropic submanifolds whose mean curvature is constant. More specifically, we have demonstrated that the submanifolds are either a product of two Einstein manifolds with Einstein constants, or they are isometric to a totally geodesic submanifold. To support our findings, we provide several examples.
Axioms doi: 10.3390/axioms13030182
Authors: Xue Zhang, Jing Zhang
In this paper, we study a quasilinear Schrödinger system in which N>3, 2<p<N, 2<q<N, V1(x) and V2(x) are continuous functions, k and ι are positive parameters, and the nonlinear terms satisfy f,h∈C(RN×R2,R). We find a nontrivial solution (u,v) for all ι>ι1(k) by means of the mountain-pass theorem and a change-of-variable argument. The main novelty of the paper is that we extend Δ to Δp and Δq in establishing the existence of a nontrivial solution.
Axioms doi: 10.3390/axioms13030181
Authors: Oktay Duman, Biancamaria Della Vecchia, Esra Erkus-Duman
In the present work, in order to approximate integrable vector-valued functions, we study the Kantorovich version of vector-valued Shepard operators. We also display some applications supporting our results by using parametric plots of a surface and a space curve. Finally, we also investigate how nonnegative regular (matrix) summability methods affect the approximation.
Axioms doi: 10.3390/axioms13030180
Authors: Talip Can Termen, Ozgur Ege
In this work, the notion of digital fiber homotopy is defined and its properties are given. We present some new results on digital fibrations. Moreover, we introduce digital h-fibrations and prove some of their properties. We show that a digital fibration and a digital map p are fiber homotopy equivalent if and only if p is a digital h-fibration. Finally, we explore the relation between digital fibrations and digital h-fibrations.
Axioms doi: 10.3390/axioms13030179
Authors: Xiaoqing Hong, Weiping Zhou, Xiao Wang, Min Li
Supersaturated designs (SSDs) are designs in which the run size is much smaller than the number of main effects to be estimated. They are commonly used to identify a few, but critical, active factors from a large set of potentially active ones while keeping the cost as low as possible. In this regard, the development of new construction and analysis methods has recently seen a rapid increase. In this paper, we provide methods to construct equi- and mixed-level E(f_NOD)-optimal SSDs with a large number of inert factors using the substitution method. The proposed methods are easy to implement, and many new SSDs can be constructed from them. We also study a variable selection method based on the screening-selection network (SSnet) method for regression problems. A real example is analyzed to illustrate that it can effectively identify active factors. Eight different analysis methods are used to analyze the data generated from the proposed designs. Three scenarios with different parameter setups are designed, and the performance of each method is illustrated by extensive simulation studies. Among all these methods, SSnet produces the most satisfactory results in terms of power.
Axioms doi: 10.3390/axioms13030178
Authors: Ilaria Cacciari, Anedio Ranfagni
Experimental results of delay-time measurements in the transfer of modulation between microwave beams, as reported in previous articles, were interpreted in terms of a competition (interference) between two waves, one of which is modulated while the other is a continuous wave (c.w.). The creation of one of these waves was attributed to a saddle-point contribution, while the other was attributed to pole singularities. In this paper, this assumption is justified by a quantitative field-amplitude analysis in order to make the modeling plausible. In particular, two ways of calculating the field amplitudes are considered. They lead to results that are quantitatively markedly different, although qualitatively similar.
Axioms doi: 10.3390/axioms13030177
Authors: Abel Cabrera-Martínez, Juan Manuel Rueda-Vázquez, Jaime Segarra
Let G be a nontrivial connected graph. For a set D⊆V(G), we define D¯=V(G)∖D. The set D is a total outer-independent dominating set of G if |N(v)∩D|≥1 for every vertex v∈V(G) and D¯ is an independent set of G. Moreover, D is a double outer-independent dominating set of G if |N[v]∩D|≥2 for every vertex v∈V(G) and D¯ is an independent set of G. In addition, D is a 2-outer-independent dominating set of G if |N(v)∩D|≥2 for every vertex v∈D¯ and D¯ is an independent set of G. The total, double or 2-outer-independent domination number of G, denoted by γtoi(G), γ×2oi(G) or γ2oi(G), is the minimum cardinality among all total, double or 2-outer-independent dominating sets of G, respectively. In this paper, we first show that for any cactus graph G of order n(G)≥4 with k(G) cycles, γ2oi(G)≤n(G)+l(G)2+k(G), γtoi(G)≤2n(G)−l(G)+s(G)3+k(G) and γ×2oi(G)≤2n(G)+l(G)+s(G)3+k(G), where l(G) and s(G) represent the number of leaves and the number of support vertices of G, respectively. These previous bounds extend three known results given for trees. In addition, we characterize the trees T with γ×2oi(T)=γtoi(T). Moreover, we show that γ2oi(T)≥n(T)+l(T)−s(T)+12 for any tree T with n(T)≥3. Finally, we give a constructive characterization of the trees T that satisfy the equality above.
Axioms doi: 10.3390/axioms13030176
Authors: Assma Leulmi
In this work, we integrate some new approximate functions using the logarithmic penalty method to solve nonlinear optimization problems. Firstly, we determine the direction by Newton’s method. Then, we establish an efficient algorithm to compute the displacement step according to the direction. Finally, we illustrate the superior performance of our new approximate function with respect to the line search one through a numerical experiment on numerous collections of test problems.
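As a minimal illustration of combining a logarithmic penalty with a Newton direction, the one-dimensional sketch below minimizes f(x) = x subject to x ≥ 1 through the barrier B(x) = x − μ ln(x − 1). The test problem, the step-damping rule and the starting point are assumptions for illustration, not the paper's approximate functions or its displacement-step computation.

```python
# Newton iteration on the log-barrier function B(x) = x - mu*ln(x - 1) for
# min f(x) = x subject to x >= 1. The exact minimizer of B is x = 1 + mu,
# which approaches the constrained minimizer x = 1 as mu -> 0.

def newton_barrier(mu, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        grad = 1.0 - mu / (x - 1.0)      # B'(x)
        hess = mu / (x - 1.0) ** 2       # B''(x) > 0 on the feasible set
        step = -grad / hess              # Newton direction
        # Damp the step so the iterate stays strictly feasible (x > 1).
        while x + step <= 1.0:
            step *= 0.5
        x += step
        if abs(grad) < tol:
            break
    return x

x = newton_barrier(mu=0.01, x0=2.0)
print(x)  # close to 1 + mu = 1.01
```

The damping loop is the simplest possible safeguard; choosing the displacement step along the Newton direction more cleverly is precisely where methods such as the one in the abstract improve on a plain line search.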
Axioms doi: 10.3390/axioms13030175
Authors: Liangyu Wang, Hongyu Li
Monge–Ampère equations have important research significance in many fields such as geometry, convex geometry and mathematical physics. In this paper, under some superlinear and sublinear conditions, the existence of nontrivial solutions for a system arising from Monge–Ampère equations with two parameters is investigated based on the Guo–Krasnosel’skii fixed point theorem. In the end, two examples are given to illustrate our theoretical results.
Axioms doi: 10.3390/axioms13030174
Authors: Najla Altwaijry, Silvestru Sever Dragomir, Kais Feki
The main focus of this paper is on establishing inequalities for the norm and numerical radius of various operators applied to a power series with complex coefficients h(λ)=∑_{k=0}^{∞} a_k λ^k and its modified version h_a(λ)=∑_{k=0}^{∞} |a_k| λ^k. The convergence of h(λ) is assumed on the open disk D(0,R), where R is the radius of convergence. Additionally, we explore some operator inequalities related to these concepts. The findings contribute to our understanding of operator behavior in bounded operator spaces and offer insights into norm and numerical radius inequalities.
Axioms doi: 10.3390/axioms13030173
Authors: Ayed R. A. Alanzi, Raouf Fakhfakh, Fatimah Alshahrani
In this article, we provide some new limiting laws related to the free multiplicative law of large numbers and involving free and Boolean additive convolutions. Some examples of these limiting laws are presented within the framework of non-commutative probability theory.
Axioms doi: 10.3390/axioms13030172
Authors: Najla Altwaijry, Silvestru Sever Dragomir, Kais Feki
Consider the power series with complex coefficients h(z)=∑_{k=0}^{∞} a_k z^k and its modified version h_a(z)=∑_{k=0}^{∞} |a_k| z^k. In this article, we explore the application of certain Hölder-type inequalities for deriving various inequalities for operators acting on the aforementioned power series. We establish these inequalities under the assumption of the convergence of h(z) on the open disk D(0,ρ), where ρ denotes the radius of convergence. Additionally, we investigate the norm and numerical radius inequalities associated with these concepts.
Axioms doi: 10.3390/axioms13030171
Authors: Yuting Xu, Qianfan Liu, Yao Chen, Yang Lei, Minghua Yang
In this article, we study the Cauchy problem for the chemotaxis-Navier–Stokes system with consumption and production of chemosignals and a logistic source, where the parameters satisfy χ≠0, ξ≠0, λ>0 and μ>0. The system is a model involving double chemosignals: one is an attractant consumed by the cells themselves, and the other is an attractant or a repellent produced by the cells themselves. We prove the global-in-time existence and uniqueness of the weak solution to the system for a large class of initial data on the whole space R2.
Axioms doi: 10.3390/axioms13030170
Authors: Taiyong Song, Zexian Liu
The subspace minimization conjugate gradient (SMCG) methods proposed by Yuan and Stoer are efficient iterative methods for unconstrained optimization, in which the search directions are generated by minimizing quadratic approximate models of the objective function at the current iterate. Although SMCG methods have shown excellent numerical performance, they have so far been applied only to unconstrained optimization problems. In this paper, we extend the SMCG methods and present an efficient SMCG method for solving nonlinear monotone equations with convex constraints by combining them with the projection technique, where the search direction satisfies the sufficient descent condition. Under mild conditions, we establish the global convergence and the R-linear convergence rate of the proposed method. Numerical experiments indicate that the proposed method is very promising.
Axioms doi: 10.3390/axioms13030169
Authors: Li Zhang, Yang Liu
A class of fractional viscoelastic Kirchhoff equations involving two nonlinear source terms of different signs are studied. Under suitable assumptions on the exponents of nonlinear source terms and the memory kernel, the existence of global solutions in an appropriate functional space is established by a combination of the theory of potential wells and the Galerkin approximations. Furthermore, the asymptotic behavior of global solutions is obtained by a combination of the theory of potential wells and the perturbed energy method.
Axioms doi: 10.3390/axioms13030168
Authors: Inna Kal’chuk
In this editorial, we present “Theory of Functions and Applications”, a Special Issue of Axioms [...]
Axioms doi: 10.3390/axioms13030167
Authors: Mona Aljoufi
The homotopy perturbation method (HPM) is one of the recent fundamental methods for solving differential equations. However, checking the accuracy of this method has been ignored by some authors in the literature. This paper reanalyzes the nonlinear system of ordinary differential equations (ODEs) describing the SIR epidemic model, which has been solved in the literature utilizing the HPM. The main objective of this work is to obtain a highly accurate analytical solution for this model via a direct technique. The proposed technique is mainly based on reducing the given system to a single nonlinear ODE that can be easily solved. Numerical results are conducted to compare our approach with the previous HPM, where the Runge–Kutta numerical method is chosen as a reference solution. The obtained results reveal that the current technique exhibits better accuracy over HPM in the literature. Moreover, some physical properties are introduced and discussed in detail regarding the influence of the transmission rate on the behavior of the SIR model.
]]>Axioms doi: 10.3390/axioms13030166
Authors: Tingzeng Wu Xueji Jiu
Let G be a graph with n vertices and m edges. A(G) and I denote, respectively, the adjacency matrix of G and the n by n identity matrix. For a graph G, the permanent of the matrix I+A(G) is called the permanental sum of G. In this paper, we give a relation between the Hosoya index and the permanental sum of G. This implies that computing the permanental sum is NP-complete. Furthermore, we characterize the graphs with the minimum permanental sum among all graphs with n vertices and m edges, where n+3≤m≤2n−3.
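As a small illustration of the two quantities involved (not the paper's general relation), both the permanental sum per(I+A(G)) and the Hosoya index (the number of matchings, including the empty one) can be computed by brute force for a tiny graph; for the path P4 the two values coincide:

```python
from itertools import permutations, combinations

def permanent(m):
    """Permanent straight from its definition (fine for small n)."""
    n = len(m)
    total = 0
    for sigma in permutations(range(n)):
        p = 1
        for i in range(n):
            p *= m[i][sigma[i]]
        total += p
    return total

def hosoya(edges):
    """Hosoya index: count all matchings, including the empty matching."""
    count = 0
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            verts = [v for e in sub for v in e]
            if len(verts) == len(set(verts)):  # chosen edges are pairwise disjoint
                count += 1
    return count

# Path P4 on vertices 0-1-2-3
n, edges = 4, [(0, 1), (1, 2), (2, 3)]
a = [[0] * n for _ in range(n)]
for u, v in edges:
    a[u][v] = a[v][u] = 1
ipa = [[a[i][j] + (i == j) for j in range(n)] for i in range(n)]  # I + A(G)

print(permanent(ipa), hosoya(edges))  # both equal 5 for P4
```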
]]>Axioms doi: 10.3390/axioms13030165
Authors: Richard Olatokunbo Akinola Ali Shokri Joshua Sunday Daniela Marian Oyindamola D. Akinlabi
In this paper, we compare the performance of two Butcher-based block hybrid methods for the numerical integration of initial value problems. We compare the condition numbers of the linear systems of equations arising from both methods and the absolute errors of the solutions obtained. The results of the numerical experiments illustrate that the better-conditioned method outperformed its less well-conditioned counterpart in terms of absolute errors. In addition, after applying our method to some examples, we found that the absolute errors in this work were smaller than those of a recent study in the literature. Hence, we recommend this method for the numerical solution of stiff and non-stiff initial value problems.
]]>Axioms doi: 10.3390/axioms13030164
Authors: Belakavadi Radhakrishna Srivatsa Kumar Arjun K. Rathie Junesang Choi
A collection of functions organized according to their indexing based on non-negative integers is grouped by the common factor of fixed integer N. This grouping results in a summation of N series, each consisting of functions partitioned according to this modulo N rule. Notably, when N is equal to two, the functions in the series are divided into two subseries: one containing even-indexed functions and the other containing odd-indexed functions. This partitioning technique is widely utilized in the mathematical literature and finds applications in various contexts, such as in the theory of hypergeometric series. In this paper, we employ this partitioning technique to establish four distinct families of summation formulas for 4F3(1) hypergeometric series. Subsequently, we leverage these summation formulas to introduce eight categories of integral formulas. These integrals feature compositions of Beta function-type integrands and 3F2(x) hypergeometric functions. Additionally, we highlight that our primary summation formulas can be used to derive some well-known summation results.
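The partitioning technique itself is easy to demonstrate outside the hypergeometric setting. A sketch of the N = 2 case, applied to the exponential series rather than one of the paper's hypergeometric sums: the even-indexed terms of Σ xⁿ/n! sum to cosh(x) and the odd-indexed ones to sinh(x).

```python
import math

# Partition sum_{n>=0} x^n/n! by residue of n modulo N, here N = 2.
x, terms = 0.7, 40
parts = [0.0, 0.0]
for n in range(terms):
    parts[n % 2] += x**n / math.factorial(n)

print(parts[0] - math.cosh(x))  # even-indexed subseries -> cosh(x)
print(parts[1] - math.sinh(x))  # odd-indexed subseries  -> sinh(x)
```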
]]>Axioms doi: 10.3390/axioms13030163
Authors: Jiabin Zuo Omar Hammouti Said Taarabti
The purpose of this paper is to investigate the existence and multiplicity of nontrivial solutions of the discrete 2n-th order periodic boundary value problem with the φp-Laplacian. To establish these conclusions, we employ variational techniques and contemporary critical point theory. Several known findings are extended and improved. We give an example to show how our key findings can be applied.
]]>Axioms doi: 10.3390/axioms13030162
Authors: Chunli Li Wenchang Chu
By computing definite integrals, we shall examine binomial series of convergence rate ±1/2 and weighted by harmonic-like numbers. Several closed formulae in terms of the Riemann and Hurwitz zeta functions as well as logarithm and polylogarithm functions will be established, including a conjectured one made recently by Z.-W. Sun.
]]>Axioms doi: 10.3390/axioms13030161
Authors: Joyce A. Casimiro Jaume Llibre
In this article, we study the maximum number of limit cycles of discontinuous piecewise differential systems formed by two Hamiltonian systems separated by a straight line. We consider three cases, in which the Hamiltonian systems on the two sides of the discontinuity line both have degree one, two, or three. We obtain that in these three cases, this maximum number is zero, one, and three, respectively. Moreover, we prove that there are discontinuous piecewise differential systems realizing these maximum numbers of limit cycles. Note that this solves the extension of the 16th Hilbert problem, concerning the maximum number of limit cycles, for these three classes of discontinuous piecewise differential systems separated by one straight line and formed by two Hamiltonian systems of degree one, two, or three.
]]>Axioms doi: 10.3390/axioms13030160
Authors: Roy T. Forestano Marçal Comajoan Cara Gopal Ramesh Dahale Zhongtian Dong Sergei Gleyzer Daniel Justice Kyoungchul Kong Tom Magorsch Konstantin T. Matchev Katia Matcheva Eyup B. Unlu
Machine learning algorithms are heavily relied on to understand the vast amounts of data from high-energy particle collisions at the CERN Large Hadron Collider (LHC). The data from such collision events can naturally be represented with graph structures. Therefore, deep geometric methods, such as graph neural networks (GNNs), have been leveraged for various data analysis tasks in high-energy physics. One typical task is jet tagging, where jets are viewed as point clouds with distinct features and edge connections between their constituent particles. The increasing size and complexity of the LHC particle datasets, as well as the computational models used for their analysis, have greatly motivated the development of alternative fast and efficient computational paradigms such as quantum computation. In addition, to enhance the validity and robustness of deep networks, we can leverage the fundamental symmetries present in the data through the use of invariant inputs and equivariant layers. In this paper, we provide a fair and comprehensive comparison of classical graph neural networks (GNNs) and equivariant graph neural networks (EGNNs) and their quantum counterparts: quantum graph neural networks (QGNNs) and equivariant quantum graph neural networks (EQGNN). The four architectures were benchmarked on a binary classification task to classify the parton-level particle initiating the jet. Based on their area under the curve (AUC) scores, the quantum networks were found to outperform the classical networks. However, seeing the computational advantage of quantum networks in practice may have to wait for the further development of quantum technology and its associated application programming interfaces (APIs).
]]>Axioms doi: 10.3390/axioms13030159
Authors: Ho-Hsuan Chang Shiqi Guan Miaowang Zeng Peiyao Chen
Prime period sequences can serve as the fundamental tool for constructing arbitrary composite period sequences. This is a review study of the prime period perfect Gaussian integer sequence (PGIS). When the cyclic group {1,2,…,N−1} can be partitioned into k cosets, where N=kf+1 is an odd prime number, the construction of a degree-(k+1) PGIS can be derived either from matching the flat magnitude spectrum criterion or from making the sequence have an ideal periodic autocorrelation function (PACF). This is a systematic approach to constructing PGISs of prime period N=kf+1, and it is applied to construct PGISs with degrees 1, 2, 3 and 5. However, for degrees larger than 3, matching either the flat magnitude spectrum or the ideal PACF poses the great challenge of solving a system of nonlinear constraint equations. To deal with this problem, correlation and convolution operations can be applied to PGISs of lower degrees to generate new PGISs of degree 4 and other higher degrees, e.g., 6, 7, 10, 11, 12, 14, 20 and 21 in this paper. In this convolution-based scheme, both the degree and the pattern of a PGIS vary and can be indeterminate, which is rather nonsystematic compared with the systematic approach. The combination of the systematic and nonsystematic schemes provides great efficiency for constructing abundant PGISs with various degrees and patterns for the associated applications.
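The two equivalent criteria named above — ideal PACF and flat magnitude spectrum — are easy to check numerically. A minimal sketch with a period-2 Gaussian integer sequence (a toy example chosen for illustration, not one of the paper's constructions):

```python
import cmath

def pacf(s, tau):
    """Periodic autocorrelation of a complex sequence s at shift tau."""
    n = len(s)
    return sum(s[k] * (s[(k + tau) % n]).conjugate() for k in range(n))

def dft_magnitudes(s):
    """Magnitudes of the discrete Fourier transform of s."""
    n = len(s)
    return [abs(sum(s[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                    for k in range(n))) for j in range(n)]

# A period-2 Gaussian integer sequence with ideal (delta-like) PACF:
s = [1 + 1j, 1 - 1j]
print([pacf(s, t) for t in range(2)])  # energy at shift 0, zero at shift 1
print(dft_magnitudes(s))               # flat magnitude spectrum
```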
]]>Axioms doi: 10.3390/axioms13030158
Authors: Salvador Romaguera
In this paper, we introduce and examine the notion of a protected quasi-metric. In particular, we give some of its properties and present several examples of distinguished topological spaces that admit a compatible protected quasi-metric, such as the Alexandroff spaces, the Sorgenfrey line, the Michael line, and the Khalimsky line, among others. Our motivation is due, in part, to the fact that a successful improvement of the classical Banach fixed-point theorem obtained by Suzuki does not admit a natural and full quasi-metric extension, as we have noted in a recent article. Thus, and with the help of this new structure, we obtained a fixed-point theorem in the framework of Smyth-complete quasi-metric spaces that generalizes Suzuki’s theorem. Combining right completeness with partial ordering properties, we also obtained a variant of Suzuki’s theorem, which was applied to discuss types of difference equations and recurrence equations.
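For concreteness, one standard quasi-metric compatible with the Sorgenfrey line mentioned above (an ordinary quasi-metric, not the "protected" refinement introduced in the paper) is d(x,y) = y − x when y ≥ x and 1 otherwise, whose small balls are the half-open intervals [x, x+ε). Its axioms can be spot-checked on a sample:

```python
def d(x, y):
    """A quasi-metric whose balls [x, x+eps) generate the Sorgenfrey topology."""
    return y - x if y >= x else 1.0

pts = [-2.0, -0.5, 0.0, 0.3, 1.7, 5.0]
for x in pts:
    assert d(x, x) == 0                      # d(x, x) = 0
for x in pts:
    for y in pts:
        for z in pts:
            # triangle inequality (holds in general for this d)
            assert d(x, z) <= d(x, y) + d(y, z) + 1e-12

print(d(0.0, 0.5), d(0.5, 0.0))  # 0.5 vs 1.0: d is not symmetric
```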
]]>Axioms doi: 10.3390/axioms13030157
Authors: Yuqi Sun
Let X,Y be two Banach spaces and f:X→Y be a standard coarse isometry. In this paper, we first show a necessary and sufficient condition for the coarse left-inverse operator between general Banach spaces to admit a linearly isometric right inverse. Furthermore, by using the well-known simultaneous extension operator, we obtain an asymptotic stability result when Y is a space of continuous functions. In addition, we also prove that every coarse left-inverse operator admits a linear isometric right inverse without further assumptions when Y is an Lp (1<p<∞) space, or when both X and Y are finite-dimensional spaces of the same dimension. Making use of the results mentioned above, we generalize several results on isometric embeddings and give a stability result for coarse isometries between Banach spaces.
]]>Axioms doi: 10.3390/axioms13030156
Authors: Maryam Iqbal Amjad Ali Hamid Al Sulami Aftab Hussain
This article introduces a novel iterative process, denoted F★, designed for the class of generalized α-nonexpansive mappings. The study establishes strong and weak convergence theorems within the context of Banach spaces, supported by carefully chosen assumptions. The convergence results contribute to the theoretical foundation of iterative processes in functional analysis. The presented framework is applied to address nonlinear integral equations, showcasing the versatility and applicability of the proposed F★ iteration process for the class of generalized α-nonexpansive mappings. Additionally, the article includes numerical examples that not only validate the theoretical findings but also provide insights into the practical utility of the developed methodology.
]]>Axioms doi: 10.3390/axioms13030155
Authors: Saravanan Gunasekar Baskaran Sudharsanan Musthafa Ibrahim Teodor Bulboacă
The purpose of this research is to unify and extend the study of the well-known concept of coefficient estimates for some subclasses of analytic functions. We define the new subclass A4r,s of analytic functions related to the four-leaf domain, to increase the adaptability of our investigation. The initial findings are the bound estimates for the coefficients |an|, n=2,3,4,5, among which the bound of |a2| is sharp. Also, we include the sharp-function illustration. Additionally, we obtain the upper-bound estimate for the second Hankel determinant for this subclass as well as those for the Fekete–Szegő functional. Finally, for these subclasses, we provide an estimation of the Krushkal inequality for the function class A4r,s.
]]>Axioms doi: 10.3390/axioms13030154
Authors: Robertas Alzbutas Gintautas Dundulis
A probability-based approach, combining deterministic and probabilistic methods, was developed for analyzing building and component failures, which are especially crucial for complex structures like nuclear power plants. This method links finite element and probabilistic software to assess structural integrity under static and dynamic loads. This study uses NEPTUNE software, which is validated, for a deterministic transient analysis and ProFES software for probabilistic models. In a case study, deterministic analyses with varied random variables were transferred to ProFES for probabilistic analyses of piping failure and wall damage. A Monte Carlo Simulation, First-Order Reliability Method, and combined methods were employed for probabilistic analyses under severe transient loading, focusing on a postulated accident at the Ignalina Nuclear Power Plant. The study considered uncertainties in material properties, component geometry, and loads. The results showed the Monte Carlo Simulation method to be conservative for high failure probabilities but less so for low probabilities. The Response Surface/Monte Carlo Simulation method explored the impact load–failure probability relationship. Given the uncertainties in material properties and loads in complex structures, a deterministic analysis alone is insufficient. Probabilistic analysis is imperative for extreme loading events and credible structural safety evaluations.
]]>Axioms doi: 10.3390/axioms13030153
Authors: Mani Parimala Saeid Jafari
The theory of spherical linear Diophantine fuzzy sets (SLDFS) boasts several advantages over existing fuzzy set (FS) theories such as Picture fuzzy sets (PFS), spherical fuzzy sets (SFS), and T-spherical fuzzy sets (T-SFS). Notably, SLDFS offers a significantly larger portrayal space for acceptable triplets, enabling it to encompass a wider range of ambiguous and uncertain knowledge data sets. This paper delves into the regularity of spherical linear Diophantine fuzzy graphs (SLDFGs), establishing their fundamental concepts. We provide a geometrical interpretation of SLDFGs within a spherical context and define the operations of complement, union, and join, accompanied by illustrative examples. Additionally, we introduce the novel concept of a spherical linear Diophantine isomorphic fuzzy graph and showcase its application through a social network scenario. Furthermore, we explore how this amplified depiction space can be utilized for the study of various graph theoretical topics.
]]>Axioms doi: 10.3390/axioms13030152
Authors: Refah Alotaibi Mazen Nassar Ahmed Elshahhat
Studies that are ongoing for an extended duration may fail to gather enough data; to get around this difficulty, a newly improved adaptive Type-II progressive censoring scheme has been offered, which extends several well-known multi-stage censoring plans. This work, which takes this scheme into account, focuses on some conventional and Bayesian estimation tasks for parameters and reliability indicators, where the unit log-log model acts as the base distribution. The point and interval estimations of the various parameters are first examined from a classical standpoint. In addition to the conventional approach, the Bayesian methodology is examined to derive credible intervals besides the Bayesian point estimates by leveraging the squared error loss function and the Markov chain Monte Carlo technique. Under varied settings, a simulation study is carried out to compare the standard and Bayesian estimates. To implement the proposed procedures, two actual data sets are analyzed. Finally, multiple precision standards are considered to pick the optimal progressive censoring scheme.
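One ingredient named above — the squared error loss, under which the Bayes point estimate is the posterior mean — can be illustrated on a much simpler conjugate model than the unit log-log distribution used in the paper; the exponential model, prior, and numbers below are assumptions for illustration only:

```python
import random

rng = random.Random(1)
true_rate = 2.0
data = [rng.expovariate(true_rate) for _ in range(500)]  # synthetic lifetimes

# Conjugate Gamma(a, b) prior on an exponential rate: the posterior is
# Gamma(a + n, b + sum(x)), and under squared-error loss the Bayes point
# estimate is the posterior mean.
a, b = 1.0, 1.0
n, s = len(data), sum(data)
bayes_point = (a + n) / (b + s)
print(bayes_point)  # close to the true rate 2.0
```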
]]>Axioms doi: 10.3390/axioms13030151
Authors: Ali Akgül J. Alberto Conejero
A three-differential-equation mathematical model is presented for the degradation of a phenol and p-cresol mixture in a continuously stirred bioreactor. The stability analysis of the model’s equilibrium points, as established by the study, is covered. Additionally, we used three alternative kernels to analyze the model with fractal–fractional derivatives, and we investigated the effects of the fractal dimension and fractional order. We have developed highly efficient numerical techniques for the concentrations of biomass, phenol, and p-cresol. Lastly, numerical simulations are used to illustrate the accuracy of the suggested method.
]]>Axioms doi: 10.3390/axioms13030150
Authors: Xiaolong Shi Ruiqi Cai Ali Asghar Talebi Masomeh Mojahedfar Chanjuan Liu
Vague influence graphs (VIGs) are well-articulated, useful and practical tools for managing the uncertainty present in real-life problems involving ambiguous facts and figures. A VIG gives information about the effect of a vertex on an edge. In this paper, we present the domination concept for VIGs. Some notions and results on domination in vague graphs (VGs) are also developed for VIGs. We define some basic notions in VIGs such as the walk, path, strength of an In-pair, strong In-pair, In-cut vertex, In-cut pair (CP), complete VIG, and the strong pair domination number in a VIG. Finally, an application of domination to the illegal drug trade is presented.
]]>Axioms doi: 10.3390/axioms13030149
Authors: Giovanni Nastasi
In this editorial, we present the Special Issue of the scientific journal Axioms entitled “Mathematical Models and Simulations” [...]
]]>Axioms doi: 10.3390/axioms13030148
Authors: Gianni Bosi Roberto Daris Magalì Zuanon
Chipman contended, in stark contrast to the conventional view, that utility is not a real number but a vector, and that it is inherently lexicographic in nature. On the other hand, in recent years continuous multi-utility representations of a preorder on a topological space, which have proved to be the best kind of continuous representation, have been deeply studied. In this paper, we first state a general result which guarantees, for every preordered topological space, the existence of a lexicographic order-embedding of the Chipman type. Then, we combine the Chipman approach and the continuous multi-utility approach by stating a theorem that guarantees, under certain general conditions, the coexistence of these two kinds of continuous representations.
]]>Axioms doi: 10.3390/axioms13030147
Authors: Miao Wang Yaping Wang Lin Hu Linfei Nie
Taking into account the effects of the immune response, time delays, and complexity on HIV-1 transmission, a multiscale AIDS/HIV-1 model is formulated in this paper. The multiscale model is described by a within-host fast time model with intracellular delay and immune delay, and a between-host slow time model with latency delay. The dynamics of the fast time model are analyzed, including the stability of equilibria and the properties of Hopf bifurcation. Further, for the coupled slow time model without an immune response, the basic reproduction number R0h is defined, which determines whether the model may have zero, one, or two positive equilibria under different conditions. This implies that the slow time model demonstrates more complex dynamic behaviors, including saddle-node bifurcation, backward bifurcation, and Hopf bifurcation. For the other case, that is, the coupled slow time model with an immune response, the threshold dynamics, based on the basic reproduction number R˜0h, is rigorously investigated. More specifically, if R˜0h<1, the disease-free equilibrium is globally asymptotically stable; if R˜0h>1, the model exhibits a unique endemic equilibrium that is globally asymptotically stable. With regard to the coupled slow time model with an immune response and a stable periodic solution, the basic reproduction number R0 is derived, which serves as a threshold value determining whether the disease will die out or lead to periodic oscillations in its prevalence. The research results suggest that the disease is more easily controlled when hosts have an extensive immune response and the time required for new immune particles to emerge in response to antigenic stimulation is within a certain range. Finally, numerical simulations are presented to validate the main results and provide some recommendations for controlling the spread of HIV-1.
]]>Axioms doi: 10.3390/axioms13030146
Authors: Speranta Cecilia Bolea Mironela Pirnau Silviu-Ioan Bejinariu Vasile Apopei Daniela Gifu Horia-Nicolai Teodorescu
The article extends the theoretical and applicative analysis of Zipf’s law. We are concerned with a set of properties of Zipf’s law that derive directly from the power law expression and from the discrete nature of the objects to which the law is applied, when the objects are words, lemmas, and the like. We also search for variations of Zipf’s law that can help explain the noisy results empirically reported in the literature and the departures of the empirically obtained nonlinear graph from the theoretical linear one, with the variants analyzed differing from Mandelbrot and lognormal distributions. A problem of interest that we deal with is that of mixtures of populations obeying Zipf’s law. The last problem has relevance in the analysis of texts with words with various etymologies. Computational aspects are also addressed.
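The baseline against which such noisy departures are measured is the exact power law f(r) = C/rˢ, which is linear on a log-log plot with slope −s. A short sketch (synthetic rank-frequency data, not the paper's corpora) recovers the exponent by least squares:

```python
import math

# Rank-frequency data following an exact Zipf-type power law f(r) = C / r**s.
s_true, c, rmax = 1.2, 1000.0, 200
ranks = range(1, rmax + 1)
freqs = [c / r**s_true for r in ranks]

# Least-squares slope of log f against log r recovers -s exactly here,
# because the synthetic data contain no noise.
xs = [math.log(r) for r in ranks]
ys = [math.log(f) for f in freqs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(slope)  # -1.2 up to rounding
```

Empirical word counts deviate from this straight line, which is exactly the phenomenon the article analyzes.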
]]>Axioms doi: 10.3390/axioms13030145
Authors: Andrius Geštautas Antanas Laurinčikas
Let P be the set of generalized prime numbers, and ζP(s), s=σ+it, denote the Beurling zeta-function associated with P. In the paper, we consider the approximation of analytic functions by using shifts ζP(s+iτ), τ∈R. We assume the classical axioms for the number of generalized integers and the mean of the generalized von Mangoldt function, the linear independence of the set {logp:p∈P}, and the existence of a bounded mean square for ζP(s). Under the above hypotheses, we obtain the universality of the function ζP(s). This means that the set of shifts ζP(s+iτ) approximating a given analytic function defined on a certain strip σ^<σ<1 has a positive lower density. This result opens a new chapter in the theory of Beurling zeta functions. Moreover, it supports the Linnik–Ibragimov conjecture on the universality of Dirichlet series. For the proof, a probabilistic approach is applied.
]]>Axioms doi: 10.3390/axioms13030144
Authors: Manuel Arana-Jiménez Julio Lozano-Ramírez M. Carmen Sánchez-Gil Atefeh Younesi Sebastián Lozano
This paper proposes a novel slacks-based interval DEA approach that computes interval targets, slacks, and crisp inefficiency scores. It uses interval arithmetic and requires solving a mixed-integer linear program. The corresponding super-efficiency formulation to discriminate among the efficient units is also presented. We also provide a case study of its application to sustainable tourism in the Mediterranean region, assessing the sustainable tourism efficiency of twelve Mediterranean regions to validate the proposed approach. The inputs and outputs cover the three sustainability dimensions and include GHG emissions as an undesirable output. Three regions were found to be inefficient, and the corresponding inputs and output improvements were computed. A total rank of the regions was also obtained using the super-efficiency model.
]]>Axioms doi: 10.3390/axioms13030143
Authors: Sunil Kumar Janak Raj Sharma Lorentz Jäntschi
Nonlinear equations are frequently encountered in many areas of applied science and engineering, and efficient numerical methods are required to solve them. To ensure quick and precise root approximation, this study presents derivative-free iterative methods for finding multiple zeros with an ideal fourth-order convergence rate. Furthermore, the study explores applications of the methods in both real-life and academic contexts. In particular, we examine the convergence of the methods by applying them to several problems, namely the Van der Waals equation of state, Planck’s law of radiation, the Manning equation for isentropic supersonic flow, and some academic problems. Numerical results reveal that the proposed derivative-free methods are more efficient and consistent than existing methods.
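The paper's fourth-order multiple-zero schemes are not reproduced in the abstract; as a baseline, the classical Steffensen iteration shows what "derivative-free" means here — the derivative f′(x) of Newton's method is replaced by the difference quotient (f(x + f(x)) − f(x))/f(x):

```python
def steffensen(f, x, iterations=30, tol=1e-14):
    """Classical derivative-free Steffensen iteration (second order on simple roots).

    Note: the paper's methods are fourth-order and target multiple zeros;
    this is only the textbook derivative-free baseline.
    """
    for _ in range(iterations):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx
        if denom == 0:
            break
        x = x - fx * fx / denom
    return x

root = steffensen(lambda x: x * x - 2.0, 1.5)
print(root)  # approximates sqrt(2)
```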
]]>Axioms doi: 10.3390/axioms13030142
Authors: Mateus Alberto Dorna de Oliveira Ferreira Laura Cozzi Ribeiro Henrique Silva Schuffner Matheus Pereira Libório Petr Iakovlevitch Ekel
This paper presents the results of research analyzing models of multi-attribute decision-making based on fuzzy preference relations. We consider the construction of multi-attribute models that deal with quantitative information concomitantly with qualitative information based on experts’ knowledge. Human preferences may be represented within fuzzy preference relations or by applying diverse other preference formats. Considering this, so-called transformation functions reduce any of these preference formats to fuzzy preference relations. This paper’s results can be applied independently or as part of a general approach to solving a wide class of problems with fuzzy coefficients, as well as within the framework of a general scheme of multi-criteria decision-making under conditions of uncertainty. The considered techniques for fuzzy preference modeling are directed at assessing, comparing, choosing, prioritizing, and/or ordering alternatives. These techniques have served to develop a computing system for multi-attribute decision-making. It has been implemented in the C# programming language, utilizing the “.NET” framework. The computing system allows one to represent decision-makers’ preferences in one of five preference formats. These formats and quantitative estimates are reduced to nonreciprocal fuzzy preference relations, providing homogeneous preference information for decision procedures. This paper’s results have a general character and were applied to analyze power engineering problems.
]]>Axioms doi: 10.3390/axioms13030141
Authors: Yanshan Chen Xingfa Zhang Chunliang Deng Yujiao Liu
The portmanteau test is an effective tool for testing the goodness of fit of models. Motivated by the fact that high-frequency data can improve the estimation accuracy of models, a modified portmanteau test using high-frequency data is proposed for ARCH-type models in this paper. Simulation results show that the empirical size and power of the modified test statistics of the model using high-frequency data are better than those of the daily model. Three stock indices (CSI 300, SSE 50, CSI 500) are taken as examples to illustrate the practical application of the test.
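For reference, the classical portmanteau statistic that such modifications build on is the Ljung–Box Q = n(n+2) Σ_{k=1}^{h} ρ̂ₖ²/(n−k), which grows large when residuals are autocorrelated. A pure-Python sketch on synthetic series (a textbook illustration, not the paper's high-frequency modification):

```python
import random

def ljung_box_q(y, h):
    """Classical Ljung-Box portmanteau statistic at maximum lag h."""
    n = len(y)
    ybar = sum(y) / n
    denom = sum((v - ybar) ** 2 for v in y)
    q = 0.0
    for k in range(1, h + 1):
        rk = sum((y[t] - ybar) * (y[t - k] - ybar) for t in range(k, n)) / denom
        q += rk * rk / (n - k)
    return n * (n + 2) * q

rng = random.Random(0)
iid = [rng.gauss(0, 1) for _ in range(300)]      # white noise
ar = [0.0]                                       # strongly autocorrelated AR(1)
for _ in range(299):
    ar.append(0.9 * ar[-1] + rng.gauss(0, 1))

print(ljung_box_q(iid, 10), ljung_box_q(ar, 10))
# the autocorrelated series yields a far larger Q
```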
]]>Axioms doi: 10.3390/axioms13030139
Authors: Najiyah Omar Stefano Serra-Capizzano Belgees Qaraad Faizah Alharbi Osama Moaaz Elmetwally M. Elabbasy
In the current paper, we aim to study the oscillatory behavior of a new class of third-order differential equations. In the present study, we are interested in a better understanding of the relationships between the solutions and their derivatives. The recursive nature of these relationships enables us to obtain new criteria that ensure the oscillation of all solutions of the studied equation. In comparison with previous studies, our results are more general and include models in a wider range of applications. Furthermore, our findings are also significant because no additional restrictive conditions are required. The presented examples illustrate the significance of the results.
]]>Axioms doi: 10.3390/axioms13030140
Authors: Indranil Ghosh Hon Keung Tony Ng Kipum Kim Seong W. Kim
In many real-life scenarios, one variable is observed only if the other concomitant variable or the set of concomitant variables (in the multivariate scenario) is truncated from below, from above, or from both sides. Hidden truncation models have been applied to analyze data when bivariate or multivariate observations are subject to some form of truncation. While the statistical inference for hidden truncation models (truncation from above) under the frequentist and the Bayesian paradigms has been adequately discussed in the literature, the estimation of a two-sided hidden truncation model under the Bayesian framework has not yet been discussed. In this paper, we consider the Bayesian inference for a general two-sided hidden truncation model based on the Arnold–Strauss bivariate exponential distribution. In addition, a Bayesian model selection approach based on the Bayes factor to select between models without truncation, with truncation from below, from above, and two-sided truncation is also explored. An extensive simulation study is carried out for varying parameter choices under the conjugate prior set-up. For illustrative purposes, a real-life dataset is re-analyzed to demonstrate the applicability of the proposed methodology.
]]>Axioms doi: 10.3390/axioms13030138
Authors: Felix M. Lev
We solve the particle-antiparticle and cosmological constant problems proceeding from quantum theory, which postulates that: various states of the system under consideration are elements of a Hilbert space H with a positive definite metric; each physical quantity is defined by a self-adjoint operator in H; symmetry at the quantum level is defined by a representation of a real Lie algebra A in H such that the representation operator of any basis element of A is self-adjoint. These conditions guarantee the probabilistic interpretation of quantum theory. We explain that in the approaches to solving these problems that are described in the literature, not all of these conditions have been met. We argue that the fundamental objects in particle theory are not elementary particles and antiparticles but objects described by irreducible representations (IRs) of the de Sitter (dS) algebra. One might ask why, then, experimental data give the impression that particles and antiparticles are fundamental and that there are conserved additive quantum numbers (electric charge, baryon quantum number and others). The reason is that, at the present stage of the universe, the contraction parameter R from the dS to the Poincare algebra is very large and, in the formal limit R→∞, one IR of the dS algebra splits into two IRs of the Poincare algebra corresponding to a particle and its antiparticle with the same masses. The problem of why the quantities (c,ℏ,R) are as they are does not arise because they are contraction parameters for transitions from more general Lie algebras to less general ones. Then the problem of the baryon asymmetry of the universe does not arise. At the present stage of the universe, the phenomenon of cosmological acceleration (PCA) is described without uncertainties as an inevitable kinematical consequence of quantum theory in the semiclassical approximation. In particular, it is not necessary to involve dark energy, the physical meaning of which is a mystery.
In our approach, background space and its geometry are not used and R has nothing to do with the radius of dS space. In semiclassical approximation, the results for the PCA are the same as in General Relativity if Λ=3/R2, i.e., Λ>0 and there is no freedom for choosing the value of Λ.
]]>Axioms doi: 10.3390/axioms13030137
Authors: Song-Kyoo (Amang) Kim
This paper targets the area of optimizing machine learning (ML) training data by constructing compact data. The methods of optimizing ML training have improved and become a part of artificial intelligence (AI) system development. Compact data learning (CDL) is an alternative practical framework to optimize a classification system by reducing the size of the training dataset. CDL originated from compact data design, which provides the best assets without handling complex big data. CDL is a dedicated framework for improving the speed of the machine learning training phase without affecting the accuracy of the system. The performance of an ML-based arrhythmia detection system and its variants with CDL maintained the same statistical accuracy. ML training with CDL could be maximized by applying an 85% reduced input dataset, which indicated that a trained ML system could have the same statistical accuracy by only using 15% of the original training dataset.
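CDL's actual selection rules are not given in the abstract; the sketch below only illustrates the headline effect — that a well-chosen small fraction (here 15%) of training data can preserve accuracy — using a toy centroid classifier on synthetic two-class data. All names, numbers, and the subsampling rule are assumptions for illustration, not the CDL algorithm:

```python
import random

rng = random.Random(42)

def make_point(label):
    # two well-separated classes in the plane
    cx = 0.0 if label == 0 else 4.0
    return (cx + rng.gauss(0, 1), rng.gauss(0, 1), label)

train = [make_point(i % 2) for i in range(400)]
holdout = [make_point(i % 2) for i in range(200)]

def centroid_classifier(data):
    """Nearest-centroid classifier trained on (x, y, label) triples."""
    cents = {}
    for lab in (0, 1):
        pts = [(x, y) for x, y, l in data if l == lab]
        cents[lab] = (sum(p[0] for p in pts) / len(pts),
                      sum(p[1] for p in pts) / len(pts))
    def predict(x, y):
        return min(cents, key=lambda l: (x - cents[l][0])**2 + (y - cents[l][1])**2)
    return predict

def accuracy(predict, data):
    return sum(predict(x, y) == l for x, y, l in data) / len(data)

full = centroid_classifier(train)
small = centroid_classifier(train[: len(train) * 15 // 100])  # 15% of the data
print(accuracy(full, holdout), accuracy(small, holdout))  # both high, nearly equal
```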
]]>Axioms doi: 10.3390/axioms13030136
Authors: Yong Fang Xue Sang Manwai Yuen Yong Zhang
A time scale is a special measure chain that can unify continuous and discrete spaces, enabling the construction of integrable equations. In this paper, with the Lax operator generated by the displacement operator, N-dimensional lattice integrable systems on the time scale are given by the R-matrix approach. The recursion operators of the lattice systems are derived on the time scale. Finally, two integrable hierarchies of the discrete chain with a bi-Hamiltonian structure are obtained. In particular, we give the structure of two-field and four-field systems.
]]>Axioms doi: 10.3390/axioms13030135
Authors: Kadir Kanat Selin Erdal
This article is concerned with the Durrmeyer-type generalization of Szász operators, including confluent Appell polynomials and their approximation properties. Also, the rate of convergence of the confluent Durrmeyer operators is found by using the modulus of continuity and Peetre’s K-functional. Then, we show that, under special choices of A(t), the newly constructed operators reduce to confluent Hermite polynomials and confluent Bernoulli polynomials, respectively. Finally, we present a graphical comparison of the newly constructed operators with the Durrmeyer-type Szász operators.
]]>Axioms doi: 10.3390/axioms13020134
Authors: Jack C. Straton
The Bessel function of the first kind J_N(kx) is expanded in a Fourier–Legendre series, as is the modified Bessel function of the first kind I_N(kx). The purpose of these expansions in Legendre polynomials was not an attempt to rival established numerical methods for calculating Bessel functions but to provide a form for J_N(kx) useful for analytical work in the area of strong laser fields, where analytical integration over scattering angles is essential. Despite their primary purpose, one can easily truncate the series at 21 terms to provide 33-digit accuracy that matches the IEEE extended precision in some compilers. The analytical theme is furthered by showing that infinite series of like-powered contributors (involving 1F2 hypergeometric functions) extracted from the Fourier–Legendre series may be summed, having values that are inverse powers of the eight primes, 1/(2^i 3^j 5^k 7^l 11^m 13^n 17^o 19^p), multiplying powers of the coefficient k.
]]>Axioms doi: 10.3390/axioms13020133
Authors: Ghaliah Alhamzi Aafrin Gouri Badr Saad T. Alkahtani Ravi Shanker Dubey
In this study, we present the generalized form of the higher-order nonlinear fractional Bratu-type equation. In this generalization, we deal with a generalized fractional derivative, which is quite useful from an application point of view. Furthermore, some special cases of the generalized fractional Bratu equation are recognized and examined. To solve these nonlinear differential equations of fractional order, we employ the homotopy perturbation transform method. This work presents a useful computational method for solving these equations and advances our understanding of them. We also plot some numerical outcomes to show the efficiency of the obtained results.
]]>Axioms doi: 10.3390/axioms13020132
Authors: Xiangyang Han Senlin Wu Longzhen Zhang
In Chuanming Zong’s program to attack Hadwiger’s covering conjecture, which is a longstanding open problem from Convex and Discrete Geometry, it is essential to estimate covering functionals of convex bodies effectively. Recently, He et al. and Yu et al. provided two deterministic global optimization algorithms having high computational complexity for this purpose. Since satisfactory estimations of covering functionals will be sufficient in Zong’s program, we propose a stochastic global optimization algorithm based on CUDA and provide an error estimation for the algorithm. The accuracy of our algorithm is tested by comparing numerical and exact values of covering functionals of convex bodies including the Euclidean unit disc, the three-dimensional Euclidean unit ball, the regular tetrahedron, and the regular octahedron. We also present estimations of covering functionals for the regular dodecahedron and the regular icosahedron.
]]>Axioms doi: 10.3390/axioms13020131
Authors: Khellaf Ould Melha Abdelhamid Mohammed Djaouti Muhammad Amer Latif Vaijanath L. Chinchane
This paper focuses on studying the uniqueness of the mild solution for an abstract fractional differential equation. We use Banach’s fixed point theorem to prove this uniqueness. Additionally, we examine the stability properties of the equation using Ulam’s stability. To analyze these properties, we consider the involvement of Hadamard fractional derivatives. Throughout this study, we put significant emphasis on the role and properties of resolvent operators. Furthermore, we investigate Ulam-type stability by providing examples of partial fractional differential equations that incorporate Hadamard derivatives.
]]>Axioms doi: 10.3390/axioms13020130
Authors: Pasquale Nardone Giorgio Sonnino
We propose a new tool for estimating the complexity of a time series: the entropy of difference (ED). The method is based solely on the sign of the difference between neighboring values in a time series. This makes it possible to describe the signal as efficiently as previously proposed parameters, such as permutation entropy (PE) or modified permutation entropy (mPE). Firstly, this method reduces the size of the sample necessary to estimate the parameter value, and secondly, it enables the use of the Kullback–Leibler divergence to estimate the “distance” between the time series data and random signals.
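The sign-of-difference idea is simple enough to sketch. Below is a minimal, hypothetical Python illustration (not the authors' implementation): encode each consecutive difference as a sign bit, count words of a fixed length, and take the Shannon entropy of the word distribution. Treating non-increases as 0 is a simplification of ours.

```python
import math
from collections import Counter

def entropy_of_difference(series, word_len=3):
    """Shannon entropy (bits) of sign-of-difference words of length word_len.
    Sign is 1 for an increase, 0 otherwise (illustrative simplification)."""
    signs = [1 if b > a else 0 for a, b in zip(series, series[1:])]
    words = [tuple(signs[i:i + word_len]) for i in range(len(signs) - word_len + 1)]
    counts = Counter(words)
    n = len(words)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A monotone series yields entropy 0, while richer sign patterns push the value toward word_len bits.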
]]>Axioms doi: 10.3390/axioms13020129
Authors: Nada A. M. Alshomrani Abdelhalim Ebaid Faten Aldosari Mona D. Aljoufi
The existence of the advance parameter in a scalar differential equation prevents the application of the well-known standard methods used for solving classical ordinary differential equations. A simple procedure is introduced in this paper to remove the advance parameter from a special kind of first-order scalar differential equation. The suggested approach transforms the given first-order scalar differential equation into an equivalent second-order ordinary differential equation (ODE) without the advance parameter. Using this method, we are able to construct the exact solution of both the transformed model and the given original model. The exact solution is obtained in a wave form with specified amplitude and phase. Furthermore, several special cases are investigated at certain values/relationships of the involved parameters. It is shown that the exact solution in the absence of the advance parameter reduces to the corresponding solution in the literature. In addition, it is shown that the current model admits various kinds of solutions, such as constant solutions, polynomial solutions, and periodic solutions under certain constraints on the included parameters.
]]>Axioms doi: 10.3390/axioms13020128
Authors: Houria Selatnia Abdelhamid Ayadi Imad Rezzoug
In this paper, we analyze the problem of identifying the amount of pollutant discharged by each source in a heat system when the dynamics of the state are governed by a parameterized unknown operator. To this end, we introduce the notion of an average sentinel. The decomposition method is used to solve the equation of this problem, the gradient method is used to calculate the averaged control, and the combination of the two methods is used to estimate the pollution terms. A numerical example is given to confirm this result.
]]>Axioms doi: 10.3390/axioms13020127
Authors: Javad Tayyebi Mihai-Lucian Rîtan Adrian Marius Deaconu
In this paper, the generalized widest path problem (or generalized maximum capacity problem) is studied. This problem is denoted by the GWPP. The classical widest path problem is to find a path from a source (s) to a sink (t) with the highest capacity among all possible s-t paths. The GWPP takes into account the presence of loss/gain factors on arcs as well. The GWPP aims to find an s-t path considering the loss/gain factors while satisfying the capacity constraints. For solving the GWPP, three strongly polynomial time algorithms are presented. Two algorithms only work in the case of losses. The first one is less efficient than the second one on a CPU, but it proves to be more efficient on large networks if it is parallelized on GPUs. The third algorithm is able to deal with the more general case of losses/gains on arcs. An example is considered to illustrate how each algorithm works. Experiments on large networks are conducted to compare the efficiency of the proposed algorithms.
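For orientation, the classical widest path problem (without the loss/gain factors that the GWPP adds) admits a short Dijkstra-style sketch: instead of summing edge weights, each path is scored by its bottleneck (minimum) capacity, maximized over paths. The graph encoding below is hypothetical, not taken from the paper:

```python
import heapq

def widest_path(graph, s, t):
    """Max bottleneck capacity over all s-t paths (classical widest path).
    graph: {node: [(neighbor, capacity), ...]}. Returns 0.0 if t unreachable."""
    best = {s: float('inf')}          # best bottleneck found so far per node
    heap = [(-float('inf'), s)]       # max-heap via negated bottlenecks
    while heap:
        neg_w, u = heapq.heappop(heap)
        w = -neg_w
        if u == t:
            return w
        if w < best.get(u, 0):        # stale heap entry
            continue
        for v, cap in graph.get(u, []):
            nw = min(w, cap)          # bottleneck along the extended path
            if nw > best.get(v, 0):
                best[v] = nw
                heapq.heappush(heap, (-nw, v))
    return 0.0
```

The only change from shortest-path Dijkstra is replacing sum/less-than with min/greater-than, which is why strongly polynomial bounds carry over.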
]]>Axioms doi: 10.3390/axioms13020126
Authors: Huifu Xia Yunfei Peng Peng Zhang
In this paper, a class of nonlinear ordinary differential equations with impulses at variable times is considered. The existence and uniqueness of the solution are given. At the same time, modifying the classical definitions of continuous dependence and Gâteaux differentiability, some results on the continuous dependence and Gâteaux differentiability of the solution relative to the initial value are also presented in a new topological sense. For the autonomous impulsive system, the periodicity of the solution is given. As an application, the properties of the solution for a type of controlled nonlinear ordinary differential equation with impulses at variable times are obtained. These results are a foundation for studying optimal control problems of systems governed by differential equations with impulses at variable times.
]]>Axioms doi: 10.3390/axioms13020125
Authors: Gadir Alomair Razik Ridzuan Mohd Tajuddin Hassan S. Bakouch Amal Almohisen
Count data consists of both observed and unobserved events. The analysis of count data often encounters overdispersion, where traditional Poisson models may not be adequate. In this paper, we introduce a tractable one-parameter mixed Poisson distribution, which combines the Poisson distribution with the improved second-degree Lindley distribution. This distribution, called the Poisson-improved second-degree Lindley distribution, is capable of effectively modeling standard count data with overdispersion. However, if the frequency of the unobserved events is unknown, the proposed distribution cannot be directly used to describe the events. To address this limitation, we propose a modification by truncating the distribution at zero. This results in a tractable zero-truncated distribution that encompasses all types of dispersion. Due to the unknown frequency of unobserved events, the population size as a whole becomes unknown and requires estimation. To estimate the population size, we develop a Horvitz–Thompson-like estimator utilizing the truncated distribution. Both the untruncated and truncated distributions exhibit desirable statistical properties. The estimators for both distributions, as well as for the population size, are asymptotically unbiased and consistent. The current study demonstrates that both the truncated and untruncated distributions adequately explain the considered medical datasets, which are the number of dicentric chromosomes after being exposed to different doses of radiation and the number of positive Salmonella. Moreover, the proposed population size estimator yields reliable estimates.
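The Horvitz–Thompson-like idea — estimate the unseen-zero fraction from the truncated model and set N̂ = n/(1 − P(count = 0)) — can be sketched with an ordinary zero-truncated Poisson standing in for the paper's Poisson-improved second-degree Lindley distribution. All data and parameters below are illustrative:

```python
import math

def ztp_lambda_mle(sample_mean, iters=100):
    """MLE of lambda for a zero-truncated Poisson: solves
    lambda / (1 - exp(-lambda)) = sample_mean by fixed-point iteration.
    Requires sample_mean > 1."""
    lam = sample_mean
    for _ in range(iters):
        lam = sample_mean * (1.0 - math.exp(-lam))
    return lam

def ht_population_size(counts):
    """Horvitz-Thompson-like estimator N-hat = n / (1 - P(0)),
    with P(0) = exp(-lambda-hat) under the fitted Poisson."""
    n = len(counts)
    lam = ztp_lambda_mle(sum(counts) / n)
    p0 = math.exp(-lam)
    return n / (1.0 - p0)
```

With ten observed units averaging two events each, the fitted zero probability inflates the observed count of 10 to roughly 12.5 total units, seen and unseen.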
]]>Axioms doi: 10.3390/axioms13020123
Authors: Lei Du Yingying Xu Haifeng Song Songsong Dai
This paper introduces the concept of equivalence operators based on overlap and grouping functions where the associativity property is not strongly required. Overlap functions and grouping functions are weaker than positive and continuous t-norms and t-conorms, respectively. Therefore, these equivalence operators do not necessarily satisfy certain properties, such as associativity and the neutrality principle. In this paper, two models of fuzzy equivalence operators are obtained by the composition of overlap functions, grouping functions and fuzzy negations. Their main properties are also studied.
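As a toy illustration of composing such operators (not one of the paper's two models), take the product overlap O(x,y) = xy, the probabilistic-sum grouping G(x,y) = x + y − xy, and the standard negation N(x) = 1 − x:

```python
def overlap(x, y):
    """Product overlap function (a positive, continuous t-norm is a special case)."""
    return x * y

def grouping(x, y):
    """Probabilistic-sum grouping function."""
    return x + y - x * y

def negation(x):
    """Standard fuzzy negation."""
    return 1.0 - x

def equivalence(x, y):
    """E(x, y) = G(O(x, y), O(N(x), N(y))): high when x and y are both high
    or both low. A toy composition, not the paper's construction."""
    return grouping(overlap(x, y), overlap(negation(x), negation(y)))
```

Note that E(0.5, 0.5) < 1, so reflexivity fails for intermediate values — an example of how equivalence operators built this way need not satisfy properties that t-norm-based ones guarantee.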
]]>Axioms doi: 10.3390/axioms13020124
Authors: Dragana Valjarević Vladica Stojanović Aleksandar Valjarević
In this paper, we investigate an application of the statistical concept of causality, based on Granger’s definition of causality, on raw increasing processes as well as on optional and predictable measures. A raw increasing process is optional (predictable) if the bounded (left-continuous) process X, associated with the measure μA(X), is self-caused. Also, the measure μA(X) is optional (predictable) if an associated process X is self-caused with some additional assumptions. Some of the obtained results, in terms of self-causality, can be directly applied to defining conditions for an optional stopping time to become predictable.
]]>Axioms doi: 10.3390/axioms13020122
Authors: Vladimir Volenec Ružica Kolar-Šuper
The cubic structure, a captivating geometric structure, finds applications across various areas of geometry through different models. In this paper, we explore the significant characteristics of tangentials in cubic structures of ranks 0, 1, and 2. Specifically, in the cubic structure of rank 2, we derive the Hessian configuration (123,164) of points and lines. Finally, we introduce and investigate the de Vries configuration of points and lines in a cubic structure.
]]>Axioms doi: 10.3390/axioms13020121
Authors: Georgy I. Burde
Multi-parameter families of Lax pairs for the modified Korteweg-de Vries (mKdV) equation are defined by applying a direct method developed in the present study. The gauge transformations, converting the defined Lax pairs to some simpler forms, are found. The direct method and its possible applications to other types of evolution equations are discussed.
]]>Axioms doi: 10.3390/axioms13020120
Authors: Jack C. Straton
We extend previous research to derive three additional (M−1)-dimensional integral representations over the interval [0,1]. The prior version covered the interval [0,∞]. This extension applies to products of M Slater orbitals, since they (and wave functions derived from them) appear in quantum transition amplitudes. It enables the magnitudes of coordinate vector differences (square roots of polynomials), |x1−x2| = √(x1² − 2x1x2cosθ + x2²), to be shifted from disjoint products of functions into a single quadratic form, allowing for the completion of its square. The (M−1)-dimensional integral representations of M Slater orbitals that both this extension and the prior version introduce provide alternatives to Fourier transforms and are much more compact. The latter introduce a 3M-dimensional momentum integral for M products of Slater orbitals (in M separate denominators), followed in many cases by another set of (M−1)-dimensional integral representations to combine those denominators into one denominator having a single (momentum) quadratic form. The current and prior methods are also slightly more compact than Gaussian transforms that introduce an M-dimensional integral for products of M Slater orbitals while simultaneously moving them into a single (spatial) quadratic form in a common exponential. One may also use addition theorems for extracting the angular variables or even direct integration at times. Each method has its strengths and weaknesses. We found that these (M−1)-dimensional integral representations over the interval [0,1] are numerically stable, as was the prior version with integrals running over the interval [0,∞], and one does not need to test for a sufficiently large upper integration limit as one does for the latter approach. For analytical reductions of integrals arising from any of the three, however, there is the possible drawback for large M of there being fewer tabled integrals over [0,1] than over [0,∞].
In particular, the results of both the prior and current representations have integration variables residing within square roots as arguments of Macdonald functions. In a number of cases, these can be converted to Meijer G-functions whose arguments have the form (ax²+bx+c)/x, for which a single tabled integral exists for integrals running over the interval [0,∞], as in the prior paper, and from which other forms can be found using the techniques given therein. This is not so for integral representations over the interval [0,1]. Finally, we introduce a fourth integral representation that is not easily generalizable to large M but may well provide a bridge for finding the requisite integrals for such Meijer G-functions over [0,1].
]]>Axioms doi: 10.3390/axioms13020118
Authors: Chun-Fei Long Gui-Dong Li
We investigate the existence of solutions to the scalar field equation −Δu = g(u) − λu in R^N, with mass constraint ∫_{R^N} |u|² dx = a > 0, u ∈ H¹(R^N). Here, N ≥ 3; g is a continuous function satisfying conditions of the Berestycki–Lions type; λ is a Lagrange multiplier. Our results supplement and generalize some of the results in L. Jeanjean, S.-S. Lu, Calc. Var. Partial Differential Equations 61 (2022), Paper No. 214, and J. Hirata, K. Tanaka, Adv. Nonlinear Stud. 19 (2019), 263–290.
]]>Axioms doi: 10.3390/axioms13020119
Authors: Kao-Yi Shen
This research introduces a rule-based decision-making model to investigate corporate governance, which has garnered increasing attention within financial markets. However, the existing corporate governance model developed by the Security and Future Institute of Taiwan employs numerous indicators to assess listed stocks. The ultimate ranking hinges on the number of indicators a company meets, assuming independent relationships between these indicators, thereby failing to reveal contextual connections among them. This study proposes a hybrid rough set approach based on multiple rules induced from a decision table, aiming to overcome these constraints. Additionally, four sample companies from Taiwan undergo evaluation using this rule-based model, demonstrating consistent rankings with the official outcome. Moreover, the proposed approach offers a practical application for guiding improvement planning, providing a basis for determining improvement priorities. This research introduces a rule-based decision model comprising ten rules, revealing contextual relationships between indicators through if–then decision rules. This study, exemplified through a specific case, also provides insights into utilizing this model to strengthen corporate governance by identifying strategic improvement priorities.
]]>Axioms doi: 10.3390/axioms13020117
Authors: Isaac Vázquez-Mendoza Erika E. Rodríguez-Torres Mojgan Ezadian Lindi M. Wahl Philip J. Gerrish
A mutator is a variant in a population of organisms whose mutation rate is higher than the average mutation rate in the population. For genetic and population dynamics reasons, mutators are produced and survive with much greater frequency than anti-mutators (variants with a lower-than-average mutation rate). This strong asymmetry is a consequence of both fundamental genetics and natural selection; it can lead to a ratchet-like increase in the mutation rate. The rate at which mutators appear is, therefore, a parameter that should be of great interest to evolutionary biologists generally; for example, it can influence: (1) the survival duration of a species, especially asexual species (which are known to be short-lived), (2) the evolution of recombination, a process that can ameliorate the deleterious effects of mutator abundance, (3) the rate at which cancer appears, (4) the ability of pathogens to escape immune surveillance in their hosts, (5) the long-term fate of mitochondria, etc. In spite of its great relevance to basic and applied science, the rate of mutation to a mutator phenotype continues to be essentially unknown. The reasons for this gap in our knowledge are largely methodological; in general, a mutator phenotype cannot be observed directly, but must instead be inferred from the numbers of some neutral “marker” mutation that can be observed directly: different mutation-rate variants will produce this marker mutation at different rates. Here, we derive the expected distribution of the numbers of the marker mutants observed, accounting for the fact that some of the mutants will have been produced by a mutator phenotype that itself arose by mutation during the growth of the culture. These developments, together with previous enhancements of the Luria–Delbrück assay (by one of us, dubbed the “Jones protocol”), make possible a novel experimental protocol for estimating the rate of mutation to a mutator phenotype. 
Simulated experiments using biologically reasonable parameters that employ this protocol show that such experiments in the lab can give us fairly accurate estimates of the rate of mutation to a mutator phenotype. Although our ability to estimate mutation-to-mutator rates from simulated experiments is promising, we view this study as a proof-of-concept study and an important first step towards practical empirical estimation.
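The fluctuation-assay setting behind such protocols can be caricatured in a few lines: cells double each generation, each dividing cell can spawn a marker mutant, and mutants breed true. This toy (stdlib only; all parameters hypothetical) deliberately ignores mutator phenotypes — it only shows the growth bookkeeping that the Luria–Delbrück distribution is built on:

```python
import random

def luria_delbruck(n0=1, generations=15, mu=1e-4, seed=1):
    """Toy Luria-Delbruck culture: non-mutants double each generation; each
    division yields one mutant daughter with probability mu; mutants breed
    true. Returns (non-mutants, mutants) at harvest. Illustrative only."""
    random.seed(seed)
    normals, mutants = n0, 0
    for _ in range(generations):
        # per-cell Bernoulli draws stand in for a binomial; fine at this scale
        new_mut = sum(1 for _ in range(normals) if random.random() < mu)
        normals = 2 * normals - new_mut
        mutants = 2 * mutants + new_mut
    return normals, mutants
```

Repeating this over many simulated cultures reproduces the heavy-tailed "jackpot" distribution of mutant counts; the paper's contribution is the harder inverse step of inferring mutator-generation rates from such counts.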
]]>Axioms doi: 10.3390/axioms13020116
Authors: Yuanhong Xu Mingcong Deng
In this paper, the robustness of a system with sundry disturbed open-loop dynamics is investigated by employing robust right coprime factorization (RRCF). These sundry disturbed open-loop dynamics are present not only in the feed-forward path, but also within the feedback loop. In such a control framework, the nominal plant is first right coprime factorized, and feed-forward and feedback controllers are designed based on the Bezout identity to ensure overall stability. Subsequently, considering the sundry disturbed open-loop dynamics, a new condition formulated as a disturbed Bezout identity is put forward to achieve closed-loop stability of the system, even in the presence of disturbances in sundry open loops, where a disturbed identity operator is defined in the feedback loop. This approach guarantees system robustness if a specific inequality condition is satisfied. It should be noted that the proposed approach is applicable to both linear and nonlinear systems with sundry disturbed open-loop dynamics. Simulations demonstrate the effectiveness of our methodology.
]]>Axioms doi: 10.3390/axioms13020115
Authors: Anthony G. Pakes
The deterministic SIR model for disease spread in a closed population is extended to allow infected individuals to recover to the susceptible state. This extension preserves the second constant of motion, i.e., a functional relationship of susceptible and removed numbers, S(t) and R(t), respectively. This feature allows a substantially complete elucidation of qualitative properties. The model exhibits three modes of behaviour classified in terms of the sign of −S′(0), the initial value of the epidemic curve. Model behaviour is similar to that of the SIS model if S′(0)>0 and to the SIR model if S′(0)<0. The separating case is completely soluble and S(t) is constant-valued. Long-term outcomes are determined for all cases, together with determination of the rate of convergence. Determining the shape of the epidemic curve motivates an investigation of curvature properties of all three state functions and quite complete results are obtained that are new, even for the SIR model. Finally, the second threshold theorem for the SIR model is extended in refined and generalised forms.
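A minimal Euler sketch of such an extension follows — one plausible reading of "recover to the susceptible state" with an extra return rate ρ, not the authors' exact system, and with hypothetical rates. Note that S′(0) = (ρ − βS₀)I₀, so its sign classifies the behaviour exactly as the abstract describes:

```python
def simulate_sirs_like(beta, rho, gamma, s0, i0, dt=0.001, steps=20000):
    """Euler integration of an SIR model where infecteds may return to S
    (rate rho) or be removed (rate gamma). Hypothetical parameterization."""
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i
        ds = -new_inf + rho * i          # susceptibles lost to infection, regained from I
        di = new_inf - (rho + gamma) * i # infecteds gained and lost
        dr = gamma * i                   # removals accumulate
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r
```

Since ds + di + dr = 0 at every step, the total population is conserved, mirroring the closed-population assumption.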
]]>Axioms doi: 10.3390/axioms13020114
Authors: Muhammad Aamir Ali Thanin Sitthiwirattham Elisabeth Köbis Asma Hanif
In this work, we initially derive an integral identity that incorporates a twice-differentiable function. After establishing this identity, we proceed to demonstrate some new Hermite–Hadamard–Mercer-type inequalities for twice-differentiable convex functions. Additionally, we demonstrate that the newly introduced inequalities extend certain pre-existing inequalities found in the literature. Finally, we provide applications of the newly established inequalities to verify their usefulness.
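For readers new to this family of results, the classical Hermite–Hadamard inequality — the base case that Mercer-type inequalities refine — is easy to check numerically for any convex f. The midpoint-rule quadrature below is our own illustrative choice:

```python
def hermite_hadamard_holds(f, a, b, n=10000):
    """Check f((a+b)/2) <= (1/(b-a)) * integral_a^b f <= (f(a)+f(b))/2,
    the classical Hermite-Hadamard inequality, via midpoint-rule quadrature."""
    h = (b - a) / n
    mean_value = sum(f(a + (i + 0.5) * h) for i in range(n)) * h / (b - a)
    return f((a + b) / 2) <= mean_value <= (f(a) + f(b)) / 2
```

Any convex function on any interval should pass; a strictly concave one would flip both inequalities.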
]]>Axioms doi: 10.3390/axioms13020113
Authors: Cheng Shen Zhijie Wen Wenliang Zhu Dapeng Fan Mingyuan Ling
Electro-optical detection systems face numerous challenges due to the complexity and difficulty of targeting controls for “low, slow and tiny” moving targets. In this paper, we present an optimal model of an advanced n-step adaptive Kalman filter and gyroscope short-term integration weighting fusion (nKF-Gyro) method with targeting control. A method is put forward to improve the model by adding a spherical coordinate system in which an adaptive Kalman filter is designed to estimate target movements. The targeting error formation is analyzed in detail to reveal the relationship between tracking controller feedback and line-of-sight position correction. Based on the establishment of a targeting control coordinate system for tracking moving targets, a dual closed-loop composite optimization control model is proposed. The outer loop is used for estimating the motion parameters and predicting the future encounter point, while the inner loop is used for compensating the targeting error of various elements in the firing trajectory. Finally, the modeling method is substituted into the disturbance simulation verification, which can monitor and compensate for the targeting error of moving targets in real time. The results show that in the optimal model incorporating the nKF-Gyro method with targeting control, error suppression was improved by up to 36.8% compared to the traditional KF method and was 25% better than that of the traditional nKF method.
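The estimation loop described above builds on the standard Kalman predict/update cycle. A textbook scalar version is sketched below — this is not the paper's nKF-Gyro fusion, and q and r are illustrative noise variances:

```python
def kalman_step(x, p, z, q=1e-3, r=0.1):
    """One predict/update cycle of a scalar Kalman filter with a static
    state model. x, p: state estimate and its variance; z: measurement;
    q: process noise variance; r: measurement noise variance."""
    p = p + q                  # predict: uncertainty grows with process noise
    k = p / (p + r)            # Kalman gain balances prior vs. measurement
    x = x + k * (z - x)        # update: move estimate toward the measurement
    p = (1.0 - k) * p          # update: uncertainty shrinks
    return x, p
```

Iterating the step over noisy measurements of a constant pulls the estimate toward the true value while the variance contracts; the n-step adaptive variant in the paper additionally tunes such parameters online.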
]]>Axioms doi: 10.3390/axioms13020112
Authors: Alexander J. Zaslavski
In this work, we study an iterative process induced by a contractive type set-valued mapping in a complete metric space and show its convergence, taking into account computational errors.
]]>Axioms doi: 10.3390/axioms13020111
Authors: Rodrick Wallace
Organized conflict, while confined by the laws of physics—and, under profound strategic incompetence, by the Lanchester equations—is not a physical process but rather an extended exchange between cognitive entities that have been shaped by path-dependent historical trajectories and cultural traditions. Cognition itself is confined by the necessity of duality, with an underlying information source constrained by the asymptotic limit theorems of information and control theories. We introduce the concept of a ‘basic underlying probability distribution’ characteristic of the particular cognitive process studied. The dynamic behavior of such systems is profoundly different for ‘thin-tailed’ and ‘fat-tailed’ distributions. The perspective permits the construction of new probability models that may provide useful statistical tools for the analysis of observational and experimental data associated with organized conflict, and, in some measure, for its management.
]]>Axioms doi: 10.3390/axioms13020110
Authors: Xiaoji Liu Ying Liu Hongwei Jin
In this paper, we present a new concept of generalized core orthogonality (called the C-S orthogonality) for two generalized core invertible matrices A and B. A is said to be C-S orthogonal to B if A^S B = 0 and B A^S = 0, where A^S denotes the generalized core inverse of A. Characterizations of C-S orthogonal matrices and the C-S additivity are also provided, and the connection between the C-S orthogonality and the C-S partial order is given using their canonical forms. Moreover, the concept of strong C-S orthogonality is defined and characterized.
]]>Axioms doi: 10.3390/axioms13020109
Authors: Huiling Xiang Hafiz Muhammad Athar Farid Muhammad Riaz
As digital technologies continue to reshape economic landscapes, the comprehensive evaluation of digital economy (DE) development in provincial regions becomes a critical endeavor. This article proposes a novel approach, integrating the linear programming method, fuzzy logic, and the alternative ranking order method accounting for two-step normalization (AROMAN), to assess the multifaceted facets of DE growth. The primary contribution of the AROMAN is the coupling of vector and linear normalization techniques in order to produce accurate data structures that are subsequently utilized in calculations. The proposed methodology accommodates the inherent uncertainties and complexities associated with the evaluation process, offering a robust framework for decision-makers. The linear programming aspect optimizes the weightings assigned to different evaluation criteria, ensuring a dynamic and context-specific assessment. By incorporating fuzzy logic, the model captures the vagueness and imprecision inherent in qualitative assessments, providing a more realistic representation of the DE’s multifaceted nature. The AROMAN further refines the ranking process, considering the interdependencies among the criteria and enhancing the accuracy of the evaluation. In order to ascertain the efficacy of the suggested methodology, a case study is undertaken pertaining to provincial areas, showcasing its implementation in the evaluation and a comparison of DE progress in various geographical settings. The outcomes illustrate the capacity of the model to produce perceptive and implementable insights for policymakers, thereby enabling them to make well-informed decisions and implement focused interventions that promote the expansion of the DE. Moreover, managerial implications, theoretical limitations, and a comparative analysis are also given of the proposed method.
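AROMAN's distinguishing step — coupling linear (min–max) and vector normalization before aggregation — can be sketched for a single criterion column. The equal-weight combination via β below is one common choice; treat the details as illustrative rather than the paper's exact formulas:

```python
import math

def two_step_normalize(column, beta=0.5):
    """AROMAN-style two-step normalization of one criterion column:
    couple linear (min-max) and vector normalization, weighted by beta.
    Illustrative sketch; assumes a benefit-type criterion."""
    lo, hi = min(column), max(column)
    lin = [(x - lo) / (hi - lo) for x in column]          # min-max in [0, 1]
    norm = math.sqrt(sum(x * x for x in column))
    vec = [x / norm for x in column]                      # unit-norm scaling
    return [beta * l + (1.0 - beta) * v for l, v in zip(lin, vec)]
```

Averaging the two schemes damps the sensitivity each has on its own: min–max is fragile to outliers at the extremes, vector normalization to overall column magnitude.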
]]>Axioms doi: 10.3390/axioms13020108
Authors: Huichao Yan Jia Chen
We investigate L2-approximation problems in the worst case setting in the weighted Hilbert spaces H(K_{R_{d,α,γ}}) with weights R_{d,α,γ} under the parameters 1 ≥ γ_1 ≥ γ_2 ≥ ⋯ ≥ 0 and 1 < α_1 ≤ α_2 ≤ ⋯. Several interesting weighted Hilbert spaces H(K_{R_{d,α,γ}}) appear in this paper. We consider the worst case error of algorithms that use finitely many arbitrary continuous linear functionals. We discuss the tractability of L2-approximation problems for the involved Hilbert spaces, which describes how the information complexity depends on d and ε^{−1}. As a consequence, we study strong polynomial tractability (SPT), polynomial tractability (PT), weak tractability (WT), and (t_1,t_2)-weak tractability ((t_1,t_2)-WT) for all t_1 > 1 and t_2 > 0 in terms of the introduced weights under the absolute error criterion or the normalized error criterion.
]]>Axioms doi: 10.3390/axioms13020107
Authors: Artūras Dubickas
In this note, we show that, for any real number τ ∈ [1/2, 1), any finite set of positive integers K, and any integer s_1 ≥ 2, the sequence of integers s_1, s_2, s_3, … satisfying s_{i+1} − s_i ∈ K if s_i is a prime number, and 2 ≤ s_{i+1} ≤ τ s_i if s_i is a composite number, is bounded from above. The bound is given in terms of an explicit constant depending only on τ, s_1 and the maximal element of K. In particular, if K is a singleton set and for each composite s_i the integer s_{i+1} in the interval [2, τ s_i] is chosen by some prescribed rule, e.g., s_{i+1} is the largest prime divisor of s_i, then the sequence s_1, s_2, s_3, … is periodic. In general, we show that the sequences satisfying the above conditions are all periodic if and only if either K = {1} and τ ∈ [1/2, 3/4) or K = {2} and τ ∈ [1/2, 5/9).
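The prescribed-rule case is easy to experiment with: take K = {1} and, for composite s_i, pass to its largest prime divisor — which always lies in [2, s_i/2], so any τ ≥ 1/2 admits it. A short sketch that iterates until the first repeated value:

```python
def largest_prime_factor(n):
    """Largest prime divisor of n >= 2 by trial division."""
    p, f = 1, 2
    while f * f <= n:
        while n % f == 0:
            p, n = f, n // f
        f += 1
    return n if n > 1 else p

def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def orbit(s1, k=1, limit=200):
    """Iterate: prime -> add k; composite -> largest prime factor.
    Returns (sequence of distinct values, first repeated value)."""
    seen, seq, s = {}, [], s1
    while s not in seen and len(seq) < limit:
        seen[s] = len(seq)
        seq.append(s)
        s = s + k if is_prime(s) else largest_prime_factor(s)
    return seq, s
```

Starting from 2, the orbit is 2 → 3 → 4 → 2, the eventual periodicity the theorem guarantees for K = {1}.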
]]>Axioms doi: 10.3390/axioms13020106
Authors: Alexander D. Bruno Alijon A. Azimov
This paper is a continuation and completion of the paper Bruno, A.D.; Azimov, A.A. Parametric Expansions of an Algebraic Variety Near Its Singularities. Axioms 2023, 5, 469, where we calculated parametric expansions of the three-dimensional algebraic manifold Ω, which appeared in theoretical physics, near its 3 singular points and near its one line of singular points. For that, we used algorithms of Nonlinear Analysis: extraction of truncated polynomials using the Newton polyhedron, their power transformations, and the Formal Generalized Implicit Function Theorem. Here we calculate parametric expansions of the manifold Ω near one more singular point, near two curves of singular points, and near infinity. We use three new elements: (1) computation in an algebraic extension of the field of rational numbers, (2) expansions near a curve of singular points, and (3) calculation of branches near infinity.
]]>