Numerical Analysis and Optimization

A special issue of Axioms (ISSN 2075-1680). This special issue belongs to the section "Mathematical Analysis".

Deadline for manuscript submissions: 31 January 2025 | Viewed by 10312

Special Issue Editors


Guest Editor
Faculty of Sciences and Mathematics, University of Pristina in Kosovska Mitrovica, Lole Ribara 29, 38220 Kosovska Mitrovica, Serbia
Interests: numerical analysis; optimization; line search; convergence rate; operations research; iterative methods

Guest Editor
Faculty of Sciences and Mathematics, University of Niš, Višegradska 33, 18106 Niš, Serbia
Interests: artificial neural networks; nonlinear systems; computational mathematics; machine learning

Guest Editor
1. Serbian Academy of Sciences and Arts, Kneza Mihaila 35, 11000 Belgrade, Serbia
2. Faculty of Sciences and Mathematics, University of Niš, 18000 Niš, Serbia
Interests: orthogonal polynomials, orthogonal systems and special functions; interpolation, quadrature processes and integral equations; approximations by polynomials, splines and linear operators; numerical and optimization methods; polynomials (extremal problems, inequalities, zeros); iterative processes and inequalities

Special Issue Information

Dear Colleagues,

We are preparing a collection of papers in applied mathematics on two of its essential subjects: numerical analysis and optimization. These two areas are the forerunners of mathematical modeling and numerical simulation. Mathematical modeling tools allow us to transform physical reality into adequate abstract models to which we can then apply relevant calculations. Correspondingly, numerical simulation is the process through which we compute the solutions of these mathematical models on a computer, thus allowing us to simulate physical reality.

Numerical analysis, as an area of mathematics and computer science, analyzes the convergence properties of, and implements, algorithms for solving various problems numerically. Such problems may originate from real-world applications of algebra, geometry, calculus and other mathematical disciplines, and they arise throughout the natural sciences, social sciences, engineering, medicine, business and many other areas of real life.

On the basis of numerical analysis and optimization theory, many efficient iterative processes can be established. These models can be applied to solve different types of problems, which often take the form of matrix equations. Solving various types of matrix equations is a contemporary problem in almost any computational procedure, such as training algorithms in machine learning, numerical simulations in many scientific and engineering areas, and advanced data analysis in economics and the social sciences.
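As a concrete illustration of such an iterative process, consider a minimal sketch of gradient descent applied to the Frobenius-norm error of the linear matrix equation AXB = D. This is a generic textbook iteration, not any particular published method, and the step size and iteration budget below are illustrative assumptions:

```python
import numpy as np

def solve_axb(A, B, D, step=1e-2, iters=5000):
    """Minimize E(X) = 0.5*||A X B - D||_F^2 by gradient descent.

    The gradient of E with respect to X is A^T (A X B - D) B^T.
    """
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(iters):
        R = A @ X @ B - D              # current residual
        X -= step * (A.T @ R @ B.T)    # gradient step
    return X

# Small consistent example: build D from a known X_true and recover it.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[1.0, 1.0], [0.0, 1.0]])
X_true = np.array([[1.0, 2.0], [3.0, 4.0]])
D = A @ X_true @ B
X = solve_axb(A, B, D)
print(np.allclose(X, X_true, atol=1e-4))  # True for a small enough step size
```

For singular or rectangular coefficient matrices the iteration converges to a least-squares solution depending on the initial state, which is the same qualitative behavior the neural-dynamics papers in this issue analyze in continuous time.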

Dr. Milena J. Petrović
Prof. Dr. Predrag S. Stanimirović
Prof. Dr. Gradimir V. Milovanović
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Axioms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • unconstrained optimization
  • constrained optimization
  • optimization methods
  • iterative processes
  • gradient-descent methods
  • projection methods
  • line search
  • convergence rate
  • hybrid methods
  • linear matrix equations
  • nonlinear system of equations

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Research

19 pages, 2188 KiB  
Article
Simultaneous Method for Solving Certain Systems of Matrix Equations with Two Unknowns
by Predrag S. Stanimirović, Miroslav Ćirić, Spyridon D. Mourtas, Gradimir V. Milovanović and Milena J. Petrović
Axioms 2024, 13(12), 838; https://doi.org/10.3390/axioms13120838 - 28 Nov 2024
Viewed by 378
Abstract
Quantitative bisimulations between weighted finite automata (WFAs) are defined as solutions of certain systems of matrix-vector inequalities and equations. In the context of fuzzy automata and max-plus automata, testing the existence of bisimulations and computing them are performed through a sequence of matrices that is built member by member, whereby the next member of the sequence is obtained by solving a particular system of linear matrix-vector inequalities and equations in which the previously computed member appears. By modifying the systems that define bisimulations, systems of matrix-vector inequalities and equations with k unknowns are obtained. Solutions of such systems, in the case of existence, witness the existence of a certain type of partial equivalence, where it is not required that the word functions computed by two WFAs match on all input words, but only on all input words whose lengths do not exceed k. Solutions of these new systems represent finite sequences of matrices which, in the context of fuzzy automata and max-plus automata, are also computed sequentially, member by member. Here we deal with those systems in the context of WFAs over the field of real numbers and propose a different approach, where all members of the sequence are computed simultaneously. More precisely, we apply a simultaneous approach in solving the corresponding systems of matrix-vector equations with two unknowns. Zeroing neural network (ZNN) neuro-dynamical systems for approximating solutions of heterotypic bisimulations are proposed. Numerical simulations are performed for various random initial states, and a comparison with the Matlab linear programming solver linprog and the pseudoinverse solution generated by the standard function pinv is given. Full article
(This article belongs to the Special Issue Numerical Analysis and Optimization)

15 pages, 3160 KiB  
Article
Packing Spheres into a Minimum-Height Parabolic Container
by Yuriy Stoyan, Georgiy Yaskov, Tetyana Romanova, Igor Litvinchev, José Manuel Velarde Cantú and Mauricio López Acosta
Axioms 2024, 13(6), 396; https://doi.org/10.3390/axioms13060396 - 13 Jun 2024
Cited by 1 | Viewed by 649
Abstract
Sphere packing consists of placing several spheres in a container without mutual overlapping. While packing into regular-shape containers is well explored, less attention is focused on containers with nonlinear boundaries, such as ellipsoids or paraboloids. Packing n-dimensional spheres into a minimum-height container bounded by a parabolic surface is formulated. The minimum allowable distances between spheres as well as between spheres and the container boundary are considered. A normalized Φ-function is used for analytical description of the containment constraints. A nonlinear programming model for the packing problem is provided. A solution algorithm based on the feasible directions approach and a decomposition technique is proposed. The computational results for problem instances with various space dimensions, different numbers of spheres and their radii, the minimal allowable distances and the parameters of the parabolic container are presented to demonstrate the efficiency of the proposed approach. Full article
(This article belongs to the Special Issue Numerical Analysis and Optimization)
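The non-overlap constraint described in this abstract can be sketched in a few lines. This is a hedged illustration only: the function name and test data are assumptions, and the paper's containment constraints against the parabolic boundary use normalized Φ-functions, which are not reproduced here:

```python
import numpy as np

def pairwise_feasible(centers, radii, d=0.0):
    """Check that no two spheres overlap, allowing a minimal separation d.

    Spheres i and j are compatible when the distance between their centers
    is at least r_i + r_j + d.
    """
    n = len(centers)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(centers[i] - centers[j]) < radii[i] + radii[j] + d:
                return False
    return True

# Two unit spheres stacked along the container axis, centers 2.5 apart:
centers = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 3.5]])
radii = np.array([1.0, 1.0])
print(pairwise_feasible(centers, radii, d=0.1))  # True: gap of 0.5 exceeds 0.1
```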

16 pages, 344 KiB  
Article
Strict Vector Equilibrium Problems of Multi-Product Supply–Demand Networks with Capacity Constraints and Uncertain Demands
by Ru Li and Guolin Yu
Axioms 2024, 13(4), 263; https://doi.org/10.3390/axioms13040263 - 16 Apr 2024
Viewed by 1270
Abstract
This paper considers a multi-product, multi-criteria supply–demand network equilibrium model with capacity constraints and uncertain demands. Strict network equilibrium principles are proposed both in the case of a single criterion and multi-criteria, respectively. Based on a single criterion, it proves that strict network equilibrium flows are equivalent to vector variational inequalities, and the existence of strict network equilibrium flows is derived by virtue of the Fan–Browder fixed point theorem. Based on multi-criteria, the scalarization of strict network equilibrium flows is given by using Gerstewitz’s function without any convexity assumptions. Meanwhile, the necessary and sufficient conditions of strict network equilibrium flows are derived in terms of vector variational inequalities. Finally, an example is given to illustrate the application of the derived theoretical results. Full article
(This article belongs to the Special Issue Numerical Analysis and Optimization)

14 pages, 274 KiB  
Article
An Efficient Penalty Method without a Line Search for Nonlinear Optimization
by Assma Leulmi
Axioms 2024, 13(3), 176; https://doi.org/10.3390/axioms13030176 - 7 Mar 2024
Viewed by 1071
Abstract
In this work, we integrate some new approximate functions using the logarithmic penalty method to solve nonlinear optimization problems. Firstly, we determine the direction by Newton’s method. Then, we establish an efficient algorithm to compute the displacement step according to the direction. Finally, we illustrate the superior performance of our new approximate function with respect to the line search one through a numerical experiment on numerous collections of test problems. Full article
(This article belongs to the Special Issue Numerical Analysis and Optimization)
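For readers unfamiliar with the logarithmic penalty approach mentioned in this abstract, here is a generic barrier-method sketch, not the paper's new approximate functions: a one-dimensional problem (minimize f(x) = x subject to x ≥ 1, whose solution is x = 1) solved by applying Newton's method to the penalized function φ_μ(x) = x − μ·log(x − 1) while shrinking μ. All parameter values are illustrative:

```python
def newton_barrier(mu, x, steps=20):
    """Minimize phi_mu(x) = x - mu*log(x - 1) by Newton's method."""
    for _ in range(steps):
        grad = 1.0 - mu / (x - 1.0)       # phi_mu'(x)
        hess = mu / (x - 1.0) ** 2        # phi_mu''(x) > 0 on x > 1
        x = x - grad / hess               # Newton step
        x = max(x, 1.0 + 1e-12)           # keep the iterate strictly feasible
    return x

mu, x = 1.0, 2.0                          # x = 1 + mu minimizes phi_mu for mu = 1
for _ in range(15):
    x = newton_barrier(mu, x)             # warm-start from the previous solve
    mu *= 0.6                             # gentle decrease keeps Newton in its basin
print(abs(x - 1.0) < 0.01)                # barrier minimizers 1 + mu approach 1
```

The minimizer of φ_μ is x = 1 + μ, so the outer loop drives the iterates to the constrained solution as μ → 0; too aggressive a decrease of μ can throw Newton's method out of its convergence basin, which is why a modest factor is used here.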
16 pages, 312 KiB  
Article
An Efficient Subspace Minimization Conjugate Gradient Method for Solving Nonlinear Monotone Equations with Convex Constraints
by Taiyong Song and Zexian Liu
Axioms 2024, 13(3), 170; https://doi.org/10.3390/axioms13030170 - 6 Mar 2024
Cited by 1 | Viewed by 1210
Abstract
The subspace minimization conjugate gradient (SMCG) methods proposed by Yuan and Stoer are efficient iterative methods for unconstrained optimization, where the search directions are generated by minimizing quadratic approximate models of the objective function at the current iterative point. Although the SMCG methods have illustrated excellent numerical performance, they are at present only used to solve unconstrained optimization problems. In this paper, we extend the SMCG methods and present an efficient SMCG method for solving nonlinear monotone equations with convex constraints by combining it with the projection technique, where the search direction satisfies the sufficient descent property. Under mild conditions, we establish the global convergence and R-linear convergence rate of the proposed method. The numerical experiments indicate that the proposed method is very promising. Full article
(This article belongs to the Special Issue Numerical Analysis and Optimization)
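The projection technique mentioned in this abstract can be illustrated with a generic projected fixed-point step (not the paper's SMCG direction): move against F, then project back onto the feasible set. The test equation and step size below are assumptions chosen so the iteration is easy to verify:

```python
import numpy as np

def projected_iteration(F, x0, step=0.3, iters=200):
    """Solve the monotone equation F(x) = 0 over C = {x : x >= 0}.

    Each step moves against F and projects onto the nonnegative orthant,
    so every iterate stays feasible.
    """
    x = x0
    for _ in range(iters):
        x = np.maximum(x - step * F(x), 0.0)  # projection onto x >= 0
    return x

# F(u + c) = u + sin(u) is monotone (its Jacobian is diagonal with entries
# 1 + cos >= 0), with the unique zero at x = c inside the feasible set.
c = np.array([1.0, 2.0])
F = lambda x: (x - c) + np.sin(x - c)
x = projected_iteration(F, np.array([5.0, 5.0]))
print(np.allclose(x, c, atol=1e-6))
```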

19 pages, 512 KiB  
Article
Hyper-Heuristic Approach for Tuning Parameter Adaptation in Differential Evolution
by Vladimir Stanovov, Lev Kazakovtsev and Eugene Semenkin
Axioms 2024, 13(1), 59; https://doi.org/10.3390/axioms13010059 - 19 Jan 2024
Cited by 2 | Viewed by 1330
Abstract
Differential evolution (DE) is one of the most promising black-box numerical optimization methods. However, DE algorithms suffer from the problem of control parameter settings. Various adaptation methods have been proposed, with success history-based adaptation being the most popular. However, hand-crafted designs are known to suffer from human perception bias. In this study, our aim is to design automatically a parameter adaptation method for DE with the use of the hyper-heuristic approach. In particular, we consider the adaptation of scaling factor F, which is the most sensitive parameter of DE algorithms. In order to propose a flexible approach, a Taylor series expansion is used to represent the dependence between the success rate of the algorithm during its run and the scaling factor value. Moreover, two Taylor series are used for the mean of the random distribution for sampling F and its standard deviation. Unlike most studies, the Student’s t distribution is applied, and the number of degrees of freedom is also tuned. As a tuning method, another DE algorithm is used. The experiments performed on a recently proposed L-NTADE algorithm and two benchmark sets, CEC 2017 and CEC 2022, show that there is a relatively simple adaptation technique with the scaling factor changing between 0.4 and 0.6, which enables us to achieve high performance in most scenarios. It is shown that the automatically designed heuristic can be efficiently approximated by two simple equations, without a loss of efficiency. Full article
(This article belongs to the Special Issue Numerical Analysis and Optimization)
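A minimal DE/rand/1 loop makes the role of the scaling factor F concrete. Sampling F from a Student's t distribution with fixed mean, scale and degrees of freedom is an assumption for illustration only, standing in for the paper's automatically tuned adaptation rule; the benchmark function and all parameter values are likewise illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x * x))

def de(fobj, dim=5, pop_size=20, gens=300, cr=0.9):
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([fobj(ind) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Sample the scaling factor F ~ 0.5 + 0.1*t(df=5), clipped.
            F = float(np.clip(0.5 + 0.1 * rng.standard_t(5), 0.05, 1.0))
            # Three distinct donors (for brevity, index i itself may be drawn).
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)                 # DE/rand/1 mutation
            cross = rng.random(dim) < cr             # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            f_trial = fobj(trial)
            if f_trial <= fit[i]:                    # greedy selection
                pop[i], fit[i] = trial, f_trial
    return fit.min()

best = de(sphere)
print("best objective:", best)
```

Even this fixed sampling scheme solves the 5-dimensional sphere function to high precision; the paper's contribution is learning how the mean and spread of the F-distribution should respond to the observed success rate during the run.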

26 pages, 1324 KiB  
Article
Application of Gradient Optimization Methods in Defining Neural Dynamics
by Predrag S. Stanimirović, Nataša Tešić, Dimitrios Gerontitis, Gradimir V. Milovanović, Milena J. Petrović, Vladimir L. Kazakovtsev and Vladislav Stasiuk
Axioms 2024, 13(1), 49; https://doi.org/10.3390/axioms13010049 - 14 Jan 2024
Cited by 1 | Viewed by 1515
Abstract
Applications of gradient method for nonlinear optimization in development of Gradient Neural Network (GNN) and Zhang Neural Network (ZNN) are investigated. Particularly, the solution of the matrix equation AXB=D which changes over time is studied using the novel GNN model, termed as GGNN(A,B,D). The GGNN model is developed applying GNN dynamics on the gradient of the error matrix used in the development of the GNN model. The convergence analysis shows that the neural state matrix of the GGNN(A,B,D) design converges asymptotically to the solution of the matrix equation AXB=D, for any initial state matrix. It is also shown that the convergence result is the least square solution which is defined depending on the selected initial matrix. A hybridization of GGNN with analogous modification GZNN of the ZNN dynamics is considered. The Simulink implementation of presented GGNN models is carried out on the set of real matrices. Full article
(This article belongs to the Special Issue Numerical Analysis and Optimization)

17 pages, 319 KiB  
Article
Constraint Qualifications for Vector Optimization Problems in Real Topological Spaces
by Renying Zeng
Axioms 2023, 12(8), 783; https://doi.org/10.3390/axioms12080783 - 12 Aug 2023
Cited by 2 | Viewed by 850
Abstract
In this paper, we introduce a series of definitions of generalized affine functions for vector-valued functions by use of a “linear set”. We prove that our generalized affine functions have some properties similar to those of generalized convex functions. We present examples showing that our generalized affinenesses are different from one another, and also provide an example showing that our definition of presubaffinelikeness is non-trivial; presubaffinelikeness is the weakest generalized affineness introduced in this article. We work with optimization problems that are defined and take values in linear topological spaces. We devote ourselves to the study of constraint qualifications, and derive some optimality conditions as well as a strong duality theorem. Our optimization problems have inequality constraints, equality constraints, and abstract constraints; our inequality constraints are generalized convex functions and our equality constraints are generalized affine functions. Full article
(This article belongs to the Special Issue Numerical Analysis and Optimization)
19 pages, 441 KiB  
Article
Convergence of Parameterized Variable Metric Three-Operator Splitting with Deviations for Solving Monotone Inclusions
by Yanni Guo and Yinan Yan
Axioms 2023, 12(6), 508; https://doi.org/10.3390/axioms12060508 - 24 May 2023
Viewed by 881
Abstract
In this paper, we propose a parameterized variable metric three-operator algorithm for finding a zero of the sum of three monotone operators in a real Hilbert space. Under some appropriate conditions, we prove the strong convergence of the proposed algorithm. Furthermore, we propose a parameterized variable metric three-operator algorithm with a multi-step inertial term and prove its strong convergence. Finally, we illustrate the effectiveness of the proposed algorithm with numerical examples. Full article
(This article belongs to the Special Issue Numerical Analysis and Optimization)
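To make the three-operator setting concrete, here is a sketch of the classical fixed-metric splitting step of Davis–Yin type, not the paper's parameterized variable-metric variant. The concrete problem, min over x ≥ 0 of |x| + 0.5(x − 3)², and all parameter values are illustrative assumptions; its closed-form solution is x = 2:

```python
import numpy as np

def soft(v, t):
    """Proximal operator of t*|.| (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

c, gamma, lam = 3.0, 0.5, 1.0   # smooth-term shift, step size, relaxation
z = 0.0
for _ in range(500):
    xb = max(z, 0.0)                                  # resolvent of the constraint x >= 0
    xa = soft(2 * xb - z - gamma * (xb - c), gamma)   # resolvent of |.| after a gradient step on the smooth term
    z = z + lam * (xa - xb)                           # governing-sequence update
print(abs(xb - 2.0) < 1e-6)  # xb converges to the solution max(0, c - 1) = 2
```

The scheme handles the sum of three monotone operators by evaluating two resolvents and one forward (gradient) step per iteration; the paper studies how far this basic recursion can be perturbed, by parameterization, variable metrics, deviations and inertial terms, while keeping strong convergence.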
