Numerical Analysis and Optimization

A special issue of Axioms (ISSN 2075-1680). This special issue belongs to the section "Mathematical Analysis".

Deadline for manuscript submissions: 1 June 2024 | Viewed by 5444

Special Issue Editors


Dr. Milena J. Petrović
Guest Editor
Faculty of Sciences and Mathematics, University of Pristina in Kosovska Mitrovica, Lole Ribara 29, 38220 Kosovska Mitrovica, Serbia
Interests: numerical analysis; optimization; line search; convergence rate; operations research; iterative methods

Prof. Dr. Predrag S. Stanimirović
Guest Editor
Faculty of Sciences and Mathematics, University of Niš, Višegradska 33, 18106 Niš, Serbia
Interests: numerical linear algebra; operations research; nonlinear optimization; heuristic optimization; hybrid methods of optimization; gradient neural networks; zeroing neural networks; symbolic computation

Prof. Dr. Gradimir V. Milovanović
Guest Editor
1. Serbian Academy of Sciences and Arts, Kneza Mihaila 35, 11000 Belgrade, Serbia
2. Faculty of Sciences and Mathematics, University of Niš, 18000 Niš, Serbia
Interests: orthogonal polynomials, orthogonal systems and special functions; interpolation, quadrature processes and integral equations; approximations by polynomials, splines and linear operators; numerical and optimization methods; polynomials (extremal problems, inequalities, zeros); iterative processes and inequalities

Special Issue Information

Dear Colleagues,

We are preparing a collection of papers in applied mathematics on two of its essential subjects: numerical analysis and optimization. These two areas underpin mathematical modeling and numerical simulation. Mathematical modeling tools allow us to transform physical reality into adequate abstract models to which we can then apply relevant calculations. Correspondingly, numerical simulation is the process through which we compute the solutions of these mathematical models on a computer, thus allowing us to simulate physical reality.

Numerical analysis, as an area of mathematics and computer science, studies the convergence properties of algorithms and implements them for solving various problems numerically. Such problems may originate in real-world applications of algebra, geometry, calculus, and other mathematical disciplines, and they arise throughout the natural sciences, social sciences, engineering, medicine, business, and other areas of everyday life.

On the basis of numerical analysis and optimization theory, many efficient iterative processes can be established. These models can be applied to solve different types of problems, which are often expressed through matrix equations. Solving matrix equations arises as a contemporary problem in almost any computational procedure, such as training algorithms in machine learning, numerical simulations in many scientific and engineering areas, and advanced data analysis in economics and the social sciences.
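As a small illustration of how such a matrix equation can be handled numerically, the following sketch solves AXB = D in the least-squares sense via the vectorization identity vec(AXB) = (Bᵀ ⊗ A) vec(X). This is a minimal NumPy example; the function name and test setup are ours, not taken from any of the papers below.

```python
import numpy as np

def solve_axb(A, B, D):
    """Least-squares solution of the matrix equation A X B = D via
    vectorization: vec(A X B) = (B^T kron A) vec(X), column-major vec."""
    K = np.kron(B.T, A)                                   # coefficient matrix
    x, *_ = np.linalg.lstsq(K, D.flatten(order="F"), rcond=None)
    return x.reshape((A.shape[1], B.shape[0]), order="F")

# Consistency check on random data.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 3)), rng.standard_normal((5, 4))
X_true = rng.standard_normal((3, 5))
D = A @ X_true @ B
print(np.linalg.norm(A @ solve_axb(A, B, D) @ B - D))     # ~0
```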

Dr. Milena J. Petrović
Prof. Dr. Predrag S. Stanimirović
Prof. Dr. Gradimir V. Milovanović
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Axioms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • unconstrained optimization
  • constrained optimization
  • optimization methods
  • iterative processes
  • gradient-descent methods
  • projection methods
  • line search
  • convergence rate
  • hybrid methods
  • linear matrix equations
  • nonlinear systems of equations

Published Papers (7 papers)


Research

16 pages, 344 KiB  
Article
Strict Vector Equilibrium Problems of Multi-Product Supply–Demand Networks with Capacity Constraints and Uncertain Demands
by Ru Li and Guolin Yu
Axioms 2024, 13(4), 263; https://doi.org/10.3390/axioms13040263 - 16 Apr 2024
Viewed by 394
Abstract
This paper considers a multi-product, multi-criteria supply–demand network equilibrium model with capacity constraints and uncertain demands. Strict network equilibrium principles are proposed for both the single-criterion and multi-criteria cases. In the single-criterion case, it is proven that strict network equilibrium flows are equivalent to vector variational inequalities, and the existence of strict network equilibrium flows is derived by virtue of the Fan–Browder fixed point theorem. In the multi-criteria case, the scalarization of strict network equilibrium flows is given using Gerstewitz's function without any convexity assumptions. Meanwhile, necessary and sufficient conditions for strict network equilibrium flows are derived in terms of vector variational inequalities. Finally, an example is given to illustrate the application of the derived theoretical results.
(This article belongs to the Special Issue Numerical Analysis and Optimization)

14 pages, 274 KiB  
Article
An Efficient Penalty Method without a Line Search for Nonlinear Optimization
by Assma Leulmi
Axioms 2024, 13(3), 176; https://doi.org/10.3390/axioms13030176 - 07 Mar 2024
Viewed by 671
Abstract
In this work, we integrate some new approximate functions using the logarithmic penalty method to solve nonlinear optimization problems. Firstly, we determine the direction by Newton's method. Then, we establish an efficient algorithm to compute the displacement step along this direction. Finally, we illustrate the superior performance of our new approximate function with respect to the line-search one through numerical experiments on numerous collections of test problems.
(This article belongs to the Special Issue Numerical Analysis and Optimization)
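For readers unfamiliar with the general idea, here is a minimal logarithmic-penalty sketch with damped Newton inner steps. The toy problem, step rule, and all names are ours for illustration; it does not reproduce the author's approximate functions or displacement-step computation.

```python
import numpy as np

def log_barrier_newton(x0, mu=1.0, shrink=0.5, outer=20, inner=30):
    """Minimize 0.5*||x||^2 subject to x >= 1 with a logarithmic penalty:
    phi_mu(x) = 0.5*x.x - mu*sum(log(x - 1)). The outer loop shrinks mu;
    the inner loop takes damped Newton steps on phi_mu."""
    x = x0.copy()
    for _ in range(outer):
        for _ in range(inner):
            g = x - mu / (x - 1.0)              # gradient of phi_mu
            h = 1.0 + mu / (x - 1.0) ** 2       # (diagonal) Hessian of phi_mu
            d = -g / h                          # Newton direction
            t = 1.0
            while np.any(x + t * d <= 1.0):     # damp to stay strictly feasible
                t *= 0.5
            x = x + t * d
        mu *= shrink                            # tighten the barrier
    return x

print(log_barrier_newton(np.full(3, 2.0)))      # approaches (1, 1, 1)
```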
16 pages, 312 KiB  
Article
An Efficient Subspace Minimization Conjugate Gradient Method for Solving Nonlinear Monotone Equations with Convex Constraints
by Taiyong Song and Zexian Liu
Axioms 2024, 13(3), 170; https://doi.org/10.3390/axioms13030170 - 06 Mar 2024
Viewed by 728
Abstract
The subspace minimization conjugate gradient (SMCG) methods proposed by Yuan and Stoer are efficient iterative methods for unconstrained optimization, in which the search directions are generated by minimizing quadratic approximate models of the objective function at the current iterate. Although SMCG methods have shown excellent numerical performance, to date they have only been used to solve unconstrained optimization problems. In this paper, we extend the SMCG methods and present an efficient SMCG method for solving nonlinear monotone equations with convex constraints by combining them with the projection technique, where the search direction satisfies the sufficient descent property. Under mild conditions, we establish the global convergence and R-linear convergence rate of the proposed method. The numerical experiments indicate that the proposed method is very promising.
(This article belongs to the Special Issue Numerical Analysis and Optimization)
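The projection technique mentioned in the abstract can be sketched as follows: a Solodov–Svaiter-type scheme in which, after a derivative-free line search, the iterate is projected onto a separating hyperplane and then back onto the constraint set. Here the simple direction d = −F(x) stands in for the paper's SMCG direction, and the test problem is ours.

```python
import numpy as np

def projection_method(F, proj, x0, beta=0.5, sigma=1e-4, tol=1e-8, max_iter=1000):
    """Projection scheme for monotone F(x) = 0 over a convex set C,
    where proj is the (Euclidean) projection onto C."""
    x = x0.copy()
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx
        t = 1.0
        while F(x + t * d) @ (-d) < sigma * t * (d @ d):
            t *= beta                          # derivative-free line search
        z = x + t * d
        Fz = F(z)
        # Project x onto the hyperplane {v : <F(z), v - z> = 0}, then onto C.
        x = proj(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz)
    return x

# Example: F(x) = x + sin(x) is monotone; solve F(x) = 0 on C = {x >= 0}.
F = lambda v: v + np.sin(v)
x = projection_method(F, lambda v: np.maximum(v, 0.0), np.full(5, 3.0))
print(np.linalg.norm(F(x)))                    # ~0
```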

19 pages, 512 KiB  
Article
Hyper-Heuristic Approach for Tuning Parameter Adaptation in Differential Evolution
by Vladimir Stanovov, Lev Kazakovtsev and Eugene Semenkin
Axioms 2024, 13(1), 59; https://doi.org/10.3390/axioms13010059 - 19 Jan 2024
Viewed by 829
Abstract
Differential evolution (DE) is one of the most promising black-box numerical optimization methods. However, DE algorithms suffer from the problem of control parameter settings. Various adaptation methods have been proposed, with success-history-based adaptation being the most popular. However, hand-crafted designs are known to suffer from human perception bias. In this study, our aim is to automatically design a parameter adaptation method for DE using the hyper-heuristic approach. In particular, we consider the adaptation of the scaling factor F, the most sensitive parameter of DE algorithms. To propose a flexible approach, a Taylor series expansion is used to represent the dependence between the success rate of the algorithm during its run and the scaling factor value. Moreover, two Taylor series are used for the mean and standard deviation of the random distribution for sampling F. Unlike most studies, the Student's t distribution is applied, and the number of degrees of freedom is also tuned. As a tuning method, another DE algorithm is used. The experiments, performed on the recently proposed L-NTADE algorithm and two benchmark sets, CEC 2017 and CEC 2022, show that there is a relatively simple adaptation technique, with the scaling factor changing between 0.4 and 0.6, which enables high performance in most scenarios. It is shown that the automatically designed heuristic can be efficiently approximated by two simple equations without a loss of efficiency.
(This article belongs to the Special Issue Numerical Analysis and Optimization)
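A hedged sketch of the sampling step described in the abstract: the scaling factor F is drawn from a Student's t distribution whose location and scale are low-order polynomials (Taylor-style expansions) of the recent success rate. The coefficients, clipping range, and function names below are illustrative placeholders, not the tuned values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_scaling_factor(success_rate, mean_coefs, std_coefs, df=5.0):
    """Draw the DE scaling factor F from a Student's t distribution whose
    location and scale depend polynomially on the current success rate."""
    mean = np.polyval(mean_coefs, success_rate)   # highest-degree coef first
    std = abs(np.polyval(std_coefs, success_rate))
    f = mean + std * rng.standard_t(df)
    return float(np.clip(f, 0.0, 1.0))            # keep F in a valid range

# Hypothetical coefficients: F drifts from ~0.4 to ~0.6 as successes rise,
# echoing the simple rule the paper reports.
F = sample_scaling_factor(success_rate=0.3,
                          mean_coefs=[0.2, 0.4],  # mean = 0.4 + 0.2*s
                          std_coefs=[0.0, 0.1])   # std  = 0.1
print(F)
```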

26 pages, 1324 KiB  
Article
Application of Gradient Optimization Methods in Defining Neural Dynamics
by Predrag S. Stanimirović, Nataša Tešić, Dimitrios Gerontitis, Gradimir V. Milovanović, Milena J. Petrović, Vladimir L. Kazakovtsev and Vladislav Stasiuk
Axioms 2024, 13(1), 49; https://doi.org/10.3390/axioms13010049 - 14 Jan 2024
Viewed by 957
Abstract
Applications of the gradient method for nonlinear optimization in the development of Gradient Neural Networks (GNN) and Zhang Neural Networks (ZNN) are investigated. In particular, the solution of the matrix equation AXB=D, which changes over time, is studied using a novel GNN model, termed GGNN(A,B,D). The GGNN model is developed by applying GNN dynamics to the gradient of the error matrix used in the development of the GNN model. The convergence analysis shows that the neural state matrix of the GGNN(A,B,D) design converges asymptotically to a solution of the matrix equation AXB=D for any initial state matrix. It is also shown that the limit is the least-squares solution determined by the selected initial matrix. A hybridization of GGNN with the analogous modification GZNN of the ZNN dynamics is considered. The Simulink implementation of the presented GGNN models is carried out on a set of real matrices.
(This article belongs to the Special Issue Numerical Analysis and Optimization)
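The classical GNN flow for the time-invariant case can be sketched with explicit Euler integration as follows. The gain, step size, and test data are ours, and the GGNN/GZNN modifications from the paper are not reproduced here.

```python
import numpy as np

def gnn_solve(A, B, D, gamma=2.0, h=1e-3, steps=50000):
    """Euler-integrated GNN dynamics dX/dt = -gamma * A^T (A X B - D) B^T,
    which drives the Frobenius error ||A X B - D|| toward zero."""
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(steps):
        E = A @ X @ B - D                    # error matrix
        X -= h * gamma * (A.T @ E @ B.T)     # explicit Euler step of the flow
    return X

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((4, 5))
D = A @ rng.standard_normal((3, 4)) @ B      # consistent right-hand side
X = gnn_solve(A, B, D)
print(np.linalg.norm(A @ X @ B - D))         # residual decays toward zero
```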

17 pages, 319 KiB  
Article
Constraint Qualifications for Vector Optimization Problems in Real Topological Spaces
by Renying Zeng
Axioms 2023, 12(8), 783; https://doi.org/10.3390/axioms12080783 - 12 Aug 2023
Viewed by 530
Abstract
In this paper, we introduce a series of definitions of generalized affine functions for vector-valued functions by use of "linear sets". We prove that our generalized affine functions share some properties with generalized convex functions. We present examples to show that our generalized affinenesses are different from one another, and we also provide an example showing that our definition of presubaffinelikeness is non-trivial; presubaffinelikeness is the weakest generalized affineness introduced in this article. We work with optimization problems that are defined in, and take values in, linear topological spaces. We devote our attention to the study of constraint qualifications and derive some optimality conditions as well as a strong duality theorem. Our optimization problems have inequality constraints, equality constraints, and abstract constraints; the inequality constraints are generalized convex functions and the equality constraints are generalized affine functions.
(This article belongs to the Special Issue Numerical Analysis and Optimization)
19 pages, 441 KiB  
Article
Convergence of Parameterized Variable Metric Three-Operator Splitting with Deviations for Solving Monotone Inclusions
by Yanni Guo and Yinan Yan
Axioms 2023, 12(6), 508; https://doi.org/10.3390/axioms12060508 - 24 May 2023
Viewed by 632
Abstract
In this paper, we propose a parameterized variable metric three-operator algorithm for finding a zero of the sum of three monotone operators in a real Hilbert space. Under some appropriate conditions, we prove the strong convergence of the proposed algorithm. Furthermore, we propose a parameterized variable metric three-operator algorithm with a multi-step inertial term and prove its strong convergence. Finally, we illustrate the effectiveness of the proposed algorithm with numerical examples.
(This article belongs to the Special Issue Numerical Analysis and Optimization)
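For orientation, a plain Davis–Yin three-operator splitting iteration looks as follows. This is a sketch on a toy problem of ours; the paper's parameterization, variable metric, deviations, and inertial terms are omitted.

```python
import numpy as np

def davis_yin(prox_f, prox_g, grad_h, z0, gamma=0.5, lam=1.0, iters=500):
    """Plain Davis-Yin splitting for 0 in df(x) + dg(x) + grad_h(x),
    with grad_h cocoercive and gamma chosen accordingly."""
    z = z0.copy()
    for _ in range(iters):
        x = prox_g(z, gamma)
        y = prox_f(2 * x - z - gamma * grad_h(x), gamma)
        z = z + lam * (y - x)              # relaxed fixed-point update
    return prox_g(z, gamma)

# Toy problem: min |x|_1 + indicator([-1, 1]^n) + 0.5*||x - b||^2.
soft = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - g, 0.0)  # prox of g*|.|_1
box = lambda v, g: np.clip(v, -1.0, 1.0)                         # prox of the box
b = np.array([2.0, 0.3, -1.5])
x = davis_yin(soft, box, lambda v: v - b, np.zeros(3))
print(x)   # expected [1.0, 0.0, -0.5]: soft(b, 1) clipped to the box
```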
