

Search Results (3)

Search Parameters:
Keywords = conjugate subgradients

24 pages, 1366 KB  
Article
A Family of Multi-Step Subgradient Minimization Methods
by Elena Tovbis, Vladimir Krutikov, Predrag Stanimirović, Vladimir Meshechkin, Aleksey Popov and Lev Kazakovtsev
Mathematics 2023, 11(10), 2264; https://doi.org/10.3390/math11102264 - 11 May 2023
Cited by 3 | Viewed by 2350
Abstract
For solving non-smooth multidimensional optimization problems, we present a family of relaxation subgradient methods (RSMs) with a built-in algorithm for finding a descent direction that forms an acute angle with all subgradients in the neighborhood of the current minimum. Minimizing the function along the opposite direction (this vector taken with a minus sign) enables the algorithm to leave the neighborhood of the current minimum. The family of algorithms for finding the descent direction is based on solving systems of inequalities, and the finite convergence of these algorithms on separable bounded sets is proved. The algorithms for solving systems of inequalities are then used to organize the RSM family. On quadratic functions, the methods of the RSM family are equivalent to the conjugate gradient method (CGM). The methods are intended for solving high-dimensional problems and are studied both theoretically and numerically. Examples of solving convex and non-convex, smooth and non-smooth problems of large dimension are given.
(This article belongs to the Special Issue Intelligent Computing and Optimization)
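
The core construction in this abstract — a direction forming an acute angle with all nearby subgradients, found by solving a system of inequalities, then used with a minus sign — can be made concrete with a short sketch. This is not the authors' algorithm: the Kaczmarz-style correction and the names `find_descent_direction` and `rsm_step` are illustrative assumptions only.

```python
import numpy as np

def find_descent_direction(subgrads, max_iter=100):
    """Find s with <s, g> > 0 for every row g of `subgrads`,
    via a simple Kaczmarz-style correction (illustrative only)."""
    s = subgrads.mean(axis=0)                  # start from the average subgradient
    for _ in range(max_iter):
        violated = [g for g in subgrads if s @ g <= 0.0]
        if not violated:
            return s                           # acute angle with all subgradients
        g = violated[0]
        s = s + ((1.0 - s @ g) / (g @ g)) * g  # enforce <s, g> = 1 for this g
    return s

def rsm_step(x, subgrad, neighborhood_pts, step=1e-2):
    """One illustrative relaxation-subgradient step around x."""
    G = np.array([subgrad(p) for p in neighborhood_pts])
    s = find_descent_direction(G)
    return x - step * s / np.linalg.norm(s)    # move against s (the "minus sign")
```

If <s, g> > 0 for every subgradient g near x, then -s is a descent direction with respect to all of them at once, which is what lets the step escape the neighborhood of the current minimum; the papers derive and analyze this direction-finding step rigorously, whereas the correction above only illustrates the idea.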

20 pages, 1598 KB  
Article
Properties of the Quadratic Transformation of Dual Variables
by Vladimir Krutikov, Elena Tovbis, Anatoly Bykov, Predrag Stanimirovic, Ekaterina Chernova and Lev Kazakovtsev
Algorithms 2023, 16(3), 148; https://doi.org/10.3390/a16030148 - 7 Mar 2023
Cited by 1 | Viewed by 2303
Abstract
We investigate the solution of a convex programming problem with a strongly convex objective function via the dual approach. The dual optimization problem has nonnegativity constraints on its variables. We study methods and properties of transformations of the dual variables that yield an unconstrained optimization problem. We first examine the previously known transformation of the components of the dual variables by their modulus (the modulus method) and show that, with this method, the degree of degeneracy of the function increases as the optimal point is approached. Given the ambiguity of the gradient in the boundary regions where the new dual variables change sign, and the increasing degeneracy of the function, one has to use relaxation subgradient methods (RSMs), which are difficult to implement but can solve non-smooth, non-convex optimization problems whose level surfaces are highly elongated. We propose instead to transform the components of the dual variables by their square (the quadratic method) and prove that the dual function transformed in this way has a Lipschitz gradient, which enables efficient gradient methods to find the extremum. These properties are confirmed by a computational experiment: with the quadratic transformation, compared to the modulus transformation, the problem can be solved by relaxation subgradient methods and by smooth minimization methods (the conjugate gradient method and a quasi-Newton method) with higher accuracy and lower computational cost. The described transformations of the dual variables were used in the module for calculating the maximum permissible emissions (MPE) of enterprises in the ERA-AIR software package for environmental monitoring of atmospheric air.
(This article belongs to the Collection Feature Papers in Algorithms)
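
A toy illustration of the quadratic transformation discussed in this abstract: substituting lambda = v**2 turns a dual problem with nonnegativity constraints into an unconstrained one, and the chain rule gives a smooth gradient. The `theta` below is a made-up stand-in dual objective, not the paper's actual problem.

```python
import numpy as np

c = np.array([1.0, 2.0])

def theta(lam):             # stand-in concave dual objective, maximized over lam >= 0
    return -0.5 * (lam @ lam) + c @ lam

def grad_theta(lam):
    return -lam + c

# Quadratic transformation lambda = v**2 removes the constraint lambda >= 0.
def grad_phi(v):            # chain rule: d(theta(v^2))/dv = 2 v * grad_theta(v^2)
    return 2.0 * v * grad_theta(v**2)

# Plain gradient ascent on the transformed, unconstrained problem.
v = np.ones(2)
for _ in range(2000):
    v += 0.05 * grad_phi(v)
print("lambda* ~", v**2)    # recovers lambda* = (1, 2) for this toy theta
```

With the modulus transformation lambda = |v| the composite objective keeps a nonsmooth kink at v = 0, whereas the square makes the composite continuously differentiable, which is the property the paper exploits to apply smooth gradient methods.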

33 pages, 530 KB  
Article
Relaxation Subgradient Algorithms with Machine Learning Procedures
by Vladimir Krutikov, Svetlana Gutova, Elena Tovbis, Lev Kazakovtsev and Eugene Semenkin
Mathematics 2022, 10(21), 3959; https://doi.org/10.3390/math10213959 - 25 Oct 2022
Cited by 9 | Viewed by 2395
Abstract
In the modern digital economy, optimal decision support systems and machine learning systems are becoming an integral part of production processes. Artificial neural network training, like other engineering problems, generates high-dimensional optimization problems that are difficult to solve with traditional gradient or conjugate gradient methods. Relaxation subgradient minimization methods (RSMMs) construct a descent direction that forms an obtuse angle with all subgradients of the current minimum neighborhood, which reduces to the problem of solving systems of inequalities. Having formalized the model and taken into account the specific features of subgradient sets, we reduce the problem of solving a system of inequalities to an approximation problem and obtain an efficient, rapidly converging iterative learning algorithm for finding the descent direction, conceptually similar to the iterative least squares method. The new algorithm is theoretically substantiated, and an estimate of its convergence rate is obtained in terms of the parameters of the subgradient set. On this basis, we develop and substantiate a new RSMM that has the properties of the conjugate gradient method on quadratic functions, and we present a practically realizable version of the minimization algorithm that uses a rough one-dimensional search. A computational experiment on complex high-dimensional functions confirms the effectiveness of the proposed algorithm. In neural network training problems where insignificant variables or neurons must be removed using methods such as the Tibshirani LASSO, our new algorithm outperforms known methods.
(This article belongs to the Special Issue Applied and Computational Mathematics for Digital Environments)
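
To make the nonsmooth setting of this abstract concrete, here is a minimal subgradient descent on a LASSO-type objective of the kind it mentions. The data, regularization weight, and step rule are arbitrary illustrative choices, not the authors' RSMM.

```python
import numpy as np

# Subgradient descent on f(w) = 0.5 * ||X w - y||^2 + alpha * ||w||_1,
# which is nonsmooth wherever a component of w is zero.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.0, 0.5]                    # sparse ground truth
y = X @ w_true + 0.01 * rng.standard_normal(50)

alpha = 0.5
w = np.zeros(10)
for k in range(1, 5001):
    g = X.T @ (X @ w - y) + alpha * np.sign(w)   # a valid subgradient of f at w
    w -= (0.5 / k) * g / (np.linalg.norm(g) + 1e-12)  # diminishing step length
print(np.round(w, 2))                            # near-sparse estimate of w_true
```

A plain subgradient step with a diminishing step size converges slowly on such problems; the point of the paper's RSMM is precisely to build a better descent direction from several subgradients rather than following a single one as above.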
