Search Results (3)

Search Parameters:
Keywords = multiextremal function

13 pages, 2151 KB  
Article
Modifications of Flower Pollination, Teacher-Learner and Firefly Algorithms for Solving Multiextremal Optimization Problems
by Pavel Sorokovikov and Alexander Gornov
Algorithms 2022, 15(10), 359; https://doi.org/10.3390/a15100359 - 28 Sep 2022
Cited by 1 | Viewed by 1592
Abstract
The article offers an approach to the numerical treatment of problems that require searching for an absolute (global) optimum. The approach combines globalized nature-inspired methods for exploration with local descent methods for exploitation. Three hybrid nonconvex minimization algorithms are developed and implemented. Modifications of the flower pollination, teacher-learner, and firefly algorithms are used as the nature-inspired global search methods, and a modified trust-region method based on a main-diagonal approximation of the Hessian matrix is applied for local refinement. We performed a numerical comparison of the variants of the implemented approach on a representative collection of multimodal objective functions. The implemented nonconvex optimization methods were also used to solve applied problems: optimization of the potentials of low-energy Sutton-Chen metal clusters with a very large number of atoms, and parametric identification of a nonlinear dynamic model. The results of this research confirm the performance of the suggested algorithms.
(This article belongs to the Special Issue Mathematical Models and Their Applications III)
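The hybrid scheme summarized in this abstract (nature-inspired global exploration followed by local refinement) can be illustrated with a minimal sketch. It is not the authors' implementation: the firefly-style move is heavily simplified, SciPy's Nelder-Mead stands in for the modified trust-region method with a diagonal Hessian approximation, and all function and parameter names below are hypothetical.

```python
# Minimal sketch of a "global exploration + local refinement" hybrid, loosely
# following the structure described in the abstract (not the authors' code).
import numpy as np
from scipy.optimize import minimize  # local refinement stand-in

def firefly_global_phase(f, bounds, n_agents=20, n_iters=100,
                         beta0=1.0, gamma=1.0, alpha=0.2, seed=None):
    """Very simplified firefly-style global search over box bounds (n_dim x 2)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = lo + rng.random((n_agents, lo.size)) * (hi - lo)   # random initial swarm
    fx = np.array([f(xi) for xi in x])
    for _ in range(n_iters):
        for i in range(n_agents):
            for j in range(n_agents):
                if fx[j] < fx[i]:  # agent j is "brighter" (better), so i moves towards it
                    beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(lo.size) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    fx[i] = f(x[i])
    best = np.argmin(fx)
    return x[best], fx[best]

def hybrid_minimize(f, bounds):
    """Global exploration, then local exploitation from the best candidate found."""
    x0, _ = firefly_global_phase(f, bounds)
    res = minimize(f, x0, method="Nelder-Mead")  # stand-in for the trust-region polish
    return res.x, res.fun

if __name__ == "__main__":
    # Rastrigin: a classic multiextremal test function with many local minima.
    rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    bounds = np.array([[-5.12, 5.12]] * 2)
    x_best, f_best = hybrid_minimize(rastrigin, bounds)
    print(x_best, f_best)
```

The division of labour is the point of the sketch: a cheap stochastic phase locates a promising basin, and a deterministic local method finishes the descent inside it.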

20 pages, 652 KB  
Article
Optimization of Turbulence Model Parameters Using the Global Search Method Combined with Machine Learning
by Konstantin Barkalov, Ilya Lebedev, Marina Usova, Daria Romanova, Daniil Ryazanov and Sergei Strijhak
Mathematics 2022, 10(15), 2708; https://doi.org/10.3390/math10152708 - 31 Jul 2022
Cited by 13 | Viewed by 3408
Abstract
The paper considers the simulation of slope flow and the problem of finding the optimal parameter values of this mathematical model. The slope flow is modeled using the finite volume method applied to the Reynolds-averaged Navier–Stokes equations with closure in the form of the k–ω SST turbulence model. The optimal values of the turbulence model coefficients for free-surface gravity multiphase flows were found using a global search algorithm. Calibration was performed to increase the similarity of the experimental and calculated velocity profiles; the root mean square error (RMSE) of the deviation between the calculated flow velocity profile and the experimental one is used as the objective function in the optimization problem. Calibration of the turbulence model coefficients for computing free-surface flows on test slopes using a multiphase model for interphase tracking had not been performed previously. To solve the multi-extremal optimization problem arising from the search for the minimum of this loss function, a new optimization approach is applied that uses a Peano curve to reduce the dimensionality of the problem. To speed up the optimization procedure, the objective function was approximated by an artificial neural network. Thus, an interdisciplinary approach was applied, which allowed the optimal values of six turbulence model parameters to be found using the OpenFOAM and Globalizer software.
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing II)
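The surrogate-assisted calibration loop described in this abstract could look roughly like the sketch below. It is a schematic under stated assumptions, not the authors' OpenFOAM/Globalizer workflow: `run_slope_flow_simulation` is a hypothetical placeholder for the CFD evaluation, and scikit-learn's MLPRegressor stands in for the neural-network approximation of the RMSE objective.

```python
# Schematic of surrogate-assisted calibration: RMSE between simulated and measured
# velocity profiles as the objective, approximated by a small neural network to
# reduce the number of expensive CFD runs. Names here are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

def run_slope_flow_simulation(theta):
    """Placeholder: run the slope-flow case with turbulence coefficients `theta`
    and return the computed velocity profile at the measurement points."""
    raise NotImplementedError

def rmse_objective(theta, u_measured):
    """Expensive objective: one CFD run per evaluation."""
    u_computed = run_slope_flow_simulation(theta)
    return np.sqrt(np.mean((u_computed - u_measured) ** 2))

def build_surrogate(thetas, rmse_values):
    """Fit a small neural-network surrogate of the RMSE landscape
    from coefficient samples already evaluated with the solver."""
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
    model.fit(np.asarray(thetas), np.asarray(rmse_values))
    return model

def cheap_objective(model):
    """Objective evaluated on the surrogate instead of the CFD solver."""
    return lambda theta: float(model.predict(np.asarray(theta).reshape(1, -1))[0])
```

The cheap surrogate objective could then be handed to a global search code (such as the Globalizer software mentioned in the abstract) in place of the full CFD evaluation, with occasional true solver runs to refresh the training set.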

14 pages, 532 KB  
Article
Acceleration of Global Optimization Algorithm by Detecting Local Extrema Based on Machine Learning
by Konstantin Barkalov, Ilya Lebedev and Evgeny Kozinov
Entropy 2021, 23(10), 1272; https://doi.org/10.3390/e23101272 - 28 Sep 2021
Cited by 3 | Viewed by 3210
Abstract
This paper studies global optimization problems and numerical methods for their solution. Such problems are computationally expensive since the objective function can be multi-extremal, nondifferentiable, and, as a rule, given in the form of a “black box”. This study uses a deterministic algorithm for finding the global extremum that is based neither on the concept of multistart nor on nature-inspired algorithms. The article provides the computational rules of the one-dimensional algorithm and the nested optimization scheme by which it can be applied to multidimensional problems. The solution complexity of global optimization problems depends essentially on the presence of multiple local extrema. In this paper, machine learning methods are applied to identify the regions of attraction of local minima; running local optimization algorithms in the selected regions can significantly accelerate the convergence of the global search, since it reduces the number of search trials in the vicinity of local minima. The results of computational experiments carried out on several hundred global optimization problems of different dimensionalities confirm the effect of accelerated convergence (in terms of the number of search trials required to solve a problem with a given accuracy).
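The acceleration idea in this abstract (detect regions of attraction of local minima from accumulated search trials, then refine locally) can be sketched as follows. The k-means clustering step and the Nelder-Mead refinement are simple stand-ins, not the paper's machine learning procedure or its deterministic global search; all names are hypothetical.

```python
# Schematic of the acceleration idea: cluster accumulated search trials to guess
# regions of attraction of local minima, then run a cheap local search from the
# best point of each region. Illustrative stand-in, not the paper's method.
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import minimize

def detect_attraction_regions(trial_points, trial_values, n_regions=5):
    """Group trial points (array n x d) into candidate regions of attraction
    and return the best point of each region as a local-search start."""
    labels = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit_predict(trial_points)
    starts = []
    for k in range(n_regions):
        idx = np.where(labels == k)[0]
        starts.append(trial_points[idx[np.argmin(trial_values[idx])]])  # best point per region
    return starts

def refine_with_local_search(f, starts):
    """Launch a local search from each detected region and keep the best result."""
    results = [minimize(f, x0, method="Nelder-Mead") for x0 in starts]
    best = min(results, key=lambda r: r.fun)
    return best.x, best.fun
```

The gain described in the abstract comes from spending fewer global search trials near local minima: once a region of attraction is recognized, a local method descends in it far more cheaply than the global algorithm would.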
