Article

Application of the Tomtit Flock Metaheuristic Optimization Algorithm to the Optimal Discrete Time Deterministic Dynamical Control Problem

by
Andrei V. Panteleev
* and
Anna A. Kolessa
Department of Mathematics and Cybernetics, Moscow Aviation Institute, National Research University, 4, Volokolamskoe Shosse, 125993 Moscow, Russia
*
Author to whom correspondence should be addressed.
Algorithms 2022, 15(9), 301; https://doi.org/10.3390/a15090301
Submission received: 4 August 2022 / Revised: 21 August 2022 / Accepted: 23 August 2022 / Published: 26 August 2022
(This article belongs to the Collection Feature Paper in Metaheuristic Algorithms and Applications)

Abstract:
A new bio-inspired method for optimizing an objective function on a parallelepiped set of admissible solutions is proposed. It uses a model of the behavior of tomtits during their search for food. The algorithm combines several techniques for finding the extremum of the objective function, such as the memory matrix and the Levy flight from the cuckoo algorithm. The trajectories of the tomtits are described by jump-diffusion processes. The algorithm is applied to classic and nonseparable optimal control problems for deterministic discrete dynamical systems. Problems of this type can often be solved using the discrete maximum principle or more general necessary optimality conditions, or the Bellman equation, but sometimes this is extremely difficult or even impossible; hence the need for new methods. The new metaheuristic algorithm makes it possible to obtain solutions of acceptable quality in acceptable time. The efficiency of the method is demonstrated and analyzed by solving a number of optimal deterministic discrete open-loop control problems: nonlinear nonseparable problems (Luus–Tassone and Li–Haimes) and separable problems for linear control dynamical systems.

1. Introduction

Global optimization algorithms are widely used to solve engineering, financial, optimal control problems, as well as problems of clustering, classification, deep machine learning and many others [1,2,3,4,5,6,7,8,9,10]. To solve complex applied problems, both deterministic methods of mathematical programming [11,12,13] and stochastic metaheuristic optimization algorithms can be used [14,15,16,17,18,19,20,21,22,23,24,25,26]. The advantage of the methods of the first group is their guaranteed convergence to the global extremum, and the advantage of the second group is the possibility of obtaining a good-quality solution at acceptable computational costs, even in the absence of convergence guarantees. Among metaheuristic optimization algorithms, various groups are conventionally distinguished: evolutionary methods, swarm intelligence methods, algorithms generated by the laws of biology and physics, multi-start, multi-agent, memetic, and human-based methods. The classification is conditional, since the same algorithm can belong to several groups at once. Four characteristic groups of metaheuristic algorithms can be distinguished, in which heuristics that have proven themselves in solving various optimization problems are coordinated by a higher-level algorithm.
Evolutionary methods, in which the search process is associated with the evolution of a set of solutions, called a population, include Genetic Algorithms (GA), the Self-Organizing Migrating Algorithm (SOMA), Memetic Algorithms (MA), Differential Evolution (DE), the Covariance Matrix Adaptation Evolution Strategy (CMAES), Scatter Search (SS), Artificial Immune Systems (AIS), Variable Mesh Optimization (VMO), Invasive Weed Optimization (IWO), and Cuckoo Search (CS) [3,4,14,15,16,17,18,20].
The group of Swarm Intelligence algorithms includes Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Bacterial Foraging Optimization (BFO), Bat-Inspired Algorithm (BA), Fish School Search (FSS), Cat Swarm Optimization (CSO), Firefly Algorithm (FA), Gray Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Glowworm Swarm Optimization (GSO), Shuffled Frog-Leaping Algorithm (SFLA), Krill Herd (KH), Elephant Herding Optimization (EHO), Lion Pride Optimization Algorithm (LPOA), Spotted Hyena Optimizer (SHO), Spider Monkey Optimization (SMO), Imperialist Competitive Algorithm (ICA), Stochastic Diffusion Search (SDS), Human Group Optimization Algorithm (HGOA), and Perch School Search Algorithm (PSS). In the methods of this group, swarm members (solutions) exchange information during the search process, using information about the absolute leaders and local leaders among neighbors of each solution and their own best positions [2,15,16,17,20,22,23,24,25,26,27,28].
The group of physics-based algorithms includes Simulated Annealing (SA), Adaptive Simulated Annealing (ASA), Central Force Optimization (CFO), Big Bang-Big Crunch (BB-BC), Harmony Search (HS), Fireworks Algorithm (FA), Grenade Explosion Method (GEM), Spiral Dynamics Algorithm (SDA), Intelligent Water Drops Algorithm (IWD), Electromagnetism-like Mechanism (EM), and the Gravitational Search Algorithm (GSA) [15,16,17,19,23,24,25,26,27,28].
The group of multi-start-based algorithms includes the Greedy Randomized Adaptive Search Procedure (GRASP) and Tabu Search (TS) [15,16,17].
Among these four groups, bio-inspired optimization algorithms can be distinguished as a part of nature-inspired algorithms [2,7,15,22,23,24,25,26,27,28]. In turn, among bio-inspired methods, bird-inspired algorithms that imitate the characteristic features of the behavior of flocks of various birds during foraging, migration, and hunting are widely used: Bird Mating Optimizer (BMO), Chicken Swarm Optimization (CSO), Crow Search Algorithm (CSA), Cuckoo Search (CS), Cuckoo Optimization Algorithm (COA), Emperor Penguin Optimizer (EPO), Emperor Penguins Colony (EPC), Harris Hawks Optimization (HHO), Migrating Bird Optimization (MBO), Owl Search Algorithm (OSA), Pigeon Inspired Optimization (PIO), Raven Roosting Optimization (RRO), Satin Bowerbird Optimizer (SBO), Seagull Optimization Algorithm (SOA), and the Sooty Tern Optimization Algorithm (STOA) [15,24,25,26].
One of the applications of bio-inspired optimization algorithms is the problem of finding control laws for discrete dynamical systems [1,10]. As a rule, this class of optimal control problems is solved by applying the necessary optimality conditions, which reduce to solving a boundary value problem for a system of difference equations. For systems with a convex set of admissible variations, the discrete maximum principle is applied. An alternative is to apply sufficient optimality conditions in the form of the Bellman equation. In this case, the optimal control is found in feedback form, depending on the state vector of the system, which is preferable. However, it is well known that as the dimension of the state vector grows, the computational cost of the dynamic programming procedure increases significantly. Particular difficulties arise in nonseparable problems, as well as in problems where the variation set is not convex and, therefore, the discrete maximum principle is not valid. Since the problem of optimal control of discrete systems is finite-dimensional, the use of efficient bio-inspired optimization methods for its solution is natural.
The article is devoted to the development of the swarm intelligence and bird-inspired groups of methods, based on observing the process of food search by a flock of tomtits, whose sequential movement is organized under the influence of the flock leader. To simulate the trajectory of each tomtit, the solution of a stochastic differential equation with jumps is used. The parameters of the random process, the drift vector and the diffusion matrix, depend on the positions of all members of the flock and their individual achievements. The method is hybrid, because it also uses the ideas of particle swarm optimization [15,16,17], methods that imitate the behavior of cuckoos with Levy flights [29], and the Luus–Jaakola method with successive reduction and subsequent incomplete restoration of the search area [1]. By the No Free Lunch theorem [30], it can be argued that the problem of developing new efficient global optimization algorithms for complex optimal control problems remains relevant. In particular, the method should allow us to solve both separable and non-classical nonseparable optimization problems for discrete deterministic dynamical control systems [1,10]. As a benchmark set for assessing the accuracy of the method and its computational cost, a collection of nonlinear nonseparable and classical linear separable problems with known best or exact solutions was used.
The paper is organized as follows. Section 2.1 contains the statement of the discrete deterministic open-loop control problem. Section 2.2 provides a description of the solution search strategy and a step-by-step novel bio-inspired metaheuristic optimization method. In Section 3, the application of the new optimization method described in Section 2.2 for the representative set of optimal control problems described in Section 2.1 is given. Recommendations on the choice of method hyperparameters are given, time costs are estimated to obtain numerical results of acceptable quality, and comparison with the results obtained by other known metaheuristic algorithms is presented.

2. Materials and Methods

2.1. Open-Loop Control Problem

Let us consider a nonlinear discrete deterministic dynamical control system described by a state equation of the form
$$x(t+1) = f(t, x(t), u(t)), \qquad t = 0, 1, \ldots, N-1, \tag{1}$$
where $t$ is the discrete time with number of stages $N$; $x$ is the $(n \times 1)$ state vector; $u$ is the $(q \times 1)$ control vector, $u \in U(t) = [a_1(t), b_1(t)] \times \cdots \times [a_q(t), b_q(t)]$; and $f_i(t, x, u)$, $i = 1, \ldots, n$, are known continuous functions.
Initial condition:
$$x(0) = x_0. \tag{2}$$
Let us define a set $D(0, x_0)$ of pairs $d = (x(\cdot), u(\cdot))$, where $x(\cdot) = \{x_0, x(1), \ldots, x(N)\}$ is a trajectory and $u(\cdot) = \{u(0), u(1), \ldots, u(N-1)\}$ is an open-loop control, satisfying the state equation (1) and initial condition (2).
The performance index to be minimized is defined on the set D ( 0 , x 0 ) as
$$I(d) = \sum_{t=0}^{N-1} f_0(t, x(t), u(t)) + F(x(N)), \tag{3}$$
or
$$I(d) = F(x(0), \ldots, x(N); u(0), \ldots, u(N-1)), \tag{4}$$
where $f_0(t, x, u)$, $F(x)$, and $F(x(\cdot), u(\cdot))$ are known continuous functions. The form (3) is typical for separable optimal control problems, and the form (4) for nonseparable ones.
It is required to find an optimal pair $d^* = (x^*(\cdot), u^*(\cdot)) \in D(0, x_0)$ that minimizes the performance index, i.e.,
$$I(d^*) = \min_{d \in D(0, x_0)} I(d). \tag{5}$$
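Any metaheuristic treats problem (5) as a finite-dimensional minimization over the control sequence, so the performance index of a candidate control is evaluated by simulating the system forward. A minimal Python sketch (the helper `evaluate` and the toy system are ours, not from the paper):

```python
# Sketch (ours, not from the paper): evaluating the separable performance
# index (3) for a candidate open-loop control by rolling the state
# equation (1) forward. The toy system below is purely illustrative.

def evaluate(u_seq, x0, f, f0, F):
    """Roll out x(t+1) = f(t, x(t), u(t)) and accumulate the cost (3)."""
    x = x0
    total = 0.0
    for t, u in enumerate(u_seq):
        total += f0(t, x, u)      # stage cost f0(t, x(t), u(t))
        x = f(t, x, u)            # state transition
    return total + F(x)           # add terminal cost F(x(N))

# Toy instance: x(t+1) = x(t) + u(t), f0 = 0.5 u^2, F(x) = x
cost = evaluate([1.0, -1.0], 0.0,
                f=lambda t, x, u: x + u,
                f0=lambda t, x, u: 0.5 * u * u,
                F=lambda x: x)    # -> 1.0
```

Each candidate solution proposed by the optimizer is decoded into a control sequence and scored through such a rollout.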
To solve problem (5), an algorithm that imitates the behavior of a flock of tomtits is proposed. This algorithm belongs to the nature-inspired (more precisely, bird-inspired) and swarm intelligence metaheuristic optimization algorithms [2,15,16,17]. This class of problems has previously been solved using iterative dynamic programming and the Luus–Jaakola algorithm [1], and by the Perch School Search optimization algorithm [10]. A comparative analysis of the results obtained is presented below in the solved examples.

2.2. Bio-Inspired Metaheuristic Optimization Method

The problem of finding the global minimum of the objective function $f(x) = f(x_1, \ldots, x_n)$ on the set of feasible solutions $D = [a_1, b_1] \times \cdots \times [a_n, b_n]$ is considered.
The Tomtit Flock Optimization (TFO) method of simulating the behavior of a flock of tomtits is a hybrid algorithm for finding the global conditional extremum of functions of many variables, related to both swarm intelligence methods and bio-inspired methods.
Tomtits unite in flocks. They obey the commands of the flock leader while retaining some freedom in choosing how to search for food. Tomtits are distinguished from other birds and animals by their special cohesion, the coordination of their collective actions, the intensity with which they exploit a food source once found until it is exhausted, and the clear, cooperative execution of commands common to the members of the flock.
Finite sets $I = \{x^j = (x_1^j, x_2^j, \ldots, x_n^j)^T,\ j = 1, 2, \ldots, NP\} \subset D$ of possible solutions, called populations, are used to solve the problem of finding the global constrained minimum of the objective function, where $x^j$ is a tomtit-individual (a potential feasible solution) with number $j$, and $NP$ is the population size.
At the beginning, when the iteration number is $k = 0$, the method creates the initial population of tomtits using the uniform distribution on the feasible solution set $D$. The value of the objective function $f(x)$ is calculated for each tomtit. The solution with the best objective function value becomes the position of the flock leader. The leader does not search for food at the current iteration; it waits for the search results of the flock members, processes them, and stores them in the memory matrix:
$$\begin{pmatrix} x_1^1 & \cdots & x_n^1 & f(x^1) \\ \vdots & \ddots & \vdots & \vdots \\ x_1^K & \cdots & x_n^K & f(x^K) \end{pmatrix}.$$
The memory matrix has size $K \times (n+1)$, where $K$ is a given maximum number of records. The best result achieved at each $k$-th iteration is added to the matrix until the matrix is full; the number $K$ thus defines the number of iterations in one pass. Records in the matrix are ordered as follows: the first record is the best solution $(x^1, f(x^1))$; the remaining records are ordered by increasing (nondecreasing) value of the objective function. When the memory matrix is full, the best result is placed in a special set, Pool (the set of the best results of all passes), after which the matrix is cleared.
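The memory-matrix and Pool bookkeeping described above can be sketched as follows (a Python illustration with names of our own choosing; the paper gives no code):

```python
# Sketch (ours) of the memory-matrix bookkeeping: the best result of each
# iteration is appended until K records accumulate, kept in nondecreasing
# order of f; when the matrix is full, the best record moves to the Pool
# and the matrix is cleared.

K = 3
memory, pool = [], []

def record(x, fx):
    memory.append((fx, x))
    memory.sort(key=lambda rec: rec[0])   # best record first
    if len(memory) == K:                  # matrix is full: one pass done
        pool.append(memory[0])
        memory.clear()

# Feed in six per-iteration best values (two passes of K = 3 iterations)
for i, fx in enumerate([5.0, 2.0, 7.0, 1.0, 4.0, 3.0]):
    record([float(i)], fx)
# pool now holds the best record of each pass; the overall answer is min(pool)
```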
The new position of the leader is randomly generated by the Levy distribution [29]:
$$x_i^{1,k+1} = x_i^{1,k} + \frac{\alpha}{k+1}\,\mathrm{Levy}_i(\lambda), \qquad i = 1, \ldots, n,$$
where $x_i^{1,k}$ is a coordinate of the leader's position at the $k$-th iteration, $\alpha$ is the movement step, and $\lambda \in (1, 3]$. Studies of animal behavior have shown that the Levy distribution describes the trajectories of birds and insects most accurately. Due to the heavy tails of the Levy distribution, the probability of large deviations of the random variable from the mean is high; therefore, according to the above expression, sufficiently large increments are possible for each coordinate of the vector $x^{j,k}$. If the new value of some coordinate does not belong to the set of feasible solutions, that is, $x_i \notin [a_i, b_i]$, then the generation process is repeated.
This process describes the flight of the flock leader from one place to another. The other tomtits fly after it at the command of the leader. The positions of these tomtits are modeled using a uniform distribution on a parallelepiped set. The center of the parallelepiped is determined by the position of the flock leader; the lengths of its sides are equal to $r_k (b_i - a_i)$, $i = 1, \ldots, n$, where $r_{k+1} = \gamma r_k$, $r_0 = 1$, and $\gamma$ is the reduction parameter of the search set. If $r_k < \varepsilon$, the current pass is stopped and a new one starts. Upon reaching $K$ iterations, or when the condition $r_k < \varepsilon$ is satisfied, the pass is considered complete, the pass counter is increased ($p = p + 1$), and the parameter describing the size of the next search space is set to $r_0 = \eta^p$, where $\eta$ is the reconstruction parameter of the search set. The found coordinate values determine the initial conditions for the search at the current iteration (the initial position of each tomtit).
It is assumed that each $j$-th individual has a memory that stores:
  • The current iteration number $k$;
  • The current position $x^{j,k}$ and the corresponding value of the objective function $f(x^{j,k})$;
  • The best position $x^{best}$ in the population and the corresponding value of the objective function $f(x^{best})$;
  • The best position $x^{j,best}$ of the tomtit over all iterations and the corresponding value of the objective function $f(x^{j,best})$;
  • The best position $x^{j,local}$ among all tomtits located within a neighborhood of radius $\rho$ of the $j$-th individual, and the corresponding value of the objective function $f(x^{j,local})$.
The trajectory of each individual (for all tomtits $j = 2, \ldots, NP$) on the interval $[0, T_t]$, during which the search is carried out at the current iteration, is described by the solution of the stochastic differential equation:
$$dx^{j,k} = f(x^{j,k}(t))\,dt + \sigma(x^{j,k}(t))\,dW + dq, \qquad x^{j,k}(0) = x^{j,k}, \quad j = 2, \ldots, NP,$$
where $W(t)$ is a standard Wiener stochastic process, $T_t$ is the time allotted by the flock leader for the search by the flock members at the current iteration, and $dq$ is the Poisson component, which can be written as
$$dq = \sum_p \theta_p\, \delta(t - \tau_p)\, dt,$$
where $\delta(t)$ is the asymmetric delta function and $\tau_p$ are the jump moments. At the random moments $\tau_p$, the position of a tomtit experiences random increments $\theta_p$, forming a Poisson stream of events of a given intensity $\mu$. The solution of the equation determines trajectories of the tomtit's movement that implement a diffusion search procedure with jumps.
The drift vector $f(x^{j,k}(t))$, $t \in [0, T_t]$, is described by the equation
$$f(x^{j,k}(t)) = c_1 r_1 \left[ x^{best} - x^{j,k}(t) \right],$$
i.e., it takes into account information about the best solution in the population: the position of the global leader of the flock.
The diffusion matrix $\sigma(x^{j,k}(t))$ takes into account information about the best solution obtained by the given individual over all past iterations, and about the best solution in the neighborhood of the current solution determined by the radius $\rho$:
$$\sigma(x^{j,k}(t)) = c_2 r_2 \left[ x^{j,best} - x^{j,k}(t) \right] + c_3 r_3 \left[ x^{j,local} - x^{j,k}(t) \right],$$
where $c_1, c_2, c_3$ are effect coefficients, and $r_1, r_2, r_3$ are random parameters uniformly distributed on the segment $[0, 1]$. The parameter $c_2$ controls the process of forgetting one's own search history; the parameter $c_3$ describes the influence of the leader among neighbors.
The solution of a stochastic differential equation is a random process whose trajectories have sections of continuous change interrupted by jumps of a given intensity. It describes the movement of the tomtit, accompanied by relatively short jumps. This solution can be found by numerical integration with a step size $h$. If any coordinate value hits the boundary of the search area or goes beyond it, it is set equal to the value on that boundary. The best position achieved during the current iteration is chosen as the new position $x^{j,k,search}$ of the tomtit. This process is shown in Figure 1a. Among all the new positions of the tomtits, the best one is selected; it is recorded in the memory matrix and identified with the final position of the flock leader at the current iteration, after which the next iteration begins with the procedure for finding a new position of the flock leader and the initial positions of the flock members relative to it. This procedure is shown in Figure 1b. The method terminates when the maximum number of passes $P$ is reached.
The proposed hybrid method uses the ideas of evolutionary methods to create the initial population [15,16,17,18]; the method of imitating the behavior of cuckoos to simulate the jump of the flock leader based on Levy flights [29]; a modified numerical Euler–Maruyama method for solving the stochastic differential equation describing the movement of individuals in the population [31]; the idea of the particle swarm optimization technique for describing the interaction of individuals in a flock with each other; a modified artificial immune systems method for updating the population; and the Luus–Jaakola method for updating the search set at the end of each pass [1].
Figure 2 below illustrates the block diagram of the algorithm.
Below is a detailed description of the algorithm.
Step 1. Creation of the initial tomtit population:
Step 1.1. Set parameters of method:
  • Number of tomtits in the population $NP$;
  • Flock leader movement step $\alpha$;
  • Reduction parameter of the search set $\gamma$;
  • Reconstruction parameter of the search set $\eta$;
  • Levy distribution parameter $\lambda$;
  • Tomtit's neighborhood radius $\rho$;
  • Parameters $c_1, c_2, c_3$, which define the drift vector and diffusion matrix in the stochastic differential equation;
  • Maximum number of records in the memory matrix $K$;
  • Integration step size $h$;
  • Maximum number of discrete steps $L$;
  • Time allotted for the tomtits' search $T_t = Lh$;
  • Maximum number of passes $P$;
  • Jump intensity parameter $\mu$.
Set $p = 0$ (the pass counter) and $r_0 = 1$.
Step 1.2. Create the initial population $I = \{x^j = (x_1^j, x_2^j, \ldots, x_n^j)^T,\ j = 1, 2, \ldots, NP\} \subset D$ of $NP$ solutions (tomtits) with coordinates $x_i$ randomly generated from the segment $[a_i, b_i]$ using the uniform distribution:
$$x_i^j = a_i + \mathrm{rand}_i[0,1] \cdot (b_i - a_i), \qquad i = 1, \ldots, n; \quad j = 1, \ldots, NP,$$
where $\mathrm{rand}_i[0,1]$ is a random number drawn from the uniform distribution on the segment $[0, 1]$.
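Step 1.2 can be sketched as follows (Python; the bounds and population size are illustrative):

```python
# Sketch (ours) of Step 1.2: uniform initialization of NP tomtits over
# D = [a1, b1] x ... x [an, bn]. Bounds and sizes are arbitrary examples.
import random

random.seed(42)                     # fixed seed for reproducibility
a, b = [-1.0, 0.0], [1.0, 5.0]      # per-coordinate bounds of D
NP, n = 4, 2                        # population size, dimension

population = [[a[i] + random.random() * (b[i] - a[i]) for i in range(n)]
              for _ in range(NP)]
# every generated coordinate lies in its segment [a_i, b_i] by construction
```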
Step 2. Movement of flock members. Implementation of the diffusion search procedure with jumps.
Step 2.1. Set the value: k = 0 (iteration counter).
Step 2.2. For each member of the flock, calculate the value of the objective function: $f(x^{1,k}), \ldots, f(x^{NP,k})$. Order the flock members by increasing (nondecreasing) objective function values. The solution $x^{1,k}$ corresponds to the best value, i.e., the position of the leader.
Step 2.3. Process current information about flock members.
For $j = 1$, set the best position in the population $x^{best} = x^{1,k}$ and the corresponding value of the objective function $f(x^{best}) = f(x^{1,k})$.
For all other flock members ($j = 2, \ldots, NP$), find:
  • The best position $x^{j,best}$ of the tomtit over all iterations and the corresponding value of the objective function $f(x^{j,best})$;
  • The best position $x^{j,local}$ among all tomtits located within a neighborhood of radius $\rho$ of the $j$-th individual, and the corresponding value of the objective function $f(x^{j,local})$.
Step 2.4. For each $j = 2, \ldots, NP$, find the numerical solution of the stochastic differential equation with step size $h$ on the segment $[0, Lh]$ using the Euler–Maruyama method.
Step 2.4.1. Set $x^{j,k}(0) = x^{j,k}$ and $l = 0$.
Step 2.4.2. Find the diffusion part of the solution
$$\tilde{x}^{j,k}(l+1) = x^{j,k}(l) + h\, f(x^{j,k}(l)) + \sqrt{h}\, \sigma(x^{j,k}(l))\, \xi,$$
where
$$f(x^{j,k}(l)) = c_1 r_1 \left[ x^{best} - x^{j,k}(l) \right], \qquad \sigma(x^{j,k}(l)) = c_2 r_2 \left[ x^{j,best} - x^{j,k}(l) \right] + c_3 r_3 \left[ x^{j,local} - x^{j,k}(l) \right],$$
$r_1, r_2, r_3$ are random parameters uniformly distributed on the segment $[0, 1]$, and $\xi$ is a random variable that has the standard normal distribution with zero mean and unit variance. This variable can be modeled using the Box–Muller method:
$$\xi = \sqrt{-2 \ln \alpha_1}\, \cos 2\pi\alpha_2 \quad \text{or} \quad \xi = \sqrt{-2 \ln \alpha_1}\, \sin 2\pi\alpha_2,$$
in which $\alpha_1$ and $\alpha_2$ are independent random variables uniformly distributed on the segment $(0, 1)$.
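A sketch of the Box–Muller transform (Python; the empirical check of the mean and variance is ours):

```python
# Sketch (ours) of the Box-Muller transform used in Step 2.4.2: two
# independent uniforms produce one standard normal variable.
import math
import random

def box_muller(rng):
    a1 = 1.0 - rng.random()               # uniform on (0, 1], avoids log(0)
    a2 = rng.random()
    return math.sqrt(-2.0 * math.log(a1)) * math.cos(2.0 * math.pi * a2)

rng = random.Random(0)
sample = [box_muller(rng) for _ in range(100_000)]
mean = sum(sample) / len(sample)
var = sum((s - mean) ** 2 for s in sample) / len(sample)
# mean should be close to 0 and var close to 1
```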
Step 2.4.3. Check the jump condition: $\beta \le \mu h$, where $\beta$ is a random variable uniformly distributed on the segment $(0, 1)$. During integration, one should check whether the solution belongs to the set of feasible solutions: if any coordinate of the solution hits the boundary of the search area or goes beyond it, it is set equal to the corresponding boundary value.
If the jump condition is satisfied, then set the value:
$$x^{j,k}(l+1) = \tilde{x}^{j,k}(l+1) + \theta,$$
where $\theta$ is a random increment whose coordinates are modeled according to the uniform distribution law: $\theta_i \in [-\Delta_i, \Delta_i]$, $\Delta_i = \min\left[ (b_i - \tilde{x}_i^{j,k}(l+1)),\ (\tilde{x}_i^{j,k}(l+1) - a_i) \right]$.
Otherwise, set $x^{j,k}(l+1) = \tilde{x}^{j,k}(l+1)$.
Step 2.4.4. Check the stop condition of the moving process of a flock’s member.
If l < L , set the value l = l + 1 and go to step 2.4.2. Otherwise, go to step 2.5.
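One iteration of Steps 2.4.1–2.4.4 for a single flock member can be sketched as follows (Python; all numeric values are illustrative, boundary handling follows the clipping rule of Step 2.4.3, and the standard normal variable is drawn with `rng.gauss` for brevity instead of Box–Muller):

```python
# Sketch (ours) of one Euler-Maruyama step with a Poisson jump check for a
# single tomtit. Symbols mirror the text: x_best is the population leader,
# x_jbest the individual's best, x_jlocal the local leader within radius rho.
import random

def em_step(x, x_best, x_jbest, x_jlocal, a, b, h, mu, c, rng):
    n = len(x)
    r1, r2, r3 = rng.random(), rng.random(), rng.random()
    new = []
    for i in range(n):
        drift = c[0] * r1 * (x_best[i] - x[i])
        diff = c[1] * r2 * (x_jbest[i] - x[i]) + c[2] * r3 * (x_jlocal[i] - x[i])
        xi = rng.gauss(0.0, 1.0)                 # standard normal variable
        v = x[i] + h * drift + (h ** 0.5) * diff * xi
        v = min(max(v, a[i]), b[i])              # clip to the boundary
        new.append(v)
    if rng.random() <= mu * h:                   # jump condition beta <= mu*h
        for i in range(n):
            delta = min(b[i] - new[i], new[i] - a[i])
            new[i] += rng.uniform(-delta, delta) # jump stays feasible
    return new

rng = random.Random(1)
x1 = em_step([0.5, 0.5], [1.0, 1.0], [0.8, 0.2], [0.6, 0.7],
             a=[0.0, 0.0], b=[1.0, 1.0], h=0.05, mu=2.0,
             c=[1.5, 1.5, 1.5], rng=rng)
```

Repeating this step $L$ times and keeping the best visited point yields the $x^{j,k,search}$ of Step 2.5.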
Step 2.5. For each flock member ($j = 2, \ldots, NP$), find the best solution among all solutions obtained: $x^{j,k}(0), x^{j,k}(1), \ldots, x^{j,k}(L)$. Denote it $x^{j,k,search}$, $j = 2, \ldots, NP$.
Step 2.6. Among the positions $x^{1,k}, x^{2,k,search}, \ldots, x^{NP,k,search}$ of the tomtits (solutions), find the best one and record it in the memory matrix. This is the leader's position at the end of the current iteration, $x^{1,k,search}$.
Step 2.7. Check the stop condition of a pass. If $k = K$ (the memory matrix is full) or $r_k < \varepsilon$, stop the pass; choose the best solution from the memory matrix, put it into the set Pool, and go to Step 3. If $k < K$, set $r_{k+1} = \gamma r_k$, $k = k + 1$, and go to Step 2.8.
Step 2.8. Find the new position of the leader:
$$x^{1,k} = x^{1,k,search} + \frac{\alpha}{k+1}\,\mathrm{Levy}(\lambda).$$
To generate a random variable according to the Levy distribution, for each coordinate $x_i = \mathrm{Levy}_i(\lambda)$ one generates a number $R_i$, $i = 1, \ldots, n$, by the uniform distribution law on the set $[\varepsilon, b_i - a_i]$, where $\varepsilon = 10^{-7}$ is a distinguishability constant, and proceeds as follows [10]:
  • Generate the numbers $\theta_i = 2\pi R_i$ and $L_i = (R_i + \varepsilon)^{-1/\lambda}$, $i = 1, \ldots, n$, where $\lambda$ is a distribution parameter;
  • Calculate values of coordinates:
    $$x_i(\lambda) = L_i \sin\theta_i, \quad i = 1, \ldots, \lfloor n/2 \rfloor; \qquad x_i(\lambda) = L_i \cos\theta_i, \quad i = \lfloor n/2 \rfloor + 1, \ldots, n.$$
If the obtained value of the coordinate $x_i$ does not belong to the set of feasible solutions, that is, $x_i \notin [a_i, b_i]$, then the generation process is repeated for the coordinate $x_i$.
If, after ten unsuccessful generations, the coordinate $x_i$ still does not belong to the set of feasible solutions, then generate $x_i$ by the uniform distribution law on the set $[a_i, b_i]$.
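The coordinate-wise Levy generation of Step 2.8 can be sketched as follows (Python; we read the exponent on $(R_i + \varepsilon)$ as $-1/\lambda$, consistent with the heavy tail of the Levy distribution, and the bounds below are illustrative):

```python
# Sketch (ours) of the Levy-coordinate generation in Step 2.8. For each
# coordinate: R_i ~ U[eps, b_i - a_i], theta_i = 2*pi*R_i,
# L_i = (R_i + eps)^(-1/lambda); sin for the first half of the coordinates,
# cos for the second half.
import math
import random

def levy_coords(a, b, lam, rng, eps=1e-7):
    n = len(a)
    out = []
    for i in range(n):
        R = rng.uniform(eps, b[i] - a[i])
        theta = 2.0 * math.pi * R
        L = (R + eps) ** (-1.0 / lam)     # heavy-tailed magnitude
        out.append(L * math.sin(theta) if i < n // 2 else L * math.cos(theta))
    return out

rng = random.Random(7)
step = levy_coords(a=[0.0, 0.0], b=[1.0, 1.0], lam=1.5, rng=rng)
```

The resulting increment is added to the leader's position; coordinates that land outside $[a_i, b_i]$ are regenerated as described above.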
Step 2.9. Launch the flight of the rest of the tomtits.
The positions of all other tomtits ($j = 2, \ldots, NP$) are modeled using a uniform distribution on the parallelepiped set (see Step 1.2). The center of the parallelepiped is determined by the position of the flock leader $x^{1,k}$ (see Step 2.8), and the lengths of its sides are equal to $r_k (b_i - a_i)$, $i = 1, \ldots, n$.
If the value of a tomtit's coordinate does not belong to the set of feasible solutions, then generate a new position using the uniform distribution:
  • On the set $[a_i, x_i^{1,k}]$ when $x_i^{j,k} < a_i$, $j = 2, \ldots, NP$;
  • On the set $[x_i^{1,k}, b_i]$ when $x_i^{j,k} > b_i$, $j = 2, \ldots, NP$.
Go to step 2.2.
Step 3. Check the stop condition of the search. If $p = P$, then stop the process and go to Step 4. If $p < P$, clear the memory matrix, increase the pass counter ($p = p + 1$), and set $r_0 = \eta^p$, where $\eta$ is the reconstruction parameter of the search set.
Generate a new flock of tomtits. Choose the best solution from the Pool set: $x^{1,0}$. The positions of all other tomtits ($j = 2, \ldots, NP$) are modeled using a uniform distribution on the parallelepiped set (see Step 1.2). The center of the parallelepiped is determined by the position of the flock leader $x^{1,0}$, and the lengths of its sides are equal to $r_0 (b_i - a_i)$, $i = 1, \ldots, n$.
If the value of a tomtit's coordinate does not belong to the set of feasible solutions, then generate a new position using the uniform distribution:
  • On the set $[a_i, x_i^{1,0}]$ when $x_i^{j,k} < a_i$, $j = 2, \ldots, NP$;
  • On the set $[x_i^{1,0}, b_i]$ when $x_i^{j,k} > b_i$, $j = 2, \ldots, NP$.
Go to step 2.
Step 4. Choosing the best solution. Among the solutions in the set Pool, find the best one; it is taken as the approximate solution of the optimization problem.
As a result of generalizing the data obtained in solving both classical separable and nonclassical nonseparable optimal control problems, the following recommendations for choosing the hyperparameter values were developed: number of tomtits in the population $NP \in [10, 1000]$; flock leader movement step $\alpha \in [0.0001, 0.2]$; reduction parameter of the search set $\gamma \in [0.1, 0.99]$; reconstruction parameter of the search set $\eta \in [0.1, 0.9]$; Levy distribution parameter $\lambda \in [1.1, 2]$; tomtit's neighborhood radius $\rho \in [1, 1000]$; parameters $c_1, c_2, c_3 \in [0.5, 30]$; maximum number of records in the memory matrix $K \in [5, 50]$; integration step size $h \in [0.01, 0.1]$; maximum number of discrete steps $L \in [2, 10]$; maximum number of passes $P \in [10, 200]$; jump intensity parameter $\mu \in [1, 30]$.

3. Results

3.1. Example 1. The One-Dimensional Optimal Control Problem with an Exact Solution

The dynamical system is described by the state equation:
$$x(t+1) = x(t) + u(t),$$
where $x \in \mathbb{R}$, $t = 0, 1, \ldots, N-1$, and $u \in \mathbb{R}$. All variables, i.e., the coordinates of the state vector of the dynamical system and the coordinates of the control vector, are hereinafter written in normalized form.
It is required to find such a pair of trajectory and control $(x^*(\cdot), u^*(\cdot))$ that the value of the performance index $I$ is minimal:
$$I = \frac{1}{2}\sum_{t=0}^{N-1} \gamma^{-t} u^2(t) + x(N), \qquad \gamma > 1.$$
In this case, it is possible to solve this problem analytically:
$$u^*(t) = -\gamma^t; \qquad x^*(t) = x_0 + \frac{1 - \gamma^t}{\gamma - 1}; \qquad \min I = x_0 + \frac{1 - \gamma^N}{2(\gamma - 1)}.$$
In this problem, the initial condition is $x(0) = 0$, and the constraints on the control are $-2 \cdot 10^4 \le u \le 2 \cdot 10^4$. For this task, the number of stages is $N = 50$; the exact value of the performance index is $\min I = -581.954264398477$.
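The closed-form solution can be checked numerically. The discount base $\gamma$ is not stated explicitly in the text; $\gamma = 1.1$ is our inference, since it reproduces the quoted optimal values for both $N = 50$ and $N = 80$ (a Python check):

```python
# Check (ours) of the closed form min I = x0 + (1 - gamma^N) / (2(gamma - 1)).
# gamma = 1.1 is our inference, not stated in the text: it reproduces the
# quoted optima for N = 50 and N = 80.

def rollout_cost(x0, gamma, N):
    """Apply u*(t) = -gamma^t to x(t+1) = x(t) + u(t) and sum the cost."""
    x, cost = x0, 0.0
    for t in range(N):
        u = -gamma ** t
        cost += 0.5 * gamma ** (-t) * u * u   # stage cost 0.5 * gamma^-t * u^2
        x += u
    return cost + x                           # terminal term x(N)

def closed_form(x0, gamma, N):
    return x0 + (1.0 - gamma ** N) / (2.0 * (gamma - 1.0))

# The rollout under u*(t) and the closed form agree for both horizons
for N in (50, 80):
    assert abs(rollout_cost(0.0, 1.1, N) - closed_form(0.0, 1.1, N)) < 1e-6
```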
To solve the problems under consideration with the new TFO algorithm, a computer with the following characteristics was used: an Intel Core i7-2860QM processor with a clock frequency of 3.3 GHz and 16 gigabytes of RAM. The software package was implemented in the C# programming language (7.2) with .NET Framework 4.7.2, available from https://dotnet.microsoft.com/en-us/ (accessed on 25 August 2022).
When solving this problem, the approximate value of the performance index obtained by the TFO algorithm was $I^*_{TFO} = -581.953556374572$, with a runtime of 1:07 min. The relative error of the obtained performance index value was $1.2 \times 10^{-6}$. The same problem was solved by the Perch School Search algorithm [10]: the runtime was 5:36 min and the performance index value was $I^*_{PSS} = -581.954264313551$. Thus, the new TFO algorithm obtains solutions much faster and with sufficient accuracy.
To solve this problem, the TFO algorithm was used with the parameters shown in Table 1. Figure 3 shows the obtained pair $(x(\cdot), u(\cdot))$.
In order to analyze the effect of the algorithm parameters on the result, the same problem was solved with another set of algorithm parameters, presented in Table 2. Figure 4 shows the obtained pair $(x(\cdot), u(\cdot))$.
The approximate value of the performance index obtained by the TFO algorithm was $I^* = -579.311903151533$, with a runtime of 14.92 s. With these parameters the resulting value of the performance index was worse, although the change in relative error was not large. This example shows the importance of parameter selection, which is a rather difficult task: a slight change in the parameters can significantly affect the result of solving the problem.
To demonstrate the growth of the runtime, the search for the optimal control and trajectory was repeated with the algorithm parameters from Table 1, the only difference being an increase in the number of stages to $N = 80$; the parameters of the system remain unchanged.
Figure 5 shows the obtained pair (x(·), u(·)).
The exact value of the performance index was min I = 10237.001072927329; the obtained value was I* = 10174.6684936038, with a runtime of 1:58 min.
Example 1 illustrates the possibilities of the method in solving financial optimization problems that take the discounting effect into account.
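All of the examples in this section share the same reduction: the stage controls are stacked into one decision vector, the dynamics are simulated forward, and the performance index becomes a box-constrained objective that a population-based metaheuristic such as TFO can minimize. A minimal sketch of this reduction, using a hypothetical scalar system (not any of the paper's examples):

```python
from typing import Callable, Sequence

def make_objective(step: Callable[[float, float, int], float],
                   terminal_cost: Callable[[float], float],
                   running_cost: Callable[[float, float], float],
                   x0: float, N: int) -> Callable[[Sequence[float]], float]:
    """Flatten an N-stage open-loop control problem into a single
    objective f(u) over the stacked control values."""
    def objective(u: Sequence[float]) -> float:
        x, total = x0, 0.0
        for t in range(N):
            total += running_cost(x, u[t])   # accumulate the running cost
            x = step(x, u[t], t)             # advance the dynamics
        return total + terminal_cost(x)
    return objective

# Hypothetical toy system x(t+1) = x(t) + u(t) with quadratic costs,
# standing in for the concrete examples below:
f = make_objective(step=lambda x, u, t: x + u,
                   terminal_cost=lambda x: x * x,
                   running_cost=lambda x, u: u * u,
                   x0=1.0, N=2)
print(f([-0.5, -0.25]))  # → 0.375
```

The metaheuristic then only ever sees the objective `f` and the box constraints on the control values.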

3.2. Example 2. Luus–Tassone Nonseparable Control Problem

The dynamical system is described by three difference equations:
x1(t + 1) = x1(t)·[1 + 0.01·u1(t)·(3 + u2(t))];
x2(t + 1) = x2(t) + u1(t)·x1(t + 1)·[1 + u1(t)·(1 + u2(t))];
x3(t + 1) = x3(t)·[1 + 0.01·u2(t)·(1 + u3(t))],
where x ∈ R³, t = 0, …, N − 1, and u ∈ R³.
It is required to find a pair of trajectory and control (x*(·), u*(·)) such that the value of the performance index I is minimal:
I = x1²(N) + x2²(N) + x3²(N) + { [ Σ_{k=1..N} (x1²(k − 1) + x2²(k − 1) + 2u3²(k − 1)) ] · [ Σ_{k=1..N} (x3²(k − 1) + 2u1²(k − 1) + 2u2²(k − 1)) ] }^{1/2}.
In this problem, the initial condition x(0) = (2; 5; 7)^T and the constraints on the control are given: 0 ≤ u1(t) ≤ 4, 0 ≤ u2(t) ≤ 4, 0 ≤ u3(t) ≤ 0.5. For this task, the number of stages is set: N = 20.
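As a sketch of how such a nonseparable index can be evaluated by simulation, the following assumes the bracketed reading of the difference equations above (the bracket placement in the second equation is an interpretation of the flattened source):

```python
import math

def luus_tassone_cost(u, x0=(2.0, 5.0, 7.0), N=20):
    """Simulate the three difference equations and evaluate the
    nonseparable index I; the bracketed reading of the second
    equation is an assumption (see text)."""
    x1, x2, x3 = x0
    s1 = s2 = 0.0  # the two sums under the square root
    for t in range(N):
        u1, u2, u3 = u[t]
        s1 += x1**2 + x2**2 + 2.0 * u3**2
        s2 += x3**2 + 2.0 * u1**2 + 2.0 * u2**2
        x1_next = x1 * (1.0 + 0.01 * u1 * (3.0 + u2))
        x2_next = x2 + u1 * x1_next * (1.0 + u1 * (1.0 + u2))
        x3_next = x3 * (1.0 + 0.01 * u2 * (1.0 + u3))
        x1, x2, x3 = x1_next, x2_next, x3_next
    return x1**2 + x2**2 + x3**2 + math.sqrt(s1 * s2)

# With u ≡ 0 the states stay at (2, 5, 7), so I = 78 + sqrt(580 * 980):
print(luus_tassone_cost([[0.0, 0.0, 0.0]] * 20))  # ≈ 831.92
```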
The best-known value of the performance index for this problem was obtained in [1]: min I = 209.26937. The approximate value obtained by the TFO algorithm was I_TFO* = 209.389060601957, and the runtime was 16.47 s. The same problem was solved by the Perch School Search algorithm [10]: the runtime was 49.92 s, and the value of the performance index was I_PSS* = 209.429533683522. The new TFO algorithm thus obtains the solution much faster; however, the obtained solution is still not as accurate as that of the author of the problem [1]. It turned out to be extremely difficult to select the algorithm parameters for this problem.
To solve this problem, the TFO algorithm was used with the parameters shown in Table 3. Figure 6 and Figure 7 show the obtained pair (x(·), u(·)).

3.3. Example 3. Li–Haimes Nonseparable Control Problem

The dynamical system is described by three difference equations:
x(1) = x(0)^{u(0)}; x(2) = (1 + u(1))·x(1); x(3) = x(2) + u(2),
where x ∈ R, t = 0, 1, 2, and u ∈ R.
It is required to find a pair of trajectory and control (x*(·), u*(·)) such that the value of the performance index I is minimal:
I = [ x²(0) + x²(1) + (2x²(2) + x²(3))·exp(x²(1)) ] · [ 50 + u²(0) + (u²(1) + u²(2))·exp(u²(0)) ]^{1/2}.
In this problem, the initial condition is known: x(0) = 15, and the constraints on the control are also given: −1 ≤ u(t) ≤ 1, t = 0, 1, 2. For this task, the number of stages is set: N = 3. The best-known value of the performance index obtained for this nonseparable control problem was obtained in [1]: min I = 1596.4796778.
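The index can be checked directly against the reported solution; the sketch below assumes the first stage map is x(1) = x(0)^{u(0)}, a reconstruction that reproduces the trajectory in Table 5:

```python
import math

def li_haimes_cost(u, x0=15.0):
    """Evaluate the nonseparable Li-Haimes index for u = (u(0), u(1), u(2)).
    The power-law first step x(1) = x(0)**u(0) is a reconstruction
    that reproduces the trajectory reported in Table 5."""
    x = [x0, 0.0, 0.0, 0.0]
    x[1] = x[0] ** u[0]
    x[2] = (1.0 + u[1]) * x[1]
    x[3] = x[2] + u[2]
    a = x[0]**2 + x[1]**2 + (2.0 * x[2]**2 + x[3]**2) * math.exp(x[1]**2)
    b = 50.0 + u[0]**2 + (u[1]**2 + u[2]**2) * math.exp(u[0]**2)
    return a * math.sqrt(b)

# The near-optimal control reported in Table 5:
print(li_haimes_cost((-0.42716, -0.09897, -0.08238)))  # ≈ 1596.48
```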
The approximate value of the performance index obtained by the TFO algorithm was I_TFO* = 1596.47967783381, and the runtime was 0.13 s. The same problem was solved by the Perch School Search algorithm [10]: the runtime was 0.19 s, and the value of the performance index was I_PSS* = 1596.47967783389. The new TFO algorithm obtains almost the same solution much faster.
To solve this problem, the TFO algorithm was used with the parameters shown in Table 4. Figure 8 shows the obtained pair (x(·), u(·)). The values of the obtained control u(·) and trajectory x(·) are shown in Table 5.
Examples 2 and 3 illustrate the possibilities of solving complex optimal control problems with nonseparable performance indexes, for which obtaining solutions using known optimality conditions is extremely difficult.

3.4. Example 4. The Two-Dimensional Lagrange Optimal Control Problem with an Exact Solution

The dynamical system is described by two difference equations:
x1(t + 1) = x1(t) + u(t); x2(t + 1) = 2x1(t) + x2(t),
where x ∈ R², t = 0, 1, and u ∈ R.
It is required to find a pair of trajectory and control (x*(·), u*(·)) such that the value of the performance index I is minimal:
I = Σ_{t=0..1} [ x1²(t) + x2²(t) + u²(t) ].
In this problem, the initial condition x(0) = (2; 1)^T and the constraints on the control are given: −10⁵ ≤ u ≤ 10⁵; the number of stages is set: N = 2. The exact value of the performance index is min I = 32. For this control problem, the analytical solution can be found: u*(·) = {−1; 0}; x1*(·) = {2; 1; 1}; x2*(·) = {1; 5; 7}.
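The analytical optimum is easy to verify by direct simulation of the linear system; a short sketch:

```python
def lagrange_cost(u, x0=(2.0, 1.0)):
    """I = sum over t = 0, 1 of [x1^2(t) + x2^2(t) + u^2(t)] for the
    linear system x1(t+1) = x1(t) + u(t), x2(t+1) = 2 x1(t) + x2(t)."""
    x1, x2 = x0
    total = 0.0
    for t in range(2):
        total += x1**2 + x2**2 + u[t]**2
        x1, x2 = x1 + u[t], 2.0 * x1 + x2  # both updates use the old x1
    return total

print(lagrange_cost((-1.0, 0.0)))  # → 32.0, the analytical minimum
```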
The approximate value of the performance index obtained by the TFO algorithm was I_TFO* = 32.0000000000001, and the runtime was less than 0.01 s. The relative error of the obtained value of the performance index was 3.1 × 10⁻¹⁵. The same problem was solved by the Perch School Search algorithm [10]: the runtime was 1.58 s, and the value of the performance index was I_PSS* = 32. The new TFO algorithm obtains solutions much faster and with almost the same accuracy.
To solve this problem, the TFO algorithm was used with the parameters shown in Table 6. Figure 9 shows the obtained pair (x(·), u(·)).

3.5. Example 5. The Two-Dimensional Meyer Optimal Control Problem with an Exact Solution

The dynamical system is described by two difference equations:
x1(t + 1) = x1(t) + u(t); x2(t + 1) = 2x1(t) + x2(t),
where x ∈ R², t = 0, 1, and u ∈ R.
It is required to find a pair of trajectory and control (x*(·), u*(·)) such that the value of the performance index I is minimal:
I = x1²(2) + x2²(2).
In this problem, the initial condition vector is known: x(0) = (2; −3)^T, and the constraints on the control are also given: −1 ≤ u ≤ 1. For this task, the number of stages is set: N = 2; the exact value of the performance index is min I = 5.
The approximate value of the performance index obtained by the TFO algorithm was I_TFO* = 5, and the runtime was less than 0.01 s. The same problem was solved by the Perch School Search algorithm [10]: the runtime was 0.25 s, and the value of the performance index was I_PSS* = 5.00000000000002. The new TFO algorithm obtains the solution much faster and with almost the same accuracy.
To solve this problem, the TFO algorithm was used with the parameters shown in Table 7. Figure 10 shows the obtained pair (x(·), u(·)). The values of the obtained control and trajectory components are shown in Table 8.
Examples 4 and 5 illustrate the possibility of solving the problem of finding the optimal program control for classical control problems for linear discrete deterministic dynamical systems with a quadratic performance index.

3.6. Example 6. The Two-Dimensional Bolza Optimal Control Problem with an Exact Solution

The dynamical system is described by two difference equations:
x1(t + 1) = x2(t); x2(t + 1) = 2x2(t) − x1(t) + (1/N²)·u(t),
where x ∈ R², t = 0, 1, …, N − 1, and u ∈ R.
It is required to find a pair of trajectory and control (x*(·), u*(·)) such that the value of the performance index I is minimal:
I = −x1(N) + (1/(2N)) Σ_{t=0..N−1} u²(t).
In this case, it is possible to solve this problem analytically:
u*(t) = (N − t − 1)/N; min I = −1/3 + (3N − 1)/(6N²) + (1/(2N³)) Σ_{t=0..N−1} t².
In this problem, the initial condition vector is known: x(0) = (0; 0)^T, and the constraints on the control are also given: 0 ≤ u ≤ 100. For this task, the number of stages is set: N = 10; the exact value of the performance index is min I = −0.1425.
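The analytical solution can be checked by simulating the system under u*(t) = (N − t − 1)/N. The sketch below assumes the cost carries a minus sign on the terminal term, I = −x1(N) + (1/(2N))Σu²(t) (minus signs are easily lost in the source rendering); under that convention the simulated value matches min I = −0.1425 for N = 10:

```python
def bolza_cost(u, N):
    """Simulate x1(t+1) = x2(t), x2(t+1) = 2 x2(t) - x1(t) + u(t)/N^2
    from x(0) = (0, 0) and return I = -x1(N) + (1/(2N)) * sum of u^2(t)
    (sign convention assumed; see text)."""
    x1 = x2 = 0.0
    for t in range(N):
        x1, x2 = x2, 2.0 * x2 - x1 + u[t] / N**2  # simultaneous update
    return -x1 + sum(v * v for v in u) / (2.0 * N)

N = 10
u_star = [(N - t - 1) / N for t in range(N)]  # the analytical control
print(bolza_cost(u_star, N))  # ≈ -0.1425
```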
When solving this problem, the approximate value of the performance index obtained by the TFO algorithm was I_TFO* = −0.142499964879453, and the runtime was 0.13 s. The relative error of the obtained value of the performance index was 2.5 × 10⁻⁷. The same problem was solved by the Perch School Search algorithm [10]: the runtime was 7.39 s, and the value of the performance index was I_PSS* = −0.142499999999796. The new algorithm makes it possible to obtain solutions much faster and with almost the same accuracy.
To solve this problem, the TFO algorithm was used with the parameters shown in Table 9. Figure 11 shows the obtained pair (x(·), u(·)). Figure 12 shows the deviations of the approximate control values from the exact ones at different moments of the dynamic system operation, i.e., Δu(t) = |u(t) − u*(t)|.

3.7. Example 7. The Two-Dimensional Meyer Optimal Control Problem with an Exact Solution

The dynamical system is described by two difference equations:
x1(t + 1) = x1(t) + 2u(t); x2(t + 1) = −x1²(t) + x2(t) + u²(t),
where x ∈ R², t = 0, 1, and u ∈ R.
It is required to find a pair of trajectory and control (x*(·), u*(·)) such that the value of the performance index I is minimal:
I = −x2(2).
In this problem, the initial condition vector is known: x(0) = (3; 0)^T, and the constraints on the control are also given: −5 ≤ u ≤ 5. For this task, the number of stages is set: N = 2; the exact value of the performance index is min I = −19.
In this example, the convexity condition is not satisfied, which means that the discrete maximum principle cannot be applied, while the more general necessary optimality conditions are satisfied.
This problem has two solutions: x1¹*(·) = {3; −1; 9}, x2¹*(·) = {0; −5; 19}, u¹*(·) = {−2; 5} and x1²*(·) = {3; −1; −11}, x2²*(·) = {0; −5; 19}, u²*(·) = {−2; −5}; the TFO algorithm successfully finds both solutions.
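Both solutions can be checked by rolling out the difference equations; the sketch assumes the reconstructed state equation x2(t+1) = −x1²(t) + x2(t) + u²(t), which reproduces Tables 11 and 12 and drives both controls to x2(2) = 19:

```python
def simulate_example7(u, x0=(3.0, 0.0)):
    """Roll out x1(t+1) = x1(t) + 2 u(t),
    x2(t+1) = -x1(t)**2 + x2(t) + u(t)**2 (reconstructed; see text)
    and return the final state (x1(2), x2(2))."""
    x1, x2 = x0
    for v in u:
        x1, x2 = x1 + 2.0 * v, -x1**2 + x2 + v**2  # both use the old x1
    return x1, x2

# The two exact optimal controls; both reach x2(2) = 19:
print(simulate_example7((-2.0, 5.0)))   # → (9.0, 19.0)
print(simulate_example7((-2.0, -5.0)))  # → (-11.0, 19.0)
```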
When solving this problem, the approximate values of the performance index obtained by the TFO algorithm for the two solutions were I_TFO1* = −18.999999999918 and I_TFO2* = −18.9999999991551 (the runtime was less than 0.01 s for each). The relative errors of the obtained values of the performance index were 4.3 × 10⁻¹² and 4.4 × 10⁻¹¹, respectively. The same problem was solved by the Perch School Search algorithm [10]: the runtimes were 0.30 s and 0.31 s, and the values of the performance index were I_PSS1* = −18.9999999999354 and I_PSS2* = −18.9999999999939, respectively. The TFO algorithm makes it possible to obtain solutions much faster and with almost the same accuracy.
To solve this problem, the TFO algorithm was used with the parameters shown in Table 10. Figure 13 and Figure 14 show the obtained pairs (x(·), u(·)) for both solutions. The values of the obtained control and trajectory components are shown in Table 11 and Table 12.

4. Conclusions

In this paper, a new bio-inspired metaheuristic optimization algorithm named TFO is proposed for finding solutions to open-loop control problems for discrete deterministic dynamical systems.
This algorithm successfully inherits a number of new ideas and known techniques from several algorithms: for example, the numerical solution of stochastic differential equations with jumps by the Euler–Maruyama method to describe the movement of tomtits when foraging, compression and recovery of the search area, and the exchange of information between members of the flock. These ideas allow the algorithm to solve applied optimization problems in a short runtime, as shown by the examples of finding optimal controls and trajectories of discrete dynamical systems. The solutions obtained in most cases are extremely close to, or coincide with, the analytical solutions when they are known. Compared to the previously developed PSS algorithm, this algorithm is more efficient at finding the optimal control and trajectory, as shown by a thorough comparison of the running times and relative errors.
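For illustration, one Euler–Maruyama step of a generic jump-diffusion can be sketched as follows; the drift, diffusion, and jump parameters here are placeholders, not the exact TFO movement rule:

```python
import random

def euler_maruyama_jump_step(x, dt, drift, sigma, jump_rate, jump_scale,
                             rng=random):
    """One Euler-Maruyama step of dX = drift(X) dt + sigma dW + dJ, with J a
    compound-Poisson jump term. A generic sketch, not the exact TFO rule."""
    dw = rng.gauss(0.0, dt ** 0.5)          # Brownian increment over dt
    x_new = x + drift(x) * dt + sigma * dw  # drift + diffusion part
    if rng.random() < jump_rate * dt:       # a jump occurs with prob ~ rate*dt
        x_new += rng.gauss(0.0, jump_scale) # jump magnitude
    return x_new

# With sigma = 0 and jump_rate = 0 the step reduces to a plain Euler step.
print(euler_maruyama_jump_step(1.0, 0.1, lambda x: -x, 0.0, 0.0, 1.0))
```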
In the future, we plan to improve this algorithm, or to create new algorithms based on TFO, in order to obtain more accurate but no less fast methods.
Further development of this work may include applying the developed metaheuristic optimization algorithm to applied problems of optimal control of aircraft of various types, to optimal control problems with full and incomplete feedback on the measured variables, and to problems with uncertainty in the initial state vector.

Author Contributions

Conceptualization, A.V.P.; methodology, A.V.P. and A.A.K.; software, A.A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The source code is available at https://github.com/AeraConscientia/N-dimensionalTomtitOptimizer (accessed on 30 May 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Luus, R. Iterative Dynamic Programming, 1st ed.; Chapman & Hall/CRC: London, UK, 2000.
2. Yang, X.S.; Chien, S.F.; Ting, T.O. Bio-Inspired Computation and Optimization, 1st ed.; Morgan Kaufmann: New York, NY, USA, 2015.
3. Goldberg, D. Genetic Algorithms in Search, Optimization and Machine Learning, 1st ed.; Addison-Wesley: Boston, MA, USA, 1989.
4. Michalewicz, Z.; Fogel, D. How to Solve It: Modern Heuristics, 2nd ed.; Springer: New York, NY, USA, 2004.
5. Panteleev, A.V.; Lobanov, A.V. Application of mini-batch metaheuristic algorithms in problems of optimization of deterministic systems with incomplete information about the state vector. Algorithms 2021, 14, 332.
6. Roni, H.K.; Rana, M.S.; Pota, H.R.; Hasan, M.; Hussain, S. Recent trends in bio-inspired meta-heuristic optimization techniques in control applications for electrical systems: A review. Int. J. Dyn. Control 2022, 10, 999–1011.
7. Floudas, C.A.; Pardalos, P.M.; Adjiman, C.S.; Esposito, W.R.; Gümüs, Z.H.; Harding, S.T.; Klepeis, J.L.; Meyer, C.A.; Schweiger, C.A. Handbook of Test Problems in Local and Global Optimization, 1st ed.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999.
8. Roman, R.C.; Precup, R.E.; Petriu, E.M. Hybrid data-driven fuzzy active disturbance rejection control for tower crane systems. Eur. J. Control 2021, 58, 373–387.
9. Chi, R.; Li, H.; Shen, D.; Hou, Z.; Huang, B. Enhanced P-type control: Indirect adaptive learning from set-point updates. IEEE Trans. Autom. Control 2022.
10. Panteleev, A.V.; Kolessa, A.A. Optimal open-loop control of discrete deterministic systems by application of the perch school metaheuristic optimization algorithm. Algorithms 2022, 15, 157.
11. Sergeyev, Y.D.; Kvasov, D.E.; Mukhametzhanov, M.S. On the efficiency of nature-inspired metaheuristics in expensive global optimization with limited budget. Sci. Rep. 2018, 8, 453.
12. Sergeyev, Y.D.; Kvasov, D.E. Deterministic Global Optimization: An Introduction to the Diagonal Approach, 1st ed.; Springer: New York, NY, USA, 2017.
13. Pinter, J.D. Global Optimization in Action (Continuous and Lipschitz Optimization: Algorithms, Implementations and Applications), 1st ed.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1996.
14. Chambers, D.L. Practical Handbook of Genetic Algorithms, Applications, 2nd ed.; Chapman & Hall/CRC: London, UK, 2001.
15. Floudas, C.; Pardalos, P. Encyclopedia of Optimization, 2nd ed.; Springer: New York, NY, USA, 2009.
16. Gendreau, M. Handbook of Metaheuristics, 2nd ed.; Springer: New York, NY, USA, 2010.
17. Glover, F.W.; Kochenberger, G.A. (Eds.) Handbook of Metaheuristics; Kluwer Academic Publishers: Boston, MA, USA, 2003.
18. Neri, F.; Cotta, C.; Moscato, P. Handbook of Memetic Algorithms, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2012.
19. Chattopadhyay, S.; Marik, A.; Pramanik, R. A brief overview of physics-inspired metaheuristic optimization techniques. arXiv 2022, arXiv:2201.12810v1.
20. Beheshti, Z.; Shamsuddin, S.M. A review of population-based meta-heuristic algorithms. Int. J. Adv. Soft Comput. Appl. 2013, 5, 1–35.
21. Locatelli, M.; Schoen, F. (Global) optimization: Historical notes and recent developments. EURO J. Comput. Optim. 2021, 9, 100012.
22. Dragoi, E.N.; Dafinescu, V. Review of metaheuristics inspired from the animal kingdom. Mathematics 2021, 9, 2335.
23. Brownlee, J. Clever Algorithms: Nature-Inspired Programming Recipes, 1st ed.; LuLu.com: Raleigh, NC, USA, 2011.
24. Tzanetos, A.; Fister, I.; Dounias, G. A comprehensive database of nature-inspired algorithms. Data Brief 2020, 31, 105792.
25. Fister, I., Jr.; Yang, X.-S.; Fister, I.; Brest, J.; Fister, D. A brief review of nature-inspired algorithms for optimization. arXiv 2013, arXiv:1307.4186.
26. Yang, X.S. Nature-Inspired Metaheuristic Algorithms, 2nd ed.; Luniver Press: Frome, UK, 2010.
27. Panteleev, A.V.; Belyakov, I.A.; Kolessa, A.A. Comparative analysis of optimization strategies by software complex "Metaheuristic nature-inspired methods of global optimization". J. Phys. Conf. Ser. 2022, 2308, 012002.
28. Del Ser, J.; Osaba, E.; Molina, D.; Yang, X.-S.; Salcedo-Sanz, S.; Camacho, D.; Das, S.; Suganthan, P.N.; Coello Coello, C.A.; Herrera, F. Bio-inspired computation: Where we stand and what's next. Swarm Evol. Comput. 2019, 48, 220–250.
29. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the World Congress on Nature and Biologically Inspired Computing, Coimbatore, India, 9–11 December 2009; pp. 210–214.
30. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
31. Averina, T.A.; Rybakov, K.A. Maximum cross section method in optimal filtering of jump-diffusion random processes. In Proceedings of the 15th International Asian School-Seminar Optimization Problems of Complex Systems, Novosibirsk, Russia, 26–30 August 2019; pp. 8–11.
Figure 1. Movement of tomtits: (a) stochastic movement of tomtits, choosing the best solution and adding it into the memory matrix; (b) new position of the leader and flight results of other tomtits.
Figure 2. A block diagram of the proposed TFO algorithm.
Figure 3. Graphical illustration of the obtained pair (x(·), u(·)) in Example 1.
Figure 4. Graphical illustration of the obtained pair (x(·), u(·)) in Example 1.
Figure 5. Graphical illustration of the obtained pair (x(·), u(·)) in Example 1.
Figure 6. Graphical illustration of the obtained u(·) in Example 2.
Figure 7. Graphical illustration of the obtained x(·) in Example 2.
Figure 8. Graphical illustration of the obtained pair (x(·), u(·)) in Example 3.
Figure 9. Graphical illustration of the obtained pair (x(·), u(·)) in Example 4.
Figure 10. Graphical illustration of the obtained pair (x(·), u(·)) in Example 5.
Figure 11. Graphical illustration of the obtained pair (x(·), u(·)) in Example 6.
Figure 12. Deviations of approximate control values from the exact ones.
Figure 13. Graphical illustration of the obtained pair (x(·), u(·)) for the first solution in Example 7.
Figure 14. Graphical illustration of the obtained pair (x(·), u(·)) for the second solution in Example 7.
Table 1. The set of parameters of the TFO algorithm in Example 1.
NP | γ | η | ρ | c1 | c2 | c3 | K | h | L | P | μ | ε | λ | α
10000.10.61000303030500.01416030 1 × 10 9 1.30.1
Table 2. The set of parameters of the TFO algorithm in Example 1.
NP | γ | η | ρ | c1 | c2 | c3 | K | h | L | P | μ | ε | λ | α
10000.30.61000303030100.0122030 1 × 10 9 1.30.1
Table 3. The set of parameters of the TFO algorithm in Example 2.
NP | γ | η | ρ | c1 | c2 | c3 | K | h | L | P | μ | ε | λ | α
3000.30.9220.53100.05101001 1 × 10 9 1.10.0001
Table 4. The set of parameters of the TFO algorithm in Example 3.
NP | γ | η | ρ | c1 | c2 | c3 | K | h | L | P | μ | ε | λ | α
1000.10.15333100.151002 1 × 10 9 1.50.001
Table 5. The values of the obtained control and trajectory.
t | u | x
0 | −0.42716 | 15
1 | −0.09897 | 0.31450
2 | −0.08238 | 0.28337
3 | — | 0.20099
Table 6. The set of parameters of the TFO algorithm in Example 4.
NP | γ | η | ρ | c1 | c2 | c3 | K | h | L | P | μ | ε | λ | α
700.10.110023250.16102 1 × 10 9 1.50.001
Table 7. The set of parameters of the TFO algorithm in Example 5.
NP | γ | η | ρ | c1 | c2 | c3 | K | h | L | P | μ | ε | λ | α
300.810.6211150.16102 1 × 10 9 1.20.2
Table 8. The values of the obtained control and trajectory.
t | u | x1 | x2
0 | 1 | 2 | −3
1 | −1 | −3 | 1
2 | — | 1 | −2
Table 9. The set of parameters of the TFO algorithm in Example 6.
NP | γ | η | ρ | c1 | c2 | c3 | K | h | L | P | μ | ε | λ | α
1200.90.240101010200.14203 1 × 10 9 1.70.1
Table 10. The set of parameters of the TFO algorithm in Example 7.
NP | γ | η | ρ | c1 | c2 | c3 | K | h | L | P | μ | ε | λ | α
300.810.432.52.72.8200.14302.5 1 × 10 9 20.01
Table 11. The values of the obtained control and trajectory components for the first solution.
t | u | x1 | x2
0 | −2.00001 | 3 | 0
1 | 5 | −1.00001 | −4.99998
2 | — | 8.99999 | 19
Table 12. The values of the obtained control and trajectory components for the second solution.
t | u | x1 | x2
0 | −1.99998 | 3 | 0
1 | −5 | −0.99997 | −5.00007
2 | — | −10.99997 | 19
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
