Application of Heuristic Algorithms in the Tomography Problem for Pre-Mining Anomaly Detection in Coal Seams

The paper presents research on an approach to computed tomography with an incomplete data set. Incomplete information is quite common, for example when examining objects that are large or difficult to access. Algorithms devoted to this type of problem can be used to detect anomalies in coal seams that pose a threat to the lives of miners. The most dangerous example of such an anomaly is a pocket of compressed gas, which expands rapidly during exploitation, ejecting rock fragments that are a real threat to the working crew. The approach presented in the paper improves on a previous idea, in which the detected objects were represented by sequences of points describing rectangles, each characterized by a sequence of its parameters. This time, the representation uses sets of objects instead of sequences, which allows for the elimination of duplicates. As a result, the reconstruction is faster. The algorithm presented in the paper solves the inverse problem of finding the minimum of an objective function, a type of task for which heuristic algorithms are well suited. The following heuristic algorithms are described, tested and compared: Aquila Optimizer (AO), Firefly Algorithm (FA), Whale Optimization Algorithm (WOA), Butterfly Optimization Algorithm (BOA) and Dynamic Butterfly Optimization Algorithm (DBOA). The research showed that the best algorithm for this type of problem is DBOA.


Introduction
Computed tomography can be applied wherever there is a need to examine the interior of an object without disturbing its structure. Medicine is a classic example of the use of computed tomography. In this field, the developed algorithms are so reliable and efficient that reconstructing the internal structure of the examined patient at sufficient resolution in a short time is not a big challenge for modern computer tomographs. A large part of the papers published in non-medical journals concerns modifying the algorithms so that reducing the radiation dose received by the patient does not deteriorate the quality of the reconstruction [1].
In this paper, we deal with a non-medical problem related to the issue of an incomplete data set. Such situations are quite frequent; an example is the study of coal seams. The natural process of coal seam formation may result in excesses of other rocks, undesirable for economic reasons, or pockets of compressed gas hazardous to the health and life of the working crew. Due to the limited accessibility of the coal seam, the obtained projections are "one-sided" and thus carry much less information about the examined object. The research carried out so far shows that, in such a case, "quality" can be made up for with quantity. However, this approach has a disadvantage: many more calculations are needed, and thus the time needed to obtain satisfactory accuracy is significantly extended. A series of works devoted to eliminating this drawback has brought results, but it seems that the efficiency of reconstructive computed tomography algorithms for an incomplete data set can be increased significantly further, which is the goal of this study.
In this paper, we focus on improving one of these approaches [4], where the detected objects are represented by sequences of rectangles characterized by sequences of their parameters. This time, the representation uses sets instead of sequences, which allows for the elimination of duplicate representations. The considered algorithms were adapted to the task in such a way that they do not operate on a sequence of arguments but on sets of five-tuples, where the first pair in the tuple represents the lower left corner of the rectangle, the second pair the upper right corner, and the fifth element is the sought density. As a result, the reconstruction is faster. Additionally, other heuristic algorithms, namely the Aquila Optimizer (AO), Firefly Algorithm (FA), Whale Optimization Algorithm (WOA) and Dynamic Butterfly Optimization Algorithm (DBOA), are tested for usability.
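The effect of switching from sequences to sets can be sketched in a few lines of Python. The tuple layout (x1, y1, x2, y2, density) follows the five-tuple described above; the helper name and the corner normalization are illustrative, not the authors' exact implementation.

```python
# Sketch: representing detected objects as a set of 5-tuples
# (x1, y1, x2, y2, density). Normalizing the corner order makes two
# descriptions of the same rectangle identical, so a set removes duplicates.
def make_rect(x1, y1, x2, y2, density):
    """Normalize corner order so equal rectangles hash identically."""
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2), density)

# A sequence-based representation keeps duplicates and is order-sensitive:
seq = [make_rect(0, 0, 2, 3, 1.5), make_rect(2, 3, 0, 0, 1.5)]

# A set collapses duplicates regardless of vertex order:
rects = set(seq)
assert len(rects) == 1
```

With the ambiguity of vertex order removed, a heuristic algorithm no longer wastes evaluations on candidate solutions that differ only in representation.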

Ideas of Computed Tomography
Computed tomography can be used wherever there is a need for a non-invasive examination of the interior of an object. Such an examination consists of irradiating the examined object with penetrating radiation (X-rays are the most common, but not the only, method) and analyzing the energy loss after the radiation passes through the object. Such an analysis is possible because each type of matter has its own individual coefficient (capacity) of absorbing the energy of penetrating radiation, different from that of other types of matter. This coefficient is often given in Hounsfield units [HU]. Example values of this coefficient for X-ray radiation can be found in [3].
The mathematical description of the measure of such a loss for one ray L (projection) is given by the formula

p_L = \ln\frac{I_0}{I} = \int_L f \, \mathrm{d}l,    (1)

where p_L is the projection obtained on the path L of the given ray, I_0 is the initial radiation intensity, I is the final intensity (after passing along the path L of the ray through the tested object), and the function f is the density distribution of the tested object. Obviously, the information obtained from one ray does not allow for reconstructing the interior of the examined object along the path of this ray. Sufficiently many such X-rays should be taken, not only in terms of quantity but also in terms of quality, understood as the number of scanning angles. Early studies of computed tomography algorithms showed that exact reconstruction requires infinitely many scanning angles and infinitely many rays; in practice, this is impossible. An important result was the Kotelnikov theorem [10], which says how many X-rays (scans) should be performed to obtain a reconstruction of satisfactory quality for a finite number of scan angles and a finite number of rays per angle. After the X-rays are performed, on the basis of the information obtained in this way, it is possible to proceed to the process of reconstructing the internal structure of the X-rayed object. Reconstructive algorithms are responsible for this process. These algorithms can be divided into two main groups: analytical algorithms [11][12][13] and algebraic algorithms [14][15][16].
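Formula (1) can be checked numerically. The sketch below computes a single projection value from assumed initial and final intensities via the logarithmic ratio; the numbers are made up for illustration.

```python
import math

# Sketch: a single projection p_L = ln(I0 / I) per formula (1),
# computed from illustrative beam intensities.
def projection(i0, i):
    """Projection value for initial intensity i0 and final intensity i."""
    return math.log(i0 / i)

# If the beam loses half its intensity along L, the projection is ln 2:
p = projection(100.0, 50.0)
assert abs(p - math.log(2)) < 1e-12
```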
In the case of an incomplete data set, we obtain information (projections) of much worse quality, mainly due to the range of scanning angles. Such projections do not satisfy the assumptions of Kotelnikov's theorem, so there is no guarantee that it would be possible to obtain a reconstruction of satisfactory quality. The research carried out by the authors showed that analytical algorithms cannot solve this type of problem (in terms of an incomplete data set), while algebraic algorithms, with appropriate adaptation to the problem of an incomplete data set, allow for obtaining such a reconstruction.

Methodology
The ideas of analytical and algebraic algorithms arose long before the construction of the first computed tomograph. In 1937, Stefan Kaczmarz presented an iterative algorithm for solving a system of linear equations and proved its convergence [17] (assuming that the square-matrix system of linear equations has exactly one solution). The algorithm was rediscovered in the 1970s by Gordon, Bender and Herman as a useful tool for the problem of image reconstruction (after the convergence of Kaczmarz's algorithm was also proved in [15]). Figure 3 shows a graphic interpretation of Kaczmarz's algorithm. Before we discuss the presented algorithm, let us first define the way in which the system of linear equations to be solved arises. In algebraic algorithms, we initially assume that the examined object is inscribed in a square (which is always possible), and then we discretize this square by dividing it into n × n = n² smaller congruent squares (pixels). It is also assumed that this discretization is so dense (the number n is large enough) that the function f describing the density distribution of the examined object can be taken as constant on each pixel. As a consequence, the energy loss p_i of the i-th ray can be presented as the sum of the energy losses along the path of this ray during its passage through the tested object. Energy is lost only on those pixels that lie in the path of the ray. The energy loss on such a pixel is directly proportional to the length of the intersection of the ray with the pixel (knowing the size of the square in which the tested object is inscribed, the discretization density and the equation of the line along which the ray runs, we can determine this length) and to the unknown energy absorption capacity of this pixel (the constant value of the function f on this pixel).
By carrying out a number of such X-rays, each of them yields an equation whose left side is the sum of the above-mentioned energy losses and whose right side is the total loss. The value of the function f on each pixel is an unknown. As a result, we get the system of equations

AX = B,    (2)

where A is a matrix of dimension r × n² (r is the number of X-rays), B is the matrix (vector) of dimension r × 1 of the energy losses of the rays, and X is the matrix (vector) of dimension n² × 1 of unknowns.
Because the matrix A has specific properties (it is rectangular, sparse, asymmetric and of large dimension), classical algorithms are not useful in this case. However, the Kaczmarz algorithm, or algorithms based on it, can be successfully used, e.g., the best-known ART algorithm, in which, after selecting any initial solution x^0 ∈ R^{n²}, subsequent approximations of the exact solution are determined by the formula

x^{k+1} = x^k + \lambda_k \frac{p_i - \langle a_i, x^k \rangle}{\|a_i\|^2} \, a_i,    (3)

where k ∈ N_0 is the solution index, λ_k is the relaxation coefficient (for λ_k = λ = 1, we obtain the classic Kaczmarz algorithm), p_i is the i-th projection (i-th element of vector B), a_i is the i-th row of the matrix A, ⟨·, ·⟩ denotes the classical dot product, and ‖·‖ is the norm of a vector. Trummer showed [18] that, for 0 < λ_k = λ < 2, i.e., for a constant value of the λ parameter, or for 0 < lim inf λ_k ≤ lim sup λ_k < 2, i.e., for a variable value of the λ parameter, the ART algorithm is convergent.
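The ART iteration (3) can be sketched in plain Python: one sweep projects the current estimate onto each hyperplane ⟨a_i, x⟩ = p_i in turn. The toy 2×2 system below stands in for the tomographic system (2); function and variable names are illustrative.

```python
# Sketch of the ART / Kaczmarz iteration (3): sweep cyclically over the
# rows a_i of A, moving the estimate toward each hyperplane <a_i, x> = p_i
# with relaxation coefficient lam (lam = 1 gives the classic Kaczmarz step).
def art(A, b, lam=1.0, sweeps=50):
    n = len(A[0])
    x = [0.0] * n                     # initial solution x^0 = 0
    for _ in range(sweeps):
        for a_i, p_i in zip(A, b):
            dot = sum(a * xi for a, xi in zip(a_i, x))
            norm2 = sum(a * a for a in a_i)
            coef = lam * (p_i - dot) / norm2
            x = [xi + coef * a for xi, a in zip(x, a_i)]
    return x

A = [[1.0, 1.0], [1.0, -1.0]]
b = [3.0, 1.0]                        # consistent system, solution (2, 1)
x = art(A, b)
assert abs(x[0] - 2.0) < 1e-9 and abs(x[1] - 1.0) < 1e-9
```

Because the two toy rows happen to be orthogonal, a single sweep already lands on the exact solution; for a real, sparse tomographic matrix many sweeps are needed.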
In Ref. [4], the authors approached the issue of reconstructing the interior of the examined object in an innovative way. In this paper, we significantly increase the efficiency of this process through a new approach to the previously introduced idea. It was assumed that the detected objects are represented as rectangles. This approach has advantages and is a good mathematical model of the studied phenomenon. Since there are relatively few detectable undesirable objects in a coal seam, we can restrict ourselves to a relatively small number of rectangles representing these objects. The adopted shape is appropriate despite the fact that the actual objects may not be rectangles, since the difference between a rectangle and a "non-rectangle" is negligible (a small stone inclusion is not a significant loss of output quality).
After the X-rays, we obtain the system of linear equations (2), and in the classic approach this system should be solved using relationship (3). However, this step requires a lot of computation and therefore a long reconstruction time. Thus far, the rectangles representing the density distribution have been detected using heuristic algorithms, and we use a similar approach in this paper. The innovation lies in the representation of these rectangles. Previously, the order of the rectangles mattered in the representation, although it is actually irrelevant, and the representation of a single rectangle was ambiguous (in terms of the order of its vertices). In this paper, we eliminate these two inconveniences, thanks to which the time needed to find the rectangles should be significantly shortened.
Such an approach is necessary to make it possible to use heuristic algorithms in a computed tomography task. There are articles (see, e.g., [19][20][21]) in which the authors combine these two methods, but in those approaches heuristic algorithms are used to analyze the image after the system of equations (2) has been solved. The lack of works in which heuristic algorithms perform the reconstruction itself is explained by a feature of these algorithms: they work well when optimizing functions with a relatively small number of arguments. In the standard approach, there would be as many arguments as unknowns in the system of equations (2); even a rather coarse discretization (e.g., n = 20, giving n² = 400 variables) would make the use of a heuristic algorithm impossible. The innovative approach is based on the fact that the unknowns (variables) are the features identifying the rectangles that represent the sought objects in the considered area: the position of the rectangle and the value of the function f in the rectangle. Another advantage of this approach is the insensitivity of the method to the discretization density. If we place m transmitters on one wall of the coal seam and m sensors on the other, and each transmitter sends a beam to each sensor, the system of equations will have m² equations and n² unknowns (increasing the discretization density forces an increase in the number of sources and sensors). In this approach, the complexity of the task does not depend on the discretization density as directly as in the classical approach.
The heuristic algorithm arranges rectangles (along with information about the value of the function f in these rectangles) in the area in such a way as to minimize the value of the function

F(X) = \sum_{i=1}^{m^2} \left( b_i - b_i^* \right)^2,    (4)

where F is the minimized function, B is the projection vector (right side of system (2)), B* is the projection vector created on the basis of the constant matrix A and a given approximation X of the solution, m is the number of sources (and sensors), and b_i and b*_i, i = 1, 2, ..., m², are, respectively, the i-th elements of the vectors B and B*.
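The objective (4) can be sketched as follows: rasterize a candidate set of rectangle five-tuples onto the pixel grid, form B* = AX, and sum the squared residuals. The identity "scanner" matrix, grid size, and helper names are illustrative stand-ins, not the authors' setup.

```python
# Sketch of objective (4): score a candidate rectangle set against the
# measured projections B for a fixed system matrix A.
def rasterize(rects, n):
    """Paint 5-tuples (x1, y1, x2, y2, density) onto an n*n pixel vector."""
    X = [0.0] * (n * n)
    for (x1, y1, x2, y2, density) in rects:
        for row in range(y1, y2):
            for col in range(x1, x2):
                X[row * n + col] = density
    return X

def objective(rects, A, B, n):
    X = rasterize(rects, n)
    b_star = [sum(a * x for a, x in zip(row, X)) for row in A]   # B* = A X
    return sum((b - bs) ** 2 for b, bs in zip(B, b_star))

n = 4
A = [[1.0 if i == j else 0.0 for j in range(n * n)] for i in range(n * n)]
truth = {(1, 1, 3, 3, 2.0)}           # toy "anomaly" to recover
B = [sum(a * x for a, x in zip(row, rasterize(truth, n))) for row in A]
assert objective(truth, A, B, n) == 0.0          # perfect candidate
assert objective({(0, 0, 2, 2, 2.0)}, A, B, n) > 0.0
```

Each heuristic algorithm below only needs this scalar score, so the same objective can be handed to AO, FA, WOA, BOA or DBOA unchanged.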

Heuristic Algorithms
This section presents the heuristic algorithms used in the calculations to minimize the objective function (4) and reconstruct the sought rectangles. Algorithms of this type are widely used in various types of engineering tasks [22][23][24][25][26][27][28][29]. In this paper, we present and compare the following algorithms: Aquila Optimizer (AO), Firefly Algorithm (FA), Whale Optimization Algorithm (WOA), Butterfly Optimization Algorithm (BOA) and Dynamic Butterfly Optimization Algorithm (DBOA). To describe the methods, we use the following notation: N is the population size, T is the total number of iterations, Dim is the dimension of the problem, Fit is the cost (objective) function, and lBound and uBound are the lower and upper bounds of the problem domain.

Aquila Optimizer
Aquila is a genus of birds from the hawk family (Accipitridae) found in Eurasia, North America, Africa and Australia. The way in which these birds hunt their prey was the inspiration for an algorithm searching for the minimum of a function. Aquila predators base their hunting behavior on four main techniques:
• Expanded exploration. The predator soars high in the air with a vertical stoop; this method is used for hunting birds in flight, where the Aquila rises high above the ground. Upon spotting the prey, the Aquila goes into a long, low-angle glide with increasing speed; it needs height above its prey for this method to be effective. Just before the attack, the wings and tail are spread out and the claws bent forward to catch the prey. This process in the algorithm is described by the equation

x^{t+1} = x^t_{best}\left(1 - \frac{t}{T}\right) + \left(x^t_m - x^t_{best}\right) rd,    (5)

where x^{t+1} is the solution in iteration t + 1 and x^t_{best} is the best solution obtained up to iteration t, reflecting the approximate location of the prey (which is the optimization goal). By x^t_m, we denote the mean solution in iteration t, calculated as

x^t_m = \frac{1}{N} \sum_{i=1}^{N} x^t_i.    (6)

The expression 1 − t/T is responsible for the scope of the search: the closer t is to the total number of iterations T, the narrower the scope. The value rd is a random number from the interval [0, 1].
• Narrowed exploration. In this method, when the Aquila is high above the prey, it begins to circle around it and prepares to land and attack. This technique is called short glide contour flight. The Aquila narrowly explores the selected area where the prey is located and prepares to attack. This behavior is mathematically represented by the equation

x^{t+1} = x^t_{best} \cdot Levy_d + x^t_r + \left(r\cos\varphi - r\sin\varphi\right) rd,    (7)

where x^{t+1} is the solution in iteration t + 1, x^t_{best} is the best solution obtained up to iteration t, x^t_r is a random solution from iteration t, and rd is a random number from the range [0, 1]. Levy_d is the value of the Levy flight distribution function.
We compute the values of this function as follows:

Levy_d = s \cdot \frac{u\,\sigma}{|v|^{1/\beta}},    (8)

where s, β are constants, u, v are random numbers from the interval [0, 1], and σ is computed by the formula [30]:

\sigma = \left( \frac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\beta\, 2^{(\beta-1)/2}} \right)^{1/\beta}.    (9)

The expression r cos φ − r sin φ is intended to simulate a spiral flight. The parameters r and φ are calculated as follows:

r = r_1 + V \cdot D_1, \qquad \varphi = -\xi \cdot D_1 + \frac{3\pi}{2},    (10)

where r_1 is a fixed integer from {1, 2, ..., 30}, V, ξ are small constants, and D_1 is an integer from {1, 2, ..., Dim}.
• Expanded exploitation. In this step, the prey is already located and the Aquila launches an initial vertical attack to test the victim's reaction. Here, the Aquila exploits a selected target area to get as close to its prey as possible. We transfer this behavior into the mathematical description of the method with the equation

x^{t+1} = \left(x^t_{best} - x^t_m\right)\alpha - rd + \left((uBound - lBound)\, rd + lBound\right)\delta,    (11)

where x^{t+1} is the solution in the iteration following t, x^t_{best} is the best solution obtained in the iterations so far, and x^t_m is the population mean solution defined by Formula (6). As before, rd is a random number from [0, 1], lBound and uBound define the problem domain (lower and upper bound), and α and δ are exploitation adjustment constant parameters.
• Narrowed exploitation. In this technique, the Aquila has already approached the prey and attacks with stochastic movements, approaching and grabbing the victim. Mathematically, this process is described by the equation

x^{t+1} = QF \cdot x^t_{best} - G_1 \cdot x^t \cdot rd - G_2 \cdot Levy_d + rd \cdot G_1,    (12)

where x^{t+1} is the solution in iteration t + 1, and QF denotes the so-called quality function:

QF = t^{\frac{2\,rd - 1}{(1-T)^2}},    (13)

while G_1 and G_2 are calculated by

G_1 = 2\,rd - 1, \qquad G_2 = 2\left(1 - \frac{t}{T}\right).    (14)

These parameters reflect the way the predator flies and, in the case of the algorithm, allow it to be fine-tuned.
In nature, all of these techniques are mixed when hunting prey; in the algorithm described here, Formulas (5)-(14) form the set of four transformations that make up the AO algorithm. Pseudocode Algorithm 1 shows the steps of the AO algorithm. Details related to the Aquila Optimizer and its applications can be found in [30][31][32].

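The Levy-flight step from Formulas (8)-(9) can be sketched as below. The paper lists s and β only as constants, so the common choices s = 0.01 and β = 1.5 are assumed here, and u, v are drawn as in Mantegna's widely used scheme (also an implementation assumption, since the text does not pin the distributions down).

```python
import math
import random

# Sketch of the Levy-flight step (8)-(9) used by AO. Folding sigma into the
# scale of u is equivalent in distribution to multiplying u by sigma.
def levy(beta=1.5, s=0.01):
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = random.gauss(0.0, sigma)      # heavy-tailed numerator draw
    v = random.gauss(0.0, 1.0)
    return s * u / abs(v) ** (1 / beta)

random.seed(0)
step = levy()
assert isinstance(step, float) and math.isfinite(step)
```

The heavy-tailed steps produce occasional long jumps, which is what lets the narrowed-exploration phase escape local minima of the objective (4).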

Firefly Algorithm
The firefly algorithm is a swarm metaheuristic developed in 2008 by Xin-She Yang [33]. It is inspired by the social behavior of fireflies (insects from the family Lampyridae) and the phenomenon of their bioluminescent communication. The algorithm is intended for constrained continuous optimization problems. It is dedicated to minimizing a cost function Fit, i.e., finding x* such that

Fit(x^*) = \min_{x \in S} Fit(x),    (15)

where S ⊂ R^n. We now present the basic assumptions of the firefly algorithm:
• Assume that there is a swarm of N agents (fireflies) solving the optimization problem iteratively, where x^t_j represents the solution of firefly j in iteration t, and Fit(x^t_j) stands for its cost.
• Each firefly has a distinguishing attractiveness β, which determines how strongly it attracts other members of the swarm.
• As the attractiveness of a firefly, a decreasing function of the distance should be used, e.g., as suggested by Yang, the function

\beta(dist) = \beta_0 \, e^{-\gamma\, dist^2},    (16)

where β_0 ∈ [0, 1] and γ are the algorithm parameters: the maximum attractiveness and the absorption coefficient, respectively. The distance dist between fireflies i and j is commonly taken as ‖x_i − x_j‖_2. Often, the absorption coefficient is set to γ = 1.
• Each swarm member is characterized by its luminosity l_j, which can be directly expressed as the inverse of the value of the cost function Fit(x_j).
• Initially, all fireflies are placed in the search space S (randomly or using some deterministic strategy).
• To effectively explore the solution space, it is assumed that each firefly j changes its position iteratively, taking into account two factors: the attractiveness of other members of the swarm with greater luminosity l_i > l_j, i = 1, ..., N, i ≠ j, which decreases with distance, and a random step u_j.
• For the brightest firefly, only the above-mentioned random step is applied.
• The movement of firefly j toward the brighter firefly i is described by the equation

x_j^{t+1} = x_j^t + \beta_0 \, e^{-\gamma\, dist^2}\left(x_i^t - x_j^t\right) + u_j,    (17)

where the second term corresponds to the attraction, and the third term u_j is a random number from the interval (min u_j, max u_j). The pseudocode of Algorithm 2 describes the firefly algorithm step by step. More about the firefly algorithm can be found in [33,34].

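One sweep of the movement rule (17) can be sketched in a few lines of Python. The population, parameter values and random-step size are illustrative, and for brevity the brightest firefly is simply left in place rather than given its own random step.

```python
import math
import random

# Sketch of one firefly sweep per Eq. (17): firefly j moves toward every
# brighter firefly i with attractiveness beta0 * exp(-gamma * d^2), plus a
# small uniform random step.
def firefly_step(pop, fit, beta0=1.0, gamma=1.0, step=0.01):
    new = [list(x) for x in pop]
    for j, xj in enumerate(pop):
        for i, xi in enumerate(pop):
            if fit(xi) < fit(xj):          # brighter = lower cost
                d2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
                beta = beta0 * math.exp(-gamma * d2)
                for k in range(len(xj)):
                    new[j][k] += beta * (xi[k] - xj[k]) \
                                 + step * (random.random() - 0.5)
    return new

random.seed(0)
pop = [[2.0], [0.5], [-1.5]]
sphere = lambda x: sum(v * v for v in x)   # toy 1-D cost function
for _ in range(50):
    pop = firefly_step(pop, sphere)
best = min(pop, key=sphere)
assert sphere(best) < sphere([2.0])        # swarm improved on the worst start
```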

Whale Optimization Algorithm
The Whale Optimization Algorithm (WOA) discussed in this section is inspired by the hunting behavior of whales. These huge mammals hunt their prey in a specific way called the bubble-net feeding method. It has been observed that this foraging takes place through the formation of characteristic bubbles along a circular or spiral-shaped path. Whale-watching scientists noticed two maneuvers related to the bubbles and called them "upward spirals" and "double loops". In the first method, the predators dive approximately 12 m down, then begin to form a spiral-shaped bubble net around their prey and swim toward the surface. The second hunting technique involves two different steps: the coral loop and the capture loop. It is worth mentioning that the bubble-net feeding method is a unique behavior observed only in whales. In this paper, the whale hunting technique serves as the inspiration for a minimum-cost-function search algorithm. Below, we present the basic principles of the Whale Optimization Algorithm.

• Since the position of the optimal solution in the search space is unknown, the WOA algorithm assumes that the target prey is the current best candidate solution or is close to the optimum. Once the best search agent is defined, the remaining agents try to update their positions toward it. This behavior is represented by the following equations:

d = \left| c \cdot x^t_{best} - x^t \right|,    (18)
x^{t+1} = x^t_{best} - a \cdot d,    (19)

where x^t is a certain solution in the t-th iteration, x^t_{best} is the best solution in iteration t, a, c are coefficient vectors, and the operation · denotes element-wise multiplication. The vectors a, c are calculated by the formulas

a = 2\,v \cdot rd - v, \qquad c = 2 \cdot rd,    (20)

where v is a vector of length Dim whose entries decrease from iteration to iteration; in the case under consideration, we assume v_i = 2(T − t)/T for all i = 1, 2, ..., Dim, where T is the total number of iterations. rd denotes a vector of length Dim of random numbers from the interval [0, 1].
• The mathematical model of the bubble-net behavior is based on the following two assumptions:
- the shrinking encircling mechanism reduces the value of the vector v from iteration to iteration; as a result, the value range of the vector a in Equation (20) also becomes smaller;
- the spiral updating position describes the relation between the whale's and the prey's positions to mimic the spiral-shaped movement of the whale. We describe this process mathematically with the equation

x^{t+1} = \left| x^t_{best} - x^t \right| e^{bl} \cos(2\pi l) + x^t_{best},    (21)

where the constant b and the random number l ∈ [−1, 1] define the shape of the spiral movement.
• The whales move around the prey in a contracting circle and at the same time along a spiral path. To model this behavior, we assume that there is a 50% probability of choosing between the contracting encircling mechanism and the spiral model when updating the positions of the whales during optimization. The mathematical model is described by the equations

x^{t+1} = x^t_{best} - a \cdot d, \quad \text{for } p < 0.5, \qquad x^{t+1} = \left| x^t_{best} - x^t \right| e^{bl} \cos(2\pi l) + x^t_{best}, \quad \text{for } p \geq 0.5,    (22)

where p is a random number from the range [0, 1].
• A similar approach, based on the variability of the range of the vector a, is applied in the exploration phase. Here, we use a with random values greater than 1 or less than −1 to force the agent to move away from the best position; in practice, |a_i| > 1 for all i = 1, 2, ..., Dim. Unlike in the exploitation phase, the agent position is updated according to a randomly chosen solution instead of the best one found so far. Since |a_i| > 1, this mechanism places emphasis on exploration and allows the WOA to perform a global search. This process can be described mathematically with the equation

x^{t+1} = x_{rand} - a \cdot \left| c \cdot x_{rand} - x^t \right|,    (23)

where x_rand is a random agent.
More about the algorithm and its applications can be found in the papers [35,36]. Based on the above rules, we present the pseudocode of the WOA as Algorithm 3.
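The branch structure of the WOA update can be sketched for a single scalar agent as below. The 1-D form, the spiral constant b = 1 and the linearly decreasing v follow the description above; everything else is an illustrative simplification.

```python
import math
import random

# Sketch of one WOA position update: spiral move (p >= 0.5), shrinking
# encircling of the best agent (|a| < 1), or exploration toward a random
# agent (|a| >= 1), per the equations above, in scalar 1-D form.
def woa_update(x, x_best, x_rand, t, T, b=1.0):
    v = 2 * (T - t) / T               # decreases from 2 to 0 over the run
    a = 2 * v * random.random() - v
    c = 2 * random.random()
    p = random.random()
    if p >= 0.5:                      # spiral updating position
        l = random.uniform(-1, 1)
        return abs(x_best - x) * math.exp(b * l) * math.cos(2 * math.pi * l) \
               + x_best
    if abs(a) < 1:                    # shrinking encircling mechanism
        return x_best - a * abs(c * x_best - x)
    return x_rand - a * abs(c * x_rand - x)   # exploration phase

random.seed(1)
vals = [woa_update(0.0, 1.0, -1.0, t, 10) for t in range(5)]
assert all(isinstance(v, float) for v in vals)
```

Early iterations (large v) can still hit |a| ≥ 1 and explore; late iterations shrink v, so the agents collapse toward the best solution found.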

Butterfly Optimization Algorithm
Butterflies use their sense of smell, taste and touch to find food and partners. These senses are also helpful in migrating from one place to another, escaping from a predator, and laying eggs in the right places. Among senses, the smell is the most important one that helps the butterfly find food, usually nectar-even if it is far away. To find a source of nectar, butterflies use sense receptors scattered throughout the butterfly's body (e.g., on foreheads, legs). These receptors are in fact nerve cells on the surface of the butterfly's body and are called chemoreceptors.
Based on scientific observations, it was found that butterflies can sense, and therefore locate, the source of an odor very accurately. In addition, they can distinguish smells and sense their intensity. These abilities and behaviors of butterflies were used to develop the Butterfly Optimization Algorithm (BOA) [37]. A butterfly produces a scent of a certain intensity, which is related to its ability to move from one location to another. The fragrance is spread over a distance, and other butterflies can sense it and thus communicate with each other, creating a collective knowledge network. A butterfly's ability to sense the scent of another butterfly became the basis for the development of an algorithm searching for a global optimum. In the other scenario, when a butterfly cannot sense the smell of its surroundings, it moves randomly; in the proposed algorithm, this phase is referred to as local search.
To understand how fragrance is computed in the BOA, one first needs to understand how a stimulus of some modality (smell, sound, light, temperature, etc.) is processed. The whole concept of sensing and modality processing is based on three parameters: sensory modality (c), stimulus intensity (I) and power exponent (a). In the case of the BOA, the modality is fragrance. I is the stimulus intensity; in the BOA, it is correlated with butterfly fitness. This means that, when a butterfly emits more smell, the remaining butterflies in the environment can sense it and are attracted to the highly scented butterfly. Fragrance f is directly proportional to the power of the intensity I with exponent a:

f = c\, I^a,    (26)

where f is the fragrance, c is the sensory modality, I is the stimulus intensity, and a is the power exponent depending on the modality. In most implementations, a and c can be taken from the interval [0, 1]. The parameter a is a modality-dependent power exponent, which means that it models absorption variability; thus, a controls the behavior of the algorithm. Another key parameter in determining BOA behavior is c. Theoretically, c ∈ [0, ∞), but in practice it lies in the [0, 1] range. The values of a and c have a decisive influence on the convergence speed of the algorithm; their selection is important and depends on the characteristics of the problem under consideration.
The BOA is based on the following rules:
• Each butterfly gives off a fragrance with a different intensity; thanks to the smell, butterflies can communicate.
• A butterfly moves in one of two ways: toward an individual emitting a stronger smell, or at random.
• The global search move is represented by the formula

x^{t+1} = x^t + \left( rd^2 \cdot x^t_{best} - x^t \right) f,    (27)

where x^t denotes the location of the butterfly (agent) in the t-th iteration, x^{t+1} is the transformed location of the butterfly in iteration t + 1, x^t_{best} is the position of the best butterfly in the t-th iteration, f is the fragrance of x^t, and rd is a random number from [0, 1].
• The local search move is described by the equation

x^{t+1} = x^t + \left( rd^2 \cdot x^t_{r_1} - x^t_{r_2} \right) f,    (28)

where x^t_{r_1} and x^t_{r_2} are randomly chosen agents from iteration t.
Information about applications of the BOA can be found in [38,39]. Scheme Algorithm 4 presents the pseudocode of the BOA.
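The two BOA moves can be sketched as follows. The switch probability p = 0.8 and the parameter values c and a are common choices assumed here, not values fixed by the text.

```python
import random

# Sketch of the BOA moves: fragrance f = c * I**a from a fitness-derived
# intensity, a global step toward the best agent, and a local step
# involving two random peers (all per the formulas above).
def fragrance(intensity, c=0.01, a=0.1):
    return c * intensity ** a

def boa_step(x, x_best, x_r1, x_r2, f, p=0.8):
    r = random.random()
    if random.random() < p:            # global search toward the best agent
        return [xi + (r * r * bi - xi) * f for xi, bi in zip(x, x_best)]
    return [xi + (r * r * ai - bi) * f  # local search among random peers
            for xi, ai, bi in zip(x, x_r1, x_r2)]

random.seed(0)
new_x = boa_step([0.5, 0.5], [0.1, 0.1], [0.3, 0.3], [0.7, 0.7],
                 fragrance(1.0))
assert len(new_x) == 2
```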

Dynamic Butterfly Optimization Algorithm
Now, we present an improved version of the BOA called in the literature Dynamic Butterfly Optimization Algorithm (DBOA) [40]. This improvement consists of adding a novel local search algorithm based on mutation operator (LSAM) at the end of the BOA main loop. LSAM transforms the population (best solution first) using the mutation operator. If the new solution (obtained after mutation) turns out to be better than the previous one (before mutation), then it replaces the previous one. This will transform the agents population. The LSAM algorithm is presented in the scheme Algorithm 5. In the literature, there are many papers about the applications of the butterfly algorithm and its modifications (see [41][42][43]).
The mutation operator occurring in the scheme of Algorithm 5 transforms each coordinate of the solution x = [x_1, x_2, ..., x_Dim], substituting it with a random number drawn from a normal distribution, as shown in the formula below:

x_i' \sim \mathcal{N}\left( x_i,\; \sigma \right), \qquad \sigma = 0.1\,(uBound - lBound),    (29)

where x_i is the mean and σ is the standard deviation. The pseudocode of the DBOA is shown in the scheme of Algorithm 6.
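The mutation operator can be sketched directly from the formula above. Clipping the mutated coordinates back into the domain is an implementation assumption, not something the text specifies.

```python
import random

# Sketch of the LSAM mutation: redraw each coordinate from a normal
# distribution centered at its current value with
# sigma = 0.1 * (uBound - lBound), then clip to the domain (assumption).
def mutate(x, l_bound, u_bound):
    sigma = 0.1 * (u_bound - l_bound)
    return [min(u_bound, max(l_bound, random.gauss(xi, sigma))) for xi in x]

random.seed(0)
child = mutate([0.5, 0.5], 0.0, 1.0)
assert len(child) == 2
assert all(0.0 <= v <= 1.0 for v in child)
```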

Algorithm 5 Novel local search algorithm based on mutation operator (LSAM) pseudocode.
1: x_best — current best solution from BOA.
2: Fit_best = Fit(x_best) — value of the objective function for the current best solution.
3: I — number of iterations, mr — mutation rate.
4: for i = 1, 2, ..., I do
5:   Generate x_new by mutating x_best; calculate Fit_new = Fit(x_new).
6:   if Fit_new < Fit_best then
7:     x_best = x_new, Fit_best = Fit_new.
8:   else
9:     Determine a random solution x_rnd from the population, different from x_best.
10:    Calculate the value of the objective function Fit_rnd = Fit(x_rnd).
11:    if Fit_new < Fit_rnd then
12:      x_rnd = x_new.
13:    end if
14:  end if
15: end for

The DBOA pseudocode (Algorithm 6) repeats the main loop of the BOA (Algorithm 4) and, at the end of each iteration, applies the LSAM algorithm to transform the agent population, finally returning x_best.

Results
The selected algorithms are quite commonly used in various types of optimization problems, as evidenced by the large number of scientific publications. Numerous numerical experiments have shown that these algorithms can be adapted to the requirements of the considered problem. Numerical experiments were performed for all the heuristic algorithms presented in the previous section. All algorithms were implemented by the authors and then tested on the following functions: Bent Cigar Function, Rosenbrock Function, Rastrigin Function and HGBat Function.
Fine-tuning of the algorithms, based on the literature, the authors' own experience and experiments on the test functions, consisted of determining a range of values for each parameter. From each range, 20 values were uniformly selected and tested on numerous examples involving the detection of a single object. Numerical experiments tested all possible combinations of the selected parameter values multiple times on the same data set. As a result, the best set of parameters was selected for each of the algorithms, and further calculations were performed with these sets.
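The tuning procedure described above amounts to an exhaustive grid search, which can be sketched as follows. The interface `algorithm(params, case)`, returning the best objective value reached in one run, is an assumption for illustration; note that testing all combinations grows as 20^k in the number k of parameters, so in practice it is only feasible for a few parameters at a time.

```python
from itertools import product

def grid_tune(algorithm, param_ranges, test_cases, n_values=20, repeats=5):
    """Exhaustive parameter tuning: take n_values uniformly spaced values
    from each parameter's range, run the algorithm on every combination
    for every test case, and keep the combination with the lowest mean
    objective value."""
    grids = {
        name: [lo + i * (hi - lo) / (n_values - 1) for i in range(n_values)]
        for name, (lo, hi) in param_ranges.items()
    }
    best_params, best_score = None, float("inf")
    for combo in product(*grids.values()):
        params = dict(zip(grids.keys(), combo))
        # average over all test cases and repeated runs
        score = sum(algorithm(params, case)
                    for case in test_cases for _ in range(repeats))
        score /= len(test_cases) * repeats
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score
```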
The essential step of the research is a series of numerical experiments for tomography tasks. First, 50 tests were prepared in which the tomography problem comes down to the detection of one object. These tasks were primarily used to calibrate the parameters of the individual algorithms. At this stage, it was found that the AO algorithm is not suitable for this type of problem. Numerous attempts to select the algorithm's parameters led to unsatisfactory results, and the best approximate solution finally obtained was burdened with an unacceptable error of 2%. The second stage of the experiments was to find a solution in the form of two disjoint anomalies. In this case, a random set of test tasks was also generated, each consisting of identifying two separate anomalies in the form of rectangles. All algorithms were run with the parameters (except for the population size and the number of iterations) calibrated in the first stage of the research. For this significantly more complex task, in which the functional depends on 10 parameters, the population size and the maximum number of iterations were selected by means of a numerical experiment. Tasks of this type turned out to be significantly more difficult for the WOA and BOA algorithms: this stage proved too hard for them, and they failed to find a satisfactory approximate solution in an acceptable time. Figure 4 presents a comparison of the dependence of the functional value on the number of calls to the objective function for the examples with one and two detected objects. The left plot shows that the value of the functional for the AO algorithm remains too large (far from zero). In the case of tasks with two sought objects, the WOA and BOA algorithms cannot cope, as illustrated by the graphs in the right plot.
The third and final stage of the research was to identify three disjoint anomalies. At this stage, the experiments were carried out only for the FA and DBOA algorithms, because only these algorithms reconstruct two anomalies in a satisfactory time. As in the previous cases, experiments were carried out for 50 different test tasks. For these tasks, the minimized functional depended on 15 arguments. The threefold increase in the number of parameters relative to the first task significantly extended the algorithms' running time, which was associated with the need to increase the population size and the number of iterations accordingly. We now present a comparison of the results obtained for the two algorithms tested at this stage of the experiments. Figure 5 presents comparisons of the exact solution (areas bordered with a solid line) with the approximate solution (areas bordered with a dashed line) obtained with the FA algorithm for selected iterations. The height of the areas (they are cuboids) has been marked according to the color scale. The presented results show that the FA algorithm "approaches" the exact solution after a relatively small number of iterations (it = 20), while obtaining a satisfactory result requires a further 80 iterations. Similarly, Figure 6 presents a summary of results in selected iterations for the second tested algorithm, DBOA, in the third stage of the numerical experiments. Comparing the obtained results, we can see that the convergence of both algorithms slows significantly as the iterations progress. In order to compare the effectiveness of the DBOA and FA algorithms, Figure 7 shows the values of the objective function depending on the number of iterations (left plot) and depending on the number of calls to the objective function (right plot).
Comparing the number of iterations performed would suggest that the FA algorithm requires 10 times fewer iterations, but such a comparison does not reflect the algorithm execution time, which for the analyzed algorithms depends primarily on the number of calls to the objective function. As we can see, this number is significantly smaller for the DBOA algorithm than for the FA algorithm. In order not to blur readability, the figure has been drawn for the range [0, 10^6] of calls to the objective function. It should be mentioned that the FA algorithm obtained values of the objective function close to zero only after 5 · 10^6 calls to the objective function. Such comparisons show that the DBOA algorithm is definitely more effective in determining anomalies. Table 1 presents the values of the objective function together with the number of calls to the objective function depending on the number of detected objects, which translates into the number of unknowns. The entry "no results" means that the algorithm's parameters could not be selected so as to obtain repeatable results over many samples. The presented data clearly show that the DBOA algorithm turned out to be the best algorithm for the optimization problem presented in the paper. Table 1. Comparison of the number of calls to the objective function (n_f) and the value of the F function depending on the number of detected objects for the analyzed algorithms.

Conclusions
The conducted research has shown that not all of the analyzed algorithms are suitable for the task formulated in this paper. Identifying a larger number of anomalies is a complex process, and the correct selection of the optimization method plays a crucial role. The experiments showed significant difficulties with the selection of appropriate parameters for the heuristic algorithms under study. The presented results show that the DBOA algorithm handles the problem under consideration best. Analyzing the plot of the dependence of the objective function value on the number of its calls, it can be noticed that these algorithms quite quickly obtain solutions similar to the exact ones but require many further calculations to obtain results sufficiently close to the exact solution. Therefore, the DBOA method could serve as the initial method in a hybrid solution, combining a heuristic algorithm for determining the starting point with classical computed tomography methods. The authors plan further research on the improvement and development of the existing methods, in particular, as previously mentioned, the creation of a hybrid algorithm in which the DBOA algorithm would perform the exploratory function and the exploitation phase would be handled by classical computed tomography. The minimized functional turned out to be too difficult for some of the analyzed algorithms. It should be emphasized that this functional, due to the simplified geometric description of the detected anomalies (by rectangles), should be generalized in further work by introducing a more complex geometry of the sought objects. Another direction of research is therefore to change the geometry of the detected objects from rectangles to convex polygons. With this in mind, an important point of this stage of the research was to find an algorithm capable of effectively detecting anomalies in the studied areas.
It can be assumed that, for more geometrically complex objects, this task will be even more computationally difficult.