Comparison of Heuristic Algorithms in Identification of Parameters of Anomalous Diffusion Model Based on Measurements from Sensors

In recent times, fractional calculus has gained popularity in various types of engineering applications. Very often, the mathematical model describing a given phenomenon consists of a differential equation with a fractional derivative. As numerous studies show, using a fractional derivative instead of the classical derivative allows some processes to be modeled more accurately. A numerical solution of the anomalous heat conduction equation with a Riemann-Liouville fractional derivative over space is presented in this paper. First, a difference scheme is provided to solve the direct problem. Then, the inverse problem is considered, which consists in identifying model parameters such as the thermal conductivity, the order of the derivative and the heat transfer coefficient. The data on the basis of which the inverse problem is solved are the temperature values on the right boundary of the considered region. To solve the problem, a functional describing the error of the solution is created. By determining the minimum of this functional, the unknown parameters of the model are identified. In order to find a solution, selected heuristic algorithms are presented and compared. The following meta-heuristic algorithms are described and used in the paper: Ant Colony Optimization (ACO) for continuous functions, the Butterfly Optimization Algorithm (BOA), the Dynamic Butterfly Optimization Algorithm (DBOA) and the Aquila Optimizer (AO). The accuracy of the presented algorithms is illustrated by examples.


Introduction
With the increase in the computing power of computers, all kinds of simulations of various phenomena occurring, among others, in physics, biology and technology are gaining in importance. The considered mathematical models are more and more complicated and can be used to model various processes in nature, science and engineering. In the case of modeling anomalous diffusion processes (e.g., heat conduction in porous materials) or processes with long memory, fractional derivatives play a special role. There are many different fractional derivatives, of which the following are the most popular: Caputo, Riemann-Liouville and Riesz. The authors of the study [1] present a model dedicated to the risk of corporate default, described as a fractional self-exciting model. The model and methods introduced in the study were validated on real market data, and the fractional-derivative model turned out to be the better one. Ming et al. [2] used the Caputo fractional derivative to simulate China's gross domestic product. The fractional model was compared with the model based on the classical derivative. Using the fractional derivative, the authors built a more precise model to predict the values of gross domestic product in China. Other applications of fractional derivatives in modeling processes in biology can be found in the article [3]. The authors presented applications of the Atangana-Baleanu fractional derivative to create models of such processes as Newton's law of cooling, a population growth model and a blood alcohol model. In the article [4], the authors used the Caputo fractional derivative to investigate and model the population dynamics between tumor cells and macrophages. The study also estimated unknown model parameters based on samples collected from a chemotherapy-naive patient hospitalized with non-small cell lung cancer. De Gaetano et al. [5] presented a mathematical model with a fractional derivative for Continuous Glucose Monitoring.
The paper also contains the numerical solution of the considered fractional model. Based on experimental data from diabetic patients, the authors determine the order of the fractional derivative for which the model best fits the data. The research shows that the fractional-derivative model fits the data better than the integer-derivative model (of both first and second order). More about fractional calculus and its applications can be found in [6][7][8].
In order to implement ever more accurate and faster computer simulations, it is necessary to improve the various types of numerical methods and algorithms that solve direct and inverse problems. Solving the inverse problem allows one to design the process and select the input parameters of the model in a way that makes it possible to obtain the desired output state. Such tasks are considered difficult due to the fact that they are ill-conditioned [9]. Sensor measurements often provide additional information for inverse problems. Based on these measurements, the input parameters of the model are selected and the entire process is designed. In the study [10] a variational approach for reconstructing the thermal conductivity coefficient is presented. The authors also cite theorems regarding the existence and uniqueness of the solution, and numerical examples are provided. In the article [11] the solution of the inverse problem consists in identifying the coefficients of the heat conduction model based on temperature measurements from sensors. In addition, several mathematical models were compared, in particular fractional models with the classical model. In that study, parameters such as the order of the fractional derivative, the thermal conductivity and the heat transfer coefficient were identified. Considerations regarding solving the inverse problem are also included in the article [12]. The authors present a solution approach based on deep neural networks, in which deep-learning methods are used to learn all free parameters and functions through training; backpropagation on the training data is one of the methods for training such a deep network. More examples of inverse problems in mathematical modeling and simulations can be found in [13][14][15][16][17][18][19][20].
In this article, a mathematical model of heat conduction with the Riemann-Liouville fractional derivative is presented. In the provided model, boundary conditions of the second and third kind are adopted. Then, the solution of the direct problem is shortly described. To solve this problem, a finite difference scheme is derived. The inverse problem posed in this article consists in the reconstruction of the third-kind boundary condition and the identification of such parameters as the order of the fractional derivative and the thermal conductivity. In the process of developing a procedure that solves the inverse problem, a fitness function is created. It describes the error of the approximate solution. In order to identify the parameters, the minimum of this function has to be found. The following algorithms are used and compared to minimize the fitness function: Ant Colony Optimization (ACO), the Dynamic Butterfly Optimization Algorithm (DBOA) and the Aquila Optimizer (AO). The presented procedure has been tested on numerical examples.

Anomalous Diffusion Model
We consider an anomalous diffusion equation in the form of a differential equation with a fractional derivative over the spatial variable:

c \varrho \frac{\partial T(x,t)}{\partial t} = \lambda_{\beta} \frac{\partial^{\beta} T(x,t)}{\partial x^{\beta}}, \qquad x \in (x_L, x_R), \; t \in (0, t_{end}). \qquad (1)

In this approach, the considered anomalous diffusion equation describes the phenomenon of heat flow in a porous medium [11,21,22]. In Equation (1) we assume the following notation: T [K] is the temperature, x [m] the spatial variable, t [s] the time, c [J/(kg K)] the specific heat, \varrho [kg/m^3] the density, \beta \in (1, 2) the order of the derivative, and \lambda_{\beta} = w\lambda [W/(m^{3-\beta} K)] is the scaled heat conduction coefficient, where w is a scale parameter. The heat conduction coefficient \lambda had to be scaled to keep the units consistent. Equation (1) is supplemented with an initial condition:

T(x, 0) = T_0(x), \qquad x \in [x_L, x_R]. \qquad (2)

On the left side of the spatial interval the homogeneous boundary condition of the second kind is taken:

\frac{\partial T}{\partial x}(x_L, t) = 0, \qquad (3)

and for the right boundary of the spatial interval the boundary condition of the third kind is assumed:

-\lambda \frac{\partial T}{\partial x}(x_R, t) = h(t)\,\big(T(x_R, t) - T^{\infty}\big). \qquad (4)

The symbols T^{\infty} and h appearing in Equation (4) denote the ambient temperature and the heat transfer coefficient.
In Equation (1) there is a fractional derivative with respect to the spatial variable, which is defined as the Riemann-Liouville derivative [23]:

\frac{\partial^{\beta} T(x,t)}{\partial x^{\beta}} = \frac{1}{\Gamma(2-\beta)} \frac{\partial^{2}}{\partial x^{2}} \int_{x_L}^{x} (x - s)^{1-\beta}\, T(s,t)\, ds, \qquad \beta \in (1, 2). \qquad (5)

Numerical Solution of Direct Problem
In order to solve the direct problem for the model (1)-(4), a finite difference scheme is used. The considered area is discretized by creating a mesh S = {(x_i, t_k) : x_i = x_L + i \Delta x, t_k = k \Delta t}, where \Delta x = (x_R - x_L)/M, \Delta t = t_{end}/K, i = 0, ..., M and k = 0, ..., K. Then the Riemann-Liouville derivative (5) is approximated by a difference formula (6) [23], as are the boundary conditions (3) and (4), giving approximations (7) and (8), where T_\infty is the ambient temperature, T^k_i is the approximate value of the function T at the point (x_i, t_k), and h is a function describing the heat transfer coefficient. Using Equations (6)-(8) we obtain a difference scheme (a system of equations). By solving this system, the values of the function T are determined at the mesh points.
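The scheme (6)-(8) is only referenced above, so as an illustration we sketch one standard possibility: an explicit finite difference scheme in which the Riemann-Liouville derivative is approximated with shifted Grünwald-Letnikov weights. All parameter values, the explicit time stepping, and the use of a single coefficient for both the diffusion term and the Robin condition are illustrative assumptions, not the paper's scheme or data.

```python
import numpy as np

def gl_weights(beta, M):
    # Grunwald-Letnikov weights g_j = (-1)^j * binom(beta, j),
    # computed with the standard recurrence g_j = g_{j-1} (j-1-beta)/j.
    g = np.empty(M + 1)
    g[0] = 1.0
    for j in range(1, M + 1):
        g[j] = g[j - 1] * (j - 1 - beta) / j
    return g

def solve_direct(beta=1.8, lam=1.0, c=900.0, rho=2700.0, h=10.0,
                 T_inf=300.0, T0=1000.0, xL=0.0, xR=0.1,
                 M=50, K=2000, t_end=100.0):
    """Explicit sketch for  c*rho*dT/dt = lam * d^beta T / dx^beta
    with a homogeneous Neumann condition at x_L and a Robin
    (third-kind) condition at x_R.  A single lam is used in both the
    equation and the boundary condition for simplicity."""
    dx = (xR - xL) / M
    dt = t_end / K
    T = np.full(M + 1, T0, dtype=float)
    g = gl_weights(beta, M)
    a = lam * dt / (c * rho * dx**beta)
    for _ in range(K):
        Tn = T.copy()
        for i in range(1, M):
            # shifted GL sum approximating the RL derivative at x_i:
            # sum_{j=0}^{i+1} g_j * T_{i+1-j}
            frac = np.dot(g[:i + 2], T[i + 1::-1])
            Tn[i] = T[i] + a * frac
        Tn[0] = Tn[1]                      # dT/dx = 0 at x_L (Neumann)
        # Robin condition -lam dT/dx = h (T - T_inf), one-sided difference
        Tn[M] = (Tn[M - 1] + dx * h / lam * T_inf) / (1 + dx * h / lam)
        T = Tn
    return T
```

With these illustrative values the scheme cools the right boundary toward the ambient temperature, as expected from the Robin condition.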

Inverse Problem and the Procedure for Its Solution
The problem considered in this article concerns the inverse problem. It consists in establishing the input parameters of the model in a way that allows obtaining the temperature at the boundary corresponding to the measurements from the sensors. The identified parameters are: the thermal conductivity \lambda, the order of the derivative \beta and the heat transfer function h in the form of a second-degree polynomial. In the presented approach, after solving the direct problem for fixed values of the unknown parameters, we obtain an approximation of T and compare it with the measurement data. In this way the fitness function is created:

F(\lambda, \beta, h) = \sum_{j=1}^{N} \big( T_j(\lambda, \beta, h) - T^m_j \big)^2, \qquad (9)

where N is the number of measurements, T_j(\lambda, \beta, h) are the temperature values at the measurement point calculated from the model, and T^m_j are the measurements from the sensors. To find the minimum of function (9), we use selected metaheuristic algorithms described in Section 5.
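The least-squares construction above can be sketched in a few lines; the `solve_model` interface and the stand-in solver below are assumptions made purely for illustration.

```python
import numpy as np

def fitness(params, measurements, solve_model):
    """Least-squares fitness in the spirit of Eq. (9): squared error
    between model temperatures at the sensor points and the
    measurements.  `solve_model` is a user-supplied direct-problem
    solver (assumed interface) returning the N model temperatures."""
    lam, beta, h_coeffs = params
    T_model = solve_model(lam, beta, h_coeffs)
    return float(np.sum((np.asarray(T_model) - np.asarray(measurements))**2))

# usage with a stand-in solver (purely illustrative):
fake_solver = lambda lam, beta, h: lam * np.ones(3)
print(fitness((2.0, 1.5, (1.0, 0.0, 0.0)), [2.0, 2.1, 1.9], fake_solver))
```

In the paper's setting, each fitness evaluation requires one run of the direct-problem solver, which is why the number of function evaluations used by each metaheuristic matters.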

Meta-Heuristic Algorithms
In this section, we present selected metaheuristic algorithms for finding the minimum of a function: Ant Colony Optimization (ACO) for continuous function optimization, the Dynamic Butterfly Optimization Algorithm (DBOA) and the Aquila Optimizer (AO).

ACO for Continuous Function Optimization
The inspiration for this minimum-search algorithm was the observation of the habits of ants while searching for food. In the first stage, the ants randomly search the area around their nest. In the process of foraging for food, ants secrete a chemical substance called a pheromone. Thanks to this substance, the ants can communicate with each other. The amount of the secreted substance depends on the amount of food found. If an ant has successfully found a food source, it returns to the nest with a food sample, leaving a pheromone trail that allows other ants to find the food source. This mechanism was adapted to create the ACO algorithm for continuous function optimization [24]. More on the algorithm and its applications can be found, among others, in the articles [25][26][27][28].
There are three main parts to the algorithm:
• Solution (pheromone) representation. Points from the search area R^n are identified as pheromone patches. In other words, a pheromone spot plays the role of a solution. Thus, the k-th pheromone spot (or approximate solution) can be represented as x^k = (x^k_1, x^k_2, ..., x^k_n). Each solution (pheromone spot) has its quality calculated on the basis of the fitness function F(x^k). In each iteration of the algorithm, we store a fixed number of pheromone spots (established at the start of the algorithm) in the set of solutions.
• Transformation of the solution by an ant. The procedure of constructing a new solution consists, in the first place, in choosing one of the current solutions (pheromone spots) with a certain probability. The quality of the solution is the factor that determines this probability: the higher the quality of a solution, the higher its probability of being selected. In this paper, the following formula is adopted to calculate the probability (based on the rank) of the k-th solution:

p_k = \frac{\omega_k}{\sum_{l=1}^{L} \omega_l}, \qquad (10)

where L denotes the number of all pheromone spots, and \omega_k is expressed by the formula:

\omega_k = \frac{1}{q L \sqrt{2\pi}} \exp\left( -\frac{(rank(k) - 1)^2}{2 q^2 L^2} \right). \qquad (11)

The symbol rank(k) in Equation (11) denotes the rank of the k-th solution in the set of solutions. The parameter q narrows the search area. For a small value of q, the choice of the best solutions is preferred; the greater q is, the closer the probabilities of choosing each of the solutions become. After choosing the k-th solution, Gaussian sampling is performed using the probability density function

g(x; \mu, \sigma) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right), \qquad (12)

where \mu = x^k_i is the i-th coordinate of the k-th solution and \sigma = \xi \sum_{l=1}^{L} |x^l_i - x^k_i| / (L - 1).
• Pheromone update. The new solutions are added to the archive, which is then sorted by quality, and the same number of worst solutions are rejected.

The whole procedure is summarized in Algorithm 1.

Algorithm 1 Pseudocode of ACO for continuous function optimization.
1: Initialization of the algorithm parameters: L (number of pheromone spots), M (number of ants), I (number of iterations), q, \xi.
2: Random generation of L initial solutions (pheromone spots).
3: Calculation of the value of the fitness function F for each solution.
4: Iterative part.
5: for iteration i = 1, 2, ..., I do
6:   Sorting of the archive of solutions by quality (rank).
7:   Assignment of probability to pheromone spots according to Equation (10).
8:   for ant m = 1, 2, ..., M do
9:     The ant chooses the k-th (k = 1, 2, ..., L) solution with probability p_k.
10:    for coordinate j = 1, 2, ..., n do
11:      Using the probability density function (12) in the sampling process, the ant changes the j-th coordinate of the k-th solution.
12:    end for
13:  end for
14:  Calculation of the value of the fitness function F for the M new solutions.
15:  Adding the M new solutions to the archive of old ones, sorting the archive by quality and then rejecting the M worst solutions.
16: end for
17: return best solution x_best.
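The steps of Algorithm 1 can be sketched as a minimal ACO-for-continuous-domains implementation in the style of Socha and Dorigo's ACO_R; the parameter defaults are illustrative assumptions, not values from the paper.

```python
import numpy as np

def aco_continuous(F, bounds, L=30, M=20, q=0.1, xi=0.85,
                   iters=200, seed=0):
    """Continuous ACO sketch: an archive of L solutions plays the role
    of pheromone spots; each of M ants picks an archive member by the
    rank-based probability (Eqs. (10)-(11)) and resamples every
    coordinate from a Gaussian kernel (Eq. (12))."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    n = lo.size
    archive = rng.uniform(lo, hi, size=(L, n))
    quality = np.array([F(x) for x in archive])
    for _ in range(iters):
        order = np.argsort(quality)            # rank 1 = best solution
        archive, quality = archive[order], quality[order]
        ranks = np.arange(1, L + 1)
        w = np.exp(-(ranks - 1)**2 / (2 * q**2 * L**2)) / (q * L * np.sqrt(2 * np.pi))
        p = w / w.sum()                        # Eq. (10)
        new = np.empty((M, n))
        for m in range(M):
            k = rng.choice(L, p=p)
            # sigma_i = xi * mean distance to the other archive members
            sigma = xi * np.abs(archive - archive[k]).sum(axis=0) / (L - 1)
            new[m] = np.clip(rng.normal(archive[k], sigma + 1e-12), lo, hi)
        new_q = np.array([F(x) for x in new])
        # add M new solutions, keep the L best, reject the M worst
        archive = np.vstack([archive, new])
        quality = np.concatenate([quality, new_q])
        keep = np.argsort(quality)[:L]
        archive, quality = archive[keep], quality[keep]
    b = np.argmin(quality)
    return archive[b], float(quality[b])

# usage: minimize the sphere function on [-5, 5]^2
x_best, f_best = aco_continuous(lambda x: float(np.sum(x**2)),
                                [(-5, 5), (-5, 5)])
```

The archive update in the last three lines of the loop corresponds to line 15 of Algorithm 1.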

Dynamic Butterfly Optimization Algorithm
Another of the presented heuristic algorithms is an improved version of the Butterfly Optimization Algorithm (BOA), namely the Dynamic Butterfly Optimization Algorithm (DBOA) [29].
In order to communicate, search for food, find a partner, and escape from predators, butterflies use the senses of smell, taste and touch. The most important of these is smell; thanks to it, butterflies look for food sources. Sensory receptors, called chemoreceptors, are scattered all over the body of a butterfly (e.g., on the legs).
Scientists studying the life of butterflies have noticed that these animals locate the source of a fragrance with great precision. In addition, they can distinguish fragrances and recognize their intensity. These observations were the inspiration for the development of the Butterfly Optimization Algorithm (BOA) [30]. Each butterfly emits a specific fragrance of a given intensity. Spraying the fragrance allows other butterflies to recognize it and then communicate with each other. In this way, a "collective knowledge network" is created. The global optimum search algorithm is based on the ability of butterflies to sense the fragrance. If an animal cannot sense the fragrance of the environment, its movement will be random.
The key concept is the fragrance and the way it is received and processed. The processing of the sensed stimulus is based on the following parameters: the stimulus intensity (I), the sensory modality (c) and the power exponent (a). In BOA, the stimulus intensity I is correlated with the fitness of the solution. Hence, the more fragrance a butterfly emits (the better the quality of the solution), the easier it is for other butterflies in the environment to sense it and be attracted to it. This relationship is described as follows:

f = c I^a, \qquad (13)

where f denotes the fragrance, c is the sensory modality, I denotes the stimulus intensity, and a is the power exponent, which depends on the modality. In this article, we assume values of the parameters a and c in the range [0, 1]. The parameter a models the variability of absorption, and its value may decrease in subsequent iterations. Thus, the parameter a can control the behavior of the algorithm and its convergence. The parameter c is also important for the operation of BOA. In theory c \in [0, \infty), while in practice it is assumed that c \in [0, 1]. The values of a and c have a significant impact on the speed of the algorithm, so an important step is the appropriate selection of these parameters; it should be carried out separately for different optimization tasks.
In the BOA we can distinguish the following stages:
• Butterflies in the considered environment emit fragrances that differ in intensity, which results from the quality of the solution. Communication between these animals takes place through sensing the emitted fragrances.
• A butterfly can move in two ways: towards a more intense fragrance emitted by another butterfly, or in a random direction.
• The global search move is represented by:

x_{new} = x_{old} + (r^2 x_{best} - x_{old}) f, \qquad (14)

where x_{old} is the position of the butterfly (agent) before the move, x_{new} is the position of the butterfly after the move, x_{best} is the position of the best butterfly in the current population, f is the fragrance of the butterfly x_{old}, and r denotes a randomly selected number from the range [0, 1].
• The local search move is formulated by:

x_{new} = x_{old} + (r^2 x_{r1} - x_{r2}) f, \qquad (15)

where x_{r1}, x_{r2} are randomly selected butterflies from the population.
At the end of each iteration modifying the population of agents (butterflies), a local search algorithm based on the mutation operator (LSAM) is run. This is a significant modification compared to BOA. In this article, the operation of LSAM consists in selecting several individuals (solutions) and transforming them with the use of the mutation operator. If the mutation yields a better solution, it replaces the old one. The LSAM algorithm is presented as pseudocode in Algorithm 2. More information regarding the applications of the butterfly algorithm can be found in [31][32][33].

Algorithm 2 Pseudocode of LSAM operator.
1: x_r - random solution among the top half best agents in the population (obtained from BOA).
2: Fit_r = F(x_r) - value of the fitness function for x_r.
3: I - number of iterations, \xi - mutation rate.
4: Iterative part.
5: for iteration i = 1, 2, ..., I do
6:   Create a new solution x_new by mutating x_r according to Equation (16); Fit_new = F(x_new).
7:   if Fit_new < Fit_r then
8:     x_r = x_new, Fit_r = Fit_new.
9:   else
10:    Set a random solution x_rnd from the population, but not x_r.
11:    Fit_rnd = F(x_rnd).
12:    if Fit_new < Fit_rnd then
13:      x_rnd = x_new.
14:    end if
15:  end if
16: end for

Algorithm 2 includes the process of transforming the individual coordinates of the solution x = (x_1, x_2, ..., x_n) with the use of the mutation operator. The transformation consists in drawing a number from the normal distribution and replacing the old coordinate with the new one. For the j-th coordinate we use the normal distribution:

x^{new}_j \sim N(x_j, \xi), \qquad (16)

where the mutation rate \xi plays the role of the standard deviation.

The complete DBOA procedure is summarized in Algorithm 3.

Algorithm 3 Pseudocode of DBOA.
1: Initialization of the population of N butterflies x_k (k = 1, 2, ..., N) and of the parameters c, a, p, \xi, I.
2: Iterative part.
3: for iteration i = 1, 2, ..., I do
4:   for k = 1, 2, ..., N do
5:     Calculate the value of the fragrance for x_k with the use of Equation (13).
6:   end for
7:   Set the best agent x_best among the butterflies.
8:   for k = 1, 2, ..., N do
9:     Set a random number r from the range [0, 1].
10:    if r < p then
11:      Convert solution x_k in accordance with Equation (14).
12:    else
13:      Convert solution x_k in accordance with Equation (15).
14:    end if
15:  end for
16:  Change the value of the parameter a.
17:  Apply the LSAM algorithm (Algorithm 2) to convert the agent population with mutation rate \xi.
18: end for
19: return x_best.
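A minimal sketch of DBOA along the lines described above: standard BOA moves (a global move toward the best butterfly, or a local move between two random ones), followed each iteration by an LSAM-style mutation-based local search. All parameter values here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def dboa(F, bounds, N=30, iters=200, c=0.01, a=0.1, p=0.8,
         xi=0.1, lsam_iters=10, seed=0):
    """DBOA sketch: BOA global/local moves plus an LSAM mutation step."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    n = lo.size
    X = rng.uniform(lo, hi, size=(N, n))
    fit = np.array([F(x) for x in X])
    for _ in range(iters):
        frag = c * np.abs(fit)**a                  # f = c * I^a, cf. Eq. (13)
        best = X[np.argmin(fit)].copy()
        for k in range(N):
            r = rng.random()
            if r < p:                              # global move, cf. Eq. (14)
                cand = X[k] + (r**2 * best - X[k]) * frag[k]
            else:                                  # local move, cf. Eq. (15)
                i, j = rng.choice(N, size=2, replace=False)
                cand = X[k] + (r**2 * X[i] - X[j]) * frag[k]
            cand = np.clip(cand, lo, hi)
            fc = F(cand)
            if fc < fit[k]:
                X[k], fit[k] = cand, fc
        # LSAM step: mutate a solution drawn from the better half
        for _ in range(lsam_iters):
            half = np.argsort(fit)[: N // 2]
            r_idx = rng.choice(half)
            cand = np.clip(rng.normal(X[r_idx], xi), lo, hi)
            fc = F(cand)
            if fc < fit[r_idx]:
                X[r_idx], fit[r_idx] = cand, fc
            else:                                  # cf. Algorithm 2, lines 9-14
                other = rng.choice([t for t in range(N) if t != r_idx])
                if fc < fit[other]:
                    X[other], fit[other] = cand, fc
    b = np.argmin(fit)
    return X[b], float(fit[b])

# usage: minimize the sphere function on [-5, 5]^2
x_best, f_best = dboa(lambda x: float(np.sum(x**2)), [(-5, 5)] * 2)
```

Note how the BOA moves shrink as the fragrance (hence fitness magnitude) decreases, which is why the LSAM mutation step is useful for maintaining progress late in the run.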

Aquila Optimizer
Another of the considered algorithms is the Aquila Optimizer (AO). This algorithm is a mathematical representation of the hunting behavior of birds of the genus Aquila (the true eagles). Four main techniques can be distinguished in the way these predators hunt:
• Expanded exploration. When the predator is high in the air and wants to hunt other birds, it tilts vertically. After locating the victim from a height, the Aquila begins nosediving with increasing speed. We can express this behavior with the use of the following equation:

x_{new} = x_{best} \left(1 - \frac{i}{I}\right) + (x_{mean} - x_{best}\, rd), \qquad (17)

where x_{new} is the solution after transformation, x_{best} is the best solution so far and symbolizes the position of the prey, i is the current iteration, I is the maximum number of iterations and rd is a random number from [0, 1]. In this case x_{best} can also be interpreted as the optimization goal or approximate solution. The vector x_{mean} is the mean solution over the whole population:

x_{mean} = \frac{1}{N} \sum_{k=1}^{N} x_k. \qquad (18)

• Narrowed exploration. This technique involves circling the prey in flight and preparing to descend and attack. It is also known as contour flight with a short glide attack. It is described in the algorithm by the equation:

x_{new} = x_{best}\, Levy(D) + x_{random} + (r \cos \varphi - r \sin \varphi)\, rd, \qquad (19)

where x_{new} and x_{best} denote the same as in the expanded exploration step, x_{random} is a random solution from the population and rd is a random number from the interval [0, 1]. The term r \cos \varphi - r \sin \varphi simulates the spiral flight of the Aquila. The expression Levy(D) is a random value of the Levy flight distribution:

Levy(D) = s\, \frac{u\, \sigma}{|v|^{1/\beta}}, \qquad (20)

where s, \beta are constants, u, v denote random numbers, and \sigma is formulated as follows [34]:

\sigma = \frac{\Gamma(1+\beta) \sin(\pi \beta / 2)}{\Gamma\!\left(\frac{1+\beta}{2}\right) \beta\, 2^{(\beta-1)/2}}. \qquad (21)

In the above equation \Gamma denotes the gamma function. In order to determine the values of the parameters r and \varphi the following formulas are used:

r = r_1 + V D_1, \qquad \varphi = -\xi D_1 + \frac{3\pi}{2}, \qquad (22)

where r_1 is a fixed integer from {1, 2, ..., 30}, V, \xi are small constants, and D_1 is an integer from {1, 2, ..., n}.
• Expanded exploitation. This hunting technique begins with a vertical attack on the prey, whose location is known within some approximation defining the search area. Thanks to this information, the Aquila gets as close to its prey as possible. It can be described as follows:

x_{new} = (x_{best} - x_{mean})\, \alpha - rd + \big( (uB - lB)\, rd + lB \big)\, \delta, \qquad (23)

where x_{new} is the solution after transformation, x_{best} is the best solution at the moment and x_{mean} is the mean solution in the whole population determined with the use of formula (18). As before, rd denotes a random number from the range [0, 1], while lB, uB are the lower and upper bounds, and \alpha, \delta are constant parameters regulating the exploitation.
• Narrowed exploitation. The characteristic feature of this technique are the stochastic movements of the bird, which attacks the prey in close proximity. It can be described by the formula:

x_{new} = QF\, x_{best} - G_1\, x_{old}\, rd - G_2\, Levy(D) + rd\, G_1, \qquad (24)

where x_{old} denotes the solution before transformation and QF is a quality function:

QF(i) = i^{\frac{2\, rd - 1}{(1 - I)^2}}. \qquad (25)

G_1 and G_2 are described by:

G_1 = 2\, rd - 1, \qquad G_2 = 2 \left(1 - \frac{i}{I}\right). \qquad (26)

We can adjust the behavior of the algorithm with the above parameters.
The Aquila's food-gathering behavior combines the four hunting techniques described above. The transformations given by Formulas (17)-(26) constitute the AO algorithm. Algorithm 4 describes an implementation of AO. More about the Aquila Optimizer can be found in [34,35].

Algorithm 4 Pseudocode of the AO algorithm.
1: Initialization of the algorithm parameters: N (population size), I (number of iterations), \alpha, \delta.
2: Random generation of the initial population x_k (k = 1, 2, ..., N).
3: Iterative part.
4: for iteration i = 1, 2, ..., I do
5:   Determine the values of the fitness function F for each agent in the population.
6:   Establish the best solution x_best in the population.
7:   for k = 1, 2, ..., N do
8:     Calculate the mean solution x_mean in the population.
9:     Update the parameters G_1, G_2, QF of the algorithm.
10:    Set a random number rd from the range [0, 1].
11:    if iteration i ≤ (2/3) I then
12:      if rd < 0.5 then
13:        Perform the expanded exploration step (17) by updating solution x_k; as a result, solution x_new,k is obtained.
14:        if F(x_new,k) < F(x_k) then make the substitution x_k = x_new,k. end if
15:        if F(x_new,k) < F(x_best) then make the substitution x_best = x_new,k. end if
16:      else
17:        Perform the narrowed exploration step (19) by updating solution x_k; as a result, solution x_new,k is obtained.
18:        if F(x_new,k) < F(x_k) then make the substitution x_k = x_new,k. end if
19:        if F(x_new,k) < F(x_best) then make the substitution x_best = x_new,k. end if
20:      end if
21:    else
22:      if rd < 0.5 then
23:        Perform the expanded exploitation step (23) by updating solution x_k; as a result, solution x_new,k is obtained.
24:        if F(x_new,k) < F(x_k) then make the substitution x_k = x_new,k. end if
25:        if F(x_new,k) < F(x_best) then make the substitution x_best = x_new,k. end if
26:      else
27:        Perform the narrowed exploitation step (24) by updating solution x_k; as a result, solution x_new,k is obtained.
28:        if F(x_new,k) < F(x_k) then make the substitution x_k = x_new,k. end if
29:        if F(x_new,k) < F(x_best) then make the substitution x_best = x_new,k. end if
30:      end if
31:    end if
32:  end for
33: end for
34: return x_best.
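The four hunting moves can be sketched compactly as follows. The numeric constants (Levy exponent 1.5, the Mantegna-style normal sampling for the Levy step, spiral constants 0.00565 and 0.005, and the defaults for α and δ) follow common choices from the AO literature and are assumptions here, not the paper's settings.

```python
import numpy as np
from math import gamma, pi

def levy(dim, rng, beta=1.5):
    # Levy-flight step in the spirit of Eqs. (20)-(21)
    sigma = (gamma(1 + beta) * np.sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2)))
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return 0.01 * u / np.abs(v)**(1 / beta)

def aquila(F, bounds, N=30, iters=200, alpha=0.1, delta=0.1, seed=0):
    """Sketch of the Aquila Optimizer: expanded/narrowed exploration
    for the first 2/3 of the iterations, expanded/narrowed exploitation
    afterwards, with greedy acceptance as in Algorithm 4."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    n = lo.size
    X = rng.uniform(lo, hi, size=(N, n))
    fit = np.array([F(x) for x in X])
    best = X[np.argmin(fit)].copy()
    for t in range(iters):
        mean = X.mean(axis=0)
        G2 = 2 * (1 - t / iters)                              # Eq. (26)
        QF = t**((2 * rng.random() - 1) / (1 - iters)**2) if t > 0 else 0.0
        for k in range(N):
            rd = rng.random()
            G1 = 2 * rng.random() - 1
            if t <= 2 * iters / 3:
                if rd < 0.5:   # expanded exploration, Eq. (17)
                    cand = best * (1 - t / iters) + (mean - best * rng.random())
                else:          # narrowed exploration, Eq. (19)
                    xr = X[rng.integers(N)]
                    D1 = rng.integers(1, n + 1)
                    r = 1 + 0.00565 * D1
                    phi = -0.005 * D1 + 3 * pi / 2
                    cand = (best * levy(n, rng) + xr
                            + (r * np.cos(phi) - r * np.sin(phi)) * rng.random())
            else:
                if rd < 0.5:   # expanded exploitation, Eq. (23)
                    cand = ((best - mean) * alpha - rng.random()
                            + ((hi - lo) * rng.random() + lo) * delta)
                else:          # narrowed exploitation, Eq. (24)
                    cand = (QF * best - G1 * X[k] * rng.random()
                            - G2 * levy(n, rng) + rng.random() * G1)
            cand = np.clip(cand, lo, hi)
            fc = F(cand)
            if fc < fit[k]:
                X[k], fit[k] = cand, fc
            if fc < F(best):
                best = cand.copy()
    return best, float(F(best))

# usage: minimize the sphere function on [-5, 5]^2
x_best, f_best = aquila(lambda x: float(np.sum(x**2)), [(-5, 5)] * 2)
```

The 2/3 split between exploration and exploitation mirrors the condition in line 11 of Algorithm 4.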

Numerical Example and Test of Algorithms
In this section, we present a numerical example illustrating the effectiveness of the algorithms described above. On this basis, the algorithms are compared with each other with regard to the inverse problem in the heat flow model. As described in Section 4, the unknown model parameters that need to be identified are: \lambda - the thermal conductivity, \beta - the order of the derivative and h - the heat transfer function. Temperature measurements on the right boundary of the considered area (Figure 1) are the supplementary data necessary to solve the inverse problem. The process should be modeled in a way that allows obtaining temperature values from the mathematical model adjusted to the measurement data. The calculations in the inverse problem are performed on a grid of size 100 × 1995 (spatial steps × time steps). In the considered example, fixed input data are assumed in the model (1)-(4). In the case of the heat transfer function, the error between the exact function h and the recreated function is measured by a relative error formula. Table 1 presents the results obtained for the individual algorithms. Evaluating the tested algorithms according to the criterion of the value of the fitness function F (9), the DBOA algorithm turned out to be the most appropriate. The value of the fitness function for this algorithm is significantly lower than in the other cases. Also, the reconstruction errors of the parameters \lambda and h are the smallest for the DBOA algorithm. Second place belongs to the ACO algorithm. Based on the results, it can be seen that minimizing the fitness function is difficult, and the inverse problem is ill-posed. Afterwards, we compare the reconstructed temperature values at the measurement points with the measurement data. Table 2 shows that the best results are obtained for the DBOA algorithm, and the worst for the BOA algorithm.
Generally, these values are not high. Hence, it can be concluded that the reconstructed temperature is well matched to the measurement data, but also that the posed problems are ill-posed and difficult to minimize. An important criterion for evaluating the obtained results is the match between the temperature values at the measurement points and the measurement data. Figures 4 and 5 show graphs of the reconstructed temperature together with the measurement data for each of the algorithms. As can be seen, the reconstructed temperature values are well matched to the measurement data, despite the fact that the reconstructed values of the searched parameters \lambda and h differ significantly between the considered algorithms. This proves that the graph of the objective function is flat in the vicinity of the exact solution. Thus, the considered inverse problem is difficult to solve, and the found solution (reconstructed parameter values) may contain significant errors.

Conclusions
The paper presents the inverse problem of heat flow, consisting in the identification of the model parameters from given temperature measurements. The unknown parameters of the model are: the thermal conductivity, the order of the fractional derivative and the heat transfer function. To solve the inverse problem, the function describing the error of the approximate solution has to be minimized. Four meta-heuristic algorithms were used and compared: ACO, DBOA, AO and BOA. DBOA turned out to be the best in terms of the value of the minimized function. In the case of DBOA, the value of the minimized function was 0.45, which is a satisfactory result. In the case of the other algorithms, these values were much higher: ACO ∼ 273, BOA ∼ 2501 and AO ∼ 482. DBOA also turned out to be the best in terms of the errors in reconstructing the model parameters and in fitting the reconstructed temperature to the measurement data. In the case of DBOA, the error of reconstructing the temperature at the measurement points equals 0.0131, while for the other algorithms this error was of the order of 10^{-1}. The considered problems turned out to be difficult to solve. The graph of the fitness function is very flat in the vicinity of the searched solution. Thus, even significant differences in the values of the reconstructed parameters have little impact on the differences in the values of the fitness function.