Multi-Scale Memetic Image Registration

Abstract: Many technological applications of our time rely on images captured by multiple cameras. Such applications include the detection and recognition of objects in captured images, the tracking of objects and analysis of their motion, and the detection of changes in appearance. The alignment of images captured at different times and/or from different angles is a key processing step in these applications. One of the most challenging tasks is to develop fast algorithms that accurately align images perturbed by various types of transformations. This paper reports a new method used to register images in the case of geometric perturbations that include rotations, translations, and non-uniform scaling. The input images can be monochrome or colored, and they are preprocessed by a noise-insensitive edge detector to obtain binarized versions. Isotropic scaling transformations are used to compute multi-scale representations of the binarized inputs. The algorithm is of memetic type and exploits the fact that computation carried out on reduced representations usually produces promising initial solutions very fast. The proposed method combines bio-inspired and evolutionary computation techniques with clustered search and implements a procedure specially tailored to address the premature convergence issue at various scaled representations. A long series of tests on perturbed images was performed, evidencing the efficiency of our memetic multi-scale approach. In addition, a comparative analysis proved that the proposed algorithm outperforms some well-known registration procedures in terms of both accuracy and runtime.


Introduction
The modern world uses a lot of digital imaging, both motion and static, captured through a myriad of devices of all kinds, and the trend is rapidly growing. All these images must be processed, implying understanding the content and making decisions. Most of the digital content is, in the end, irrelevant for the decision-making process but must be examined nevertheless in order to categorize it. While the human worker is still the best choice for understanding an image, the sheer amount of digital content to be processed is beyond their capabilities. Nobody can afford to hire and retain a huge army of people to analyze the digital imagery that must be processed. Besides being a huge task, it is also a repetitive and, after all, menial task, which makes it prone to errors. This is where computers can step in to perform the menial, repetitive tasks that make up the bulk of the processing and leave only final decision-making steps to humans. Even some simpler decisions can be entrusted to computers.
The understanding of digital imagery content by computers is generally referred to as "computer vision", an umbrella term that covers all the research conducted in this field.
The processing power of computers grows constantly, but the amount of processing needed grows as well, leading to an "arms race". Time is a critical resource, and reducing the processing time is the main goal. Computers need better arms in this fight, materialized in better algorithms. Technological advances translate into images with higher resolution: more and more pixels are available for computers to process, but every additional pixel also adds to the computational burden.

Literature Review
The idea of using multi-scaling is not new; the reasons behind it manifested themselves early in the development of computers and computer vision. Key articles at the base of multi-scaling [2][3][4] indicate two main reasons for using images with lower resolutions in various processing stages of computer vision: first, the obvious reduction in required processing power; second, a lower resolution eliminates distracting pixels, so the context of the image is better understood and that understanding can then be applied to the high-resolution image.
Multi-scale images have been successfully used for various resource-intensive tasks. The identification of elements in images is one such task intensively approached. One direction looks into the identification of objects of interest in an image (salient objects), while another direction looks to identify the same object in a set of images acquired by various sources. In [5], the authors show that multi-scale images are related to the way biological vision works for segmenting, identifying objects, and understanding the perceived image.
The detection of salient segments using adapted graph algorithms and numerical techniques applied to multi-scale versions of an image is explored in [6]. In [7,8], Convolutional Neural Networks (CNNs) are used to create models for salient object detection (detecting objects that attract the attention of the eye). The CNNs are trained using reduced-scale images, which provide the needed information without wasting computing time on irrelevant details. The trained network can be used to detect objects of interest in high-resolution images for various purposes, including personal identification. Multiple scale versions of an image are also used to identify targets of various sizes (both big and small) in an image through the use of the YOLO (You Only Look Once) v3 CNN in [9].
Re-identification (or identification of the same subject in multiple images) is an important task in processing images captured by various cameras. In [10], the potential of meaningful information in reduced scale images is harnessed for vehicle re-identification in images captured by non-overlapping security cameras. In [11,12], the same idea is used for the re-identification of persons. Various reduced-scale images provide useful explicit information for the stated goal. The identification of objects in static or motion images is also accomplished using multi-scale image processing in [13].
For a more engineering-oriented perspective, multi-scale images have been used to detect and classify specific elements of importance. In [14], reduced-scale images (scales 2 and 4) are used to train a CNN to identify and outline six types of possible damage in civil infrastructure, thus relieving human inspectors of a huge workload, usually beyond their physical capabilities. An improved image denoising method that uses multi-scaling in combination with a Normalized Attention Neural Network is proposed in [15]. In [16], the authors use multi-scaling to overcome the difficulties in the detection and identification of very specific objects (ships) in single-sensor images in the infrared and visible spectra.
The biomedical field also benefits from the added information that can be found in reduced-scale images. In [17], the authors show that machine learning technology performs very well in image recognition, thus helping diagnosis, but not so well in prognosis. The authors also discuss the integration of machine learning with multi-scale modelling to improve prognosis performance.

Memetic Approach to Image Registration
Hybrid and memetic algorithms are among the most commonly used metaheuristics for image alignment, as the evolutionary process is enhanced with local search techniques that reduce the risk of premature convergence and speed up computation. In the following, we present the basic method we proposed in [1]. The method will be further improved to speed up computation, extending it to multi-scale processing and including specialized mechanisms to avoid premature convergence.
The degradation model is of geometric type, consisting of translations, rotations, and non-uniform scaling. We denote the target image by T and let S be the observed image, i.e., the perturbed version of T, where S(x, y) = T(f_p(x, y)) and f_p is the geometric transformation (2). The parameter vector defining (2) is p = (a, b, s_x, s_y, θ), where (a, b) is the translation vector, s_x and s_y are the scale factors, s_x, s_y > 0, and θ is the rotation angle. Note that the rotation is relative to the upper left corner of the image. From a mathematical point of view, aligning S to T means computing g_p, the inverse transformation that reverses (2). To solve the image alignment problem using evolutionary computing, one has to define the chromosome representation, the search space, and the fitness function. In our work, the images are binarized by applying an edge detector. The chromosome space coincides with the phenotype space, and it is defined by [a_min, a_max] × [b_min, b_max] × [−π, 0] × (0, smax_x] × (0, smax_y]. The boundaries of the translation parameters are evaluated based on smax_x, smax_y, and the object pixels in T and S, respectively [1]. The quality of a chromosome corresponding to a parameter vector p is defined in our work by the Dice similarity between the target and the result of applying the corresponding inverse transformation to S. The Dice coefficient of the binary images A and B is defined by [18]

Dice(A, B) = 2|A ∩ B| / (|A| + |B|),

where |·| is the cardinal function.
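As a concrete illustration, the Dice similarity above can be evaluated directly on binary arrays. The following is a minimal NumPy sketch (the function name and the convention for two empty images are ours):

```python
import numpy as np

def dice(a, b):
    """Dice similarity of two binary images: 2|A ∩ B| / (|A| + |B|),
    where |X| counts the object (nonzero) pixels of X."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both images empty: treated as identical by convention
    return 2.0 * np.logical_and(a, b).sum() / total

# Two 2x2 binary images sharing one object pixel:
x = np.array([[1, 1], [0, 0]])
y = np.array([[1, 0], [0, 0]])
print(dice(x, x))  # 1.0
print(dice(x, y))  # 2*1/(2+1) ≈ 0.667
```

Dice values lie in [0, 1], which matches the fitness range used by the memetic algorithm.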
The registration mechanism combines a version of the Firefly Algorithm (FA) global optimization [19,20] with the local search implemented by the Two-Membered Evolutionary Strategy (2MES) [21] applied on clustered data. Let c_0 be an input individual, and denote the initial step size by σ_0. At each moment t, 2MES iteratively updates the point c_{t−1} as c_t = c_{t−1} + z, where z is a random value drawn from N(0, σ_{t−1}), the update being accepted only if it improves the fitness. The step size parameter σ_t is updated according to the celebrated 1/5 rule [21]: σ is increased when the success rate exceeds 1/5, decreased when it falls below 1/5, and kept unchanged otherwise, using the multiplier ϑ ∈ [0.817, 1); here, the success rate sr is the displacement rate corresponding to the last τ updates.

FA is a nature-inspired search that simulates the behavior of fireflies in terms of bioluminescence. The position of a firefly corresponds to a candidate solution of (1), its fitness being measured by the corresponding light intensity. Each firefly j attracts less bright fireflies i, i.e., j modifies the position c_i according to c_i ← c_i + β_j(r)(c_j − c_i) + α_r(ε − 1/2), where α_r is the randomness parameter, ε is a draw from U(0, 1), and β_j(r) is the attractiveness of firefly j as seen by firefly i. The attractiveness function is defined as β_j(r) = β_0 exp(−γr²), where r is the distance between fireflies i and j, β_0 indicates the brightness at r = 0, and γ stands for the light absorption coefficient. In our work, we use the α_r updating rule and the border reflection mechanism introduced in [22]. The registration algorithm reported in [1] is summarized below as Algorithm 1, where the FA parameters are β_0 and γ, and the 2MES input arguments are σ_0, ϑ, τ, and MAX, the size of the sequence of individuals.
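The 2MES local search with the 1/5 success rule can be sketched as follows (an illustrative implementation under our own assumptions about the acceptance rule and parameter names; the paper's exact procedure may differ in details):

```python
import random

def two_mes(fitness, c0, sigma0=1.0, vartheta=0.9, tau=10, max_iter=200):
    """Two-membered (1+1) evolution strategy with the 1/5 success rule.
    `fitness` is maximized; `c0` is a list of parameter values;
    vartheta in [0.817, 1) shrinks or grows the step size every tau steps."""
    c, f = list(c0), fitness(c0)
    sigma, successes = sigma0, 0
    for t in range(1, max_iter + 1):
        cand = [x + random.gauss(0.0, sigma) for x in c]
        fc = fitness(cand)
        if fc > f:                  # accept only an improving offspring
            c, f = cand, fc
            successes += 1
        if t % tau == 0:            # 1/5 rule: adapt the step size
            sr = successes / tau
            if sr > 0.2:
                sigma /= vartheta   # succeeding too often: enlarge the step
            elif sr < 0.2:
                sigma *= vartheta   # succeeding too rarely: shrink the step
            successes = 0
    return c, f

# Maximize a simple concave fitness; the optimum is at (0, 0).
best, val = two_mes(lambda p: -(p[0] ** 2 + p[1] ** 2), [3.0, -2.0])
```

Because only improving offspring are accepted, the returned fitness is never worse than that of the starting individual.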

1. Input: the binarized versions of S and T, 2MES parameters, FA parameters, the number of iterations NMax, K (K < NMax), and the fitness value threshold τ_stop
2. Compute the boundaries of the search space
3. Compute the initial population: randomly generate the candidate solutions and apply the 2MES procedure to locally improve a small number of individuals
4. Evaluate the initial population and find the best individual; time = 0
5. while time < NMax and the highest fitness value < τ_stop do
6.     Execute one FA iteration
7.     Compute the best individual
8.     if the best fitness value has not been improved during the last K iterations, apply the premature convergence avoidance mechanism:
9.         Increase the step size of 2MES
10.        Increase the number of clusters
11.        Replace a small number of individuals with randomly generated and locally improved ones
12.    Apply k-means to split the population into clusters and locally improve the best individual from each cluster using 2MES
13.    Keep the best individual in the current population
14. end while
15. Output: the best individual, corresponding to the perturbation parameter vector
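The FA position update executed inside the loop above can be illustrated as follows (a sketch assuming the standard attractiveness β_0·exp(−γr²) and a fixed randomness parameter; the adaptive α_r rule and the border reflection mechanism of [22] are omitted):

```python
import math
import random

def fa_move(ci, cj, beta0=1.0, gamma=1.0, alpha=0.2):
    """One firefly position update: move firefly i toward a brighter j,
    c_i <- c_i + beta_j(r) * (c_j - c_i) + alpha * (eps - 0.5),
    with attractiveness beta_j(r) = beta0 * exp(-gamma * r^2)."""
    r2 = sum((a - b) ** 2 for a, b in zip(ci, cj))  # squared distance r^2
    beta = beta0 * math.exp(-gamma * r2)
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(ci, cj)]

# With alpha = 0 the move is deterministic: both coordinates become e^(-2).
print(fa_move([0.0, 0.0], [1.0, 1.0], alpha=0.0))
```

Small γ makes distant fireflies attractive (strong global pull), while large γ confines attraction to close neighbors.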

The New Multi-Scale Methodology
The algorithm aligns an observed image S to the target T by extending the bio-inspired cluster-based technique described in Section 3 to a multi-scale approach that includes mechanisms especially tailored to avoid premature convergence. The idea underlying the proposed methodology is that the computation carried out in different reduced representations can produce both promising initial solutions and individuals able to redirect the search toward the global optimum. Furthermore, multi-scale processing may significantly improve registration accuracy and lead to faster algorithms.

The Geometric Degradation Model and the Search Space
The proposed approach aims to register images perturbed by translations, rotations, and non-uniform scaling according to (1). The translation domain [a min , a max ] × [b min , b max ] can be narrowed down by considering the following transformation instead of (2).
where p = (a, b, s_x, s_y, θ) and the rotation is relative to the center of the image, (m, n). In this case, the inverse transformation g_p is computed accordingly. The search space boundaries are set as follows. We assume that (θ, s_x, s_y) ∈ [−π/2, 0] × (0, smax_x] × (0, smax_y]. The alignment is performed using binarized versions of S and T computed by the Canny edge detector [23]; therefore, the input images are represented by sets of contour pixels. We denote by B(T) = {(x_T, y_T): minx_T ≤ x_T ≤ maxx_T, miny_T ≤ y_T ≤ maxy_T} and B(S) = {(x_S, y_S): minx_S ≤ x_S ≤ maxx_S, miny_S ≤ y_S ≤ maxy_S} the binarized versions of T and S, respectively. Since the elements of B(S) are obtained from those of B(T) using (11), the translation boundaries are given by the extreme values of two functions, a_lim and b_lim, derived from this relation. Note that the functions a_lim and b_lim are bounded and attain their margins. From a practical point of view, the extreme values of a_lim and b_lim can be computed in many ways. In our work, we used the pattern search method implemented by the MATLAB function patternsearch [24].

The Multi-Scale Representation of Images
A long series of research works involving multi-scale image processing using various mathematical tools has been reported in the literature [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17]. In our approach, scaling refers to the uniform change of object sizes in processed images and corresponds to the standard geometric transformation (11) defined by the parameter vector p_s = (0, 0, s, s, 0), s > 1. Note that the dimension of the input images remains unchanged, while the size of the binary representations B(T) and B(S) decreases proportionally to s. Let s > 1 be a stretching factor, T the target image, S the version of T perturbed by (11) with p = (a, b, s_x, s_y, θ), and [a_min, a_max] × [b_min, b_max] × [−π/2, 0] × (0, smax_x] × (0, smax_y] the search space corresponding to the inputs S and T. The representation of S and T at scale s, denoted by S_s and T_s, leads to the narrowed search space [a_min/s, a_max/s] × [b_min/s, b_max/s] × [−π/2, 0] × (0, smax_x] × (0, smax_y]. Indeed, denoting p' = (a/s, b/s, s_x, s_y, θ) and using T_s(x, y) = T(f_{p_s}(x, y)), it follows that S_s is obtained from T_s by the transformation defined by p'. Consequently, aligning S to T can be reduced to registering S_s to T_s, that is, to computing p'. From an implementation point of view, digital image scaling with large scale factors s involves information loss due to object shrinkage. Therefore, the size of the binary representations B(T_s) and B(S_s) is smaller than that of B(T) and B(S), yielding faster registration algorithms.
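The chromosome conversion between scale-space representations described above only affects the translation components; a minimal sketch (the function name is ours):

```python
def rescale_chromosome(p, s_from, s_to):
    """Convert a chromosome p = (a, b, sx, sy, theta) between scale-space
    representations: the translation components change by the factor
    s_from/s_to, while the scale factors and the rotation angle are
    scale-invariant."""
    a, b, sx, sy, theta = p
    k = s_from / s_to
    return (a * k, b * k, sx, sy, theta)

# Full-resolution chromosome (scale 1) expressed at scale s = 4:
p_full = (40.0, -12.0, 1.1, 0.9, -0.3)
p_s4 = rescale_chromosome(p_full, 1, 4)   # translations divided by 4
back = rescale_chromosome(p_s4, 4, 1)     # round trip recovers p_full
```

The same helper converts genotypes between two reduced scales s1 and s2 by passing `s_from=s2, s_to=s1`.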
The multi-scale memetic algorithm introduced in the next section involves stages that use images represented at different scale factors. The algorithm evolves from one stage to another by changing the chromosome values and the boundaries of the translation parameter according to (20).

The Proposed Multi-Scale Registration Method
The extension of the cluster-based memetic algorithm provided in Section 3 registers pairs of monochrome images (S, T) under the perturbation model defined by (11). If S and T are colored images, their monochrome representations are used instead to obtain the inverse of transformation (11).
The proposed algorithm involves a pre-processing stage, as follows. First, the binary representations B(S) and B(T) are computed using a noise-insensitive edge detector, and the search space boundaries are evaluated according to (13)–(16). Then, the data are represented at two scales, 1 < s1 < s2. The scale s1 is used by the registration procedure itself (Algorithm 2), while s2 is used to generate the initial population (Algorithm 3) and to avoid getting stuck in a local optimum (Algorithm 4). The genotypes computed in the search space defined by s2 are converted to the algorithm's scale s1 using transformation (2) with s = s1/s2. One can generalize the multi-scale approach by using multiple scale parameters in Algorithms 3 and 4, respectively. The fitness function that controls the evolution of the memetic algorithm is defined by (6), while the method used to cluster the current population of individuals is k-means. Note that the fitness values lie in [0, 1].
First, the population is randomly instantiated; then, a few chromosomes computed by Algorithm 1 at the s2 scale are added to it. The memetic registration is an iterative process that applies an FA iteration to compute the new generation, followed by 2MES-based local improvement of the best chromosome of each cluster. The data are grouped into k clusters, where k varies depending on the population quality between the initial value c_number and a maximum controlled by a constant ct1 > 1.
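The clustering stage can be sketched as follows (an illustrative pure-NumPy k-means plus the selection of the fittest chromosome per cluster; the actual update rule for the number of clusters k is the one described above and is not reproduced here):

```python
import numpy as np

def kmeans_lite(points, k, iters=20, seed=0):
    """Minimal Lloyd-style k-means (an illustrative stand-in for the
    clustering step; the paper does not prescribe the implementation)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # distance of every point to every center, then nearest-center labels
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def best_per_cluster(fitness_values, labels, k):
    """Indices of the fittest chromosome in each non-empty cluster; these
    are the individuals handed to 2MES for local improvement."""
    best = []
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        if idx.size:
            best.append(int(idx[np.argmax(fitness_values[idx])]))
    return best
```

Improving only one representative per cluster keeps the local-search cost bounded while still refining every promising region of the search space.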
To avoid premature convergence, we apply the following procedure. If the best fitness value bval has not changed during the last it iterations, the step size of 2MES is increased by multiplying it by ct2/bval², where ct2 > 1 is a constant tuning the perturbation size z in (7). If bval has not changed during the last it' > it iterations, k_0 new locally improved individuals, together with the result computed by Algorithm 1 with s2 = 15, replace k_0 + 1 individuals of the current population.
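The two stagnation thresholds can be summarized in a small helper (illustrative; `stall` counts iterations without improvement of the best fitness, and the injection of fresh individuals is only signalled here, not performed):

```python
def stagnation_update(stall, sigma, bval, it, it_prime, ct2):
    """Premature-convergence countermeasures, following the paper's
    notation (it, it', ct2, bval); everything else is our assumption.
    Returns the new 2MES step size and whether to inject new individuals."""
    inject = False
    if stall >= it:
        sigma *= ct2 / (bval ** 2)  # enlarge the local-search step size
    if stall >= it_prime:
        inject = True               # replace k0 + 1 individuals
    return sigma, inject
```

Since bval ∈ (0, 1], the factor ct2/bval² is always greater than 1, so a stalled search always gets a larger exploration step.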
The proposed method is summarized by Algorithm 2. We denote by C_Pop_t = {c_t^1, c_t^2, . . . , c_t^n} the population at moment t. The input arguments are grouped by their use, as follows:
• general parameters: the input images S and T; the maximum number of iterations, NMax; the population size, n; the threshold fitness value, τ_stop; the scales s1 and s2; the constants c_number, ct1, and ct2; the 2MES inputs σ_0, ϑ, τ_ES, and MAX; the FA parameters β_0 and γ;
• Algorithm 3 parameters, corresponding to the Algorithm 1 arguments: the 2MES parameters σ_0, ϑ, τ_ES, υ, and MAX; the FA parameters (the same as those in the list of general parameters); NMax1, K1, and the threshold value τ_stop1;
• Algorithm 4 parameters: the number of non-effective successive iterations (i.e., without improvement of the highest fitness), cp; it, it', and k_0, as described above; the 2MES parameters (the same as those in the list of general parameters); the Algorithm 1 parameters (the same as those described in the list of Algorithm 3 parameters); the population C_Pop.
The main steps of Algorithm 2 and of the auxiliary procedures include:
• Compute B(T_s1), B(T_s2), B(S_s1), B(S_s2) and the corresponding boundaries of each search space
• Obtain the representation of C_Pop_t in the s1-scale search space
• Get the representation of C_Pop_{t+1} in the s1-scale search space
• Compute the best candidate solution c ∈ C_i
• Randomly generate an individual c_i
• Apply 2MES with the arguments σ_0, ϑ, τ_ES, MAX to locally improve c_i

Experimental Results and Discussion
A long series of tests on binary, monochrome, and colored images has been performed to assess the performance of the new registration algorithm. The computer used for testing has the following configuration: Intel Core i7-10870H CPU, 16 GB DDR4 RAM, 512 GB SSD, NVIDIA GeForce GTX 1650 Ti (4 GB GDDR6).
The algorithm performance has been measured using runtime and registration accuracy. The accuracy has been evaluated by a series of measures to reflect the effectiveness of Algorithm 2 in a comprehensive manner. The main indicator is the mean success rate recorded for NR runs of Algorithm 2, where a successful run is one that produces an individual whose quality exceeds a certain limit. The indicator measures the capability of Algorithm 2 to compute approximations of the global optimum of the fitness function. The success rate of the algorithm that aligns the image S to the target T is computed as

SR = NS/NR,

where NS represents the number of runs with correct registration, and S and T are of the same size, M × N. We also evaluated the accuracy of Algorithm 2 through similarity indicators computed between the images T and T̂, where T̂ is the image obtained by aligning S using the result of Algorithm 2. The similarity measures are the following: the Signal-to-Noise Ratio (SNR); the Shannon normalized mutual information NMI_S, based on MI_S(T, T̂) = H_S(T) + H_S(T̂) − H_S(T, T̂), where H_S denotes the Shannon entropy computed from the corresponding density function p(x); and the Tsallis normalized mutual information of order α, NMI_α^T [27,28]. A good approximation T̂ is such that the value of SNR(T, T̂) is very large (infinite for T̂ = T), while NMI_S(T, T̂) and NMI_α^T(T, T̂) are both near 1. Note that, in the case of significant perturbations, the information residing in the observed image S is not enough to completely reconstruct T; that is, reversing the exact geometric transformation leads to an image T' possibly different from T. For this reason, the correct way to measure the quality of the registration is to evaluate the ratio

R_SIM = SIM(T, T̂) / SIM(T, T'),

where SIM ∈ {SNR, NMI_S, NMI_α^T}; the theoretical maximum value is 1. If T can be completely reconstructed using a geometric transformation, the fitness threshold value is usually set above 0.8. In the case of significant perturbations, the threshold value τ_stop is set in [0.5, 0.6].
Since the proposed method is of stochastic type, each of the above measures is applied NR times to each pair of images, and the recorded result is the corresponding mean value. Consequently, if we denote by T̂_1, . . . , T̂_NR the images obtained when S is aligned using the geometric transformations computed by Algorithm 2, the accuracy measures are the means of the similarity indicators over the NR runs, and the computational effort is assessed by the mean of the corresponding runtimes t_1, . . . , t_NR. We used various parameter settings and uniform scaling factors to implement the proposed multi-scale memetic approach and to optimize the alignment accuracy and execution times.
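The SNR indicator and the relative accuracy ratio can be computed as follows (a sketch using one common SNR definition; the paper's exact normalization may differ):

```python
import numpy as np

def snr_db(t, t_hat):
    """Signal-to-noise ratio (dB) between the target T and the registered
    image T_hat (one common definition; infinite when the images coincide)."""
    t = np.asarray(t, dtype=float)
    t_hat = np.asarray(t_hat, dtype=float)
    err = np.sum((t - t_hat) ** 2)
    if err == 0.0:
        return float("inf")
    return 10.0 * np.log10(np.sum(t ** 2) / err)

def r_sim(sim_estimated, sim_exact):
    """Relative accuracy R_SIM = SIM(T, T_hat) / SIM(T, T'):
    the registration result measured against the best achievable
    reconstruction T'; the theoretical maximum is 1."""
    return sim_estimated / sim_exact

t = np.array([[10.0, 20.0], [30.0, 40.0]])
print(snr_db(t, t))  # inf: perfect reconstruction
```

Averaging these values over the NR runs gives the mean indicators reported in the tables.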
Below, we provide a summary of the registration results obtained for images belonging to the Yale Face Database [29]. The database consists of 165 monochrome face images of 15 persons, 11 samples for each. The spatial resolution for all images is 320 × 243 pixels. Tables 1-4 present results for 30 test images (two for each person). Figures 1-7 show a selection of images for two persons that includes target, perturbed, and aligned pictures and also binarized and scaled versions used during computations.
The results of applying Algorithm 2 are summarized below. Figures 1 and 2 show recorded, sensed, and registered images for two test samples, subjects 10 and 7. The corresponding numerical results are presented in rows 10 and 7 of Tables 1–4.

The proposed alignment procedure computes an approximation of the perturbation parameter vector in the search space narrowed down by the stretching factor s1 = 4, while significantly larger scaling values s2 are used to generate the initial population and to prevent the search from becoming stuck in a local optimum. In our work, the scaling parameter s2 was between 11 and 15.
The parameter settings follow [19][20][21], which constitute a de facto standard. Figure 5 presents the alignment results of Algorithm 2. The proposed algorithm yielded a perfect success rate, correctly aligning all the test image pairs. Note that Algorithm 1 also correctly aligns the considered images, but its recorded runtimes are substantially larger than those obtained by the proposed method. The numeric results reported below refer to the mean value and the standard deviation of the runtimes computed for Algorithms 2 and 1, respectively. The data in Table 1 prove that Algorithm 2 is significantly faster than Algorithm 1.
The mean values and the standard deviation values computed for the accuracy measures are displayed in Tables 2-4. Note that we used α = 1.2 to compute the Tsallis mutual information. The maximum value of the functions defined by (31) is 1, but due to rounding and computation errors, slightly larger values may be obtained.
Additionally, in order to derive comprehensive conclusions regarding the performance of Algorithm 2, we tested it against two classical methods for monomodal image registration: the regular step gradient descent optimization (RS-GD) based on the mean squares (MS) image similarity metric [30,31], and the Principal Axes Transform (PAT) [18]. RS-GD-based registration adjusts the geometric transformation parameters so that the considered metric evolves toward its extremum. PAT is an image registration technique based on features automatically extracted from images, where the image features are defined by the corresponding sets of principal axes. The accuracy results of all tested methods are reported in Tables 2–4. The mean and standard deviation values of RSNR correspond to Algorithm 2, and the RSNR values recorded for PAT and RS-GD are provided in Table 2. In addition, Table 2 shows the success ratios of Algorithm 2 and whether the classical methods managed to correctly align the tested pairs of images. Note that, in the case of severely perturbed sensed images, both classical methods may misregister the inputs. The resulting success rate of PAT is only 26.7%, while for RS-GD it is 53.3%. Algorithm 2 had 100% accuracy, correctly registering all tested images in all runs.
The mean and standard deviation values of RNMI S and RNMI T α computed in the case of Algorithm 2 are displayed in Tables 3 and 4, respectively. The tables also present the values of RNMI S and RNMI T α corresponding to PAT and RS-GD methods. The numerical results indicate that Algorithm 2 produces more accurate results than PAT and RS-GD in the light of all informational and quantitative indicators used. In addition, the new method is considerably faster than the method reported in [1].

Conclusions
The aim of the paper was to propose a new comprehensive multi-scale method that extends the approach reported in [1] to obtain accurate and efficient registration algorithms. The input images were pre-processed by a noise-insensitive edge detector to obtain binarized versions, i.e., the sets containing contour pixels. Isotropic scaling transformations were used to compute multi-scale representations of the binarized inputs. The registration was then carried out in different reduced representations to obtain promising initial solutions and to identify search directions leading to the global optimum. The process combined bio-inspired and evolutionary computation techniques with clustered search and implemented a procedure specially tailored to address the premature convergence issue.
A long series of tests involving monochrome images was conducted to ascertain meaningful conclusions regarding the registration capabilities of the proposed method. The experiments involved accuracy and efficiency measures, expressed in terms of SNR, Shannon mutual information, Tsallis entropy, and runtime. We compared Algorithm 2 against the basic method introduced in [1] and two of the most commonly used alignment procedures for monomodal images, namely the regular step gradient descent optimization based on the MS image similarity metric and PAT registration. In terms of accuracy, Algorithm 2 is similar to Algorithm 1, with a success rate of 100%, which means that it always managed to correctly align the input images. In contrast, both RS-GD registration and PAT alignment failed to solve the problem for severely perturbed sensed images, their corresponding success rates being far below 100%. In terms of efficiency, there were significant improvements over Algorithm 1, with the proposed method being at least two times faster.
The experimentally established results validate the proposed method and open the path for further developments and extensions to more complex transformations. Metaheuristics involving other promising bio-inspired techniques, such as the flower pollination algorithm, cuckoo search, and the bat algorithm, will be considered for the population-based optimization component. In addition, an experimental study on the influence of parameter values on the performance of the proposed method is in progress.