Percentile-Based Adaptive Immune Plasma Algorithm and Its Application to Engineering Optimization

The immune plasma algorithm (IP algorithm or IPA) is one of the most recent meta-heuristic techniques; it models the fundamental steps of immune or convalescent plasma treatment, which attracted researchers' attention once more with the COVID-19 pandemic. The IP algorithm determines the number of donors and the number of receivers when two specific control parameters are initialized and keeps these values fixed until termination. However, determining appropriate values for the control parameters, adjusting the number of donors and receivers, and guessing how they interact with each other are difficult tasks. In this study, we determined the number of plasma donors and receivers with an improved mechanism that divides the whole population into two sub-populations using a statistical measure known as the percentile, and a novel variant of the IPA called the percentile IPA (pIPA) was introduced. To investigate the performance of the pIPA, 22 numerical benchmark problems were solved by assigning different values to the control parameters of the algorithm. Moreover, two complex engineering problems, one requiring the filtering of noise from a recorded signal and the other the path planning of an unmanned aerial vehicle, were solved by the pIPA. Experimental studies showed that the percentile-based donor–receiver selection mechanism significantly contributed to the solving capabilities of the pIPA and helped it outperform well-known and state-of-the-art meta-heuristic algorithms.


Introduction
Ease of implementation, relatively simple and flexible structures, gradient-free direction-search mechanisms, and sufficient local-minima-avoidance capabilities have increased the usage of meta-heuristics for solving different types of numerical or combinatorial optimization problems in recent years [1,2]. Even though there are various classification criteria for meta-heuristic algorithms, they are usually categorized by considering the kind of intelligence or phenomena being modeled [3,4]. Meta-heuristics mimicking natural selection, crossover, mutation, or similar biological processes are generally referred to as evolutionary algorithms [5]. The genetic algorithm (GA) [6], differential evolution (DE) [7,8] algorithm, and evolutionary strategies (ES) [9] are the most famous evolutionary algorithms. Similar to these algorithms, population-based incremental learning (PBIL), proposed by Baluja, is another well-studied evolutionary technique that tries to empower the existing Darwinian operations with competitive learning [10]. The biogeography-based optimizer (BBO) proposed by Simon is also an evolutionary meta-heuristic that considers the distribution of biological species from one habitat to another via migration, and how species arise and fade away, to model the exploration and exploitation phases of a robust search [11,12].
The second group of meta-heuristics, also called swarm-intelligence (SI)-based meta-heuristics, considers the various behaviors of creatures such as ants, birds, bees, moths, bats, flowers, and even humans [13]. One of the most successful swarm-intelligence meta-heuristics is the ant colony optimization (ACO) algorithm proposed by Dorigo and Caro [14]. The intelligent communication characteristics and food-source-finding capabilities of ants were used as a guide to design the ACO algorithm [14]. Particle swarm optimization (PSO) is another successful swarm-intelligence meta-heuristic in which the collective movements of bird flocking or fish schooling are referenced [15]. Krishnanand and Ghose tried to model how a glowworm attracts its companions, resulting in glowworm swarm optimization (GSO) [16]. The brood reproduction or parasitism of cuckoo birds gave inspiration to Yang and Deb, who introduced the cuckoo search (CS) algorithm [17]. The flashing nature of fireflies also gave inspiration to Yang in the development of the firefly algorithm (FA) [18]. The meta-heuristics introduced by Yang are not limited to the CS and FA. The bat algorithm (BA) [19], modeling the advanced echolocation properties of bats, and the flower pollination algorithm (FPA) [20], based on the self- and cross-pollination of flowers, were also announced in studies by Yang. The foraging habits of honeybees were analyzed by Karaboga, who presented the artificial bee colony algorithm (ABC for short) [21,22]. The gray wolf optimizer (GWO) algorithm was designed by Mirjalili et al. after investigating the hierarchy and hunting methods of gray wolves [23]. Mirjalili considered how moths navigate and fly at night and proposed the moth-flame optimization (MFO) algorithm [24]. Mirjalili also directly contributed to the development of the ant lion optimizer (ALO) [25], sine cosine algorithm (SCA) [26], multi-verse optimizer (MVO) [27], salp swarm algorithm (SSA) [28], Harris hawk optimizer (HHO) [29], and slime mold algorithm (SMA) [30]. Satapathy and Naik focused on the problem-solving concept of the social behavior of human beings, and social group optimization (SGO) was presented [31]. Tree social relations (TRS), introduced by Alimoradi et al. [32] after analyzing the collective and hierarchical life of trees, the gannet optimization algorithm (GOA) of Pan et al. [33], developed based on the unique foraging characteristics of gannets, and the orchard algorithm (OA) developed by Kaveh et al. to model fruit-gardening procedures [34] are other recent meta-heuristics. The spotted hyena optimizer (SHO) [35], which mimics the collaborative hunting methods of the spotted hyena, the emperor penguin optimizer (EPO) [36], which was inspired by the huddling behavior of emperor penguins, and the seagull optimization algorithm (SOA) [37], which references how seagulls attack their prey, are recent competitive meta-heuristics proposed by Dhiman and Kumar. A special kind of sea bird called the sooty tern was investigated by Dhiman and Kaur, and the sooty tern optimization algorithm (STOA) was announced [38]. Although the migration behavior of the abovementioned birds provided a steady exploration capability for the STOA, their spiral attacking method towards prey was modeled carefully to increase the exploitation capability of the same algorithm [38]. The tunicate swarm algorithm (TSA) is another SI-based meta-heuristic introduced as a result of studies by Dhiman and Kaur [39]. The main motivation behind the TSA was modeling the survival capacity of tunicates living in the depths of the ocean [39]. Experimental studies carried out with almost 100 test cases showed that the TSA is a strong optimizer and can be used successfully for different types of optimization problems [39].
Another group of meta-heuristics mainly focuses on modeling the fundamental steps of physical laws. Birbil and Fang proposed the electromagnetism-like algorithm (EMA), guided by the basics of electromagnetism [40]. The gravitational forces between masses became the source of motivation for Rashedi et al., and the gravitational search algorithm (GSA) was introduced [41]. Gravitational forces were interpreted differently by Formato, and central force optimization (CFO) was developed [42]. The ray optimization (RO) algorithm, which simulates Snell's law describing the relationship between incident and refracted rays, was outlined by Shen and Li [43]. Cuevas et al. considered the transitions between the solid, liquid, and gas phases of matter, and the state of matter search (SMS) was presented [44]. The interactions between positive and negative ions were referenced by Javidy et al. when the ions motion (IMO) algorithm was designed [45]. Savsani and Savsani focused on the mathematics of passing vehicles on a two-lane highway, and passing vehicle search (PVS) was developed as a new meta-heuristic [46]. Azizi proposed the atomic orbital search (AOS) algorithm, for which the principles of quantum mechanics and the quantum-based atomic model relating electrons and the nucleus were considered [47].
The intelligent behaviors of species, biological or evolutionary processes, and physical laws modeled by meta-heuristic algorithms are remarkably diverse, as can easily be seen from the briefly summarized literature [48-50]. One of the most recent meta-heuristics showing how varied the natural phenomena guiding these problem-solving techniques can be is the immune plasma algorithm (IP algorithm or IPA) [51]. The IPA solves an optimization problem with phases inspired by a medical method called immune or convalescent plasma treatment [51]. Even though immune plasma treatment mainly depends on executing a relatively simple process, in which the antibody-rich part of the blood taken from a previously recovered patient, the donor, is transferred into a critical one, the receiver, its efficiency and practical usage were proven again with the ongoing COVID-19 pandemic. In the standard implementation of the IPA, the numbers of donors and receivers are determined when the control parameters are initialized and remain unchanged until the end of the execution [51]. However, rather than assigning two different values to the number of donors and the number of receivers and guessing their interactions for each problem and running configuration, a simplified but effective method should be found and integrated into the workflow of the IPA. In this study, by considering this requirement about the IPA:

• A new donor and receiver selection mechanism based on a statistical metric known as the percentile was proposed.

• The new donor and receiver selection mechanism adjusts the number of donors and receivers in an adaptive manner owing to the percentile description, so the control parameters used in the standard IPA are not required.

• Because of the adaptive adjustment of donors and receivers at each infection cycle, the density of exploration- and exploitation-dominant operations is calibrated more robustly, and the solving capability increases.
The new IPA variant determining the number of donors and the number of receivers with the proposed approach was called the percentile IPA, or pIPA. To analyze how the percentile-based selection mechanism affects the overall solving performance of the pIPA, a set of detailed experiments was carried out using 22 numerical benchmark problems and two challenging engineering problems, the former a big-data optimization problem requiring noise minimization and the latter a path planning problem for an unmanned aerial system. The detailed experiments and comparative studies showed that the pIPA was capable of obtaining better solutions than the other considered algorithms for most of the test cases. The rest of the paper is organized as follows: The fundamental properties of the IPA are summarized in Section 2. The newly proposed donor-receiver selection mechanism is introduced in Section 3. Details of the experimental studies, their results, and related interpretations are given in Sections 4 and 5. Finally, the conclusion and possible future work on the IPA and pIPA are presented in Section 6.

Immune Plasma Algorithm
The immune system is responsible for starting and managing a set of sophisticated defense operations with the lymphoid organs and the T and B lymphocytes to find and destroy antigens, which are actually parasites, viruses, or parts of them causing an infection [51]. The B lymphocytes, or B cells, have receptors recognizing and binding specific antigens. When B lymphocytes bind to their specific antigens, they call upon T lymphocytes. T lymphocytes contribute to the multiplication of the B cells. In addition to this, T lymphocytes mature the B cells into plasma cells [51]. Plasma cells, similar to T and B lymphocytes, have an important role in the immune system. Each plasma cell is regulated for synthesizing an antigen-specific protein called an antibody [51]. Antibodies can be free-floating in the blood or found on the membranes of different immune-system cells. Moreover, an antibody in both forms can bind its specific antigen to limit the interaction between this antigen and healthy cells [51]. Antibodies increase slowly with the start of an infection and then reach a peak level [51]. However, in cases of immune-system diseases or disorders, the required number of antibodies cannot be produced. For infected individuals who are suffering from immune-system diseases or disorders, the antibody-rich part of the blood of patients who have recovered shortly before can be a valuable source. Using the antibody-rich part of the blood, also called plasma, is the main motivation of immune or convalescent plasma treatment [51] for critical patients. Even though the idea lying behind immune plasma treatment is relatively simple, the efficiency of its biologically strong and evident implementation steps has been proven for H1N1, SARS, MERS, Ebola, and SARS-CoV-2, namely the COVID-19 infection [51].
The properties of immune plasma treatment also guided researchers, and a new meta-heuristic method known as the IP algorithm or IPA was proposed [51]. In the IP algorithm, each individual is assumed to be a solution to the problem being solved. The immune-system response or antibody level of an individual is directly matched with the quality or appropriateness of the solution in terms of its objective function value [51]. The infection is distributed between individuals, and immune-system responses are determined. By controlling the immune-system responses of the individuals, some of them are considered to be critical and become receivers, while some of them become donors and contribute to the treatment operations of the critical individuals with their plasma [51]. Until reaching a predetermined termination condition, the IP algorithm continues to spread infection between individuals to explore the search space of the problem, select the donor and receiver individuals, and then apply plasma transfer to balance exploration with exploitation. The subsections given below describe the detailed workflow of the IP algorithm.

Details of Infection Distribution
The IP algorithm starts its operations by assigning initial values to the individuals in the population of size PS [51]. Assume that the IPA tries to solve a D-dimensional problem. The initial value of the jth parameter of the x_k individual is calculated using Equation (1) [51]. In Equation (1), k is an index ranging from 1 to PS, and the lower and upper bounds of the jth parameter are equal to x_j^low and x_j^high. Also, rand(0, 1) corresponds to a random number generated between 0 and 1 for each calculation [51].
An infection can easily spread among individuals through droplets containing antigens. To describe how the randomly selected x_m individual affects x_k and triggers its immune system, Equation (2) is employed by the IP algorithm [51]. While x_k^inf represents the infectious x_k individual, x_kj^inf is used on behalf of the randomly selected jth parameter of x_k^inf in Equation (2). It should be noted that the x_k^inf individual is the same as x_k except for the jth parameter. Also, x_kj and x_mj are matched with the jth parameters of the x_k and x_m individuals. Finally, rand(−1, 1) is used on behalf of a random number between −1 and 1.
As stated earlier, the immune-system responses or antibody levels of the individuals are directly matched with the corresponding objective function values. For a minimization problem with an objective function f, if the immune-system response or antibody level of the infectious x_k individual, denoted f(x_k^inf), is less than the immune-system response or antibody level of the same individual before the infection, denoted f(x_k), it is assumed that x_k recognizes the infection triggered by x_m and updates its immune system for the same or a similar infection as in Equation (3) [51]. Otherwise, the immune system of x_k remains unchanged.
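Equations (1)–(3) are referenced above but not reproduced in this excerpt. The sketch below follows the prose description only; the exact perturbation form used in Equation (2) (one randomly chosen parameter moved by rand(−1, 1) times the difference to the infecting individual) is an assumption and may differ from the published formula.

```python
import random

def initialize(ps, d, low, high):
    # Equation (1): every parameter is drawn uniformly within its bounds.
    return [[low[j] + random.random() * (high[j] - low[j]) for j in range(d)]
            for _ in range(ps)]

def infect(x_k, x_m):
    # Assumed reading of Equation (2): x_k^inf equals x_k except for one
    # randomly selected parameter j, which is perturbed using x_m.
    x_inf = list(x_k)
    j = random.randrange(len(x_k))
    x_inf[j] = x_k[j] + random.uniform(-1.0, 1.0) * (x_k[j] - x_m[j])
    return x_inf

def spread_infection(pop, f):
    # Equation (3): the infectious copy replaces x_k only if it improves
    # the objective function value (minimization); otherwise x_k is kept.
    for k in range(len(pop)):
        m = random.choice([i for i in range(len(pop)) if i != k])
        candidate = infect(pop[k], pop[m])
        if f(candidate) < f(pop[k]):
            pop[k] = candidate
```

Because the infectious copy is accepted only on improvement, the infection phase can never worsen an individual's antibody level.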

Details of Plasma Treatment
After infecting all individuals in the population, the IPA first decides how many individuals will be donors and how many will be receivers. For this purpose, it describes two different control parameters known as NoR and NoD [51]. While NoR is the abbreviation of the number of receivers, NoD represents the abbreviation of the number of donors. The values of the NoR and NoD parameters are assigned when the IPA is initialized, and the NoR worst individuals are treated with the plasmas of the NoD best individuals [51]. If x_k^rcv is the k-indexed receiver in the set of receivers of size NoR and x_m^dnr is the randomly determined donor in the set of donors of size NoD, the plasma of x_m^dnr is transferred into x_k^rcv using Equation (4) given below [51]. It should be noted that each parameter of x_k^rcv is modified with the corresponding parameter of x_m^dnr, as guided by Equation (4).
The immune-system response or antibody level of x_k^rcv after the first dose of plasma is important for deciding whether a second dose of plasma will be transferred or not. If f(x_k^rcv−p), showing the antibody level of x_k^rcv after the first dose of plasma, is less than f(x_m^dnr), x_k^rcv is replaced with x_k^rcv−p, and a second dose of plasma from x_m^dnr is prepared for transfer. Otherwise, x_k^rcv is replaced with x_m^dnr to guarantee that a single plasma dose is given, and the treatment is ended [51]. When the IP algorithm decides to apply the second dose of plasma, it first determines the new x_k^rcv−p and then compares f(x_k^rcv−p), showing the antibody level of x_k^rcv after the second dose of plasma, with the current f(x_k^rcv) value; if the antibody level has improved, a third dose of plasma from x_m^dnr is prepared for transfer. Otherwise, the plasma treatment is ended for x_k^rcv [51]. The IP algorithm controls and modifies the immune memories of the donors by considering the ratio between t_cr and t_max after the plasma treatment is completed for all receivers. While t_cr shows the current evaluation or calculation number of the objective function, t_max corresponds to the maximum evaluation or calculation number of the objective function. If a random number produced between 0 and 1 is less than t_cr/t_max, each parameter of x_m^dnr, where m ranges from 1 to NoD, is updated using Equation (5) [51]. If the mentioned random number is equal to or higher than t_cr/t_max, each parameter of x_m^dnr is re-initialized with Equation (1) [51]. As easily seen from the model used to control and update the donors, the probability of producing a random number less than t_cr/t_max increases as the search proceeds, and the x_m^dnr donor is then modified only slightly, as in Equation (5) [51].
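The donor-update rule at the end of this subsection can be sketched as follows. Equation (5) is not reproduced in this excerpt, so the "slight modification" branch below uses an assumed perturbation around the donor's current parameter values; only the branching on t_cr/t_max is taken directly from the text.

```python
import random

def update_donors(donors, t_cr, t_max, low, high):
    # After all receivers are treated, each donor's immune memory is either
    # refined slightly or reset, depending on how far the search has run.
    for x in donors:
        if random.random() < t_cr / t_max:
            # Assumed Equation (5)-style slight modification: perturb every
            # parameter around its current value (exact form not shown here).
            for j in range(len(x)):
                x[j] = x[j] + random.uniform(-1.0, 1.0) * x[j]
        else:
            # Equation (1): the donor is re-initialized within its bounds.
            for j in range(len(x)):
                x[j] = low[j] + random.random() * (high[j] - low[j])
```

As t_cr approaches t_max, the probability of taking the slight-modification branch grows, so donors are mostly refined rather than reset late in the run, exactly as the paragraph above observes.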

Modified Donor-Receiver Selection Mechanism
The standard implementation of the IPA determines the number of donors and the number of receivers at initialization, and their values are kept until the end of execution. Selecting the NoD best individual or individuals from the population and using them as plasma sources for the NoR worst individual or individuals significantly contributed to the local search capability of the IPA, and experimental studies showed that the value of the NoD parameter should be chosen equal to or less than the value of the NoR parameter [51]. However, it should be noted that determining appropriate values for both the NoD and NoR parameters, guessing their effect on the solving capabilities of the algorithm, and guessing the interactions between them are difficult. Rather than assigning static values to the NoD and NoR parameters, a more convenient, implicitly self-adjustable, and simplified approach that removes the need for these control parameters can be proposed and integrated into the workflow of the IP algorithm.
The percentile is one of the most commonly used metrics in order statistics [52]. It helps to indicate where a specific value falls within a distribution of a set of values and to understand the relative standing of that value. Assume that the element x is at the kth percentile. Under this assumption, it can be said that the element x is greater than k% of the other elements of the same set. In addition to helping evaluate the relative standing of a given element within a distribution, the percentile also provides a method for dividing a dataset into partitions [53]. If the element x is at the kth percentile, the dataset is divided into two partitions: the first partition contains the k percent of the whole dataset whose elements are less than x, while the second partition contains the rest of the initial dataset, in which each element is equal to or greater than x. To decide which element of the dataset will be chosen as x and placed at the kth percentile, the elements of the dataset are first sorted in ascending order, and then Equation (6) given below is utilized [54]. In Equation (6), N shows the number of elements in the dataset, and r corresponds to the index or rank of the element that will be chosen from the sorted dataset on behalf of x at the kth percentile.
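Equation (6) itself is not reproduced in this excerpt. A common nearest-rank formulation, used here as an assumption, computes the 1-based rank as r = ceil(k/100 × N):

```python
import math

def percentile_rank_index(n, k):
    # Assumed nearest-rank rule for Equation (6): 1-based rank r of the
    # element at the kth percentile in an ascending-sorted dataset of size n.
    return math.ceil(k / 100.0 * n)

def value_at_percentile(data, k):
    # Sort ascending, then return the element standing at rank r.
    ordered = sorted(data)
    r = percentile_rank_index(len(ordered), k)
    return ordered[r - 1]
```

For example, in the dataset {15, 20, 35, 40, 50}, the 30th percentile gives r = ceil(1.5) = 2, so the chosen element is 20, and the dataset splits into {15} below it and {20, 35, 40, 50} at or above it.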
When the properties of the percentile metric regarding the intrinsic division of a dataset are considered, it is seen that the population of the IP algorithm can be partitioned into two groups using the percentile calculation. One of the groups is devoted to the possible donor or donors, while the other is related to the possible receiver or receivers; a new variant of the IP algorithm named the percentile IP algorithm (pIPA) can thus be introduced. In the pIPA, rather than determining the number of donors and the number of receivers separately by assigning values to both the NoD and NoR control parameters, the possible donor and receiver individuals are determined with a new and single control parameter denoted prc. The prc parameter is actually used to find the individual that is at the prc-th percentile. For a minimization problem, the pIPA first sorts the individuals in the population of size PS according to their objective function values, and the r index and the x_r individual are determined as in Equation (6) by replacing k with prc. If the objective function value of the x_i individual, where i ranges from 1 to PS, is less than or equal to the objective function value of x_r, the x_i individual becomes a donor candidate, and it is added to the set of donors. Otherwise, the x_i individual becomes a receiver candidate, and it is added to the set of receivers. By considering the relationship between x_r and the other individuals in the population, it is seen that nearly prc percent of the population becomes receiver candidates, and (100 − prc) percent of the population becomes donor candidates. The workflow of the percentile-based donor-receiver selection strategy is summarized visually in Figure 1.

The steps of the strategy illustrated in Figure 1 are as follows:
1. Before selecting the x_r individual, the whole population is sorted from worst to best by considering the objective function values.
2. The x_r individual at the prc-th percentile and the r index are determined with Equation (6). By using the x_r individual, the whole population is divided into two sub-populations.
3. Each donor is matched with a unique and randomly selected receiver if there are fewer donors than receivers, and plasma transfer from the donor to its receiver is carried out.
After generating the set of donors and the set of receivers by utilizing the value assigned to the prc parameter, the pIPA controls the number of possible donors and receivers. If there are more donors than receivers, each receiver is matched with a unique and randomly selected donor, and plasma transfer is carried out for the receivers as in the standard IPA. In Algorithm 1, the plasma transfer operations are summarized for the case in which there are more donors than receivers. If there are more receivers than donors, each donor is matched with a unique and randomly selected receiver, and plasma transfer from the donor to its receiver is carried out as in the standard IP algorithm. Algorithm 2 states the donor and receiver selection and plasma transfer operations of the pIPA for the scenarios in which there are more receivers than donors or an equal number of donors and receivers.
Even though the value assigned to prc remains unchanged until the end of the execution, the number of possible donors and receivers can vary because of the definition of the percentile and the objective function values of the individuals in the population. If there are individuals whose objective function values are the same as the objective function value of x_r, more than (100 − prc) percent of the population can belong to the set of possible donors. As an expected result of this situation, less than prc percent of the population can belong to the set of possible receivers, and the number of donors and the number of receivers are adjusted dynamically. Another important situation that should be considered is the equivalence of the x_r individual with the remaining individuals of the population in terms of objective function values. If the objective function value of x_r is equal to the objective function values of all the remaining individuals in the population, the set of possible receivers is empty, and the pIPA continues spreading infection without applying plasma transfer operations.
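The percentile-based split and the donor-receiver matching described above can be sketched as follows. Individuals are represented as plain parameter lists with f as the objective function; the nearest-rank reading of Equation (6) is again an assumption, as is the use of ties on f(x_r) to enlarge the donor set.

```python
import math
import random

def split_population(pop, f, prc):
    # Sort from worst to best (descending f for minimization), locate x_r at
    # the prc-th percentile via the assumed nearest-rank rule, then split:
    # equal to or better than x_r -> donor candidate, worse -> receiver.
    ordered = sorted(pop, key=f, reverse=True)
    r = math.ceil(prc / 100.0 * len(ordered))
    threshold = f(ordered[r - 1])
    donors = [x for x in pop if f(x) <= threshold]
    receivers = [x for x in pop if f(x) > threshold]
    return donors, receivers

def match_pairs(donors, receivers):
    # Pair each member of the smaller set with a unique, randomly selected
    # member of the larger set; with no receivers, no plasma is transferred.
    if not receivers:
        return []
    if len(donors) >= len(receivers):
        return list(zip(random.sample(donors, len(receivers)), receivers))
    return list(zip(donors, random.sample(receivers, len(donors))))
```

Because the donor test uses "less than or equal to" f(x_r), any individuals tied with x_r join the donor side, which is exactly how the number of donors and receivers drifts from cycle to cycle while prc itself stays fixed.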
Similar to the standard implementation of the IPA, the pIPA also differs from most meta-heuristics when the number of evaluations or objective function calls per cycle, iteration, or generation is considered. The NoD and NoR parameters of the standard IPA and the prc parameter of the pIPA can change the number of evaluations from one cycle to another because of the possible repetition of the plasma treatment for receivers. Even though the number of evaluations consumed per cycle changes in both the IPA and pIPA, they complete their operations when the maximum evaluation number, abbreviated as t_max in the previous section, is reached. When the IPA and pIPA are employed to solve a D-dimensional problem for which the complexity of calculating the objective function is estimated as O(D) using big-O notation, the running time of the IPA and pIPA becomes equal to O(t_max × D). This description of the running time of the IPA and pIPA in terms of t_max and the complexity of the objective function calculation can guide comparisons with other meta-heuristics.
A more specialized analysis of the running time of the pIPA can be made by considering the cost of newly added or existing operations such as the distribution of infection, the selection of the x_r individual, the treatment of the receiver or receivers, and the modification of the donor or donors. When the pIPA with PS individuals starts solving a D-dimensional problem for which the complexity of calculating the objective function is O(D), the cost stemming from the distribution of infection is found to be O(PS × D). After completing the distribution of infection, the pIPA divides the whole population by considering the value assigned to prc. All individuals are first sorted by a sorting algorithm whose complexity is equal to O(PS × log2 PS), and the number of possible receivers, shown as Rc, and the number of possible donors, shown as Dn, are determined. If Rc is equal to zero, i.e., there is no receiver in the population, the complexity of this cycle is defined as O(PS × (D + log2 PS)). Otherwise, the cost of giving one dose of plasma to each receiver and modifying the donor or donors is found to be O(Rc × D^2 + Dn × D^2), and the running time of the pIPA for a cycle becomes equal to O(PS × D + PS × log2 PS + D^2 × (Rc + Dn)). Because the sum of Rc and Dn is equal to PS, the running time of the pIPA can be shown as O(PS × (D + log2 PS + D^2)) or simply O(PS × D^2) by utilizing the properties of the asymptotic notation used and the dominance of the D^2 term.
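Written out step by step, the per-cycle cost composition described above simplifies as follows:

```latex
\begin{align*}
T_{\text{cycle}}
  &= \underbrace{O(PS \cdot D)}_{\text{infection}}
   + \underbrace{O(PS \log_2 PS)}_{\text{sorting}}
   + \underbrace{O\bigl((Rc + Dn)\, D^2\bigr)}_{\text{treatment and donor update}} \\
  &= O\bigl(PS \,(D + \log_2 PS + D^2)\bigr) && (\text{since } Rc + Dn = PS) \\
  &= O(PS \cdot D^2) && (D^2 \text{ dominates both } D \text{ and } \log_2 PS).
\end{align*}
```

The last step assumes D^2 ≥ log2 PS, which holds for the problem sizes considered here (D ≥ 100, PS = 30).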

Experimental Studies
The possible contribution of the percentile-based donor-receiver selection strategy can vary with the values assigned to the control parameters, such as the population size, maximum evaluation number, dimensionality, and prc, and with the types of optimization problems. To provide a clear vision of how the newly proposed strategy changes the solving capabilities, the experimental studies were divided into four major subsections. In the first subsection, 100- and 200-dimensional classical numerical problems were solved with the pIPA by assigning different values to prc. The results obtained by the pIPA were also compared with a set of meta-heuristics including the IPA [51], PSO [55], GSA [41], CS [17], BA [19], FPA [20], SMS [44], FA [18], GA [6], MFO [24], and ALO [25]. The second subsection of the experimental studies was devoted to the investigation of the capabilities of the pIPA in solving complex numerical problems first introduced at the CEC 2015. The results of the pIPA for the CEC 2015 benchmark problems were compared with the IPA [51], SOA [37], SHO [35], GWO [23], PSO [55], MFO [24], MVO [27], SCA [26], GSA [41], GA [6], and DE [56]. In the third subsection of the experimental studies, a recent real-world engineering problem that requires optimally splitting a source signal into noise and noise-free parts was solved with the pIPA, and comparisons between the pIPA and other well-known meta-heuristics such as the IPA [51], GA [6], PSO [55], DE [56], ABC [57], GSA [41], MFO [24], SCA [26], SSA [28], and HHO [29] were carried out. In the last subsection, the pIPA was also used to solve another real-world problem in which the path of an unmanned aerial vehicle (UAV) or an unmanned combat aerial vehicle (UCAV) is to be determined by considering the enemy threats and fuel consumption. The results of the pIPA for the path planning problem were compared with the IPA [51], BA [58], BAM [58], ACO [59], BBO [59], DE [59], ES [59], FA [59,60], GA [59], MFA [59,60], PBIL [59], PSO [59], SGA [59], and PGSO [59]-based approaches.

Solving Classical Benchmark Problems with pIPA
The benchmark problems or functions whose formulations and lower and upper bounds are given in Table 1 were solved with the pIPA. The global minimum values of all these problems except f8 are equal to zero. For f8, the global minimum value is calculated as −D × 418.98, where D corresponds to the number of parameters, as stated before. When solving the 100-dimensional benchmark problems given in Table 1, the population size and maximum evaluation number were set to 30 and 30,000, respectively [24]. To analyze how the qualities of the solutions change with the values assigned to prc, nine positive integers, namely 30, 35, 40, 50, 60, 70, 80, 90, and 95, were used. The pIPA with the mentioned configurations was tested 30 times for each problem instance using different random seeds. The objective function values of the best solutions found in each of the 30 runs were averaged and reported in Table 2 with the related standard deviations.

[Table 1: name, range, and formulation of each benchmark function; e.g., the Step function f6 on [−100, 100]. Table 2: mean best objective function values and standard deviations for each prc value; all entries for f6 are 0.0000 × 10^0.]

The results reported in Table 2 give important information about the pIPA and the appropriate values of the prc parameter. Although the pIPA obtains the global minimum solutions with all nine different values assigned to prc for the f6, f9, and f11 functions, it finds relatively close mean best objective function values for the f5 function with all of the constants assigned to prc. As distinct from the global minimums of the f6, f9, and f11 functions, the global minimum of the f5 function is located at the end of a long and narrow valley, and converging to it is extremely difficult. For this main reason, meta-heuristic algorithms usually require subtly configured control parameters and more evaluations for the mentioned function. However, it should be noted that the percentile-based donor-receiver selection mechanism adjusts the workflow and contributes to the convergence of the pIPA even when prc is changed. For the f1, f2, f8, and f12 functions, the qualities of the solutions found by the pIPA get better or change only slightly when the value assigned to prc increases from 30 to 80 or even 90. Similar generalizations can also be made for the f3, f4, f7, and f10 functions by considering a small set of prc values: the qualities of the solutions found by the pIPA get better or change only slightly for these functions when prc increases from 30 to 40.
As stated before, the number of donors and receivers in pIPA can differ at each cycle while prc remains unchanged. To analyze whether the number of donors and the number of receivers change when the initial value of prc is preserved until the end of a run, they were first counted at each cycle, averaged, and then recorded. After completing 30 independent runs, the numbers of donors and receivers recorded for each run were averaged again and presented in Table 3. When the results given in Table 3 are examined, it is seen that the newly proposed mechanism is capable of changing or adjusting the number of donors and receivers for the f4, f6, f9, f10, and f11 functions. It tries to increase the number of donors and decrease the number of receivers while the value of prc is protected. By changing the number of donors and the number of receivers without increasing or decreasing the initial value of prc, pIPA also has a chance to adjust the execution of the exploration- and exploitation-dominant phases explicitly.
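The pseudocode of the selection rule is not repeated here; as a rough illustration of the behaviour described above (set sizes changing from cycle to cycle while prc stays fixed, and larger prc values yielding fewer donors), one possible minimization-oriented sketch is:

```python
import numpy as np

def split_by_percentile(obj_values, prc):
    # Illustrative donor-receiver split (not the paper's exact rule):
    # for minimization, take the (100 - prc)-th percentile of the
    # objective values as a threshold; individuals at or below it become
    # donor candidates, the rest become receivers.  A larger prc thus
    # leaves fewer donors, and ties near the threshold can enlarge the
    # donor set even though prc itself never changes.
    obj_values = np.asarray(obj_values, dtype=float)
    threshold = np.percentile(obj_values, 100 - prc)
    donors = np.flatnonzero(obj_values <= threshold)
    receivers = np.flatnonzero(obj_values > threshold)
    return donors, receivers

values = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
donors_hi, _ = split_by_percentile(values, prc=90)  # few donors
donors_lo, _ = split_by_percentile(values, prc=50)  # more donors
```

Because the threshold follows the current distribution of objective values rather than a fixed count, the two sub-population sizes adapt automatically as the search converges.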
The newly proposed percentile-based donor-receiver selection strategy requires extra computational operations compared to the standard implementation of the IPA and changes the density of the exploration- and exploitation-dominant phases. To understand whether the usage of the percentile-based donor-receiver selection strategy increases the execution time of the algorithm, the duration of each run in seconds was recorded and then averaged after the completion of 30 independent tests of pIPA with different prc values. Likewise, the duration of each run in seconds was recorded and then averaged when 30 independent tests were completed for the standard IPA, whose NoD and NoR parameters were set to 1. Both pIPA and IPA were coded in the C programming language, and experiments were carried out on a PC equipped with a single-core 1.33 GHz processor.
From the average execution times and related standard deviations of pIPA and IPA given in Table 4, it is clearly seen that IPA requires less time than the pIPA with lower prc values. Moreover, there is a relationship between the execution time of the pIPA and the value assigned to prc: as prc is increased, the average execution time of the pIPA generally decreases. If prc is increased, the number of possible donors decreases, and plasma treatment is carried out for a small set of randomly determined receivers. Otherwise, the number of possible donors increases, more receivers are supported with the plasma treatment, and the execution time of the pIPA grows because of the computationally intensive operations of plasma transfer. However, it should be noted that when the difference between the number of donors adjusted with prc and NoD decreases, the difference between the average execution times of the pIPA and IPA also decreases. The contribution of the proposed mechanism can be understood by comparing the results of the pIPA with the results of other meta-heuristics. For this purpose, the results of the pIPA were compared with the results of the IPA [51], MFO [24], PSO [55], GSA [41], BA [19], FPA [20], SMS [44], FA [18], and GA [6]. To guarantee that all meta-heuristics obtained their results under the same conditions, their population sizes were set to 30, and the maximum evaluation number was taken equal to 30,000 [24,51]. Although the prc of the pIPA was 90 for f1, f2, f6, f8, f9, and f11, it was determined as 60 and 50 for f5 and f12, respectively. Moreover, the value of prc was equal to 40 for f3, f7, and f10 and equal to 30 for the f4 function. When the mean best objective function values and standard deviations belonging to the 30 independent runs of these algorithms in Table 5 are investigated, the superiority of the pIPA can be seen. For 10 of the 12 benchmark functions, pIPA
outperforms its competitors or obtains the same mean best objective function values. It only lags behind the IPA for the f8 function and the GSA for the f12 function and becomes the second-best algorithm among the tested meta-heuristics for these functions. The idea behind the pIPA manages donor and receiver selection operations more robustly than the standard implementation of the IPA by setting only one control parameter. In pIPA, the number of donors and receivers can be updated from one cycle to another while prc remains unchanged. Furthermore, even when the numbers of donors and receivers are the same for different cycles, donors and receivers are matched by a controlled-randomized approach, and receivers have a chance of treatment with the plasma of a different donor. Another comparison between pIPA and IPA was made for the convergence performances. To analyze and compare the convergence performances of meta-heuristic algorithms, there are two commonly used metrics, namely success rate and mean evaluation. If a run of the algorithm achieves a solution better than a threshold before the previously determined termination criteria are met, the run is said to be successful. The success rate is the ratio between the number of successful runs and the total number of runs. For each successful run, the minimum number of function evaluations required to achieve a solution better than the threshold is recorded; the average of these recorded values corresponds to the mean evaluation. The convergence comparison between pIPA and IPA was made by setting the threshold to 1 × 10^−25 for the f1, f2, f3, f6, f9, and f11 functions, 1 × 10^−10 for the f10 function, 1 × 10^−3 for the f7 function, 1 × 10^0 for the f12 function, 1 × 10^1 for the f4 function, 1 × 10^2 for the f5 function, and −1 × 10^4 for the f8 function, and then the success rate and mean evaluation metrics, abbreviated as Sr and Me, were summarized in Table 6. When these metrics given in Table 6 are
investigated, it is easily seen that the convergence performance of pIPA is more robust than that of IPA. Even though the Sr metrics of both pIPA and IPA are equal to 100% for the f1, f6, f8, f10, and f11 functions, the Me metric of pIPA is better than the Me metric of IPA. For all the remaining benchmark functions, pIPA outperforms the standard implementation of the IPA in terms of the convergence performance measured by Sr and Me. Figure 2 should also be viewed to investigate the changes in the convergence curves of the pIPA with varying prc values. The final comparison between pIPA and other meta-heuristics for 100-dimensional problems was carried out to decide whether the results of pIPA are enough to generate a statistical difference in favor of pIPA using the Wilcoxon signed rank test with the significance level equal to 0.05. If the significance level shown as ρ is less than 0.05, the difference between the two algorithms is said to be statistically significant in favor of one of them. Otherwise, the results obtained by the algorithms are not enough to decide which one is statistically superior. The statistical test results related to the pIPA and its competitors are given in Table 7. In Table 7, R+ and R− show the sum of positive ranks and the sum of negative ranks, respectively. Also, Z corresponds to the standardized test statistic. The results given in Table 7 show that the statistical difference between pIPA and MFO, PSO, GSA, BA, FPA, SMS, FA, or GA is in favor of pIPA. Only the decision about whether a statistical difference between pIPA and IPA exists cannot be made from the current results of the algorithms. The qualities of the final solutions, convergence performance, and statistical test results of the pIPA for 100-dimensional problems gave strong evidence of its capabilities. However, its capabilities should also be analyzed with another scenario in which population size,
dimensionalities of the problems, and termination criteria are changed. For this purpose, the benchmark functions given in Table 1 were solved by setting the population size of the pIPA to 100 and the number of parameters to 200 [25,51]. The maximum evaluation number was taken equal to 500,000 [25,51]. Nine positive integers, namely 30, 35, 40, 50, 60, 70, 80, 90, and 95, were assigned to prc, and pIPA was tested 30 times with random seeds for each problem instance and prc combination. The objective function values of the best solutions found at each of the 30 runs were averaged and reported in Table 8 with the related standard deviations.
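The success rate (Sr) and mean evaluation (Me) metrics used in the convergence comparisons above can be computed as in the following sketch; the run-log format (a pair per independent run) is an assumption:

```python
def sr_and_me(runs, threshold):
    # runs: list of (best_value, evals) pairs per independent run, where
    # evals is the evaluation count at which the run first beat the
    # threshold, or None if it never did before termination
    # (minimization assumed).
    evals_of_successes = [e for best, e in runs
                          if e is not None and best < threshold]
    sr = 100.0 * len(evals_of_successes) / len(runs)  # success rate in %
    me = (sum(evals_of_successes) / len(evals_of_successes)
          if evals_of_successes else float("nan"))    # mean evaluation
    return sr, me

# Three of four hypothetical runs beat the 1e-25 threshold; Me averages
# the evaluation counts of only those successful runs.
runs = [(1e-30, 4200), (1e-27, 5100), (5e-20, None), (9e-26, 4800)]
```

Note that Me says nothing about the failed runs, which is why Sr and Me are always reported together.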
When the results given in Table 8 are investigated, it is seen that the trend of the pIPA with different prc values for 200-dimensional problems is similar to its trend for 100-dimensional problems. The pIPA obtains the global minimum solutions with the different values assigned to prc for the f1, f6, f9, and f11 functions. Moreover, it finds almost the same mean best objective function values for the f5 function with all nine different values of prc, as in the previous experimental settings. For the f2, f9, and f12 functions, pIPA obtains better or slightly changed solutions when the value assigned to prc increases from 30 to 80 or even 90. Similar generalizations can also be made for the f3, f4, and f10 functions by considering prc increasing from 30 to 40 and for the f7 function by considering prc increasing from 60 to 90. However, it should be noted that more robust solutions for the f7 function can be obtained with prc less than 40.
The changes in the average numbers of donors and receivers of the pIPA for 200-dimensional benchmark functions can be examined in Table 9. As seen from Table 9, pIPA tries to adjust the number of donors and receivers for the f1, f4, f6, f9, f10, and f11 functions, while the numbers of donors and receivers remain unchanged for the other functions. Choosing the value of prc relatively close to its upper or lower bound decreases the number of possible donors or receivers. However, the donor-receiver selection strategy of the pIPA can increase the number of donors beyond the number determined with the value of prc if the objective function values of the qualified individuals are relatively close to each other or the same. Otherwise, the numbers of donors and receivers are simply calculated using the value assigned to prc. The results of the pIPA for 200-dimensional problems should be validated by comparison with the results of other meta-heuristics obtained under the same conditions. For this purpose, pIPA was compared with the standard implementation of the IPA [51], ALO [25], PSO [55], SMS [44], BA [19], FPA [20], CS [17], FA [18], and GA [6]. The population sizes of the tested algorithms were equal to 100, and the maximum evaluation number was set to 500,000. The prc of the pIPA was 90 for f1, f2, f5, f6, f8, and f12. Also, it was determined as 40 for f3, f4, f7, f9, f10, and f11. When the mean best objective function values and standard deviations of the algorithms in Table 10 are examined, it is seen that pIPA outperforms the other tested algorithms or obtains the same mean best objective function values for ten of the 12 benchmark functions. Although pIPA lags behind the ALO for the f5 function and the IPA for the f8 function, it becomes the second-best algorithm among the competitors for these functions and proves its superiority with an average rank of 1.1667. The comparison between pIPA and other
meta-heuristics for classical benchmark problems was completed with the results of the Wilcoxon signed rank test at the 0.05 significance level. From the test results given in Table 11, it is understood that the solutions obtained by the pIPA for 200-dimensional problems are strong enough to generate a statistical difference in favor of the pIPA. Although the ρ value is found equal to 0.0022 for the statistical comparison between the pIPA and PSO, SMS, BA, FPA, CS, FA, or GA and proves that the difference is in favor of pIPA, the ρ value is found equal to 0.0151 for the statistical comparison between pIPA and ALO and 0.0285 for the statistical comparison between pIPA and IPA. The results found by the IPA for the f8 function and by the ALO for the f5 function cause a slight change in the ρ values. However, they are still less than 0.05 and validate the comparative performance of the pIPA. The complexities of the benchmark problems can be increased using operations related to shifting, rotation, hybridization, and composition. To investigate the performance of pIPA on solving these kinds of problems, ten different 30-dimensional problems introduced at CEC 2015 were chosen, and their names, base functions, and global minimums are listed in Table 12 [61]. The lower and upper bounds of these functions were equal to −100 and +100 [61]. Although the f1 and f2 functions in Table 12 are only rotated, the f3, f4, f5, f6, f7, and f8 functions are both shifted and rotated [61]. Moreover, while f9 is a hybrid function generated from four base functions, f10 is a composition function joining three base functions [61]. When solving the problems given in Table 12, the population size of the pIPA was set to 100, and the maximum evaluation number was taken equal to 100,000 [37]. Nine different values, namely 30, 35, 40, 50, 60, 70, 80, 90, and 95, were assigned to prc, and pIPA was tested 30 times with random seeds for each problem and prc combination. The objective function values of the best
solutions found at each of the 30 runs were averaged and reported in Table 13 with the related standard deviations. The results given in Table 13 guide us in interpreting the trend of the pIPA with varied prc values. For the f1 and f2 functions, the objective function values of the solutions obtained by pIPA improve with prc increasing from 30 to 90 or 95. Similar generalizations can be made for the remaining functions except f5. Although increasing prc from 30 to 60 or 70 improves the qualities of the solutions found by pIPA for the f3, f7, f9, and f10 functions, the qualities of the solutions found for the other functions improve with prc increasing from 30 to 50. Only for the f5 function do the values assigned to prc not cause a significant change in the solution qualities of the pIPA. Figure 3 should also be viewed to analyze the effect of prc on the performance of the pIPA. For validating the qualities of the solutions found by the pIPA, its mean best objective function values and standard deviations are compared with those of the IPA [51], SOA [37], SHO [35], GWO [23], PSO [55], MFO [24], MVO [27], SCA [26], GSA [41], GA [6], and DE [56] in Table 14. To ensure that the comparison is made under the same conditions, the population size and maximum evaluation number were set to 100 and 100,000 for all algorithms [37]. The prc value of the pIPA was taken equal to 50 for the f5 and f6 functions, 60 for the f3 and f9 functions, 70 for the f4 function, 80 for the f7 function, 90 for the f1 and f8 functions, and 95 for the f2 and f10 functions. The NoR and NoD parameters of the IPA were set to 1. The results given in Table 14 show that the pIPA and IPA outperform SOA, SHO, GWO, PSO, MFO, MVO, SCA, GSA, GA, and DE for the f4, f5, f6, f8, f9, and f10 functions. Although all the tested algorithms find the same mean best objective function values for the f3 function, SOA outperforms the tested
algorithms for the f1 function, and GSA outperforms the tested algorithms for the f2 function.

Solving Noise Minimization Problem with pIPA
The volume, velocity, variety, and veracity properties of big data have moved the difficulties of data-dependent optimization problems to another stage [62,63]. One such data-dependent optimization problem was recently introduced by Abbass et al., and a special competition named BigOpt was organized at CEC 2015 [64]. The real-world optimization problem introduced by Abbass et al. mainly focuses on minimizing the measurement noise of electro-encephalography (EEG) signals [65,66]. They stored 0.5 megabits of binary-formatted data and 20 kilobytes of text-formatted data per second and organized them to provide different problem instances. If the measurement of the EEG signal is extended over a period of time, a unique problem instance per second, neglecting the time spent for storage and preparation, will be encountered. Assume that X and S are two different matrices of size N × M, where N corresponds to the number of time series belonging to the EEG signal and M is the number of elements in a time series. In addition to the X and S matrices, there is a square transformation matrix A of size N × N, and it relates the S matrix to the X matrix as described in Equation (7) [64]. If the S matrix is matched with the EEG signal containing N time series with M samples in each series, the noise-free part of S, shown as S1, and the noise part of S, shown as S2, must be obtained and used with the A matrix to find X as in Equation (8) [65,66]. Even though the relationship between the S, S1, and S2 matrices is straightforward, a simple method splitting the S matrix into S1 and S2 cannot be found easily. By considering the difficulty of splitting the S matrix, Abbass et al. decided to use the Pearson correlation coefficients, shown as C in Equation (9), as a guide [66]. In Equation (9), covar(X, A × S1) is the covariance matrix, and var(X) and var(A × S1) are variance matrices, respectively.
Abbass et al. also stated that the diagonal and off-diagonal elements of the C matrix carry important information about the appropriateness of the S1 matrix and can be referenced for splitting the original S matrix [65,66]. When the S1 matrix is obtained from the S matrix, the diagonal elements of C should be maximized, and the other elements should be minimized by considering the upper and lower bounds. To measure how well the C matrix calculated for a guessed S1 satisfies the mentioned properties of the diagonal and off-diagonal elements, Equation (10) is utilized [65,66].
Another important property that should be controlled when determining the S1 matrix is its similarity to the original S matrix. Because the S1 matrix represents the noise-free part of the original S matrix, the difference between the S and S1 matrices should be minimized. For measuring the difference between the S and S1 matrices, Equation (11) can be used [65,66]. As easily seen from Equation (11), the S1 matrix should be chosen relatively close to the S matrix to represent the properties of the EEG signal. When the S1 matrix is sought by minimizing the sum of f1 and f2, an optimization problem is obtained. For analyzing the performance of solution techniques on the mentioned optimization problem, different instances named D4, D4N, D12, and D12N were introduced by Abbass et al., and the required X, A, and S matrices for each instance were reported [65,66]. The D4 and D12 instances have four and 12 time series of length 256, respectively. Similar to the D4 and D12 instances, the D4N and D12N instances also have four and 12 time series of length 256; however, these problem instances are changed slightly with additional noise components.
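Since Equations (9)-(11) are not reproduced in this excerpt, the following is only a plausible sketch of the combined objective: f1 rewards a correlation matrix C between X and A × S1 with a large diagonal and small off-diagonal entries, while f2 penalizes the distance between S1 and S. The normalizations and the exact penalty forms are assumptions.

```python
import numpy as np

def noise_objective(X, A, S, S1):
    # Sketch of the BigOpt EEG objective (exact normalizations assumed):
    # C holds the Pearson correlations between the rows of X and the
    # rows of A @ S1.
    Y = A @ S1
    N = X.shape[0]
    C = np.corrcoef(X, Y)[:N, N:]
    diag = np.diag(C)
    off = C - np.diag(diag)
    # f1: diagonal entries should approach 1, off-diagonal entries 0.
    f1 = (np.sum(off ** 2) + np.sum((1.0 - diag) ** 2)) / N ** 2
    # f2: the guessed noise-free part must stay close to the signal.
    f2 = np.mean((S - S1) ** 2)
    return f1 + f2
```

A candidate S1 identical to S makes f2 vanish but generally leaves residual off-diagonal correlation, which is the tension the optimizer must balance.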
The pIPA was tested on the D4, D4N, D12, and D12N problem instances. The population size of pIPA was set to 50 [51]. Nine different values, namely 30, 35, 40, 50, 60, 70, 80, 90, and 95, were assigned to prc. For each combination of problem instance and prc, pIPA was tested 30 times with random seeds by setting the maximum evaluation number to 10,000 [51]. The mean best objective function values and standard deviations of each test scenario were recorded and presented in Table 15. The results given in Table 15 show that the mean best objective function values of pIPA decrease with prc increasing from 30 to 80 for the D4, D4N, and D12 problem instances and with prc increasing from 30 to 90 for the D12N problem instance. Considering the mean best objective function values, the appropriate value of the prc parameter is therefore 80 for the D4, D4N, and D12 problem instances and 90 for the D12N problem instance. The results obtained by the pIPA for the noise minimization problem were compared with the results of IPA [51], GA [6], PSO [55], DE [56], ABC [57], GSA [41], MFO [24], SCA [26], SSA [28], and HHO [29]-based techniques. To guarantee that the comparison was made under equal conditions, the population or colony size of the algorithms was set to 50, and the maximum evaluation number was taken as 10,000 [51]. The prc parameter of pIPA was set to 80 for the D4, D4N, and D12 problem instances and 90 for the D12N problem instance. The NoD and NoR parameters of the standard IPA were equal to 4 and 8, respectively [51]. For the GA, the crossover rate was 0.95, and the mutation rate was 0.001. The inertia weight of PSO took its value between 0.2 and 0.9, and both the c1 and c2 acceleration coefficients were set to 2.
Although the scaling factor of DE took its value randomly between 0.2 and 0.8, the crossover rate was taken equal to 0.9. The limit parameter of ABC was set to half of PS × D, where D was equal to 1024 for D4 and D4N and 3072 for D12 and D12N. The calculation of the logarithmic spiral was completed by setting the b constant to 1 for MFO. Assuming that l and L are the current and maximum iteration numbers, the c1 coefficient of SSA was calculated as 2e^(−16l²/L²). When the best and mean best objective function values and standard deviations over 30 independent runs given in Table 16 are examined, it is seen that pIPA removes artifacts or noise more robustly than the other tested algorithms for all four problem instances. The percentile-based donor-receiver selection strategy that already proved its efficiency in solving classical benchmark problems also contributes to the performance of the algorithm here, and more robust S1 matrices are obtained.
As stated earlier, if the measurement of the EEG signal is extended over a period of time, a unique problem instance per second will be encountered, and algorithms should generate their solutions within a second to successfully handle the subsequent instance. To decide whether the pIPA and some of its competitors, including IPA and ABC, can produce their solutions within a second using the existing test configuration, the average execution times in seconds were calculated and presented in Table 17. The pIPA, IPA, and ABC were coded in the C programming language, and all experiments were carried out on a PC equipped with a single-core 1.33 GHz processor. The results in Table 17 show that neither pIPA nor IPA is capable of filtering EEG instances within a second. Especially for the problem instances with 12 time series, parallelization of the algorithms is seen as a necessity for processing ongoing measurements.
The comparative studies between meta-heuristics should be supported with appropriate statistical tests. By considering this requirement, the Wilcoxon signed rank test with the 0.05 significance level was used again to determine whether a statistical difference between pIPA and the other tested meta-heuristics exists. The test results given in Table 18 show that the contribution of the newly proposed selection mechanism is enough to generate a statistical difference in favor of pIPA. The results also indicate that pIPA outperforms its competitors in almost all of the 30 different runs related to the D4, D4N, D12, and D12N instances when the calculated ρ values are considered.

Solving UCAV Path Planning Problem with pIPA
The operational success and safety of a UAV or UCAV are directly related to its path or flight route on a battlefield equipped with sophisticated anti-air weapon systems, radars, missiles, and artilleries [67]. The path or route determined over the task region for a UAV or UCAV should minimize both the probability of being shot down and the fuel consumption [67]. By considering these objectives, Xu et al. proposed a mathematical model describing how a path from the start point Ps = (xs, ys) to the target point Pt = (xt, yt) can be found optimally [67]. The model described by Xu et al. first divides the line between Ps and Pt equally into (D + 1) segments using D different segmentation points. Each segmentation point is intersected vertically by a line, and a set of lines shown as L = {L1, L2, . . ., LD} is generated [67]. If a point is found on each line in the set L and these points are then connected one by one, a single path from the start point Ps to the target point Pt can be described as a set of points shown as P = {Ps, P1, P2, . . ., PD−1, PD, Pt}.
The search for the points in the set P, except Ps and Pt, can be further simplified by appropriately transforming the current coordinate system. If the coordinate system is transformed so that the line between Ps and Pt corresponds to the horizontal axis of the new coordinate system, each point to be determined is represented by only a single parameter [67]. For transforming the point (xk, yk) of the original coordinate system into the corresponding point of the new coordinate system, Equation (12) is employed [67]. In Equation (12), θ is the rotation angle between the x-axis of the original coordinate system and the line between Ps and Pt, calculated as arctan((yt − ys)/(xt − xs)) [67].
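Equation (12) itself is not reproduced in this excerpt, but the transformation described above can be sketched as a translation by Ps followed by a rotation through −θ; the exact form of the equation is assumed here:

```python
import math

def to_path_frame(x, y, ps, pt):
    # Map a point of the original coordinate system into the frame whose
    # horizontal axis is the line from the start point ps to the target
    # point pt (a sketch of the transformation in Equation (12)).
    xs, ys = ps
    xt, yt = pt
    theta = math.atan2(yt - ys, xt - xs)  # arctan((yt - ys)/(xt - xs))
    dx, dy = x - xs, y - ys
    return (dx * math.cos(theta) + dy * math.sin(theta),
            -dx * math.sin(theta) + dy * math.cos(theta))
```

In this frame the start point maps to the origin and the target lies on the horizontal axis, so a candidate path point on each segmentation line is described by a single vertical offset.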
When the required points are determined, the suitability of the path generated using these points can be estimated with Equation (13) [67]. In Equation (13), J corresponds to the sum of the costs related to the enemy threats and the fuel consumption, weighted by λ and (1 − λ), respectively. Also, while wt represents the cost of enemy threats changing with the length of the path, abbreviated as l, wf is used on behalf of the cost of fuel consumption changing with l [67].
Even though the equation used for determining the suitability of the path is relatively simple, it can be further simplified by replacing the integral calculations with appropriate approximations [67]. For this purpose, wf is first taken equal to 1, and the integral calculation for the cost of fuel consumption becomes directly proportional to the length of the path [67]. Second, the integral calculation for the cost of enemy threats is replaced with an approximation in which the cost of threats is determined for each segment of the path. Assume that Lij is the segment between the segmentation points i and j. Lij is divided equally into ten sub-segments, and the first, third, fifth, seventh, and ninth sub-segmentation points are selected. For the cost of all Nt threats related to Lij, the summation described in Equation (14) is utilized [67]. Given that tk is the degree of threat k, if the segment of length Lij is within the effect range of threat k, the cost of threat k, shown as cost(k, m) for the sub-segmentation point m, is found equal to tk/d(k, m), where d(k, m) corresponds to the Euclidean distance between the center of threat k and the sub-segmentation point m.
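The sub-segment approximation of Equation (14) can be sketched as follows; the threat tuple layout and the per-sample-point range test are assumptions not stated in the text:

```python
import math

def segment_threat_cost(p_i, p_j, threats):
    # Approximate the threat cost of the segment p_i -> p_j by sampling
    # the 1st, 3rd, 5th, 7th, and 9th of ten equal sub-segmentation
    # points.  Each threat is (center_x, center_y, radius, degree); a
    # sample point inside the effect range contributes degree/distance,
    # following the cost(k, m) = t_k / d(k, m) rule described above.
    (x1, y1), (x2, y2) = p_i, p_j
    total = 0.0
    for frac in (0.1, 0.3, 0.5, 0.7, 0.9):
        mx, my = x1 + frac * (x2 - x1), y1 + frac * (y2 - y1)
        for cx, cy, radius, degree in threats:
            d = math.hypot(mx - cx, my - cy)
            if d <= radius:  # inside the effect range of threat k
                total += degree / d
    return total
```

The overall path cost of Equation (13) then weights the summed threat costs against the path length, J = λ · wt + (1 − λ) · l, with wf taken as 1.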
For investigating the performance of the pIPA on the path planning problem, the battlefield whose details are given in Table 19 was used [58,59]. The number of segmentation points, D, was taken equal to 5, 10, 15, 20, 25, 30, 35, and 40 [58,59]. The value of the λ coefficient was determined as 0.5 [58,59]. The population size of the pIPA and the maximum evaluation number were set to 30 and 6000 [58,59]. Six different values, namely 50, 60, 70, 80, 90, and 95, were assigned to prc sequentially. The pIPA was tested 100 times with random seeds for each combination of D and prc. The best, worst, and mean best objective function values and standard deviations of the 100 runs were recorded and presented in Table 20. The results presented in Table 20 indicate that the prc value should be chosen between 60 and 70. Although the pIPA obtains more suitable paths for the UCAV on the battlefield with D equal to 15, 20, 25, 30, and 35 by setting prc to 60, the appropriate value of prc for the remaining battlefield configurations is equal to 70. The best paths found by the pIPA with prc set to 60 for different cases are visualized in Figure 4.
The qualities of the paths found by the pIPA should be compared with the qualities of the paths obtained by different meta-heuristics. For this purpose, the best, worst, and mean best objective function values and standard deviations found by the pIPA with prc equal to 60 were compared with the corresponding results of the IPA [68] with NoD and NoR equal to 1, ABC [68] with limit equal to 100, BA [58], BAM [58], ACO [59], BBO [59], DE [59], ES [59], FA [60], GA [59], MFA [60], PBIL [59], PSO [59], SGA [59], and PGSO [59]-based UCAV path planners. To guarantee that the results were obtained under the same conditions, the population or colony size of the mentioned algorithms was set to 30. Each algorithm was executed 100 times with a maximum evaluation number equal to 6000, and their results are summarized in Table 21. The results given in Table 21 show that the pIPA is the best path planner, with an average rank of 1.500 among all 16 meta-heuristics, when the mean best objective function values are considered. It outperforms the other tested algorithms or shares the first rank for the battlefield with D set to 5, 15, 30, 35, and 40. Moreover, the paths found by the pIPA for the battlefield with D set to 20 and 25 are in the second rank by considering the mean best objective function values. Even though the path obtained by the pIPA for the battlefield with D set to 10 lags slightly behind its competitors, it is still in the third rank, and the pIPA produces a better path than 13 different algorithms. The contribution of the percentile-based selection strategy to the convergence performance can be guessed by referencing the paths and their qualities belonging to the pIPA. However, the unique properties of the UCAV path planning problem require a further check of the Sr and Me metrics. For this purpose, the Sr and Me values of the pIPA, IPA, and ABC were calculated by setting the threshold to 55 and are given in Table 22. When the Sr and Me metrics of Table 22 are investigated, it
can be seen that pIPA with prc equal to 50 or 60 obtains paths whose qualities are equal to or better than the determined threshold for all eight battlefield configurations at each of the 100 different runs. Moreover, the pIPA with prc set to 70, 80, or 90 still maintains its stability and converges more quickly than the IPA and ABC for most of the test cases. Although the pIPA with prc set to 60 converges 1.600 times faster than IPA for the battlefield with D equal to 25, it converges 1.513, 1.393, and 1.450 times faster than IPA for the battlefields with D equal to 30, 35, and 40, respectively. The comparative studies between pIPA and other techniques for the UCAV path planning problem were concluded by checking the results of the Wilcoxon signed rank test with the significance level of 0.05. The test results were calculated using the best objective function values and are presented in Table 23. As easily seen from the test results, the difference between pIPA and IPA, ABC, BA, ACO, BBO, DE, ES, GA, PBIL, PSO, SGA, or PGSO is enough to generate a statistical difference in favor of the pIPA. Only the difference between the pIPA and BAM, FA, or MFA is not enough to state that there is statistical significance in favor of the pIPA. However, it should be noted that the ρ value calculated for the comparison between pIPA and BAM or FA is relatively close to 0.05 and supplies information about the qualities of the paths found by pIPA.

Results and Discussion
The standard implementation of the IPA determines the number of donors by assigning a constant to the NoD parameter and selects the best NoD individual or individuals from the population as the donor or donors. Similarly, the IPA determines the number of receivers by assigning a constant value to the NoR parameter and selects the worst NoR individual or individuals from the population as the receiver or receivers. Even though the usage of the NoD and NoR control parameters increases the flexibility of the IPA, deciding which values are appropriate for these control parameters and guessing the interaction between them are difficult tasks. Moreover, solving some optimization problems with the IPA can require adaptive adjustment of the number of donors and receivers.
The main idea behind the newly introduced donor-receiver selection strategy is to provide an improved mechanism that both simplifies the initialization of the IPA and allows the algorithm to determine the number of donors and receivers adaptively. When the pIPA improves the qualities of the individuals in the population, it tries to extend the set of possible donors, as is easily seen from Tables 3 and 9, which present the change trends of the number of donors and receivers for the 100- and 200-dimensional benchmark problems even though prc remains unchanged. If the number of possible donors is increased by the pIPA, the chance of plasma transfer to a receiver from a different donor is also increased. Moreover, if the pIPA decides to increase the number of donors, the number of receivers is decreased simultaneously, and the more critical receivers, i.e., poor solutions, have a chance of improving their qualities. The pIPA can also decrease the number of donors. If the number of donors is decreased, the number of receivers is increased simultaneously. Because some donor candidates with relatively low qualities are discarded from the set of possible donors, receivers have a chance of treatment with the more qualified or better donors. Finally, if the pIPA decides that there is no receiver in the current infection cycle, no treatment operations are carried out, the infection continues to spread among the individuals of the population, and the exploration characteristic of the search becomes more dominant.
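The adaptive split described above can be sketched as follows: the prc-th percentile of the population's objective values acts as the threshold, individuals at or below it (for minimization) form the donor set, and the rest become receivers, so the two set sizes shift as the population improves. The percentile interpolation and tie handling below are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch of a percentile-based donor-receiver split (minimization).
# Individuals whose objective values fall at or below the prc-th
# percentile are donor candidates; the remainder are receivers.
def percentile(values, prc):
    """Linear-interpolation percentile of a list; prc in [0, 100]."""
    s = sorted(values)
    k = (len(s) - 1) * prc / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def split_population(fitness, prc):
    """Return (donor_indices, receiver_indices) for a minimization task."""
    threshold = percentile(fitness, prc)
    donors = [i for i, f in enumerate(fitness) if f <= threshold]
    receivers = [i for i in range(len(fitness)) if i not in donors]
    return donors, receivers

# Example: a 10-individual population; with prc = 60, roughly the best
# 60% of the population become donor candidates.
fitness = [3.2, 8.1, 0.9, 5.5, 7.7, 1.4, 9.6, 2.8, 6.3, 4.0]
donors, receivers = split_population(fitness, 60)
```

Note that the same prc value yields different donor and receiver counts as the distribution of objective values changes, which is exactly the adaptive behavior reported in Tables 3 and 9.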
As an expected result of the properties of the percentile-based donor-receiver selection strategy, the pIPA outperformed the standard implementation of the IPA and the other meta-heuristics for the vast majority of the tested numerical and complex optimization problems.
Although the contribution of the proposed model to the qualities of the final solutions and the convergence performance becomes more apparent for the 100- and 200-dimensional classical problems, the pIPA loses its advantage for some of the CEC 2015 problems. The most powerful side of the pIPA is adjusting the number of donors and receivers by considering the qualities of the individuals in the population. Although the value assigned to prc remains the same until the end of execution, the pIPA utilizes the special property of the percentile calculation and changes the sets of possible donors and receivers. However, some problems introduced at CEC 2015 are generated by hybridization or composition of two or more basic functions. For this reason, while the value assigned to prc and the resulting sets of possible donors and receivers are appropriate for one basic function, another participating function may require a more finely tuned number of donors and receivers for the plasma treatment, as in the standard IPA.
When the results obtained by the pIPA for the complex engineering problems are investigated, the positive contribution of the newly proposed technique to the quality of the final solution and the convergence speed is seen again. EEG noise minimization is a big-data optimization problem and requires processing 1024 parameters per second for the D4 and D4N instances and 3072 parameters per second for the D12 and D12N instances. The difficulties of the problem stem from the high dimensionality and conflicting objectives, demanding a more sensitive search within the promising solutions. In the pIPA, the required sensitive search in the neighborhood of the promising solutions can be satisfied by decreasing the number of donors or assigning a sufficiently large initial value to prc. The other tested engineering problem, UCAV path planning, differs slightly from the other problems when the number of segmentation points is considered. If the number of segmentation points, or D, increases, the possibility of a segmentation point falling within the circles representing the enemy air defense systems also increases intrinsically. Moreover, it should be noted that even when D is relatively small, some segmentation points can still be relatively close to the centers of enemy threats. Because of these specific properties of the UCAV path planning problem, the tested algorithms should be capable of escaping local optimal solutions quickly. In the pIPA, the exploration- or exploitation-dominant operations are managed adaptively. Although the value of the prc parameter is set to a constant such as 50, 60, 70, or even 80, the pIPA is capable of finding a balance between exploration- and exploitation-dominant operations, and safer and more robust paths are obtained compared to the standard IPA and the other meta-heuristics. However, it should be noted that if the prc value is not determined appropriately and the maximum number of evaluations is not set relatively high, the pIPA can consume a substantial number of function evaluations to reach the required balance and may terminate without obtaining promising solutions.

Conclusions
In this study, the donor-receiver selection strategy of the immune plasma algorithm (IP algorithm or IPA) was modified with the guidance of a statistical measure known as the percentile, and an improved IPA variant called the percentile IPA (pIPA) was introduced. To analyze how the newly introduced donor-receiver selection strategy contributes to the solving capabilities of the pIPA, a set of experiments was carried out. In the first and second parts of the experimental studies, 22 numerical benchmark problems were solved with the pIPA by assigning different values to its control parameters, and the obtained results were compared to classical and state-of-the-art meta-heuristics including the IPA, PSO, GSA, CS, BA, FPA, SMS, FA, GA, MFO, ALO, SOA, SHO, GWO, MVO, SCA, and DE. The third part of the experimental studies was devoted to investigating the pIPA on a big-data optimization problem requiring noise minimization in EEG signals, and the pIPA was compared to the IPA, GA, PSO, DE, ABC, GSA, MFO, SCA, SSA, and HHO-based techniques. Finally, in the fourth part of the experimental studies, the pIPA was used to find an optimal flight path for a UCAV, and its results were compared to the results of the IPA, ABC, BA, BAM, ACO, BBO, DE, ES, FA, GA, MFA, PBIL, PSO, SGA, and PGSO-based planners.
), x^rcv_kj and x^dnr_mj are matched with the jth parameters of the x^rcv_k and x^dnr_m individuals. Furthermore, x^rcv-p_k represents the plasma-transferred x^rcv_k, and its jth parameter is x^rcv-p_kj.
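The notation above refers to the plasma-transfer update. A minimal sketch is given below, assuming the transfer form used in the original IPA description, in which the receiver is moved using a random coefficient drawn from [-1, 1] applied to the receiver-donor difference; this form is an assumption here, not quoted from this paper.

```python
import random

# Hedged sketch of the plasma transfer step: each parameter of the
# plasma-transferred receiver x^rcv-p_k is computed as
#   x_rcv_p[j] = x_rcv[j] + phi * (x_rcv[j] - x_dnr[j]),
# with phi sampled uniformly from [-1, 1] (assumed, per the original IPA).
def plasma_transfer(x_rcv, x_dnr, rng):
    return [x_rcv[j] + rng.uniform(-1.0, 1.0) * (x_rcv[j] - x_dnr[j])
            for j in range(len(x_rcv))]

rng = random.Random(42)
receiver = [2.0, -1.5, 0.7]   # x^rcv_k: a poor solution chosen as receiver
donor    = [0.4,  0.1, 0.2]   # x^dnr_m: a donor candidate
treated  = plasma_transfer(receiver, donor, rng)  # x^rcv-p_k
```

Each treated parameter lands within one receiver-donor gap of the receiver's original value, so stronger donors pull receivers over a wider range.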

Figure 1. Workflow of the percentile-based donor-receiver selection strategy.

Algorithm 1. Plasma transfer in the pIPA by selecting donors.
1: x_prc ← find required individual
2: dCount ← get the number of donors
3: rCount ← get the number of receivers
4: x_best ← get the best individual found so far
5: if dCount > rCount then
6:

Table 1. Classical benchmark functions used in experiments.
Table 3. Average number of donors and receivers for 100-dimensional problems.
Table 5. Comparison between pIPA and other meta-heuristics for 100-dimensional problems.
Table 6. Sr and Me metrics of pIPA and IPA for 100-dimensional problems.
Table 7. Statistical comparison between pIPA and other algorithms for 100-dimensional problems.
Table 8. Results of pIPA with different prc values for 200-dimensional problems.
Table 9. Average number of donors and receivers for 200-dimensional problems.
Table 10. Comparison between pIPA and other meta-heuristics for 200-dimensional problems.
Table 11. Statistical comparison between pIPA and other algorithms for 200-dimensional problems.
Table 12. CEC 2015 benchmark functions used in experiments.
Table 13. Results of pIPA with different prc values for CEC 2015 problems.
Table 14. Comparison between pIPA and other meta-heuristics for CEC 2015 problems.
Table 15. Results of pIPA with different prc values for D4, D4N, D12 and D12N instances.
Table 19. Information about the battlefield.
Table 20. Results of pIPA with different prc values for path planning.
Table 22. Sr and Me metrics of pIPA, IPA, and ABC for path planning.
Table 23. Statistical comparison between pIPA and other path planners.