4.1. Prey–Predator Algorithm (PPA)
PPA is a metaheuristic algorithm developed for handling complex optimization problems. It is inspired by the interaction between a carnivorous predator and its prey: the predator chases and hunts the prey, while each prey tries to stay within its pack, search for a hiding place, and run away from the predator.
In the algorithm, a set of initial feasible solutions is generated, and each solution xi is assigned a numerical value, called the survival value SV(xi), that measures its performance on the objective function. Better performance on the objective function implies a higher survival value; that is, for solutions xi and xj, if xi performs better than xj on the objective function, then SV(xi) > SV(xj). The solution with the smallest survival value is assigned as the predator, xpredator, and the rest as prey. Among these prey, the prey xb with SV(xb) ≥ SV(xi) for all i is called the best prey, i.e., the best prey is the prey with the highest survival value among the solutions.
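As an illustration of how the survival values, the predator, and the best prey could be set up, the following sketch assumes a minimization problem and a hypothetical mapping SV(x) = 1/(1 + f(x)); the paper only requires that the ordering is preserved (better objective value, higher survival value), so this particular mapping is an assumption.

```python
import numpy as np

def survival_values(population, objective):
    # Better objective (smaller f, for minimization) -> higher SV.
    # The mapping SV(x) = 1/(1 + f(x)) is an assumption; the text
    # only requires that the ordering is preserved.
    f = np.array([objective(x) for x in population])
    return 1.0 / (1.0 + f)

# toy example: minimize the sphere function f(x) = sum(x^2)
rng = np.random.default_rng(0)
population = rng.uniform(-5, 5, size=(6, 2))
sv = survival_values(population, lambda x: float(np.sum(x ** 2)))

predator = population[np.argmin(sv)]   # lowest survival value -> predator
best_prey = population[np.argmax(sv)]  # highest survival value -> best prey
```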
Once the prey and the predator are assigned, each prey needs to escape from the predator and either follow other prey with better survival values or find a hiding place. The predator hunts the weakest prey and scares the others, which contributes to the exploration of the solution space. Exploitation is carried out by the prey, primarily the best prey, through a local search. The best prey is considered to have found a secure place and to be safe from the predator; thus, it only conducts a local search to improve its survival value. The other prey, however, follow the prey with better survival values while running away from the predator. In updating these solutions, two issues must be dealt with: the direction and the step length of the movement.
In the algorithm, the movement of an ordinary prey (i.e., any prey other than the best prey) depends on an algorithm parameter called the follow-up probability. If the follow-up condition is met, that is, if a random number generated from a uniform distribution between zero and one is less than or equal to the follow-up probability, then the prey follows other prey with better survival values and performs a local search; otherwise, it randomly runs away from the predator.
Suppose the follow-up probability is met and there are prey with better survival values than xi, say x1, x2, …, xs. A prey mostly tends to stay in the pack and tries to be with the nearest pack of prey. Therefore, the movement of xi depends strongly on the distance between itself and the better prey. Hence, the direction of movement of xi can be calculated as follows:
where rij = ||xi − xj|| is the distance between xi and xj, and v is an algorithm parameter that magnifies or diminishes the effect of the survival value relative to the distance. By assigning different values to v, it is possible to adjust how strongly the updating direction depends on the survival value versus the distance. If v is too large, the direction favors performance over distance: the prey tries to catch up with the best pack with little consideration of how far away that pack is. If v is too small, the prey prefers to follow the nearest better pack. Since assigning a large or a small value to v would also affect the jump size of xi, a unit direction is used to represent the direction:
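The direction formula itself is not reproduced in this excerpt, so the following sketch only encodes the qualitative dependencies stated above: each better prey pulls xi with a weight that grows with its survival value (raised to the power v) and shrinks with the distance rij. The specific weight SV^v / (1 + r) is a hypothetical choice, not the paper's equation.

```python
import numpy as np

def follow_direction(x_i, better_prey, sv_better, v=1.0):
    # Unit direction for an ordinary prey following better prey.
    # Hypothetical weighting: increases with SV(x_j)**v, decreases
    # with the distance r_ij, matching the qualitative role of v.
    d = np.zeros_like(x_i, dtype=float)
    for x_j, sv_j in zip(better_prey, sv_better):
        r_ij = np.linalg.norm(x_i - x_j)
        weight = (sv_j ** v) / (1.0 + r_ij)  # assumed weight, not Eq. in paper
        d += weight * (x_j - x_i)
    norm = np.linalg.norm(d)
    return d / norm if norm > 0 else d  # normalize to a unit direction
```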
Moreover, a local search is performed by generating q random directions and taking the best among them, say yi. To choose yi, the survival value that would result from moving in each of the q directions is evaluated, and the direction yielding the highest survival value is taken.
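A minimal sketch of this q-direction local search; the trial step size used to rank the candidate directions is a hypothetical parameter:

```python
import numpy as np

def local_search_direction(x, sv_func, q=8, step=0.1, rng=None):
    # Evaluate q random unit directions and return the one whose
    # trial move yields the highest survival value; return None if
    # no direction improves on the current position.
    if rng is None:
        rng = np.random.default_rng()
    best_dir, best_sv = None, sv_func(x)
    for _ in range(q):
        y = rng.normal(size=x.shape)
        y /= np.linalg.norm(y)          # unit direction
        trial_sv = sv_func(x + step * y)
        if trial_sv > best_sv:
            best_dir, best_sv = y, trial_sv
    return best_dir
```

Returning None when nothing improves matches the best prey's behavior of staying put when no promising direction is found.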
If the follow-up probability is not met, the prey randomly runs away from the predator. This is done by generating a random direction yrand and comparing the prey's distance from the predator if it moves in yrand or in −yrand; the direction that takes the prey farther from the predator is chosen.
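The run-away move can be sketched as follows; the trial step used for the distance comparison is a hypothetical value:

```python
import numpy as np

def flee_direction(x, x_predator, rng=None):
    # Pick y_rand or -y_rand, whichever ends up farther from the predator.
    if rng is None:
        rng = np.random.default_rng()
    y = rng.normal(size=x.shape)
    y /= np.linalg.norm(y)  # random unit direction
    step = 0.1              # hypothetical trial step for the comparison
    if np.linalg.norm(x + step * y - x_predator) >= np.linalg.norm(x - step * y - x_predator):
        return y
    return -y
```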
Unlike the ordinary prey, the best prey performs only a local search. It moves in whichever of q randomly generated unit directions improves its survival value, or it stays in its current position if no such direction exists among the q unit directions.
In the algorithm, the primary task of the predator is to push the prey to explore the solution space, and it also explores the solution space itself. Thus, it chases the prey with the lowest survival value while also moving randomly. Hence, its movement direction has two components: a random direction, and a component towards the weakest prey, x′.
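A sketch of the predator move combining these two components; the equal weighting of the random part and the pull towards the weakest prey is an assumption, since the paper's Equation (13) is not reproduced here:

```python
import numpy as np

def predator_direction(x_pred, x_weakest, rng=None):
    # Random exploration component plus a pull towards the weakest
    # prey x'; the 1:1 weighting of the two terms is assumed.
    if rng is None:
        rng = np.random.default_rng()
    y_rand = rng.normal(size=x_pred.shape)
    y_rand /= np.linalg.norm(y_rand)   # random unit direction
    towards_weak = x_weakest - x_pred  # component towards the weak prey
    return y_rand + towards_weak
```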
Step length is the other issue related to the updating of solutions. In carnivorous predation, a prey that is near the predator runs faster than the other prey. A similar concept is mimicked here: a prey with a small survival value runs faster than a prey with a higher survival value. Therefore, a prey has a step length λ that is inversely proportional to its survival value. This can be expressed by the following formulation:
where rand is a random number from a uniform distribution with 0 ≤ rand ≤ 1, and λmax represents the maximum jump size of a prey. The parameter w controls how strongly the step length depends on the relative survival value. For practical purposes, however, it is also possible to omit the effect of the survival value when assigning the maximum jump, which removes the need to study the changes in survival values when choosing w. Hence, we can simply use:
Furthermore, a second step length is used for the local search, i.e., for exploitation purposes. This exploitation step length should be smaller than the exploration step length.
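Under the simplified variant that drops the survival value from the maximum jump, the two step lengths could be sketched as follows; the exploitation-to-exploration ratio is an assumption, since the text only requires the exploitation step to be smaller:

```python
import numpy as np

def step_lengths(lam_max, local_ratio=0.1, rng=None):
    # Exploration step ignores the survival value (the practical
    # shortcut mentioned in the text); the exploitation step is a
    # fixed fraction of it (the ratio is an assumption).
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.random() * lam_max   # exploration step, 0 <= lam <= lam_max
    lam_local = local_ratio * lam  # exploitation step, smaller by design
    return lam, lam_local
```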
Summarizing all the points discussed, the movements of the ordinary prey, the best prey, and the predator can be expressed as follows, and the overall procedure is given in Algorithm 1.
Algorithm 1. Prey–predator algorithm.
Algorithm parameter setup
Generate a set of random solutions, {x1, x2, …, xN}
For Iteration = 1:MaximumIteration
    Calculate the survival value of each solution, {SV(x1), SV(x2), …, SV(xN)}, and without loss of generality sort the solutions so that x1 has the lowest survival value and xN the highest
    Update the predator x1 using Equation (13)
    For i = 2:N − 1
        If rand ≤ probability_followup
            For j = i + 1:N
                Move solution i towards solution j using Equation (18)
            End
        Else
            Move solution i using Equation (19)
        End
    End
    Move the best solution, xN, in a promising direction using Equation (20)
End
Return the best result
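The loop of Algorithm 1 can be sketched end to end as below. The movement rules are simplified stand-ins for Equations (13)–(20), which are not reproduced in this excerpt, and the survival-value mapping is assumed; this is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def ppa_minimize(objective, dim=2, n=10, iters=50, p_follow=0.7,
                 lam_max=0.5, lam_local=0.05, seed=0):
    # Minimal prey-predator loop; all movement rules below are
    # simplified stand-ins for Equations (13)-(20).
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(n, dim))
    sv = lambda x: 1.0 / (1.0 + objective(x))  # assumed SV for minimization
    for _ in range(iters):
        order = np.argsort([sv(x) for x in pop])
        pop = pop[order]  # pop[0]: predator (lowest SV), pop[-1]: best prey
        # predator: random move plus a pull towards the weakest prey
        y = rng.normal(size=dim)
        y /= np.linalg.norm(y)
        pop[0] += lam_max * y + lam_local * (pop[1] - pop[0])
        # ordinary prey
        for i in range(1, n - 1):
            if rng.random() <= p_follow:  # follow better prey
                d = np.sum(pop[i + 1:] - pop[i], axis=0)
                nd = np.linalg.norm(d)
                if nd > 0:
                    pop[i] += lam_max * rng.random() * d / nd
            else:                         # run away from the predator
                y = rng.normal(size=dim)
                y /= np.linalg.norm(y)
                if np.linalg.norm(pop[i] + y - pop[0]) < np.linalg.norm(pop[i] - y - pop[0]):
                    y = -y
                pop[i] += lam_max * rng.random() * y
        # best prey: local search, keep only improving trials
        for _ in range(8):
            trial = pop[-1] + lam_local * rng.normal(size=dim)
            if sv(trial) > sv(pop[-1]):
                pop[-1] = trial
    return max(pop, key=sv)

best = ppa_minimize(lambda x: float(np.sum(x ** 2)))
```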
Movement of an ordinary prey:
(i) If the follow-up probability is met:
(ii) If the follow-up probability is not met:
Movement of the best prey:
Optimization Using the Prey–Predator Algorithm
The aim of the microchannel heat sink optimization is to choose the optimum values of the variables from the feasible region so as to minimize the multi-objective function. It is a bi-objective optimization whose objectives are the thermal resistance and the pumping power. By using different combinations of weights, the weighting method has the advantage of obtaining multiple solutions from the Pareto front, compared with classical methods such as the ideal-point method. Hence, a weighted approach for the objectives Rtot and Pp can be used as follows:

F = w1 Rtot + w2 Pp,

with

w1 + w2 = 1,  w1, w2 ≥ 0,

where w1 and w2 are the weights associated with the two performance parameters.
Even though assigning weights is subjective, and sometimes a challenging task for the decision-maker, random or pseudo-random weights can be used to generate approximate Pareto-optimal solutions that represent different corners of the Pareto front. If partial information about the relationships among the weights is available, pseudo-random weights can be generated from that incomplete information; otherwise, different weights can be generated randomly. After updating the weights, PPA is initialized with a population of random solutions and then improved through generations, as shown in Figure 4. The variables for both performance parameters are α and β.
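The random-weight sweep described above can be sketched as follows. The surrogate objectives and the crude random search standing in for PPA are assumptions for illustration only; `r_tot` and `p_p` are hypothetical callables mapping the design variables (α, β) to thermal resistance and pumping power.

```python
import numpy as np

def pareto_sweep(r_tot, p_p, n_weights=5, seed=0):
    # Scalarize the two objectives with random weights summing to one,
    # minimizing each scalarized function to approximate the Pareto front.
    rng = np.random.default_rng(seed)
    solutions = []
    for _ in range(n_weights):
        w1 = float(rng.random())
        w2 = 1.0 - w1                  # weights sum to one
        f = lambda ab: w1 * r_tot(ab) + w2 * p_p(ab)
        # stand-in for running PPA on f: crude random search
        cand = rng.uniform(0.1, 1.0, size=(200, 2))
        best = min(cand, key=f)
        solutions.append(((w1, w2), best))
    return solutions

# toy surrogate objectives (assumptions, for illustration only)
sols = pareto_sweep(lambda ab: 1.0 / ab[0] + ab[1],
                    lambda ab: ab[0] ** 2 + 1.0 / ab[1])
```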
4.2. Radial Basis Function Neural Networks
RBFNNs typically have three layers: an input layer, a hidden layer with a non-linear activation function, and a linear output layer, as shown in Figure 5. The input layer is made up of source neurons that connect the network to its environment. The hidden layer receives the input information and performs specific decomposition, extraction, and transformation steps to generate the output data. The hidden layer's neurons are associated with parameters (centers and widths) that determine the behavioral structure of the network. The neurons are linked together by parameters called weights, which are divided into two types: input weights and output weights. The input weights link the input layer to the hidden layer and are all fixed to one, whereas the output weights link the hidden layer to the output layer and are all variables. The output weights, together with the hidden-layer parameters, are determined using optimization algorithms; in this paper, we used PPA to determine the optimal values of the neural network parameters. The output layer provides the response of the network to the activation pattern of the input layer and serves as a summation unit [54].
In this study, we used RBFNNs to construct a model that represents the relationship between Rtot and Pp. In the proposed model, the input is a Pp value and the output is the corresponding Rtot value. The following Gaussian function (the activation function of the hidden neurons) is used in the RBFNNs [54]:

φi(xi) = exp(−(xi − μi)² / (2σi²))
where φi is the radial basis function of hidden neuron i, xi = Σj wji xj (summed over j = 1, …, N) is the input value of hidden neuron i, μi and σi are the center and width of hidden neuron i, respectively, N is the number of input neurons linked to hidden neuron i, wji is the constant input weight from input neuron j to hidden neuron i, which we fixed to be equal to 1, and xj is the input value at input neuron j.
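The Gaussian activation of a single hidden neuron can be transcribed directly:

```python
import numpy as np

def gaussian_rbf(x, mu, sigma):
    # phi_i(x) = exp(-(x - mu_i)^2 / (2 * sigma_i^2)),
    # where mu is the center and sigma the width of the neuron.
    return np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
```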
Additionally, the actual values of the output layer can be determined using Equation (23):

Fk(xi) = Σj wjk φj(xi)     (23)

where Fk(xi) is the actual output value of output neuron k corresponding to the input value xi, wjk is the output weight between hidden neuron j and output neuron k, and φj is the activation function of hidden neuron j.
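Equation (23) can be sketched as a forward pass. The Gaussian hidden layer follows the activation function above, while the specific centers, widths, and output weights below are placeholder values standing in for the parameters the paper tunes with PPA.

```python
import numpy as np

def rbfnn_output(x, centers, widths, out_weights):
    # F_k(x) = sum_j w_jk * phi_j(x), with Gaussian hidden activations;
    # one output neuron per row of out_weights.
    phi = np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))  # phi_j(x)
    return out_weights @ phi

# placeholder parameters (assumptions, for illustration only)
centers = np.array([0.0, 1.0, 2.0])
widths = np.array([0.5, 0.5, 0.5])
out_weights = np.array([[0.2, 0.5, 0.3]])  # single output neuron
y = rbfnn_output(1.0, centers, widths, out_weights)
```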