Micro-Grooved Pipe Design of Parabolic Trough by Metaheuristic Optimization: An Empirical Comparison

Abstract: Pipe design is one of the most significant research lines in the area of parabolic semi-cylindrical solar collectors. The main idea behind pipe design is to increase the capillarity angle by expanding the total heated area, thereby boosting the work capacity of the device. This capillarity depends on several factors whose numerical calculation is highly complex. Moreover, some of those variables are integers whereas others are real; hence, it is necessary to use optimization techniques capable of searching both numerical spaces. Several optimization tools allow individuals to be codified as binary strings, so that integer, real, or other variable types can be coded within the same individual. Consequently, in this paper we compare four metaheuristics used to maximize the capillarity angle of the pipe in a parabolic trough. Experimental results show a better performance of binary particle swarm optimization compared with the other techniques, achieving improvements in the capillarity angle of 11% on average with respect to a similar study.


Introduction
The use of fossil fuels for energy production has driven many improvements to human life. However, it is also the cause of several problems, such as pollution and climate change. For that reason, research efforts have focused on alternative energy sources, such as wind [1], geothermal [2], solar [3], biomass [4], hydrogen [5], biofuel [6], and even tidal [7] energy. In contrast to fossil fuels, these sources are considered renewable because they are sustainable, cost-effective, and environmentally friendly [8,9]. Because of its abundance and availability in many places around the world, the sun is the most important of these sources. Therefore, much research has focused on improving the technologies that extract its energy, such as photovoltaic and concentrated solar power (CSP) systems [10].
In CSP systems of the parabolic trough collector (PTC) type, heat pipes (HPs) receive the concentrated solar radiation and transfer the heat into a heat transfer fluid (HTF) that is later used to perform mechanical work. Among all the components of PTCs, HPs are considered the most crucial element for increasing the efficiency of energy harvesting [11,12], since they carry out the transformation of sunlight into heat. One way to improve the receiver efficiency is by modifying its parameters. Thus, the optimal design of HPs is an essential research need [13]. This work proposes a comparison of four metaheuristic methods when they are used to increase the capillarity angle of micro-grooved pipes.

Design of Micro-Grooved Pipe
The studies in [21,27] considered the capillarity angle and the velocity of the heat transfer fluid (HTF) inside a heat pipe (HP) in their model. This work uses the same design and holds the same assumptions. The mathematical description of the model is given in the next subsection, and the subsequent one describes the solution methodology.

Governing Equations
The use of micro-grooved pipes as heat receivers is frequent in CSP systems. The authors of [27] developed a capillary-driven two-phase model with several design variables. In the first phase, it is assumed that there is no heat flux for the calculation of the liquid front velocity. The second stage establishes two relationships: heat flux vs. liquid front velocity, and mass of liquid vs. liquid front velocity. Those relations are crucial to calculate the fluid supply to a micro-grooved pipe (Figure 1) under CSP working conditions. It is important to remark that the model only predicts the behavior of saturated liquids at different operating temperatures. Additionally, the steam displaced by the moving fluid in the micro-grooves is non-negligible. Under those considerations, the flow behavior in this kind of pipe is governed by the momentum balance of the liquid in the direction of the capillarity angle, θ, as shown in Figure 1. The momentum balance is given by

F_I = F_c + F_g + F_v, (1)

where F_I are the inertial forces over the liquid layer, F_c are the capillarity-driven forces, F_g are the gravitational forces, and F_v are the viscous forces. Other assumptions are:
• laminar and Newtonian flux in the micro-groove;
• an isothermal layer of fluid, so that its internal friction is negligible; and
• a negligible thickness of the liquid layer advancing on the micro-groove.
If we consider a mass for the liquid that flows in the direction of the capillarity angle, with a liquid front velocity, then Equation (1) becomes the expression derived in [23] (Equation (2)), where the meaning of every variable and constant is given in Appendix A. In this work, the objective is to maximize the capillarity angle (i.e., the liquid front position). Equation (2) is prone to errors if used to calculate θ*, the maximum capillarity angle. Therefore, the authors of [23] proposed its reformulation using time as the independent variable, yielding Equation (3).

Methodology to Solve the Governing Equations
Equation (3) can be reformulated as a system of differential algebraic equations, Equation (4), with initial conditions θ(t = 0) and v_l(t = 0). In the experimental part, Equation (4) is evaluated over 10 s using the Matlab solver ode45 to find the stable values of θ and v_l. An illustrative example of the evaluation of Equation (4) is shown in Figure 2, where the initial conditions are θ(t = 0) = 0 and v_l(t = 0) = 0.4. The figure also depicts the dynamics of the capillarity angle and the velocity of the liquid front. The angle overshoots the stable point around the first second and then tends to stability after around 1.5 s. We consider maximizing the value of the capillarity angle in the stable state of the simulation. The optimization process produces an increase in the capillarity angle, which enhances the heat transfer to the micro-grooved pipe. The objective function can therefore be described as the maximization of the capillarity angle:

max θ = θ(R, β, γ, st, p, φ, r_m, k_f, tg). (5)
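The steady-state evaluation described above can be sketched in Python. The true right-hand side of Equation (4) comes from the model in [23] and is not reproduced here; the placeholder dynamics below (a damped second-order system with a hypothetical equilibrium THETA_STAR) only illustrates the numerical procedure: integrate for 10 s and read off the stable θ and v_l.

```python
# Placeholder dynamics standing in for Equation (4): a damped second-order
# system whose equilibrium THETA_STAR plays the role of the stable
# capillarity angle theta*. The real right-hand side is the model of [23].
THETA_STAR = 1.2  # hypothetical stable angle (rad)

def rhs(theta, v_l, k=4.0, c=4.0):
    """d(theta)/dt = v_l ; d(v_l)/dt = -k (theta - theta*) - c v_l."""
    return v_l, -k * (theta - THETA_STAR) - c * v_l

def integrate(theta0=0.0, v0=0.4, t_end=10.0, h=1e-3):
    """Fixed-step RK4 integration (a simple stand-in for Matlab's
    adaptive ode45); returns the final theta and v_l at t = t_end."""
    theta, v = theta0, v0
    for _ in range(int(t_end / h)):
        k1 = rhs(theta, v)
        k2 = rhs(theta + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
        k3 = rhs(theta + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
        k4 = rhs(theta + h * k3[0], v + h * k3[1])
        theta += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        v += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return theta, v

if __name__ == "__main__":
    theta_stable, v_stable = integrate()
    print(theta_stable, v_stable)
```

In the paper's setting, the value returned for θ at the end of the 10 s window is what the metaheuristics maximize.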
The value of θ is a function of several variables that are defined in Appendix A. In this work, four fluids (Appendix B) are used in the optimization of the capillarity angle; while some parameters depend on the kind of fluid (k_f), such as the liquid density, viscosity, and dynamic density (Appendices C–F), others are independent of it, e.g., the micro-groove geometry (Appendix G).

Metaheuristic Algorithms
This section briefly explains the four metaheuristic algorithms selected to solve the problem of pipe design in the parabolic trough. In each technique, the individuals were codified as binary strings, because these can encode the two kinds of variables optimized in this article: integers and real values. Another critical remark is that in every algorithm the population is initialized from a uniform probability distribution [28].

Discrete Binary Differential Evolution
The motivation of this metaheuristic is the evolutionary process, similar to that of genetic algorithms. Nevertheless, even though the operators' names are the same, they work in a slightly different manner. Such operators explore the search space efficiently, allowing the algorithm to achieve the right balance between exploration and exploitation [29]. The algorithm was first proposed to minimize nonlinear and non-differentiable functions. In their seminal paper, Storn and Price highlighted some essential features of this optimization technique: few control variables, easy parallelization, and excellent convergence properties, among others [28]. The basic algorithm for real candidate solutions utilizes three operators, which modify the whole population of vectors: (1) mutation, (2) crossover, and (3) re-selection of the best individual; these are mostly the same operators used in the discrete binary differential evolution (DBDE) [30] utilized in this work. A detailed description of those operators is given in the following.

Mutation
For DBDE, this operator selects three random parents from the original population with the restriction that i ≠ r_1 ≠ r_2 ≠ r_3, where i is the current individual and r_1, r_2, and r_3 are three integers generated from a uniform distribution. Once selected, the mutated vector is calculated by

m_i = b_r1 + F(b_r2 − b_r3), (6)

where F ∈ [0, 2] represents the step size, and the binarization is achieved by

m_i,j = 1 if rand() < 1/(1 + e^(−m_i,j)), and 0 otherwise, (7)

with rand() being a uniform random number.

Crossover
The mutated vector, as well as the current candidate solution, serve to construct a trial vector:

u_i,j = m_i,j if rand() ≤ CR or j = j_rand, and b_i,j otherwise, (8)

where rand() is a uniform random number, CR ∈ [0, 1] is the crossover rate, and j_rand ∈ [0, N_b] is an integer generated from a uniform distribution. This index ensures that at least one bit is mutated.

Selection
The final step of the algorithm is a greedy selection between the objective function value of the current candidate solution and the fitness value of the trial vector:

b_i = u_i if f(x_u_i) ≥ f(x_i), and b_i otherwise, (9)

where x_u_i and x_i are the real versions of u_i and b_i, respectively.
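The three DBDE operators can be combined into one generation as sketched below. The `onemax` objective is a toy stand-in for the capillarity-angle fitness, and the F and CR values are illustrative, not the tuned ones of the paper.

```python
import math
import random

def onemax(bits):
    """Toy maximization objective standing in for the capillarity angle."""
    return sum(bits)

def dbde_generation(pop, fit, F=0.8, CR=0.9):
    """One DBDE generation: mutation and binarization (Eqs. (6)-(7)),
    crossover (Eq. (8)), and greedy selection (Eq. (9))."""
    n_p, n_b = len(pop), len(pop[0])
    new_pop = []
    for i in range(n_p):
        # three distinct random parents, all different from i
        r1, r2, r3 = random.sample([k for k in range(n_p) if k != i], 3)
        jrand = random.randrange(n_b)
        trial = []
        for j in range(n_b):
            # real-valued mutation followed by sigmoid binarization
            m = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
            m_bin = 1 if random.random() < 1.0 / (1.0 + math.exp(-m)) else 0
            # binomial crossover with at least one mutated bit (jrand)
            trial.append(m_bin if random.random() <= CR or j == jrand
                         else pop[i][j])
        # greedy selection: keep the better of parent and trial
        new_pop.append(trial if fit(trial) >= fit(pop[i]) else pop[i])
    return new_pop
```

Because the selection is greedy per individual, the best fitness in the population never decreases between generations.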

Clonal Selection Algorithm
The environment is a source of many health imbalances to many living beings, particularly mammals. To counteract that situation, those individuals have several internal systems, one of them being the immune system. This system has compelling attributes, such as learning capability, pattern recognition, and memory [31]. A theory formulated by immunologists is called the clonal selection theory; in this theory, the immune system's elements (antibodies) are continually monitoring for external substances which attacked the body in the past. If a component of that kind appears (e.g., an antigen), then an immune response is triggered, with the following steps: cloning, mutation, destruction of the foreign agent, re-selection, and memory. Those characteristics have served as an inspiration to computer scientists to create an area of soft computing called artificial immune systems, which has several variants. The version utilized in this paper is the clonal selection algorithm [32], whose steps are: (1) cloning, (2) hyper-mutation, (3) re-selection, and (4) diversity introduction.

Cloning and Hyper-Mutation
Even though other schemes exist, this paper uses simple cloning of the individuals, considering a certain percentage, P_c. First, the algorithm generates a cloned population, C, with N_c clones; then mutation is applied with probability p_m to every bit in the population, producing a cloned and mutated population, C_m:

c_m_i,j = 1 − c_i,j if rand() < p_m, and c_i,j otherwise. (11)

It is essential to note that those individuals will compete against their original parents from B. Consequently, the evaluation of clones is done in groups: the fitness value of every parent is contrasted against the fitness of its mutated clones, repeating the procedure for each parent of the original population.

Re-Selection and Diversity Introduction
In order to evaluate each individual from C_m, it is necessary to transform those binary strings into real and integer values, obtaining x_m_i. The re-selection is then

b_i = c_best_m_i if f(x_best_m_i) > f(x_i), and b_i otherwise, (12)

where f is the objective function, x_i is the real version of b_i, x_m_i are the real versions of the clones corresponding to individual i, and c_best_m_i is the best binary candidate solution extracted from those clones. Once the re-selection between C_m and B is complete, the last phase of the CSA is a diversity introduction consisting of replacing a percentage of the worst individuals in B with random individuals created with Equation (22). As in other evolutionary metaheuristics, all the steps repeat until a stopping criterion is reached, usually a maximum number of iterations.
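One CSA generation, covering cloning, hyper-mutation, re-selection, and diversity introduction, can be sketched as follows. The `onemax` objective, the number of clones per parent, p_m, and the replacement fraction d are all illustrative assumptions, not the tuned values of the paper.

```python
import random

def onemax(bits):
    """Toy maximization objective standing in for the capillarity angle."""
    return sum(bits)

def csa_generation(pop, fit, n_clones=5, p_m=0.05, d=0.2):
    """One CSA generation: clone each parent, hyper-mutate the clones
    (Eq. (11)), re-select the best clone against its parent (Eq. (12)),
    and replace a fraction d of the worst individuals with random ones."""
    n_b = len(pop[0])
    new_pop = []
    for parent in pop:
        clones = []
        for _ in range(n_clones):
            # hyper-mutation: flip each bit with probability p_m
            clones.append([1 - b if random.random() < p_m else b
                           for b in parent])
        best_clone = max(clones, key=fit)
        # re-selection: the clone replaces the parent only if it is better
        new_pop.append(best_clone if fit(best_clone) > fit(parent)
                       else parent)
    # diversity introduction: overwrite the worst-ranked individuals
    new_pop.sort(key=fit)
    for i in range(int(d * len(new_pop))):
        new_pop[i] = [random.randint(0, 1) for _ in range(n_b)]
    return new_pop
```

Note that the diversity step only touches the worst individuals, so the best solution found so far is never lost.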

Binary Particle Swarm Optimization
This metaheuristic takes inspiration from the behavior of bird flocks and fish schools. It was initially proposed as an optimizer by Kennedy and Eberhart in 1995 [33]. The algorithm considers a population of individuals called particles, which represent candidate solutions for a problem (usually an optimization problem); this set is also described as a swarm, following the analogy. In the algorithmic implementation, the particles evolve by moving around the space of solutions, considering the velocity of each particle as well as the best global particle found so far. In that sense, the original technique has two operators: (1) the velocity calculus, which considers the global and the local social behavior of each particle, and (2) the position upgrade. The next lines describe the operators mathematically for the binary particle swarm optimization (BPSO) used in this paper [34].

Velocity Calculus
In BPSO, the velocity update uses two vectors that hold the probability that every bit changes either to one or to zero; the respective equations are

v1_i,j = w v1_i,j + d1_i,j,1 + d1_i,j,2,
v0_i,j = w v0_i,j + d0_i,j,1 + d0_i,j,2, (13)

where w represents the inertia weight, and the temporary values d1 and d0 are calculated with the following conditions:

if b_ibest_j = 1: d1_i,j,1 = c_1 r_1 and d0_i,j,1 = −c_1 r_1; otherwise d1_i,j,1 = −c_1 r_1 and d0_i,j,1 = c_1 r_1,
if b_gbest_j = 1: d1_i,j,2 = c_2 r_2 and d0_i,j,2 = −c_2 r_2; otherwise d1_i,j,2 = −c_2 r_2 and d0_i,j,2 = c_2 r_2, (14)

with the indexes i and j representing the current individual and the bit, respectively; c_1, c_2 ∈ [0, 2] are the constriction factors taken from the original PSO; r_1 and r_2 are random numbers drawn from a uniform distribution; and b_ibest_j, b_gbest_j are the bits that belong to the local and global best individuals, respectively. After calculating v0_i,j and v1_i,j, the next step transforms those probabilities into the velocity change of the current particle:

v_i,j = v0_i,j if b_i,j = 1, and v1_i,j if b_i,j = 0. (15)

Position Upgrade and Selection
The position of the current particle in the next iteration is

b_i,j = b̄_i,j if r_1 < S(v_i,j), and b_i,j otherwise,

where b̄_i,j is the complement of b_i,j, S(.) is the sigmoid function, and r_1 is a uniform random number. The last stage in BPSO is the comparison of the generated candidate solution against the local best as well as the global best:

b_ibest = b_i if f(x_i) > f(x_ibest); b_gbest = b_i if f(x_i) > f(x_gbest),

where x_ibest, x_gbest, and x_i are the real versions of the binary b_ibest, b_gbest, and b_i, respectively.
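A minimal sketch of one BPSO iteration follows, with `onemax` standing in for the capillarity-angle objective and illustrative values of w, c_1, and c_2. The swarm, the two velocity tables, and the local bests are updated in place; the global best is returned.

```python
import math
import random

def onemax(bits):
    """Toy maximization objective standing in for the capillarity angle."""
    return sum(bits)

def bpso_step(swarm, v1, v0, best, gbest, fit, w=0.7, c1=1.5, c2=1.5):
    """One BPSO iteration: per-bit probabilities of switching to one (v1)
    or to zero (v0), sigmoid-driven bit flips, then best updates."""
    n_b = len(swarm[0])
    for i, particle in enumerate(swarm):
        for j in range(n_b):
            r1, r2 = random.random(), random.random()
            # temporary values pull the bit toward the local best bit...
            d1 = c1 * r1 if best[i][j] == 1 else -c1 * r1
            d0 = -c1 * r1 if best[i][j] == 1 else c1 * r1
            # ...and toward the global best bit
            d1 += c2 * r2 if gbest[j] == 1 else -c2 * r2
            d0 += -c2 * r2 if gbest[j] == 1 else c2 * r2
            v1[i][j] = w * v1[i][j] + d1
            v0[i][j] = w * v0[i][j] + d0
            # velocity of change for the bit's current value
            vc = v0[i][j] if particle[j] == 1 else v1[i][j]
            if random.random() < 1.0 / (1.0 + math.exp(-vc)):
                particle[j] = 1 - particle[j]  # flip (complement) the bit
        if fit(particle) > fit(best[i]):
            best[i] = particle[:]
        if fit(particle) > fit(gbest):
            gbest = particle[:]
    return gbest
```

Since the global best is only replaced by strictly better particles, its fitness is monotonically non-decreasing over iterations.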

Genetic Algorithms
Considered an evolutionary algorithm [22,35], this metaheuristic was proposed in the 1960s. It is based on Darwin's theory of evolution, and individuals are represented by bit strings; as in the previous algorithms, this representation is useful to codify simple data structures. As an optimizer, this technique is capable of finding competitive solutions to complex optimization problems [35]. Therefore, it has been extensively used to solve problems from different domains (machine learning, operations research, image processing, etc.), and it was even utilized in a study similar to the one presented in this article [21]. The operators used in the canonical version of the genetic algorithm (GA) are (1) interchange of genetic material, (2) a mutation scheme, and (3) re-selection of the fittest. Below is the mathematical description of these operators.

Crossover
One method to achieve parent selection in the GA is the roulette technique, in which it is necessary to calculate the fitness of every individual and then the cumulative fitness:

f_total = Σ_{i=1..N_p} f(x_i), (16)

where x_i is the real representation of the binary individual b_i. Later, the selection probability as well as the cumulative selection probability of every individual are computed according to their fitness (Equations (17) and (18)):

p_i = f(x_i) / f_total, (17)
q_i = Σ_{k=1..i} p_k. (18)

Two random numbers, r_1 and r_2, from a uniform distribution are generated after the previous calculation, and the selection picks the parents that meet q_{i−1} < r_1 ≤ q_i and q_{i−1} < r_2 ≤ q_i, respectively. After the selection of two parents, the next step corresponds to the crossover, which in this paper is one-point crossover. This method works by choosing, with a given probability p_c, a point of the selected parents to generate two new individuals by an interchange of genetic information. The crossover points are calculated with r_1 · N_b and r_2 · N_b.

Mutation and Reselection
The mutation changes every bit in the binary individual with mutation probability p_m:

bm_1,j = 1 − b_1,j if rand() < p_m, and b_1,j otherwise, (19)

where bm_1,j is bit j of the binary mutated child number 1; the same procedure applies to child number 2. Finally, the last step in the GA is the competition between the two parents and the two children, in which the two fittest individuals survive.
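The GA operators above can be sketched as follows; `onemax` is a toy stand-in for the capillarity-angle fitness, and the p_c and p_m values are illustrative.

```python
import random

def onemax(bits):
    """Toy maximization objective standing in for the capillarity angle."""
    return sum(bits)

def roulette(pop, fit):
    """Roulette-wheel selection (Eqs. (16)-(18)): return the index i where
    the cumulative probability q_i first reaches a uniform random number."""
    total = sum(fit(b) for b in pop) or 1.0  # guard against zero fitness
    q, r = 0.0, random.random()
    for i, b in enumerate(pop):
        q += fit(b) / total
        if r <= q:
            return i
    return len(pop) - 1

def ga_offspring(pop, fit, p_c=0.9, p_m=0.02):
    """Select two parents, apply one-point crossover with probability p_c,
    mutate each bit with probability p_m (Eq. (19)), and greedily keep the
    two fittest of {parents, children}."""
    n_b = len(pop[0])
    p1, p2 = pop[roulette(pop, fit)], pop[roulette(pop, fit)]
    c1, c2 = p1[:], p2[:]
    if random.random() < p_c:
        point = random.randrange(1, n_b)  # one-point crossover
        c1, c2 = p1[:point] + p2[point:], p2[:point] + p1[point:]
    c1 = [1 - b if random.random() < p_m else b for b in c1]
    c2 = [1 - b if random.random() < p_m else b for b in c2]
    # competition: the two fittest among parents and children survive
    return sorted([p1, p2, c1, c2], key=fit, reverse=True)[:2]
```

The final sort implements the parent-child competition in the re-selection step.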

Population Initialization
As mentioned previously, the optimization tool utilized in the problem of pipe design in a parabolic trough must be capable of searching in two numerical spaces, namely the real and integer ones. For that reason, we use binary versions of the aforementioned algorithms, which support those codifications. In that sense, the conformation of each individual, containing both real and integer values, is

x_i = [R, β, γ, st, p, φ, r_m, k_f, tg], (21)

whose binary representation, denoted by b_i, is a string of 150 bits. A more detailed explanation of each variable, its limits, and the number of bits used for its codification is given in Table 1.
In the four compared algorithms, the population is created by

b_i = hardlim(rand(1, N_b) − 0.5), i = 1, …, N_p, (22)

where N_p is the population size, N_b is the total number of bits in each string, rand(1, N_b) is a row vector of N_b uniformly generated random numbers, hardlim(.) is a positive limiting function, and l, u are, respectively, the lower and upper limits of the search space used when decoding b_i. Finally, Equation (5), representing the objective function, can be written as the maximization of θ(x_i) subject to l ≤ x_i ≤ u.
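The decoding of a binary string into mixed real and integer design variables can be sketched as below. The variable layout, bit counts, and bounds in `SPEC` are illustrative placeholders, not the actual 150-bit layout of Table 1.

```python
def bits_to_int(bits):
    """Interpret a list of bits as an unsigned integer (MSB first)."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

def decode(bits, spec):
    """Map a binary string onto mixed real/integer design variables.
    spec: list of (name, n_bits, lower, upper, is_integer) tuples; each
    chunk is scaled linearly onto [lower, upper]."""
    out, pos = {}, 0
    for name, n_bits, low, up, is_int in spec:
        chunk = bits[pos:pos + n_bits]
        pos += n_bits
        frac = bits_to_int(chunk) / (2 ** n_bits - 1)
        x = low + frac * (up - low)
        out[name] = round(x) if is_int else x
    return out

# Hypothetical layout: a real radius R in [1, 5] coded on 10 bits and an
# integer fluid index k_f in [1, 4] coded on 2 bits.
SPEC = [("R", 10, 1.0, 5.0, False), ("k_f", 2, 1, 4, True)]
```

With this scheme, the all-ones string decodes to the upper bounds and the all-zeros string to the lower bounds, which is how the limits l and u constrain the search.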

Experimental Setup and Results
In the methodology applied in this paper, the optimization problem consists of maximizing the capillarity angle described by Equations (24) and (5). A virtual pipe was constructed to evaluate the objective function, and its behavior was simulated for 10 s (Figure 2). Once the optimization problem was defined, the next step was tuning the specific parameters of every algorithm utilized in this work. To adjust the settings, we used a hyper-optimization scheme with a statistical adjustment. Finally, all the algorithms were compared under different criteria when solving the problem considered in this work.

Tuning
In order to make fair comparisons among the metaheuristic algorithms, we used the framework presented in [36], in which a higher-level algorithm (in this case, differential evolution) is utilized to tune every algorithm. This scheme is known as a hyper-optimization problem, where the main idea is to minimize the computational effort (represented by the number of iterations) of a given algorithm:

min_p t_δ(A(x, p)), subject to | f_min − f* | ≤ δ,

where t_δ is the iteration number achieved under the restriction | f_min − f* | ≤ δ, f* is the best result achieved by a similar study [21], and f_min is the best outcome achieved by the algorithm being tuned. Additionally, the value of the threshold δ is set to 1 × 10^−5 as in [36], x is the candidate solution for the problem being optimized, and p is the candidate parameter setting for algorithm A. The tuning algorithm was run 30 times for every metaheuristic reported in this work to get the statistics of the best parameters. Results are shown in Table 2, following the format µ(σ). It is important to note that the vector p is composed of the parameters of every algorithm, and A ∈ {CSA, GA, BPSO, DBDE}.

Table 2. Statistical results of the parameter tuning, in the format mean(standard deviation). CSA: clonal selection algorithm; GA: genetic algorithm; BPSO: binary particle swarm optimization; DBDE: discrete binary differential evolution.

As mentioned in [36], in general terms the parameter tuning depends on the problem being solved. In that sense, after reviewing the results shown in Table 2, and based on the standard deviation, the most sensitive parameters, at least for the problem of the parabolic trough, are P_d, p_m, w, and c_r. A single run of every algorithm with the tuned parameters of Table 2 gave the results shown in Table 3 and Figure 3.
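The hyper-optimization idea can be illustrated with a much simpler stand-in: instead of differential evolution tuning a full metaheuristic on the pipe model, the sketch below tunes the mutation rate of a (1+1) bit-flip hill climber by minimizing the mean number of iterations needed to reach a target fitness, which plays the role of t_δ. Everything here (objective, candidate values, run counts) is an illustrative assumption.

```python
import random

def onemax(bits):
    """Toy maximization objective."""
    return sum(bits)

def t_delta(p_m, n_b=30, target=30, max_iters=5000):
    """Iterations a (1+1) bit-flip hill climber needs to reach the target
    fitness; this plays the role of t_delta for a candidate parameter p_m."""
    x = [random.randint(0, 1) for _ in range(n_b)]
    for t in range(1, max_iters + 1):
        # mutate each bit with probability p_m, keep if not worse
        y = [1 - b if random.random() < p_m else b for b in x]
        if onemax(y) >= onemax(x):
            x = y
        if onemax(x) >= target:
            return t
    return max_iters  # budget exhausted without reaching the threshold

def tune(candidates, runs=5):
    """Pick the candidate parameter with the lowest mean t_delta,
    mimicking the outer loop of the hyper-optimization scheme."""
    def mean_t(p):
        return sum(t_delta(p) for _ in range(runs)) / runs
    return min(candidates, key=mean_t)
```

A well-chosen per-bit rate near 1/N_b reaches the target far faster than an aggressive rate such as 0.5, which is exactly the kind of sensitivity the tuning stage is meant to expose.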

A General Comparison
The first experimental set consisted of 100 runs per algorithm, for statistical purposes. The main idea was to compare their performance by considering the best design solution discovered at every run, and by acknowledging that every metaheuristic searches the entire design space of nine variables (Table 3). Every run had 50 iterations and the same initial population, to make a fair comparison among the techniques. The results of the first 30 runs are in Table 4, with the best result of the four metaheuristics in bold. A brief review of Table 4 shows that BPSO found the best result in the majority of cases, compared with the other metaheuristics. The second best algorithm was GA, which obtained the best result in fewer runs, and the third best was DBDE with only two hits, whereas CSA showed the worst performance of all. To statistically validate these observations, we applied the nonparametric Wilcoxon test over the 100 runs of the complete experiment.
In the Wilcoxon rank-sum test, the null hypothesis is that two independent data samples come from continuous distributions with equal medians. In contrast, the alternative hypothesis is that those samples differ in that regard. Usually, the rejection of (or failure to reject) the null hypothesis considers a significance level of 5%. The test was applied to the experimental data, obtaining the results shown in Table 5. After reviewing those values, we can conclude that the best algorithm was BPSO, which was capable of finding better values in the majority of the runs. According to the p-values obtained from the experiment, it is clear that the results acquired by every algorithm come from distributions with different medians; therefore, the metaheuristics can be ordered from best to worst performance in this task as BPSO, GA, DBDE, and CSA. The average and standard deviation of the 100 runs are given in Table 6. This information confirms that BPSO was, on average, the best metaheuristic to solve the design of the pipe with micro-grooves. Additionally, the algorithm achieved improvements in the capillarity angle of around 11% in comparison with a similar study [21].
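A self-contained version of the rank-sum test for two independent samples can be sketched as below, using the standard normal approximation (without tie or continuity corrections); a statistics package would normally be used instead, but the sketch makes the mechanics explicit.

```python
from statistics import NormalDist

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test (normal approximation): the null
    hypothesis says the two independent samples share a median."""
    combined = sorted((v, 0 if k < len(a) else 1)
                      for k, v in enumerate(list(a) + list(b)))
    values = [v for v, _ in combined]
    # assign average ranks to tied values
    ranks, i = {}, 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        avg = (i + 1 + j) / 2.0  # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    # rank sum of the first sample, its null mean and standard deviation
    w = sum(ranks[k] for k, (_, g) in enumerate(combined) if g == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12.0) ** 0.5
    z = (w - mu) / sigma
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return z, p
```

Comparing the 100 best-angle values of two algorithms this way yields a p-value; values below 0.05 reject the equal-median hypothesis, as reported in Table 5.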

A Time-Based Comparison
The time in seconds required to complete every trial is given in Table 7, in the format mean(standard deviation) over the 30 runs. In this experiment, the comparison did not need statistical validation because the mean values were clearly different for every metaheuristic. In that sense, the best metaheuristic for the micro-groove design was the genetic algorithm (GA), which had the minimum execution time. The second and third best algorithms were discrete binary differential evolution and binary particle swarm optimization, with very similar average times, DBDE having a slightly lower standard deviation. In terms of execution time, the clonal selection algorithm had the worst performance, due mainly to the fact that the fitness evaluation of the mutated clones is computationally expensive.

Conclusions
In this article, four metaheuristic algorithms were used to optimize the pipe design in a parabolic trough: genetic algorithms, binary particle swarm optimization, discrete binary differential evolution, and the clonal selection algorithm. In the proposal, we used two differential equations to simulate the dynamic behavior of several fluids contained in a pipe with two kinds of micro-grooves. The experimental results show an average improvement of 11% by the binary particle swarm optimization in comparison with a similar study that explored the design of micro-grooved pipes. After several runs of the algorithm, it is clear that the best tube must have semi-circular micro-grooves, with water as the best working fluid when compared with liquid sodium and two different molten salts. This study also considers the values of the angles β and γ to calculate the wet front, producing a better pipe design in a concentrated solar power system than a previous study [21].