Article

Quadratic Interpolation Based Simultaneous Heat Transfer Search Algorithm and Its Application to Chemical Dynamic System Optimization

Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China
*
Author to whom correspondence should be addressed.
Processes 2020, 8(4), 478; https://doi.org/10.3390/pr8040478
Submission received: 14 March 2020 / Revised: 13 April 2020 / Accepted: 14 April 2020 / Published: 19 April 2020
(This article belongs to the Section Process Control and Monitoring)

Abstract

Dynamic optimization problems (DOPs) are widely encountered in complex chemical engineering processes. However, because chemical processes are highly constrained, nonlinear, and nonsmooth, which often gives rise to nonconvexity, multimodality, and discontinuity, handling DOPs is not a straightforward task. The heat transfer search (HTS) algorithm is a relatively novel metaheuristic inspired by the natural laws of thermodynamics and heat transfer. In order to solve DOPs efficiently, a new variant of the HTS algorithm, named the quadratic interpolation based simultaneous heat transfer search (QISHTS) algorithm, is proposed in this paper. The QISHTS algorithm introduces three modifications into the original HTS algorithm, namely the effect of simultaneous heat transfer search, the quadratic interpolation method, and a population regeneration mechanism. These three modifications are employed to lower the computational complexity and to enhance the exploration and exploitation capabilities. The ensemble of these modifications therefore yields a more efficient optimization algorithm with well-balanced exploration and exploitation. The proposed variant is first investigated on well-defined benchmark problems and then applied to solve four chemical DOPs. Moreover, it is compared with different well-established methods from the literature. The results demonstrate that the QISHTS algorithm has better robustness and precision than the other competitors.

1. Introduction

Many important chemical processes depend greatly on dynamic optimization, which aims at optimizing the performance index of the whole process by applying optimizing control to an established dynamic model [1,2]; these models generally contain dynamic variables whose values vary with time [3]. Optimization problems that involve dynamic variables are called dynamic optimization problems (DOPs); their dynamic systems are described by a set of algebraic and differential equations. Several main fields involve DOPs, such as parameter estimation and optimal control of dynamic models. DOPs are also inherent in reactor network synthesis, where the differential modeling of chemical reactors produces the dynamic models [4]. Because chemical processes involve highly constrained, nonlinear, and nonsmooth environments, which usually cause nonconvexity, new global optimization approaches are required to find appropriate solutions to DOPs efficiently.
The solution methods for DOPs can be broadly classified into three categories: dynamic programming (DP), indirect methods, and direct methods. DP is an effective approach to solving DOPs that relies on Bellman's principle of optimality [5]. However, it suffers from the curse of dimensionality. To avoid this drawback, Luus proposed iterative dynamic programming (IDP) [6]. IDP is an effective global optimization approach that can reach the global optimum of multimodal optimization problems while limiting the growth in dimension. However, its computational time is relatively large, since both control and state variables must be segmented [7]; therefore, this approach is only appropriate for small-scale problems. In the indirect method, the original DOP is transformed into a Hamiltonian system using Pontryagin's maximum principle [8]; this method is considered the most precise approach to solving optimal control problems [9,10], but its application to practical DOPs is complex. The direct methods include two different parametrization strategies: the complete parametrization (CP) method and the control vector parametrization (CVP) method. The CP method, also called the simultaneous method, segments both state and control variables simultaneously, whereas the CVP method segments only the control variables. The CP method can handle path-constrained problems well, but it results in a large-scale nonlinear programming (NLP) problem, and specialized techniques are needed to solve this NLP efficiently. The CVP method translates the original DOP into an NLP problem whose dimension is much smaller than that produced by the CP method; therefore, the CVP method has been incorporated with various optimization algorithms to handle DOPs [11,12]. A number of deterministic methods have also been proposed for handling DOPs. Gradient-based methods [13,14], such as sequential quadratic programming and Newton-type algorithms, are deterministic methods that converge rapidly; however, they may converge to local optima depending on the initial guess and the degree of nonlinearity [15]. Other deterministic methods are branch-and-bound algorithms [16,17]; they require rigorous bounds on the parameters or strict values for these parameters, which can be difficult to obtain in practice [15]. For detailed information about deterministic methods, the reader can refer to the literature [15].
Meta-heuristic algorithms (MHAs) are inspired by varieties of natural phenomena; they mimic some element of biology, ethology, physics, and swarm intelligence [18]. Comparing with other deterministic methods, MHAs possess superior global perspective and make few assumptions about the optimization problems. Due to the importance of dynamic optimization in chemical processes, various MHAs have been presented for handling chemical DOPs in recent years, such as genetic algorithm (GA) [19,20,21], differential evolution (DE) [22,23,24,25], ant colony optimization (ACO) [26,27], simulated annealing (SA) [28,29], particle swarm optimization (PSO) [30,31,32,33], artificial bee colony (ABC) [34], scatter search (SS) [35,36,37], line-up competition algorithm (LCA) [38], cuckoo search (CS) [39], biogeography-based optimization (BBO) [40], and teaching-learning based optimization (TLBO) [41].
The heat transfer search (HTS) algorithm is a relatively new MHA presented by Patel and Savsani [42]; it simulates the natural laws of heat transfer and thermodynamics. The main framework of this algorithm consists of three basic phases: the conduction phase, the convection phase, and the radiation phase. The HTS algorithm has emerged as one of the more efficient methods, and its authors have empirically shown it to be superior to other MHAs [42]. HTS and its modified variants have been applied to various optimization problems [43,44,45,46,47,48,49]. However, reports of applying the HTS algorithm to engineering process optimization are still rare, especially in the area of chemical process optimization. It is worth mentioning that HTS-based algorithms have not been used to handle chemical DOPs. Thus, the main aim of this work is to apply the HTS algorithm to solve real-world chemical DOPs and, hence, extend the practical applications of this algorithm to chemical dynamic optimization.
The exploration and exploitation capabilities of an MHA are generally contradictory to each other, and it is important to utilize suitable techniques to keep these two abilities well-balanced. Thus, to provide a more efficient HTS method with well-balanced exploration and exploitation capabilities, three modifications are introduced into the original HTS algorithm, namely the effect of simultaneous heat transfer search (SHTS), quadratic interpolation (QI) method, and population regeneration mechanism. Hence, a new version of HTS algorithm named QISHTS is proposed to solve real-world chemical DOPs efficiently. In the QISHTS algorithm, the effect of SHTS is employed to provide superior performance with lower computational complexity; the QI method is adopted to improve the exploitation capability, whereas the population regeneration mechanism is used to alleviate the probability of local optimum stagnation and, hence, enhance the exploration capability.
The main contributions of this work can be summarized as follows:
(1)
A new variant of HTS algorithm named the QISHTS algorithm is presented by integrating the effect of SHTS, QI method, and population regeneration mechanism. The ensemble of these three modifications can provide a more efficient HTS method with well-balanced exploration and exploitation capabilities.
(2)
The performance of the QISHTS algorithm is investigated by a set of 18 well-defined benchmark functions, and the obtained results are compared with those of other well-established MHAs.
(3)
The proposed QISHTS algorithm is applied for solving four real-world chemical DOPs, including two dynamic parameter estimation problems and two optimal control problems. To the best of our knowledge, HTS-based algorithms have not been used for handling chemical DOPs, and our work is the first attempt to utilize it for solving such problems.
(4)
The effectiveness of the QISHTS algorithm in solving chemical DOPs is compared with those of twelve well-established MHAs existing in the literature.
The rest of this paper is organized as follows. In Section 2, the basic formulation of DOPs is defined. In Section 3, the basic theoretical principle of the HTS algorithm is explained. In Section 4, the proposed QISHTS algorithm is presented in detail. Comparisons and investigations are reported in Section 5. In Section 6, the QISHTS algorithm is applied to solve real-world DOPs. Finally, conclusions are drawn in Section 7.

2. Dynamic Optimization Problems Formulation

In this study, the dynamic system of a DOP is described by algebraic and differential constraints, and its mathematical formulation can be given as follows [35]:
$$
\begin{aligned}
\min \quad & obj = \psi\left(x(t), u(t), \rho\right) \\
\text{subject to:} \quad &
\begin{cases}
f\left(\dot{x}(t), x(t), u(t), \rho\right) = 0 \\
x(t_0) = x_0 \\
h_i\left(x(t), u(t), \rho\right) = 0, & i = 1, \ldots, m \\
g_i\left(x(t), u(t), \rho\right) \le 0, & i = 1, \ldots, m \\
u_L \le u(t) \le u_U \\
\rho_L \le \rho \le \rho_U \\
t \in [t_0, t_f]
\end{cases}
\end{aligned}
\tag{1}
$$
where $\psi(x(t), u(t), \rho)$ represents the objective function, $f$ denotes the set of differential equations describing the system dynamics, $x(t)$ is the vector of time-dependent state variables, $u(t)$ is the vector of time-dependent control variables, $\rho$ is the vector of undetermined parameters, and $g$ and $h$ represent the inequality and equality constraints, respectively.
In general, the optimization variables of DOPs in chemical processes contain both the time-independent undetermined parameters ρ and time-dependent control variables u ( t ) . In this paper, two main types of DOPs are defined depending on the type of optimization variables, and these two types are as follows:
(1)
Dynamic parameter estimation problems: In these problems, the optimization variables only contain undetermined parameters, ρ [37].
(2)
Optimal control problems: In these problems, the optimization variables only include control variables, u ( t ) [13].
The detailed procedure for solving a generalized DOP can be summarized as follows. First, determine the type of DOP from its optimization variables: if only time-independent undetermined parameters $\rho$ are present, it is a parameter estimation problem that can be translated directly into an NLP problem; if time-dependent control variables $u(t)$ are present, it is an optimal control problem that cannot be transformed directly into an NLP problem, so a discretization approach such as the control vector parametrization (CVP) method [11,12] is required to translate it into an NLP problem. The CVP method divides the time interval into a number of stages (N), and the control variables $u(t)$ in every sub-interval are approximated by basis functions, such as linear functions, constant functions, or wavelet-based functions. Second, apply an optimization method such as an MHA to optimize the parameters that remain after the transformation. Then, use an ordinary differential equation integrator to obtain the objective function value (OFV), since the candidate parameters must be substituted into Equation (1) and the dynamics integrated during the solution process. After these steps, the solution of the DOP can be output.
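A minimal illustrative sketch of this workflow is shown below; it is not the authors' implementation. The placeholder dynamics, cost, horizon, number of stages, and the use of scipy's differential_evolution as a stand-in for an MHA are assumptions chosen only to make the example self-contained and runnable.

```python
# Sketch of the generic DOP workflow: parametrize u(t) by CVP, integrate the
# ODE system to obtain the objective function value (OFV), and hand the
# resulting NLP to an optimizer.  All model details below are placeholders.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

N, t0, tf = 10, 0.0, 1.0                      # number of CVP stages and horizon
t_knots = np.linspace(t0, tf, N + 1)

def u_of_t(t, params):
    """Piecewise-constant control: params[k] is the value of u on stage k."""
    k = min(np.searchsorted(t_knots, t, side="right") - 1, N - 1)
    return params[k]

def objective(params):
    """OFV: integrate the dynamics for the given stage values, then score."""
    def rhs(t, x):
        u = u_of_t(t, params)
        return [-x[0] + u, x[0] - 0.5 * x[1]]  # placeholder dynamics f(x, u)
    sol = solve_ivp(rhs, (t0, tf), [1.0, 0.0])
    x_tf = sol.y[:, -1]
    return (x_tf[0] - 0.2) ** 2 + (x_tf[1] - 0.6) ** 2   # placeholder cost

bounds = [(0.0, 5.0)] * N                      # box bounds on each control stage
result = differential_evolution(objective, bounds, maxiter=30, seed=1)
print(result.x, result.fun)
```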

3. Heat Transfer Search (HTS) Algorithm

The HTS algorithm is a population-based optimization method based on the natural laws of heat transfer and thermodynamics, which state that "any system always tries to achieve an equilibrium state with its surroundings" [42]. The algorithm considers three heat transfer phases (conduction, convection, and radiation) to simulate the thermal equilibrium behavior of a system [42]. These three phases play important roles in establishing thermal equilibrium and reaching an equilibrium state. The HTS algorithm begins with a randomly generated population (NP), regarded as a group of molecules; these molecules try to achieve an equilibrium state with their surroundings by interacting among themselves and with their environment through the three phases of heat transfer. The population is updated in each iteration by one of the three heat transfer phases, where a uniformly distributed random number ($R_n$) is generated in the interval [0, 1] to decide which phase is activated in that iteration. In other words, if $R_n \in [0, 0.3333]$, the population is updated by the conduction phase; if $R_n \in [0.3333, 0.6666]$, the population undergoes the radiation phase; and if $R_n \in [0.6666, 1]$, the population is updated by the convection phase. In the HTS algorithm, the selection of updated solutions is carried out by a greedy selection technique, which accepts a new solution only if it has a superior objective value. After that, the inferior solutions are replaced by the elite solutions; thus, a superior solution can be obtained from the difference between the elite solutions and the current solution. The whole search process of the HTS algorithm is carried out through the essential operations of the conduction, convection, and radiation phases, which are briefly explained in the following subsections.
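The per-iteration dispatch described above can be summarized by the following minimal sketch; the three phase functions are assumed to be defined elsewhere, and their names are hypothetical.

```python
# Minimal sketch of the original HTS phase selection: one uniform random
# number R_n per iteration decides which heat transfer phase updates the
# population.
import random

def hts_iteration(population, conduction, radiation, convection):
    R_n = random.random()
    if R_n <= 0.3333:
        return conduction(population)   # conduction phase
    elif R_n <= 0.6666:
        return radiation(population)    # radiation phase
    else:
        return convection(population)   # convection phase
```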

3.1. Conduction Phase

The basic principle of this phase is that heat transfer occurs through conduction among the molecules of a substance. Thus, molecules with higher energy transfer heat to molecules with lower energy in order to reach a state of thermal equilibrium, as the system attempts to neutralize the thermal imbalance. During the conduction phase of the algorithm, a new solution is updated according to the following equations:
$$
X_{j,i}^{new} =
\begin{cases}
X_{k,i} + \left(-R_n^2\, X_{k,i}\right), & \text{if } f(X_j) > f(X_k) \\
X_{j,i} + \left(-R_n^2\, X_{j,i}\right), & \text{if } f(X_j) < f(X_k)
\end{cases}
\qquad \text{if } FE \le FE_{\max}/CDF
\tag{2}
$$
$$
X_{j,i}^{new} =
\begin{cases}
X_{k,i} + \left(-r_i\, X_{k,i}\right), & \text{if } f(X_j) > f(X_k) \\
X_{j,i} + \left(-r_i\, X_{j,i}\right), & \text{if } f(X_j) < f(X_k)
\end{cases}
\qquad \text{if } FE > FE_{\max}/CDF
\tag{3}
$$
where $X_{j,i}^{new}$ represents the newly updated solution; $j = 1, 2, \ldots, n$; $k$ denotes a randomly chosen solution with $j \ne k$ and $k \in \{1, 2, \ldots, n\}$; $i$ indicates a randomly chosen decision variable with $i \in \{1, 2, \ldots, m\}$; $R_n$ is a random number in the range [0, 0.3333]; $r_i$ is a random number in the range [0, 1]; $R_n^2$ and $r_i$ correspond to the conductance parameters of Fourier's law of heat conduction [42]; $FE$ denotes the number of function evaluations; and $CDF$ is the conduction factor, set to 2 to balance exploration and exploitation in the conduction phase [42].
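A possible NumPy rendering of this update is sketched below; it assumes a population matrix X of shape (n, m) and a vector f of objective values (minimization), and it follows the sign convention of the reconstructed Equations (2) and (3).

```python
import numpy as np

def conduction_update(X, f, FE, FE_max, CDF=2.0, rng=np.random.default_rng()):
    """Conduction-phase sketch: each solution j perturbs one randomly chosen
    variable of itself or of a random partner k, depending on which of the
    two currently has the better (lower) objective value."""
    n, m = X.shape
    X_new = X.copy()
    for j in range(n):
        k = rng.choice([idx for idx in range(n) if idx != j])
        i = rng.integers(m)
        if FE <= FE_max / CDF:
            step = -(rng.uniform(0.0, 0.3333) ** 2)   # -R_n^2, Equation (2)
        else:
            step = -rng.uniform(0.0, 1.0)             # -r_i, Equation (3)
        if f[j] > f[k]:
            X_new[j, i] = X[k, i] + step * X[k, i]
        else:
            X_new[j, i] = X[j, i] + step * X[j, i]
    return X_new
```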

3.2. Convection Phase

The basic principle of this phase is that heat transfer takes place through convection between the system and the surroundings. Thus, the mean temperature of the system interacts with the surrounding temperature in order to establish a state of thermal equilibrium. In this phase, the mean of the population members (denoted by $X_{ms}$) is regarded as the mean temperature, whereas the best solution (denoted by $X_s$) is regarded as the surroundings. In this part of the algorithm, a new solution is updated using the following formula:
$$
X_{j,i}^{new} = X_{j,i} + R_n\left(X_s - X_{ms} \times TCF\right)
\tag{4}
$$
$$
TCF =
\begin{cases}
\mathrm{abs}\left(R_n - r_i\right), & \text{if } FE \le FE_{\max}/COF \\
\mathrm{round}\left(1 + r_i\right), & \text{if } FE > FE_{\max}/COF
\end{cases}
\tag{5}
$$
where $X_{j,i}^{new}$ is the newly updated solution; $j = 1, 2, \ldots, n$; $i = 1, 2, \ldots, m$; $R_n$ is a random number in the range [0.6666, 1]; $r_i$ is a random number in the interval [0, 1]; $R_n$ and $r_i$ correspond to the convection parameters of Newton's law of cooling [42]; $FE$ denotes the number of function evaluations; $TCF$ is the temperature change factor; and $COF$ is the convection factor, set to 10 to balance exploration and exploitation in the convection phase [42].
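Under the same assumptions as the conduction sketch above, the convection update can be rendered as follows; X_best plays the role of the surroundings and the population mean plays the role of the mean temperature.

```python
import numpy as np

def convection_update(X, X_best, FE, FE_max, COF=10.0, rng=np.random.default_rng()):
    """Convection-phase sketch (Equations (4)-(5)): every variable of every
    solution moves toward the best solution relative to the population mean,
    scaled by the temperature change factor TCF."""
    n, m = X.shape
    X_mean = X.mean(axis=0)
    X_new = X.copy()
    for j in range(n):
        for i in range(m):
            R_n = rng.uniform(0.6666, 1.0)
            r_i = rng.uniform(0.0, 1.0)
            TCF = abs(R_n - r_i) if FE <= FE_max / COF else round(1 + r_i)
            X_new[j, i] = X[j, i] + R_n * (X_best[i] - X_mean[i] * TCF)
    return X_new
```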

3.3. Radiation Phase

The basic principle of this phase is that the system attempts to neutralize the thermal imbalance by interacting with the surrounding temperature (i.e., the best solution) or within the system itself (i.e., another solution). Here, the system tries to reach a state of thermal balance. In this part of the algorithm, a new solution is updated according to the following equations:
$$
X_{j,i}^{new} =
\begin{cases}
X_{j,i} + R_n\left(X_{k,i} - X_{j,i}\right), & \text{if } f(X_j) > f(X_k) \\
X_{j,i} + R_n\left(X_{j,i} - X_{k,i}\right), & \text{if } f(X_j) < f(X_k)
\end{cases}
\qquad \text{if } FE \le FE_{\max}/RDF
\tag{6}
$$
$$
X_{j,i}^{new} =
\begin{cases}
X_{j,i} + r_i\left(X_{k,i} - X_{j,i}\right), & \text{if } f(X_j) > f(X_k) \\
X_{j,i} + r_i\left(X_{j,i} - X_{k,i}\right), & \text{if } f(X_j) < f(X_k)
\end{cases}
\qquad \text{if } FE > FE_{\max}/RDF
\tag{7}
$$
where $X_{j,i}^{new}$ represents the newly updated solution; $j = 1, 2, \ldots, n$; $i = 1, 2, \ldots, m$; $k$ indicates a randomly chosen solution with $j \ne k$ and $k \in \{1, 2, \ldots, n\}$; $R_n$ is a random number in the range [0.3333, 0.6666]; $r_i$ is a random number in the range [0, 1]; $R_n$ and $r_i$ correspond to the radiation elements of the Stefan–Boltzmann law [42]; $FE$ denotes the number of function evaluations; and $RDF$ is the radiation factor, set to 2 to balance exploration and exploitation in the radiation phase [42].
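A corresponding sketch of the radiation update, under the same assumptions as the two previous snippets, is given below.

```python
import numpy as np

def radiation_update(X, f, FE, FE_max, RDF=2.0, rng=np.random.default_rng()):
    """Radiation-phase sketch (Equations (6)-(7)): each solution moves along
    the difference between itself and a random partner k, with the direction
    set by which of the two is better."""
    n, m = X.shape
    X_new = X.copy()
    for j in range(n):
        k = rng.choice([idx for idx in range(n) if idx != j])
        for i in range(m):
            step = (rng.uniform(0.3333, 0.6666) if FE <= FE_max / RDF
                    else rng.uniform(0.0, 1.0))
            if f[j] > f[k]:
                X_new[j, i] = X[j, i] + step * (X[k, i] - X[j, i])
            else:
                X_new[j, i] = X[j, i] + step * (X[j, i] - X[k, i])
    return X_new
```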
The search process of the HTS algorithm is carried out through the three aforementioned phases, through which the global optimum of a given problem can be estimated. In the HTS algorithm, every phase is divided into two sub-phases controlled by the corresponding heat transfer factor and the number of function evaluations. As the three heat transfer phases have almost equal probability of being selected, the values of the design variables change gradually or abruptly; these large or small changes of the design variables correspond to the exploration and exploitation of the search space. The overall flowchart of the original HTS algorithm is illustrated in Figure 1.

4. Quadratic Interpolation Based Simultaneous Heat Transfer Search (QISHTS) Algorithm

In this section, aiming to provide a more efficient HTS method with well-balanced exploration and exploitation capabilities, three modifications are introduced into the original HTS algorithm, namely the effect of simultaneous heat transfer search (SHTS), the quadratic interpolation (QI) method, and a population regeneration mechanism. Thus, a new version of the HTS algorithm, named the quadratic interpolation based simultaneous heat transfer search (QISHTS) algorithm, is proposed. In the QISHTS algorithm, the effect of SHTS is employed to provide superior performance with lower computational complexity, the QI method is adopted to improve the exploitation capability, and the population regeneration mechanism is used to enhance the exploration capability. These improvements are described in detail in the following subsections.

4.1. Simultaneous Heat Transfer Search (SHTS)

In the original HTS algorithm, the search is carried out through one of the three phases (conduction, convection, or radiation) in each iteration, where a random number ($R_n$) is generated between 0 and 1 to determine which of the three phases is activated to update the solutions. Thus, in each generation, the population members undergo either the conduction phase, the convection phase, or the radiation phase, and the overall procedure is repeated until the stopping criterion is met. However, randomly generating the probability $R_n$ in each iteration increases the computational complexity without guaranteeing a better solution. Besides, if any one of the three phases gets more chances during the optimization process, it will markedly affect the overall performance of the algorithm. Therefore, in order to overcome these drawbacks, the idea of a simultaneous heat transfer search (SHTS) is incorporated into the original HTS algorithm. It is worth mentioning that all three phases need to be performed during the optimization process for the HTS to function efficiently [42]. Thus, in the SHTS, the process of generating the probability $R_n$ in each iteration is eliminated, all three phases are performed simultaneously, and the population members undergo all three phases. In other words, the entire population (NP) is divided into three sub-populations in each generation, and every sub-population is assigned to one of the three phases of the HTS algorithm. Let $N_{CD}$, $N_{CV}$, and $N_{RD}$ be the sets of population members for the conduction, convection, and radiation phases, respectively, where $N_{CD} + N_{CV} + N_{RD} = NP$. Hence, three new best solutions ($f(x_{CD})$, $f(x_{CV})$, and $f(x_{RD})$) are generated by the three phases in each iteration. Performing the three phases simultaneously in each iteration not only helps to provide superior performance but also helps to decrease the computational time. Therefore, introducing SHTS into the basic HTS algorithm is beneficial for providing superior performance with lower computational complexity. More details about SHTS can be found in the literature [47].
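A minimal sketch of the SHTS split is shown below; it shuffles the population and partitions it into three roughly equal sub-populations, one per phase. The equal-thirds split is an illustrative assumption, since the text only requires $N_{CD} + N_{CV} + N_{RD} = NP$.

```python
import numpy as np

def split_population(X, rng=np.random.default_rng()):
    """SHTS sketch: shuffle the population and split it into three
    sub-populations so that conduction, convection, and radiation all run
    in every iteration."""
    thirds = np.array_split(rng.permutation(len(X)), 3)
    N_CD, N_CV, N_RD = (X[idx] for idx in thirds)
    return N_CD, N_CV, N_RD
```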

4.2. Quadratic Interpolation Method

The quadratic interpolation (QI) is a nonlinear one-dimensional optimization operator that fits a parabola to the shape of the objective function close to the optimum. This operator can reach the minimum within the initial space. The method was first applied in the improved controlled random search [50]. Subsequently, it has been integrated into various MHAs, such as differential evolution [51], genetic algorithms [52], and the Alopex-based evolutionary algorithm [53], for solving complex continuous optimization problems. Moreover, it was also combined with teaching-learning-based optimization [41] for handling dynamic optimization problems. The main principle of the QI operator is to use three known points (e.g., two randomly chosen members and the most outstanding member of the whole population) to form a quadratic curve that approximates the shape of the objective function. Hence, the extreme point of the quadratic function can approximate the optimum of the objective function [53]. Assume that $x_1 = (x_1^1, x_1^2, \ldots, x_1^D)$, $x_2 = (x_2^1, x_2^2, \ldots, x_2^D)$, and $x_3 = (x_3^1, x_3^2, \ldots, x_3^D)$ are the three selected members used by the QI method to generate a new member $(x_{QI})$; it is then calculated as follows:
$$
x_{QI} = \left(x_{QI}^1, x_{QI}^2, \ldots, x_{QI}^D\right)
\tag{8}
$$
$$
x_{QI}^d = 0.5 \times
\frac{\left[\left((x_3^d)^2 - (x_2^d)^2\right) f(x_1)\right] + \left[\left((x_1^d)^2 - (x_3^d)^2\right) f(x_2)\right] + \left[\left((x_2^d)^2 - (x_1^d)^2\right) f(x_3)\right]}
{\left[\left(x_3^d - x_2^d\right) f(x_1)\right] + \left[\left(x_1^d - x_3^d\right) f(x_2)\right] + \left[\left(x_2^d - x_1^d\right) f(x_3)\right]},
\qquad d = 1, 2, \ldots, D
\tag{9}
$$
where $f(x_1)$, $f(x_2)$, and $f(x_3)$ represent the fitness values of the three selected members, respectively.
In the proposed method, the three members ($X_{CD}$, $X_{CV}$, and $X_{RD}$) obtained by the conduction, convection, and radiation phases in each iteration are selected to execute the QI method and generate a new member ($X_{QI}$). The newly generated $X_{QI}$ is evaluated; if it is superior to the existing one, it is accepted and used to replace $X_{worst}$. In this method, the quadratic interpolation among different individual members helps to enhance the exploitation capability.
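A compact sketch of this QI step is given below; the small eps guard against a degenerate (near-flat) denominator is an added assumption, not part of Equation (9).

```python
import numpy as np

def quadratic_interpolation_point(x1, x2, x3, f1, f2, f3, eps=1e-12):
    """QI sketch (Equations (8)-(9)): fit a parabola through three members
    and return its vertex, computed dimension by dimension."""
    x1, x2, x3 = map(np.asarray, (x1, x2, x3))
    num = (x3**2 - x2**2) * f1 + (x1**2 - x3**2) * f2 + (x2**2 - x1**2) * f3
    den = (x3 - x2) * f1 + (x1 - x3) * f2 + (x2 - x1) * f3
    return 0.5 * num / (den + eps)   # eps guards against a flat quadratic fit

# In QISHTS the three inputs would be X_CD, X_CV, and X_RD from the three
# phases; the resulting point replaces X_worst if its objective value is better.
```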

4.3. Population Regeneration Mechanism

In the proposed method, the solution relies heavily on the initial population members, which often do not contain a globally optimal solution. Moreover, the solutions may quickly cluster around local optima during the optimization process, which reduces the exploration capability of the algorithm. Since alleviating the probability of stagnation at local optima is a challenging task for many MHAs, a population regeneration mechanism is introduced into the proposed method to avoid local optima and to search for superior solutions. In this mechanism, the population regenerator regenerates the population when the fittest solution remains essentially unchanged for a predefined number of function evaluations ($PDNFES$). The regeneration is carried out by flipping the population, and its mathematical formula is given as:
$$
X_{j,i}^{flip} = \left(L_{j,i} + U_{j,i}\right) - X_{j,i}, \qquad \text{if } R_i \le PV
\tag{10}
$$
where $X_{j,i}^{flip}$ represents the opposite point of the population member $X_{j,i}$, produced by reflecting it about the midpoint of the bounds of the $i$th design variable for the $j$th population member; $L$ and $U$ indicate the lower and upper bounds, respectively; $R_i$ is a random number in the range [0, 1]; and $PV$ is the population flip probability factor, a specified value between 0 and 1.
The population regeneration process is applied to all three phases; it helps to alleviate the probability of stagnation at local optima and improves the exploration capability. In this mechanism, two controlling parameters ($PDNFES$ and $PV$) play an important role in the performance of the proposed algorithm; their values are therefore set differently for different optimization problems.
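A vectorized sketch of the flip operation in Equation (10) is shown below; the stagnation counter that triggers it (the PDNFES check) is assumed to be maintained by the caller.

```python
import numpy as np

def regenerate_population(X, L, U, PV=0.1, rng=np.random.default_rng()):
    """Population-regeneration sketch: flip each variable about the midpoint
    of its bounds with probability PV (Equation (10))."""
    R = rng.uniform(size=X.shape)
    return np.where(R <= PV, (L + U) - X, X)
```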

4.4. The Overall Process of QISHTS Algorithm

By simultaneously integrating the SHTS, the QI method, and the population regeneration mechanism into the original HTS algorithm, we obtain a new version of the HTS algorithm called QISHTS. The overall flowchart of the QISHTS algorithm is illustrated in Figure 2, and its overall process is described in detail below (a compact code sketch follows the step list):
  • Step 1. Define the population size ($NP$), the function optimization goal, the maximum number of function evaluations ($FE_{\max}$), the values of the two controlling parameters ($PDNFES$ and $PV$), the current number of function evaluations $CFES = 0$, and the stopping criteria.
  • Step 2. Randomly generate the main population $x_i$, $i = 1, 2, \ldots, NP$, within the lower and upper bounds $[L, U]$ of the decision variables and calculate the objective function values $f(x_i)$, $i = 1, 2, \ldots, NP$.
  • Step 3. Determine the best individual ($X_{best}$, $f_{best}$) in the population according to the fitness.
  • Step 4. Randomly divide the whole population into three sub-populations ($N_{CD}$, $N_{CV}$, and $N_{RD}$) and assign every sub-population to one of the three phases, where $N_{CD}$, $N_{CV}$, and $N_{RD}$ are the sets of population members for the conduction, convection, and radiation phases, respectively.
  • Step 5. Generate the new solutions by the conduction, convection, and radiation phases.
  • Step 6. Select the three best members ($X_{CD}$, $X_{CV}$, and $X_{RD}$) obtained by the conduction, convection, and radiation phases to generate a new member ($X_{QI}$) by the QI method according to Equations (8) and (9).
  • Step 7. Bound the new solution to the search domain.
  • Step 8. Evaluate the new solution; if it is superior to the existing one, accept it and use it to replace the worst one.
  • Step 9. If the stopping criteria are met, output the final solution. Otherwise, return to Step 3.
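The compact sketch below strings Steps 1–9 together, reusing the helper functions sketched earlier (conduction_update, convection_update, radiation_update, quadratic_interpolation_point, regenerate_population). The function-evaluation bookkeeping, the equal-thirds split, and the selection details are simplifying assumptions, not the authors' exact implementation.

```python
import numpy as np

def qishts(obj, L, U, NP=50, FE_max=20000, PDNFES=150, PV=0.1, seed=1):
    rng = np.random.default_rng(seed)
    L, U = np.asarray(L, float), np.asarray(U, float)
    X = rng.uniform(L, U, size=(NP, len(L)))                    # Step 2
    f = np.array([obj(x) for x in X])
    FE, stall, f_prev = NP, 0, np.inf
    while FE < FE_max:                                          # Step 9
        x_best = X[np.argmin(f)].copy()                         # Step 3
        parts = np.array_split(rng.permutation(NP), 3)          # Step 4
        leaders = []
        for p, phase in zip(parts, ("CD", "CV", "RD")):         # Step 5
            sub, fs = X[p], f[p]
            if phase == "CD":
                new = conduction_update(sub, fs, FE, FE_max, rng=rng)
            elif phase == "CV":
                new = convection_update(sub, x_best, FE, FE_max, rng=rng)
            else:
                new = radiation_update(sub, fs, FE, FE_max, rng=rng)
            new = np.clip(new, L, U)                            # Step 7
            fn = np.array([obj(x) for x in new])
            FE += len(p)
            better = fn < fs                                    # Step 8 (greedy)
            X[p[better]] = new[better]
            f[p[better]] = fn[better]
            leaders.append(X[p][np.argmin(f[p])])               # phase best
        x_qi = np.clip(quadratic_interpolation_point(           # Step 6
            *leaders, *[obj(x) for x in leaders]), L, U)
        f_qi = obj(x_qi)
        FE += 4
        worst = np.argmax(f)
        if f_qi < f[worst]:
            X[worst], f[worst] = x_qi, f_qi
        if f.min() < f_prev:                                    # stagnation check
            f_prev, stall = f.min(), 0
        else:
            stall += NP
        if stall >= PDNFES:                                     # regeneration
            X = regenerate_population(X, L, U, PV, rng)
            f = np.array([obj(x) for x in X])
            FE += NP
            stall = 0
    return X[np.argmin(f)], f.min()
```

For instance, under these assumptions, qishts(lambda x: float(np.sum(x**2)), L=[-5]*10, U=[5]*10) would minimize the 10-dimensional sphere function.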

5. Numerical Experiments and Discussions

In this section, in order to investigate the performance of the QISHTS algorithm, it is tested on a set of well-defined traditional benchmark problems, and the computational results obtained by this algorithm are compared with those of other well-established MHAs. Moreover, to further evaluate the overall effectiveness of the new variant, comparisons between the QISHTS algorithm and several state-of-the-art DEs are also conducted. The competitors have previously been tested on these benchmark problems, which pose a challenge to any MHA; hence, these benchmark problems are employed for the experimentation, and the above-mentioned algorithms are considered for comparison with the proposed variant. The detailed mathematical formulation of each function, with its dimension, range, and optimum value, is given in Table 1. These benchmark problems have different characteristics and can be classified into three types: f1–f6 are unimodal functions, f7–f11 are multimodal high-dimensional functions, and f12–f18 are multimodal low-dimensional functions. The considered benchmark functions are appropriate for testing the exploration and exploitation capabilities of an MHA.

5.1. Comparison of QISHTS with MHAs

In this section, to investigate the performance of the QISHTS algorithm, its results are compared with those of other MHAs, namely cuckoo search (CS) [54], the gravitational search algorithm (GSA) [55], artificial bee colony (ABC) [56], animal migration optimization (AMO) [57], heat transfer search (HTS) [42], and improved heat transfer search (IHTS) [44]. To conduct a fair comparison with the considered competitors, a common experimental platform is needed. Thus, the maximum number of function evaluations (FEmax) is set as follows: 150,000 for functions f1, f5, and f10; 200,000 for functions f2 and f11; 500,000 for functions f3, f4, and f7; 300,000 for functions f6, f8, and f9; 10,000 for functions f12, f14, f15, and f17; 40,000 for function f13; 3000 for function f16; and 20,000 for function f18. Furthermore, the population size (NP) is set to 50, and each method is run 25 times independently for each benchmark problem. The experimental parameter settings and results of the competitors are taken from the literature [44]. The controlling parameters of the QISHTS algorithm are set as follows: the population flip probability factor ($PV$) is set to 0.1 for all test functions; the predefined number of function evaluations ($PDNFES$) is set to 1000 for test functions whose overall function evaluations (FEs) exceed 100,000 and to 150 for test functions whose overall FEs are below 50,000. The best results, i.e., the best Mean and Std obtained by the compared algorithms, are shown in bold. Besides, each algorithm is ranked between 1 and 7 according to its results: the algorithm that obtains the best result is ranked 1, whereas the algorithm with the worst result is ranked 7.
Table 2 displays the comparative results of f1 to f6 for the QISHTS algorithm and the other algorithms. It can be seen from the table that the QISHTS, IHTS, and basic HTS algorithms achieve the best results on functions f1 and f2, followed by the other algorithms. Furthermore, the HTS and QISHTS algorithms obtain the best solution on f3. On functions f4 and f6, QISHTS performs significantly better than the other competitive methods. On function f5, the QISHTS and AMO algorithms achieve the global mean values. Compared with IHTS and the original HTS algorithm, the performance of QISHTS is improved on f3, f4, f5, and f6. From the mean rank and final rank of all competitive methods, it can be seen that the QISHTS algorithm shows the best overall effectiveness on unimodal functions, followed by IHTS, HTS, and the rest of the algorithms. Hence, the results indicate that the exploitation capability of the QISHTS algorithm is enhanced by the proposed improvements.
Table 3 shows the comparative results of f7 to f11 for the QISHTS algorithm and the other algorithms. It can be observed from the table that, on function f7, the ABC algorithm finds the best results; although the IHTS and HTS algorithms do not achieve the best results on this function, the QISHTS algorithm shows more success. Moreover, the QISHTS, IHTS, HTS, AMO, and ABC algorithms find the best solution on functions f9 and f11. On functions f8 and f10, QISHTS performs significantly better than the other competitive methods. Compared with IHTS and the original HTS algorithm, the performance of QISHTS is improved on f7, f8, and f10. From the mean rank and final rank of each competitor, it can be seen that QISHTS shows the best overall effectiveness, followed by AMO, IHTS, ABC, and the rest of the algorithms. Thus, the results indicate that the exploration ability of the QISHTS algorithm is enhanced by the proposed improvements.
Table 4 displays the comparative results of f12 to f18 for the QISHTS algorithm and the other algorithms. It can be observed that the QISHTS, IHTS, HTS, AMO, and ABC algorithms find the best solution on function f12. On function f13, QISHTS ranks first, followed by the rest of the algorithms; although the IHTS and HTS algorithms do not achieve the best results on this function, the QISHTS algorithm shows more success. From the mean rank and final rank of each competitor, it can be seen that IHTS shows the best overall effectiveness on these functions, followed by QISHTS, HTS, and the rest of the algorithms. Hence, the results indicate that the exploration ability of the QISHTS algorithm is enhanced by the proposed improvements.

5.2. Comparison of QISHTS with State-Of-The-Art DEs

In order to further assess the overall effectiveness of the proposed variant, another comparison, between the QISHTS algorithm and well-established state-of-the-art DEs, is conducted. These methods, namely opposition-based differential evolution (ODE) [58], self-adaptive differential evolution (SaDE) [59], modified differential evolution (MoDE) [60], composite differential evolution (CoDE) [61], and modified differential evolution with p-best crossover (MDE-pBX) [62], possess strong robustness and global search ability. The experimental results of the competitors are taken from [63]. To be consistent with the reference, a common experimental platform is required: the population size (NP) is set to 100, the dimension (D) of each benchmark problem is set to 30, and each method is run 25 times independently for all test problems. Moreover, the maximum number of function evaluations (maxFEs) is set differently for each function, as given in Table 5. After conducting many trials, the controlling parameters of the QISHTS algorithm are set as PDNFES = 1000 and PV = 0.1. The best mean (Mean) and standard deviation (Std) results obtained by the compared methods are shown in bold.
It can be seen from Table 5 that, although the SaDE algorithm finds the best result among the DEs on functions f1, f2, and f10, the QISHTS algorithm shows more success and ranks first on all these benchmark functions. The MoDE algorithm obtains the best results among the DEs on function f5, but it takes second place after the QISHTS algorithm, which ranks first on this function. On functions f7, f9, and f11, CoDE finds the best results among the DEs, but QISHTS shows more success and ranks first on these functions. From the mean rank and final rank of each competitor, it can be observed that QISHTS ranks first over the whole set of benchmark functions; it performs significantly better than the competing DEs, which proves that the proposed algorithm possesses strong robustness for handling optimization problems.

6. Application to Chemical Dynamic System Optimization

In this section, in order to further evaluate the effectiveness of the QISHTS method in solving real-world optimization problems, the proposed algorithm is used to handle four chemical DOPs (two dynamic parameter estimation problems and two optimal control problems) involving different levels of difficulty. Moreover, the results obtained by the QISHTS algorithm are compared with those of twelve well-established algorithms: linearly decreasing inertia weight particle swarm optimization (LDWPSO) [64], comprehensive learning particle swarm optimization (CLPSO) [65], differential evolution with self-adapting control parameters (jDE) [66], artificial bee colony (ABC) [56], self-adaptive differential evolution (SaDE) [59], real-coded biogeography-based optimization (RCBBO) [67], hybrid biogeography-based optimization with differential evolution (DEBBO) [68], teaching-learning-based optimization (TLBO) [69], nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) [70], biogeography-based learning particle swarm optimization (BLPSO) [71], generalized oppositional teaching-learning-based optimization (GOTLBO) [72], and quadratic interpolation based teaching-learning-based optimization (QITLBO) [41]. These methods are well known for their good performance, and they were previously examined on these DOPs in the literature [41]; thus, they were selected for comparison with the proposed method. The experimental parameter settings of the twelve algorithms follow the corresponding literature. To conduct a fair comparison, the population size (NP) is set to 50, and each algorithm is run 30 times independently for all problems. To be consistent with the references, the maximum number of function evaluations (maxFEs) and the acceptable precision (Ap) are used for recording the performance criteria, and these parameter values are set differently for each DOP, as listed in Table 6. Moreover, the controlling parameters of the QISHTS algorithm ($PDNFES$ and $PV$), chosen after many trials, are also set differently for each problem, as shown in Table 6. Several performance criteria are used for the comparisons: the best, mean, and worst objective function values and the standard deviations (Std) obtained by the competing algorithms over 30 independent runs are listed. Besides, the success rate (SR) shows the ability of an algorithm to obtain an acceptable solution with precision Ap before the maximum number of function evaluations is reached. Furthermore, the execution time (T) is given to show the runtime of all competitors. To facilitate the comparisons, the best results among the competing methods are shown in bold. Moreover, each algorithm is ranked between 1 and 13 according to its results: the algorithm that obtains the best result is ranked 1, whereas the algorithm with the worst result is ranked 13.

6.1. Dynamic Parameter Estimation Problems

In this subsection, the proposed QISHTS algorithm is applied for solving two dynamic parameter estimation problems. Moreover, the computational results are compared with those of twelve well-established algorithms.

6.1.1. First-Order Reversible Series Reaction Problem

This problem describes a first-order reversible chain reaction [16,73]. The reaction system ($A \xrightarrow{k_1} B \xrightarrow{k_2} C$) involves three species (A, B, and C); only the concentrations of the first two (A and B) were measured, while the last species (C) is not available in the model used for estimation. The mathematical formulation of this model is as follows:
$$
\begin{aligned}
\min_{\theta} \quad & \sum_{\mu=1}^{10} \sum_{i=1}^{2} \left(\hat{z}_{\mu,i} - \tilde{z}_{\mu,i}\right)^2 \\
\text{s.t.} \quad &
\begin{cases}
\dot{z}_1 = -\theta_1 z_1 \\
\dot{z}_2 = \theta_1 z_1 - \theta_2 z_2 \\
z_0 = (1, 0), \quad t \in [0, 1] \\
[0, 1] \le \theta \le [10, 10]
\end{cases}
\end{aligned}
\tag{11}
$$
where $\hat{z}_\mu$ and $\tilde{z}_\mu$ are the vectors of fitted and experimental values, respectively, $z_1$ and $z_2$ represent the mole fractions of the first two species (A and B), respectively, and $\theta_1$ and $\theta_2$ are the parameters of the first and second reactions, respectively. The measurement data for this problem are taken from [4].
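For illustration, the least-squares objective of this problem can be evaluated as sketched below; the measurement times and data arrays are placeholders standing in for the tabulated values in [4], and the solver tolerance is arbitrary. Any of the compared MHAs, including QISHTS, would then minimize this function subject to the box constraints on theta.

```python
import numpy as np
from scipy.integrate import solve_ivp

t_meas = np.linspace(0.1, 1.0, 10)          # placeholder measurement times
z_meas = np.zeros((10, 2))                  # placeholder data; see ref. [4]

def sse(theta):
    """Sum of squared errors between the model response and the measurements."""
    rhs = lambda t, z: [-theta[0] * z[0], theta[0] * z[0] - theta[1] * z[1]]
    sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0], t_eval=t_meas, rtol=1e-8)
    return float(np.sum((sol.y.T - z_meas) ** 2))
```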
Table 7 displays the comparative results obtained by the QISHTS algorithm and the other competitors for this problem. It can be seen from the table that the QISHTS algorithm finds the best results in terms of the best, mean, std, and worst objective function values among all the compared algorithms, followed by QITLBO, NIWTLBO, ABC, and the other competitors. Figure 3a displays the convergence curve of the proposed algorithm; from Figure 3a and Table 7, it is evident that the QISHTS algorithm has the best robustness, precision, and stability. Figure 3b displays the fitted state trajectories together with the experimental points, showing good agreement between the two for this problem. Regarding the success rate, the QISHTS algorithm achieves a value of 100%, which means that it successfully finds trusted solutions in all runs. Considering the execution time, GOTLBO ranks first, followed by DEBBO and jDE, while QISHTS takes 17.26 s to obtain the corresponding results.

6.1.2. Catalytic Cracking of Gas Oil Problem

This problem describes the catalytic cracking of gas oil into gasoline and other miscellaneous products. It involves three components (A, Q, and S), representing the gas oil, gasoline, and other products, respectively. Only the concentrations of the first two components (A and Q) are available in this model, and the reaction scheme involves nonlinear reaction kinetics. The problem is formulated as follows:
$$
\begin{aligned}
\min_{\theta} \quad & \sum_{\mu=1}^{10} \sum_{i=1}^{2} \left(\hat{z}_{\mu,i} - \tilde{z}_{\mu,i}\right)^2 \\
\text{s.t.} \quad &
\begin{cases}
\dot{z}_1 = -\left(\theta_1 + \theta_3\right) z_1^2 \\
\dot{z}_2 = \theta_1 z_1^2 - \theta_2 z_2 \\
z_0 = (1, 0), \quad t \in [0, 0.95] \\
[0, 0, 0] \le \theta \le [20, 20, 20]
\end{cases}
\end{aligned}
\tag{12}
$$
where $\theta_1$, $\theta_2$, and $\theta_3$ are the rate constants of the respective reactions, and $z_1$ and $z_2$ represent the mole fractions of the first two components (A and Q), respectively. The measurement data for this problem were generated using the parameter values (12, 8, 2) with a small amount of random error added, and they are available in [4].
Table 8 displays the comparative results obtained by the QISHTS algorithm and the other competitors for this problem. Figure 4a displays the convergence curve of the proposed algorithm, and Figure 4b displays the fitted state trajectories together with the experimental points. From Figure 4a and Table 8, it can be seen that the QISHTS algorithm finds the best results in terms of the best, mean, std, and worst objective function values among all the compared algorithms, followed by QITLBO, TLBO, SaDE, and the other competitors, meaning that the QISHTS algorithm has the best robustness and precision. Moreover, Figure 4b shows good agreement between the fitted state trajectories and the experimental points for this problem. Regarding the success rate, the QISHTS algorithm achieves a value of 100%, which means that it successfully finds trusted solutions in all runs. As for the execution time, the DEBBO algorithm ranks first, followed by jDE, GOTLBO, and the other competitors, while the QISHTS algorithm takes 32.14 s to obtain these results.

6.2. Optimal Control Problems

In this subsection, the proposed QISHTS algorithm is applied to two optimal control problems, and the computational results are compared with those of twelve well-established methods. Because these problems contain time-dependent control variables $u(t)$, a discretization approach such as the control vector parametrization (CVP) method [11,12] is utilized to discretize them. To be consistent with the reference, the time interval is divided by the CVP method into 20 stages of equal length, and the control variables $u(t)$ in every sub-interval are approximated by piecewise linear functions.
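A small sketch of this piecewise-linear parametrization is given below; representing each control by 21 node values (one per stage boundary) is an assumption made for illustration, and the exact parametrization used in the cited references may differ.

```python
import numpy as np

N_STAGES = 20

def make_control(t0, tf, nodes):
    """Return u(t) as the linear interpolation of the node values over
    N_STAGES equal stages; the node values are the NLP decision variables."""
    t_nodes = np.linspace(t0, tf, N_STAGES + 1)
    return lambda t: np.interp(t, t_nodes, nodes)

# Example: a control bounded in [0, 5] on t in [0, 0.78], as in the CSTR problem
u = make_control(0.0, 0.78, np.random.default_rng(0).uniform(0.0, 5.0, N_STAGES + 1))
print(u(0.39))
```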

6.2.1. Multi-Modal Continuous Stirred Tank Reactor (CSTR) Problem

This model is a multimodal problem; it represents the optimal control of a first-order reversible chemical reaction carried out in a continuous stirred tank reactor (CSTR) [12]. The problem challenges optimization methods because it has two local optima, and gradient-based optimization methods usually fall into the inferior one. The mathematical formulation of this problem is as follows:
$$
\begin{aligned}
\min_{u} \quad & J = \int_0^{t_f} \left(x_1^2(t) + x_2^2(t) + 0.1\,u^2(t)\right) dt \\
\text{s.t.} \quad &
\begin{cases}
\dot{x}_1 = -(2 + u)\left(x_1 + 0.25\right) + \left(x_2 + 0.5\right)\exp\!\left(\dfrac{25 x_1}{x_1 + 2}\right) \\
\dot{x}_2 = 0.5 - x_2 - \left(x_2 + 0.5\right)\exp\!\left(\dfrac{25 x_1}{x_1 + 2}\right) \\
x(0) = [0.09, 0.09] \\
0 \le u(t) \le 5.0, \quad t_f = 0.78
\end{cases}
\end{aligned}
\tag{13}
$$
Table 9 displays the comparative results obtained by the QISHTS algorithm and the other competitors for this problem. It can be seen from the table that the BLPSO algorithm obtains the best results in terms of the best, mean, std, and worst objective function values among all the compared algorithms, while QISHTS obtains the second-best results, followed by QITLBO, LDWPSO, DEBBO, and the rest of the competitors. Figure 5a shows the convergence curve of the proposed algorithm, and Figure 5b displays the optimal control trajectory for this problem. It is evident from Figure 5 and Table 9 that, although the QISHTS algorithm performs worse than BLPSO, it is clearly superior to the other competitors. Regarding the success rate, the QISHTS algorithm achieves a value of 100%, which means that it reaches the acceptable accuracy in all runs. Considering the execution time, NIWTLBO ranks first, followed by BLPSO and the other algorithms, while the QISHTS algorithm spends 455.86 s to obtain these results.

6.2.2. Six-Plate Gas Absorption Tower Problem

This model involves the determination of two optimal control variables (u1 and u2) for a nonlinear six-plate gas absorption tower [74]. The mathematical formulation of this problem is given as follows:
$$
\begin{aligned}
\min \quad & J = \int_0^{t_f} \left(\sum_{i=1}^{6} x_i^2 + \sum_{i=1}^{2} u_i^2\right) dt \\
\text{s.t.} \quad &
\begin{cases}
\dot{x}_1 = \left\{-\left[40.8 + 66.7\left(\nu_1 + 0.08 x_1\right)\right] x_1 + 66.7\left(\nu_2 + 0.08 x_2\right) x_2 + 40.8 u_1\right\}/w_1 \\
\dot{x}_2 = \left\{40.8 x_1 - \left[40.8 + 66.7\left(\nu_2 + 0.08 x_2\right)\right] x_2 + 66.7\left(\nu_3 + 0.08 x_3\right) x_3\right\}/w_2 \\
\dot{x}_3 = \left\{40.8 x_2 - \left[40.8 + 66.7\left(\nu_3 + 0.08 x_3\right)\right] x_3 + 66.7\left(\nu_4 + 0.08 x_4\right) x_4\right\}/w_3 \\
\dot{x}_4 = \left\{40.8 x_3 - \left[40.8 + 66.7\left(\nu_4 + 0.08 x_4\right)\right] x_4 + 66.7\left(\nu_5 + 0.08 x_5\right) x_5\right\}/w_4 \\
\dot{x}_5 = \left\{40.8 x_4 - \left[40.8 + 66.7\left(\nu_5 + 0.08 x_5\right)\right] x_5 + 66.7\left(\nu_6 + 0.08 x_6\right) x_6\right\}/w_5 \\
\dot{x}_6 = \left\{40.8 x_5 - \left[40.8 + 66.7\left(\nu_6 + 0.08 x_6\right)\right] x_6 + 66.7\left(\nu_7 + 0.08 x_7\right) x_7\right\}/w_6 \\
w_i = \nu_i + 0.16 x_i + 75, \quad i = 1, \ldots, 6 \\
\nu = [0.7358, 0.7488, 0.7593, 0.7593, 0.7677, 0.7744, 0.7797, 0.7838]^T \\
x(0) = [0.0342, 0.0619, 0.0837, 0.1004, 0.1131, 0.1224]^T \\
0 \le u_1 \le 0.04, \quad 0 \le u_2 \le 0.09, \quad t_f = 5
\end{cases}
\end{aligned}
\tag{14}
$$
Table 10 displays the computational results obtained by the QISHTS algorithm and the other competitors for this problem. Figure 6a displays the convergence curve of the proposed algorithm, and Figure 6b shows the optimal control trajectories. From Figure 6a and Table 10, it can be seen that the QISHTS algorithm finds the best results in terms of the best, mean, std, and worst objective function values among all the compared algorithms, followed by jDE, BLPSO, QITLBO, and the other competitors, meaning that the QISHTS algorithm has the best robustness and precision. Regarding the success rate, the QISHTS algorithm achieves a value of 100%, which means that it successfully finds trusted solutions in all runs. As for the execution time, the jDE algorithm ranks first, followed by SaDE, NIWTLBO, and the rest of the competitors, while the QISHTS algorithm spends 558.96 s to obtain these results.

7. Conclusions

Many complex chemical engineering processes depend heavily on dynamic optimization. Because chemical engineering processes involve highly constrained, nonlinear, and nonsmooth environments, which usually cause nonconvexity, using efficient global optimization algorithms to reach appropriate solutions to DOPs is still a relatively difficult task. The heat transfer search (HTS) algorithm is a relatively novel metaheuristic inspired by the natural laws of thermodynamics and heat transfer. In order to handle DOPs, a new variant of the HTS algorithm, called the quadratic interpolation based simultaneous heat transfer search (QISHTS) algorithm, is presented in this paper. The QISHTS algorithm introduces three modifications into the original HTS algorithm, namely the idea of simultaneous heat transfer search (SHTS), the quadratic interpolation (QI) method, and a population regeneration mechanism. The effect of SHTS is employed to provide superior performance with lower computational complexity, the QI method is adopted to improve the exploitation capability, and the population regeneration mechanism is used to alleviate the probability of stagnation at local optima and hence enhance the exploration ability. Thus, the ensemble of these three modifications provides a more efficient optimization algorithm with well-balanced exploration and exploitation abilities.
To demonstrate the effectiveness of the QISHTS algorithm, it is tested on a set of 18 well-defined benchmark problems, and the results are compared with those of other well-established methods. Furthermore, it is applied to four real-world chemical DOPs of varying difficulty, and the computational results obtained by the QISHTS algorithm are compared with those of twelve well-established methods. The results demonstrate that the QISHTS algorithm has better robustness and precision than the other methods tested.

Author Contributions

Conceptualization, E.A.; methodology, E.A.; software, K.A.; validation, K.A.; formal analysis, E.A.; investigation, E.A.; resources, H.S.; data curation, E.A.; writing—original draft preparation, E.A.; writing—review and editing, E.A.; visualization, H.S.; supervision, H.S.; project administration, H.S.; and funding acquisition, H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Natural Science Foundation of China under grants 61703161 and 61673173 and the Fundamental Research Funds for the central universities under grant 222201714031.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Srinivasan, B.; Bonvin, D.; Visser, E.; Palanki, S. Dynamic optimization of batch processes: II. Role of measurements in handling uncertainty. Comput. Chem. Eng. 2003, 27, 27–44. [Google Scholar] [CrossRef]
  2. Irizarry, R. A generalized framework for solving dynamic optimization problems using the artificial chemical process paradigm: Applications to particulate processes and discrete dynamic systems. Chem. Eng. Sci. 2005, 60, 5663–5681. [Google Scholar] [CrossRef]
  3. Jiménez-Hornero, J.E.; Santos-Dueñas, I.M.; García-García, I. Optimization of biotechnological processes. The acetic acid fermentation. Part III: Dynamic optimization. Biochem. Eng. J. 2009, 45, 22–29. [Google Scholar]
  4. Floudas, C.A.; Pardalos, P.M.; Adjiman, C.; Esposito, W.R.; Gümüs, Z.H.; Harding, S.T.; Klepeis, J.L.; Meyer, C.A.; Schweiger, C.A. Handbook of Test Problems in Local and Global Optimization; Springer Science & Business Media: Berlin, Germany, 2013; Volume 33. [Google Scholar]
  5. Bellman, R. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 2010. [Google Scholar]
  6. Luus, R. Iterative Dynamic Programming; CRC Press: Boca Raton, FL, USA, 2010. [Google Scholar]
  7. Sundaralingam, R. Two-step method for dynamic optimization of inequality state constrained systems using iterative dynamic programming. Ind. Eng. Chem. Res. 2015, 54, 7658–7667. [Google Scholar] [CrossRef]
  8. Bryson, A.E. Applied Optimal Control: Optimization, Estimation and Control; Routledge: Abingdon, UK, 2018. [Google Scholar]
  9. Cervantes, A.; Biegler, L.T. Optimization strategies for dynamic systems. Encycl. Optim. 2009, 4, 216–227. [Google Scholar]
  10. Sarkar, D.; Modak, J.M. Optimization of fed-batch bioreactors using genetic algorithm: Multiple control variables. Comput. Chem. Eng. 2004, 28, 789–798. [Google Scholar] [CrossRef]
  11. Du, W.; Bao, C.; Chen, X.; Tian, L.; Jiang, D. Dynamic optimization of the tandem acetylene hydrogenation process. Ind. Eng. Chem. Res. 2016, 55, 11983–11995. [Google Scholar] [CrossRef]
  12. Chen, X.; Du, W.; Tianfield, H.; Qi, R.; He, W.; Qian, F. Dynamic optimization of industrial processes with nonuniform discretization-based control vector parameterization. IEEE Trans. Autom. Sci. Eng. 2014, 11, 1289–1299. [Google Scholar] [CrossRef]
  13. Canto, E.B.; Banga, J.R.; Alonso, A.A.; Vassiliadis, V.S. Restricted second order information for the solution of optimal control problems using control vector parameterization. J. Process Control 2002, 12, 243–255. [Google Scholar] [CrossRef]
  14. Nocedal, J.; Wright, S. Numerical Optimization; Springer Science & Business Media: Berlin, Germany, 2006. [Google Scholar]
  15. Angira, R.; Santosh, A. Optimization of dynamic systems: A trigonometric differential evolution approach. Comput. Chem. Eng. 2007, 31, 1055–1063. [Google Scholar] [CrossRef]
  16. Papamichail, I.; Adjiman, C.S. Global optimization of dynamic systems. Comput. Chem. Eng. 2004, 28, 403–415. [Google Scholar] [CrossRef]
  17. Esposito, W.R.; Floudas, C.A. Global optimization for the parameter estimation of differential-algebraic systems. Ind. Eng. Chem. Res. 2000, 39, 1291–1310. [Google Scholar] [CrossRef]
  18. BoussaïD, I.; Lepagnot, J.; Siarry, P. A survey on optimization metaheuristics. Inf. Sci. 2013, 237, 82–117. [Google Scholar] [CrossRef]
  19. Patel, N.; Padhiyar, N. Modified genetic algorithm using box complex method: Application to optimal control problems. J. Process Control 2015, 26, 35–50. [Google Scholar] [CrossRef]
  20. Qian, F.; Sun, F.; Zhong, W.; Luo, N. Dynamic optimization of chemical engineering problems using a control vector parameterization method with an iterative genetic algorithm. Eng. Optim. 2013, 45, 1129–1146. [Google Scholar] [CrossRef]
  21. Dai, K.; Wang, N. A hybrid DNA based genetic algorithm for parameter estimation of dynamic systems. Chem. Eng. Res. Des. 2012, 90, 2235–2246. [Google Scholar] [CrossRef]
  22. Babu, B.; Angira, R. Modified differential evolution (MDE) for optimization of non-linear chemical processes. Comput. Chem. Eng. 2006, 30, 989–1002. [Google Scholar] [CrossRef]
  23. Chen, X.; Du, W.; Qian, F. Multi-objective differential evolution with ranking-based mutation operator and its application in chemical process optimization. Chemom. Intell. Lab. Syst. 2014, 136, 85–96. [Google Scholar] [CrossRef]
  24. Penas, D.R.; Banga, J.R.; González, P.; Doallo, R. Enhanced parallel differential evolution algorithm for problems in computational systems biology. Appl. Soft Comput. 2015, 33, 86–99. [Google Scholar] [CrossRef] [Green Version]
  25. Chen, X.; Du, W.; Qian, F. Solving chemical dynamic optimization problems with ranking-based differential evolution algorithms. Chin. J. Chem. Eng. 2016, 24, 1600–1608. [Google Scholar] [CrossRef]
  26. Zhang, B.; Chen, D.; Zhao, W. Iterative ant-colony algorithm and its application to dynamic optimization of chemical process. Comput. Chem. Eng. 2005, 29, 2078–2086. [Google Scholar] [CrossRef]
  27. Schluter, M.; Egea, J.A.; Antelo, L.T.; Alonso, A.A.; Banga, J.R. An extended ant colony optimization algorithm for integrated process and control system design. Ind. Eng. Chem. Res. 2009, 48, 6723–6738. [Google Scholar] [CrossRef]
  28. Shelokar, P.; Jayaraman, V.K.; Kulkarni, B.D. Multicanonical jump walk annealing assisted by tabu for dynamic optimization of chemical engineering processes. Eur. J. Oper. Res. 2008, 185, 1213–1229. [Google Scholar] [CrossRef]
  29. Faber, R.; Jockenhövel, T.; Tsatsaronis, G. Dynamic optimization with simulated annealing. Comput. Chem. Eng. 2005, 29, 273–290. [Google Scholar] [CrossRef]
  30. Chen, X.; Du, W.; Qi, R.; Qian, F.; Tianfield, H. Hybrid gradient particle swarm optimization for dynamic optimization problems of chemical processes. Asia-Pac. J. Chem. Eng. 2013, 8, 708–720. [Google Scholar] [CrossRef]
  31. Zhou, Y.; Liu, X. Control parameterization-based adaptive particle swarm approach for solving chemical dynamic optimization problems. Chem. Eng. Technol. 2014, 37, 692–702. [Google Scholar] [CrossRef]
  32. Yang, J.; Lu, L.; Ouyang, W.; Gou, Y.; Chen, Y.; Ma, H.; Guo, J.; Fang, F. Estimation of kinetic parameters of an anaerobic digestion model using particle swarm optimization. Biochem. Eng. J. 2017, 120, 25–32. [Google Scholar] [CrossRef]
  33. Sun, J.; Palade, V.; Cai, Y.; Fang, W.; Wu, X. Biochemical systems identification by a random drift particle swarm optimization approach. BMC Bioinform. 2014, 15, S1. [Google Scholar] [CrossRef] [Green Version]
  34. Castellani, M.; Pham, Q.T.; Pham, D.T. Dynamic optimisation by a modified bees algorithm. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2012, 226, 956–971. [Google Scholar] [CrossRef]
  35. Egea, J.A.; Balsa-Canto, E.; García, M.-S.G.; Banga, J.R. Dynamic optimization of nonlinear processes with an enhanced scatter search method. Ind. Eng. Chem. Res. 2009, 48, 4388–4401. [Google Scholar] [CrossRef]
  36. Villaverde, A.F.; Egea, J.A.; Banga, J.R. A cooperative strategy for parameter estimation in large scale systems biology models. BMC Syst. Biol. 2012, 6, 75. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Penas, D.R.; González, P.; Egea, J.A.; Doallo, R.; Banga, J.R. Parameter estimation in large-scale systems biology models: A parallel and self-adaptive cooperative strategy. BMC Bioinform. 2017, 18, 52. [Google Scholar] [CrossRef] [PubMed]
  38. Sun, D.-Y.; Lin, P.-M.; Lin, S.-P. Integrating controlled random search into the line-up competition algorithm to solve unsteady operation problems. Ind. Eng. Chem. Res. 2008, 47, 8869–8887. [Google Scholar] [CrossRef]
  39. Rakhshani, H.; Dehghanian, E.; Rahati, A. Hierarchy cuckoo search algorithm for parameter estimation in biological systems. Chemom. Intell. Lab. Syst. 2016, 159, 97–107. [Google Scholar] [CrossRef]
  40. Nikumbh, S.; Ghosh, S.; Jayaraman, V.K. Biogeography-based optimization for dynamic optimization of chemical reactors. In Applications of Metaheuristics in Process Engineering; Springer: Berlin, Germany, 2014; pp. 201–216. [Google Scholar]
  41. Chen, X.; Mei, C.; Xu, B.; Yu, K.; Huang, X. Quadratic interpolation-based teaching-learning-based optimization for chemical dynamic system optimization. Knowl.-Based Syst. 2018, 145, 250–263. [Google Scholar] [CrossRef]
  42. Patel, V.K.; Savsani, V.J. Heat transfer search (hts): A novel optimization algorithm. Inf. Sci. 2015, 324, 217–246. [Google Scholar] [CrossRef]
  43. Degertekin, S.; Lamberti, L.; Hayalioglu, M. Heat transfer search algorithm for sizing optimization of truss structures. Lat. Am. J. Solids Struct. 2017, 14, 373–397. [Google Scholar] [CrossRef]
  44. Tejani, G.G.; Savsani, V.J.; Patel, V.K.; Mirjalili, S. An improved heat transfer search algorithm for unconstrained optimization problems. J. Comput. Des. Eng. 2019, 6, 13–32. [Google Scholar] [CrossRef]
  45. Tejani, G.; Savsani, V.; Patel, V. Modified sub-population based heat transfer search algorithm for structural optimization. Int. J. Appl. Metaheuristic Comput. (IJAMC) 2017, 8, 1–23. [Google Scholar] [CrossRef]
  46. Tejani, G.G.; Kumar, S.; Gandomi, A.H. Multi-objective heat transfer search algorithm for truss optimization. Eng. Comput. 2019, 1–22. [Google Scholar] [CrossRef]
  47. Maharana, D.; Kotecha, P. Simultaneous heat transfer search for computationally expensive numerical optimization. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 2982–2988. [Google Scholar]
  48. Maharana, D.; Kotecha, P. Simultaneous heat transfer search for single objective real-parameter numerical optimization problem. In Proceedings of the 2016 IEEE Region 10 Conference (TENCON), Singapore, Singapore, 22–25 November 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 2138–2141. [Google Scholar]
  49. Tawhid, M.A.; Savsani, V. ε-constraint heat transfer search (ε-HTS) algorithm for solving multi-objective engineering design problems. J. Comput. Des. Eng. 2018, 5, 104–119. [Google Scholar] [CrossRef]
  50. Ali, M.M.; Törn, A.; Viitanen, S. A numerical comparison of some modified controlled random search algorithms. J. Glob. Optim. 1997, 11, 377–385. [Google Scholar] [CrossRef]
  51. Li, H.; Jiao, Y.-C.; Zhang, L. Hybrid differential evolution with a simplified quadratic approximation for constrained optimization problems. Eng. Optim. 2011, 43, 115–134. [Google Scholar] [CrossRef]
  52. Deep, K.; Das, K.N. Quadratic approximation based hybrid genetic algorithm for function optimization. Appl. Math. Comput. 2008, 203, 86–98. [Google Scholar] [CrossRef]
  53. Yang, Y.; Zong, X.; Yao, D.; Li, S. Improved alopex-based evolutionary algorithm (AEA) by quadratic interpolation and its application to kinetic parameter estimations. Appl. Soft Comput. 2017, 51, 23–38. [Google Scholar] [CrossRef]
  54. Yang, X.-S.; Deb, S. Engineering optimisation by cuckoo search. arXiv 2010, arXiv:1005.2908. [Google Scholar] [CrossRef]
  55. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  56. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  57. Li, X.; Zhang, J.; Yin, M. Animal migration optimization: An optimization algorithm inspired by animal migration behavior. Neural Comput. Appl. 2014, 24, 1867–1877. [Google Scholar] [CrossRef]
  58. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 2008, 12, 64–79. [Google Scholar] [CrossRef] [Green Version]
  59. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417. [Google Scholar] [CrossRef]
  60. Maulik, U.; Saha, I. Automatic fuzzy clustering using modified differential evolution for image classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3503–3510. [Google Scholar] [CrossRef]
  61. Wang, Y.; Cai, Z.; Zhang, Q. Differential evolution with composite trial vector generation strategies and control parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66. [Google Scholar] [CrossRef]
  62. Islam, S.M.; Das, S.; Ghosh, S.; Roy, S.; Suganthan, P.N. An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2012, 42, 482–500. [Google Scholar] [CrossRef]
  63. Cai, Y.; Wang, J. Differential evolution with hybrid linkage crossover. Inf. Sci. 2015, 320, 244–287. [Google Scholar] [CrossRef]
  64. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No. 98TH8360), Anchorage, AK, USA, 4–9 May 1998; IEEE: Piscataway, NJ, USA, 1998; pp. 69–73. [Google Scholar]
  65. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  66. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657. [Google Scholar] [CrossRef]
  67. Gong, W.; Cai, Z.; Ling, C.X.; Li, H. A real-coded biogeography-based optimization with mutation. Appl. Math. Comput. 2010, 216, 2749–2758. [Google Scholar] [CrossRef]
  68. Gong, W.; Cai, Z.; Ling, C.X. DE/BBO: A hybrid differential evolution with biogeography-based optimization for global numerical optimization. Soft Comput. 2010, 15, 645–665. [Google Scholar] [CrossRef] [Green Version]
  69. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15. [Google Scholar] [CrossRef]
  70. Wu, Z.-S.; Fu, W.-P.; Xue, R. Nonlinear inertia weighted teaching-learning-based optimization for solving global optimization problem. Comput. Intell. Neurosci. 2015, 2015. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  71. Chen, X.; Tianfield, H.; Mei, C.; Du, W.; Liu, G. Biogeography-based learning particle swarm optimization. Soft Comput. 2017, 21, 7519–7541. [Google Scholar] [CrossRef]
  72. Chen, X.; Yu, K.; Du, W.; Zhao, W.; Liu, G. Parameters identification of solar cell models using generalized oppositional teaching learning based optimization. Energy 2016, 99, 170–180. [Google Scholar] [CrossRef]
  73. Lin, Y.; Stadtherr, M.A. Deterministic global optimization for parameter estimation of dynamic systems. Ind. Eng. Chem. Res. 2006, 45, 8438–8448. [Google Scholar] [CrossRef] [Green Version]
  74. Wang, F.-S.; Chiou, J.-P. Nonlinear optimal control and optimal parameter selection by a modified reduced gradient method. Eng. Optim. 1997, 28, 273–298. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the heat transfer search (HTS) algorithm. CDF: conduction factor, COF: convection factor, RDF: radiation factor, and Rn: random variable.
Figure 2. Flowchart of the quadratic interpolation based simultaneous heat transfer search (QISHTS) algorithm.
Figure 3. (a) Convergence graph of the QISHTS algorithm for the first-order reversible series reaction model. (b) Fitting state trajectories and experimental points for the first-order reversible series reaction model.
Figure 4. (a) Convergence graph of the QISHTS algorithm for the catalytic cracking of gas oil model. (b) Fitting state trajectories and experimental points for the catalytic cracking of gas oil model.
Figure 5. (a) Convergence graph of the QISHTS algorithm for the multi-modal CSTR model. (b) Optimal control trajectory for the multi-modal CSTR model. u denotes the optimal control variable of this problem.
Figure 6. (a) Convergence graph of the QISHTS algorithm for the six-plate gas absorption tower model. (b) Optimal control trajectories for the six-plate gas absorption tower model. u1 and u2 denote the optimal control variables of this problem.
Table 1. The traditional benchmark problems used in investigation.
Test Function | Dimension | Range | Optimum
$f_1=\sum_{i=1}^{n}x_i^2$ | 30 | [−100, 100] | 0
$f_2=\sum_{i=1}^{n}\lvert x_i\rvert+\prod_{i=1}^{n}\lvert x_i\rvert$ | 30 | [−10, 10] | 0
$f_3=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | 30 | [−100, 100] | 0
$f_4=\max_i\{\lvert x_i\rvert,\,1\le i\le n\}$ | 30 | [−100, 100] | 0
$f_5=\sum_{i=1}^{n}\left(\lfloor x_i+0.5\rfloor\right)^2$ | 30 | [−100, 100] | 0
$f_6=\sum_{i=1}^{n}ix_i^4+\mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0
$f_7=\sum_{i=1}^{n-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right]$ | 30 | [−30, 30] | 0
$f_8=\sum_{i=1}^{n}-x_i\sin\left(\sqrt{\lvert x_i\rvert}\right)$ | 30 | [−500, 500] | −418.9829 × 30
$f_9=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30 | [−5.12, 5.12] | 0
$f_{10}=-20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos 2\pi x_i\right)+20+e$ | 30 | [−32, 32] | 0
$f_{11}=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\tfrac{x_i}{\sqrt{i}}\right)+1$ | 30 | [−600, 600] | 0
$f_{12}=\left[\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right]^{-1}$ | 2 | [−65, 65] | 9.9800 × 10^−1
$f_{13}=\sum_{i=1}^{11}\left[a_i-\tfrac{x_1(b_i^2+b_ix_2)}{b_i^2+b_ix_3+x_4}\right]^2$ | 4 | [−5, 5] | 3.0000 × 10^−4
$f_{14}=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316
$f_{15}=\left(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\right)^2+10\left(1-\tfrac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 3.9800 × 10^−1
$f_{16}=\left[1+(x_1+x_2+1)^2\left(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\right)\right]\times\left[30+(2x_1-3x_2)^2\left(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\right)\right]$ | 2 | [−2, 2] | 3
$f_{17}=-\sum_{i=1}^{4}c_i\exp\left[-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right]$ | 3 | [1, 3] | −3.86
$f_{18}=-\sum_{i=1}^{4}c_i\exp\left[-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right]$ | 6 | [0, 1] | −3.322
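For readers who wish to re-run the benchmark study, the classical test functions in Table 1 are straightforward to implement. The sketch below codes a few of them in Python/NumPy purely as an illustration; the function names and the vectorized style are our own choices and are not taken from the original implementation.

```python
import numpy as np

def sphere(x):       # f1: optimum 0 at x = 0
    return np.sum(x ** 2)

def rastrigin(x):    # f9: highly multimodal, optimum 0 at x = 0
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def ackley(x):       # f10: optimum 0 at x = 0
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def griewank(x):     # f11: optimum 0 at x = 0
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

if __name__ == "__main__":
    x0 = np.zeros(30)
    # All four functions evaluate to (numerically) zero at the global optimum.
    print(sphere(x0), rastrigin(x0), ackley(x0), griewank(x0))
```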
Table 2. Computational results of f1–f6 for the QISHTS algorithm with other meta-heuristic algorithms (MHAs) in Investigation 1. (The results of the competitors are adapted from the literature [44]). CS: cuckoo search, GSA: gravitational search algorithm, AMO: animal migration optimization, ABC: artificial bee colony, HTS: heat transfer search, IHTS: improved heat transfer search, and QISHTS: quadratic interpolation based simultaneous heat transfer search.
Function | Metric | CS | GSA | ABC | AMO | HTS | IHTS | QISHTS
f1 | Mean | 5.6565 × 10^−6 | 3.3748 × 10^−18 | 2.9860 × 10^−20 | 8.6464 × 10^−40 | 0.0000 | 0.0000 | 0.0000
f1 | STD | 2.8611 × 10^−6 | 8.0862 × 10^−19 | 2.1455 × 10^−20 | 1.0435 × 10^−39 | 0.0000 | 0.0000 | 0.0000
f1 | Rank | 7 | 6 | 5 | 4 | 1 | 1 | 1
f2 | Mean | 2.0000 × 10^−3 | 8.9212 × 10^−9 | 1.4213 × 10^−15 | 8.2334 × 10^−32 | 0.0000 | 0.0000 | 0.0000
f2 | STD | 8.0959 × 10^−4 | 1.3340 × 10^−9 | 5.5340 × 10^−16 | 3.4120 × 10^−32 | 0.0000 | 0.0000 | 0.0000
f2 | Rank | 7 | 6 | 5 | 4 | 1 | 1 | 1
f3 | Mean | 1.1400 × 10^−3 | 1.1260 × 10^−1 | 2.4027 × 10^3 | 8.8904 × 10^−4 | 0.0000 | 2.5658 × 10^−36 | 0.0000
f3 | STD | 6.0987 × 10^−4 | 1.2660 × 10^−1 | 6.5696 × 10^2 | 8.7256 × 10^−4 | 0.0000 | 7.3407 × 10^−36 | 0.0000
f3 | Rank | 5 | 6 | 7 | 4 | 1 | 3 | 1
f4 | Mean | 3.2388 | 9.9302 × 10^−10 | 1.8523 × 10^1 | 2.8622 × 10^−5 | 7.8547 × 10^−43 | 2.3219 × 10^−33 | 0.0000
f4 | STD | 6.6440 × 10^−1 | 1.1899 × 10^−10 | 4.2477 | 2.3468 × 10^−5 | 2.1944 × 10^−42 | 4.9053 × 10^−33 | 0.0000
f4 | Rank | 6 | 4 | 7 | 5 | 2 | 3 | 1
f5 | Mean | 5.4332 × 10^−6 | 3.3385 × 10^−18 | 3.0884 × 10^−20 | 0.0000 | 1.3920 × 10^−1 | 1.3906 × 10^−7 | 0.0000
f5 | STD | 2.2446 × 10^−6 | 5.6830 × 10^−19 | 4.0131 × 10^−20 | 0.0000 | 4.5100 × 10^−1 | 2.7467 × 10^−7 | 0.0000
f5 | Rank | 6 | 4 | 3 | 1 | 7 | 5 | 1
f6 | Mean | 9.6000 × 10^−3 | 3.900 × 10^−3 | 3.2400 × 10^−2 | 1.7000 × 10^−3 | 1.7370 × 10^−3 | 1.5430 × 10^−3 | 1.0744 × 10^−4
f6 | STD | 2.8000 × 10^−3 | 1.3000 × 10^−3 | 5.9000 × 10^−3 | 4.7058 × 10^−4 | 7.5904 × 10^−4 | 6.3685 × 10^−4 | 8.9991 × 10^−5
f6 | Rank | 6 | 5 | 7 | 3 | 4 | 2 | 1
Mean Rank | | 6.16 | 5.16 | 5.66 | 3.50 | 2.66 | 2.50 | 1.00
Final Rank | | 7 | 5 | 6 | 4 | 3 | 2 | 1
Table 3. Computational results of f7 to f11 for the QISHTS algorithm with other MHAs in Investigation 1. (The results of the competitors are adapted from the literature [44]).
Function | Metric | CS | GSA | ABC | AMO | HTS | IHTS | QISHTS
f7 | Mean | 8.0092 | 2.0082 × 10^1 | 4.4100 × 10^−2 | 4.1817 | 2.6156 × 10^1 | 2.2876 × 10^1 | 2.0024 × 10^1
f7 | STD | 1.9188 | 1.7220 × 10^−1 | 7.0700 × 10^−2 | 2.1618 | 3.0311 | 4.3741 | 8.1310 × 10^−1
f7 | Rank | 3 | 5 | 1 | 2 | 7 | 6 | 4
f8 | Mean | −9.1492 × 10^3 | −3.0499 × 10^3 | −1.2507 × 10^4 | −1.2569 × 10^4 | −1.2569 × 10^4 | −1.2569 × 10^4 | −5.9959 × 10^4
f8 | STD | 2.5314 × 10^2 | 3.3886 × 10^2 | 6.1118 × 10^1 | 1.2384 × 10^−7 | 5.7799 × 10^−1 | 3.7130 × 10^−12 | 1.1102
f8 | Rank | 6 | 7 | 5 | 4 | 3 | 2 | 1
f9 | Mean | 5.1220 × 10^1 | 7.2831 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
f9 | STD | 8.1069 | 1.8991 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
f9 | Rank | 7 | 6 | 1 | 1 | 1 | 1 | 1
f10 | Mean | 2.3750 | 1.4717 × 10^−9 | 1.1946 × 10^−9 | 4.4409 × 10^−15 | 2.8741 × 10^−14 | 2.8599 × 10^−14 | 8.8818 × 10^−16
f10 | STD | 1.1238 | 1.4449 × 10^−10 | 5.0065 × 10^−10 | 0.0000 | 5.6806 × 10^−15 | 4.3511 × 10^−15 | 0.0000
f10 | Rank | 7 | 6 | 5 | 2 | 4 | 3 | 1
f11 | Mean | 4.4900 × 10^−5 | 1.265 × 10^−2 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
f11 | STD | 8.9551 × 10^−5 | 2.1600 × 10^−2 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
f11 | Rank | 6 | 7 | 1 | 1 | 1 | 1 | 1
Mean Rank | | 5.80 | 6.20 | 2.60 | 2.00 | 3.20 | 2.60 | 1.60
Final Rank | | 6 | 7 | 3 | 2 | 5 | 3 | 1
Table 4. Computational results of f12–f18 for the QISHTS algorithm with other MHAs in Investigation 1. (The results of the competitors are adapted from the literature [44]).
Function | Metric | CS | GSA | ABC | AMO | HTS | IHTS | QISHTS
f12 | Mean | 9.9810 × 10^−1 | 5.9533 | 9.9800 × 10^−1 | 9.9800 × 10^−1 | 9.9800 × 10^−1 | 9.9800 × 10^−1 | 9.9800 × 10^−1
f12 | STD | 4.8277 × 10^−4 | 3.4819 | 3.7921 × 10^−16 | 3.3858 × 10^−12 | 3.3993 × 10^−16 | 3.3993 × 10^−16 | 3.3993 × 10^−16
f12 | Rank | 6 | 7 | 4 | 5 | 1 | 1 | 1
f13 | Mean | 5.0310 × 10^−4 | 4.8000 × 10^−3 | 7.4715 × 10^−4 | 3.9738 × 10^−4 | 5.3397 × 10^−4 | 3.4416 × 10^−4 | 3.1988 × 10^−4
f13 | STD | 1.1180 × 10^−4 | 3.3000 × 10^−3 | 2.1481 × 10^−4 | 4.4503 × 10^−5 | 3.9577 × 10^−4 | 1.8313 × 10^−4 | 3.0497 × 10^−5
f13 | Rank | 4 | 7 | 6 | 3 | 5 | 2 | 1
f14 | Mean | −1.03163 | −1.03163 | −1.0316 | −1.0316 | −1.03163 | −1.03163 | −1.03163
f14 | STD | 1.5821 × 10^−7 | 4.7536 × 10^−16 | 1.1269 × 10^−14 | 5.2006 × 10^−11 | 4.5325 × 10^−16 | 4.5325 × 10^−16 | 6.5323 × 10^−8
f14 | Rank | 7 | 3 | 4 | 5 | 1 | 1 | 6
f15 | Mean | 3.9790 × 10^−1 | 3.9790 × 10^−1 | 3.9790 × 10^−1 | 3.9790 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9790 × 10^−1
f15 | STD | 3.2449 × 10^−6 | 0.0000 | 5.3819 × 10^−8 | 0.0000 | 5.6656 × 10^−17 | 5.6656 × 10^−17 | 5.5872 × 10^−17
f15 | Rank | 7 | 1 | 6 | 1 | 4 | 4 | 3
f16 | Mean | 3.0013 | 3.7403 | 3.0000 | 3.0018 | 3.0000 | 3.0000 | 3.0000
f16 | STD | 2.6000 × 10^−3 | 1.6055 | 2.6164 × 10^−5 | 5.5000 × 10^−3 | 0.0000 | 0.0000 | 3.7456 × 10^−4
f16 | Rank | 5 | 7 | 3 | 6 | 1 | 1 | 4
f17 | Mean | −3.8628 | −3.8625 | −3.8628 | −3.8628 | −3.8628 | −3.8628 | −3.8628
f17 | STD | 1.4043 × 10^−5 | 3.8767 × 10^−4 | 1.3654 × 10^−10 | 1.3669 × 10^−15 | 9.0649 × 10^−16 | 9.0649 × 10^−16 | 1.6997 × 10^−16
f17 | Rank | 6 | 7 | 5 | 4 | 2 | 2 | 1
f18 | Mean | −3.321 | −3.3220 | −3.3220 | −3.3220 | −3.2837 | −3.2935 | −3.3228
f18 | STD | 7.5186 × 10^−4 | 4.7967 × 10^−16 | 8.5733 × 10^−10 | 5.085 × 10^−6 | 5.6553 × 10^−2 | 5.1824 × 10^−2 | 4.8300 × 10^−2
f18 | Rank | 4 | 1 | 2 | 3 | 7 | 6 | 5
Mean Rank | | 5.57 | 4.71 | 4.28 | 3.85 | 3.00 | 2.42 | 3.00
Final Rank | | 7 | 6 | 5 | 4 | 2 | 1 | 2
Table 5. Computational results for the QISHTS algorithm with the state-of-the-art differential evolutions (DEs) in Investigation 2. (The results of the competitors are adapted from the literature [63]). FEs: function evaluations, ODE: opposition-based differential evolution, SaDE: self-adaptive differential evolution, MoDE: modified differential evolution, CoDE: composite differential evolution, MDE-pBX: modified differential evolution with p-best crossover, and QISHTS: quadratic interpolation based simultaneous heat transfer search.
F | FEs | Metric | ODE | SaDE | MoDE | CoDE | MDE-pBX | QISHTS
f1 | 150,000 | Mean | 2.24 × 10^−27 | 1.62 × 10^−48 | 6.51 × 10^−4 | 2.28 × 10^−7 | 1.24 × 10^−3 | 0.00
f1 | 150,000 | STD | 2.72 × 10^−27 | 4.46 × 10^−48 | 2.81 × 10^−3 | 8.29 × 10^−8 | 4.83 × 10^−3 | 0.00
f1 | 150,000 | Rank | 3 | 2 | 5 | 4 | 6 | 1
f2 | 150,000 | Mean | 1.52 × 10^−11 | 6.11 × 10^−45 | 2.89 × 10^3 | 1.16 × 10^−7 | 4.17 × 10^−8 | 0.00
f2 | 150,000 | STD | 1.06 × 10^−11 | 1.54 × 10^−44 | 1.23 × 10^−2 | 3.79 × 10^−8 | 1.85 × 10^−7 | 0.00
f2 | 150,000 | Rank | 3 | 2 | 6 | 5 | 4 | 1
f5 | 150,000 | Mean | 2.32 × 10^4 | 1.00 × 10^4 | 6.05 × 10^2 | 6.25 × 10^4 | 8.12 × 10^3 | 0.00
f5 | 150,000 | STD | 1.76 × 10^3 | 1.65 × 10^3 | 5.43 × 10^2 | 1.91 × 10^3 | 1.18 × 10^4 | 0.00
f5 | 150,000 | Rank | 5 | 4 | 2 | 6 | 3 | 1
f7 | 500,000 | Mean | 2.52 × 10^1 | 1.28 × 10^1 | 4.51 | 3.33 | 4.70 × 10^1 | 0.00
f7 | 500,000 | STD | 1.1000 | 5.86 | 5.30 | 1.96 | 3.15 × 10^1 | 0.00
f7 | 500,000 | Rank | 5 | 4 | 3 | 2 | 6 | 1
f8 | 300,000 | Mean | 6.99 × 10^3 | 3.55 × 10^1 | 4.14 × 10^3 | 1.21 × 10^−12 | 1.22 × 10^3 | 3.74 × 10^−4
f8 | 300,000 | STD | 3.08 × 10^2 | 6.34 × 10^1 | 6.51 × 10^2 | 8.72 × 10^−13 | 4.34 × 10^2 | 0.00
f8 | 300,000 | Rank | 6 | 3 | 5 | 1 | 4 | 2
f9 | 300,000 | Mean | 3.60 × 10^1 | 1.43 | 6.01 × 10^1 | 1.02 × 10^−11 | 7.80 | 0.00
f9 | 300,000 | STD | 1.91 × 10^1 | 1.13 | 1.41 × 10^1 | 1.73 × 10^−11 | 2.25 | 0.00
f9 | 300,000 | Rank | 5 | 3 | 6 | 2 | 4 | 1
f10 | 150,000 | Mean | 3.53 × 10^−14 | 4.02 × 10^−15 | 7.70 | 1.31 × 10^−4 | 1.77 × 10^−2 | 8.88 × 10^−16
f10 | 150,000 | STD | 2.04 × 10^−14 | 6.49 × 10^−16 | 1.82 | 3.74 × 10^−5 | 9.30 × 10^−2 | 0.00
f10 | 150,000 | Rank | 3 | 2 | 6 | 4 | 5 | 1
f11 | 200,000 | Mean | 2.47 × 10^−4 | 2.79 × 10^−3 | 2.49 × 10^−1 | 4.19 × 10^−9 | 4.92 × 10^−3 | 0.00
f11 | 200,000 | STD | 1.35 × 10^−3 | 6.61 × 10^−3 | 2.33 × 10^−1 | 3.72 × 10^−9 | 1.24 × 10^−2 | 0.00
f11 | 200,000 | Rank | 3 | 4 | 6 | 2 | 5 | 1
Mean Rank | | | 4.12 | 3.00 | 4.87 | 3.25 | 4.62 | 1.12
Final Rank | | | 4 | 2 | 6 | 3 | 5 | 1
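The Mean Rank and Final Rank rows of Tables 2–5 follow the usual aggregation: each algorithm is ranked per function, the per-function ranks are averaged, and the averages are ranked again, with tied averages sharing the better rank. A minimal sketch of this bookkeeping is given below; the rank matrix is illustrative placeholder data, not values taken from the tables.

```python
import numpy as np
from scipy.stats import rankdata

# Illustrative per-function rank matrix (rows: test functions, columns: algorithms).
# The entries are placeholders, not the ranks reported in Tables 2-5.
ranks = np.array([
    [3, 2, 5, 4, 6, 1],
    [3, 2, 6, 5, 4, 1],
    [5, 4, 2, 6, 3, 1],
])

mean_rank = ranks.mean(axis=0)                  # "Mean Rank" row
final_rank = rankdata(mean_rank, method="min")  # "Final Rank" row; ties share the better rank
print(mean_rank)
print(final_rank)
```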
Table 6. Parameter setting values for recording the performance criterion. maxFEs: maximum number of function evaluations, Ap: acceptable precision, PDNFES: predefined number of function evaluations, PV: population flip probability factor, and CSTR: continuous stirred tank reactor.
Dynamic Optimization Problem | maxFEs | Ap | PDNFES | PV
First-order reversible series reaction problem | 1000 | 0.000002 | 10 | 0.1
Catalytic cracking of gas oil problem | 1000 | 0.0030 | 10 | 0.1
Multi-modal CSTR problem | 10,000 | 0.1400 | 100 | 0.3
Six-plate gas absorption tower problem | 10,000 | 0.1125 | 100 | 0.3
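Table 6 lists the acceptable precision (Ap) used to decide whether a run counts as successful. Assuming the usual convention that a minimization run succeeds when its final objective value does not exceed Ap, the success rate (SR) reported in Tables 7–10 could be computed as sketched below; the run values and the helper name success_rate are hypothetical and only illustrate the bookkeeping.

```python
import numpy as np

def success_rate(final_values, ap):
    # Percentage of independent runs whose final objective value is within
    # the acceptable precision Ap (one plausible reading of the criterion).
    final_values = np.asarray(final_values, dtype=float)
    return 100.0 * np.mean(final_values <= ap)

# Hypothetical final objective values of 30 runs on the gas-oil problem.
rng = np.random.default_rng(0)
runs = rng.normal(loc=2.656e-3, scale=2.0e-5, size=30)
print(f"SR = {success_rate(runs, ap=0.0030):.0f}%")
```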
Table 7. Results of the compared algorithms for the first-order reversible series reaction model. Best: the best objective function value, Mean: the mean objective function value, STD: the standard deviation, Worst: the worst objective function value, SR: success rate, and T(S): time (second).
Method | Best | Mean | STD | Worst | SR | T(S) | Rank
LDWPSO | 1.18620 × 10^−6 | 1.25679 × 10^−6 | 8.30596 × 10^−8 | 1.47490 × 10^−6 | 100% | 18.03 | 11
CLPSO | 3.73979 × 10^−6 | 1.98549 × 10^−4 | 1.54809 × 10^−4 | 5.19505 × 10^−4 | 0.00% | 17.97 | 12
jDE | 1.18585 × 10^−6 | 1.18586 × 10^−6 | 1.43989 × 10^−11 | 1.18589 × 10^−6 | 100% | 15.60 | 5
ABC | 1.18584 × 10^−6 | 1.18585 × 10^−6 | 2.52200 × 10^−11 | 1.18597 × 10^−6 | 100% | 17.52 | 4
SaDE | 1.18585 × 10^−6 | 1.18643 × 10^−6 | 1.11590 × 10^−9 | 1.19024 × 10^−6 | 100% | 17.90 | 8
RCBBO | 9.85416 × 10^−6 | 1.68216 × 10^−3 | 2.87773 × 10^−3 | 9.61824 × 10^−3 | 0.00% | 18.23 | 13
DEBBO | 1.18585 × 10^−6 | 1.18664 × 10^−6 | 1.55322 × 10^−9 | 1.19254 × 10^−6 | 100% | 15.46 | 10
TLBO | 1.18585 × 10^−6 | 1.18598 × 10^−6 | 2.84734 × 10^−10 | 1.18703 × 10^−6 | 100% | 15.62 | 6
NIWTLBO | 1.18584 × 10^−6 | 1.18585 × 10^−6 | 5.14840 × 10^−11 | 1.18613 × 10^−6 | 100% | 18.92 | 3
BLPSO | 1.18585 × 10^−6 | 1.18650 × 10^−6 | 1.13912 × 10^−9 | 1.19090 × 10^−6 | 100% | 18.06 | 9
GOTLBO | 1.18585 × 10^−6 | 1.18611 × 10^−6 | 5.20662 × 10^−10 | 1.18805 × 10^−6 | 100% | 15.29 | 7
QITLBO | 1.18584 × 10^−6 | 1.18585 × 10^−6 | 2.06093 × 10^−12 | 1.18586 × 10^−6 | 100% | 17.48 | 2
QISHTS | 1.18584 × 10^−6 | 1.18584 × 10^−6 | 0.00000 | 1.18585 × 10^−6 | 100% | 17.26 | 1
Table 8. Results of the compared algorithms for the catalytic cracking of gas oil model.
Method | Best | Mean | STD | Worst | SR | T(S) | Rank
LDWPSO | 2.65569 × 10^−3 | 2.65663 × 10^−3 | 1.15296 × 10^−6 | 2.66061 × 10^−3 | 100% | 32.10 | 6
CLPSO | 2.71167 × 10^−3 | 3.54925 × 10^−3 | 7.98382 × 10^−4 | 6.06821 × 10^−3 | 37% | 35.17 | 13
jDE | 2.65567 × 10^−3 | 2.65702 × 10^−3 | 2.73064 × 10^−6 | 2.66749 × 10^−3 | 100% | 30.18 | 7
ABC | 2.65674 × 10^−3 | 3.13821 × 10^−3 | 1.33151 × 10^−3 | 8.51756 × 10^−3 | 83% | 32.53 | 11
SaDE | 2.65567 × 10^−3 | 2.65587 × 10^−3 | 2.91411 × 10^−7 | 2.65698 × 10^−3 | 100% | 32.41 | 4
RCBBO | 2.68559 × 10^−3 | 3.47463 × 10^−3 | 1.00151 × 10^−3 | 6.97048 × 10^−3 | 53% | 32.35 | 12
DEBBO | 2.65582 × 10^−3 | 2.78029 × 10^−3 | 4.26505 × 10^−4 | 4.85929 × 10^−3 | 90% | 29.82 | 10
TLBO | 2.65567 × 10^−3 | 2.65578 × 10^−3 | 1.32328 × 10^−7 | 2.65626 × 10^−3 | 100% | 30.59 | 3
NIWTLBO | 2.65567 × 10^−3 | 2.73768 × 10^−3 | 4.49100 × 10^−4 | 5.11551 × 10^−3 | 97% | 36.61 | 9
BLPSO | 2.65568 × 10^−3 | 2.65901 × 10^−3 | 7.55110 × 10^−6 | 2.68741 × 10^−3 | 100% | 34.17 | 8
GOTLBO | 2.65568 × 10^−3 | 2.65649 × 10^−3 | 1.54447 × 10^−6 | 2.66225 × 10^−3 | 100% | 30.22 | 5
QITLBO | 2.65567 × 10^−3 | 2.65570 × 10^−3 | 8.30733 × 10^−8 | 2.65607 × 10^−3 | 100% | 34.31 | 2
QISHTS | 2.65567 × 10^−3 | 2.65568 × 10^−3 | 1.81904 × 10^−9 | 2.65599 × 10^−3 | 100% | 32.14 | 1
Table 9. Results of the compared algorithms for the multi-modal CSTR model.
Method | Best | Mean | STD | Worst | SR | T(S) | Rank
LDWPSO | 1.3311 × 10^−1 | 1.3317 × 10^−1 | 3.89 × 10^−5 | 1.3329 × 10^−1 | 100% | 458.99 | 4
CLPSO | 1.4920 × 10^−1 | 1.6569 × 10^−1 | 8.35 × 10^−3 | 1.7957 × 10^−1 | 0.00% | 478.32 | 12
jDE | 1.3315 × 10^−1 | 1.3327 × 10^−1 | 6.96 × 10^−5 | 1.3341 × 10^−1 | 100% | 456.75 | 6
ABC | 1.3968 × 10^−1 | 1.4788 × 10^−1 | 4.51 × 10^−3 | 1.5592 × 10^−1 | 3% | 464.76 | 11
SaDE | 1.3315 × 10^−1 | 1.3338 × 10^−1 | 2.32 × 10^−4 | 1.3421 × 10^−1 | 100% | 458.89 | 7
RCBBO | 1.3346 × 10^−1 | 1.3550 × 10^−1 | 2.07 × 10^−3 | 1.4314 × 10^−1 | 97% | 463.42 | 9
DEBBO | 1.3317 × 10^−1 | 1.3324 × 10^−1 | 4.37 × 10^−5 | 1.3333 × 10^−1 | 100% | 457.52 | 5
TLBO | 1.3311 × 10^−1 | 1.3700 × 10^−1 | 2.03 × 10^−2 | 2.4445 × 10^−1 | 97% | 466.63 | 10
NIWTLBO | 1.3383 × 10^−1 | 2.3744 × 10^−1 | 2.75 × 10^−2 | 2.4515 × 10^−1 | 7% | 419.09 | 13
BLPSO | 1.3310 × 10^−1 | 1.3311 × 10^−1 | 1.03 × 10^−5 | 1.3314 × 10^−1 | 100% | 448.59 | 1
GOTLBO | 1.3311 × 10^−1 | 1.3392 × 10^−1 | 4.30 × 10^−3 | 1.5669 × 10^−1 | 97% | 460.76 | 8
QITLBO | 1.3311 × 10^−1 | 1.3314 × 10^−1 | 2.39 × 10^−5 | 1.3320 × 10^−1 | 100% | 460.91 | 3
QISHTS | 1.3310 × 10^−1 | 1.3312 × 10^−1 | 1.98 × 10^−5 | 1.3318 × 10^−1 | 100% | 455.86 | 2
Table 10. Results of the compared algorithms for the six-plate gas absorption tower model.
Method | Best | Mean | STD | Worst | SR | T(S) | Rank
LDWPSO | 1.1244 × 10^−1 | 1.1245 × 10^−1 | 6.86 × 10^−6 | 1.1246 × 10^−1 | 100% | 552.45 | 6
CLPSO | 1.1321 × 10^−1 | 1.1390 × 10^−1 | 2.07 × 10^−4 | 1.1420 × 10^−1 | 0.00% | 554.22 | 13
jDE | 1.1243 × 10^−1 | 1.1243 × 10^−1 | 1.14 × 10^−6 | 1.1244 × 10^−1 | 100% | 548.63 | 2
ABC | 1.1286 × 10^−1 | 1.1326 × 10^−1 | 1.85 × 10^−4 | 1.1357 × 10^−1 | 0.00% | 588.13 | 12
SaDE | 1.1244 × 10^−1 | 1.1245 × 10^−1 | 5.95 × 10^−6 | 1.1247 × 10^−1 | 100% | 549.59 | 6
RCBBO | 1.1244 × 10^−1 | 1.1247 × 10^−1 | 1.55 × 10^−5 | 1.1251 × 10^−1 | 93% | 584.34 | 8
DEBBO | 1.1243 × 10^−1 | 1.1244 × 10^−1 | 1.17 × 10^−6 | 1.1244 × 10^−1 | 100% | 583.57 | 4
TLBO | 1.1244 × 10^−1 | 1.1248 × 10^−1 | 1.97 × 10^−5 | 1.1253 × 10^−1 | 87% | 581.01 | 10
NIWTLBO | 1.1257 × 10^−1 | 1.1280 × 10^−1 | 1.68 × 10^−4 | 1.1333 × 10^−1 | 0.00% | 549.69 | 11
BLPSO | 1.1243 × 10^−1 | 1.1243 × 10^−1 | 2.96 × 10^−8 | 1.1243 × 10^−1 | 100% | 618.02 | 2
GOTLBO | 1.1244 × 10^−1 | 1.1247 × 10^−1 | 2.74 × 10^−5 | 1.1254 × 10^−1 | 90% | 582.16 | 8
QITLBO | 1.1243 × 10^−1 | 1.1244 × 10^−1 | 5.14 × 10^−6 | 1.1246 × 10^−1 | 100% | 561.71 | 4
QISHTS | 1.1243 × 10^−1 | 1.1243 × 10^−1 | 2.45 × 10^−8 | 1.1243 × 10^−1 | 100% | 558.96 | 1
