Article

Chaotic Mountain Gazelle Optimizer Improved by Multiple Oppositional-Based Learning Variants for Theoretical Thermal Design Optimization of Heat Exchangers Using Nanofluids

by
Oguz Emrah Turgut
1,
Mustafa Asker
2,*,
Hayrullah Bilgeran Yesiloz
3,
Hadi Genceli
4 and
Mohammad AL-Rawi
5,*
1
Department of Industrial Engineering, Faculty of Engineering and Architecture, Izmir Bakircay University, Menemen, İzmir 35665, Türkiye
2
Department of Mechanical Engineering, Faculty of Engineering, Aydın Adnan Menderes University, Efeler, Aydın 09010, Türkiye
3
Graduate School of Natural and Applied Sciences, Aydın Adnan Menderes University, Efeler, Aydın 09010, Türkiye
4
Faculty of Mechanical Engineering, Yıldız Technical University, Istanbul 34349, Türkiye
5
School of Computing, Mathematics & Engineering, Charles Sturt University, Bathurst, NSW 2795, Australia
*
Authors to whom correspondence should be addressed.
Biomimetics 2025, 10(7), 454; https://doi.org/10.3390/biomimetics10070454
Submission received: 23 May 2025 / Revised: 4 July 2025 / Accepted: 5 July 2025 / Published: 10 July 2025

Abstract

This theoretical research study proposes a novel hybrid algorithm that integrates an improved quasi-dynamic oppositional learning mutation scheme into the Mountain Gazelle Optimization method, augmented with chaotic sequences, for the thermo-economic design of a shell-and-tube heat exchanger operating with nanofluids. The Mountain Gazelle Optimizer is a recently developed metaheuristic algorithm that simulates the foraging behaviors of mountain gazelles. However, it suffers from premature convergence due to an imbalance between its exploration and exploitation mechanisms. A two-step improvement procedure is implemented to enhance the overall search efficiency of the original algorithm. The first step substitutes uniformly distributed random numbers with chaotic numbers to refine solution quality. The second step develops a novel manipulation equation that integrates different variants of quasi-dynamic oppositional learning search schemes, guided by an intelligently devised adaptive switch mechanism. The efficiency of the proposed algorithm is evaluated on challenging benchmark functions from various CEC competitions. Finally, the thermo-economic design of a shell-and-tube heat exchanger operated with different nanoparticles is solved by the proposed improved metaheuristic algorithm to obtain the optimal design configuration. The predictive results indicate that using water + SiO2 instead of ordinary water as the refrigerant on the tube side of the heat exchanger reduces the total cost by 16.3%, offering the most cost-effective design among the configurations compared. These findings demonstrate how biologically inspired metaheuristic algorithms can be successfully applied to engineering design problems.

Graphical Abstract

1. Introduction

There has been an ongoing and relentless pursuit of finding efficient and adaptive problem-solving strategies in engineering design problems. Although many valuable efforts have been made to overcome this challenging issue, nature remains the leading option and continues to inspire research in this area. Biomimetics is an exemplary interdisciplinary field in this realm, simulating models, systems, and elements of nature to solve complex problems and has significantly contributed to the development of state-of-the-art algorithms, materials, and architectural design [1,2]. Biomimetics is based on the fundamental scientific principle that the evolution of individuals over millions of years has fine-tuned biological systems to perform advanced tasks under severe constraints. This accumulated evolutionary knowledge is utilized as an innovative guide for designing artificial systems that can exhibit similar adaptability and efficiency [3].
Biological inspiration from different natural sources has led to the rise of metaheuristic algorithms, a class of high-level, nature-inspired, problem-independent solvers capable of navigating highly complex, multidimensional search spaces. Metaheuristic algorithms are inspired by natural processes and derive their primary structures from biological evolution [4], food search [5], and social behaviors [6] of animals, as well as biochemical interactions [7]. There is a profound and reciprocal synergy between biomimetics and metaheuristics, promoting the development of each scientific topic in a mutually supportive way. The components of biomimetics provide a conceptual framework for developing intelligently devised metaheuristic algorithms that can mimic sophisticated biological phenomena such as swarm intelligence, tool use, or camouflage. On the other hand, metaheuristic algorithms serve as a complementary tool for optimization purposes, since they are capable of solving challenging problems in biomimetic-based design, such as the aerodynamics of UAV design inspired by the flapping of bird wings [8], the self-cleaning of wetted surfaces through lotus leaves [9], or the thermoregulation of systems inspired by termite mounds [10]. This mutual collaboration has led to the emergence of a novel research area, focusing on the development of nature-inspired new generation algorithms that not only simulate the activities of living creatures but also benefit from the intrinsic problem-solving strategies encountered in daily life.
Metaheuristic algorithms are traditionally classified into two subgroups: classical and modern algorithms. According to this classification, algorithms developed before the 1990s are considered classical, while those created afterward are referred to as modern optimization methods. Modern metaheuristic algorithms are generally classified by the source of inspiration from which they originate. They can be categorized into four main sub-branches: Evolutionary Algorithms [4], Swarm Optimization Algorithms [6], Physics-Based Algorithms [11], and Human-Based Algorithms [12]. Each algorithm available in the literature can be used to solve a diverse range of real-world optimization problems, each with its advantages and disadvantages. The No Free Lunch (NFL) theorem [13] posits that no single optimization algorithm can efficiently solve all optimization problems. Therefore, researchers focus on finding alternative methods to overcome the challenges posed by NFL theory, which necessitates the development of a robust optimization framework capable of yielding promising results for the defined optimization problem. One alternative for enhancing the search capacity of any algorithm is to hybridize it with another metaheuristic that possesses complementary characteristics to offset the deficiencies of the base algorithm [14]. Integrating chaos into the original metaheuristic optimizer by replacing uniformly distributed random numbers with sequential chaotic numbers is another method to improve the search effectiveness of the algorithm. The straightforward implementation of chaotic numbers in any metaheuristic algorithm significantly broadens their applicability, supported by their non-repetitive and ergodic numerical properties, which enable the algorithm to perform iterative random searches at greater speeds [15].
The basic principles of Opposition-Based Learning concepts and their various algorithmic variants have been effectively incorporated into different metaheuristic algorithms to date, thanks to the efficiency of the solutions generated by using opposite numbers, which encourages the algorithm to identify candidate solutions very close to the global optimum point [16].
One of the main concerns of this research study is to examine the enhanced thermo-economic performance optimization of shell and tube heat exchangers using various types of nanofluids through the proposed novel hybrid optimizer. The literature is gradually improving with an increasing number of experimental and theoretical studies on nanofluid-integrated refrigerants that drive the heat transfer mechanism of shell-and-tube heat exchangers. Researchers have explored the potential applications of various types of nanofluids in operating streams. They utilized single- or multi-objective design optimization to evaluate the configurations and identify the most suitable design parameters, including the volumetric ratio of suspended nanoparticles [17,18,19]. This research study proposes an alternative solution strategy to address the rigid assumptions of the NFL theorem by concurrently integrating a novel mutation scheme based on different variants of Opposition-Based Learning and pseudorandom-chaotic numbers generated from various chaotic maps into the recently developed metaheuristic, the Mountain Gazelle Optimizer (MGO). This is a new optimization method developed by Abdollahzadeh et al. [20] that simulates the social life of mountain gazelles and establishes a characteristic hierarchy among them. Despite its relatively recent development, this algorithm has been applied to several engineering design problems [21,22,23]. Although this optimizer has many algorithmic advantages, such as balanced search strategies, rapid convergence to optima in early iterations due to aggressive exploration, and low parameter sensitivity, the specific shortcomings of this algorithm must be meticulously addressed, including the lack of an adaptive search mechanism to guide the transition between exploration and exploitation, leading to premature convergence at local points in high-dimensional optimization problems. 
To alleviate the algorithmic drawbacks of this algorithm, an intelligently devised hybrid method is proposed and integrated into the standard Mountain Gazelle Optimizer. The first step in hybridization involves a comprehensive population initialization strategy that integrates Latin Hypercube Sampling [24] with chaotic number-based randomization and the principles of Opposition-Based Learning [25] to generate trial candidate solutions. The second step in algorithm development focuses on evaluating the optimization performance of various chaotic variants of the MGO algorithm to identify the best-performing chaotic method among the twenty-one chaos-induced MGO optimizers. Forty-eight artificially generated multidimensional optimization benchmark problems, comprising twenty-four unimodal and twenty-four multimodal test cases, have been utilized to determine which chaotic MGO optimizer among the competing chaotic algorithm alternatives produces the most accurate predictions. The third procedural step of algorithm development primarily involves creating a novel mutation scheme, based on the valuable contributions of two variants of the Quasi-Dynamic Opposition-Based algorithm [26], guided by an intelligently designed adaptive switch mechanism. Chaotic numbers generated from the best-performing chaotic map and the proposed novel mutation scheme have been integrated into the original MGO algorithm to enhance its overall optimization capability in terms of solution accuracy and robustness. Complex benchmark instances taken from the suites of CEC 2013 and CEC 2014 test problems with varying dimensionalities are solved using the proposed method, which is then further evaluated on the artificially generated test problems used in the CEC 2006 competition.
Finally, a real-world benchmark case related to the thermo-economic design of a shell-and-tube heat exchanger operating with different nanofluids will be simulated, and optimal decision parameters that minimize the total cost of the heat exchanger while satisfying operational constraints will be determined. This case involves numerous restrictive design constraints and includes integer and continuous decision parameters that must be meticulously optimized. This research study aims to introduce four significant novelties to the existing literature, which the following terms can briefly convey.
  • Proposing a novel framework for initial population generation that integrates chaotic Latin Hypercube Sampling with the foundational principles of Opposition-Based Learning.
  • Evaluating the optimization efficiency of twenty-one different chaotic Mountain Gazelle Algorithms and determining which chaotic method produces the most accurate predictions.
  • Developing an innovative dexterous mutation scheme utilizing two efficient variants of an Opposition-Based Learning search mechanism, coordinated by an adaptive switch mechanism, and incorporating this manipulative search equation into the Chaos-Assisted Mountain Gazelle Optimizer.
  • Performing the thermal design of shell-and-tube heat exchangers operating with various nanoparticles in the tube bundle through the proposed enhanced Mountain Gazelle Optimizer.
The remaining sections of this research study are organized as follows: Section 2 explains the fundamentals of the Mountain Gazelle Optimizer. Section 3 introduces the preliminaries of the chaotic algorithms and the procedural integration steps of various chaotic maps into the original MGO, explaining the numerical experiments used to determine the best chaotic algorithm among competing alternatives. Section 4 outlines the fundamental algorithmic steps of the proposed mutation scheme and describes its integration into the chaos-induced MGO. Section 5 evaluates the optimization performance of the improved algorithm using various multidimensional benchmark problems. Section 6 focuses on determining the most suitable topological design parameters for shell-and-tube heat exchangers operating with different nanoparticles, a topic that has not been studied in depth in the existing literature. Section 7 concludes this study with notable comments and outlines future directions for upcoming studies on the thermal design of nanofluid-based heat exchangers and the development of OBL-based mutation schemes.

2. Fundamentals of Mountain Gazelle Optimizer

The social herding behaviors of mountain gazelles inspire the development of the Mountain Gazelle Optimization algorithm. The basic patterns of their life in the natural habitat are the main elements behind the algorithm’s development and its governing mathematical model. The MGO algorithm performs four distinctive operators during trial solution generation, considering the influential factors of bachelor male herds, maternity herds, solitary and territorial males, and migrating mountain gazelles searching for available food sources. Each trial member of the population (Xi) can be produced by the subgroups of maternity herds, bachelor male herds, or solitary males during the iterative process. The adult male gazelle generated from these alternative subgroups is considered the best solution of the current iteration. The algorithm assigns one-third of the entire population, the individuals with the highest fitness cost, to the subpopulation of young gazelles. The other solutions extracted from the evolving gazelle population are considered to belong to the subgroup of maternity herds. The fittest (strongest) gazelles, with high-quality solutions, are preserved for upcoming generations, while the remaining low-quality solutions are treated as sick and old gazelles and are discarded from the iteratively adjusted gazelle population. The algorithm executes the exploration and exploitation phases in parallel through its four main search mechanisms, which enables the search agents both to probe around the best solution obtained so far (exploitation) and to make large, unexpected jumps that avoid local pitfalls and maintain reliable exploration of the search space. The subsections below explain the details of the search mechanisms associated with the different categories of herding gazelles.

2.1. Territorial Solitary Males

When newborn gazelles reach adulthood and become strong, they establish their territory and mark spatial distances to separate themselves from neighboring gazelles. A fierce battle erupts between these solid and young gazelles over the occupation of the territory or possession of the females. Young gazelles strive to capture the territory currently occupied by the established individuals, while adult males attempt to defend their territories. The defined search equation mathematically models this dispute between adult and young gazelles
$$TSM = Male_G - \left| \left( rdint_1 \cdot MCV - rdint_2 \cdot X_{iter} \right) \cdot FF \right| \cdot Cof_{1,r} \tag{1}$$
In the above equation, MaleG is the best solution obtained throughout the iterative process; the model parameters rdint1 and rdint2 are randomly chosen integers between 1 and 2; MCV is an algorithm parameter called the young male herd coefficient, calculated by Equation (2); FF is another model coefficient computed using Equation (3); Cof1,r is a randomly generated coefficient vector, renewed in each iteration to enhance the search efficiency of the algorithm, calculated by Equation (4); the symbol · represents a dot product that performs the multiplication of D-dimensional vectors; and |·| is the absolute value operator applied to the term (rdint1 · MCV − rdint2 · Xiter) · FF.
$$MCV = X_{rs} \cdot \left\lceil rd01_1 \right\rceil + M_{pr} \cdot \left\lceil rd01_2 \right\rceil, \qquad rs \in \left\{ \left\lceil N/3 \right\rceil, \ldots, N \right\} \tag{2}$$
In Equation (2), N is the number of gazelles in the population; Xrs is a random solution (a young male) selected from the interval rs; Mpr is the mean value of search agents randomly chosen from the interval between N/3 and N; rd011 and rd012 are random numbers uniformly drawn between 0 and 1; and ⌈x⌉ is the ceiling function that maps the real number x to the smallest integer greater than or equal to it.
$$FF = S(D) \cdot \exp\left( 2 - iter \cdot \frac{2}{Maxiter} \right) \tag{3}$$
where S(D) is a set of D-dimensional random numbers generated from the uniform distribution; exp() is the mathematical exponential function; and Maxiter is the maximum number of iterations, while iter is the current iteration counter.
$$Cof_{r} = \begin{cases} (aa + 1) + rnd_1 \\ aa \cdot S_2(D) \\ rnd_3(D) \cdot S_3(D) \\ S_4(D)^2 \cdot \cos\left( 2 \cdot rnd_2 \cdot S_3(D) \right) \end{cases} \tag{4}$$
In Equation (4), one of the four alternatives is selected at random whenever the coefficient vector is renewed; aa is an iteratively adjusted parameter calculated by Equation (5); rnd1 and rnd2 are random numbers within the range [0, 1]; rnd3(D) is a D-dimensional random vector from a uniform distribution; and S2(D), S3(D), and S4(D) are D-dimensional random vectors defined analogously to S(D) in Equation (3).
$$aa = -\left( 1 + iter \cdot \frac{1}{Maxiter} \right) \tag{5}$$
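The territorial solitary male update of Equations (1)–(5) can be sketched as follows. The paper's experiments were run in MATLAB; this is an illustrative Python/NumPy translation, not the authors' code, and the uniform random vectors S(D) and the random choice among the four coefficient alternatives of Equation (4) are literal reconstructions of the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def cof_vector(aa, dim):
    """Randomly select one of the four coefficient alternatives of Eq. (4)."""
    s2, s3, s4 = rng.random(dim), rng.random(dim), rng.random(dim)
    options = [
        np.full(dim, (aa + 1) + rng.random()),     # (aa + 1) + rnd1
        aa * s2,                                   # aa * S2(D)
        rng.random(dim) * s3,                      # rnd3(D) * S3(D)
        s4 ** 2 * np.cos(2 * rng.random() * s3),   # S4(D)^2 * cos(2 * rnd2 * S3(D))
    ]
    return options[rng.integers(4)]

def tsm(male_g, pop, x_i, it, max_it):
    """Territorial solitary male move, Eqs. (1)-(5)."""
    n, dim = pop.shape
    third = int(np.ceil(n / 3))
    # MCV, Eq. (2): random young male plus mean of the lower-ranked part of the herd
    mcv = (pop[rng.integers(third, n)] * np.ceil(rng.random())
           + pop[third:].mean(axis=0) * np.ceil(rng.random()))
    ff = rng.random(dim) * np.exp(2 - it * (2 / max_it))   # FF, Eq. (3)
    aa = -(1 + it / max_it)                                # aa, Eq. (5)
    r1, r2 = rng.integers(1, 3), rng.integers(1, 3)        # rdint1, rdint2 in {1, 2}
    return male_g - np.abs((r1 * mcv - r2 * x_i) * ff) * cof_vector(aa, dim)  # Eq. (1)
```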

2.2. Maternity Herds

Maternal gazelles play a crucial role in the sustainable life cycle of mountain gazelles, as they give birth to the young males that form the herding population. Male gazelles also influence the rearing of newborn male gazelle offspring and the behavior of female gazelles. The following expression can translate this gazelle behavior into a mathematical equation.
$$MH = \left( MCV + Cof_{2,r} \right) + \left( rdint_3 \cdot Male_G - rdint_4 \cdot X_{rand,N} \right) \cdot Cof_{3,r} \tag{6}$$
In Equation (6), MCV accounts for the influence of young males on the gazelle population, as previously explained in Equation (2); Cof2,r and Cof3,r are coefficient vectors independently calculated using Equation (4); the parameters rdint3 and rdint4 are random integers between 1 and 2; MaleG is the gazelle with the best fitness value in the current iteration; and Xrand,N is a gazelle randomly chosen from the population.

2.3. Bachelor Male Herds

The gradual maturation of young male gazelles leads to them owning a specific territory and possessing female gazelles. Young gazelles engage in a fierce battle with the male gazelles over control and possession of the females during this algorithm phase, as formulated by the following.
$$BMH = \left( X_{iter} - DD \right) + \left( rdint_5 \cdot Male_G - rdint_6 \cdot MCV \right) \cdot Cof_{4,r} \tag{7}$$
In Equation (7), Xiter is the current position of the gazelle at iteration iter; Cof4,r is a randomly generated coefficient vector independently calculated using Equation (4); rdint5 and rdint6 are random integers chosen from {1, 2}; and DD is calculated by
$$DD = \left( \left| X_{iter} \right| + \left| Male_G \right| \right) \cdot \left( 2 \cdot rnd_4 - 1 \right) \tag{8}$$
where rnd4 is a random number within the range [0, 1].

2.4. Migration for Searching Food

Mountain gazelles in the population look for fertile areas where food sources are likely to be abundant. They travel along the possible paths to reach these promising areas. Randomly generated solutions between upper and lower search limits are utilized to formulate this foraging behavior, as shown in the following.
$$MSF = LB + rnd_5 \cdot \left( UB - LB \right) \tag{9}$$
where rnd5 is a random value between 0 and 1, and LB and UB are the lower and upper limits of the defined search space. These four search mechanisms are applied to all members of the gazelle population to generate new offspring solutions. After the new gazelle individuals are produced, the solutions of each generation are sorted in ascending order of cost. The best gazelles, having the fittest solution values, remain in the population, while the gazelle members with the worst fitness values are removed from it. The best gazelle can also be regarded as the adult male that dominates and owns the territory.
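Putting the four operators together, one MGO generation can be sketched as below. This is a deliberately simplified Python/NumPy illustration: the coefficient vector of Equation (4) is replaced by a plain uniform vector, Equation (2) is approximated, and the sphere objective is a toy function chosen only for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # Toy objective for illustration only (not one of the paper's benchmarks).
    return float(np.sum(x ** 2))

def mgo_generation(pop, lb, ub, it, max_it):
    """One simplified MGO generation: every gazelle spawns four candidates
    (TSM, MH, BMH, MSF); the pooled population is sorted by fitness in
    ascending order and truncated back to size N (elitist survival)."""
    n, d = pop.shape
    fit = np.array([sphere(x) for x in pop])
    male_g = pop[np.argmin(fit)]                    # best gazelle of the iteration
    cand = [pop]
    for i in range(n):
        mcv = pop[rng.integers(n // 3, n)] + pop[n // 3:].mean(axis=0)  # Eq. (2), simplified
        ff = rng.random(d) * np.exp(2 - it * 2 / max_it)                # Eq. (3)
        cof = rng.random(d)                                             # stand-in for Eq. (4)
        tsm = male_g - np.abs((rng.integers(1, 3) * mcv
                               - rng.integers(1, 3) * pop[i]) * ff) * cof       # Eq. (1)
        mh = (mcv + cof) + (rng.integers(1, 3) * male_g
                            - rng.integers(1, 3) * pop[rng.integers(n)]) * cof  # Eq. (6)
        dd = (np.abs(pop[i]) + np.abs(male_g)) * (2 * rng.random() - 1)         # Eq. (8)
        bmh = (pop[i] - dd) + (rng.integers(1, 3) * male_g
                               - rng.integers(1, 3) * mcv) * cof                # Eq. (7)
        msf = lb + rng.random(d) * (ub - lb)                                    # Eq. (9)
        cand.append(np.clip(np.stack([tsm, mh, bmh, msf]), lb, ub))
    pool = np.vstack(cand)
    order = np.argsort([sphere(x) for x in pool])
    return pool[order[:len(pop)]]                   # fittest N survive

# Usage: a few generations on the toy sphere function.
lb, ub, n, d, max_it = -5.0, 5.0, 12, 4, 30
pop = lb + rng.random((n, d)) * (ub - lb)
best0 = min(sphere(x) for x in pop)
for it in range(1, max_it + 1):
    pop = mgo_generation(pop, lb, ub, it, max_it)
best = min(sphere(x) for x in pop)
```

Because the parent population is included in the candidate pool before truncation, the best fitness is non-increasing across generations, mirroring the elitist survival step described above.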

3. Chaotic Mountain Gazelle Optimization Algorithm

Integrating chaos into metaheuristic algorithms has been a widespread practice for nearly two decades, as it augments the efficiency of the base algorithm by maintaining a proper balance between exploitation and exploration, two essential features of any metaheuristic optimizer. Chaos describes the seemingly random yet fully deterministic behavior of nonlinear systems: minimal changes in the initial conditions may result in considerable deviations in the system’s future behavior. Chaotic sequences can thus be defined mathematically as the semi-random number sequences generated by well-tailored nonlinear deterministic systems [27]. Many chaos-induced metaheuristic algorithms have been developed in recent years, leveraging the favorable properties of chaotic numbers over randomly generated numbers drawn from a uniform distribution [28]. The primary idea is to map the pseudo-random chaotic numbers into the prescribed solution space. The tedious process of searching for global optimum points then benefits from the intrinsic features of the employed chaotic maps, such as ergodicity, regularity, and stochasticity. Chaos-enhanced algorithms converge more quickly to the optimal solution and avoid local pitfalls within the solution domain; these favorable and reliable characteristics put chaotic algorithms one step ahead of their contemporary alternatives.
Contrary to the majority of previous works on chaotic metaheuristic algorithms, which have considered only a limited number of chaotic maps for comparative evaluation, this research study integrates twenty-one non-invertible deterministic chaotic maps into the original Mountain Gazelle Algorithm. These twenty-one different chaotic maps have been separately implemented into the standard Mountain Gazelle Optimizer to evaluate the optimization performance improvement achieved through the merits of the chaotic numbers. A comparative assessment of the solution quality improvement is performed by analyzing the mean fitness values obtained over the independent runs of each chaotic variant of the Mountain Gazelle Optimizer. The twenty-one distinct chaotic maps to be separately embedded into the standard Mountain Gazelle Optimizer are as follows: Arnold map (CH01) [29], Bernoulli Map (CH02) [30], Chebyshev Map (CH03) [31], Chirikov Map (CH04) [32], Gauss Map (CH05) [33], Gingerbreadman Map (CH06) [34], Henon Map (CH07) [35], Ikeda Map (CH08) [36], Baker Map (CH09) [37], Iterative Map (CH10) [38], Kent Map (CH11) [39], Logistic Map (CH12) [40], Lozi Map (CH13) [41], Piecewise Map (CH14) [42], Sawtooth Map (CH15) [43], Sinai Map (CH16) [44], Sine Map (CH17) [45], Singer Map (CH18) [46], Standard Map (CH19) [37], Tent Map (CH20) [47], and Zaslavskii Map (CH21) [48].
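To make the substitution concrete, a few of the one-dimensional maps above can be iterated to produce sequences that replace uniform random numbers. The sketch below uses common textbook forms and default parameters (r = 4 for the logistic map, mu = 1.99 for the tent map, a = 0.4 for the piecewise Bernoulli form); the exact parameterizations used in the paper are not reproduced here.

```python
import numpy as np

def logistic(x):
    # CH12: logistic map, fully chaotic at control parameter r = 4.
    return 4.0 * x * (1.0 - x)

def tent(x, mu=1.99):
    # CH20: tent map; mu close to 2 is a common default (an assumption here).
    return mu * x if x < 0.5 else mu * (1.0 - x)

def bernoulli(x, a=0.4):
    # CH02: piecewise Bernoulli shift form often used in chaotic optimizers.
    return x / (1.0 - a) if x <= 1.0 - a else (x - (1.0 - a)) / a

def chaotic_sequence(step, x0=0.7, n=1000):
    """Iterate a map from seed x0 and return n values in [0, 1] that can be
    used in place of uniformly distributed random numbers."""
    seq, x = np.empty(n), x0
    for k in range(n):
        x = step(x)
        seq[k] = x
    return seq
```

In a chaotic MGO variant, every call to the uniform random generator in Equations (1)–(9) would draw the next value of such a sequence instead.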

Decision Process of the Best-Performing Chaotic Mountain Gazelle Algorithm Variant

This subsection provides a brief explanation of the procedural steps for selecting the best-performing chaotic MGO variant among competing alternatives. As previously described in the above section, chaotic pseudo-number sequences extracted from different chaotic maps are separately substituted into the main running MGO by removing uniformly generated random numbers employed in the original version of the algorithm. Past studies on chaotic metaheuristic approaches have revealed that integrating chaos into the optimization method significantly improves the overall search efficiency of the governing algorithm. Building on previous literature, this study will utilize twenty-one distinct chaotic maps collected from literature studies to enhance the two pivotal yet complementary search mechanisms of exploration and exploitation. This entails a significant boost in convergence rates and improved solution diversity.
All compared MGO-based chaotic algorithms are developed in MATLAB 2018 and run on a laptop computer with an Intel Core i7 processor and 16 GB of RAM. The twenty-one chaotic MGO algorithms are benchmarked against each other using forty-eight widely used optimization test functions, comprising 30D multimodal and unimodal artificially generated problems whose functional descriptions are given in Table 1. Each competing chaotic MGO variant has been applied to this pool of challenging test functions, and the chaotic MGO with the best mean rank value is considered for further evaluation. All chaotic methods are independently run 50 times with 2000 function evaluations each. Algorithms are ranked according to the mean fitness values obtained for each benchmark problem. The most effective method is the chaotic optimizer with the lowest mean rank, based on its cumulative performance averaged over the forty-eight distinct optimization test functions. Figure 1 compares the ranking performance of the chaotic algorithms based on their mean fitness values for each test function used. In Figure 1, MG stands for the original Mountain Gazelle algorithm. For the 30D multimodal test problems, the CH02 chaotic variant is the dominant algorithm for most test cases and ultimately wins the contest for multimodal problems. CH09 is the second-best-performing algorithm for multimodal test problems, yielding relatively consistent estimates for all test cases except those involving the F4, F15, and F22 functions. Although CH02 provides the best mean predictions in 16 out of 24 unimodal test problems, its weaker estimates for F26 and F45 place this chaotic variant in second place after the CH03 algorithm, which consistently ranks high across the unimodal benchmark suite when the mean results are averaged.
Another algorithm demonstrating robust solution consistency is CH09, which suffers from the poor estimation obtained for the F45 test problem, rendering this algorithm the third-best method for 30D unimodal problems. The overall ranking performance of the compared algorithms, as reported at the bottom part of Figure 1, reveals that CH02 (Bernoulli map) is the most accurate algorithm, with a ranking point of 2.7, which is the average value of the ranking points obtained for multimodal and unimodal test problems in the defined test suite. CH03 (Chebyshev) ranks second with a ranking point of 3.3. CH09 (Baker map), CH08 (Ikeda map), and CH16 (Sinai map) are found to occupy the third, fourth, and fifth positions in the overall performance evaluation of the optimization. Figure 2 illustrates the sequential chaotic numbers generated by the five best-performing algorithms.

4. Improved Mountain Gazelle Optimization Algorithm

The algorithm begins with the chaotic Latin Hypercube Sampling mechanism to ensure that initial candidate solutions are well distributed across the multidimensional solution domain, a crucial step in augmenting diversity in the population before subsequent iterations proceed. Then, the iterative process starts with applying chaos-induced MGO to the defined optimization problem. After following the basic steps of the chaotic algorithm, the proposed scheme is activated with a probability of 0.5, thereby avoiding the need to expend extravagant computational resources. The first modified model integrates the Quasi-Oppositional and Dynamic Opposition-Based Learning methods, combining them into a single procedure. It also considers the influences of cosine and sine trigonometric functions to ensure nonlinear and diverse movements, thereby helping to avoid entrapment in local minima. The second modified version of QDOPP proposed in this study utilizes a novel random number generator based on trigonometric functions, founded on the concept of spiraling dynamics and adaptive fitness-driven perturbations, which adjust the direction of the perturbation in response to changes in the fitness landscape. These two intelligently devised perturbation schemes have been guided by a novel adaptive switch mechanism, which simultaneously considers the varying solution diversity in the population, the ratio of the improved solutions in the current population, and the quality of the best fitness value in the evolving populations of competing alternate QDOPP algorithms. The upcoming subsections briefly explain the basic principles underlying these innovative search procedures.
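The authors' two modified QDOPP operators and the adaptive switch are not reproduced here. For orientation only, the textbook quasi-opposite and dynamic-opposite points from the OBL literature, on which such quasi-dynamic schemes are built, can be sketched as follows; the weighting factor and the clipping to bounds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def opposite(x, lb, ub):
    # Classical opposite point of Opposition-Based Learning.
    return lb + ub - x

def quasi_opposite(x, lb, ub):
    """Quasi-opposite point: a uniform draw between the interval centre
    c = (lb + ub) / 2 and the opposite point."""
    c = (lb + ub) / 2.0
    xo = opposite(x, lb, ub)
    lo, hi = np.minimum(c, xo), np.maximum(c, xo)
    return lo + rng.random(x.shape) * (hi - lo)

def dynamic_opposite(x, lb, ub, w=1.0):
    """Dynamic-opposite point: perturb x toward a randomly scaled opposite.
    The weighting factor w = 1.0 is an assumed default, not the paper's value."""
    xo = opposite(x, lb, ub)
    cand = x + w * rng.random(x.shape) * (rng.random(x.shape) * xo - x)
    return np.clip(cand, lb, ub)
```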

4.1. Population Initialization with Chaos-Induced Hybrid Latin Hyper Cube Sampling and Opposition-Based Learning

Many past studies emphasize the importance of generating highly diverse population members in the initialization phase, which enables significantly better predictive results during the iterative process. The concept of Opposition-Based Learning has been widely integrated into the population initialization phase to diversify the search space as much as possible [49]. It is a favorable option for initializing candidate solutions of higher quality, as computing the opposite numbers is a relatively straightforward process that does not excessively consume computational resources. Consider an evolving population X with N population members, each comprising a D-dimensional candidate solution. The opposite point of each trial solution is calculated by
$$X_{i,j}^{OP} = low_j + up_j - X_{i,j}, \qquad i = 1, 2, \ldots, N, \quad j = 1, 2, \ldots, D \tag{10}$$
where XOP is the opposite point of each individual in the population X, and lowj and upj represent the lower and upper limits of the jth dimension. Another option is to embed pseudo-random numbers generated by various chaotic maps into the representative model responsible for producing the initial population. Apart from the different learning schemes, each with its intrinsic computational advantages in generating the initial population for metaheuristic algorithms, statistical tools have also been consistently employed in the population initialization process. Among them, the Latin Hypercube Sampling (LHS) method is a well-established and proven statistical approach, and it has been one of the most widely adopted statistical procedures for metaheuristic initialization. This method is an efficient way to produce sample inputs drawn from a multidimensional distribution [24]. It ensures that the generated samples are well scattered across the defined range, yielding uniform stratification along each dimension and avoiding clustering in the population while preserving high diversity; sampling is thus much better guided than with pure random sampling. However, although this procedure is a productive alternative for generating samples evenly distributed across the dimensions, it may not be advantageous for sampling high-dimensional spaces, as the imposed complexity may prevent the search space from being filled efficiently. In addition, the artificially structured stratification of samples can introduce unexpected correlations between them, jeopardizing the generated randomness. Moreover, despite its effectiveness in covering the entire search span with evenly distributed random numbers, LHS might not be sufficient to probe the critical regions where promising areas leading to global optimum solutions probably reside.
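The opposition-based initialization of Equation (10) can be sketched in a few lines of Python. Keeping the N fittest individuals from the union of X and its opposite population is a common OBL initialization recipe and is an assumption here; the paper combines OBL with the chaotic LHS scheme described next.

```python
import numpy as np

rng = np.random.default_rng(3)

def obl_init(n, low, up, f):
    """Opposition-based initialization: draw a random population X, form its
    opposite X_OP via Eq. (10), and keep the n fittest of the 2n candidates
    under objective f (minimization)."""
    d = low.size
    x = low + rng.random((n, d)) * (up - low)
    x_op = low + up - x                       # Eq. (10), vectorised over the population
    pool = np.vstack([x, x_op])
    fit = np.array([f(p) for p in pool])
    return pool[np.argsort(fit)[:n]]
```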
Therefore, to eliminate the procedural drawbacks of the LHS method, this study proposes integrating chaotic sequences generated by the logistic map [40] to achieve enhanced diversity in the population, reduce the correlation between sampled parameters, and avoid entrapment in troublesome local minima during the early phase of iterations. Below, the mathematical model explains how chaotic numbers of the logistic map are introduced into the algorithmic procedure of the Latin Hypercube Sampling method.
Suppose that X is an N-sized, D-dimensional population representing the sampled individuals, where each member is defined by x_i = (x_{i,1}, x_{i,2}, x_{i,3}, …, x_{i,D}). For each variable x_j, the domain is divided into N equally spaced, non-overlapping intervals, expressed by the following equation
M_j^k = [ (k - 1)/N , k/N ],    k = 1, 2, …, N,   j = 1, 2, 3, …, D
The above formulation is a crucial step in the algorithm, specifically in stratifying the sampled parameters, ensuring that at least one sample is placed within each defined interval. Therefore, for each variable x_j, N intervals are prepared to provide a uniform marginal distribution. The next step is to generate a random offset within each interval, using sequential chaotic numbers from the logistic map rather than uniformly distributed random numbers. The following equation computes the randomized placement of samples within the intervals.
r_j^k = (k - 1 + chx_j^k) / N,    k = 1, 2, …, N,   j = 1, 2, 3, …, D
where chx is a chaotic number ensuring the random offset in the interval, which is generated by the logistic map calculated by the following equation [40].
chx^{k+1} = 4 · chx^k · (1 - chx^k),    k = 1, 2, …, N
The sampling order in each dimension is shuffled to avoid correlation among the generated variables. Assuming that C^j = (c_1^j, c_2^j, …, c_N^j) is a random permutation vector of {1, 2, 3, …, N} for each j, applying the randomly generated permutation vector C^j rearranges the stratified samples through the expression below.
x_{i,j} = r_j^{c_i^j},    i = 1, 2, …, N,   j = 1, 2, 3, …, D
Finally, stratified random solutions are scaled into the predefined range between lowj and upj using the equation below.
x_{i,j} = low_j + (up_j - low_j) · r_j^{c_i^j},    i = 1, 2, …, N,   j = 1, 2, 3, …, D
This equation ensures that each sample remains within its predefined range. The algorithm below explains the basic steps of the proposed hybrid population initialization scheme. MATLAB code of the proposed Latin Hypercube Sampling scheme can be found in the Supplementary Materials (LHS_Logistic.m).
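As an illustration, the hybrid initialization described above can be sketched in Python (the Supplementary Materials provide the authors' MATLAB version, LHS_Logistic.m; the function name `lhs_logistic` and the fixed logistic-map seed are illustrative choices, not part of the paper):

```python
import numpy as np

def lhs_logistic(low, up, n, d, seed=0.7):
    """Latin Hypercube Sampling with logistic-map offsets (sketch of LHS_Logistic.m)."""
    low, up = np.asarray(low, float), np.asarray(up, float)
    # chaotic offsets from the logistic map: ch_{k+1} = 4 * ch_k * (1 - ch_k)
    chx = np.empty(n * d)
    c = seed
    for k in range(n * d):
        c = 4.0 * c * (1.0 - c)
        chx[k] = c
    chx = chx.reshape(n, d)
    x = np.empty((n, d))
    for j in range(d):
        # one sample per stratum [(k-1)/N, k/N), offset chaotically inside it
        r = (np.arange(n) + chx[:, j]) / n
        perm = np.random.permutation(n)   # shuffle strata to decorrelate dimensions
        x[:, j] = low[j] + (up[j] - low[j]) * r[perm]
    return x
```

Each column contains exactly one sample per stratum, so the marginal distributions stay uniform while the logistic map supplies the within-stratum randomness.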

4.2. Hybrid Chaotic Quasi-Opposition-Based Learning and Quasi-Dynamic Opposition-Based Learning Search Mechanism

This section examines innovative approaches for integrating the Quasi-Opposition-Based Learning [50] (QOBL) and Quasi-Dynamic Opposition-Based Learning [26] (QDOPP) algorithms to enhance solution diversity and prevent convergence stagnation during iterations. Various Opposition-Based Learning models have been discussed in the existing literature. Despite its recent emergence, the Dynamic Opposition-Based Learning search mechanism [51] presents a favorable alternative to the remaining opposition-based variants, achieving high-quality solutions within a runtime comparable to those of the available alternatives. QOBL is a widely known method that performs perturbations between the center and opposite points to enhance solution quality in the evolving population. However, its overall impact may be limited when the governing search space is irregular or when opposite points do not reflect a plausible symmetry. Additionally, quasi-opposite points may be over-randomized, putting the algorithm at risk of wasting function evaluations on unpromising regions. Past experience with the probing efficiency of QDOPP reveals that similar algorithmic drawbacks are also applicable to QDOPP: its success heavily depends on the spatial structure of the search space. If dynamic adjustments are performed too aggressively, the algorithm tends to delay convergence by exploring irrelevant areas. Furthermore, the adverse effects of the curse of dimensionality hinder the generation of asymmetric sample points in highly irregular problems, which are prevalent in many real-world optimization tasks. These two methods are therefore integrated into a single scheme to alleviate the algorithm-specific disadvantages of the QDOPP and QOBL search strategies. Trigonometric sine and cosine functions are also embedded into the random number generation stage of this proposed scheme.
Employing sine and cosine functions introduces a periodic and smooth variation between exploration and exploitation phases through controlled oscillations in the search space, which is also beneficial in avoiding premature convergence. In addition to these features, a structured non-uniform distribution created by sine and cosine functions, when synergized with an iteration counter or other time-varying parameters, can help focus on fertile regions more effectively. Finally, chaotic numbers generated by the Chebyshev map are integrated into this manipulation procedure to enhance the contributions of the above-defined improvements, aiming to boost population diversity and eliminate local pitfalls in the search space. A simple yet effective formulation of the Chebyshev chaotic map [31] is given below.
chx2^{t+1} = cos(t · arccos(chx2^t))
where chx2 is a chaotic number defined in the range [-1, 1]; t represents the current iteration, and t + 1 stands for the next iteration. Chaotic behavior is observed for t ≥ 2 as iterations increase. The Quasi-Dynamic Opposition-Based Learning and Quasi-Opposition-Based Learning search mechanisms are provided below.
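A minimal sketch of the Chebyshev map iteration, assuming the iteration index t starts at 1 (the function name is illustrative):

```python
import math

def chebyshev_sequence(x0, n):
    """Chebyshev chaotic map: chx2_{t+1} = cos(t * arccos(chx2_t)), values in [-1, 1]."""
    seq, x = [], x0
    for t in range(1, n + 1):             # chaotic behaviour emerges for t >= 2
        x = math.cos(t * math.acos(x))
        seq.append(x)
    return seq
```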
Let X = (x_1, x_2, …, x_D) represent a vector in D-dimensional space with x_j ∈ [low_j, up_j], j = 1, 2, …, D, where low_j and up_j stand for the lower and upper limits of the search space. The quasi-opposite solution X^{QOBL} can be expressed in D-dimensional space by the following.
X_j^{QOBL} = rand( 0.5 · (low_j + up_j),  low_j + up_j - X_j )
In the above equation, rand(a,b) generates a uniform random number between a and b. Similarly, the Dynamic Opposite Point (XDOPP) in a D-dimensional space is calculated by using the following set of equations.
X_j^{OP} = low_j + up_j - X_j
X_j^{DOPP} = X_j + ω · rand_1(0,1) · ( rand_2(0,1) · X_j^{OP} - X_j )
where ω is a weight factor, and rand_1(0,1) and rand_2(0,1) are two random numbers between 0 and 1 drawn from a uniform distribution. The algorithm below outlines the procedural steps for integrating these two opposition-based learning variants and synergizing the trigonometric sine and cosine functions with chaotic numbers generated from the Chebyshev map.
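For illustration, the quasi-opposite and dynamic opposite points defined above can be computed per dimension as follows (a Python sketch; the function names and the default weight w = 1 are assumptions, not from the paper):

```python
import random

def qobl_point(x, low, up):
    """Quasi-opposite point: a uniform sample between the interval centre
    and the opposite point, per the QOBL equation above."""
    c = 0.5 * (low + up)          # interval centre
    xop = low + up - x            # opposite point
    a, b = min(c, xop), max(c, xop)
    return random.uniform(a, b)

def dopp_point(x, low, up, w=1.0):
    """Dynamic opposite point: X + w * r1 * (r2 * X_OP - X)."""
    xop = low + up - x
    r1, r2 = random.random(), random.random()
    return x + w * r1 * (r2 * xop - x)
```

Note that the dynamic opposite point may land outside [low, up], which is why a boundary check follows these operators in the algorithms below.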
The algorithmic procedure between lines 6 and 11 explains the integration of the QOBL and DOPP learning methods. As seen in Algorithm 2, the governing equation of DOPP (Lines 8 and 11) uses the quasi-opposite solution rather than the opposite points X^{OP}. This preference offers several advantages for achieving high solution efficiency. Increased population diversity is one significant consequence of this integration, since the adaptive opposition of DOPP preserves diversity in the early stages, while the center-based solution generation of QOBL maintains diverse solutions in the later iterations. Another advantage is improved robustness across various problem types, as different problems require different search characteristics. For instance, DOPP can be considered a favorable search mechanism for complex and rugged fitness landscapes, while QOBL's local search characteristics render it suitable for smooth problems. That is to say, the dynamic nature of the DOPP algorithm allows search agents to perform intense exploration, while the alternative use of QOBL enables a smooth shift between exploration and exploitation, counterbalancing the contradictory search mechanisms. In addition, trigonometric sine and cosine functions provide smooth transitions between -1 and 1, enabling fine-grained control over generating opposites and eliminating the risk of producing opposite points that fall outside a reasonable range. Integrating Chebyshev map-based chaotic numbers with the trigonometric functions significantly enhances these advantages, which is conducive to avoiding premature convergence and providing better coverage of the search space. Between lines 15 and 17 of Algorithm 2, the population individuals of X^{QOBL} and X^{DOPP} are combined into a new population X^{ALL}.
Infeasible solutions in XALL are amended through the responsible boundary check mechanism, and the fittest N members are selected and stored in XBESTN population to be evaluated in upcoming iterations. The respective MATLAB code of the developed Hybrid Opposition-Based Learning procedure is provided in the Supplementary Materials (HOBL.m).
Algorithm 1: Hybrid population initialization scheme
Inputs: Population size–N, Problem dimension–D,
      Upper and lower limits of the search space (up and low)
      Initialize: N-sized D-Dimensional X population defined within the search limits
      Employ: logistic map induced LHS to generate evenly distributed population members
          XLHS = LHS (low, up, N, D, chx)
      Produce: opposite points (XOP) of X population using Equation (10)
      Combine: population individuals of X, XOP, and XLHS
          and select the fittest N solutions (XBESTN) between the competitive candidates
Output: The best N solutions (XBESTN) to be used for the iterative process
Algorithm 2: Hybrid Opposition-Based Learning Procedure (HOBL)
Inputs: Evolving population—X; Chebyshev chaotic numbers—chx
       Lower and upper limits of the search space (low and up)
 1     [N, D] = size (X)
 2     for i = 1 to N
 3         for j = 1 to D
 4           X_{i,j}^{OP} = low_j + up_j - X_{i,j}
 5           C_j = 0.5 · (low_j + up_j)
 6           if X_{i,j} < C_j
 7               X_{i,j}^{QOBL} = C_j + (X_{i,j}^{OP} - C_j) · sin(chx^1_{i,j})
 8               X_{i,j}^{DOPP} = X_{i,j} + sin(chx^2_{i,j}) · chx^3_{i,j} · (X_{i,j}^{QOBL} - X_{i,j})
 9           else
 10              X_{i,j}^{QOBL} = X_{i,j}^{OP} + cos(chx^4_{i,j}) · (C_j - X_{i,j}^{OP})
 11              X_{i,j}^{DOPP} = X_{i,j}^{QOBL} + cos(chx^5_{i,j}) · chx^6_{i,j} · (X_{i,j} - X_{i,j}^{QOBL})
 12             end
 13         end
 14    end
 15        XALL = [XQOBL;XDOPP] // Combine two populations
 16        XALL = boundary (XALL) // Apply boundary check mechanism
 17        XBESTN1 = sort (XALL,1:N)   // Select the fittest N individuals
Output: XBESTN1
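A compact Python sketch of the per-dimension update in Algorithm 2 (the Supplementary Materials contain the authors' MATLAB version, HOBL.m; here the fitness-based selection of the N best members is left to the caller, and clipping is used as a simple stand-in for the boundary check):

```python
import numpy as np

def hobl(X, low, up, chx):
    """Hybrid Opposition-Based Learning step (Algorithm 2 sketch).
    chx holds six Chebyshev chaotic numbers per (individual, dimension): shape (N, D, 6)."""
    N, D = X.shape
    XOP = low + up - X                    # opposite points
    C = 0.5 * (low + up)                  # interval centres
    XQOBL = np.empty_like(X)
    XDOPP = np.empty_like(X)
    for i in range(N):
        for j in range(D):
            if X[i, j] < C[j]:
                XQOBL[i, j] = C[j] + (XOP[i, j] - C[j]) * np.sin(chx[i, j, 0])
                XDOPP[i, j] = X[i, j] + np.sin(chx[i, j, 1]) * chx[i, j, 2] \
                              * (XQOBL[i, j] - X[i, j])
            else:
                XQOBL[i, j] = XOP[i, j] + np.cos(chx[i, j, 3]) * (C[j] - XOP[i, j])
                XDOPP[i, j] = XQOBL[i, j] + np.cos(chx[i, j, 4]) * chx[i, j, 5] \
                              * (X[i, j] - XQOBL[i, j])
    XALL = np.vstack([XQOBL, XDOPP])      # combine the two populations (Line 15)
    return np.clip(XALL, low, up)         # simple boundary repair stand-in (Line 16)
```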

4.3. Hybridization of Quasi-Dynamic Opposite Learning Search Mechanism (QDOPP) with a Novel Trigonometric Random Number Generator and Adaptive Fitness-Based Perturbation Scheme

This section addresses the algorithmic drawbacks of Quasi-Dynamic Opposition-Based Learning by introducing a novel trigonometric random number generator and an adaptive search equation guided by the fitness values of the population individuals. Although the effectiveness of the QDOPP algorithm has been verified on various benchmark cases of unconstrained and constrained optimization problems, evident shortcomings still prevail, such as potential over-exploration caused by the concurrent use of dynamic adjustments and quasi-opposite numbers. This is problematic in phases where solution refinement in near-optimal areas is more crucial. Similar to most opposition-based algorithms, QDOPP also suffers from dependency on the problem and its fitness landscape characteristics; that is, it may excel on problems with multimodal characteristics but underachieve on more straightforward convex cases. Furthermore, generating opposite numbers to reach promising solutions can be ineffective for high-dimensional problems due to inefficient exploration. Therefore, this section proposes a novel trigonometric random number generator with numerous functional advantages to address these drawbacks of QDOPP. A brand-new Inverse Sinusoidal Spiral Map-Based Random Number Generator (ISSM-RNG) is introduced, which facilitates the generation of diverse yet stable random numbers. The method leverages the functional characteristics of inverse sinusoidal functions, generating spiraling dynamics that evolve into chaotic yet bounded sequences. The following equation generates the recursive sequence of random numbers.
r_{t+1} = [ sin(2 · r_t) + 1.5 · cos(3 · r_t^2) + 2.2 · sin(1.8 · r_{t-1}) + 2.6 · cos(1.9 · r_t · r_{t-1}) + 1 / (1 + r_t^2) ]  mod 1
The inverse term 1/(1 + r_t^2) in Equation (20) introduces spiral shrinkage, preventing runaway values in the consecutive sequence. The interaction between the sine and cosine functions incorporates oscillatory actions that promote diversity in the sequential random numbers, and the mod 1 operation bounds the sequence between 0 and 1. This random number generator is beneficial for metaheuristic algorithms, as the dynamic spirals resulting from the combination of sinusoidal terms with an inverse function help diversify the search, while the inverse term prevents the system from diverging rapidly and prematurely. Furthermore, integrating the previous random number (r_{t-1}) into the model introduces a memory effect, effectively improving adaptability. Figure 3 shows the dynamically updated random values produced by the ISSM-RNG. The MATLAB code of this sequential random number generation procedure is given in the Supplementary Materials (ISSM_RNG.m).
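The recursion can be sketched in Python, assuming the term grouping of Equation (20) as reconstructed above and illustrative seed values r0 and r1 (the MATLAB reference implementation is in ISSM_RNG.m):

```python
import math

def issm_rng(n, r0=0.37, r1=0.52):
    """ISSM-RNG sketch: recursive sinusoidal terms with an inverse 'spiral
    shrinkage' term and a two-step memory, folded into [0, 1) via mod 1."""
    seq = [r0, r1]                        # illustrative seed pair
    for _ in range(n):
        rt, rprev = seq[-1], seq[-2]
        val = (math.sin(2.0 * rt)
               + 1.5 * math.cos(3.0 * rt ** 2)
               + 2.2 * math.sin(1.8 * rprev)        # memory effect via r_{t-1}
               + 2.6 * math.cos(1.9 * rt * rprev)
               + 1.0 / (1.0 + rt ** 2))             # inverse shrinkage term
        seq.append(val % 1.0)
    return seq[2:]
```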
Another novelty proposed in this section is the adaptive fitness-driven perturbation (AFDP) strategy. This parameter-free scheme significantly enhances the diversity of the general population in metaheuristic optimizers. The main idea is to adjust the magnitude and adaptation of the perturbations based on the variation in fitness values among population members. The self-regulating nature of this procedure requires no predefined parameters: the fitness difference between the best and ordinary members guides the magnitude of the perturbations. In this context, fitter individuals undergo smaller perturbations to avoid over-exploration, while less fit members experience larger perturbations to improve diversification. The formulation below defines the fitness-driven perturbation scheme, which incorporates population diversity and different probability distribution functions.
Assume that the population X comprises N D-dimensional candidate solutions, X = {x_1, x_2, …, x_N}, x_i ∈ R^D. The objective function of the problem can be expressed as f: R^D → R, where the best fitness in the population is f_best = min f(x_i).
For each trial solution in population X, the normalized fitness gap is calculated by
δ(x) = ( f(x) - f_best ) / ( max( f(x), f_best ) + ϵ )
where ϵ is a small positive constant very close to zero, used to avoid the possibility of division by zero. The proposed scheme considers the current diversity in the population to define an adaptive scale among individuals, calculated as follows.
Diver = (1/D) Σ_{j=1}^{D} sqrt( (1/N) Σ_{i=1}^{N} (x_{i,j} - x̄_j)^2 ),   where x̄_j = (1/N) Σ_{i=1}^{N} x_{i,j},   j = 1, 2, …, D
An adaptive perturbation scale is introduced to the algorithm, coupling the fitness gap with the current diversity.
σ(x) = δ(x) · Diver
This means that a trial solution farther from the best solution (larger δ(x)) in a more diverse population (higher Diver) experiences a larger perturbation. In contrast, members starting to converge on optimal solutions have comparatively lower fitness gap values and diversity rates, promoting local exploitation. The perturbed individual is calculated by
x_{new,i} = x_i + σ(x_i) · vecrand(x_i),    i = 1, 2, …, N
where vecrand(x) is a random perturbation vector drawn from alternative probability distributions, whose shape is determined by online performance feedback. Adaptive scores related to solution improvement rates determine the probability of switching among Gaussian, Cauchy, or Lévy distributions. This adaptive distribution operator selection is described in the following.
Assume a set of perturbation operators O = {o_1, o_2, …, o_k} is available to the algorithm. For each operator o in O, a performance score S_o(t) is assigned and updated in each iteration. The improvement in solution quality achieved when an operator o is employed at iteration t is computed by
Δf_{o,i} = f(x_i) - f(x_{new,i}),    i = 1, 2, …, N
where f ( x i ) and f ( x n e w , i ) are, respectively, the objective function values of the current and updated ith member of the population. Then, its respective performance score is updated using the following formula.
S_o^{t+1} = S_o^t + Δf_{o,i}^t - P_o^t,    i = 1, 2, …, N
where S_o^t is the current performance score of distribution operator o, Δf_{o,i}^t is the fitness improvement observed when perturbation operator o is applied to the ith member at iteration t, and P_o^t is the penalization factor applied when no fitness improvement is observed with distribution operator o. Then, the probabilistic selection of the distribution operator o at iteration t is calculated by
pr_{o,i}^t = S_{o,i}^t / Σ_{n ∈ O} S_{n,i}^t
This probabilistic selection equation enables the algorithm to self-adjust the considered perturbation schemes without using preset constant parameters. Algorithm 3 provides the algorithmic steps of the proposed adaptive fitness-driven perturbation model.
The MATLAB code of the Adaptive Fitness-Driven Perturbation scheme can be found in the Supplementary Materials (AFDP.m).
Algorithm 3: Adaptive Fitness-Driven Perturbation algorithm—AFDP
Inputs: Population members—X; objective function—fobj ()
        [N,D] = size (X)  // Determine the population size N and problem dimension D
        Calculate: the diversity of the population Diver using Equation (22)
        Decide: the best individual among the current population—fbest
        Initialize: the operator performance scores–So
        for i = 1 to N          // each population member in X
           Calculate: the normalized fitness gap δ(xi) using Equation (21)
           Compute: the adaptive scale factor σ(xi) using Equation (23)
           Determine: the operator selection probability–pro
                      from the current scores using Equation (27)
           if rand(0,1) < pro   // Randomly select the operator according to pro
             o = 1           // Consider perturbation based on Gaussian distribution
             vecrand (xi) = Gaussian(1,D)
           else
             o = 2           // Consider perturbation based on Cauchy distribution
             vecrand (xi) = Cauchy(1,D)
           end
           Obtain: the updated population member xnew,i using Equation (24)
           Perform: boundary check on xnew,i to verify solution feasibility
           Evaluate: the fitness value of xnew,ifobj(xnew,i)
           if fobj (xnew,i) < fobj (xi)     // Accept candidate if improvement is observed
             Δi = fobj (xi) - fobj (xnew,i)  // Employ Equation (25) to calculate improvement
             So = So + Δi      // Update the operator score (So)
             xi = xnew,i       // Accept the improved candidate
           else
             So = So · 0.99 // Po: penalize the operator if no improvement occurs
           end
        end
Output: XUPTD—updated population members
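A Python sketch of Algorithm 3 restricted to the Gaussian and Cauchy operators it lists (the Supplementary Materials provide the authors' MATLAB version, AFDP.m; the unit initial scores, the clipping-based boundary check, and the assumption of non-negative objective values in the fitness-gap denominator are sketch choices):

```python
import numpy as np

def afdp(X, fobj, low, up, rng=None):
    """Adaptive Fitness-Driven Perturbation (Algorithm 3 sketch, Gaussian/Cauchy operators)."""
    if rng is None:
        rng = np.random.default_rng()
    N, D = X.shape
    f = np.array([fobj(x) for x in X])
    fbest = f.min()                                   # best fitness in the population
    # population diversity: mean per-dimension standard deviation (Eq. 22)
    diver = np.mean(np.sqrt(np.mean((X - X.mean(axis=0)) ** 2, axis=0)))
    S = np.ones(2)                                    # operator scores [Gaussian, Cauchy]
    eps = 1e-12
    for i in range(N):
        delta = (f[i] - fbest) / (max(f[i], fbest) + eps)   # normalized fitness gap
        sigma = delta * diver                               # adaptive perturbation scale
        if rng.random() < S[0] / S.sum():                   # probabilistic operator choice
            o, vec = 0, rng.standard_normal(D)
        else:
            o, vec = 1, rng.standard_cauchy(D)
        xnew = np.clip(X[i] + sigma * vec, low, up)         # perturb and repair
        fnew = fobj(xnew)
        if fnew < f[i]:
            S[o] += f[i] - fnew                             # reward the operator
            X[i], f[i] = xnew, fnew                         # accept the candidate
        else:
            S[o] *= 0.99                                    # penalize the operator
    return X
```

Because candidates are accepted only on improvement, the best fitness in the returned population can never be worse than in the input population.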
The proposed algorithm, founded upon the hybridization of the ISSM-RNG-induced QDOPP and the Adaptive Fitness-Driven Perturbation scheme, takes its final form as explained in Algorithm 4.
Algorithm 4: Hybrid ISSM-RNG induced QDOPP–AFDP algorithm (AFD-QDOPP)
        Inputs: Population individuals—X; objective function—fobj ()
              Lower and upper bounds (low and up)
1       [N, D] = size (X)
2       Generate: N-sized D-dimensional random number sequences using ISSM-RNG
3       for i = 1 to N
4          for j = 1 to D
5            X_{i,j}^{OP} = low_j + up_j - X_{i,j}
6            C_j = 0.5 · (low_j + up_j)
7            if X_{i,j} < C_j
8                X_{i,j}^{QDOPP} = X_{i,j} + SRN^1_{i,j} · SRN^2_{i,j} · (X_{i,j}^{OP} - X_{i,j})
9            else
10               X_{i,j}^{ROP} = SRN^3_{i,j} · X_{i,j}^{OP}
11               X_{i,j}^{QDOPP} = X_{i,j}^{ROP} + SRN^4_{i,j} · SRN^5_{i,j} · (X_{i,j} - X_{i,j}^{ROP})
12             end
13          end
14       end
15       Employ: boundary check mechanism to repair infeasible solutions in XQDOPP
16       XUPDT = AFDP (XQDOPP, fobj)
17       Amend: the violated solutions in XUPDT
18       XALL2 = [XQDOPP; XUPDT]
19       XBESTN2 = sort (XALL2,1:N)
20       Output: XBESTN2
In Algorithm 4, ISSM-RNG-based random number generation is repeated five times (Line 2) to be integrated into QDOPP, whose fundamental search equations are performed between lines 3 and 14. The boundary check mechanism is activated in Line 15 to repair infeasible solutions in XQDOPP. The AFDP, as described in Algorithm 3, is then run on XQDOPP to enhance the overall solution quality within the population. Following that, a boundary control scheme is applied to the newly generated solutions XUPDT. The two N-sized populations, XQDOPP and XUPDT, are combined into XALL2 and sorted according to their fitness values, and the N fittest individuals among them are stored in XBESTN2. The MATLAB code for the QDOPP-AFDP algorithm is available in the Supplementary Materials (QDOPP_AFDP.m).
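The QDOPP core of Algorithm 4 (lines 3 to 14) can be sketched in Python as follows; `srn` stands for the ISSM-RNG numbers generated in Line 2, and clipping approximates the boundary check of Line 15 (the resulting population would then be passed to AFDP and merged per lines 16 to 19):

```python
import numpy as np

def qdopp_core(X, low, up, srn):
    """QDOPP update of Algorithm 4, lines 3-14 (sketch).
    srn: ISSM-RNG random numbers, shape (N, D, 5), one per SRN term."""
    N, D = X.shape
    XOP = low + up - X                    # opposite points
    C = 0.5 * (low + up)                  # interval centres
    XQ = np.empty_like(X)
    for i in range(N):
        for j in range(D):
            if X[i, j] < C[j]:
                XQ[i, j] = X[i, j] + srn[i, j, 0] * srn[i, j, 1] * (XOP[i, j] - X[i, j])
            else:
                xrop = srn[i, j, 2] * XOP[i, j]          # randomized opposite point
                XQ[i, j] = xrop + srn[i, j, 3] * srn[i, j, 4] * (X[i, j] - xrop)
    return np.clip(XQ, low, up)           # boundary repair (Line 15)
```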

4.4. Majority Voting Adaptive Switch Mechanism

This study proposes a novel parameter-free adaptive switch mechanism to determine which of the proposed procedures in Algorithm 2 or Algorithm 4 to use in the ongoing iterative process. An adaptive shift addresses the intrinsic challenges of deciding which algorithm to apply between the two methods during consecutive iterations, considering the inherent diversity, best fitness quality, and total fitness improvement in the evolving population. This intelligently devised model makes timely decisions about when to switch between two competing algorithms, employing a democratic voting system that evaluates the current situation of the population without making any assumptions or using preset constant parameters. The proposed mechanism evaluates three primary votes based on the changes in the state metrics. The model switches to the other algorithm if at least two out of three votes suggest the change. At each iteration after the initialization, the algorithm computes the current diversity of the population Diver, which can be calculated by Equation (22); the best fitness in the population fbest; and the improvement in best fitness value It, which is calculated by
I^t = max( 0, fbest^{t-1} - fbest^t )
In this respect, a positive value of I^t indicates progress (for function minimization), while zero indicates no further progress, leading to stagnation. The adaptive switch mechanism utilizes the Moving Weighted Average (MWA) of each metric, which is iteratively updated based on its current and previous values. For each metric, the MWA updates itself using the following equations.
MWA_Diver^t = α^t · Diver^t + (1 - α^t) · MWA_Diver^{t-1}
MWA_fbest^t = α^t · fbest^t + (1 - α^t) · MWA_fbest^{t-1}
MWA_I^t = α^t · I^t + (1 - α^t) · MWA_I^{t-1}
In the above equations, the smoothing factor α weights recent data more heavily than older data, balancing responsiveness and stability. This time-varying model parameter α is calculated by the following expression:
α^t = α_start - (α_start - α_end) · t / Maxiter
In Equation (32), α_start and α_end are the initial and final values of the iterative parameter α^t, assigned as 0.5 and 0.1, respectively; t is the current iteration, while Maxiter is the total number of iterations required to terminate the iterative process.
The model accounts for variations in trends by calculating the differences between consecutive MWA values obtained for different metrics.
ΔMWA_Diver^t = MWA_Diver^t - MWA_Diver^{t-1}
ΔMWA_fbest^t = MWA_fbest^t - MWA_fbest^{t-1}
ΔMWA_I^t = MWA_I^t - MWA_I^{t-1}
These sequential differences indicate whether each metric is currently improving or worsening. If ΔMWA_Diver^t < 0, diversity is decreasing and the population is converging. If ΔMWA_fbest^t > 0, fitness quality is worsening, suggesting that a switch between the two algorithms is necessary. If ΔMWA_I^t < 0, the improvement in fitness is slowing down. The voting mechanism is then put into practice, in which each metric casts a binary vote, switch (1) or stay (0), based on the variation in its current trend.
Vote_Diver^t = 1 if ΔMWA_Diver^t < 0, and 0 otherwise
Vote_fbest^t = 1 if ΔMWA_fbest^t > 0, and 0 otherwise
Vote_I^t = 1 if ΔMWA_I^t < 0, and 0 otherwise
The switching decision occurs if Vote_Diver^t + Vote_fbest^t + Vote_I^t ≥ 2; otherwise, the current algorithm survives into the next iteration.
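Putting the pieces together, the voting mechanism can be sketched in Python (the `state` dictionary holding the previous MWA values and the 1/2 algorithm labels are illustrative choices):

```python
def majority_vote_switch(current_alg, state, diver, fbest, improv, t, maxiter,
                         a_start=0.5, a_end=0.1):
    """Majority-voting adaptive switch sketch: MWA smoothing, trend votes, toggle.
    state maps 'diver'/'fbest'/'improv' to the previous MWA values."""
    alpha = a_start - (a_start - a_end) * t / maxiter   # Eq. (32)
    new_state, votes = {}, 0
    # (metric key, current value, True if an increasing trend is the "bad" sign)
    for key, val, bad_if_up in (("diver", diver, False),
                                ("fbest", fbest, True),
                                ("improv", improv, False)):
        mwa = alpha * val + (1.0 - alpha) * state[key]  # moving weighted average
        trend = mwa - state[key]                        # delta-MWA vs. previous
        if bad_if_up:
            votes += 1 if trend > 0 else 0              # fitness quality worsening
        else:
            votes += 1 if trend < 0 else 0              # diversity/improvement shrinking
        new_state[key] = mwa
    if votes >= 2:                                      # majority says switch
        current_alg = 3 - current_alg                   # toggle 1 <-> 2
    return current_alg, new_state
```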

4.5. Hybrid Chaos Induced Integrated Quasi-Dynamic Opposition-Based Learning (HCQDOPP)-Enhanced Mountain Gazelle Optimizer

This research study integrates various perturbation schemes with different characteristics to improve the overall optimization capability of the Mountain Gazelle Optimizer. Initially, the logistic chaotic map, integrated with the Latin Hypercube Sampling method and augmented by the Opposition-Based Learning strategy, is utilized in the population initialization phase to diversify the search space before the iterative process commences. Then, the uniformly distributed random numbers used in the search equations of MGO are replaced with chaotic numbers from the Bernoulli map to balance the exploration and exploitation phases more effectively and avoid premature convergence. Two improved Quasi-Dynamic Opposition-Based Learning variants are proposed to overcome the algorithmic disadvantages of the MGO algorithm: compromised search quality as problem dimensionality increases, and slow convergence when exploring complex spaces. The latter may occur due to an imbalance between the competing search mechanisms that guide the search process, which can delay the focus on promising regions in favor of exploration, or vice versa. In addition, although the reference paper of the MGO asserts that a proper balance between the exploration and exploitation phases is maintained by the communication between its intelligently devised search mechanisms, in practice the algorithm lacks solution refinement when it needs to probe the areas near the global optimum. It is also observed that MGO may underperform on problems with deceptive and highly constrained search spaces, where the gazelle-inspired search strategies are prone to collapse and are poorly matched to the structure of the search region.
These algorithmic drawbacks of MGO hinder the ongoing probing process from identifying fertile areas where the global optimum may reside. The primary motivation behind the proposed complementary and improved QDOPP search mechanisms is to address these deficiencies and extend the optimization success of MGO to a wide range of problems. However, a nontrivial question arises as to which of the two algorithms should be used at a given iteration. This is resolved by the majority voting adaptive switch mechanism, which considers the overall population diversity, the total fitness improvement, and the variation of the current best fitness value in the population to make a reasonable selection between the two improved QDOPP algorithms. The HCQDOPP-improved MGO algorithm repeats the above-defined manipulation scheme until the predefined termination criterion is met. Algorithm 5 explicitly presents the basic steps of the proposed method in pseudo-code form.
Figure 4 shows the algorithmic flowchart representation of the HCQDOPP improved MGO optimizer.
Algorithm 5: HCQDOPP enhanced MGO optimizer
Inputs: Objective function—fobj(); problem size—N; problem dimension—D
       Upper and lower bounds (up and low), maximum number of iterations (Maxiter)
       Initialize: evolvable population X using Algorithm 1
       Initialize: the model parameters of the running algorithms
       Initialize: the responsible metrics for the adaptive switch mechanism
       Calculate: the population diversity (Diver) using Equation (22)
       Decide: the best individual (XBEST) and its respective fitness value fbest
       Set: the current fitness improvement to zero (I = 0)
       Initialize: Moving Weighted Average parameter values defined for each metric
               MWADiver = Diver, MWAfbest = fbest, MWAI = I
       Initialize: chaotic numbers generated by the Chebyshev, Bernoulli, and logistic maps
       Assign: HOBL to the current switchable procedure and set currentAlg = 1
       Set: iteration counter to 1 (iter = 1)
       While (iter ≤ Maxiter) do
         Run: Bernoulli map improved MGO algorithm
           [XMGO, XBEST, fbest] = MGO(X)
         if rand(0,1) < 0.5
            Calculate: Population diversity using XMGO through Equation (22)
                Diveriter = Diversity (XMGO)
            Use: fbest to decide on fitness quality
                fbestiter = fbest
            Compute: Fitness improvement I through Equation (28)
                I = (iter > 1) ? max(0, fbestprev - fbestiter) : 0
           Apply: Equation (32) to calculate the numerical value of smoothing factor αiter
           Update: MWADiver, MWAfbest, MWAI using Equation (29) to Equation (31)
                MWA_Diver^iter = α^iter · Diver^iter + (1 - α^iter) · MWA_Diver^prev
                MWA_fbest^iter = α^iter · fbest^iter + (1 - α^iter) · MWA_fbest^prev
                MWA_I^iter = α^iter · I^iter + (1 - α^iter) · MWA_I^prev
           Compute: the difference trends by using Equation (33) to Equation (35)
                ΔMWA_Diver^iter = (iter > 1) ? MWA_Diver^iter - MWA_Diver^prev : 0
                ΔMWA_fbest^iter = (iter > 1) ? MWA_fbest^iter - MWA_fbest^prev : 0
                ΔMWA_I^iter = (iter > 1) ? MWA_I^iter - MWA_I^prev : 0
           Cast: votes for each decisive metric
                                                                                                                                      V o t e D i v e r i t e r = i f   Δ M W A D i v e r i t e r < 0   ? 1     : 0
                                                                                                                                      V o t e f b e s t i t e r = i f   Δ M W A f b e s t i t e r > 0   ? 1     : 0
                                                                                                                                      V o t e I i t e r = i f     Δ M W A I i t e r < 0   ? 1     : 0
             Calculate: the total votes
                                                                                                                                      V o t e t o t a l i t e r = V o t e D i v e r i t e r + V o t e f b e s t i t e r + V o t e I i t e r
             Activate: the switching mechanism if necessary conditions are met
                                                                                                                                      c u r r e n t A l g = i f   V o t e t o t a l i t e r   ≥ 2 ? 3–currentAlg
             Store: the previous MWA metric values and fbest for the upcoming iteration
                                                                                                                                      M W A D i v e r p r e v = M W A D i v e r i t e r ,   M W A f b e s t p r e v = M W A f b e s t i t e r ,
                                                                                                                                        M W A I p r e v = M W A I i t e r ,   fbestprev = fbestiter
             Apply: the current algorithm according to currentAlg value
                if currentAlg == 1 then
                 XNEW = HOBL(XMGO)
                else
                 XNEW = AFD-QDOPP(XMGO)
                end
           else
            Assign: XMGO to XNEW
        end
            Activate: boundary search mechanism
            Update: X population from the newly generated members of XNEW
            Determine: the best solution vector XBEST and its respective fitness value fbest
            Update: chaotic sequences generated by Bernoulli and Chebyshev maps
            Increment: iteration counter (iter = iter + 1)
        end
Output: XBEST, fbest
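The voting step of the adaptive switch mechanism in the pseudocode above can be condensed into a short sketch. This is an illustrative reconstruction, not the authors' implementation; function and variable names are invented for readability, and the smoothing factor is passed in directly rather than computed from Equation (32).

```python
def adaptive_switch(metrics, state, alpha):
    """One voting step of the adaptive switch mechanism (illustrative sketch).

    metrics: dict with current 'diver', 'fbest', 'improve' values
    state:   dict holding the previous moving weighted averages in state["mwa"]
             and the active operator id in state["current_alg"]
             (1 = HOBL, 2 = AFD-QDOPP)
    alpha:   smoothing factor for the moving weighted averages
    """
    votes = 0
    trends = {}
    for key in ("diver", "fbest", "improve"):
        # exponential moving weighted average, Equations (29)-(31)
        mwa = alpha * metrics[key] + (1.0 - alpha) * state["mwa"][key]
        trends[key] = mwa - state["mwa"][key]   # difference trend, Eqs. (33)-(35)
        state["mwa"][key] = mwa
    # A falling diversity trend, a rising best-fitness trend (minimization),
    # and a falling improvement trend each cast one vote for switching.
    votes += 1 if trends["diver"] < 0 else 0
    votes += 1 if trends["fbest"] > 0 else 0
    votes += 1 if trends["improve"] < 0 else 0
    if votes >= 2:                      # majority vote: toggle the operator
        state["current_alg"] = 3 - state["current_alg"]
    return state["current_alg"]
```

The `3 - currentAlg` toggle is the same trick as in the pseudocode: it flips the operator id between 1 and 2 without a conditional on its value.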

4.6. Time Complexity of the HCQDOPP Algorithm

This section evaluates the total time complexity of the proposed enhanced HCQDOPP algorithm. Although the integration of chaotic numbers increases the total runtime of the algorithm, it does not introduce extra complexity to the algorithmic procedure. Significant complexity differences between the standard MGO and HCQDOPP algorithms stem from the respective position update mechanisms of the two variants, which are utilized in the improvement process of the standard QDOPP algorithm. In the first step, the complexity of population initialization is O(3·N·D), where N is the population size and D is the dimensionality of the problem. Incorporating Latin Hypercube Sampling-based and opposition learning-based population generation adds an extra load to computer processors. Yet, its effectiveness in producing diverse members at the initial phase eases the convergence of the iterative process. The manipulation of candidate solutions through the MGO algorithm has the complexity of O(N·D). The proposed adaptive switch mechanism guiding the improved QDOPP algorithm variant has computational complexity O(2·N·D). If low and negligible complexities are eliminated, and only significant complexities are considered for the entire range of the iterations (T), then the overall complexity of the algorithm becomes O(3·N·D) + O(3·N·D·T). However, it is worth mentioning that the proposed search scheme is applied to the base MGO with a random probability of 0.5, which reduces the total complexity to its final form of O(3·N·D) + O(2·N·D·T).

5. Simulation Results and Discussion

This section compares the proposed HCQDOPP-MGO algorithm with different Opposition-Based Learning-enhanced versions of MGO and with newly emerging state-of-the-art metaheuristic algorithms. Detailed information about the configurational settings of tunable algorithm parameters is provided to enable a fair comparison between the metaheuristic optimizers. Convergence analysis and box plots of the statistical results are comparatively investigated to verify the superiority of HCQDOPP-MGO. For brevity, the proposed method is hereafter referred to by its abbreviated form, HCQDOPP.

5.1. Benchmark Problems

This research study considers 28 test problems used in the CEC 2013 competition and 22 benchmark functions employed in the CEC 2014 competition. Benchmark problems used in CEC competitions are favorable alternatives to standard test functions, as they offer a greater diversity of problem types, including separable, non-separable, rotated, shifted, and noisy functions. In addition, they are abstract formulation models that incorporate the complexities of real-world problems, making them a suitable testbed for assessing the capabilities of the developed algorithms. The CEC 2013 problems comprise five unimodal, fifteen multimodal, and eight composite test problems, each with a defined search space of [−100, 100]D. Unimodal test problems have only one critical optimum point within the defined search landscape, which is both a global and a local optimum. In contrast, multimodal problems have multiple local minima or critical points within the search region. The considered CEC 2014 test problems consist of 22 functions with distinctive characteristics: the first three are rotated versions of standard unimodal test problems; the fourth to sixteenth are shifted and rotated versions of standard test problems; and the remaining problems, from the seventeenth to the twenty-second, are hybrid functions composed of two or more standard test problems. Hybrid functions are practical tools for benchmarking the balance maintained between exploration and exploitation mechanisms, since each is constructed by combining multimodal and unimodal test problems. Variables are split into subgroups, and each decision variable set belonging to the respective subset is assigned to a different base function. This is an exemplary case that models the complexities of real-world problems, where different subsets have the potential to influence the objective in various ways.
Like the previous case, the defined search domain is restricted to [−100, 100]D for each problem in the CEC2014 test suite. Descriptions of the employed CEC 2013 and CEC 2014 benchmark functions are, respectively, given in Table 2 and Table 3.

5.2. Parameter Settings of the Comparative Algorithms

In recent years, numerous metaheuristic algorithms have emerged, each with varying search capacities due to their distinct governing search equations, which are enhanced by various complementary mathematical models incorporating unpredictable randomness. To evaluate the proposed improved MGO algorithm comparatively, this study considers nine relatively recently emerged metaheuristic optimizers, including the Manta Ray Foraging Optimization Algorithm (MANTA) [52], Runge–Kutta Optimizer (RUNGE) [53], African Vultures Optimization Algorithm (AVOA) [54], Gannet Optimization Algorithm (GANNET) [55], Electric-Eel Optimizer (EEL) [56], Equilibrium Optimizer (EQUIL) [57], Gradient-Based Optimizer (GRAD) [58], Reptile Search Algorithm (REPTILE) [59], and Coati Optimization Algorithm (COATI) [60]. The comparative study also extends to different variants of Opposition-Based Learning embedded into the standard MGO algorithm, which include standard Opposition-Based Learning (OBL) [25], Quasi Opposition-Based Learning (QOBL) [61], Quasi-Dynamic Opposition-Based Learning (QDOPP), Super Opposition-Based Learning (SOBL) [62], Centroid Opposition-Based Learning (COBL) [63], and Multi-Individual Opposition-Based Learning (MIOBL) [64] search procedures. Table 4 provides the default parameter settings of the state-of-the-art algorithms considered for performance evaluations. Note that the Opposition-Based Learning variants lack tunable algorithm parameters, which would otherwise serve as prolific tools for enhancing the diversity of the base algorithm's population.

5.3. Comparison of the Statistical Results

This section evaluates the convergence analysis of the proposed HCQDOPP against cutting-edge metaheuristic algorithms from the literature, using benchmark problems from the CEC 2013 and CEC 2014 competitions. When considering the CEC2013 and CEC2014 benchmark problems, 10,000 function evaluations are performed for each comparative algorithm, with 30 independent runs, due to the relatively higher complexities and nonlinearities of these test instances. The scalability of the compared algorithms will also be analyzed in the upcoming section, which serves as another reasonable indicator of the superiority of a metaheuristic optimizer. Figure 5 shows the box plot representation of the statistical results from the 30D benchmark problems utilized in the CEC 2013 competitions. In general, it is observed that different OBL-improved variant-enhanced MGO algorithms provide significantly better predictions compared to those obtained from state-of-the-art optimizers. The success of the MGO in obtaining accurate estimations is a direct outcome of its compelling exploration through navigation over multimodal landscapes and the precise refinement of promising regions detected during the diversification phase, which are significantly superior to those of the competing optimizers used in the comparison in Figure 5. HCQDOPP obtains the most accurate estimations in 20 out of 28 30D test problems and becomes the dominant algorithm among the methods compared. Nevertheless, the proposed method fails to yield accurate predictions for the following test instances: CEC2013-04 (Rotated Discus function), CEC2013-05 (Different Powers function), CEC2013-06 (Rotated Rosenbrock function), CEC2013-10 (Rotated Griewank function), CEC2013-16 (Rotated Katsuura function), and CEC2013-23 (Composition Function 3). Total collapse during the convergence process is observed for CEC2013-03 (Rotated Bent Cigar function) for all compared optimizers. 
This test problem features an elongated, stretched landscape that is extremely difficult, particularly in higher dimensions, where even slight deviations cause significant differences in objective function values. Additionally, the rotated landscape forces the algorithm to search a general, non-axis-aligned space, which slows down the overall convergence speed. Nearly all algorithms fail to capture the correct trends as they converge toward the optimal solution. Similar convergence tendencies are observed for CEC2013-02 (Rotated High-Conditioned Elliptic Function) and CEC2013-04 (Rotated Discus Function). These two unimodal functions are rendered non-separable by rotation, making it nearly impossible to reach the optimum solution by treating each variable independently and forcing the algorithm to probe the coupled search space. Furthermore, narrow valleys and sharp ridges generated by highly curved spaces require a delicate balance between global exploration and local refinement to reach fertile regions where the most optimal solutions reside. All algorithms, including the proposed method and its variants, struggle to overcome the challenges of these functions. Despite its failures on these unimodal test problems, HCQDOPP provides relatively high precision in predicting the composite functions from CEC2013-21 to CEC2013-28, obtained after only 10,000 function evaluations per problem, which is very low compared to various applications in the literature. Figure 6 compares the statistical results for the contender algorithms on the 30D CEC2014 test problems in a box-plot representation. HCQDOPP and the remaining algorithms find the known global optimal solutions of CEC2014-01, CEC2014-02, CEC2014-03, CEC2014-07, and CEC2014-08 at least once after 10,000 function evaluations, which is a notable achievement for any metaheuristic on such complex unimodal and multimodal test problems.
Considering the mean results, HCQDOPP performs best in 13 out of 22 test cases and outperforms the remaining algorithms in terms of solution robustness. Although HCQDOPP shows poor performance relative to the compared algorithms in terms of mean results for the CEC2014-05, CEC2014-08, and CEC2014-16 test problems, the results obtained are close to the known global optima. Large deviations between the best and worst predictions are observed for the hybrid functions CEC2014-17, CEC2014-18, and CEC2014-21 across the compared methods. It appears that the algorithms struggle to adapt to these cases, particularly in accommodating diverse landscapes within a single function while maintaining a proper shift mechanism between the diversification and intensification phases. When the best results are considered, HCQDOPP also remains the most effective method on the 30D CEC2014 test problems.

5.4. Scalability Analysis and Statistical Test Results

This section evaluates the performance variations of the proposed method and the remaining optimizers as the problem dimensionality of the employed test functions increases. The ranking analysis is performed to detect significant differences between multiple datasets, namely the statistical results of the contender algorithms obtained for the 30D, 50D, and 100D variants of the CEC2013 and CEC2014 benchmark problems, as well as standard test problems in 1000D, 2000D, and 3000D versions. Figure 7 shows the corresponding rankings of the compared algorithms for varying problem dimensionalities of the CEC 2013 test problems concerning both mean and best results. HCQDOPP ranks first among the competitive algorithms in both scenarios and proves superior to the remaining methods on the CEC 2013 problems. The second-best algorithm, considering the best predictions for the 30D, 50D, and 100D problems, is MIOBL. However, when mean results are considered, COBL ranks second, while MIOBL ranks third. The GRAD, REPTILE, and COATI optimizers are the worst predictors for the CEC 2013 problems. Figure 8 visualizes the comparison of the ranking results of the proposed method with those of the other contestant algorithms for the CEC 2014 benchmark problems. HCQDOPP continues to dominate in terms of mean and best results for the 30D, 50D, and 100D benchmark functions, while the second-best algorithm is COBL. The REPTILE and COATI algorithms again yield the most deviated predictions among the employed algorithms.
The Wilcoxon signed-rank test [65], an effective and practical statistical tool for assessing the significance of differences between two datasets, is used to verify the applicability of the HCQDOPP algorithm. Table 5 presents the statistical results of the Wilcoxon signed-rank test at significance levels of 0.05 and 0.1, comparing HCQDOPP with other algorithms across various test problems. The respective signs “+”, “=”, and “−” correspondingly indicate that the proposed HCQDOPP is better than, comparable to, and worse than its designated competitor. The p-values obtained for HCQDOPP across various test instances are below 0.05, confirming the statistical significance of HCQDOPP across all test functions employed in the performance comparison. In conclusion, HCQDOPP is the most competitive algorithm for various benchmark problems, significantly surpassing the contestant algorithms in terms of cumulative ranking performance and Wilcoxon test results.
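In practice, such pairwise comparisons are usually run with a library routine (for instance `scipy.stats.wilcoxon` on the paired per-run results of two algorithms). The statistic behind the test can be sketched in a few lines of plain Python; this simplified version drops zero differences and assumes no tied absolute differences, so it is a sketch of the rank computation rather than a full replacement for a library implementation.

```python
def wilcoxon_w(a, b):
    """Smaller signed-rank sum W for paired samples a, b (no-ties sketch).

    a, b: paired observations, e.g. best fitness of two algorithms over
    the same independent runs. Small W for a consistent sign pattern
    indicates a significant difference between the two samples.
    """
    # signed differences; pairs with zero difference are discarded
    diffs = [x - y for x, y in zip(a, b) if x != y]
    # rank differences by absolute magnitude (rank 1 = smallest |diff|)
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    w_plus = sum(r + 1 for r, i in enumerate(order) if diffs[i] > 0)
    w_minus = sum(r + 1 for r, i in enumerate(order) if diffs[i] < 0)
    return min(w_plus, w_minus)
```

When one algorithm dominates every run, all differences share a sign and W is 0, the strongest possible evidence at a given sample size.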

5.5. Performance Assessment on CEC2006 Constrained Test Problems

In this section, the optimization capability of the proposed method will be verified on twelve test functions retrieved from the suite of CEC 2006-constrained problems. These are widely recognized benchmark suites, frequently utilized to evaluate the optimization performance of algorithms on constrained test cases. These problems can be considered practical test beds to assess the algorithm’s ability to handle various functional challenges, such as coping with nonlinear equality and inequality constraints, balancing feasibility and optimality, and overcoming the inherent multimodality of the test problems to avoid premature convergence. Table 6 presents the functional characteristics of the considered test functions for evaluating the performance of the compared algorithms. In Table 6, D is the problem dimensionality, LI is the number of linear inequality constraints, NI is the number of nonlinear inequality constraints, LE is the number of linear equality constraints, NE is the number of nonlinear equality constraints, and fopt(x) stands for the best-known optimal solution of the defined test problem.
Table 7 reports the statistical results of different state-of-the-art literature optimizers, including MANTA, the Marine Predators Algorithm (MARINE) [66], MGO, AVOA, Dandelion Optimizer (DANDEL) [67], EQUIL, Honey Badger Optimizer (HBADGER) [68], Kepler Optimization (KEPLER) [69], Mantis Search (MANTIS) [70], RUNGE, Slime Mould Optimizer (SLIME) [71], and Walrus Optimizer (WALRUS) [72], obtained after 10,000 function evaluations over 30 independent algorithm runs.
HCQDOPP achieves the best predictions for G04, G09, G10, G13, G14, and G19 problems, while obtaining the second-best estimations for G01, G03, G07, and G18 test problems in terms of mean results, and it becomes the superior algorithm with a respective average ranking point of 1.666. The MANTIS algorithm is the second-best method, with a corresponding ranking value of 2.833. WALRUS is the worst predictor among them, with a ranking point of 10.971. It is observed that the MGO, KEPLER, and WALRUS optimizers fail to find any feasible solution in 30 consecutive runs. The G03 test problem challenges the applied algorithm due to its complex mathematical structure. Although this problem has only one active constraint, ensuring precise equality constraint satisfaction can be troublesome, as many optimization algorithms struggle to comply with this requirement. Minor deviations in decision variables may render the optimal solution infeasible. In addition, combining a nonlinear problem objective and nonlinear equality constraints can result in a highly non-convex feasible optimal region, making it challenging to explore. Algorithms with unbalanced feasibility and optimality struggle to find optimal solutions in feasible areas and may collapse at certain stages of the iteration.

6. A Complex Real-World Optimization Case: A Shell and Tube Heat Exchanger Design Operated with Nanofluids

This section is primarily associated with the thermo-economic design of a shell-and-tube heat exchanger operating with different nanofluids on the tube side. Designing any heat exchanger is challenging and time-consuming, as the entire thermal design process requires complying with numerous restrictive thermal and structural constraints that must meet the expected requirements of end-users and working practitioners. The defined optimization problem involves highly nonlinear design objectives that must satisfy several constraints to achieve optimal construction. In this case, the primary motivation behind using a nanofluid-based refrigerant rather than a standard in-tube fluid is to investigate the valuable effects of the nanofluid's influence on the overall heat exchange rates between the shell- and tube-side streams. However, one critical issue that should be thoroughly scrutinized is the detrimental effect of the suspended nanoparticles in the nanofluid. A designer should make a plausible trade-off between the improved heat transfer rates and the increased pressure drop resulting from the inclusion of nanoparticles. To evaluate the concurrent influences of nanoparticles on heat transfer and pressure drop rates, a comparative study is conducted among different heat exchanger configurations to determine which configuration yields the best thermo-economic performance under the specified heat load. The following section describes the heat transfer model employed in this study, which also considers the contribution of nanoparticle effects, providing end-users and practitioners with insights into the nanoparticle-based configuration that yields the most efficient design in total energy cost. Figure 9 illustrates a schematic of a shell-and-tube heat exchanger, labeling the essential components of the main construction.

6.1. Representative Heat Transfer Model

This section presents the primary heat exchanger equations used in the simulation. The total heat transfer rate is expressed using the ε-NTU approach as follows [73]
$$Q = \varepsilon \cdot C_{min} \cdot \left(T_{h,i} - T_{c,i}\right)$$
where Cmin indicates the minimum heat capacity value; Th,i and Tc,i are the hot and cold side inlet temperatures, respectively; and ε represents the effectiveness, which is the ratio between the actual heat transfer and the maximum heat transfer between two running streams [74] and can be formulated by
$$\varepsilon = \frac{Q}{Q_{max}}$$
where Q is the current heat transfer rate and is only available if the inlet and outlet conditions of the running streams are known in advance or an iterative procedure is applied to obtain the actual heat transfer rate.
$$Q = C_{hot}\left(T_{hot,i} - T_{hot,o}\right) = C_{cold}\left(T_{cold,o} - T_{cold,i}\right)$$
Alternatively, by employing ε–NTU, the actual heat exchange value can also be redefined by the following:
$$Q = \varepsilon \cdot C_{min} \cdot \left(T_{hot,i} - T_{cold,i}\right)$$
For any heat exchanger, effectiveness is the direct function of the Number of Transfer Units (NTUs) and the ratio between the minimum and maximum heat capacity rates.
$$\varepsilon = f\left(NTU, \frac{C_{min}}{C_{max}}\right)$$
where the heat capacity ratio is defined as
$$C_r = \frac{C_{min}}{C_{max}}$$
The number of transfer units (NTUs) can be calculated as follows.
$$NTU = \frac{U \cdot A_o}{C_{min}}$$
The total heat transfer area A o is calculated by
$$A_o = N_t\, \pi\, d_o\, L_t$$
where Lt represents tube length, and Nt symbolizes the number of tubes, which can be obtained as follows [75].
$$N_t = C \left(\frac{D_s}{d_o}\right)^{n_1}$$
Here, the coefficients C and n1 are model constants based on tube configuration and the number of tube passes [76].
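The tube-count correlation and the resulting area and NTU computations can be sketched as follows. The default values of `C` and `n1` are illustrative placeholders only; the actual constants depend on the tube layout and the number of tube passes and should be taken from Kakac et al. [76].

```python
import math

def tube_count_and_area(Ds, do, Lt, C=0.25, n1=2.0):
    """Tube count and outside heat transfer area (sketch).

    Ds: shell inside diameter [m]; do: tube outside diameter [m];
    Lt: tube length [m]. C and n1 are layout-dependent model constants;
    the defaults here are illustrative placeholders, not values from [76].
    """
    Nt = C * (Ds / do) ** n1            # tube-count correlation
    Ao = Nt * math.pi * do * Lt         # total outside surface area
    return Nt, Ao

def ntu(U, Ao, Cmin):
    """Number of Transfer Units, NTU = U * Ao / Cmin."""
    return U * Ao / Cmin
```

With the area in hand, NTU follows directly once the overall heat transfer coefficient U and the minimum heat capacity rate Cmin are known.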
The overall heat transfer coefficient is evaluated as follows [74].
$$U_o = \left[\frac{1}{h_s} + R_{sf} + \frac{d_o \ln\left(d_o/d_i\right)}{2 k_w} + R_{tf}\,\frac{d_o}{d_i} + \frac{1}{h_t}\,\frac{d_o}{d_i}\right]^{-1}$$
$$d_i = d_o - 2\delta$$
Rsf and Rtf are the shell- and tube-side fouling factors; δ is the tube thickness; and hs and ht indicate the shell- and tube-side heat transfer coefficients. To evaluate the Uo value in Equation (48), hs and ht should be calculated beforehand. Since a nanofluid is used as the working fluid on the tube side, the convective heat transfer coefficient of the nanofluid must be evaluated. The Dittus–Boelter equation [76] is used to calculate the tube-side heat transfer coefficient ht:
$$Nu_{nf} = \frac{h_t\, d_i}{k} = 0.023\,Re^{0.8}\,Pr^{0.4}$$
where the dimensionless Reynolds Re and Prandtl Pr numbers can be obtained as follows:
$$Pr = \frac{\mu\, C_p}{k}$$
$$Re = \frac{\dot{m}\, d_i}{A_o\, \mu}$$
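Taken together, the Dittus–Boelter evaluation of ht reduces to a few lines. This sketch assumes the Reynolds number is formed with the tube-side flow cross-sectional area, as in the relation above, and uses illustrative argument names.

```python
def tube_side_h(m_dot, d_i, A_c, mu, cp, k):
    """Tube-side convective coefficient via Dittus-Boelter, heating form (sketch).

    m_dot: tube-side mass flow rate [kg/s]; d_i: tube inner diameter [m];
    A_c: flow cross-sectional area [m^2]; mu, cp, k: nanofluid viscosity,
    specific heat, and thermal conductivity at the mean bulk temperature.
    """
    Re = m_dot * d_i / (A_c * mu)        # Reynolds number, Equation (52)
    Pr = mu * cp / k                     # Prandtl number, Equation (51)
    Nu = 0.023 * Re ** 0.8 * Pr ** 0.4   # Dittus-Boelter correlation
    return Nu * k / d_i                  # h_t from Nu = h_t * d_i / k
```

Because Nu scales with Re^0.8, doubling the mass flow rate raises ht by a factor of 2^0.8, which is the main lever the optimizer trades against pumping power.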
The thermophysical properties of the nanofluids are evaluated at the mean bulk temperature of the nanofluids. The viscosity of water is taken as 0.000759 kg/ms. The thermal conductivity, specific heat, viscosity, and density of the nanofluid, which enter Pr in Equation (51) and Re in Equation (52), are calculated using correlations obtained from the literature.
The thermal conductivity of nanofluids can be computed by using the following correlation [77]
$$k_{nf} = k_f\,\frac{k_p + 2k_f + 2\left(k_p - k_f\right)\varphi}{k_p + 2k_f - \left(k_p - k_f\right)\varphi}$$
where φ represents the volume concentration, and the nanofluid-specific heat is determined as follows [78]:
$$c_{p,nf} = \frac{\rho_f\, c_{p,f}\left(1 - \varphi\right) + \rho_p\, c_{p,p}\,\varphi}{\rho_{nf}}$$
Here, ρnf stands for the density of nanofluids, which can be evaluated using the following correlation [78]:
$$\rho_{nf} = \varphi\,\rho_p + \left(1 - \varphi\right)\rho_f$$
The viscosity of nanofluid can be found as follows [78]:
$$\mu_{nf} = \mu_f\left(1 + 2.5\,\varphi\right)$$
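The four mixture correlations above can be collected into a single property routine. The sketch below follows the correlations as stated; the dictionary-based interface and any sample property values are illustrative.

```python
def nanofluid_properties(phi, fluid, particle):
    """Effective nanofluid properties from the correlations above (sketch).

    phi: particle volume concentration (0..1); fluid: dict with 'rho', 'cp',
    'k', 'mu' for the base fluid; particle: dict with 'rho', 'cp', 'k'.
    """
    # mixture density (volume-weighted)
    rho_nf = phi * particle["rho"] + (1 - phi) * fluid["rho"]
    # mixture specific heat (mass-weighted via densities)
    cp_nf = (fluid["rho"] * fluid["cp"] * (1 - phi)
             + particle["rho"] * particle["cp"] * phi) / rho_nf
    # Maxwell-type effective thermal conductivity
    kf, kp = fluid["k"], particle["k"]
    k_nf = kf * (kp + 2 * kf + 2 * (kp - kf) * phi) \
              / (kp + 2 * kf - (kp - kf) * phi)
    # Einstein-type effective viscosity
    mu_nf = fluid["mu"] * (1 + 2.5 * phi)
    return {"rho": rho_nf, "cp": cp_nf, "k": k_nf, "mu": mu_nf}
```

A quick sanity check is that at φ = 0 every correlation collapses to the base-fluid property, while increasing φ raises density, conductivity, and viscosity simultaneously, which is exactly the heat-transfer versus pressure-drop trade-off discussed in this study.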
This work considers six nanofluids—Al2O3, CuO, TiO2, Cu, SiO2, and Boehmite—as working fluids on the tube side. The thermophysical properties of these considered nanofluids are given in Table 8. Then, the effectiveness of the counter-current flow heat exchanger, which is also considered for the main configuration for two running streams in this study, is computed by:
$$\varepsilon = \frac{1 - \exp\left[-NTU\left(1 - C_r\right)\right]}{1 - C_r \exp\left[-NTU\left(1 - C_r\right)\right]}$$
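The counter-current effectiveness relation above is straightforward to evaluate, with one numerical caveat: at Cr = 1 the expression is 0/0 and the well-known balanced limit ε = NTU/(1 + NTU) must be used instead. A minimal sketch:

```python
import math

def effectiveness_counterflow(ntu, cr):
    """Counter-current effectiveness from the epsilon-NTU relation above.

    ntu: number of transfer units; cr: heat capacity ratio Cmin/Cmax.
    Handles the balanced limit cr -> 1 separately to avoid 0/0.
    """
    if abs(1.0 - cr) < 1e-12:
        return ntu / (1.0 + ntu)          # balanced-stream limit
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)
```

For cr = 0 (one stream with effectively infinite capacity) the relation reduces to ε = 1 − exp(−NTU), which is a convenient check.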
Water is utilized as a working fluid. Shell-side investigation is not straightforward because it involves complex and intricate flow characteristics. The Bell–Delaware approach, as described by Kakac et al. [76], is utilized to evaluate the convective heat transfer coefficient. This method considers five different leakages and bypass streams when assessing the convective heat transfer coefficient and pressure drop.
The essential equation for computing the convective heat transfer coefficient is as follows.
$$h_s = h_{id} \cdot Y_c \cdot Y_l \cdot Y_b \cdot Y_s \cdot Y_r$$
Here, Yc corrects the irregularities related to baffle configuration; Yl represents the baffle leakage effects; Yb is the parameter responsible for the bypass effect; and Ys accounts for the change in baffle spacing at the inlet and outlet sections compared to the central baffle spacing. Yr is the factor that can be applied when Res is less than 100. Table 9 illustrates these representative correction factor formulations and the parameters involved in these equations. hid stands for the ideal heat transfer coefficient, considering the assumption that the fluid is a pure crossflow along the tube bank; hid can be calculated as follows [76].
$$h_{id} = j_1\, c_{ps}\, \frac{\dot{m}_s}{A_{o,cr}} \left(\frac{k_s}{c_{ps}\,\mu_s}\right)^{2/3} \left(\frac{\mu_s}{\mu_{s,w}}\right)^{0.14}$$
where Ao,cr is the crossflow area. The parameter j1 represents the Colburn j-factor for an ideal tube bank, which can be obtained as follows:
$$j_1 = a_1 \left(\frac{1.33}{P_t/d_o}\right)^{a_3/\left(1 + 0.14\,Re_s^{a_4}\right)} Re_s^{a_2}$$
Here, a1, a2, a3, and a4 symbolize the model parameters, whose exact values can be obtained from Kakac et al. [76].
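A sketch of the j-factor correlation and the ideal crossflow coefficient, written to match the two relations above; no specific values of the layout constants a1 to a4 are assumed, and argument names are illustrative.

```python
def colburn_j(Re_s, pt_over_do, a1, a2, a3, a4):
    """Colburn j-factor for an ideal tube bank (Bell-Delaware form, sketch).

    Re_s: shell-side Reynolds number; pt_over_do: pitch-to-diameter ratio.
    a1..a4 are layout- and Reynolds-range-dependent constants tabulated in
    Kakac et al.; no specific values are assumed here.
    """
    a = a3 / (1.0 + 0.14 * Re_s ** a4)
    return a1 * (1.33 / pt_over_do) ** a * Re_s ** a2

def ideal_h(j1, cps, m_dot_s, A_ocr, ks, mu_s, mu_sw):
    """Ideal pure-crossflow coefficient h_id from the equation above."""
    return (j1 * cps * (m_dot_s / A_ocr)
            * (ks / (cps * mu_s)) ** (2.0 / 3.0)   # Prandtl-group term
            * (mu_s / mu_sw) ** 0.14)              # wall-viscosity correction
```

The ideal coefficient is then degraded by the five correction factors Yc, Yl, Yb, Ys, and Yr of Table 9 to give the actual shell-side hs.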
In the STHE configuration, the shell side pressure drop consists of three different regions: the pressure drop in the entrance crossflow zone, Δpcr, the pressure drop in the window section, Δpw, and the pressure drop in the inlet and outlet part, Δpe. The total pressure drop over the shell side is estimated using the Bell–Delaware approach, which combines these three factors in a single equation below.
$$\Delta P_s = \Delta P_{cr} + \Delta P_w + \Delta P_e = \left(N_b - 1\right)\Delta P_{b,id}\,\zeta_b \zeta_l + N_b\,\Delta P_{w,id}\,\zeta_l + 2\,\Delta P_{b,id}\left(1 + \frac{N_{r,cw}}{N_{r,cc}}\right)\zeta_b \zeta_s$$
Here, Nb denotes the number of baffles; Nr,cc represents the number of tube rows taking place in one crossflow part, Nr,cw is the number of tube rows occurring in each window. Parameters ζ l ,   ζ b , and ζ s are correction factors. ΔPb,id symbolizes the pressure drop in an equivalent tube bank in the window part and can be found as follows.
$$\Delta P_{b,id} = \frac{4\, f_{id}\, G_s^2\, N_{r,cc}}{2\,\rho_s}\left(\frac{\mu_{sw}}{\mu_s}\right)^{0.25}$$
where fid represents the friction factor and is calculated based on the following equations.
$$f_{id} = b_1 \left(\frac{1.33}{P_t/d_o}\right)^{b_3/\left(1 + 0.14\,Re_s^{b_4}\right)} Re_s^{b_2}$$
In Equation (63), the numerical values of the model constants b1, b2, b3, and b4 can be retrieved from Kakac et al. [76]. In Equation (64), ΔPw,id represents the pressure drop in one window section and is computed as follows.
$$\Delta P_{w,id} = \frac{\left(2 + 0.6\,N_{r,cw}\right)\dot{m}_s^2}{2\,\rho_s\, A_s\, A_{o,w}}$$
The tube-side pressure drop can be calculated by the equation below [74]
$$\Delta p_t = \frac{\dot{m}_t^2}{2\,\rho_{nf}\,A_{o,t}^2}\left[\frac{4 f L}{d_i} + \left(1 - \sigma^2 + K_c\right) - \left(1 - \sigma^2 - K_e\right)\right] n_p$$
where Ke and Kc are the sudden expansion and sudden contraction coefficients, σ is a function of the contraction ratio whose calculation method is described in Shah and Sekulic [74], np is the number of tube passes, and f is the Fanning friction factor, computed by the formulation
$$f = 0.046\,Re_t^{-0.2}$$
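Combining the last two relations, the tube-side pressure drop can be sketched as below; argument names are illustrative, and σ, Kc, and Ke are assumed to have been obtained beforehand from Shah and Sekulic [74].

```python
def tube_side_dp(m_dot_t, rho_nf, A_ot, d_i, L, Re_t, sigma, Kc, Ke, n_p):
    """Tube-side pressure drop per the relation above (sketch).

    m_dot_t: tube-side mass flow rate [kg/s]; rho_nf: nanofluid density;
    A_ot: tube-side flow area [m^2]; d_i: inner diameter [m]; L: tube
    length [m]; sigma, Kc, Ke: contraction-ratio function and the sudden
    contraction/expansion coefficients; n_p: number of tube passes.
    """
    f = 0.046 * Re_t ** -0.2                 # Fanning friction factor
    G = m_dot_t / A_ot                       # mass flux
    core = 4.0 * f * L / d_i                 # frictional core term
    entrance = 1.0 - sigma ** 2 + Kc         # inlet contraction loss
    exit_ = -(1.0 - sigma ** 2 - Ke)         # outlet expansion recovery
    return G ** 2 / (2.0 * rho_nf) * (core + entrance + exit_) * n_p
```

Note that the σ² terms cancel between the entrance and exit contributions, leaving Kc + Ke as the net minor-loss coefficient per pass.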
After defining the respective mathematical models for the heat transfer and pressure drop rates resulting from the circulation of the running streams across the heat exchanger, the primary design objective should be clearly defined to facilitate the optimization process. Minimizing the total heat exchanger cost is the primary design objective that the proposed optimization method, HCQDOPP, aims to achieve in this study. The defined objective function Ctot is the total cost of the heat exchanger, which comprises the capital investment cost Cc,inv and the operational cost Cop.
$$C_{tot} = C_{c,inv} + C_{op}$$
where the capital investment cost Cc,inv is computed by employing the Hall equation, which is defined by [79]
$$C_{c,inv} = 8000 + 259.2\,A_o^{0.93}$$
The following equation can calculate the total discounted cost expenditures related to the overall pumping power.
$$C_{op} = \sum_{j=1}^{n_y} \frac{C_o}{\left(1 + i\right)^j}$$
$$C_o = P_{pump} \cdot C_{EC} \cdot H$$
$$P_{pump} = \frac{1}{\eta}\left(\frac{\dot{m}_t}{\rho_t}\,\Delta P_t + \frac{\dot{m}_s}{\rho_s}\,\Delta P_s\right)$$
where Co stands for the annual operating cost of the heat exchanger to be optimized; i is the annual interest rate; ny refers to the number of active years in which the heat exchanger is operated; Ppump represents the consumed pumping power; CEC is the total energy cost in €/kWh; and H is the annual operational hours of the operating heat exchanger.
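The objective function can be sketched in a few lines by combining the Hall capital-cost equation with the discounted operating-cost sum. Units are assumed consistent (Ppump in kW to match CEC in €/kWh); argument names are illustrative.

```python
def total_cost(Ao, P_pump, cec, hours, interest, years):
    """Total cost: Hall capital cost plus discounted operating cost (sketch).

    Ao: heat transfer area [m^2]; P_pump: pumping power [kW];
    cec: energy cost [EUR/kWh]; hours: annual operating hours H;
    interest: annual interest rate i; years: operating years n_y.
    """
    c_inv = 8000.0 + 259.2 * Ao ** 0.93          # Hall equation
    c_annual = P_pump * cec * hours              # annual operating cost C_o
    c_op = sum(c_annual / (1.0 + interest) ** j  # discounted over n_y years
               for j in range(1, years + 1))
    return c_inv + c_op
```

This structure makes the trade-off explicit: a larger area raises the capital term, while a smaller area forces higher flow velocities, pressure drops, and thus pumping power in the discounted operating term, which is exactly what HCQDOPP balances.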

6.2. Optimization Results and Related Discussion

The HCQDOPP algorithm has been utilized to minimize the total cost of shell-and-tube heat exchangers. In this context, the Bell–Delaware method is adopted to solve the design problem, as this approach is considered one of the most effective design strategies available in the literature for accounting for the bypass and leakage streams caused by structural clearances.
The examined shell-and-tube heat exchanger problem has a heat load of 391.3 kW, as determined by the imposed heat transfer rate. Oil serves as the heat transfer fluid on the shell side, while the nanofluid circulates through the smooth tubes. The main aim in examining this complex design process is to investigate how different in-tube nanofluids influence the overall thermo-economic performance of a shell-and-tube heat exchanger and to gain insights into the structural and investment cost increases resulting from the inclusion of suspended nanoparticles in the in-tube water stream. A comprehensive comparative study discusses the thermo-economic efficiency of six different configurations, with Al2O3, CuO, TiO2, Cu, SiO2, and Boehmite nanofluids flowing on the tube side. Each configuration is evaluated separately in terms of total cost. The best-performing nanofluid configuration among the six alternatives is decided based on the minimum cost expenditure. Another critical point to be scrutinized is the feasibility of leveraging the merits of nanoparticles to reduce overall costs. It is of utmost importance to maintain a balance between the increased pumping power induced by the nanoparticles, caused by increased friction factor values, and the increased heat transfer rates between the shell- and tube-side streams, which also result from the nanoparticle effects. To achieve the most feasible configuration, a successful designer should find a plausible trade-off between these contradictory and decisive design considerations. Additionally, definitive design constraints are established for the shell- and tube-side pressure drop rates to prevent excessive electrical costs associated with the pumping operation.
In this context, the pressure drop must not exceed 25 kPa on the shell side and 6.0 kPa on the tube side. Another design constraint is imposed on the total heat exchange area, which should not exceed 50 m2. The last design constraint concerns the tube pitch, which must not violate the limit of 1.25 times the tube outside diameter. The operational conditions of the shell-and-tube heat exchanger for the six nanofluid-based configurations are reported in Table 10. The upper and lower search limits for the sixteen design parameters of the shell-and-tube heat exchanger are reported in Table 11. Thirteen decision parameters are continuous, while the remaining three are integers. Table 12 presents the optimal values of the decision parameters obtained by the proposed HCQDOPP algorithm for the various heat exchanger configurations, along with the optimal values of the decisive model parameters. The proposed HCQDOPP has performed 50 algorithm runs, each with 50,000 function evaluations, for each heat exchanger configuration.
It is observed that the heat exchanger design with SiO2 + water nanofluid running in the tubes has the lowest total cost, €21,116.82, among the compared configurations, which include the preliminary design operated with pure water for in-tube flow. In terms of minimum total cost, the second-best design is the configuration operating with Al2O3 + water nanofluid on the tube side. Among the six candidate configurations, the worst alternative in terms of overall cost is the one operated with water + Boehmite nanofluid, with a minimum fitness value of €24,915.59; this is only marginally better than the preliminary pure-water design, whose total cost is €25,231.71. When the total discounted capital and operating cost rates are examined, all six design configurations have approximately similar capital investment cost rates, which are directly related to the total heat exchanger area. The total area is, in turn, directly associated with the overall heat transfer coefficient, which is also quite similar across the compared configurations. Therefore, the decisive factor influencing the total cost is the discounted operating cost, which is a direct function of the pumping power (Ppump) resulting from the pressure drops across the shell-side and tube-side flows. As shown in Table 12, the configuration operating with water + SiO2 yields the lowest shell-side pressure drop (Δpshell) of 11,961.01 Pa and a moderate tube-side pressure drop (Δpt) of 5522.524 Pa, resulting in a total discounted operating cost rate of €4815.064. As mentioned earlier, the total heat exchange surface significantly impacts the investment cost rate of a shell-and-tube heat exchanger.
The configuration operated with SiO2 + water has an overall investment cost rate of €16,301.76, which is nearly identical to that of the pure-water configuration (€16,298.73) and lower than those of the water + Al2O3 (€16,525.93), water + CuO (€16,946.78), water + TiO2 (€16,488.13), and water + Cu (€16,615.21) configurations.
A significant decrease in total cost (16.3%) is also observed when the water + SiO2 nanofluid is utilized on the tube side. This primarily results from the substantial decline in the total discounted operating cost rate (46.1%) relative to the preliminary design operated with pure water in the tubes, which is in turn attributable to reductions in the shell-side (50.3%) and tube-side (4.5%) pressure drops. The SiO2 volume fraction of 4.69% in the water + SiO2 nanofluid contributes appreciably to the tube-side pressure drop through the increased friction between the nanofluid and the adjacent tube walls. The total heat exchange surface of the water + SiO2 configuration is only negligibly larger than that of the preliminary design, resulting in an insignificant increase in investment cost.
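The quoted 16.3% saving follows directly from the two reported totals; a minimal check using only the figures from Table 12 cited above:

```python
# Reported optimal total costs (EUR) from Table 12 for two configurations.
cost_pure_water = 25_231.71   # preliminary design, pure water in tubes
cost_sio2       = 21_116.82   # water + SiO2 nanofluid in tubes

# Relative saving of the SiO2 configuration over the pure-water baseline.
saving = (cost_pure_water - cost_sio2) / cost_pure_water
print(f"{saving:.1%}")  # -> 16.3%
```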
Figure 10 and Figure 11 illustrate the parametric analysis of the sixteen decision parameters, whose influence is examined as each design variable is varied from its lower to its upper bound. In these figures, an initial increase in the shell diameter and tube outside diameter leads to a decline in total cost rates; however, considerable cost increases are observed as these parameters approach their upper limits. The total cost rate is highest when the tube layout is 30° and lowest when the tube layout is 60°. Increasing the tube length entails an expected increase in total cost rates due to the growth of the total heat exchange area. An increase in the number of tubes results in a considerable increase in total cost, owing to increased pressure drops on both the tube and shell sides as well as the larger total heat exchange area. Variations in the number of sealing strip pairs, the tube-to-baffle hole diameter clearance, and the shell-to-baffle hole diameter clearance within their allowed bounds have a negligible influence on total cost rates. Relatively large declines in total cost are observed when the central baffle spacing varies from its lower to its upper search limit, whereas the changes in total cost are comparatively minor when the outlet and inlet baffle spacings are varied over their ranges. Total cost rates are reduced as the baffle height increases. It is worth mentioning that the tube pitch, the distance between adjacent tubes in the bundle, is a crucial factor in shell-and-tube heat exchangers, as significant cost declines are observed when it is increased toward its maximum allowable limit. Finally, both the bypass lane width and the tube thickness have a negligible impact on total cost rates.
One crucial issue that is extensively scrutinized is the effect of nanoparticles on heat transfer and pressure drop. As discussed in the preceding paragraphs, integrating nanoparticles into the refrigerant fluid affects both of these essential design quantities, and in the present case total cost rates decrease as the volumetric nanoparticle ratio increases from its lower to its upper bound. End users, researchers, and designers should bear in mind that increasing the concentration of suspended nanoparticles in the refrigerant stream enhances the tube-side heat transfer coefficient but also raises the tube-side pressure drop. When nanoparticle effects are considered, a plausible balance between pressure drop and heat transfer coefficient must therefore be maintained to obtain the minimum total cost.
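This trade-off can be illustrated with two classical effective-property models, Maxwell for thermal conductivity and Brinkman for viscosity. These are standard textbook correlations used here purely as a sketch; the paper's own nanofluid property correlations may differ, and the SiO2 property values below are assumed round numbers:

```python
def maxwell_conductivity(k_f, k_p, phi):
    """Maxwell model: effective thermal conductivity of a dilute suspension
    of spherical particles (k_p) in a base fluid (k_f) at volume fraction phi."""
    return k_f * (k_p + 2*k_f + 2*phi*(k_p - k_f)) / (k_p + 2*k_f - phi*(k_p - k_f))

def brinkman_viscosity(mu_f, phi):
    """Brinkman model: effective dynamic viscosity at volume fraction phi."""
    return mu_f / (1.0 - phi) ** 2.5

# Water + SiO2 at the 4.69 vol% quoted above: conductivity rises modestly,
# but so does viscosity, and with it friction and pumping power.
k_eff = maxwell_conductivity(k_f=0.613, k_p=1.4, phi=0.0469)   # W/(m K)
mu_eff = brinkman_viscosity(mu_f=1.0e-3, phi=0.0469)           # Pa s
```

Both effects grow with the particle loading, which is exactly why the optimizer must weigh the heat transfer gain against the pressure drop penalty.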

7. Conclusive Remarks

This study proposes two novel variants of Quasi-Dynamic Oppositional-Based Learning to be integrated into the newly emerging Mountain Gazelle Optimizer in order to enhance its overall optimization performance. In the initialization phase, a combination of Latin Hypercube sampling, empowered by Logistic chaotic sequences, and the basic principles of Opposition-Based Learning is used to generate a highly diverse, evolvable set of candidate solutions before the iterations proceed. To further enhance solution quality in the gazelle population, chaotic numbers generated from various chaotic maps were separately embedded into the corresponding search equations to assess their impact on overall solution quality. Twenty-one chaos-induced Mountain Gazelle Optimizers solved forty-eight standard benchmark problems comprising unimodal and multimodal test functions, and the best-performing chaotic variant was selected based on its success rate over these problems. The second phase of algorithm development enhances the solution accuracy of this best chaotic variant through a novel mutation scheme that leverages the concurrent contributions of two improved Quasi-Dynamic Opposition Learning variants. The scheme is administered by a novel adaptive switch mechanism, which selects the variant to be employed at the current iteration based on the previous successes of the two competing algorithms. The proposed mutation equation significantly increases solution diversity in the iteratively evolving population, dramatically improving solution quality on the applied test problems, which include renowned benchmark functions from the CEC 2006, CEC 2013, and CEC 2014 competitions.
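One plausible reading of the adaptive switch is a selection between the two variants with probability proportional to their historical success rates. The exact rule is defined in Algorithm 4 of the paper, so the following Python sketch is an assumption, not the authors' implementation:

```python
import random

def pick_variant(successes, trials, eps=1e-9):
    """Choose variant 0 or 1 with probability proportional to its observed
    success rate so far (assumed mechanism; see Algorithm 4 for the actual rule).

    successes[i] -- number of fitness improvements credited to variant i
    trials[i]    -- number of times variant i has been applied
    """
    rates = [(successes[i] + eps) / (trials[i] + 1.0) for i in (0, 1)]
    p0 = rates[0] / (rates[0] + rates[1])
    return 0 if random.random() < p0 else 1

# Bookkeeping during the run: after applying the chosen variant v, increment
# trials[v]; increment successes[v] only if the mutant improved the fitness.
```

The small `eps` keeps an unsuccessful variant selectable, so the switch never permanently abandons either search scheme.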
Finally, the enhanced algorithm is applied to the comprehensive optimization of a full-fledged shell-and-tube heat exchanger operating with various nanofluids. The optimization problem is inherently a mixed-integer design case involving thirteen continuous and three integer decision variables, which significantly increases its complexity. As a contribution to the existing literature, the comparative performance of heat exchanger configurations operating with different nanofluids has been investigated, with the proposed algorithm used for the structural and topological optimization. The following conclusions can be drawn from this theoretical research study.
  • When the ranking points of the competing chaotic Mountain Gazelle Optimization variants are averaged into a mean ranking point for each optimization problem, the integration of chaotic numbers produced by the CH02 (Bernoulli map) yields the best predictive results on the employed forty-eight unimodal and multimodal test problems across different problem dimensionalities.
  • The proposed intelligently guided manipulation scheme significantly improves solution diversity within the population, owing to the unpredictable yet conducive features of the Opposition-Based Learning, Quasi-Dynamic Opposition-Based Learning, and Quasi-Opposition-Based Learning methods. These three methods have complementary capabilities, and the synergy created during hybridization offsets the algorithmic disadvantages of each. Numerical simulations demonstrate that combining the three learning schemes into a single structured operator enables the proposed search strategy to explore previously unvisited regions of the search space without incurring excessive computational cost. Evaluations of the mutation scheme's versatility suggest that it can also enhance the optimization performance of other metaheuristic optimizers.
  • Opposition-Based Learning has proven effective in improving metaheuristic algorithms for complex structural design problems, such as finding the optimal configuration of a shell-and-tube heat exchanger and other challenging real-world design cases. The highly randomized character of the improved methods facilitates obtaining near-optimal solutions to design problems with high nonlinearity and binding, restrictive constraints.
  • A detailed investigation of nanofluids streaming through the tubes of a shell-and-tube heat exchanger indicates that suspended nanoparticles in the refrigerant fluid can increase the tube-side heat transfer rate to some degree. However, they can also increase the tube-side pressure drop, which necessitates carefully weighing the optimum amount of nanoparticles in the nanofluid, as both the tube-side heat transfer coefficient and the pressure drop directly affect the total cost of the heat exchanger.
  • Among the six available design alternatives, a heat exchanger configuration operating with a water + SiO2 nanofluid on the tube side yields the minimum total cost rate, thanks to its superior thermophysical properties.
  • As a future direction inspired by this study, metaheuristic algorithm developers may benefit from focusing on mutation equations that integrate two or more variants of oppositional learning search mechanisms, since such equations can make abrupt movements in the search space and thereby escape local optima encountered during the iterative process.
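For readers unfamiliar with the oppositional operators referenced in these conclusions, the basic Opposition-Based and Quasi-Opposition-Based moves can be sketched as follows; the quasi-dynamic variants replace the static bounds with the current population's per-dimension minimum and maximum. This is a generic illustration of the published definitions, not the paper's code:

```python
import random

def opposite(x, lb, ub):
    """Opposition-Based Learning: mirror x within the interval [lb, ub]."""
    return lb + ub - x

def quasi_opposite(x, lb, ub):
    """Quasi-OBL: a uniformly random point between the interval centre
    and the opposite point (closer to the centre on average)."""
    centre = (lb + ub) / 2.0
    xo = opposite(x, lb, ub)
    lo, hi = sorted((centre, xo))
    return random.uniform(lo, hi)

# Quasi-dynamic variants apply the same formulas with lb, ub taken as the
# minimum and maximum of the current population in each dimension, so the
# opposition interval shrinks as the population converges.
```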

Supplementary Materials

Matlab codes of the chaotic Latin Hypercube sampling method described in Section 4.1, the Hybrid Opposition-Based Learning procedure provided in Algorithm 2, the proposed random number generator based on ISSM-RNG explained in Equation (20), the Adaptive Fitness-Driven Perturbation algorithm explained in Algorithm 3, and the Hybrid QDOPP-AFDP given in Algorithm 4 are publicly available and can be downloaded at: https://www.mdpi.com/article/10.3390/biomimetics10070454/s1.

Author Contributions

O.E.T.: writing—original draft, visualization, validation, software, resources, methodology, formal analysis, conceptualization. M.A.: writing—review and editing, supervision, project administration, investigation, data curation, conceptualization. H.B.Y.: writing—review and editing, writing—original draft, methodology, investigation, formal analysis, conceptualization. H.G.: writing—review and editing, methodology, investigation, formal analysis. M.A.-R.: writing—review and editing, methodology, investigation. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Source codes of the algorithms developed in this research study are available upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Rankings of the compared chaotic MGO variants based on their mean fitness values obtained from 30 independent runs for 30D test functions: (a) multimodal, (b) unimodal, and (c) overall ranking performance.
Figure 2. Sequential pseudo-random numbers generated by (a) Bernoulli map, (b) Chebychev map, (c) Ikeda map, (d) Baker map, and (e) Sinai map.
Figure 3. Random numbers bounded between 0 and 1 generated by ISSM-RNG.
Figure 4. Flow chart representation of the HCQDOPP-enhanced MGO algorithm.
Figure 5. Statistical comparison of the competing algorithms for CEC 2013 problems.
Figure 6. Box plot results of the compared algorithms for CEC 2014 test problems.
Figure 7. Ranking of the performance of the contender algorithms concerning the (a) best and (b) mean results.
Figure 8. Overall optimization performance of the compared algorithms for different problem dimensionalities with respect to the (a) best and (b) mean results.
Figure 9. Schematic view of a shell and tube heat exchanger.
Figure 10. Influences of design variables on total cost rates.
Figure 11. Variations of heat exchanger cost values with increasing values of design parameters.
Table 1. Description of the multimodal and unimodal test functions with varying search ranges.
Multimodal Function Name | Range | Unimodal Function Name | Range
F1—Ackley | [−32, 32] D | F25—Sphere | [−5.12, 5.12] D
F2—Griewank | [−600, 600] D | F26—Rosenbrock | [−30.0, 30.0] D
F3—Rastrigin | [−5.12, 5.12] D | F27—Brown | [−1.0, 4.0] D
F4—Zakharov | [−5.0, 10.0] D | F28—Stretched V Sine Wave | [−10.0, 10.0] D
F5—Alpine | [0, 10] D | F29—Powell | [0.0, 10.0] D
F6—Penalized1 | [−50.0, 50.0] D | F30—Sum of Different Powers | [−1.0, 1.0] D
F7—Csendes | [−5.0, 5.0] D | F31—Bent Cigar | [−5.0, 5.0] D
F8—Schaffer | [−100.0, 100.0] | F32—Discus | [−100.0, 100.0] D
F9—Salomon | [−50.0, 50.0] D | F33—Schwefel 2.20 | [−100.0, 100.0] D
F10—Inverted Cosine Mixture | [−10.0, 10.0] D | F34—Schwefel 2.21 | [−100.0, 100.0] D
F11—Wavy | [−3.14, 3.14] D | F35—Schwefel 2.23 | [−10.0, 10.0] D
F12—Xin She Yang1 | [−5.0, 5.0] D | F36—Schwefel 2.25 | [0.0, 10.0] D
F13—Xin She Yang4 | [−6.28, 6.28] D | F37—Dropwave | [−5.12, 5.12] D
F14—Xin She Yang2 | [−10.0, 10.0] D | F38—Trid | [−D2, D2] D
F15—Pathological | [−10.0, 10.0] D | F39—Generalized White & Holst | [−10.0, 10.0] D
F16—Quintic | [−10.0, 10.0] D | F40—BIGGSB1 | [−10, 10] D
F17—Levy | [−10.0, 10.0] D | F41—Anescu01 | [−2.0, 2.0] D
F18—Qing | [−500.0, 500.0] D | F42—Anescu02 | [1.39, 4.0] D
F19—Diagonal1 | [−10.0, 10.0] D | F43—Anescu03 | [−4.0, 1.39] D
F20—Hager | [−10.0, 10.0] D | F44—Anescu04 | [0.001, 2.0] D
F21—Diagonal4 | [−10.0, 10.0] D | F45—Anescu06 | [0.001, 2.0] D
F22—Perturbed Quadratic Diagonal | [−10.0, 10.0] D | F46—Anescu07 | [−2.0, 2.0] D
F23—SINE | [−10.0, 10.0] D | F47—Schumer-Steiglitz 3 | [−100.0, 100.0] D
F24—Diagonal9 | [−10.0, 10.0] D | F48—Schumer-Steiglitz 2 | [−100.0, 100.0] D
D—Problem dimension.
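Table 1 lists the benchmarks only by name and search range; the functions themselves are simple closed-form expressions. As a minimal sketch, assuming the standard textbook definitions of Sphere (F25) and Ackley (F1), both can be evaluated at their known global minimizer (the origin):

```python
import math

def sphere(x):
    # F25 - Sphere: global minimum 0 at the origin, searched in [-5.12, 5.12]^D
    return sum(xi * xi for xi in x)

def ackley(x):
    # F1 - Ackley: global minimum 0 at the origin, searched in [-32, 32]^D
    d = len(x)
    s1 = sum(xi * xi for xi in x)
    s2 = sum(math.cos(2.0 * math.pi * xi) for xi in x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(s1 / d))
            - math.exp(s2 / d) + 20.0 + math.e)

origin = [0.0] * 30  # D = 30
print(sphere(origin))  # 0.0
print(ackley(origin))  # 0.0 (up to floating-point rounding)
```

Any solver compared in this study only needs such a black-box callable plus the box constraints in the "Range" column.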
Table 2. Summary of CEC 2013 test problems.

| Type | No | Function | Global Optimum f(x) |
|---|---|---|---|
| Unimodal Functions | 1 | Sphere Function | −1400 |
| | 2 | Rotated High Conditioned Elliptic Function | −1300 |
| | 3 | Rotated Bent Cigar Function | −1200 |
| | 4 | Rotated Discus Function | −1100 |
| | 5 | Different Powers Function | −1000 |
| Basic Multimodal Functions | 6 | Rotated Rosenbrock’s Function | −900 |
| | 7 | Rotated Schaffers F7 Function | −800 |
| | 8 | Rotated Ackley’s Function | −700 |
| | 9 | Rotated Weierstrass Function | −600 |
| | 10 | Rotated Griewank’s Function | −500 |
| | 11 | Rastrigin’s Function | −400 |
| | 12 | Rotated Rastrigin’s Function | −300 |
| | 13 | Non-Continuous Rotated Rastrigin’s Function | −200 |
| | 14 | Schwefel’s Function | −100 |
| | 15 | Rotated Schwefel’s Function | 100 |
| | 16 | Rotated Katsuura Function | 200 |
| | 17 | Lunacek–Bi-Rastrigin Function | 300 |
| | 18 | Rotated Lunacek–Bi-Rastrigin Function | 400 |
| | 19 | Expanded Griewank’s plus Rosenbrock’s Function | 500 |
| | 20 | Expanded Scaffer’s F6 Function | 600 |
| Composite Functions | 21 | Composition Function 1 (n = 5, Rotated) | 700 |
| | 22 | Composition Function 2 (n = 3, Unrotated) | 800 |
| | 23 | Composition Function 3 (n = 3, Rotated) | 900 |
| | 24 | Composition Function 4 (n = 3, Rotated) | 1000 |
| | 25 | Composition Function 5 (n = 3, Rotated) | 1100 |
| | 26 | Composition Function 6 (n = 5, Rotated) | 1200 |
| | 27 | Composition Function 7 (n = 5, Rotated) | 1300 |
| | 28 | Composition Function 8 (n = 5, Rotated) | 1400 |

Search range: [−100, 100]^D; D: problem dimension.
Table 3. Descriptions of the employed CEC 2014 test problems.

| Type | No | Function | Global Optimum f(x) |
|---|---|---|---|
| Unimodal Functions | 1 | Rotated High Conditioned Elliptic Function | 100 |
| | 2 | Rotated Bent Cigar Function | 200 |
| | 3 | Rotated Discus Function | 300 |
| Basic Multimodal Functions | 4 | Shifted and Rotated Rosenbrock’s Function | 400 |
| | 5 | Shifted and Rotated Ackley’s Function | 500 |
| | 6 | Shifted and Rotated Weierstrass Function | 600 |
| | 7 | Shifted and Rotated Griewank’s Function | 700 |
| | 8 | Shifted Rastrigin’s Function | 800 |
| | 9 | Shifted and Rotated Rastrigin’s Function | 900 |
| | 10 | Shifted Schwefel’s Function | 1000 |
| | 11 | Shifted and Rotated Schwefel’s Function | 1100 |
| | 12 | Shifted and Rotated Katsuura Function | 1200 |
| | 13 | Shifted and Rotated HappyCat Function | 1300 |
| | 14 | Shifted and Rotated HGBat Function | 1400 |
| | 15 | Shifted and Rotated Expanded Griewank’s plus Rosenbrock’s Function | 1500 |
| | 16 | Shifted and Rotated Expanded Scaffer’s F6 Function | 1600 |
| Hybrid Functions | 17 | Hybrid Function 1 (n = 3) | 1700 |
| | 18 | Hybrid Function 2 (n = 3) | 1800 |
| | 19 | Hybrid Function 3 (n = 4) | 1900 |
| | 20 | Hybrid Function 4 (n = 4) | 2000 |
| | 21 | Hybrid Function 5 (n = 5) | 2100 |
| | 22 | Hybrid Function 6 (n = 5) | 2200 |

Search range: [−100, 100]^D; D: problem dimension.
Table 4. Default parameter settings of the comparative algorithms.

| Algorithm | Parameters |
|---|---|
| AVOA | p1 = 0.6, p2 = 0.4, p3 = 0.6, α = 0.8, β = 0.2, γ = 2.5 |
| GANNET | M = 2.5, Velocity = 1.5 |
| REPTILE | α = 0.1, β = 0.1 |
| EQUIL | a1 = 2.0, a2 = 1.0, GP = 0.5 |
| COATI | No tunable algorithm parameters |
| EEL | No tunable algorithm parameters |
| GRAD | No tunable algorithm parameters |
| MANTA | No tunable algorithm parameters |
| RUNGE | No tunable algorithm parameters |
| MGO | No tunable algorithm parameters |
Table 5. Statistical results for CEC 2013 and CEC 2014 (30D, 50D, 100D) benchmark problems.

| HCQDOPP vs. | + | = | − | p-Value | Significant at α = 0.05 | Significant at α = 0.1 |
|---|---|---|---|---|---|---|
| COATI | 141 | 6 | 3 | <0.05 | Yes | Yes |
| REPTILE | 140 | 5 | 5 | <0.05 | Yes | Yes |
| GANNET | 139 | 7 | 4 | <0.05 | Yes | Yes |
| GRAD | 141 | 6 | 3 | <0.05 | Yes | Yes |
| AVOA | 136 | 8 | 6 | <0.05 | Yes | Yes |
| EEL | 134 | 7 | 9 | <0.05 | Yes | Yes |
| MANTA | 137 | 7 | 6 | <0.05 | Yes | Yes |
| EQUIL | 132 | 10 | 8 | <0.05 | Yes | Yes |
| MGO | 134 | 10 | 6 | <0.05 | Yes | Yes |
| OBL | 133 | 9 | 8 | <0.05 | Yes | Yes |
| SOBL | 135 | 9 | 6 | <0.05 | Yes | Yes |
| QOBL | 131 | 10 | 9 | <0.05 | Yes | Yes |
| EOBL | 127 | 13 | 10 | <0.05 | Yes | Yes |
| QDOPP | 129 | 11 | 10 | <0.05 | Yes | Yes |
| MIOBL | 130 | 10 | 10 | <0.05 | Yes | Yes |
| COBL | 125 | 15 | 10 | <0.05 | Yes | Yes |

Wilcoxon signed-rank test; +, =, and − denote the number of problem instances on which HCQDOPP is better than, equal to, and worse than the rival algorithm (each row sums to 150: 50 problems × 3 dimensionalities).
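The +/=/− columns of Table 5 are per-instance win/tie/loss tallies built from paired mean results before the signed-rank test is applied. A small pure-Python sketch, with hypothetical mean-error data for illustration, shows how such a tally is formed:

```python
def tally(first_means, rival_means, tol=1e-8):
    """Count instances where the first solver is better (+), tied (=), or worse (-).

    first_means, rival_means: mean objective errors per benchmark instance
    (lower is better); `tol` treats numerically identical means as ties.
    """
    plus = sum(a < b - tol for a, b in zip(first_means, rival_means))
    eq = sum(abs(a - b) <= tol for a, b in zip(first_means, rival_means))
    minus = len(first_means) - plus - eq
    return plus, eq, minus

# Hypothetical per-instance mean errors for two solvers on six problems
a = [0.0, 1.2, 3.4, 0.0, 5.0, 2.0]
b = [0.5, 1.2, 4.0, 0.1, 4.0, 2.5]
print(tally(a, b))  # (4, 1, 1)
```

With the real data, each row of Table 5 covers 141 + 6 + 3 = 150 such paired comparisons per rival.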
Table 6. Functional characteristics of the employed CEC 2006 constrained test problems.

| Problem | Type | D | LI | NI | LE | NE | f_opt(x) |
|---|---|---|---|---|---|---|---|
| G01 | Quadratic | 13 | 9 | 0 | 0 | 0 | −15.000000 |
| G02 | Nonlinear | 20 | 0 | 2 | 0 | 0 | −0.8036191 |
| G03 | Polynomial | 10 | 0 | 0 | 0 | 1 | −1.0005001 |
| G04 | Quadratic | 5 | 0 | 6 | 0 | 0 | −30,665.538 |
| G06 | Cubic | 2 | 0 | 2 | 0 | 0 | −6961.8138 |
| G07 | Quadratic | 10 | 3 | 5 | 0 | 0 | 24.306209 |
| G09 | Polynomial | 7 | 0 | 4 | 0 | 0 | 680.63005 |
| G10 | Linear | 8 | 3 | 3 | 0 | 0 | 7048.24802 |
| G13 | Nonlinear | 5 | 0 | 0 | 0 | 3 | 0.05394151 |
| G14 | Nonlinear | 10 | 0 | 0 | 3 | 0 | −47.764888 |
| G18 | Quadratic | 9 | 0 | 13 | 0 | 0 | −0.8660254 |
| G19 | Nonlinear | 15 | 0 | 5 | 0 | 0 | 32.6555929 |

D: problem dimension; LI/NI: number of linear/nonlinear inequality constraints; LE/NE: number of linear/nonlinear equality constraints.
Table 7. Comparison of the statistical results for CEC 2006 constrained test problems.

| Problem | Metric | HCQDOPP | MANTA | MARINE | MGO | AVOA | DANDEL | EQUIL | HBADGER | KEPLER | MANTIS | RUNGE | SLIME | WALRUS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| G01 | Best | −1.500×10^1 | −1.499×10^1 | −1.499×10^1 | −1.500×10^1 | −1.275×10^1 | −1.299×10^1 | −1.483×10^1 | −1.425×10^1 | −1.492×10^1 | −1.497×10^1 | −1.446×10^1 | −1.499×10^1 | −1.483×10^1 |
| | Mean | −1.500×10^1 | −1.033×10^1 | −1.496×10^1 | −1.500×10^1 | −9.764×10^0 | −8.686×10^0 | −1.402×10^1 | −1.151×10^1 | −1.243×10^1 | −1.492×10^1 | −1.252×10^1 | −1.153×10^1 | −9.454×10^0 |
| | Std | 4.632×10^−5 | 3.039×10^0 | 3.752×10^−2 | 0 | 1.462×10^0 | 1.585×10^0 | 8.060×10^−1 | 1.173×10^0 | 2.444×10^0 | 4.832×10^−2 | 9.763×10^−1 | 2.026×10^0 | 2.264×10^0 |
| | Rank | 2 | 10 | 3 | 1 | 11 | 13 | 5 | 9 | 7 | 4 | 6 | 8 | 12 |
| G02 | Best | −7.842×10^−1 | −7.682×10^−1 | −7.878×10^−1 | −7.241×10^−1 | −7.818×10^−1 | −7.508×10^−1 | −7.977×10^−1 | −7.807×10^−1 | −7.444×10^−1 | −7.906×10^−1 | −7.741×10^−1 | −6.698×10^−1 | −6.502×10^−1 |
| | Mean | −7.312×10^−1 | −6.645×10^−1 | −7.354×10^−1 | −5.521×10^−1 | −6.456×10^−1 | −5.971×10^−1 | −7.473×10^−1 | −6.836×10^−1 | −6.204×10^−1 | −7.261×10^−1 | −6.823×10^−1 | −5.632×10^−1 | −4.851×10^−1 |
| | Std | 3.097×10^−2 | 8.249×10^−2 | 3.505×10^−2 | 8.650×10^−2 | 8.144×10^−2 | 6.564×10^−2 | 3.525×10^−2 | 4.411×10^−2 | 6.350×10^−2 | 4.941×10^−2 | 5.737×10^−2 | 3.911×10^−2 | 4.286×10^−2 |
| | Rank | 3 | 7 | 2 | 12 | 8 | 10 | 1 | 5 | 9 | 4 | 6 | 11 | 13 |
| G03 | Best | −9.058×10^−1 | −7.680×10^−1 | −1.908×10^−1 | N/A | −1.462×10^−1 | −8.390×10^−2 | −2.971×10^−1 | −1.440×10^−1 | N/A | −5.955×10^−1 | −3.105×10^−1 | −1.000×10^0 | N/A |
| | Mean | −2.878×10^−1 | −9.094×10^−2 | −3.652×10^−2 | N/A | −1.405×10^−2 | −3.818×10^−3 | −5.114×10^−2 | −2.046×10^−2 | N/A | −7.851×10^−2 | −5.196×10^−2 | −1.000×10^0 | N/A |
| | Std | 2.393×10^−1 | 1.310×10^−1 | 5.024×10^−2 | N/A | 3.257×10^−2 | 1.449×10^−2 | 6.136×10^−2 | 3.444×10^−2 | N/A | 1.121×10^−1 | 7.927×10^−2 | 0 | N/A |
| | Rank | 2 | 3 | 7 | 13 | 9 | 10 | 6 | 8 | 13 | 4 | 5 | 1 | 13 |
| G04 | Best | −3.066×10^4 | −3.066×10^4 | −3.066×10^4 | −3.066×10^4 | −3.062×10^4 | −3.066×10^4 | −3.066×10^4 | −3.065×10^4 | −3.066×10^4 | −3.066×10^4 | −3.066×10^4 | −3.066×10^4 | −3.066×10^4 |
| | Mean | −3.066×10^4 | −3.066×10^4 | −3.066×10^4 | −3.065×10^4 | −3.035×10^4 | −3.065×10^4 | −3.066×10^4 | −3.058×10^4 | −3.066×10^4 | −3.066×10^4 | −3.059×10^4 | −3.066×10^4 | −3.065×10^4 |
| | Std | 5.825×10^−3 | 1.360×10^0 | 1.732×10^−1 | 3.078×10^1 | 2.175×10^2 | 4.046×10^1 | 4.193×10^1 | 4.560×10^1 | 1.143×10^−1 | 5.724×10^−1 | 5.833×10^1 | 1.589×10^0 | 4.858×10^1 |
| | Rank | 1 | 5 | 3 | 8 | 13 | 7 | 10 | 12 | 4 | 2 | 11 | 6 | 9 |
| G06 | Best | −6.961×10^3 | −6.961×10^3 | −6.961×10^3 | −6.961×10^3 | −6.955×10^3 | −6.960×10^3 | −6.961×10^3 | −6.961×10^3 | −6.961×10^3 | −6.961×10^3 | −6.961×10^3 | −6.961×10^3 | −6.953×10^3 |
| | Mean | −6.961×10^3 | −6.942×10^3 | −6.961×10^3 | −6.961×10^3 | −6.771×10^3 | −6.951×10^3 | −6.943×10^3 | −6.956×10^3 | −6.903×10^3 | −6.961×10^3 | −6.960×10^3 | −6.957×10^3 | −6.877×10^3 |
| | Std | 7.055×10^−3 | 9.623×10^0 | 2.407×10^−1 | 9.400×10^−7 | 7.553×10^2 | 6.473×10^0 | 1.349×10^1 | 3.912×10^0 | 2.768×10^2 | 1.206×10^−2 | 1.391×10^0 | 4.309×10^0 | 6.939×10^1 |
| | Rank | 3 | 9 | 4 | 2 | 13 | 8 | 10 | 7 | 11 | 1 | 5 | 6 | 12 |
| G07 | Best | 2.445×10^1 | 2.437×10^1 | 2.435×10^1 | 2.443×10^1 | 2.595×10^1 | 2.512×10^1 | 2.462×10^1 | 2.638×10^1 | 2.452×10^1 | 2.441×10^1 | 2.550×10^1 | 2.530×10^1 | 2.625×10^1 |
| | Mean | 2.475×10^1 | 2.612×10^1 | 2.469×10^1 | 2.712×10^1 | 4.238×10^1 | 2.827×10^1 | 2.716×10^1 | 3.488×10^1 | 2.501×10^1 | 2.499×10^1 | 3.236×10^1 | 2.801×10^1 | 3.133×10^1 |
| | Std | 1.984×10^−1 | 1.540×10^0 | 2.216×10^−1 | 2.048×10^0 | 3.044×10^1 | 3.011×10^0 | 3.549×10^0 | 8.601×10^0 | 3.970×10^−1 | 4.310×10^−1 | 8.339×10^0 | 2.519×10^0 | 3.945×10^0 |
| | Rank | 2 | 5 | 1 | 6 | 13 | 9 | 7 | 12 | 4 | 3 | 11 | 8 | 10 |
| G09 | Best | 6.806×10^2 | 6.806×10^2 | 6.806×10^2 | 6.808×10^2 | 6.819×10^2 | 6.806×10^2 | 6.806×10^2 | 6.808×10^2 | 6.806×10^2 | 6.806×10^2 | 6.806×10^2 | 6.814×10^2 | 6.813×10^2 |
| | Mean | 6.806×10^2 | 6.807×10^2 | 6.806×10^2 | 6.827×10^2 | 6.899×10^2 | 6.814×10^2 | 6.808×10^2 | 6.838×10^2 | 6.806×10^2 | 6.806×10^2 | 6.834×10^2 | 6.852×10^2 | 6.844×10^2 |
| | Std | 4.337×10^−3 | 6.221×10^−2 | 2.991×10^−2 | 1.519×10^0 | 9.938×10^0 | 5.173×10^−1 | 1.765×10^−1 | 4.592×10^0 | 2.086×10^−2 | 2.182×10^−2 | 1.688×10^0 | 4.879×10^0 | 2.155×10^0 |
| | Rank | 1 | 5 | 3 | 8 | 13 | 7 | 6 | 10 | 4 | 2 | 9 | 12 | 11 |
| G10 | Best | 7.105×10^3 | 7.145×10^3 | 7.066×10^3 | 7.261×10^3 | 7.619×10^3 | 7.503×10^3 | 7.257×10^3 | 7.535×10^3 | 7.178×10^3 | 7.096×10^3 | 7.557×10^3 | 7.676×10^3 | 7.648×10^3 |
| | Mean | 7.211×10^3 | 7.907×10^3 | 7.341×10^3 | 8.137×10^3 | 9.327×10^3 | 8.786×10^3 | 7.901×10^3 | 8.290×10^3 | 7.468×10^3 | 7.389×10^3 | 8.359×10^3 | 8.933×10^3 | 8.838×10^3 |
| | Std | 8.199×10^1 | 6.408×10^2 | 1.976×10^2 | 5.553×10^2 | 1.191×10^3 | 1.106×10^3 | 3.619×10^2 | 7.257×10^2 | 2.188×10^2 | 1.509×10^2 | 5.715×10^2 | 6.497×10^2 | 7.369×10^2 |
| | Rank | 1 | 6 | 2 | 7 | 13 | 10 | 5 | 8 | 4 | 3 | 9 | 12 | 11 |
| G13 | Best | 1.297×10^−1 | 7.531×10^−2 | 6.086×10^−2 | 7.356×10^−2 | 4.242×10^−1 | 1.826×10^−1 | 8.835×10^−2 | 1.652×10^−1 | N/A | 5.446×10^−2 | 7.527×10^−2 | 7.556×10^−1 | 4.431×10^−1 |
| | Mean | 3.414×10^−1 | 5.663×10^−1 | 2.786×10^−1 | 8.042×10^−1 | 7.657×10^−1 | 8.090×10^−1 | 7.819×10^−1 | 7.304×10^−1 | N/A | 2.497×10^−1 | 6.734×10^−1 | 9.619×10^−1 | 8.094×10^−1 |
| | Std | 1.833×10^−1 | 2.847×10^−1 | 1.581×10^−1 | 2.621×10^−1 | 2.081×10^−1 | 2.554×10^−1 | 2.616×10^−1 | 2.867×10^−1 | N/A | 1.766×10^−1 | 2.939×10^−1 | 8.488×10^−2 | 2.038×10^−1 |
| | Rank | 1 | 4 | 2 | 9 | 7 | 10 | 8 | 6 | 13 | 1 | 5 | 12 | 11 |
| G14 | Best | −4.763×10^1 | −4.728×10^1 | −4.709×10^1 | −4.608×10^1 | −4.686×10^1 | −4.707×10^1 | −4.703×10^1 | −4.673×10^1 | −4.727×10^1 | −4.752×10^1 | −4.759×10^1 | N/A | −4.653×10^1 |
| | Mean | −4.697×10^1 | −4.576×10^1 | −4.630×10^1 | −4.482×10^1 | −4.486×10^1 | −4.365×10^1 | −4.511×10^1 | −4.462×10^1 | −4.666×10^1 | −4.676×10^1 | −4.424×10^1 | N/A | −4.438×10^1 |
| | Std | 7.152×10^−1 | 8.714×10^−1 | 4.795×10^−1 | 7.260×10^−1 | 1.008×10^0 | 1.758×10^0 | 1.148×10^0 | 1.147×10^0 | 5.842×10^−1 | 5.433×10^−1 | 1.441×10^0 | N/A | 1.122×10^0 |
| | Rank | 1 | 5 | 4 | 8 | 7 | 12 | 6 | 9 | 3 | 2 | 11 | 13 | 10 |
| G18 | Best | −8.626×10^−1 | −8.660×10^−1 | −8.660×10^−1 | −8.656×10^−1 | −8.617×10^−1 | N/A | −8.657×10^−1 | −7.704×10^−1 | −8.637×10^−1 | −8.657×10^−1 | −8.610×10^−1 | −8.642×10^−1 | −8.526×10^−1 |
| | Mean | −8.389×10^−1 | −7.115×10^−1 | −8.590×10^−1 | −8.107×10^−1 | −5.609×10^−1 | N/A | −7.020×10^−1 | −5.391×10^−1 | −7.964×10^−1 | −8.308×10^−1 | −6.052×10^−1 | −8.208×10^−1 | −6.413×10^−1 |
| | Std | 2.238×10^−2 | 1.427×10^−1 | 8.768×10^−3 | 8.711×10^−2 | 1.199×10^−1 | N/A | 1.419×10^−1 | 1.099×10^−1 | 9.135×10^−2 | 6.708×10^−2 | 1.232×10^−1 | 8.417×10^−2 | 1.606×10^−1 |
| | Rank | 2 | 7 | 1 | 5 | 11 | 13 | 8 | 12 | 6 | 3 | 10 | 4 | 9 |
| G19 | Best | 3.640×10^1 | 3.973×10^1 | 4.031×10^1 | 3.684×10^1 | 4.595×10^1 | N/A | 4.833×10^1 | 3.963×10^1 | 3.730×10^1 | 4.166×10^1 | 4.414×10^1 | 3.521×10^1 | 3.989×10^1 |
| | Mean | 4.166×10^1 | 5.776×10^1 | 4.793×10^1 | 5.816×10^1 | 1.064×10^2 | N/A | 6.579×10^1 | 5.079×10^1 | 4.626×10^1 | 5.384×10^1 | 7.811×10^1 | 5.397×10^1 | 7.276×10^1 |
| | Std | 4.308×10^0 | 1.112×10^1 | 4.350×10^0 | 1.629×10^1 | 5.790×10^1 | N/A | 1.203×10^1 | 9.063×10^0 | 5.127×10^0 | 7.024×10^0 | 1.819×10^1 | 1.350×10^1 | 2.083×10^1 |
| | Rank | 1 | 7 | 3 | 8 | 12 | 13 | 9 | 4 | 2 | 5 | 11 | 6 | 10 |
| | Aver. rank | 1.666 | 6.083 | 2.916 | 7.250 | 10.833 | 10.167 | 6.750 | 8.500 | 6.666 | 2.833 | 8.25 | 8.25 | 10.971 |
| | Ranking | 1 | 4 | 3 | 7 | 12 | 11 | 6 | 10 | 5 | 2 | 8 | 9 | 13 |
N/A indicates that the associated algorithm could not find a feasible solution in any of the runs.
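The "Aver. rank" and final "Ranking" rows of Table 7 follow mechanically from the per-problem rank rows. As a sketch (HCQDOPP's twelve per-problem ranks transcribed from Table 7), the average rank is just the arithmetic mean:

```python
# Per-problem ranks of HCQDOPP across G01-G19 (transcribed from Table 7)
hcqdopp_ranks = [2, 3, 2, 1, 3, 2, 1, 1, 1, 1, 2, 1]

avg_rank = sum(hcqdopp_ranks) / len(hcqdopp_ranks)
print(round(avg_rank, 3))  # 1.667 (Table 7 reports 1.666, truncated)

# The final "Ranking" row orders the algorithms by average rank, e.g.:
avg_ranks = {"HCQDOPP": 1.666, "MANTIS": 2.833, "MARINE": 2.916}
ordering = sorted(avg_ranks, key=avg_ranks.get)
print(ordering)  # ['HCQDOPP', 'MANTIS', 'MARINE'] -> final ranks 1, 2, 3
```

This matches the table: HCQDOPP's average rank of 1.666 over the twelve feasible problems places it first overall, ahead of MANTIS (2.833) and MARINE (2.916).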
Table 8. Thermophysical properties of suspended nanoparticles and water as the working base fluid [78].

| Property | Water (base fluid) | Al2O3 | CuO | TiO2 | Cu | SiO2 | Boehmite |
|---|---|---|---|---|---|---|---|
| ρ (kg/m³) | 995 | 3970 | 6000 | 4250 | 8933 | 2220 | 3050 |
| Cp (J/kg·K) | 4178 | 765 | 551 | 686 | 385 | 745 | 618.8 |
| k (W/m·K) | 0.619 | 40 | 33 | 8.9 | 400 | 1.4 | 30 |
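The bulk nanofluid properties listed later in Table 10 are consistent with the classical single-phase volume-weighted mixture rules; the paper's exact correlations are not reproduced in this excerpt, so the sketch below is an assumption-labeled illustration. Taking φv = 4.693% for water + SiO2 (the optimal value from Table 12) and the pure-component data from Table 8:

```python
def nanofluid_density(phi, rho_bf, rho_p):
    # Volume-weighted mixture rule: rho_nf = (1 - phi) * rho_bf + phi * rho_p
    return (1.0 - phi) * rho_bf + phi * rho_p

def nanofluid_cp(phi, rho_bf, cp_bf, rho_p, cp_p):
    # Specific heat from the volumetric heat-capacity mixture rule
    rho_nf = nanofluid_density(phi, rho_bf, rho_p)
    return ((1.0 - phi) * rho_bf * cp_bf + phi * rho_p * cp_p) / rho_nf

# Water + SiO2 at phi_v = 4.693 % (Tables 8 and 12)
phi = 0.04693
rho = nanofluid_density(phi, 995.0, 2220.0)
cp = nanofluid_cp(phi, 995.0, 4178.0, 2220.0, 745.0)
print(round(rho, 1), round(cp, 1))  # ~1052.5 kg/m^3 and ~3838.2 J/kg.K
```

These values reproduce the SiO2 + H2O density (1052.49 kg/m³) and heat capacity (3838.18 J/kg·K) reported in Table 10, which suggests Table 10 was built from rules of this form.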
Table 9. Formulations of the correction factors employed in the shell-side heat transfer calculation of Equation (58).

- $Y_c = 0.55 + 0.72\,F_c$, where the value of $F_c$ can be found in Shah and Sekulic [74].
- $Y_l = 0.44(1 - r_s) + \left[1 - 0.44(1 - r_s)\right]\exp(-2.2\,r_{lm})$, with $r_s = \frac{A_{o,sb}}{A_{o,sb} + A_{o,tb}}$ and $r_{lm} = \frac{A_{o,sb} + A_{o,tb}}{A_{o,cr}}$; here, $A_{o,sb}$, $A_{o,tb}$, and $A_{o,cr}$ are given in Shah and Sekulic [74].
- $Y_b = 1$ for $N_{ss}^{+} \ge 0.5$ and $Y_b = \exp\!\left(-C\,r_b\left[1 - (2N_{ss}^{+})^{1/3}\right]\right)$ for $N_{ss}^{+} < 0.5$, with $r_b = A_{o,bp}/A_{o,cr}$, $N_{ss}^{+} = N_{ss}/N_{r,cc}$, and $C = 1.35$ for $Re_s \le 100$, $C = 1.25$ for $Re_s > 100$; explicit formulations of $N_{ss}$ and $N_{r,cc}$ are given in Shah and Sekulic [74].
- $Y_s = \dfrac{(N_b - 1) + (L_i^{+})^{1-n} + (L_o^{+})^{1-n}}{(N_b - 1) + L_i^{+} + L_o^{+}}$, with $L_i^{+} = L_{bi}/L_{bc}$, $L_o^{+} = L_{bo}/L_{bc}$, and $n = 0.6$ for turbulent flow, $n = 0.33$ for laminar flow; $L_{bi}$, $L_{bo}$, and $L_{bc}$ are, respectively, the baffle spacings at the inlet, outlet, and center.
- $Y_r = 1$ for $Re_s \ge 100$ and $Y_r = (10/N_{r,c})^{0.18}$ for $Re_s < 100$, with $N_{r,c} = N_{r,cc} + N_{r,cw}$, where $N_{r,cc}$ and $N_{r,cw}$ are calculated by the formulations given in Shah and Sekulic [74].
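The piecewise correction factors of Table 9 translate directly into code. The sketch below implements $Y_b$, $Y_s$, and $Y_r$ as written in the table; the numeric inputs at the bottom are illustrative only, not the optimal designs of Table 12:

```python
import math

def y_b(r_b, n_ss_plus, re_s):
    # Bundle-bypass correction factor (Table 9)
    if n_ss_plus >= 0.5:
        return 1.0
    c = 1.35 if re_s <= 100 else 1.25
    return math.exp(-c * r_b * (1.0 - (2.0 * n_ss_plus) ** (1.0 / 3.0)))

def y_s(n_b, l_i_plus, l_o_plus, turbulent=True):
    # Unequal inlet/outlet baffle-spacing correction factor (Table 9)
    n = 0.6 if turbulent else 0.33
    num = (n_b - 1) + l_i_plus ** (1 - n) + l_o_plus ** (1 - n)
    den = (n_b - 1) + l_i_plus + l_o_plus
    return num / den

def y_r(re_s, n_rc):
    # Low-Reynolds-number correction factor (Table 9)
    return 1.0 if re_s >= 100 else (10.0 / n_rc) ** 0.18

# Illustrative (hypothetical) inputs
print(y_b(r_b=0.1, n_ss_plus=0.2, re_s=300))
print(y_s(n_b=8, l_i_plus=1.2, l_o_plus=1.2, turbulent=True))
print(y_r(re_s=278.3, n_rc=20))  # 1.0, since Re_s >= 100
```

Note that equal inlet, outlet, and central baffle spacings ($L_i^{+} = L_o^{+} = 1$) give $Y_s = 1$, and all factors reduce toward 1 as the corresponding leakage/bypass effect vanishes.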
Table 10. Operational conditions of different heat exchanger configurations.

| Property | Oil (shell side) | Al2O3 + H2O | CuO + H2O | TiO2 + H2O | Cu + H2O | SiO2 + H2O | Boehmite + H2O |
|---|---|---|---|---|---|---|---|
| Flow rate (kg/s) | 36.3 | 5.1 | 5.1 | 5.1 | 5.1 | 5.1 | 5.1 |
| Inlet temp. (°C) | 65.6 | 32.2 | 32.2 | 32.2 | 32.2 | 32.2 | 32.2 |
| Outlet temp. (°C) | 60.4 | 52.5 | 52.3 | 53.2 | 53.7 | 52.3 | 50.8 |
| Density (kg/m³) | 849 | 1080.88 | 1076.56 | 1114.83 | 1151.12 | 1052.49 | 997.34 |
| Heat capacity (J/kg·K) | 2094 | 3816.14 | 3848.59 | 3687.92 | 3599.11 | 3838.18 | 4165.62 |
| Viscosity (Pa·s) | 0.0646 | 0.00081 | 0.00079 | 0.000829 | 0.000796 | 0.000848 | 0.000761 |
| Thermal conductivity (W/m·K) | 0.14 | 0.67164 | 0.648068 | 0.676573 | 0.656078 | 0.645164 | 0.620988 |

All tube-side fluids are water-based nanofluids; the shell-side fluid is oil.
Table 11. Upper and lower search bounds for the considered design parameters.

| Parameter | Lower Bound | Upper Bound |
|---|---|---|
| Shell-side inside diameter—Ds (m) | 0.3 | 0.6 |
| Tube-side outside diameter—do (m) | 0.012 | 0.025 |
| Tube length—L (m) | 3 | 10 |
| Tube pitch—pt (m) | 0.015 | 0.03 |
| Central baffle spacing—Lbc (m) | 0.2 | 0.5 |
| Inlet baffle spacing—Lbi (m) | 0.2 | 0.5 |
| Outlet baffle spacing—Lbo (m) | 0.2 | 0.5 |
| Baffle spacing (%) | 15 | 40 |
| Width of bypass lane—wp (m) | 0.01 | 0.03 |
| Tube-to-baffle hole diametral clearance—δtb (m) | 0.0001 | 0.001 |
| Shell-to-baffle diametral clearance—δsb (m) | 0.001 | 0.005 |
| Tube thickness—thck (m) | 0.0002 | 0.002 |
| Nanoparticle ratio—φv (%) | 0 | 0.6 |
| Number of tube passes—Npass | 1, 2, 4, 6, 8 (discrete set) | |
| Number of sealing strip pairs—Nss | 1, 2, 4, 8 (discrete set) | |
| Tube layout—Tlayout (°) | 30, 45, 90 (discrete set) | |
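Table 11 defines a mixed continuous-discrete search space. A metaheuristic typically handles this by clipping continuous variables back into their bounds and snapping relaxed values of the discrete variables to the nearest admissible level; the helper below is an illustrative sketch of that repair step (using a subset of the Table 11 bounds), not the paper's implementation:

```python
import random

# Continuous bounds from Table 11 (representative subset)
BOUNDS = {
    "Ds": (0.3, 0.6),       # shell-side inside diameter (m)
    "do": (0.012, 0.025),   # tube-side outside diameter (m)
    "L": (3.0, 10.0),       # tube length (m)
    "phi_v": (0.0, 0.6),    # nanoparticle ratio (%)
}
# Discrete design variables from Table 11
DISCRETE = {
    "Npass": [1, 2, 4, 6, 8],
    "Nss": [1, 2, 4, 8],
    "Tlayout": [30, 45, 90],
}

def clip(name, value):
    # Repair a continuous variable that drifted outside its search bounds
    lo, hi = BOUNDS[name]
    return min(max(value, lo), hi)

def snap(name, value):
    # Round a relaxed continuous value to the nearest admissible discrete level
    return min(DISCRETE[name], key=lambda lvl: abs(lvl - value))

def random_candidate(rng=random):
    # Sample one feasible design point from the mixed search space
    cand = {k: rng.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}
    cand.update({k: rng.choice(levels) for k, levels in DISCRETE.items()})
    return cand

print(clip("Ds", 0.75))    # 0.6
print(snap("Npass", 3.4))  # 4
```

Every candidate evaluated against the thermo-economic model must pass through such a repair so that, for example, a mutated tube-pass count of 3.4 becomes the admissible value 4.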
Table 12. Optimal values of the considered design parameters for various heat exchanger configurations.

| Parameter | Water | Al2O3 | CuO | TiO2 | Cu | SiO2 | Boehmite |
|---|---|---|---|---|---|---|---|
| **Optimization variables** | | | | | | | |
| Shell-side inside diameter—Ds (m) | 0.458 | 0.456 | 0.451 | 0.495 | 0.433 | 0.429 | 0.457 |
| Tube-side outside diameter—do (mm) | 22.9 | 17.5 | 16.8 | 20.4 | 17.2 | 16.3 | 22.3 |
| Tube layout—Tlayout (°) | 45 | 45 | 45 | 45 | 45 | 45 | 45 |
| Number of tube passes—Npass | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| Tube length—L (m) | 4.59 | 3.12 | 3.22 | 3.01 | 3.52 | 3.24 | 4.41 |
| Tube pitch—pt (mm) | 28.7 | 29.7 | 29.3 | 29.7 | 27.8 | 29.1 | 28.8 |
| Central baffle spacing—Lbc (m) | 0.493 | 0.448 | 0.484 | 0.459 | 0.427 | 0.469 | 0.491 |
| Inlet baffle spacing—Lbi (m) | 0.398 | 0.335 | 0.469 | 0.425 | 0.446 | 0.463 | 0.399 |
| Outlet baffle spacing—Lbo (m) | 0.387 | 0.438 | 0.497 | 0.491 | 0.361 | 0.459 | 0.392 |
| Baffle spacing (%) | 41.242 | 30.932 | 33.372 | 26.832 | 39.324 | 31.632 | 39.873 |
| Width of bypass lane—wp (mm) | 14.9 | 23.8 | 13.7 | 19.3 | 27.6 | 17.4 | 15.2 |
| Tube-to-baffle hole diametral clearance—δtb (mm) | 0.354 | 0.65 | 0.527 | 0.421 | 0.419 | 0.422 | 0.367 |
| Shell-to-baffle diametral clearance—δsb (mm) | 3.562 | 3.762 | 3.289 | 3.361 | 2.772 | 4.183 | 3.601 |
| Number of sealing strip pairs—Nss | 8 | 2 | 1 | 2 | 8 | 2 | 8 |
| Tube thickness—thck (mm) | 1.2 | 1.2 | 1.6 | 1 | 0.7 | 1 | 1.2 |
| Nanoparticle ratio—φv (%) | 0.111 | 2.975 | 1.62 | 3.665 | 1.983 | 4.693 | 0.121 |
| **Model parameters** | | | | | | | |
| Transverse tube pitch—Xt (mm) | 40.3 | 42.1 | 41.8 | 42.4 | 39.3 | 40.9 | 40.6 |
| Longitudinal tube pitch—Xl (mm) | 20.3 | 21.5 | 20.7 | 21.3 | 19.7 | 20.5 | 20.4 |
| Total number of tubes—N | 149 | 277 | 297 | 246 | 256 | 271 | 151 |
| Tube clearance—Cl (mm) | 6.4 | 12.3 | 12.6 | 9.8 | 10.7 | 12.5 | 6.7 |
| Shell-side mass velocity—Gs (kg/m²·s) | 523.482 | 309.642 | 282.694 | 347.893 | 382.134 | 305.045 | 506.132 |
| Shell-side Reynolds number—Res | 278.333 | 117.932 | 101.987 | 153.245 | 142.832 | 107.99 | 249.783 |
| Shell-side heat transfer coefficient—hs (W/m²·K) | 531.892 | 519.983 | 481.563 | 542.981 | 512.782 | 531.697 | 526.891 |
| Pressure drop in the central section—Δpcr (Pa) | 3601.374 | 2306.482 | 1824.698 | 3179.421 | 2581.232 | 2232.91 | 3533.792 |
| Pressure drop in the window area—Δpw (Pa) | 10,193.742 | 5453.392 | 5192.784 | 5767.911 | 8041.273 | 5979.75 | 10,053.56 |
| Pressure drop in the inlet and outlet sections—Δpi-o (Pa) | 9856.392 | 4471.744 | 3163.942 | 4796.472 | 5198.481 | 3748.34 | 9457.232 |
| Total shell-side pressure drop—Δpshell (Pa) | 24,083.974 | 12,231.42 | 10,181.84 | 13,740.14 | 15,817.8 | 11,961.0 | 23,042.74 |
| Total number of baffles—Nb | 8 | 6 | 6 | 6 | 6 | 6 | 8 |
| Tube-side Reynolds number—Ret | 23,331.635 | 14,177.4 | 15,825.72 | 13,259.13 | 15,261.32 | 13,823.25 | 22,106.07 |
| Tube-side heat transfer coefficient—hi (W/m²·K) | 4421.732 | 4039.831 | 4960.753 | 3224.129 | 4009.93 | 4186.97 | 4384.231 |
| Overall heat transfer coefficient—Uo (W/m²·K) | 409.572 | 400.535 | 379.753 | 405.223 | 398.184 | 410.464 | 408.932 |
| Total heat transfer area—Ao (m²) | 45.113 | 46.484 | 48.421 | 46.78 | 47.532 | 45.127 | 45.001 |
| Effectiveness—ε | 0.2675 | 0.2696 | 0.2641 | 0.2847 | 0.2905 | 0.2626 | 0.2441 |
| Tube-side pressure drop—Δpt (Pa) | 5783.321 | 3899.932 | 6898.933 | 2181.42 | 4119.134 | 5522.524 | 5637.842 |
| Annual operating cost—Co (€/year) | 1423.848 | 760.231 | 713.123 | 806.071 | 958.283 | 783.63 | 1406.124 |
| Total discounted operating cost—CoD (€) | 8932.982 | 4672.832 | 4379.592 | 4947.941 | 5885.133 | 4815.06 | 8638.201 |
| Capital investment cost—Ci (€) | 16,298.733 | 16,525.932 | 16,946.78 | 16,488.13 | 16,615.212 | 16,301.7 | 16,277.391 |
| Total cost of heat exchanger—Ctot (€) | 25,231.71 | 21,198.76 | 21,326.37 | 21,436.07 | 22,500.345 | 21,116.13 | 24,915.591 |
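The abstract's headline result follows directly from the Ctot row of Table 12: water + SiO2 is the cheapest configuration, and its saving relative to plain water on the tube side is the reported 16.3%. A short check:

```python
# Total heat exchanger costs Ctot from Table 12 (EUR)
c_tot = {
    "Water": 25231.71, "Al2O3": 21198.76, "CuO": 21326.37,
    "TiO2": 21436.07, "Cu": 22500.345, "SiO2": 21116.13,
    "Boehmite": 24915.591,
}

cheapest = min(c_tot, key=c_tot.get)
saving_pct = 100.0 * (c_tot["Water"] - c_tot["SiO2"]) / c_tot["Water"]
print(cheapest)              # SiO2
print(round(saving_pct, 1))  # 16.3
```

The CoD and Ci rows are consistent with this: for each configuration Ctot is (to rounding) the sum of the discounted operating cost and the capital investment cost.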
Turgut, O.E.; Asker, M.; Yesiloz, H.B.; Genceli, H.; AL-Rawi, M. Chaotic Mountain Gazelle Optimizer Improved by Multiple Oppositional-Based Learning Variants for Theoretical Thermal Design Optimization of Heat Exchangers Using Nanofluids. Biomimetics 2025, 10, 454. https://doi.org/10.3390/biomimetics10070454