Symmetry
  • Feature Paper
  • Article
  • Open Access

16 November 2023

Application of Diversity-Maintaining Adaptive Rafflesia Optimization Algorithm to Engineering Optimisation Problems

1. College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
2. Department of Information Management, Chaoyang University of Technology, Taichung 41349, Taiwan
3. School of Advanced Manufacturing, Fuzhou University, Quanzhou 362200, China
4. College of Computer Science and Technology, Harbin Engineering University, Harbin 150001, China

Abstract

The Diversity-Maintained Adaptive Rafflesia Optimization Algorithm represents an enhanced version of the original Rafflesia Optimization Algorithm. The latter draws inspiration from the unique characteristics displayed by the Rafflesia during its growth, simulating the entire lifecycle from blooming to seed dispersion. The incorporation of the Adaptive Weight Adjustment Strategy and the Diversity Maintenance Strategy assists the algorithm in averting premature convergence to local optima, subsequently bolstering its global search capabilities. When tested on the CEC2013 benchmark functions under a dimension of 30, the new algorithm was compared with ten optimization algorithms, including commonly used classical algorithms, such as PSO, DE, CSO, SCA, and the newly introduced ROA. Evaluation metrics included mean and variance, and the new algorithm outperformed the others on a majority of the test functions. Concurrently, the new algorithm was applied to six real-world engineering problems: tensile/compressive spring design, pressure vessel design, three-bar truss design, welded beam design, reducer design, and gear system design. In these comparative optimizations against other mainstream algorithms, the objective function's mean value optimized by the new algorithm consistently surpassed that of other algorithms across all six engineering challenges. Such experimental outcomes validate the efficiency and reliability of the Diversity-Maintained Adaptive Rafflesia Optimization Algorithm in tackling optimization challenges. The Diversity-Maintained Adaptive Rafflesia Optimization Algorithm is capable of tuning the parameter values for the optimization of symmetry and asymmetry functions. As part of our future research endeavors, we aim to deploy this algorithm on an even broader array of diverse and distinct optimization problems, such as the arrangement of wireless sensor nodes, further solidifying its widespread applicability and efficacy.

1. Introduction

1.1. Meta-Heuristic Algorithms

In recent years, with the rapid development of intelligent optimization algorithms, numerous optimization problems have been addressed effectively. An optimization problem is characterized by the quest to find the best solution or parameter values that maximize or minimize an objective function within a vast array of solutions and parameters, subject to certain constraints. Meta-heuristic optimization algorithms are useful for solving the constrained optimization of both symmetric and asymmetric functions. These problems encompass essential components such as the objective function, variables, and constraints. The domains covered by optimization problems are diverse, spanning areas such as engineering optimization, data mining, machine learning, wireless sensor deployment, resource scheduling, digital image processing, mechanical design, and path planning, among others. When confronted with these challenges, traditional optimization methods, such as Newton's method, necessitate an exhaustive traversal of the entire search space, a process that is often time-consuming. For certain complex optimization problems, the vastness of the search space, high complexity, and the presence of constraints and nonlinearities make pinpointing the optimal solutions increasingly challenging. Hence, there is a pronounced emphasis on seeking high-performance, rapidly converging intelligent optimization algorithms to tackle these intricate problems.
Inspired by natural social laws and collective biological behaviors, researchers have begun to introduce a category of algorithms known as metaheuristic algorithms. Compared to traditional optimization algorithms, these metaheuristics exhibit superior robustness, explorative capacity, and adaptability. Some metaheuristic algorithms, originating as early as the 1970s and 1980s, have been proposed. For instance, the classic Genetic Algorithm and Simulated Annealing Algorithm were introduced, leveraging the simulation of natural biological evolution and the annealing process of solid materials to search spaces for optimal solutions. Empirical tests indicated that these algorithms achieved favorable outcomes in solving intricate problems.
With the continued evolution of scientific research, an increasing number of metaheuristic algorithms have been developed and studied. In 1995, for example, Storn and colleagues introduced the Differential Evolution Algorithm, rooted in swarm intelligence theory. Initially conceived for Chebyshev polynomial problems, it was subsequently found to be effective for other optimization problems. There are numerous algorithms based on swarm intelligence, such as Particle Swarm Optimization [] (PSO), inspired by the flocking and clustering behaviors of birds during foraging. Owing to its simplicity and ease of implementation, PSO quickly garnered considerable attention. By simulating the behavior of ants in their unaided search for the shortest route between food and their nest, Ant Colony Optimization [,] (ACO) was proposed, primarily for shortest-path problems. Drawing inspiration from the behavioral traits and hunting strategies of cats, researchers introduced Cat Swarm Optimization [] (CSO). By emulating the echolocation features and flight patterns of bats, researchers presented the Bat Algorithm [,] (BA). The Sine-Cosine Algorithm [] (SCA), developed based on the continuity and periodicity of sine and cosine functions, simulates solution optimization and searching. In recent years, newer optimization algorithms have emerged, such as the Goose Optimization Algorithm [] (GOA), which is founded on the behavior of geese during predation, particularly their exploratory behavior prior to spotting fish and their chasing behavior once fish are detected. The Artificial Fish Swarm Algorithm [,] (AFSA) was proposed, inspired by fish interactions during predation and predator avoidance. The Artificial Bee Colony Algorithm [] (ABCA) emanates from bee foraging behaviors, replicating the bees’ information exchange and foraging strategies. 
Bamboo Forest Optimization [] (BFGO) was inspired by the growth characteristics of bamboo, while the Rafflesia Optimization Algorithm [] (ROA) and the Binary Rafflesia Optimization Algorithm were conceived based on the blooming and reproduction patterns of the Rafflesia flower. The Grey Wolf Optimizer [,] (GWO) mimics the societal behaviors of grey wolves, particularly in their hunting endeavors. Based on the migratory and foraging behaviors of whales, the Whale Optimization Algorithm [,] (WOA) was introduced to optimize solution spaces. Additionally, algorithms such as the Gaining–Sharing Knowledge-Based Algorithm [] (GSK) have been introduced. Upon their introduction, these metaheuristic algorithms have been successfully deployed in path planning, engineering applications, and sensor placements, among other complex optimization challenges, consistently delivering commendable results.
With the continued advancements in intelligent optimization algorithms in recent years, numerous optimization strategies have been proposed by researchers and successfully applied to real-world optimization problems. However, no single algorithm has been identified that can universally address all optimization challenges. This observation aligns with the No Free Lunch Theorem. When confronted with a particular complex optimization problem, an algorithm might produce commendable outcomes post-optimization. Yet, the same algorithm may underperform when tasked with different challenges. This suggests that algorithms can often become trapped in local optima when applied in practical scenarios. To mitigate this issue, researchers have sought to refine existing algorithms and introduce enhanced ones, aiming to bolster their convergence capabilities. One such effort, the Butterfly Optimization Algorithm (BOA), was indeed grounded in the No Free Lunch Theorem.
To achieve superior convergence results, modifications were made to the Rafflesia Optimization Algorithm, incorporating adaptive weight adjustment strategies and diversity preservation techniques. This revamped algorithm, designed to avoid local optima entrapments and demonstrate augmented convergence capabilities, has been termed the Diversity-Maintained Adaptive Rafflesia Optimization Algorithm (AROA). Subsequently, the AROA’s performance was assessed and tested using the CEC2013 benchmark function set. It was compared with algorithms such as ROA, PSO, SCA, WOA, GSA [], DE [,], CSO [], BA, and BOA for problems with a dimensionality of 30. Comparative outcomes revealed that AROA exhibited superior convergence performance. Moreover, AROA was applied to engineering optimization problems to gauge its efficacy in practical applications. When juxtaposed with other algorithms in this context, AROA consistently delivered superior results.

1.2. Algorithmic Features or Principles

Meta-heuristic algorithms, such as Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Cat Swarm Optimization (CSO), the Artificial Bee Colony Algorithm (ABCA), Artificial Fish Swarm Algorithm (AFSA), Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Bat Algorithm (BA), Sine-Cosine Algorithm (SCA), Gaining–Sharing Knowledge-Based Algorithm (GSK), and the Rafflesia Optimization Algorithm (ROA), are evolutionary or swarm intelligence algorithms designed to address optimization problems. The fundamental distinctions between them lie in their basic principles and characteristics.
A common trait amongst these algorithms is that they emulate behaviors observed in nature or mathematical principles to identify optimal solutions. For instance, PSO mimics the flocking behavior of birds, ACO simulates ant foraging, and CSO emulates hunting behavior of cats. These algorithms employ various strategies to guide their search processes, exploring the search space to eventually pinpoint either global optima or high-quality solutions.
However, these meta-heuristic algorithms have unique foundational principles and essential characteristics. For instance, in PSO, which imitates the flocking behavior of birds, particles adjust their velocities and positions based on personal and global best positions. ACO, drawing inspiration from ants searching for food, uses pheromones as guidance for the search process. CSO, emulating cat hunting and migration behaviors, introduces hunting and migration phases to enhance diversity. ABCA mirrors bee recruiting, foraging, and information-sharing behaviors, fostering inter-bee cooperation to pinpoint optimal solutions. AFSA emulates fish foraging and migration, encompassing both individual and collective behaviors. GWO replicates social behaviors within wolf packs, incorporating alpha leadership and followership. WOA imitates whale search and foraging, integrating linearly decreasing predation behaviors. BA emulates bat foraging and echolocation, using frequency and amplitude adjustments for exploration. SCA employs the mathematical principles of sine and cosine functions to generate novel solutions, while GSK, predicated on knowledge-sharing and acquisition mechanisms, optimizes solutions through individual cooperation.
Regarding application domains, certain algorithms, such as PSO, GWO, and ROA, are more suited for continuous optimization problems, whereas ACO and CSO are tailored for discrete optimization problems. Algorithms such as ABCA and AFSA can often be employed across diverse problem types, including both continuous and discrete optimization.
From an information dissemination perspective, ACO deploys pheromones to guide the search, with pheromone deposition and evaporation influencing path selection. ABCA and AFSA, despite also involving information dissemination, differ in their mechanisms: bees and fish collaborate and exchange information to locate optimal solutions.
In terms of update strategies, algorithms such as CSO, AFSA, and GWO introduce unique diversity maintenance strategies, such as hunting and migration behaviors and individual versus collective behaviors, to enhance search diversity. In summary, the critical distinguishing features among these meta-heuristic algorithms encompass their foundational principles, application domains, information dissemination mechanisms, diversity maintenance strategies, parameter tuning, and adaptiveness. The choice of the appropriate algorithm largely depends on the nature and specific requirements of the problem at hand.
The advantages of these algorithms over traditional optimization techniques lie in their ability to guide a search by simulating behaviors observed in nature or by leveraging mathematical principles, thereby enhancing their exploration capabilities for complex, multimodal problems. During the search process, they are able to maintain diversity and possess both global and local search capabilities. As a result, they are typically more adept at locating global optima or high-quality solutions. The choice of algorithm often hinges upon the specific nature of the problem and the configuration of algorithmic parameters. The flexibility and versatility of these algorithms render them potent tools for addressing a wide array of optimization challenges.
The remaining structure of this paper is organized as follows: Section 2 presents the preparatory work, introducing the original Rafflesia Optimization Algorithm, the strategies employed for the improved algorithm, and the optimization domains and challenges addressed in this study. Section 3 delves into the specific improvement details of applying the adaptive weight adjustment strategy and diversity maintenance strategy to the Rafflesia Optimization Algorithm. Section 4 conducts tests on the algorithm, presenting comparative test results and their analysis. Section 5 evaluates the algorithm's performance in engineering application problems. Section 6 discusses the algorithm's applicability and time complexity and also highlights the performance capabilities of some benchmark algorithms. The final section, Section 7, summarizes the main research contributions of this paper and offers perspectives on future research directions.

3. Method

This section primarily delves into the intricate details of the enhancements made to the AROA algorithm. It elucidates how the strategies employed within this improved algorithm give it an edge over other optimization algorithms. The necessity of incorporating an adaptive weight updating strategy and a diversity maintenance strategy is also underscored.

3.1. Improvement Details

Comparing the original algorithm (ROA) with the newly enhanced version (AROA), several key modifications and refinements have been undertaken.
Firstly, the integration of an Adaptive Weight Adjustment Strategy has been introduced, primarily focusing on the third phase of the algorithm. A mechanism for adaptive weight adjustment has been newly added, allowing for dynamic updates of the parameters w0, w1, and phi. This ensures that the algorithm's movement within the search space is more flexible, allowing it to adjust its search strategy based on the iteration count. Such adjustments empower the algorithm with a robust global search capability in the early stages, followed by an efficient local search ability in the later stages, thereby enhancing its convergence rate.
Secondly, the inclusion of a diversity maintenance strategy has been made, with modifications also being made in the third phase of the algorithm. A similarity computation method has been introduced to assess if individuals within the population are overly alike. When two individuals are found to be too similar, one is reset. This strategy ensures the maintenance of diversity within the population, preventing the premature convergence to local optima.
Thirdly, probabilistic controls for weight adjustment and diversity maintenance have been implemented. Using the condition rand < 0.7, weight adjustments and diversity maintenance are not conducted at every iteration; instead, they are executed with a certain probability. This introduces an element of randomness to the algorithm, preventing premature convergence to a certain extent.
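This probabilistic gating can be sketched as follows. This is a minimal illustration, not the authors' code: the function names `update_weights` and `maintain_diversity` are placeholders for the two strategies, and only the `rand < 0.7` condition comes from the paper.

```python
import random

def maybe_update(population, update_weights, maintain_diversity, p=0.7):
    """Apply the weight adjustment and diversity maintenance strategies
    only with probability p (the paper uses the condition rand < 0.7),
    so neither strategy runs at every iteration."""
    if random.random() < p:
        update_weights()
        maintain_diversity(population)
```

Because both strategies sit behind a single random draw, roughly 30% of iterations skip them entirely, which is the source of the extra randomness described above.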
For a more in-depth understanding of the improvements, one can refer to the pseudocode of ROA labeled as Algorithm 1 and that of AROA depicted as Algorithm 2. Additionally, the flowchart in Figure 2 can also be consulted for further clarity.
Figure 2. The flowchart of AROA.

3.1.1. Adaptive Weight Adjustment Improvement

Within the adaptive weight adjustment strategy, the initial focus revolves around the selection of weight parameters. The algorithm predominantly targets the adaptive adjustment of the weight parameters w0, w1, and phi. Weights hold significant importance in evolutionary algorithms given their capability to govern the direction and speed of the search process. To initialize the weight parameters, w0 and w1 are set to 1/f, where f represents the wing-beat frequency. The phi value is initialized to −0.78545.
As for the weight updating strategy, within the enhanced Rafflesia algorithm, the weight parameters are updated based on the iteration count, denoted as iter, and the periodic parameter pd. An exponential decay strategy is employed for these updates, wherein the weights are diminished by multiplying them with an exponential function. This can be expressed through the subsequent formulas:
w0 = w0 · exp(−(iter − pd/2) / (pd/2))
w1 = w1 · exp(−(iter − pd/2) / (pd/2))
phi = phi · exp(−(iter − pd/2) / (pd/2))
The weight reduction strategy is devised based on the iteration number, iter, and the periodic parameter, pd. As iterations progress, these weight parameters are methodically diminished, subsequently influencing the motion pattern of the insects. The behavior of the insects in different phases is governed by these weight parameters. Both w0 and w1 are instrumental in striking a balance between the exploration and convergence of the insects, while phi modulates the phase difference between the translational and rotational motions of the insects. Through variations in the fitness function, these parameters are adaptively set to influence the algorithm's exploration and convergence performance. The frequency of weight parameter updates is regulated by the periodic parameter pd: a weight update operation is invoked whenever the condition rem(iter, pd) > pd/2 is met.
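The decay step described above can be sketched as follows. This is a sketch rather than the authors' implementation, and the grouping of the decay exponent exp(−(iter − pd/2)/(pd/2)) is one plausible reading of the paper's typeset equations; the function name and signature are my own.

```python
import math

def decay_weights(w0, w1, phi, itr, pd):
    """Exponentially decay the adaptive weights, but only when the
    periodic trigger rem(iter, pd) > pd/2 fires, as in Algorithm 2."""
    if itr % pd > pd / 2:  # rem(iter, pd) > pd/2
        factor = math.exp(-(itr - pd / 2) / (pd / 2))
        w0, w1, phi = w0 * factor, w1 * factor, phi * factor
    return w0, w1, phi
```

With pd = 10, for example, iterations 6-9 of every block of ten shrink the weights, while iterations 0-5 leave them untouched, so the decay happens in periodic bursts rather than continuously.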
The weight parameters play a pivotal role in dictating how the insect's position is updated. This positional adjustment is encapsulated by the subsequent formula for the first phase, which will be elaborated upon, where target_position is the target position and random_vector is a random vector. The formula elucidates how the insect's position is adjusted toward the target and perturbed by the random vector, modulated by the weight parameters.
new_position = old_position + w0 · (target_position − old_position) + w1 · (random_vector − 0.5) + phi
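A per-dimension sketch of this position update, term by term, might look as follows. The function name and list-based representation are illustrative assumptions, not the paper's code.

```python
import random

def update_position(old_position, target_position, w0, w1, phi):
    """Move one individual toward the target position, perturbed by a
    uniform random vector: new = old + w0*(target - old)
    + w1*(rand - 0.5) + phi, applied element-wise."""
    return [
        old + w0 * (target - old) + w1 * (random.random() - 0.5) + phi
        for old, target in zip(old_position, target_position)
    ]
```

Note that with w1 = 0 and phi = 0 the update reduces to a pure pull toward the target scaled by w0, which is why decaying w0 over iterations shifts the algorithm from exploration to exploitation.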

3.1.2. Diversity Maintenance Improvement

The enhancements made to the algorithm also encompass the integration of a diversity maintenance strategy. The primary objective of this strategy is to uphold population diversity in each iteration, forestalling the algorithm from becoming ensnared in local optima, thereby augmenting its global search performance.
When refining the ROA with the Diversity Maintenance Strategy, a similarity threshold, denoted as similarity_threshold, was initially defined. This threshold serves as a metric to gauge the similarity between individual entities within the population. A simplified form of the Euclidean distance was employed as the similarity measure. The formula for this similarity measure is presented as follows:
similarity = (1/dim) · Σ_{k=1}^{dim} |pop(i, k) − pop(j, k)|
In this context, pop(i, k) and pop(j, k) represent the position vectors of the i-th and j-th individuals within the population, respectively, while dim indicates the dimensionality of the problem. The similarity measure sums the absolute differences between the dimensions of two individuals and then divides the result by the dimensionality, dim. Once the similarity has been computed, the algorithm embarks on the process of diversity maintenance. For each pair of individuals within the population, nested iterative loops ensure that each individual is compared against the rest. If the similarity between two individuals exceeds the predefined threshold, similarity_threshold, diversity maintenance operations are executed. The condition for diversity maintenance can be articulated as: if similarity(i, j) > similarity_threshold.
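The measure itself is a one-liner; the sketch below assumes the two individuals are plain lists of coordinates, and the function name is my own.

```python
def similarity(a, b):
    """Mean absolute per-dimension difference between two position
    vectors, i.e. (1/dim) * sum_k |a[k] - b[k]| (Equation (20))."""
    dim = len(a)
    return sum(abs(x - y) for x, y in zip(a, b)) / dim
```

Identical individuals yield 0, so smaller values indicate closer individuals; the threshold test then decides whether a reset is triggered.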
Within the diversity maintenance operation, one individual, indexed by reset_index, is randomly chosen from the two similar entities. This selected individual is subsequently reverted to its initial state by invoking the initialization function to regenerate a random individual. This resetting maneuver enhances population diversity and furnishes the algorithm with the means to break free from local optima. The individual reset is characterized by
reset_index = randi([i, j])
pop(reset_index, :) = random_initialization()
where random_initialization() stands for a function devised to spawn a random individual. In essence, the Diversity Maintenance Strategy ensures population diversity by gauging inter-individual similarity and initiating an individual reset whenever the measure surpasses the set threshold. Such an approach augments the algorithm's global search performance, curtailing the risks associated with premature convergence to local optima. This strategy proves invaluable across numerous optimization algorithms, especially when grappling with intricate problems.
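Putting the pairwise comparison and the random reset together, the whole strategy can be sketched as below. The reset condition follows the paper's Algorithm 2 as written; the function name, the list-of-lists population layout, and the uniform re-initialization within [lb, ub] are assumptions of this sketch.

```python
import random

def maintain_diversity(pop, lb, ub, threshold=0.8):
    """Diversity maintenance pass: compare every pair of individuals
    with the mean-absolute-difference measure and, when the measure
    crosses the threshold (0.8 in the paper), re-initialize one of the
    two, chosen at random, uniformly within [lb, ub]."""
    dim = len(pop[0])
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            measure = sum(abs(x - y) for x, y in zip(pop[i], pop[j])) / dim
            if measure > threshold:
                reset_index = random.choice([i, j])  # randi([i, j])
                pop[reset_index] = [random.uniform(lb, ub) for _ in range(dim)]
    return pop
```

A single pass costs O(sizepop² · dim) comparisons, which is one reason the strategy is only executed with probability 0.7 rather than at every iteration.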
Algorithm 2 The pseudo code of AROA.
// Third stage
if rem(iter, pd) > pd/2 then    // rem denotes the remainder operation
    for i = 1 to sizepop do
        rd ← rand × (ub − lb) + lb
        for j = 1 to dim do
            pop(i, j) ← pop_best(j) + rd × exp(iter/Max_iter − 1) × sign(rand − 0.5)
        end for
    end for
    if rand < 0.7 then
        // Update weights
        w0 ← w0 × exp(−(iter − pd/2)/(pd/2))    ▹ Equation (16)
        w1 ← w1 × exp(−(iter − pd/2)/(pd/2))    ▹ Equation (17)
        phi ← phi × exp(−(iter − pd/2)/(pd/2))    ▹ Equation (18)
        // Diversity maintenance
        similarity_threshold ← 0.8
        for i = 1 to sizepop do
            for j = i + 1 to sizepop do
                similarity ← (1/dim) × Σ_{k=1}^{dim} |pop(i, k) − pop(j, k)|    ▹ Equation (20)
                if similarity > similarity_threshold then
                    reset_index ← randi([i, j])    ▹ Equation (21)
                    pop(reset_index, :) ← initialization(1, dim, ub, lb)    ▹ Equation (22)
                end if
            end for
        end for
    end if
end if

3.2. Role and Necessity of Strategy

Adaptive weight adjustment fosters an improved balance between global and local searches within the algorithm, thereby amplifying the probability of pinpointing the global optimum. Diversity maintenance ensures adequate variation among individuals within the population, facilitating a more comprehensive exploration of the search space. This reduces the risk of entrapment in local optima at the expense of overlooking the global optimum. Through probabilistic control of weight adjustment and diversity maintenance, an element of randomness and unpredictability is injected into the algorithm, bolstering its robustness. With these enhancements, the novel optimization algorithm might, in certain scenarios, outstrip the original algorithm and other optimization techniques in terms of convergence speed and solution quality. Indeed, its efficacy has been corroborated using the CEC2013 benchmark function suite and in addressing specific engineering application problems.
To delve deeper, let us consider six distinct engineering applications and the potential advantages of AROA in these domains. In Section 5, this paper delineates several specific engineering applications, highlighting the treatment of varying constraint types. For instance, spring design might encompass different types of elasticity models and materials. Pressure vessel design may involve myriad constraints, such as tensile strength and pressure endurance. Welded beam design might encompass constraints relating to weld strength and material durability, while gearbox design would perhaps necessitate consideration of multiple parameters and objectives. Gear system design could involve constraints on gear ratios and gear strength. AROA, through its adaptive weight adjustment strategy, can deftly adapt to these diverse design variables, ensuring all constraints are aptly met. Especially when grappling with tension/compression spring design and the intricate design spaces commonly found in gearbox design, AROA's integration of the diversity maintenance strategy amplifies its global search prowess. Endowed with a refined search mechanism, it evades premature entrapment in local optima and sweeps the design space more exhaustively, tailoring its approach to the problem's nuances. In these engineering realms, AROA's enhancements might manifest as heightened adaptability, swifter convergence, and superior global search capability, rendering it more adept at handling complex, multi-objective real-world engineering issues. The specific merits necessitate validation through actual case studies and empirical results; Section 5 of this article provides a meticulous exposition of the experimental outcomes.

4. Experiments

The experimental section primarily entailed a performance assessment of the algorithm. In this evaluation, the algorithm was validated against the CEC2013 benchmark test suite. Finally, an analysis of the experimental data was completed.

4.1. Experiments Results

Testing was conducted using a 30-dimensional configuration, and the average fitness values (mean) and standard deviations (std) were documented after 30 independent runs of each algorithm. This approach provided a holistic assessment, affirming the efficacy of the newly proposed method. An additional tabular reference (Table 1) was incorporated, detailing each algorithm under comparison, alongside their respective parameter settings. This augmentation was made to enhance the clarity and observability of the comparative framework. Table 2 and Table 3 showcase the statistical outcomes for each algorithm. At the conclusion of each table, the term “win” quantifies the number of occasions on which the AROA surpassed its competitors in terms of the evaluation metric “Mean”. From the data presented in these tables, it can be observed that AROA exhibited superior performance in terms of both average fitness values (mean) and standard deviations (std).
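The per-function statistics reported in the tables follow the standard protocol of aggregating the best fitness from each independent run. A minimal sketch of that aggregation, using the sample standard deviation and a toy list in place of 30 real run results:

```python
import statistics

def summarize_runs(best_fitness_per_run):
    """Mean and sample standard deviation of the best fitness values
    collected over independent runs, matching the 'mean'/'std'
    columns reported for each benchmark function."""
    return (statistics.mean(best_fitness_per_run),
            statistics.stdev(best_fitness_per_run))
```

Whether the paper uses the sample or the population standard deviation is not stated; the sketch assumes the sample form.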
Table 1. Parameter settings for each related algorithm.
Table 2. Assessment result 1.
Table 3. Assessment result 2.
This study juxtaposed the enhanced algorithm, AROA, with its predecessor, ROA, while also drawing comparisons with eight other optimization algorithms. These encompass the Particle Swarm Optimization (PSO), Whale Optimization Algorithm (WOA), Gravitational Search Algorithm (GSA), Differential Evolution (DE), Cat Swarm Optimization (CSO), Butterfly Optimization Algorithm (BOA), Bat Algorithm (BA), and the Sine Cosine Algorithm (SCA).
All experiments were conducted using MATLAB 2021a software. For algorithm evaluation, the CEC2013 [] benchmark test functions were employed to assess the performance of AROA. The CEC2013 [] test function set serves as a standard benchmark for evaluating the efficacy of optimization algorithms. Comprising 28 distinct test functions, the suite encompasses unimodal, multimodal, and some intricate composite test functions. These are designed to simulate various types of optimization problems, ensuring a comprehensive appraisal of an algorithm's capabilities. Throughout the testing procedure, the dimensionality of the test functions was set at D = 30. Consistent parameter settings were maintained, with a population size of N = 30, and each algorithm was evaluated over 1000 iterations.
Table 2 and Table 3 present the results of the algorithms on the test function suite. A comparative assessment, based on the average values obtained after 30 independent runs, revealed performance differences between AROA, ROA, and eight other intelligent optimization algorithms. As the data show, the AROA algorithm outperformed the ROA algorithm on 20 of the CEC2013 test functions, predominantly emerging superior. Comparable results were observed for the functions F5, F10, F16, F19, and F20. When pitted against the PSO algorithm, AROA outshone it on 19 test functions. Against WOA, AROA excelled on 17 test functions. AROA demonstrated superior performance on 21 functions when contrasted with GSA and on 17 functions each against DE and CSO. In comparisons with BOA and SCA, AROA prevailed on 27 functions and outperformed BA on 23 functions. Collectively, this assessment underscores AROA's enhanced performance across the test functions.
To offer a clearer visualization of the experimental outcomes, convergence curves were plotted for a random selection of functions drawn from the three function types (unimodal, multimodal, and complex). Twelve test functions were selected: F1, F2, F6, F9, F10, F11, F12, F13, F21, F23, F27, and F28. The graphical depictions of these convergence outcomes are presented in Figure 3a–l.
Figure 3. Convergence curves of the 10 algorithms on selected CEC2013 benchmark test functions.

4.2. Experimental Analysis

Upon analyzing the experimental data, it can be discerned that the integration of Adaptive Weight Updating and Diversity Maintenance Strategies typically exerts a positive influence on algorithmic performance.
Primarily, the Diversity Maintenance Strategy enhances particle diversity. When an algorithm maintains a diverse particle swarm, it is more apt to encompass a larger search space. This proves particularly beneficial in circumventing local optima and potentially discovering the global optimum.
Additionally, adaptive weight updates facilitate exploration during the algorithm’s early stages. Broad-ranging searches, or exploration, are paramount in these initial phases. Such a strategy ensures that the algorithm does not become confined to a specific region prematurely, granting it the opportunity to discern promising regions within the entirety of the search space. This aspect becomes particularly salient when scrutinized alongside specific CEC2013 test functions.
For unimodal problems, adaptive weight updating aids the algorithm in swiftly pinpointing the global optimum. During the algorithm’s nascent stages, a preference for higher weight values is observed to boost exploration. As iterations increase, this weight gradually diminishes, accentuating exploitation and steering the algorithm closer to the global optimum. Even though the search space of unimodal problems is relatively straightforward, retaining diversity is essential to ensure the algorithm does not succumb prematurely to potential local optima or suboptimal zones.
The complexity of multimodal problems stems from the presence of numerous local optima. In the early stages, a substantial weight is required for extensive exploration; as the algorithm progresses, the weight may gradually decrease, and adaptive weight updating then assists the algorithm in conducting precise searches within the peak regions it has identified. Maintaining diversity is therefore pivotal in multimodal problems: it ensures that the algorithm comprehensively scours the search space and is not easily trapped by local optima.
Complex functions may exhibit diverse characteristics, such as local optima, plateaus, peaks, and valleys. Adaptive weight updating enables the algorithm to adapt to these features at various stages, broadly exploring initially and conducting precise searches in promising regions later on. To eschew complexity pitfalls, upholding diversity ensures the algorithm does not become ensnared within any specific region throughout the search, retaining the potential to unearth novel, promising areas.
In summation, the inclusion of adaptive weight updating and diversity maintenance strategies empowers the algorithm to adeptly balance exploration and exploitation, resulting in enhanced performance across diverse problem types. For unimodal problems, the algorithm can identify the global optimum more swiftly. In the context of multimodal and complex functions, the algorithm is less prone to being trapped in local optima, having a heightened chance of locating the global optimum. To vividly illustrate convergence efficacy, we have also selected several illustrative convergence plots from each category of test functions for display.

5. Application

This section predominantly elucidates the contextual relationship between the ROA and other metaheuristic algorithms with engineering optimization problems. A comparative performance evaluation of various algorithms across six distinct engineering optimization problems is undertaken, followed by an analytical appraisal of the gathered data.

5.1. Application Background

Metaheuristic algorithms emulate the evolutionary processes observed in nature. These algorithms are conceived through observing behaviors in nature and abstracting them into computational methodologies. Natural evolution has been proven to be an effective optimization process; thus, leveraging these strategies to address engineering challenges becomes intuitively appealing. In terms of search capabilities, nature-inspired algorithms are often endowed with robust global search abilities. This suggests that they can explore various regions of the solution space without easily succumbing to local optima. Such a capability is paramount for specific engineering optimization problems, given that they frequently exhibit multiple local optima and intricate search spaces. Metaheuristic algorithms, aptly equipped in this domain, proffer an efficacious approach. Pertaining to adaptability, metaheuristic algorithms typically demonstrate high adaptability. This denotes their capacity to self-adjust to tackle disparate problems and dynamic environments. In engineering optimization, this becomes a salient trait, as real-world problems can evolve over time. Addressing multi-objectivity and constraint handling, myriad engineering challenges are multi-objective and encompass diverse constraints. Nature-inspired algorithms often incorporate inherent mechanisms to manage these multi-objectives and constraints, rendering them ideal solutions for such quandaries. In practical engineering problem solving, due to their intuitive and flexible nature, these nature-inspired algorithms have been successfully applied across various real-world engineering domains, ranging from aerospace engineering to power system optimization.
The ROA algorithm, introduced herein, is inspired by the pollen dissemination mechanism. This mechanism permits the algorithm to extensively scour the solution space, enhancing its likelihood of pinpointing the global optimum. Moreover, the ROA algorithm can be seamlessly extended to multi-objective optimization problems, concurrently handling multiple constraints: it contemplates multiple objectives and identifies solutions that satisfy all constraints. The advanced version, AROA, can autonomously adjust its search strategy and adaptively update weights as iterations unfold, thereby seeking optimal solutions more efficiently. Furthermore, the algorithm is not reliant on specific problem characteristics, bestowing it with commendable versatility and making it apt for a gamut of engineering optimization challenges. In conclusion, leveraging the attributes of the Rafflesia Optimization Algorithm, its integration with engineering optimization challenges can substantially aid in discerning optimal solutions.

5.2. Applied Experiments

For the initial five engineering applications, the performance of the Enhanced Rafflesia Optimization Algorithm (AROA), the original Rafflesia Optimization Algorithm [] (ROA), Whale Optimization Algorithm (WOA), Grey Wolf Optimizer (GWO), Harris Hawk Optimizer [] (HHO), and Osprey Optimization Algorithm [] (OOA) was assessed. The evaluation results are delineated in Table 1. For the sixth engineering application [], the performance of the Enhanced Rafflesia Optimization Algorithm (AROA), the original Rafflesia Optimization Algorithm (ROA), Grey Wolf Optimizer (GWO), Dung Beetle Optimizer [] (DBO), Dandelion Optimization [] (DO), Harris Hawk Optimizer (HHO), and Snake Swarm Optimization [,] (SO) was evaluated.
In each engineering application, the efficacy of the novel algorithm (AROA) was gauged by contrasting its optimal values against those derived from other algorithms. In this segment, a total of 12 tabulated datasets are provided, detailing the constraints and objective functions for six engineering application problems, along with the optimal values obtained by various algorithms across these six applications, as portrayed in Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14 and Table 15. Through a close inspection of the optimal values, it was discerned that the AROA algorithm frequently surpassed its counterparts, thereby underscoring its superior performance in practical applications.
Table 4. Engineering optimization problem 1.
Name  Function
Consider  x = [x1 x2 x3] = [d D N]
Minimize  f(x) = (x3 + 2) · x2 · x1²
Subject to  g1(x) = 1 − x2³·x3 / (71785·x1⁴) ≤ 0
  g2(x) = (4·x2² − x1·x2) / (12566·(x2·x1³ − x1⁴)) + 1/(5108·x1²) − 1 ≤ 0
  g3(x) = 1 − 140.45·x1 / (x2²·x3) ≤ 0
  g4(x) = (x1 + x2)/1.5 − 1 ≤ 0
Parameter ranges  0.05 ≤ x1 ≤ 2, 0.25 ≤ x2 ≤ 1.3, 2 ≤ x3 ≤ 15
Table 5. Engineering optimization problem 1 results.
Algorithm  x1  x2  x3  fbest
WOA  5.890 × 10⁻²  5.563 × 10⁻¹  5.018 × 10⁰  1.355 × 10⁻²
HHO  5.000 × 10⁻²  3.137 × 10⁻¹  1.453 × 10¹  1.297 × 10⁻²
GWO  5.000 × 10⁻²  3.173 × 10⁻¹  1.406 × 10¹  1.274 × 10⁻²
OOA  5.659 × 10⁻²  4.865 × 10⁻¹  6.398 × 10⁰  1.308 × 10⁻²
AROA  5.089 × 10⁻²  3.377 × 10⁻¹  1.250 × 10¹  1.268 × 10⁻²
ROA  6.257 × 10⁻²  6.791 × 10⁻¹  3.516 × 10⁰  1.466 × 10⁻²

5.2.1. Tension/Compression Spring Design Problems

The design of tension and compression springs [,] is a pivotal concern within the realm of engineering optimization. Such challenges encompass the design and optimization of springs utilized for regulating force, displacement, or energy to meet specific performance and constraint demands inherent to certain engineering applications. These springs find extensive deployment across diverse sectors, including automotive suspension systems, architectural structures, mechanical apparatuses, and electronic devices.
In addressing the tension/compression spring design quandary, engineers are initially compelled to select an apt spring material. This selection process is intrinsically governed by considerations such as the material’s modulus of elasticity, yield strength, and density, ensuring the spring’s optimal performance and durability. Subsequently, the choice of geometric parameters is paramount, with attributes such as wire diameter, outer diameter, coiling cycles, coiling diameter, and spring length, all of which directly impinge upon the spring’s rigidity and displacement characteristics.
To discern the most propitious amalgamation of design parameters, engineers typically resort to engineering optimization techniques, including genetic algorithms, particle swarm optimization, and simulated annealing. These algorithms are adept at autonomously traversing the design space to satisfy the stipulated performance and constraint criteria. The selection of an optimization algorithm hinges on the intricacy of the problem and the availability of computational resources.
The primary objective of this particular optimization endeavor for tension/compression spring design is to effectuate weight minimization by selecting three variables: wire diameter (d), mean coil diameter (D), and the number of active coils (N). The objective function is delineated as follows.
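Concretely, the objective and constraints of Table 4 can be evaluated with a static penalty scheme, as is common when metaheuristics handle constrained problems. The sketch below is illustrative only; the quadratic penalty and its coefficient are our assumptions, not the paper's exact constraint-handling rule:

```python
def spring_weight(d, D, N):
    """Spring weight f(x) = (N + 2) * D * d^2 from Table 4."""
    return (N + 2) * D * d**2

def spring_constraints(d, D, N):
    """Constraint values g1..g4 from Table 4; feasible when all are <= 0."""
    g1 = 1 - (D**3 * N) / (71785 * d**4)
    g2 = ((4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
          + 1 / (5108 * d**2) - 1)
    g3 = 1 - 140.45 * d / (D**2 * N)
    g4 = (d + D) / 1.5 - 1
    return [g1, g2, g3, g4]

def spring_penalized(d, D, N, rho=1e6):
    """Objective plus quadratic penalty; rho is an illustrative assumption."""
    viol = sum(max(0.0, g) ** 2 for g in spring_constraints(d, D, N))
    return spring_weight(d, D, N) + rho * viol
```

Any of the optimizers compared in Table 5 can then minimize `spring_penalized` directly over the box 0.05 ≤ d ≤ 2, 0.25 ≤ D ≤ 1.3, 2 ≤ N ≤ 15.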

5.2.2. The Problem of Pressure Vessel Design

The task of designing pressure vessels [] stands as one of the cardinal undertakings in the engineering domain. This encompasses the apparatuses designated for the containment and transport of diverse gases, liquids, or vapors. Their design must be meticulously calibrated, adhering stringently to engineering standards and regulatory frameworks, thereby ensuring safety and reliability.
Foremost, the material selection for pressure vessels is paramount. Engineers are compelled to contemplate the strength, corrosion resistance, temperature attributes, and cost implications of prospective materials. Common materials such as stainless steel, carbon steel, and aluminum alloys are typically employed, with the specific choice being contingent upon the vessel’s intended application and the surrounding environment.
Subsequently, the vessel’s geometric design is inextricably linked to its performance. The shape and dimensions must be harmoniously orchestrated, balancing volumetric demands, structural strength criteria, and spatial availability. This design trajectory often encompasses stress analyses, ensuring the vessel’s robustness against potential failures when subjected to internal or external pressures.
Pressure vessels must be congruent with regulatory standards, exemplified by the likes of the American ASME Pressure Vessel Code, which underpins their safety and legitimacy. These standards stipulate the design, fabrication, and inspection requisites for the vessels, anchoring their safety across myriad operational scenarios.
Lastly, consideration must be extended to the vessel’s life span and requisite maintenance. Internal facets of the vessel might be vulnerable to corrosion, wear, or fatigue, mandating periodic assessments and upkeep to vouchsafe its enduring reliability. Rational design, coupled with rigorous quality control, is indispensable for guaranteeing the vessel’s safe and reliable operation.
The primary objective in this optimization endeavor for pressure vessel design is the minimization of manufacturing costs while preserving vessel functionality, achieved by calibrating four variables: shell thickness (Ts), head thickness (Th), inner radius (R), and cylindrical section length, excluding the head (L). The objective function is delineated as follows:
Table 6. Engineering optimization problem 2.
Name  Function
Consider  x = [x1 x2 x3 x4] = [Ts Th R L]
Minimize  f(x) = 0.6224·x1·x3·x4 + 1.7781·x2·x3² + 3.1661·x1²·x4 + 19.84·x1²·x3
Subject to  g1(x) = −x1 + 0.0193·x3 ≤ 0
  g2(x) = −x2 + 0.00954·x3 ≤ 0
  g3(x) = −π·x3²·x4 − (4/3)·π·x3³ + 1296000 ≤ 0
  g4(x) = x4 − 240 ≤ 0
Parameter ranges  0 ≤ x1, x2 ≤ 99, 10 ≤ x3, x4 ≤ 200
Table 7. Engineering optimisation problem 2 results.
Algorithm  x1  x2  x3  x4  fbest
WOA  1.439 × 10⁰  7.641 × 10⁻¹  6.523 × 10¹  1.000 × 10¹  9.108 × 10³
HHO  9.918 × 10⁻¹  5.057 × 10⁻¹  5.123 × 10¹  8.889 × 10¹  6.448 × 10³
GWO  7.801 × 10⁻¹  3.860 × 10⁻¹  4.036 × 10¹  1.996 × 10²  5.901 × 10³
OOA  7.478 × 10⁰  3.362 × 10⁻¹  5.517 × 10¹  6.200 × 10¹  2.701 × 10⁵
AROA  7.782 × 10⁻¹  3.847 × 10⁻¹  4.032 × 10¹  2.000 × 10²  5.885 × 10³
ROA  8.588 × 10⁻¹  4.250 × 10⁻¹  4.449 × 10¹  1.491 × 10²  6.041 × 10³
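As a sanity check, the cost function of Table 6 can be coded directly and evaluated at a reported design. The sketch below reproduces the objective and constraints; the design vector in the usage note is read (approximately) from the AROA row of Table 7:

```python
import math

def vessel_cost(Ts, Th, R, L):
    """Manufacturing cost f(x) from Table 6."""
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)

def vessel_constraints(Ts, Th, R, L):
    """g1..g4 from Table 6; the design is feasible when all are <= 0."""
    return [-Ts + 0.0193 * R,
            -Th + 0.00954 * R,
            -math.pi * R**2 * L - (4.0 / 3.0) * math.pi * R**3 + 1296000.0,
            L - 240.0]
```

Evaluating the AROA design of Table 7 (Ts ≈ 0.7782, Th ≈ 0.3847, R ≈ 40.32, L ≈ 200) reproduces a cost of roughly 5.89 × 10³, consistent with the tabulated fbest.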

5.2.3. The Triple Rod Truss Design Problem

The problem of designing a three-bar truss [] presents a significant challenge within the realm of structural engineering. It encompasses the design and optimization of a three-bar truss structure to meet specific structural strength, stability, and load-bearing requisites. Such truss configurations are typically assembled from multiple bars and nodes, serving as the foundational supports for edifices such as buildings, bridges, towers, and other engineered structures. When grappling with this design conundrum, engineers are necessitated to deliberate over myriad pivotal considerations such as material selection, geometric design, structural analysis, and load computations, all to ensure that the resultant truss configuration operates with impeccable safety and reliability under diverse conditions. This entails a meticulous balance between the truss’s structural integrity and its weight, aiming to fulfill engineering mandates whilst endeavoring to minimize the consumption of structural materials. By harnessing computational tools and optimization algorithms, engineers are capacitated to pinpoint the optimal bar dimensions, node configurations, and material selections. This, in turn, ensures compliance with performance metrics and structural constraints, thereby bolstering the efficiency and reliability of the engineering system. The implications of the three-bar truss design quandary resonate profoundly across fields such as bridge construction, architectural endeavors, aerospace initiatives, and myriad other engineering domains, holding paramount significance in safeguarding the stability and safety of engineered structures.
The primary goal of this optimization exercise is to minimize the overall weight of the structure by adjusting two parameter variables, x 1 and x 2 , where, by symmetry, the two outer bars share the same cross-section x 1 . The optimization objective is expounded as follows.
Table 8. Engineering optimization problem 3.
Name  Function
Consider  x = [x1 x2]; l = 100 cm; P = 2 kN/cm²; q = 2 kN/cm²
Minimize  f(x1, x2) = l·(2√2·x1 + x2)
Subject to  g1(x1, x2) = P·(√2·x1 + x2) / (√2·x1² + 2·x1·x2) − q ≤ 0
  g2(x1, x2) = P·x2 / (√2·x1² + 2·x1·x2) − q ≤ 0
  g3(x1, x2) = P / (√2·x2 + x1) − q ≤ 0
Parameter ranges  0 ≤ x1, x2 ≤ 1
Table 9. Engineering optimization problem 3 results.
Algorithm  x1  x2  fbest
WOA  7.662 × 10⁻¹  4.760 × 10⁻¹  2.643 × 10²
HHO  7.840 × 10⁻¹  4.217 × 10⁻¹  2.639 × 10²
GWO  7.875 × 10⁻¹  4.116 × 10⁻¹  2.639 × 10²
OOA  7.632 × 10⁻¹  4.856 × 10⁻¹  2.644 × 10²
AROA  7.880 × 10⁻¹  4.101 × 10⁻¹  2.639 × 10²
ROA  7.910 × 10⁻¹  4.017 × 10⁻¹  2.639 × 10²
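The compact formulation of Table 8 translates directly into code. The sketch below evaluates the weight and the three stress constraints, with P, q, and l as given in Table 8:

```python
import math

P, Q, L_BAR = 2.0, 2.0, 100.0   # load, stress limit (kN/cm^2), bar length (cm)

def truss_weight(x1, x2):
    """Structure weight f(x1, x2) = l * (2*sqrt(2)*x1 + x2) from Table 8."""
    return L_BAR * (2.0 * math.sqrt(2.0) * x1 + x2)

def truss_constraints(x1, x2):
    """Stress constraints g1..g3 from Table 8; feasible when all <= 0."""
    s2 = math.sqrt(2.0)
    denom = s2 * x1**2 + 2.0 * x1 * x2
    return [P * (s2 * x1 + x2) / denom - Q,
            P * x2 / denom - Q,
            P / (s2 * x2 + x1) - Q]
```

At the AROA design of Table 9 (x1 ≈ 0.788, x2 ≈ 0.410) the weight evaluates to about 263.9, in line with the tabulated 2.639 × 10²; the first stress constraint is active there, which is why the rounded design sits essentially on the feasibility boundary.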

5.2.4. Welded Beam Design Problems

The challenge of designing a welded beam [,] encompasses the design and optimization of beam structures constituted by welding connections tailored to meet specific engineering criteria and performance benchmarks. Such configurations are ubiquitously employed across a myriad of engineering domains, spanning architecture, bridge construction, manufacturing, and beyond, functioning primarily to support and convey loads. In navigating this design puzzle, engineers are compelled to weigh a series of pivotal elements, including material selection, structural geometric design, welding techniques, strength analysis, and load estimations. The aim is to guarantee, through judicious design and optimization, that the welded beam structures exhibit ample strength, rigidity, and stability under diverse operational conditions, while simultaneously striving to diminish the structure’s weight and overall cost. Leveraging computational tools and engineering optimization techniques, engineers are equipped to identify the optimal design parameters, such as welding positions, material attributes, geometric parameters of the beam, and welding processes, ensuring compliance with both performance and design constraints. The intricacies of welded beam design hold extensive applicability in various engineering endeavors and are indispensable in assuring the quality and safety of engineering structures.
The principal objective of this optimization endeavor is to minimize economic costs by fine-tuning four parameter variables: the weld thickness (h), the weld length (l), the beam height (t), and the beam width (b). The optimization goal is elucidated as follows:
Table 10. Engineering optimization problem 4.
Name  Function
Consider  x = [x1 x2 x3 x4] = [h l t b]
Minimize  f(x1, x2, x3, x4) = 1.10471·x1²·x2 + 0.04811·x3·x4·(14 + x2)
Subject to  g1(x1, x2, x3, x4) = τ(x) − τmax ≤ 0
  g2(x1, x2, x3, x4) = σ(x) − σmax ≤ 0
  g3(x1, x2, x3, x4) = δ(x) − δmax ≤ 0
  g4(x1, x2, x3, x4) = x1 − x4 ≤ 0
  g5(x1, x2, x3, x4) = P − Pc(x) ≤ 0
  g6(x1, x2, x3, x4) = 0.125 − x1 ≤ 0
  g7(x1, x2, x3, x4) = 1.10471·x1²·x2 + 0.04811·x3·x4·(14 + x2) − 5 ≤ 0
Parameter ranges  0.1 ≤ x1, x2 ≤ 2, 0.1 ≤ x3, x4 ≤ 10
Table 11. Engineering optimization problem 4 results.
Algorithm  x1  x2  x3  x4  fbest
WOA  2.173 × 10⁻¹  3.260 × 10⁰  8.487 × 10⁰  2.333 × 10⁻¹  1.814 × 10⁰
HHO  5.171 × 10⁻¹  2.306 × 10⁰  4.576 × 10⁰  8.024 × 10⁻¹  3.561 × 10⁰
GWO  1.796 × 10⁻¹  4.374 × 10⁰  9.245 × 10⁰  2.047 × 10⁻¹  1.829 × 10⁰
OOA  4.620 × 10⁻¹  4.413 × 10⁰  5.594 × 10⁰  5.408 × 10⁻¹  3.720 × 10⁰
AROA  1.924 × 10⁻¹  3.423 × 10⁰  9.393 × 10⁰  2.076 × 10⁻¹  1.775 × 10⁰
ROA  2.761 × 10⁻¹  3.114 × 10⁰  6.884 × 10⁰  3.591 × 10⁻¹  2.298 × 10⁰
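The cost part of Table 10 is straightforward to evaluate. The stress, deflection, and buckling terms (τ, σ, δ, Pc) require the standard auxiliary formulas of this benchmark, which the table omits, so the sketch below covers only the objective:

```python
def beam_cost(h, l, t, b):
    """Fabrication cost from Table 10: weld-volume term + member-material term."""
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)
```

Plugging in the AROA design of Table 11 (h ≈ 0.1924, l ≈ 3.423, t ≈ 9.393, b ≈ 0.2076) gives a cost of about 1.775, matching the tabulated value; the WOA design reproduces its 1.814 in the same way.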

5.2.5. The Problem of Gearbox Design

The task of designing mechanical reducers [,] occupies a paramount role in the realm of mechanical engineering. The central objective lies in designing and optimizing mechanical gear reducers to effectively reduce the output speed of rotating machinery and simultaneously enhance torque. Such mechanical devices are ubiquitously utilized across various engineering sectors, encompassing industrial manufacturing, automotive engineering, aerospace, wind power generation, and robotic technology, among others.
In addressing the complexities of reducer design [,], engineers are mandated to holistically evaluate a multitude of key factors. Foremost, the load requirements must be explicitly ascertained, delineating the required output torque and speed of the reducer in accordance with specific application demands, thus ensuring the fulfillment of the mechanical system’s performance criteria. Moreover, the selection of the transmission ratio—the speed proportion between the input and output shafts—stands as a critical determinant, shaping the efficacy of the reducer. Material choices, encompassing gears, bearings, and casings also exert a profound influence on the reducer’s design. It is imperative to ensure that these components exhibit sufficient strength and wear-resistance.
The main objective of this study revolves around seven design variables: the face width ( x 1 ), modules ( x 2 ), the number of teeth in the smaller gear ( x 3 ), the length between bearings on the first shaft ( x 4 ), the length between bearings on the second shaft ( x 5 ), the diameter of the first shaft ( x 6 ), and the diameter of the second shaft ( x 7 ). The principal aim is to minimize the overall weight of the reducer by optimizing these seven parameters. The underlying mathematical formula is presented as follows:
Table 12. Engineering optimization problem 5.
Name  Function
Consider  x = [x1 x2 x3 x4 x5 x6 x7]
Minimize  f(x1, …, x7) = 0.7854·x1·x2²·(3.3333·x3² + 14.9334·x3 − 43.0934) − 1.508·x1·(x6² + x7²) + 7.4777·(x6³ + x7³) + 0.7854·(x4·x6² + x5·x7²)
Subject to  g1(x1, x2, x3) = 27 / (x1·x2²·x3) − 1 ≤ 0
  g2(x1, x2, x3) = 397.5 / (x1·x2²·x3²) − 1 ≤ 0
  g3(x2, x3, x4, x6) = 1.93·x4³ / (x2·x3·x6⁴) − 1 ≤ 0
  g4(x2, x3, x5, x7) = 1.93·x5³ / (x2·x3·x7⁴) − 1 ≤ 0
  g5(x2, x3, x4, x6) = (1 / (110·x6³)) · ((745·x4 / (x2·x3))² + 16.9 × 10⁶)^0.5 − 1 ≤ 0
  g6(x2, x3, x5, x7) = (1 / (85·x7³)) · ((745·x5 / (x2·x3))² + 157.5 × 10⁶)^0.5 − 1 ≤ 0
  g7(x2, x3) = x2·x3 / 40 − 1 ≤ 0
  g8(x1, x2) = 5·x2 / x1 − 1 ≤ 0
  g9(x1, x2) = x1 / (12·x2) − 1 ≤ 0
  g10(x4, x6) = (1.5·x6 + 1.9) / x4 − 1 ≤ 0
  g11(x5, x7) = (1.1·x7 + 1.9) / x5 − 1 ≤ 0
Parameter ranges  2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28,
7.3 ≤ x4 ≤ 8.3, 7.8 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, 5.0 ≤ x7 ≤ 5.5
Table 13. Engineering optimization problem 5 results.
Algorithm  x1  x2  x3  x4  x5  x6  x7  fbest
WOA  3.510 × 10⁰  7.000 × 10⁻¹  1.700 × 10¹  7.639 × 10⁰  7.926 × 10⁰  3.556 × 10⁰  5.287 × 10⁰  3.062 × 10³
HHO  3.590 × 10⁰  7.000 × 10⁻¹  1.700 × 10¹  7.300 × 10⁰  8.052 × 10⁰  3.475 × 10⁰  5.287 × 10⁰  3.070 × 10³
GWO  3.501 × 10⁰  7.000 × 10⁻¹  1.700 × 10¹  7.924 × 10⁰  8.210 × 10⁰  3.364 × 10⁰  5.287 × 10⁰  3.015 × 10³
OOA  3.600 × 10⁰  7.506 × 10⁻¹  2.461 × 10¹  8.294 × 10⁰  7.851 × 10⁰  3.900 × 10⁰  5.500 × 10⁰  1.971 × 10⁹⁸
AROA  3.500 × 10⁰  7.000 × 10⁻¹  1.700 × 10¹  7.669 × 10⁰  7.715 × 10⁰  3.351 × 10⁰  5.287 × 10⁰  2.998 × 10³
ROA  3.501 × 10⁰  7.000 × 10⁻¹  1.700 × 10¹  8.290 × 10⁰  8.053 × 10⁰  3.353 × 10⁰  5.287 × 10⁰  3.012 × 10³
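The weight function of Table 12 can be coded directly; as a cross-check, evaluating it at the reported AROA design reproduces the tabulated optimum:

```python
def reducer_weight(x1, x2, x3, x4, x5, x6, x7):
    """Speed-reducer weight f(x) from Table 12."""
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))
```

At x = (3.500, 0.700, 17, 7.669, 7.715, 3.351, 5.287) from Table 13 this yields roughly 2.998 × 10³, matching the AROA row.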

5.2.6. The Problem of Gear Train Design

The challenge of gear system design [] is a pivotal issue within the sphere of mechanical engineering. The objective at its heart is to design and optimize mechanical gear transmission systems to meet specific motion and power transmission requirements. Such transmission systems, composed of varying types of gears, serve to modify rotational speed, torque, and direction, thereby catering to an array of engineering applications ranging from automobile transmissions, industrial machinery and aerospace equipment to wind power generation in the energy sector.
Key considerations in gear system design encompass the clear articulation of transmission needs, the selection of appropriate gear types, the design and optimization of the gears’ geometric parameters, and the choice of suitable gear materials. Furthermore, determining the layout and arrangement of gears, maximizing transmission efficiency to minimize energy losses, and addressing concerns related to noise and vibrations are crucial to enhance the comfort and reliability of the operational environment. With the assistance of computational tools, CAD software, and specialized gear transmission analysis tools, engineers are equipped to simulate, analyze, and optimize gear system designs, tailoring them to the unique demands of diverse applications. The successful resolution of gear system design challenges is quintessential for the smooth implementation of various engineering applications and the reliability of mechanical systems.
The primary objective of this optimization issue is to reduce the specific transmission costs of the gear system. Variables encompass the number of teeth on four gears, labeled as Na( x 1 ), Nb( x 2 ), Nd( x 3 ), and Nf( x 4 ). The underlying mathematical formula is delineated as follows.
Table 14. Engineering optimization problem 6.
Name  Function
Consider  x = [x1 x2 x3 x4]
Minimize  f(x1, x2, x3, x4) = (1/6.931 − (x2·x3) / (x1·x4))²
Parameter ranges  12 ≤ x1, x2, x3, x4 ≤ 60
Table 15. Engineering optimization problem 6 results.
Algorithm  x1  x2  x3  x4  fbest
DBO  5.875 × 10¹  1.200 × 10¹  3.900 × 10¹  5.511 × 10¹  3.300 × 10⁻⁹
HHO  4.846 × 10¹  1.702 × 10¹  2.219 × 10¹  5.372 × 10¹  1.166 × 10⁻¹⁰
GWO  4.699 × 10¹  1.578 × 10¹  2.533 × 10¹  5.873 × 10¹  9.746 × 10⁻¹⁰
SO  4.265 × 10¹  2.103 × 10¹  1.265 × 10¹  4.427 × 10¹  1.545 × 10⁻¹⁰
DO  6.000 × 10¹  1.713 × 10¹  2.822 × 10¹  5.526 × 10¹  1.362 × 10⁻⁹
AROA  5.650 × 10¹  3.061 × 10¹  1.273 × 10¹  4.881 × 10¹  9.940 × 10⁻¹¹
ROA  5.942 × 10¹  3.897 × 10¹  1.200 × 10¹  5.550 × 10¹  3.300 × 10⁻⁹
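The gear-train objective of Table 14 reduces to a single expression. The sketch below also checks it against the best-known integer solution widely reported in the literature, (49, 16, 19, 43), whose error is on the order of 2.7 × 10⁻¹²:

```python
def gear_error(x1, x2, x3, x4):
    """Squared deviation of the gear ratio x2*x3/(x1*x4) from the
    target ratio 1/6.931 (Table 14)."""
    return (1.0 / 6.931 - (x2 * x3) / (x1 * x4)) ** 2

# Best-known integer design reported in the literature for this benchmark:
print(gear_error(49, 16, 19, 43))   # on the order of 2.7e-12
```

Because only integer tooth counts in [12, 60] are physically admissible, continuous solutions such as those in Table 15 are typically rounded to integers in practice.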

6. Discussion

This section primarily addresses the applicability of optimization algorithms, delves into the time complexity associated with solving application problems using these algorithms, and emphasizes the performance of some benchmark algorithms.

6.1. Discussion on the Applicability of the AROA

Optimization algorithms are widely recognized for their broad applicability. In this paper, we delve into the efficacy and potential of the optimization algorithm, AROA, within the realm of engineering applications. In fact, even prior to our study, scholars had extensively investigated engineering problems that bear resemblance to the issues addressed in this manuscript. For instance, some research successfully integrated basic Variable Neighborhood Search with Particle Swarm Optimization [], aiming to proficiently address sustainable routing and fuel tank management in maritime vessels. Moreover, given the fuel costs associated with container terminal ports, there exists literature that has employed chemical reaction optimization techniques to proffer innovative strategies for dynamic berth allocation []. Other studies have homed in on the challenge of sustainable maritime inventory routing with time window constraints []. These cutting-edge studies not only furnished a robust theoretical foundation for our exploration but also further corroborated the wide applicability of metaheuristic algorithms across diverse application scenarios.
In this study, the AROA has been applied to six engineering scenarios. However, heuristic algorithms are often noted for their intrinsic adaptability, and the AROA is no exception, boasting a certain degree of universality. Notably, the AROA was not solely restricted to engineering applications within this investigation. The algorithm was also tested against the standard optimization test functions, specifically cec2013. These tests further attest to the algorithm’s versatility.
A literature review revealed that the ROA algorithm has been extended to several other problem domains. For instance, the continuous form of ROA was modified into a binary version, known as BROA (Binary Rafflesia Optimization Algorithm), and was successfully applied to feature selection tasks. The original ROA was implemented to tackle the locational decision-making challenges in logistics distribution centers. Additionally, an ROA variant improved via a reverse learning strategy was employed for optimizing both pipe diameter selections and construction costs within water supply networks, and the results were promising.
However, while the ROA algorithm might exhibit exemplary performance in certain scenarios, its limitations cannot be overlooked. Future research must entail rigorous testing against a broader and diverse set of optimization challenges. This approach will not only solidify its extensive applicability but also highlight any requisite modifications or adjustments to enhance its efficacy.

6.2. Discussion of Time Complexity of AROA

In both engineering problems and various practical applications, computational time serves as a pivotal metric for evaluating algorithmic performance. Especially in complex scenarios, an algorithm’s utility can be questioned if, despite yielding superior solutions, it requires prohibitively long durations for execution. Every strategic addition or modification to an algorithm potentially escalates its computational complexity. For instance, with the AROA algorithm, two strategies were incorporated into its original design, potentially leading to prolonged execution times. In certain instances, more intricate strategies might offer enhanced optimization outcomes but at the expense of extended computational durations. It is imperative to assess this trade-off between performance and time to discern the worthiness of embedding new strategies.
Taking the design of pressure vessels as a specific engineering example, which inherently involves numerous parameters and constraints, optimization becomes increasingly intricate. We revisited the time required by the algorithm to optimize this particular issue, and the results are delineated in Table 16. Upon examining the experimental data, it was discerned that the AROA algorithm outperformed three other algorithms in execution time, namely, WOA, HHO, and OOA, and was slightly outpaced by the GWO and the original ROA algorithms. However, the differences were marginal. The AROA operated within what can be deemed an “acceptable computational timeframe,” justifying the inclusion of the new strategies. By incorporating these strategies, the AROA algorithm not only located high-quality solutions swiftly but also did so without consuming excessive time, rendering it viable in real-world engineering contexts. Therefore, when tackling problems, the delicate balance between solution quality and computational efficiency must be diligently observed. If an algorithm can furnish a satisfactory solution within an “acceptable computational timeframe,” it can better address real-world challenges.
Table 16. Table of time complexity.
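Wall-clock comparisons of this kind can be made with a simple harness; the sketch below is a minimal illustration (the repeat count is arbitrary, and each algorithm would be wrapped as a zero-argument callable returning its best objective value):

```python
import time

def benchmark(optimizer, repeats=5):
    """Run an optimizer `repeats` times; return the best objective value
    found and the mean wall-clock time per run (seconds)."""
    values, durations = [], []
    for _ in range(repeats):
        start = time.perf_counter()
        values.append(optimizer())
        durations.append(time.perf_counter() - start)
    return min(values), sum(durations) / len(durations)
```

Reporting the mean over several runs smooths out scheduling noise, which matters when the timing gaps between algorithms are as marginal as those in Table 16.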

6.3. Discussion of the Performance Capabilities of Algorithms

Upon analyzing our experimental data, it was observed that the AROA algorithm attained the optimal average fitness values (Mean) in unimodal function F1, multimodal functions F6 and F9, and complex functions F21, F24, and F26. In contrast, other algorithms, such as PSO, secured sub-optimal average fitness values (Mean) in multimodal functions F8 and F12 and in the complex function F26, achieving the best average fitness value (Mean) for F8. This highlights the PSO algorithm’s commendable explorative capabilities, demonstrating its proficiency in evading local optima and navigating toward optimal fitness values in both multimodal and complex functions. Likewise, the DE algorithm showcased sub-optimal average fitness values (Mean) in multimodal functions F11 and F12 and in the complex function F25. Additionally, it reached peak average fitness values (Mean) for F13, F20, and F28, reinforcing its promising convergence properties and its resilience against entrapment in local optima.

6.4. Discussions of General Optimization Challenges

In the realms of modern analytical and pharmaceutical chemistry, precise mathematical modeling is indispensable for addressing real-world issues, such as the analysis of acid–base reactions [] and the optimization of structure–activity relationship (SAR) models [] for compounds.
When it comes to the complex issue of acid–base reaction modeling, particularly when considering the nonlinear dynamics and multivariate interactions underlying it, optimization algorithms can offer unique resolution strategies. Techniques such as Genetic Algorithms [] (GA) are commonly employed to tackle problems that are intractable through analytical methods, especially in the simulation of chemical reactions. For instance, simulating acid–base reactions requires consideration of ion dissociation, mixing, and potential complex coordination reactions. The dynamic nature of these processes renders the creation of precise models particularly challenging. Optimization algorithms, such as GAs [], assist in finding optimal model parameters within a vast parameter space that can replicate the experimentally observed acid–base properties under various conditions, such as pH and pOH values.
In applying GAs, the key steps include encoding candidate solutions (in chemistry, perhaps a set of reaction rate constants or concentrations), selecting an appropriate fitness function (for instance, the disparity between model predictions and experimental data), and choosing a suitable set of genetic operators (selection, crossover, and mutation). Selection and crossover favor fitter genotypes, while mutation introduces new genetic diversity. A survival strategy then determines which individuals are preserved, simulating the process of natural selection. As depicted in Figure 2 of Reference [], combining different survival strategies and selection tactics can lead to diverse evolutionary outcomes: some selection tactics accelerate convergence, while some survival strategies promote genetic diversity. Tuning these strategies demonstrates the GA's capacity to solve multivariate linear regression problems and shows how algorithmic choices can affect the final model's accuracy and reliability. All of these steps require careful design to ensure the algorithm can find a viable solution within a reasonable timeframe.
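To make these steps concrete, the following minimal GA sketch fits the parameters of a toy model to data. It is purely illustrative: the linear surrogate `predict`, the synthetic data, and all operator choices are our own stand-ins, not the models or settings of the cited references.

```python
import random

def predict(params, x):
    # Hypothetical linear surrogate standing in for a full reaction model;
    # params plays the role of, e.g., rate constants to be identified.
    return params[0] * x + params[1]

def fitness(params, data):
    # Lower is better: squared disparity between predictions and observations.
    return sum((predict(params, x) - y) ** 2 for x, y in data)

def genetic_algorithm(data, pop_size=30, generations=200,
                      mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    # Encoding: each individual is a real-valued parameter vector.
    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, data))
        survivors = pop[: pop_size // 2]          # elitist survival strategy
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)       # selection
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # crossover
            if rng.random() < mutation_rate:      # mutation
                child[rng.randrange(len(child))] += rng.gauss(0, 0.5)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: fitness(p, data))

# Synthetic observations generated from y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]
best = genetic_algorithm(data)
```

The elitist survival strategy here guarantees monotone improvement, at the cost of diversity; the mutation operator is what keeps the search from collapsing prematurely, mirroring the trade-off discussed above.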
In summary, applying optimization algorithms to acid–base reaction modeling not only showcases the potential of computational approaches to chemical problems but also offers an opportunity to assess how well specific algorithmic design choices adapt to specific problems. Whether the task is modeling the dissociation and mixing of acids and bases or understanding the structure–activity relationships of compounds, these optimization challenges are formidable: they demand the management of extensive data and complex chemical interactions while keeping the model both accurate and efficient. In the SAR setting, GA optimization of Multivariate Linear Regression (MLR) models reveals relationships between the structure of compounds and their activities. SAR models are crucial in drug design and discovery because they help scientists predict the biological activity of new compounds. By simulating natural selection and survival strategies, GAs select molecular descriptors to construct SAR models, offering a natural and effective means of searching for optimized structure–activity models and addressing the challenge of finding the most relevant descriptor set within a vast search space.
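As an illustration of this descriptor-selection idea, the sketch below uses a binary-encoded GA to choose which columns enter an MLR model. Everything here is hypothetical: the synthetic descriptors and "activity", the parsimony penalty, and the operator settings are our own assumptions, not the cited study's setup. The population is seeded with the full-descriptor model, so elitism guarantees the GA can only improve on it.

```python
import random
import numpy as np

rng = random.Random(1)
n_samples, n_desc = 40, 8
X = np.random.default_rng(1).normal(size=(n_samples, n_desc))
# Synthetic "activity": depends only on descriptors 0 and 3, plus noise.
y = (1.5 * X[:, 0] - 2.0 * X[:, 3]
     + np.random.default_rng(2).normal(0.0, 0.1, n_samples))

def fitness(mask):
    # Fit an MLR model on the selected descriptor columns and score it by
    # residual error plus a small parsimony penalty per descriptor.
    cols = [i for i, bit in enumerate(mask) if bit]
    if not cols:
        return float("inf")
    A = np.column_stack([X[:, cols], np.ones(n_samples)])  # with intercept
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return float(resid @ resid) + 0.05 * len(cols)

def select_descriptors(pop_size=20, generations=60, mutation_rate=0.2):
    # Seed with the full-descriptor mask; elitism preserves the best-so-far.
    pop = [[1] * n_desc] + [[rng.randint(0, 1) for _ in range(n_desc)]
                            for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        pop = survivors[:]
        while len(pop) < pop_size:
            a, b = rng.sample(survivors, 2)      # selection
            cut = rng.randrange(1, n_desc)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mutation_rate:     # bit-flip mutation
                j = rng.randrange(n_desc)
                child[j] = 1 - child[j]
            pop.append(child)
    return min(pop, key=fitness)

best_mask = select_descriptors()
```

The penalty term plays the role of a model-selection criterion, nudging the search toward small descriptor sets, which is the essence of finding a relevant subset within a combinatorial search space.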

7. Conclusions

This study primarily investigates the enhancement of the Rafflesia Optimization Algorithm through the incorporation of a diversity maintenance strategy and an adaptive weight update mechanism. The improved algorithm was assessed against nine other intelligent optimization algorithms on the CEC2013 benchmark test functions, where it exhibited superior performance on multiple functions, achieving commendable results. To gauge its efficacy in real-world applications, the algorithm was then tested on six engineering optimization problems: tension/compression spring design, pressure vessel design, three-bar truss design, welded beam design, speed reducer design, and gear train design. The results demonstrate that the new algorithm performs robustly in practical applications as well. Intelligent optimization algorithms have evolved into potent tools in the engineering domain for addressing intricate problems and optimizing system designs. Drawing inspiration from intelligent behaviors observed in nature and advancements in computer science, these algorithms assist engineers by mimicking and optimizing natural processes, confronting an array of challenging issues. Not only have they augmented the efficiency, reliability, and sustainability of engineering systems, but they also offer engineers a powerful arsenal for tackling complex problems, holding promise for further advancements in the engineering sector.

Author Contributions

Conceptualization, J.-S.P. and S.-C.C.; data curation, Z.-J.L.; formal analysis, J.-S.P. and S.-C.C.; investigation, J.-S.P. and Z.Z.; methodology, J.-S.P., Z.Z., S.-C.C. and Z.-J.L.; resources, Z.Z. and W.L.; software, Z.Z. and S.-C.C.; validation, J.-S.P., S.-C.C. and W.L.; writing—original draft, Z.Z.; writing—review and editing, J.-S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Marini, F.; Walczak, B. Particle swarm optimization (PSO). A tutorial. Chemom. Intell. Lab. Syst. 2015, 149, 153–165. [Google Scholar] [CrossRef]
  2. Blum, C. Ant colony optimization: Introduction and recent trends. Phys. Life Rev. 2005, 2, 353–373. [Google Scholar] [CrossRef]
  3. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  4. Bahrami, M.; Bozorg-Haddad, O.; Chu, X. Cat swarm optimization (CSO) algorithm. In Advanced Optimization by Nature-Inspired Algorithms; Springer: Singapore, 2018; pp. 9–18. [Google Scholar]
  5. Yang, X.S. Bat algorithm for multi-objective optimisation. Int. J. Bio-Inspired Comput. 2011, 3, 267–274. [Google Scholar] [CrossRef]
  6. Yang, X.S.; He, X. Bat algorithm: Literature review and applications. Int. J. Bio-Inspired Comput. 2013, 5, 141–149. [Google Scholar] [CrossRef]
  7. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  8. Pan, J.S.; Zhang, L.G.; Wang, R.B.; Snášel, V.; Chu, S.C. Gannet optimization algorithm: A new metaheuristic algorithm for solving engineering optimization problems. Math. Comput. Simul. 2022, 202, 343–373. [Google Scholar] [CrossRef]
  9. Neshat, M.; Sepidnam, G.; Sargolzaei, M.; Toosi, A.N. Artificial fish swarm algorithm: A survey of the state-of-the-art, hybridization, combinatorial and indicative applications. Artif. Intell. Rev. 2014, 42, 965–997. [Google Scholar] [CrossRef]
  10. Zhang, C.; Zhang, F.M.; Li, F.; Wu, H.S. Improved artificial fish swarm algorithm. In Proceedings of the 2014 9th IEEE Conference on Industrial Electronics and Applications, Hangzhou, China, 9–11 June 2014; pp. 748–753. [Google Scholar]
  11. He, X.; Wang, W.; Jiang, J.; Xu, L. An improved artificial bee colony algorithm and its application to multi-objective optimal power flow. Energies 2015, 8, 2412–2437. [Google Scholar] [CrossRef]
  12. Chu, S.C.; Feng, Q.; Zhao, J.; Pan, J.S. BFGO: Bamboo Forest Growth Optimization Algorithm. J. Internet Technol. 2023, 24, 1–10. [Google Scholar]
  13. Pan, J.S.; Fu, Z.; Hu, C.C.; Tsai, P.W.; Chu, S.C. Rafflesia Optimization Algorithm Applied in the Logistics Distribution Centers Location Problem. J. Internet Technol. 2022, 23, 1541–1555. [Google Scholar]
  14. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  15. Rezaei, H.; Bozorg-Haddad, O.; Chu, X. Grey wolf optimization (GWO) algorithm. In Advanced Optimization by Nature-Inspired Algorithms; Springer: Singapore, 2018; pp. 81–91. [Google Scholar]
  16. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  17. Rana, N.; Latiff, M.S.A.; Abdulhamid, S.M.; Chiroma, H. Whale optimization algorithm: A systematic review of contemporary applications, modifications and developments. Neural Comput. Appl. 2020, 32, 16245–16277. [Google Scholar] [CrossRef]
  18. Pan, J.S.; Liu, L.F.; Chu, S.C.; Song, P.C.; Liu, G.G. A New Gaining-Sharing Knowledge Based Algorithm with Parallel Opposition-Based Learning for Internet of Vehicles. Mathematics 2023, 11, 2953. [Google Scholar] [CrossRef]
  19. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  20. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput. 2008, 13, 398–417. [Google Scholar] [CrossRef]
  21. Karaboğa, D.; Ökdem, S. A simple and global optimization algorithm for engineering problems: Differential evolution algorithm. Turk. J. Electr. Eng. Comput. Sci. 2004, 12, 53–60. [Google Scholar]
  22. Chu, S.C.; Tsai, P.W.; Pan, J.S. Cat swarm optimization. In PRICAI 2006: Trends in Artificial Intelligence, Proceedings of the 9th Pacific Rim International Conference on Artificial Intelligence, Guilin, China, 7–11 August 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 854–858. [Google Scholar]
  23. Dai, C.; Lei, X.; He, X. A decomposition-based evolutionary algorithm with adaptive weight adjustment for many-objective problems. Soft Comput. 2020, 24, 10597–10609. [Google Scholar] [CrossRef]
  24. Dong, Z.; Wang, X.; Tang, L. MOEA/D with a self-adaptive weight vector adjustment strategy based on chain segmentation. Inf. Sci. 2020, 521, 209–230. [Google Scholar] [CrossRef]
  25. Ruan, G.; Yu, G.; Zheng, J.; Zou, J.; Yang, S. The effect of diversity maintenance on prediction in dynamic multi-objective optimization. Appl. Soft Comput. 2017, 58, 631–647. [Google Scholar] [CrossRef]
  26. Chen, B.; Lin, Y.; Zeng, W.; Zhang, D.; Si, Y.W. Modified differential evolution algorithm using a new diversity maintenance strategy for multi-objective optimization problems. Appl. Intell. 2015, 43, 49–73. [Google Scholar] [CrossRef]
  27. Liang, J.J.; Qu, B.; Suganthan, P.N.; Hernández-Díaz, A.G. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization; Technical Report 201212; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013; pp. 281–295. [Google Scholar]
  28. Tvrdík, J.; Poláková, R. Competitive differential evolution applied to CEC 2013 problems. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 1651–1657. [Google Scholar]
  29. Pan, J.S.; Shi, H.J.; Chu, S.C.; Hu, P.; Shehadeh, H.A. Parallel Binary Rafflesia Optimization Algorithm and Its Application in Feature Selection Problem. Symmetry 2023, 15, 1073. [Google Scholar] [CrossRef]
  30. Bandyopadhyay, R.; Basu, A.; Cuevas, E.; Sarkar, R. Harris Hawks optimisation with Simulated Annealing as a deep feature selection method for screening of COVID-19 CT-scans. Appl. Soft Comput. 2021, 111, 107698. [Google Scholar] [CrossRef] [PubMed]
  31. Dehghani, M.; Trojovskỳ, P. Osprey optimization algorithm: A new bio-inspired metaheuristic algorithm for solving engineering optimization problems. Front. Mech. Eng. 2023, 8, 1126450. [Google Scholar] [CrossRef]
  32. Pan, J.S.; Sun, B.; Chu, S.C.; Zhu, M.; Shieh, C.S. A parallel compact gannet optimization algorithm for solving engineering optimization problems. Mathematics 2023, 11, 439. [Google Scholar] [CrossRef]
  33. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  34. Elhammoudy, A.; Elyaqouti, M.; Arjdal, E.H.; Hmamou, D.B.; Lidaighbi, S.; Saadaoui, D.; Choulli, I.; Abazine, I. Dandelion Optimizer algorithm-based method for accurate photovoltaic model parameter identification. Energy Convers. Manag. X 2023, 19, 100405. [Google Scholar] [CrossRef]
  35. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  36. Klimov, P.V.; Kelly, J.; Martinis, J.M.; Neven, H. The snake optimizer for learning quantum processor control parameters. arXiv 2020, arXiv:2006.04594. [Google Scholar]
  37. Tzanetos, A.; Blondin, M. A qualitative systematic review of metaheuristics applied to tension/compression spring design problem: Current situation, recommendations, and research direction. Eng. Appl. Artif. Intell. 2023, 118, 105521. [Google Scholar] [CrossRef]
  38. Çelik, Y.; Kutucu, H. Solving the Tension/Compression Spring Design Problem by an Improved Firefly Algorithm. IDDM 2018, 1, 1–7. [Google Scholar]
  39. Yang, X.S.; Huyck, C.; Karamanoglu, M.; Khan, N. True global optimality of the pressure vessel design problem: A benchmark for bio-inspired optimisation algorithms. Int. J. Bio-Inspired Comput. 2013, 5, 329–335. [Google Scholar] [CrossRef]
  40. Liu, T.; Deng, Z.; Lu, T. Design optimization of truss-cored sandwiches with homogenization. Int. J. Solids Struct. 2006, 43, 7891–7918. [Google Scholar] [CrossRef]
  41. Kamil, A.T.; Saleh, H.M.; Abd-Alla, I.H. A multi-swarm structure for particle swarm optimization: Solving the welded beam design problem. In Proceedings of the Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2021; Volume 1804, p. 012012. [Google Scholar]
  42. Almufti, S.M. Artificial Bee Colony Algorithm performances in solving Welded Beam Design problem. Comput. Integr. Manuf. Syst. 2022, 28, 225–237. [Google Scholar]
  43. Deb, K.; Jain, S. Multi-speed gearbox design using multi-objective evolutionary algorithms. J. Mech. Des. 2003, 125, 609–619. [Google Scholar] [CrossRef]
  44. Hall, J.F.; Mecklenborg, C.A.; Chen, D.; Pratap, S.B. Wind energy conversion with a variable-ratio gearbox: Design and analysis. Renew. Energy 2011, 36, 1075–1080. [Google Scholar] [CrossRef]
  45. Golabi, S.; Fesharaki, J.J.; Yazdipoor, M. Gear train optimization based on minimum volume/weight design. Mech. Mach. Theory 2014, 73, 197–217. [Google Scholar] [CrossRef]
  46. Meng, Z.; Zhong, Y.; Mao, G.; Liang, Y. PSO-sono: A novel PSO variant for single-objective numerical optimization. Inf. Sci. 2022, 586, 176–191. [Google Scholar] [CrossRef]
  47. De, A.; Pratap, S.; Kumar, A.; Tiwari, M. A hybrid dynamic berth allocation planning problem with fuel costs considerations for container terminal port using chemical reaction optimization approach. Ann. Oper. Res. 2020, 290, 783–811. [Google Scholar] [CrossRef]
  48. De, A.; Kumar, S.K.; Gunasekaran, A.; Tiwari, M.K. Sustainable maritime inventory routing problem with time window constraints. Eng. Appl. Artif. Intell. 2017, 61, 77–95. [Google Scholar] [CrossRef]
  49. Bolboacă, S.D.; Roşca, D.D.; Jäntschi, L. Structure-activity relationships from natural evolution. MATCH Commun. Math. Comput. Chem. 2014, 71, 149–172. [Google Scholar]
  50. Jäntschi, L. Modelling of acids and bases revisited. Stud. Univ. Babes-Bolyai Chem. 2022, 67, 73–92. [Google Scholar] [CrossRef]
  51. Dasari, S.K.; Fantuzzi, N.; Trovalusci, P.; Panei, R.; Pingaro, M. Optimal Design of a Canopy Using Parametric Structural Design and a Genetic Algorithm. Symmetry 2023, 15, 142. [Google Scholar] [CrossRef]
  52. Fan, H.; Ren, X.; Zhang, Y.; Zhen, Z.; Fan, H. A Chaotic Genetic Algorithm with Variable Neighborhood Search for Solving Time-Dependent Green VRPTW with Fuzzy Demand. Symmetry 2022, 14, 2115. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
