Article

Chaos-Enhanced Archimede Algorithm for Global Optimization of Real-World Engineering Problems and Signal Feature Extraction

1
Laboratory of Engineering, Systems, and Applications, National School of Applied Sciences, Sidi Mohamed Ben Abdellah-Fez University, Fez 30040, Morocco
2
National School of Applied Sciences, Sidi Mohamed Ben Abdellah-Fez University, Fez 30040, Morocco
3
Laboratory of Electronic Signals and Systems of Information, Faculty of Science, Sidi Mohamed Ben Abdellah-Fez University, Fez 30040, Morocco
*
Author to whom correspondence should be addressed.
Processes 2024, 12(2), 406; https://doi.org/10.3390/pr12020406
Submission received: 23 November 2023 / Revised: 10 January 2024 / Accepted: 14 February 2024 / Published: 18 February 2024

Abstract:
Optimization algorithms play a crucial role in a wide range of fields, from designing complex systems to solving mathematical and engineering problems. However, these algorithms frequently face major challenges, such as convergence to local optima, which limits their ability to find global optimal solutions. To overcome these challenges, it has become imperative to explore more efficient approaches by incorporating chaotic maps into these original algorithms. Incorporating chaotic variables into the search process offers notable advantages, including the ability to avoid local minima, diversify the search, and accelerate convergence toward optimal solutions. In this study, we propose an improved Archimedean optimization algorithm called Chaotic_AO (CAO), based on the use of ten distinct chaotic maps to replace the pseudorandom sequences in the three essential components of the classical Archimedean optimization algorithm: initialization, density and volume update, and position update. This improvement aims to achieve a more appropriate balance between the exploitation and exploration phases, offering a greater likelihood of discovering global solutions. CAO performance was extensively validated on three distinct groups of problems. The first group, made up of twenty-three benchmark functions, served as an initial reference. The second group comprises three crucial engineering problems: the design of a welded beam, the modeling of a spring subjected to tension/compression stresses, and the design of pressure vessels. Finally, the third group of problems is dedicated to evaluating the efficiency of the CAO algorithm in the field of signal reconstruction, as well as 2D and 3D medical images. The results obtained from these in-depth tests revealed the efficiency and reliability of the CAO algorithm in terms of convergence speed and solution quality in most of the cases studied.

1. Introduction

The exploration of problems characterized by high nonlinearities and multiplicities of local optima is a major concern in the fields of computer science, artificial intelligence, and machine learning. Traditional methods often face insurmountable challenges in regard to solving complex, multimodal problems. In such cases, nature-inspired approaches stand out by exploiting their ability to combine diversification and stochastic intensification to achieve optimal solutions in these complex contexts [1,2]. In recent decades, a considerable range of metaheuristic optimization algorithms has emerged as valuable alternatives for tackling these problems. These approaches have proven their effectiveness in finding solutions to demanding computational challenges, and they continue to gain in importance as the practical applications of artificial intelligence and machine learning multiply [3,4].
Two key features of metaheuristic methods are intensification and diversification, which play a crucial role in the search for optimal solutions [5]. Intensification targets the currently most promising solutions and selects the best-performing candidates, while diversification enables the optimizer to explore the search space more efficiently, largely through the use of randomization.
The ongoing evolution of the field of global optimization has seen the emergence of several new metaheuristic algorithms, each bringing its own innovations for improving computational efficiency, solving large-scale problems, and implementing robust optimization codes. Among these newcomers, ABC (Artificial Bee Colony) has adopted an approach that simulates the behavior of bees in their search for nectar [6]. Cuckoo search (CS) was inspired by the brooding behavior of certain birds to explore the search space [7]. HS (Harmony Search) introduced a notion of harmonious search inspired by music [8]. The BA (Bat Algorithm) exploits bat behavior to optimize functions [9]. Other algorithms, such as Genetic Algorithm (GA) [10,11,12], Chicken Swarm Optimization (CSO), Gray Wolf Optimizer (GWO), Firefly Algorithm (FA), Whale Optimization Algorithm (WOA), and Antlion Optimizer (ALO), also saw the light of day, each bringing their own perspective to solving complex problems [13,14,15,16,17,18].
This diversity of approaches testifies to the richness and adaptability of metaheuristic algorithms in the search for solutions to a variety of domains and problems. It also reflects the scientific community’s ongoing commitment to exploring new avenues for pushing back the boundaries of optimization.
Recently, a new population-based metaheuristic optimization algorithm called Archimedes' Optimization Algorithm (AO) was proposed [19], inspired by Archimedes' famous law governing the buoyancy of bodies immersed in a fluid. This fundamental law establishes a balance between the buoyant force exerted by the fluid on an immersed object and the weight of that object. When this principle is applied to AO, the entities in the population are analogous to objects immersed in the fluid. Each of these entities is characterized by a set of parameters, including acceleration, density, and volume, which are key elements in determining whether an entity will float or sink in the algorithm's search space. The upward force, in the context of AO, represents the ability of an entity to emerge as an optimal solution. It is equivalent to the buoyancy of the entity, which is determined by the relationship between the weight of the object (representing the quality of the solution) and the weight of the displaced fluid (which can be interpreted as the potential of the solution). If the weight of the displaced fluid (the potential of the solution) is less than the weight of the object (the quality of the solution), the entity (the solution) will tend to 'sink', in the sense that it will not be retained as the optimal solution. On the other hand, when the weight of the object (the quality of the solution) equals the weight of the displaced fluid (the potential of the solution), the floating entity (solution) is in equilibrium. Herein lies the central idea of AO: finding the equilibrium point where a retained solution is balanced and the net force of the fluid is zero. Although the AO algorithm is effective for solving complex problems, it cannot always escape local optima.
The phenomenon of chaos, characterized by its extreme sensitivity to initial conditions, has long intrigued researchers and become a key component in the field of optimization. Chaotic functions, often described as complex and unpredictable, vary irregularly over time, and are particularly sensitive to these initial conditions [20,21,22,23]. This sensitivity to initial values is an essential indicator of chaotic functions, as even a small, seemingly insignificant change in these conditions can lead to significant variations that cannot be neglected. It is in this context that metaheuristic algorithms have explored the potential offered by chaotic functions to improve their performance. Chaotic functions have proven particularly effective in the search for solutions to problems of local or premature convergence. Chaotic optimization methods have thus been adopted in many recent studies to break the vicious cycle of optimization. In addition, these chaotic functions have added value to global optimization algorithms by diversifying the search space. Chaotic theory has significant potential for improving the performance of metaheuristic algorithms.
The integration of chaos-based enhancement within various global optimization algorithms offers significant advantages. Consider, for example, its application within PSO, where it has the potential to deliver substantial improvements. In the context of PSO, chaos-based enhancement offers advantages in terms of increased diversification and exploration of the search space [24]. The chaotic properties of this approach play an essential role in preventing premature convergence to local optima. By promoting particle diversity, it enables a more exhaustive and dynamic exploration of the optimization landscape. Similarly, the introduction of chaos-based improvement within Genetic Algorithms (GA) opens up promising prospects [25]. It offers an opportunity to enhance genetic diversity within populations, thus promoting more efficient exploration of the search space. This diversification contributes to faster convergence towards optimal solutions by widening the field of exploration of potential solutions. The application of chaos in ACO (Ant Colony Optimization) mechanisms can significantly enhance the ability of ants to discover optimal paths [26]. By introducing chaotic exploration, this approach stimulates the search for alternative solutions, thus optimizing convergence towards quality solutions within the ant network. In the case of Artificial Bee Algorithms (ABC), the use of chaos-based enhancement can promote a more dynamic exploration of the search space [27]. This dynamic approach overcomes the limitations of traditional methods, stimulating further diversification of the solutions explored by the artificial bee colony.
This encouraged us to use chaotic maps to further optimize the Archimedean Optimization Algorithm (AO). In this study, we introduce a new chaos-enhanced Archimedean optimization algorithm (CAO). Our approach consists of using chaotic map functions to influence the generation of the random parameters of the AO algorithm. This integration allows us to scan the search space more dynamically and to refine various aspects of the algorithm. Thanks to this synergy, we have been able to achieve optimal fitness function values with increased success. To achieve this, we incorporated ten distinct chaotic maps to replace pseudorandom sequences in three AO components.
To evaluate the performance of the chaos-enhanced Archimedes optimization (CAO) algorithm, we have undertaken exhaustive testing using three distinct problem groups, each representing a set of specific challenges. The aim of this rigorous evaluation is to demonstrate the robustness and effectiveness of the CAO algorithm. The first group of problems consists of twenty-three benchmark functions (unimodal, multimodal, and multimodal with fixed dimension). Group 2 comprises three engineering problems, namely, the design of a welded beam, the design of a spring subjected to tension/compression stresses, and the design of a pressure vessel. These practical problems test the CAO algorithm’s ability to solve complex engineering optimization challenges. This evaluation will determine the applicability of the CAO algorithm to real-world problems. The third group of problems is dedicated to validating the CAO algorithm in the field of 2D and 3D medical image and signal reconstruction. We use discrete orthogonal Meixner moments (MMs) for this purpose. The aim of this series of tests is to demonstrate the CAO algorithm’s ability to solve image and signal processing problems, with an emphasis on accuracy and efficiency. To establish a meaningful comparison, we compare the performance of the CAO algorithm with that of other optimization algorithms in the literature.
The main contributions of this work are summarized as follows:
(a)
With the aim of achieving an optimal balance between exploitation and exploration for the Archimedean optimization (AO) algorithm, we introduce a new Archimedean optimization algorithm enhanced by chaotic maps (CAO).
(b)
To assess the effectiveness of the CAO method, we conducted extensive experiments using a set of twenty-three well-known numerical reference functions. In addition, the method was successfully applied to three real-world engineering problems, confirming its relevance in real-world contexts.
(c)
As an innovative application, CAO is used to optimize the selection of Meixner polynomial parameters, contributing to optimal reconstruction of medical signals and images. This application demonstrates the versatility of the CAO method for solving a variety of problems, from numerical optimization to medical image reconstruction.
The structure of the rest of this article is as follows: Section 2 formulates the optimization problem. Section 3 presents the mathematical models underlying the standard Archimedes optimization algorithm. Section 4 is devoted to an examination of chaotic maps, while Section 5 explains in detail the chaotic Archimedes algorithm we have developed. Section 6 is dedicated to evaluating the performance of the proposed algorithm through a series of in-depth tests. Section 7 concludes our study.

2. Problem Formulation

In optimization problems, the task is often to find optimal solutions in a defined decision space. This formulation of the problem encompasses the effort to minimize a specific objective function over a set of admissible solutions, where the limits of each coordinate are precisely specified. Such a structured approach enables systematic exploration of the decision space, ensuring that the solutions sought respect the prescribed constraints.
  • Objective function:
    min F(x) = f(x_1, x_2, …, x_n)
  • Decision variables:
    X = (x_1, x_2, …, x_n) ∈ R^n
  • Bounds for each coordinate:
    lb_i ≤ x_i ≤ ub_i,  for i = 1, 2, …, n
    where lb_i and ub_i are the specified lower and upper bounds for each coordinate x_i, and n is the number of variables.
This complete formulation encapsulates the optimization problem, clearly defining the objective, decision variables, coordinate constraints, and decision space. It thus establishes a formal basis for analyzing and solving the optimization problem.
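To make the formulation concrete, here is a minimal Python sketch (the article provides no code; the sphere objective and the specific bounds below are illustrative choices, not taken from the paper):

```python
import numpy as np

def sphere(x):
    """Example objective F(x) = sum(x_i^2); global minimum 0 at x = 0."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def clip_to_bounds(x, lb, ub):
    """Project a candidate solution back into the box lb_i <= x_i <= ub_i."""
    return np.clip(x, lb, ub)

# A 5-dimensional decision space with symmetric bounds.
n = 5
lb = -5.0 * np.ones(n)
ub = 5.0 * np.ones(n)

# An infeasible candidate is repaired by clipping to the bounds.
x = clip_to_bounds(np.array([6.0, -7.0, 0.5, 1.0, -0.2]), lb, ub)
```

Any of the benchmark functions in Table 2 could be substituted for `sphere` without changing the rest of such a pipeline.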

3. The Standard Archimedes Optimization Algorithm

The Archimedean Optimization Algorithm (AO) is a physics-inspired metaheuristic optimization algorithm that has emerged recently, and its operating principle is inspired by Archimedes' law of physics [28]. AO is a population-based algorithm in which the individuals of the population are immersed objects. AO starts with an initial population of individuals endowed with random accelerations, densities, and volumes. The process begins with the fitness evaluation of this preliminary population and then continues through iterations until the stopping condition is reached. During each iteration, the density and volume of each object are updated. Based on its interaction with neighboring objects, the acceleration of each object is revised. The updated volume, density, and acceleration values determine the new position of each object. AO has the distinct advantage of achieving a harmonious balance between exploration and exploitation, making it well suited to engineering optimization problems [29,30].
The main steps of the Archimedean Optimization (AO) algorithm are described below:
Step 1: Initialize the position X(i), volume V_t(i), density D_t(i), and acceleration acc(i) of every object:
X(i) = lb_i + rand × (ub_i − lb_i)
V_t(i) = rand
D_t(i) = rand
acc(i) = lb_i + rand × (ub_i − lb_i)
where lb_i and ub_i are the lower and upper bounds of the search space, respectively, and rand is a uniform random number in [0, 1].
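The four initialization equations of Step 1 can be sketched in Python as follows (a minimal illustration assuming scalar bounds shared by all dimensions; the variable names mirror the equations above):

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed, illustrative only

def ao_initialize(n_objects, dim, lb, ub):
    """Step 1: draw random positions, volumes, densities, and accelerations.
    Positions and accelerations are scaled into [lb, ub]; volumes and
    densities are uniform in [0, 1), as in the equations above."""
    X   = lb + rng.random((n_objects, dim)) * (ub - lb)
    V   = rng.random((n_objects, dim))
    D   = rng.random((n_objects, dim))
    acc = lb + rng.random((n_objects, dim)) * (ub - lb)
    return X, V, D, acc
```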
Step 2: Update the density D_t(i) and volume V_t(i) of each object i using the following equations:
D_{t+1}(i) = D_t(i) + rand1 × (D_best − D_t(i))
V_{t+1}(i) = V_t(i) + rand2 × (V_best − V_t(i))
where D_best and V_best are the best density and best volume found so far, and rand1 and rand2 are uniform random numbers in [0, 1].
Step 3: Calculation of the density factor and transfer operator.
When two objects collide, they tend to reach an equilibrium state after some time; the transfer operator TF is used in AO to model this. The shift from exploration to exploitation in the search process is accomplished using the following formula:
TF = exp((t − t_max)/t_max)
Similarly, the density factor d_{t+1} decreases over time, allowing the search to concentrate in a promising region:
d_{t+1} = exp((t_max − t)/t_max) − (t/t_max)
where t represents the current iteration number and t_max denotes the maximum number of iterations.
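The two schedules above can be written directly from their formulas (a small sketch; TF grows toward 1 while the density factor shrinks toward 0 as t approaches t_max):

```python
import math

def transfer_operator(t, t_max):
    """TF = exp((t - t_max) / t_max); increases toward 1 as t -> t_max."""
    return math.exp((t - t_max) / t_max)

def density_factor(t, t_max):
    """d_{t+1} = exp((t_max - t) / t_max) - t / t_max; decreases toward 0."""
    return math.exp((t_max - t) / t_max) - t / t_max
```

At t = t_max the two expressions reduce to exp(0) = 1 and exp(0) − 1 = 0, so late iterations are dominated by exploitation in a shrinking neighborhood.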
Step 4: Exploration phase
If TF ≤ 0.5 (objects are colliding), the object's acceleration at iteration t + 1 is updated using Equation (8):
acc_{t+1}(i) = (D_mr + V_mr × acc_mr) / (D_{t+1}(i) × V_{t+1}(i))
where D_mr, V_mr, and acc_mr denote the density, volume, and acceleration of a random material, respectively.
Step 5: Exploitation phase
If TF > 0.5 (objects are not colliding), the object's acceleration at iteration t + 1 is updated using Equation (9):
acc_{t+1}(i) = (D_best + V_best × acc_best) / (D_{t+1}(i) × V_{t+1}(i))
Step 6: Normalize the acceleration.
The normalized acceleration is calculated using:
acc_norm_{t+1}(i) = u × (acc_{t+1}(i) − min(acc)) / (max(acc) − min(acc)) + l
where l = 0.1 and u = 0.9 define the normalization range, and min(acc) and max(acc) are the minimum and maximum acceleration values, respectively.
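Step 6 is a min-max rescaling; note that with u = 0.9 and l = 0.1 the formula maps min(acc) to 0.1 and max(acc) to u + l = 1.0. A minimal sketch of the normalization as written above:

```python
import numpy as np

def normalize_acc(acc, l=0.1, u=0.9):
    """acc_norm = u * (acc - min) / (max - min) + l.
    Maps the smallest acceleration to l and the largest to u + l."""
    a_min, a_max = float(np.min(acc)), float(np.max(acc))
    return u * (acc - a_min) / (a_max - a_min) + l
```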
Step 7: Update the position.
The calculation of the object’s position t + 1 is determined by the following equation:
X t + 1 ( i ) = { X t ( i ) + C 1 × r a n d 3 × a c c t + 1 ( i ) n o r m × d × ( X r a n d X t ( i ) ) ,     T F 0.5 X b e s t + F × C 2 × r a n d 4 × a c c t + 1 ( i ) n o r m × d × ( T × X b e s t X t ( i ) ) ,     o t h e r w i s e
T increases with time and is directly proportional to the transfer operator defined by:
T = C3 × TF
F is the flag used to change the direction of movement using Equation (13).
F = +1 if P ≤ 0.5, and F = −1 if P > 0.5
where P = 2 × rand − C4.
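The flag logic can be sketched as follows (C4 = 0.5 is an assumed default used only for illustration; this section does not fix the values of the constants C1–C4):

```python
def direction_flag(rand_val, C4=0.5):
    """F = +1 if P <= 0.5 else -1, with P = 2 * rand - C4 (Step 7).
    C4 = 0.5 is an assumed default, not a value stated in this section."""
    P = 2.0 * rand_val - C4
    return 1 if P <= 0.5 else -1
```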
Step 8: Evaluation
Select the object’s position that exhibits the best fitness value upon evaluating each object.
Global optimization algorithms use random variables to explore the search space, commonly drawn from a uniform distribution (rand). However, this reliance on random variables frequently leads to local optima, limiting search efficiency. We therefore propose using sequences generated by chaotic maps to determine the values of the parameters that, in the AO algorithm, were purely random. The integration of chaotic variables into the search process represents a significant advance over an entirely random approach. The advantages of this approach are substantial, and chaotic maps, which we detail in the next section, play a key role in the search for optimal solutions.

4. Chaotic Maps

In recent years, chaos theory has made significant advances and has been successfully exploited in a variety of scientific fields. Promising applications include image and signal encryption, feature selection, and parameter optimization. Chaotic maps, in particular, have proven highly useful, possessing three fundamental characteristics: sensitivity to initial conditions, randomness, and dynamics. These unique properties have enabled chaotic maps to be incorporated into several renowned optimization algorithms, including moth flame optimization (MFO), firefly algorithm (FA), artificial bee colony (ABC), biogeography-based optimization (BBO), particle swarm optimization (PSO), and gray wolf optimizer (GWO). This fusion of chaos theory with optimization algorithms has paved the way for more efficient solutions and better performance, both in solving complex problems and in improving engineering processes [31,32,33]. Chaotic maps have added an extra dimension to the search for solutions by introducing an element of dynamics and unpredictability, which has proved beneficial in escaping local minima and improving the quality of the solutions found.
In this subsection, we outline ten one-dimensional chaotic maps of particular relevance to optimization algorithms used to generate chaotic sequences, as detailed in Table 1 and illustrated in Figure 1. It is important to note that the initial point can be chosen arbitrarily, in the range 0 to 1 (or according to the specific scope of the chaotic map in question). The use of these dynamic values is of paramount importance, as it contributes significantly to improving the search capability of the AO.
The idea behind this method is based on three key principles: (i) Introduction of a chaotic state into the optimization variables using a similar support approach. (ii) Extension of the range of chaotic movements to encompass the interval of optimization variables. (iii) Use of chaotic sequences to enhance the efficiency of the search process.
By combining these principles, it becomes possible to inject an element of controlled chaos into the optimization variables, explore a wider range of potential solutions, and thus improve search efficiency in a well-controlled way. This innovative approach offers promising prospects for the optimization of complex problems, in particular by widening the range of solutions explored and enabling a more diversified and efficient search.
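As an illustration of how such sequences are produced, here are three commonly used one-dimensional chaotic maps in a minimal Python sketch (Table 1 is not reproduced here; the specific parameterizations below are standard forms from the chaotic-optimization literature and may differ in detail from the article's table):

```python
import math

def logistic_map(x):
    """Logistic map: x_{k+1} = 4 x_k (1 - x_k); values stay in [0, 1]."""
    return 4.0 * x * (1.0 - x)

def tent_map(x):
    """Tent map (a common parameterization): x/0.7 if x < 0.7, else (10/3)(1 - x)."""
    return x / 0.7 if x < 0.7 else (10.0 / 3.0) * (1.0 - x)

def sine_map(x):
    """Sine map: x_{k+1} = sin(pi * x_k)."""
    return math.sin(math.pi * x)

def chaotic_sequence(map_fn, x0, n):
    """Iterate a map from an arbitrary x0 in (0, 1) to obtain n chaotic values."""
    seq, x = [], x0
    for _ in range(n):
        x = map_fn(x)
        seq.append(x)
    return seq
```

Iterating any of these maps from a nearby starting point produces a visibly different trajectory after a few steps, which is the sensitivity to initial conditions the text describes.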

5. Proposed Chaotic-Archimede Optimization Algorithm (CAO)

In the classic configuration of the Archimedean optimization algorithm, an initial set of random solutions is generated across the search space. This set of solutions is then subjected to a series of updating equations, each contributing to the exploration and exploitation of the search space. The exploration equations guide the solutions toward various regions of the search space, seeking to cover the different possibilities exhaustively. In contrast, the exploitation equations steer solutions toward the best solution identified so far, while probing the surroundings of this optimal solution. However, it is crucial to note that these equations have a random component, meaning that solutions mutate at random intervals and in random directions. This random nature of the equations underlines the sensitivity to random parameters, which exert a substantial influence on the quality of the solutions obtained and, by extension, on the final results of the algorithm. A judicious modification of these random parameters could therefore play a decisive role in optimizing the overall process, potentially leading to more robust and accurate solutions.
Although not yet mathematically proven, various studies converge towards the conclusion that the integration of chaotic maps significantly improves the performance of metaheuristic optimization algorithms, as developed by many researchers [34,35]. For example, Wang et al. in [34] demonstrated the performance improvement of the Remora optimization algorithm (ROA) based solely on chaotic tent mapping. Similarly, Wang et al. in [35] introduced the Levy operator to help the crystal structure algorithm (CryStAl) to effectively free itself from the attraction of the local optimal value.
However, in the context of this study, our aim is to take advantage of ten chaotic maps rather than being limited to a single chaotic map or operator, opening up several avenues for improving the performance of our proposed algorithm, specifically in terms of avoiding local optima.
This study presents the development of a new algorithm called the Chaos-Enhanced Archimedean Optimization (CAO) algorithm. The introduction of chaotic variables into the AO search process confers significant advantages over a purely random approach. More specifically, the ergodic properties of chaotic maps are exploited to increase search efficiency by bypassing local minima.
In the CAO, chaotic sequences are generated by ten distinct chaotic maps, namely the Chebyshev, circle, Gaussian, iterative, logistic, piecewise, sine, Singer, sinusoidal, and tent maps, as listed in Table 1. These sequences replace the random sequences of the original AO in three crucial components of the optimization algorithm, namely initialization, density and volume updating, and position updating. This integration enhances the AO algorithm's ability to bypass local optima, thus increasing the probability of converging to the global optimum in a limited number of iterations.
Each chaotic map is tested independently in each of these components. The pseudocode of the CAO algorithm is shown in Algorithm 1, while Figure 2 illustrates the CAO flowchart. In this algorithm, c c ( i ) represents a chaotic sequence generated by a specific chaotic map. The results obtained in this study show that chaotic maps increase the performance of optimization methods.
Algorithm 1. Pseudocode of CAO
Initialize parameters (N, tmax, C1, C2, C3, and C4).
Initialize the chaotic value cc(i).
for i = 1:N
    for j = 1:n
        cc(i) = chaotic(cc(i));
        X(i) = lb_i + cc(i) × (ub_i − lb_i);
        V_t(i) = cc(i);
        D_t(i) = cc(i);
        acc(i) = lb_i + cc(i) × (ub_i − lb_i);
    end for
end for
Evaluate the initial population and select the object with the best fitness value.
t = 1 (iteration counter)
while t < tmax do
    for each object i do
        for j = 1:n
            // Update the density and volume of each object.
            cc(i) = chaotic(cc(i));
            D_{t+1}(i) = D_t(i) + cc(i) × (D_best − D_t(i));
            V_{t+1}(i) = V_t(i) + cc(i) × (V_best − V_t(i));
        end for
        // Update the transfer operator TF and the density decreasing factor d_{t+1}.
        TF = exp((t − tmax)/tmax);
        d_{t+1} = exp((tmax − t)/tmax) − (t/tmax);
        if TF ≤ 0.5 then // Exploration phase
            // Update the acceleration and normalize it.
            acc_{t+1}(i) = (D_mr + V_mr × acc_mr)/(D_{t+1}(i) × V_{t+1}(i));
            acc_norm_{t+1}(i) = u × (acc_{t+1}(i) − min(acc))/(max(acc) − min(acc)) + l;
            cc(i) = chaotic(cc(i));
            X_{t+1}(i) = X_t(i) + C1 × cc(i) × acc_norm_{t+1}(i) × d_{t+1} × (X_rand − X_t(i));
        else // Exploitation phase
            // Update the acceleration and normalize it.
            acc_{t+1}(i) = (D_best + V_best × acc_best)/(D_{t+1}(i) × V_{t+1}(i));
            acc_norm_{t+1}(i) = u × (acc_{t+1}(i) − min(acc))/(max(acc) − min(acc)) + l;
            cc(i) = chaotic(cc(i));
            P = 2 × cc(i) − C4;
            X_{t+1}(i) = X_best + F × C2 × cc(i) × acc_norm_{t+1}(i) × d_{t+1} × (T × X_best − X_t(i));
        end if
    end for
    Evaluate each object and select the one with the best fitness value.
    t = t + 1
end while
Return the best solution found.
(a)
Utilization of chaotic maps in initializing the population
Since AO adopts the traditional initialization method, the randomness and diversity of the initial population cannot be guaranteed. In contrast to traditional initialization methods, using chaotic maps for initialization better preserves population diversity.
However, in our chaotic optimization algorithm (CAO), we make an innovative choice by using chaotic sequences c c ( i ) instead of simple random generation rand (0, 1). These chaotic sequences are used to initialize the position, volume, density, and acceleration of all objects, according to the following formulas:
X(i) = lb_i + cc(i) × (ub_i − lb_i)
V_t(i) = cc(i)
D_t(i) = cc(i)
acc(i) = lb_i + cc(i) × (ub_i − lb_i)
To verify the soundness of this method, we compared the initial values generated by a chaotic map with those generated by traditional initialization; the results are shown in Figure 3. As this figure shows, the initial values generated by the chaotic map cover the search space more extensively than those generated by the uniform random distribution, and the initial positions obtained through chaotic-map initialization are more uniformly distributed in the search space. Consequently, chaotic initialization can improve the ergodicity of the initial values and accelerate the convergence of the algorithm.
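The comparison in Figure 3 can be reproduced in miniature with the sketch below (assumptions: the logistic map as the chaotic generator and a 2-D search space in [−10, 10]; both are illustrative choices, not necessarily the article's exact setup):

```python
import numpy as np

def logistic_map(x):
    """Logistic map used as the chaotic generator (illustrative choice)."""
    return 4.0 * x * (1.0 - x)

def chaotic_init(n_objects, dim, lb, ub, x0=0.7):
    """X(i) = lb + cc(i) * (ub - lb), with cc driven by a chaotic map."""
    X, c = np.empty((n_objects, dim)), x0
    for i in range(n_objects):
        for j in range(dim):
            c = logistic_map(c)
            X[i, j] = lb + c * (ub - lb)
    return X

X_chaotic = chaotic_init(30, 2, -10.0, 10.0)
X_uniform = -10.0 + np.random.default_rng(0).random((30, 2)) * 20.0
# Plotting both point clouds side by side gives a Figure-3-style comparison.
```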
(b)
Chaotic maps applied to update density and volume
The density and volume of object i are updated at iteration t + 1 according to Equation (5). In this equation, the parameters rand1 and rand2 are essential to the AO algorithm, and their values are usually randomly generated from the interval [0, 1]. However, a significant innovation in our approach is the use of chaotic maps to generate these parameters. Instead of using random values at each iteration to determine the optimal points, we adopt a more sophisticated method using chaotic maps to assign values to rand1 and rand2. This innovative approach promises substantial improvements in the search for optimal solutions. The density and volume of object i for iteration t + 1 are updated using the following equation:
D_{t+1}(i) = D_t(i) + cc(i) × (D_best − D_t(i))
V_{t+1}(i) = V_t(i) + cc(i) × (V_best − V_t(i))
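The chaotic density/volume update is a one-line change relative to Equation (5): the chaotic value cc(i) takes the place of rand1 and rand2. A minimal sketch:

```python
import numpy as np

def update_density_volume(D, V, D_best, V_best, cc):
    """D_{t+1} = D_t + cc * (D_best - D_t); V_{t+1} = V_t + cc * (V_best - V_t).
    cc is a chaotic value in [0, 1] replacing rand1/rand2."""
    return D + cc * (D_best - D), V + cc * (V_best - V)
```

With cc = 0 the state is unchanged, and with cc = 1 it jumps to the best value, so the chaotic sequence controls the step size toward the incumbent best.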
(c)
Chaotic maps applied to update position
In the standard optimization algorithm (AO), the position of objects i is updated in accordance with Equation (11). It is important to note that the values rand3 and rand4, needed in this equation, are usually randomly generated in the interval [0, 1]. However, in our chaotic optimization algorithm (CAO), we introduce a crucial innovation: instead of relying on random values at each iteration, we use chaotic maps to determine these parameters. This innovative approach promises to open up new perspectives in improving algorithm efficiency and finding optimal solutions. The calculation of the new position of object i is determined by the following equation:
X_{t+1}(i) = X_t(i) + C1 × cc(i) × acc_norm_{t+1}(i) × d_{t+1} × (X_rand − X_t(i)),   if TF ≤ 0.5
X_{t+1}(i) = X_best + F × C2 × cc(i) × acc_norm_{t+1}(i) × d_{t+1} × (T × X_best − X_t(i)),   otherwise
The variable X_rand in this expression represents the position of a randomly selected object; it can take on different values at each iteration.
F is the flag used to change the direction of movement using (Equation (20)).
F = +1 if P ≤ 0.5, and F = −1 if P > 0.5, with P = 2 × cc(i) − C4
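Both branches of the chaotic position update can be condensed into one function (a sketch; the constants C1, C2, C3, and C4 below are assumed defaults taken from the AO literature, not values stated in this section):

```python
import numpy as np

def cao_position_update(X, X_best, X_rand, acc_norm, d, TF, cc,
                        C1=2.0, C2=6.0, C3=2.0, C4=0.5):
    """Chaotic position update: cc replaces rand3/rand4 of the original AO.
    C1..C4 defaults are assumptions for illustration."""
    if TF <= 0.5:
        # Exploration: move relative to a randomly chosen object.
        return X + C1 * cc * acc_norm * d * (X_rand - X)
    # Exploitation: move relative to the best object found so far.
    T = C3 * TF
    P = 2.0 * cc - C4
    F = 1.0 if P <= 0.5 else -1.0
    return X_best + F * C2 * cc * acc_norm * d * (T * X_best - X)
```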
The following observations reinforce the demonstration of the theoretical effectiveness of the proposed chaotic algorithms:
As a first significant improvement, the chaos-enhanced Archimedean optimization algorithm determines the positions, volumes, densities, and accelerations of all objects by choosing the best solution from among those randomly generated.
Chaotic map integration supports CAO by orchestrating chaotic updates of density, volume, and position, substantially improving the exploration process.
When it comes to probing a promising area in the search space, these chaotic parameters play an essential role in fostering chaotic neighborhood exploitation.

6. Simulation Results

To evaluate the effectiveness of the proposed Chaos-Archimede Optimization Algorithm (CAO), we undertook tests on three distinct problem sets. The first group, comprising twenty-three benchmark functions (unimodal, multimodal, and multimodal with fixed dimension) identified in Table 2, served as the basis for our evaluation. The second group consists of three significant engineering problems: the design problem for a welded beam, the design problem for a tension/compression spring, and the design problem for a pressure vessel. These engineering problems enable us to test the effectiveness of our algorithms in practical contexts. Finally, the third group of tests highlights the versatility of the CAO approach. It is devoted to validating our CAO algorithm for signal reconstruction, as well as 2D and 3D medical images. In this series of tests, we used discrete orthogonal Meixner moments (MMs) as a key tool. These experiments cover a diverse range of application domains and demonstrate the adaptability and efficiency of our CAO algorithm in a variety of contexts.
For the first group, made up of 23 test functions in Table 2, the aim is to determine the value of x that minimizes the function f(x) in each specific case. This solution search is formulated under the constraint minf(x). It is important to note that the search range, the possible values for the components of x, varies according to the specific nature of each function. Each function may have distinct range requirements for the different variables, and this variability must be considered when searching for the optimal solution for each test function.
In the second group, the aim is to find optimal values for x that minimize the cost functions associated with the welded beam, tension/compression spring, and pressure vessel design problems. The search scope is adjusted accordingly to meet the specifics of each engineering problem [36].
In the third problem, we seek to optimize the Meixner polynomial parameters (β, u) by minimizing the MSE (Mean Square Error) objective function, with the aim of obtaining optimal parameters that enable 1D, 2D, and 3D signals to be reconstructed with excellent quality. The dimension of this problem is 2, as we are looking for optimal values of β and u. The search range for β extends from 0 to N (the size of the signal), while that for u varies from 0 to 1. This choice of range reflects the specific conditions of the problem and guarantees an exhaustive exploration of the possible values of β and u, with the aim of obtaining optimal parameters for the reconstruction of multidimensional signals.
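The 2-D search problem can be framed as below; `mse_of_reconstruction` is a hypothetical stand-in for the full Meixner reconstruction pipeline, which is not reproduced here, and the helper names are our own.

```python
# Sketch of the (beta, u) objective: candidates are clipped to the stated
# ranges (beta in [0, N], u in (0, 1)) and scored by reconstruction MSE.

def clip_to_range(beta, u, N):
    """Keep a candidate inside the admissible search ranges."""
    beta = min(max(beta, 0.0), float(N))
    u = min(max(u, 1e-6), 1.0 - 1e-6)  # keep u strictly inside (0, 1)
    return beta, u

def objective(params, signal, mse_of_reconstruction, N):
    """Fitness of a candidate (beta, u) pair: lower MSE is better."""
    beta, u = clip_to_range(params[0], params[1], N)
    return mse_of_reconstruction(signal, beta, u)
```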
Chaos-Archimede Optimization Algorithms (CAO) take advantage of ten types of chaotic maps, as illustrated in Table 1. This diversity of chaotic maps has enabled us to sequentially develop ten variations of Chaos-Archimede optimization algorithms, which we have designated as follows: CAO1 (Chebyshev-AO), CAO2 (Circular-AO), CAO3 (Gauss-AO), CAO4 (Iterative-AO), CAO5 (Logistic-AO), CAO6 (Piecewise-AO), CAO7 (Sine-AO), CAO8 (Singer-AO), CAO9 (Sinusoidal-AO), and CAO10 (Tent-AO). The performance of these ten new algorithms was evaluated by comparing them with the standard AO algorithm. Notably, the comparative analysis shows that CAO10 stands out with the most remarkable performance of the set.
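Three of the ten maps in Table 1 can be sketched as follows; these are the forms commonly used in the chaotic-optimization literature, and the exact definitions or control parameters used in the paper may differ.

```python
import math

def logistic(x, a=4.0):    # CAO5: logistic map, fully chaotic for a = 4
    return a * x * (1.0 - x)

def sine(x):               # CAO7: sine map (normalized form, an assumption)
    return math.sin(math.pi * x)

def tent(x, mu=0.7):       # CAO10: tent map with breakpoint mu
    return x / mu if x < mu else (1.0 - x) / (1.0 - mu)

def chaotic_sequence(map_fn, x0=0.6, n=5):
    """Iterate a map from seed x0 to produce n values in [0, 1]."""
    seq, x = [], x0
    for _ in range(n):
        x = map_fn(x)
        seq.append(x)
    return seq
```

These sequences replace the pseudorandom draws (rand1–rand4) in the three components of AO named above.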
Continuing our exploration, we extended the comparison by including CAO10 in a set of original optimization algorithms, such as the whale optimization algorithm [36], the gray wolf optimizer [37], salp swarm algorithm [38], multiverse optimizer [39], gravitational search algorithm [40], sine cosine algorithm [41], particle swarm optimization [42], and moth-flame optimization [43]. The results obtained highlight the efficiency of CAO10, which competes with these original algorithms in a promising way, opening new perspectives for diverse optimization applications.

6.1. Reference Function Validation

(a)
Extensibility test
To evaluate the performance of the Chaos-Archimede Optimization (CAO) algorithm, this section carries out an in-depth comparison between the results obtained by CAO and those of the original AO algorithm. Tests are specifically carried out on twenty-three benchmark functions (F1–F23), involving rigorous evaluations of the scalability of both algorithms.
The aim of scalability tests is to analyze the impact of dimensions on the efficiency of stochastic optimizers. They enable us to understand how problem dimensions affect the quality of the solutions generated, as well as the efficiency of the CAO when the dimension is dynamically increased. Three different dimensions are therefore examined in this study: 20, 30, and 50. To assess the performance of the algorithms, several performance measures are used. These include: (1) the mean and standard deviation of the final solutions obtained for each function. (2) Analysis of the convergence of the functions obtained to assess the speed with which the algorithms converge toward optimal solutions. (3) Ranking to identify the chaotic map that performs best among all available chaotic maps.
All optimization algorithms are subjected to the same experimental conditions, including an identical population size and a maximum number of iterations set at 500. This rigorous methodological approach ensures a fair comparison of CAO versus AO performance, highlighting the potential advantages of using chaotic maps in the context of stochastic optimization.
(1)
In statistics, the standard deviation is a fundamental measure used to quantify the amplitude of variation and dispersion within a data set. A standard deviation close to zero indicates that optimal solutions tend to be very close to the mean, reflecting a high concentration of results. On the other hand, a high standard deviation reflects a greater dispersion of optimal solutions over a wide range of values, suggesting greater variability. Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10 and Table 11 show the mean error and standard deviation of solutions obtained from experiments with the ten CAO optimization algorithms and the standard AO for dimensions of 20, 30, and 50, respectively. Analysis of these results reveals that CAO optimization algorithms outperform AO in all functions (F1-F23), regardless of the number of dimensions. Moreover, these CAO algorithms systematically display a clear superiority in the higher dimensions, underlining their ability to handle complex problems. Notably, the CAO10 algorithm stands out as offering exceptional accuracy compared to other CAO variants, reinforcing its status as the preferred choice for optimizing varied, multidimensional problems. These results demonstrate that integrating chaotic maps into CAO optimization algorithms can significantly improve their efficiency, paving the way for more accurate solutions and greater convergence in complex optimization contexts.
(2)
The convergence test represents an essential criterion for assessing the performance of algorithms in achieving the global optimum. Figure 4, Figure 5 and Figure 6 illustrate the convergence curves of test functions using CAO optimization algorithms and standard AO for dimensions of 20, 30, and 50, respectively. As these figures show, all CAO algorithms demonstrate a remarkable ability to find optimal solutions to reference functions in all functions (F1–F23). These algorithms demonstrate reliability and stability, standing out for their ability to explore search spaces more thoroughly than the standard AO algorithm. Moreover, they converge on optimal solutions considerably faster than the standard algorithm. These observations show that the use of chaotic maps in optimization significantly improves algorithm performance, contributing to greater efficiency and a significant reduction in the time needed to reach optimal solutions.
(3)
In ranking-based analysis, algorithms are evaluated and ranked according to their average performance. For this purpose, a standard ranking system is used to establish the competition between algorithms. Figure 7 shows the ranking of algorithms according to their performance in 20, 30, and 50 dimensions. In this ranking system, a rank of 1 indicates the best performance, while a rank of 11 reflects the least favorable performance. Clearly, the ten chaotic variants outperform the original AO algorithm. In addition, the CAO10 algorithm stands out by achieving significantly higher rankings than the other CAO variants.
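The average-rank scoring behind this analysis can be sketched as below (an illustrative reimplementation, not the paper's code); ties are broken by algorithm index for simplicity.

```python
# Rank algorithms on each benchmark function by mean error (1 = best),
# then average the ranks across all functions.

def average_ranks(scores):
    """scores[f][a] = mean error of algorithm a on function f."""
    n_algos = len(scores[0])
    totals = [0.0] * n_algos
    for row in scores:
        order = sorted(range(n_algos), key=lambda a: row[a])
        for rank, a in enumerate(order, start=1):
            totals[a] += rank
    return [t / len(scores) for t in totals]
```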
These results convincingly demonstrate that introducing chaotic maps into the optimization process confers a significant performance advantage over the standard Archimedean optimization algorithm. In particular, the CAO10 algorithm stands out as the preeminent choice, highlighting its exceptional efficiency and ability to obtain optimal solutions in multidimensional spaces. This ranking analysis underlines the importance of integrating chaotic variables into optimization for improved performance and a better ability to solve a variety of problems.
(b)
Comparison test with other optimization algorithms
To verify the effectiveness of the proposed algorithm, CAO10 was compared with eight well-known algorithms: the Whale Optimization Algorithm, Gray Wolf Optimizer, Salp Swarm Algorithm, Multiverse Optimizer, Gravitational Search Algorithm, Sine Cosine Algorithm, Particle Swarm Optimization, and Moth-Flame Optimization. The parameter values for these algorithms were set in accordance with their original papers; Table 12 lists the parameter values for each algorithm. In all experiments, the population size and maximum number of iterations were set to 30 and 500, respectively.
The mean results (Mean) and standard deviations (Std) obtained by the different algorithms when solving problems F1–F23 are presented in Table 13, Table 14 and Table 15. The evaluation of the convergence behavior of the algorithms in solving the set of problems is also shown in Figure 8. Moreover, Table 13, Table 14 and Table 15 show conclusively that the CAO10 algorithm outperforms all other algorithms in all cases (F1–F23).
The convergence curves shown in Figure 8 highlight the exceptional convergence speed of CAO10 compared with other algorithms in all situations. Indeed, CAO10 shows remarkable convergence from the very first stages of the search, whereas other algorithms struggle to improve the quality of solutions, even after a greater number of exploratory stages.
These results highlight the strengths of the CAO10 algorithm, which manages to significantly speed up the early stages of the search due to its chaotic initialization strategy. In addition, CAO10 significantly improves its chances of avoiding local optima, thus favoring the discovery of optimal solutions.
In short, these findings attest to the undeniable effectiveness of the CAO10 algorithm, demonstrating its ability to solve a wide range of problems quickly and accurately. This combination of rapid convergence and high performance makes it a preferred choice for optimization in complex contexts.

6.2. Comparative Study on Two Real-World Applications

In this subsection, we use the CAO10 algorithm to solve three crucial engineering mathematical modeling problems, namely: (a) the design of welded beams; (b) the design of tension/compression springs; and (c) the design of pressure vessels. The results obtained are compared with those of numerous algorithms, including WOA, GWO, SSA, MVO, GSA, SCA, PSO, and MFO.
(a)
The welded beam design problem (WBDP)
The objective of this test is to optimize the values of the supplied variables (l, h, t, and b) to minimize the manufacturing cost for the WBD problem (Figure 9). To meet this challenge, we used the CAO10 algorithm proposed for solving the WBD problem, which we compared with various other optimization algorithms to determine its degree of superiority.
The results of this comparison are shown in Table 16. They reveal that the performance of the CAO10 algorithm exceeds that of all the other algorithms considered. It can therefore be concluded that the CAO10 algorithm succeeds in identifying the best solution for the WBD problem.
(b)
The problem of tension/compression springs (TCSP)
In the TCSP problem, our aim is to minimize the minimum weight of this spring by optimizing three essential design variables (d, D, and the number of active coils, N), as illustrated in Figure 10. To solve this complex problem, we used the CAO10 algorithm, which was specifically designed for this task. We then compared the performance of the CAO10 algorithm with that of various competing optimization techniques.
The results of this comparison, summarized in Table 17, convincingly demonstrate the performance of the CAO10 algorithm, which generates a greater number of optimal solutions than the other approaches examined.
(c)
Pressure vessel design problem (PVDP)
In the context of the PVDP, the main objective is to minimize the total cost of the cylindrical pressure vessel, as shown in Figure 11. The optimization must adjust four design variables (Th, Ts, R, and L) while satisfying the problem's four constraints. Using the CAO10 algorithm to solve this problem, the simulation results obtained are compared with those obtained by several different optimization algorithms, as shown in Table 18. On the basis of these data, we can conclude that the suggested CAO10 algorithm achieves a better cost value than all other comparative algorithms, with the exception of the shortest path method.
These results underline the exceptional effectiveness of the CAO10 algorithm in solving complex problems such as WBDP, TCSP, and PVDP, demonstrating its ability to achieve optimal solutions and outperform other optimization approaches. This breakthrough offers promising prospects for the application of the CAO10 algorithm in industrial optimization and engineering contexts, where the search for optimal solutions is of crucial importance.

6.3. Reconstruction of 2D and 3D Signals and Images Using Meixner Moments and the Chaos-Archimede Algorithm (CAO)

Discrete orthogonal moments [44] play a prominent role in signal and image analysis, and their usefulness extends to a multitude of varied applications. They have been successful in areas such as classification, reconstruction, watermarking, encryption, and compression [45,46,47,48,49], and are a versatile tool for representing and processing complex data. In the context of signal and image reconstruction, the use of discrete orthogonal Meixner moments (MMs) requires the computation of Meixner polynomials (MPs). These polynomials depend on local parameters, denoted (β, u). To achieve optimal reconstruction quality, it is imperative to adjust these parameters appropriately. This is where the Chaos-Archimede optimization (CAO) algorithm comes in: the CAO10 algorithm is used to determine optimal values of (β, u), thus guaranteeing superior quality in image reconstruction. Figure 12 summarizes the key steps in implementing the CAO algorithm for signal and image reconstruction.
In this subsection, we validate the CAO10 algorithm’s ability to reconstruct large medical signals and images using MMs. This validation process comprises three distinct tests. The first test aims to evaluate the performance of the proposed method for bio signals. The second test looks at the reconstruction of medical color images. Its aim is to demonstrate the ability of the CAO10 algorithm, in combination with MMs, to render color images accurately. Finally, the third test tackles an even more complex challenge, namely, the reconstruction of 3D images. We use the reconstruction method based on the CAO10 algorithm to achieve optimum results in the context of these three-dimensional images.
To assess the similarity between the original signal or image and those reconstructed, we use the criteria of mean square error (MSE) and peak signal-to-noise ratio (PSNR) in decibels (dB). These metrics enable us to objectively measure the quality of the reconstruction by quantifying the difference between the original signal or image and its reconstructed version, thus providing an accurate assessment of the CAO10 algorithm’s performance in each test scenario.
MSE (1D signal): MSE = (1/N) × Σ_{x=0}^{N−1} (f(x) − f̂(x))²
MSE (2D image): MSE = (1/(N×M)) × Σ_{x=0}^{N−1} Σ_{y=0}^{M−1} (f(x,y) − f̂(x,y))²
MSE (3D image): MSE = (1/(N×M×K)) × Σ_{x=0}^{N−1} Σ_{y=0}^{M−1} Σ_{z=0}^{K−1} (f(x,y,z) − f̂(x,y,z))²
PSNR = 10 × log₁₀(k²/MSE)
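A sketch of the 1D case of these metrics follows (the 2D and 3D cases simply extend the sum over the extra dimensions); function names are our own.

```python
import math

def mse_1d(f, f_hat):
    """Mean square error between a signal and its reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(f, f_hat)) / len(f)

def psnr(mse, k=255.0):
    """Peak signal-to-noise ratio in dB; k is the peak value (255 for 8-bit)."""
    return 10.0 * math.log10(k * k / mse)
```

A lower MSE (equivalently, a higher PSNR) indicates a reconstruction closer to the original.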
(a)
Optimal bio signal reconstruction using MMs and the CAO algorithm
In this test, we verify the efficiency of the CAO10 algorithm for reconstructing large signals. To do this, we choose to reconstruct a specific ECG signal, named “MIT-BIH record 124”, of size 1000 taken from the MIT-BIH database [50]. The CAO10 algorithm was employed to optimize the local parameters (β, u) of the MPs used in the reconstruction of this signal. The results obtained with the CAO10 algorithm were compared with those generated by various other optimization algorithms, such as WOA, GWO, SSA, MVO, GSA, SCA, PSO, and MFO.
Table 19 presents the original signal together with the signals reconstructed using the different optimization methods, along with the reconstruction errors, measured in terms of MSE and PSNR, and the optimal values of the local parameters (β, u) of the MPs. In addition, Figure 13 shows the PSNR values of signals reconstructed using the various algorithms.
The results of this test demonstrate that the method based on the CAO10 algorithm enables optimal determination of the parameters (β, u), leading to excellent signal reconstruction quality, characterized by low MSE and high PSNR compared with other optimization algorithms. These findings clearly underline the superiority and robustness of the CAO10 algorithm in the context of biological signal reconstruction, opening up new prospects for improving complex signal analysis in medicine and elsewhere.
(b)
Optimal reconstruction of 2D medical images using CAO-optimized MMs
In this test, we evaluate the ability of the MM-based reconstruction method and CAO10 algorithm to reconstruct color medical images. We used 2D images of size 1024 from the database to carry out these experiments.
The CAO10 algorithm was used to optimize MMs parameters and perform medical image reconstruction. The results obtained by the CAO10 algorithm were compared with those generated by several other commonly used optimization algorithms, including WOA, GWO, SSA, MVO, GSA, SCA, PSO, and MFO. The reconstructed images, including those with the names “Brain”, “multiple-osteochondromas”, “mandible-fracture”, and “soft-tissue-chondroma-thumb”, were evaluated in terms of reconstruction quality. Table 20 shows the results of this evaluation, including reconstructed images, optimized MMs parameter values, and reconstruction errors, while Figure 14 shows the MSE and PSNR curves of the “brain” image reconstructed using various algorithms.
The results obtained attest to the ability of the CAO10 algorithm to determine optimal values of the parameters (β, u) enabling the reconstruction of all images with a considerably low MSE (high PSNR), thus outperforming the other algorithms evaluated. These findings eloquently demonstrate the superiority of the CAO10 algorithm in the field of medical image reconstruction.
(c)
Optimal reconstruction of 3D images by MMs and the CAO algorithm
In this subsection, we evaluate the effectiveness of the proposed 3D reconstruction method based on MMs and the CAO10 algorithm. We used the “Verterba” 3D image of voxel size 256, downloaded from the database referenced in [49]. Table 21 shows the results of the reconstruction of this image using the CAO10 algorithm compared with other optimization algorithms. The values of the optimized parameters (β, u) and the error measures MSE and PSNR are also displayed.
The results of this test show that the MMs values optimized by the CAO10 algorithm lead to better reconstruction quality, characterized by very low MSE and high PSNR, compared with other methods. These results demonstrate the effectiveness of our approach to polynomial parameter selection and 3D image reconstruction.

7. Conclusions

In conclusion, this study presents a significant advance in the field of optimization through the introduction of the improved Archimedes optimization algorithm, Chaotic_AO (CAO). This algorithm replaces the pseudorandom sequences in the essential components of AO, namely, initialization, density and volume updating, and position updating, with ten distinct chaotic maps. This enhancement strikes a more appropriate balance between exploitation and exploration, offering an increased probability of discovering global solutions.
Evaluation of CAO’s performance across three distinct groups of problems highlighted its efficiency and reliability. The results of these tests clearly revealed that the CAO algorithm is not only efficient but also reliable. It demonstrated remarkable convergence speeds and exceptional solution quality in most of the cases studied. These observations confirm the real potential of the CAO algorithm for solving varied and complex problems. Overall, CAO offers exciting new prospects for improving optimization techniques, paving the way for wide-ranging applications in fields from engineering to medicine.
Following on from our current research, we plan to extend the application of the CAO algorithm to other areas of optimization, with particular emphasis on complex problems, such as multi-criteria optimization with conflicting objectives. This focus on more complex scenarios is intended to assess the robustness and versatility of the algorithm in the face of a variety of challenges.

Author Contributions

Conceptualization, Formal analysis, Writing original draft, Software: A.B. and M.A.T. Investigation, Visualization, Data curation: H.K. Validation, Methodology, Project administration, Supervision: M.S., H.Q., Y.E.A. and M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare that they have no competing interests in the submission of this manuscript.

References

  1. Ayyarao, T.S.L.V.; Ramakrishna, N.S.S.; Elavarasan, R.M.; Polumahanthi, N.; Rambabu, M.; Saini, G.; Khan, B.; Alatas, B. War Strategy Optimization Algorithm: A New Effective Metaheuristic Algorithm for Global Optimization. IEEE Access 2022, 10, 25073–25105. [Google Scholar] [CrossRef]
  2. Abdollahzadeh, B.; Gharehchopogh, F.S.; Khodadadi, N.; Mirjalili, S. Mountain Gazelle Optimizer: A new Nature-inspired Metaheuristic Algorithm for Global Optimization Problems. Adv. Eng. Softw. 2022, 174, 103282. [Google Scholar] [CrossRef]
  3. Hajizadeh, Y.; Christie, M.; Demyanov, V. Ant colony optimization for history matching and uncertainty quantification of reservoir models. J. Pet. Sci. Eng. 2011, 77, 78–92. [Google Scholar] [CrossRef]
  4. Premkumar, M.; Jangir, P.; Kumar, B.S.; Sowmya, R.; Alhelou, H.H.; Abualigah, L.; Yildiz, A.R.; Mirjalili, S. A New Arithmetic Optimization Algorithm for Solving Real-World Multiobjective CEC-2021 Constrained Optimization Problems: Diversity Analysis and Validations. IEEE Access 2021, 9, 84263–84295. [Google Scholar] [CrossRef]
  5. Azizi, M.; Talatahari, S.; Gandomi, A.H. Fire Hawk Optimizer: A novel metaheuristic algorithm. Artif. Intell. Rev. 2023, 56, 287–363. [Google Scholar] [CrossRef]
  6. Bencherqui, A.; Karmouni, H.; Daoui, A.; Alfidi, M.; Qjidaa, H.; Sayyouri, M. Optimization of Jacobi Moments Parameters using Artificial Bee Colony Algorithm for 3D Image Analysis. In Proceedings of the 2020 Fourth International Conference on Intelligent Computing in Data Sciences (ICDS), Fez, Morocco, 21–23 October 2020; pp. 1–7. [Google Scholar]
  7. Mareli, M.; Twala, B. An adaptive Cuckoo search algorithm for optimisation. Appl. Comput. Inform. 2018, 14, 107–115. [Google Scholar] [CrossRef]
  8. Alia, O.M.; Mandava, R. The variants of the harmony search algorithm: An overview. Artif. Intell. Rev. 2011, 36, 49–68. [Google Scholar] [CrossRef]
  9. Alsalibi, B.; Abualigah, L.; Khader, A.T. A novel bat algorithm with dynamic membrane structure for optimization problems. Appl. Intell. 2021, 51, 1992–2017. [Google Scholar] [CrossRef]
  10. Majak, J.; Pohlak, M.; Eerme, M.; Velsker, T. Design of car frontal protection system using neural network and genetic algorithm. Mechanika 2012, 18, 453–460. [Google Scholar] [CrossRef]
  11. Kers, J.; Majak, J.; Goljandin, D.; Gregor, A.; Malmstein, M.; Vilsaar, K. Extremes of apparent and tap densities of recovered GFRP filler materials. Compos. Struct. 2010, 92, 2097–2101. [Google Scholar] [CrossRef]
  12. Kers, J.; Majak, J. Modelling a new composite from a recycled GFRP. Mech. Compos. Mater. 2008, 44, 623–632. [Google Scholar] [CrossRef]
  13. Abdelhamid, A.A.; Towfek, S.K.; Khodadadi, N.; Alhussan, A.A.; Khafaga, D.S.; Eid, M.M.; Ibrahim, A. Waterwheel Plant Algorithm: A Novel Metaheuristic Optimization Method. Processes 2023, 11, 1502. [Google Scholar] [CrossRef]
  14. Agushaka, J.O.; Ezugwu, A.E. Initialisation Approaches for Population-Based Metaheuristic Algorithms: A Comprehensive Review. Appl. Sci. 2022, 12, 896. [Google Scholar] [CrossRef]
  15. Louzazni, M.; Khouya, A.; Amechnoue, K.; Gandelli, A.; Mussetta, M.; Crăciunescu, A. Metaheuristic Algorithm for Photovoltaic Parameters: Comparative Study and Prediction with a Firefly Algorithm. Appl. Sci. 2018, 8, 339. [Google Scholar] [CrossRef]
  16. Rahman, A.; Sokkalingam, R.; Othman, M.; Biswas, K.; Abdullah, L.; Kadir, E.A. Nature-Inspired Metaheuristic Techniques for Combinatorial Optimization Problems: Overview and Recent Advances. Mathematics 2021, 9, 2633. [Google Scholar] [CrossRef]
  17. Kisi, O. Machine Learning with Metaheuristic Algorithms for Sustainable Water Resources Management. Sustainability 2021, 13, 8596. [Google Scholar] [CrossRef]
  18. Ikotun, A.M.; Almutari, M.S.; Ezugwu, A.E. K-Means-Based Nature-Inspired Metaheuristic Algorithms for Automatic Data Clustering Problems: Recent Advances and Future Directions. Appl. Sci. 2021, 11, 11246. [Google Scholar] [CrossRef]
  19. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2020, 51, 1531–1551. [Google Scholar] [CrossRef]
  20. Qayyum, A.; Ahmad, J.; Boulila, W.; Rubaiee, S.; Arshad; Masood, F.; Khan, F.; Buchanan, W.J. Chaos-Based Confusion and Diffusion of Image Pixels Using Dynamic Substitution. IEEE Access 2020, 8, 140876–140895. [Google Scholar] [CrossRef]
  21. Hu, T. Discrete Chaos in Fractional Henon Map. Appl. Math. 2014, 5, 2243–2248. [Google Scholar] [CrossRef]
  22. Setoudeh, F.; Dezhdar, M.M.; Najafi, M. Nonlinear analysis and chaos synchronization of a memristive-based chaotic system using adaptive control technique in noisy environments. Chaos Solitons Fractals 2022, 164, 112710. [Google Scholar] [CrossRef]
  23. Thounaojam, U.S. Stochastic chaos in chemical Lorenz system: Interplay of intrinsic noise and nonlinearity. Chaos Solitons Fractals 2022, 165, 112763. [Google Scholar] [CrossRef]
  24. Chuang, L.-Y.; Hsiao, C.-J.; Yang, C.-H. Chaotic particle swarm optimization for data clustering. Expert Syst. Appl. 2011, 38, 14555–14563. [Google Scholar] [CrossRef]
  25. Tahir, M.; Tubaishat, A.; Al-Obeidat, F.; Shah, B.; Halim, Z.; Waqas, M. A novel binary chaotic genetic algorithm for feature selection and its utility in affective computing and healthcare. Neural Comput. Appl. 2020, 34, 11453–11474. [Google Scholar] [CrossRef]
  26. Yang, L.; Li, K.; Zhang, W.; Ke, Z.; Xiao, K.; Du, Z. An improved chaotic ACO clustering algorithm. In Proceedings of the 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), Exeter, UK, 28–30 June 2018; pp. 1642–1649. [Google Scholar]
  27. Masdari, M.; Barshande, S.; Ozdemir, S. CDABC: Chaotic discrete artificial bee colony algorithm for multi-level clustering in large-scale WSNs. J. Supercomput. 2019, 75, 7174–7208. [Google Scholar] [CrossRef]
  28. Kharrich, M.; Selim, A.; Kamel, S.; Kim, J. An effective design of hybrid renewable energy system using an improved Archimedes Optimization Algorithm: A case study of Farafra, Egypt. Energy Convers. Manag. 2023, 283, 116907. [Google Scholar] [CrossRef]
  29. Akdag, O. A Improved Archimedes Optimization Algorithm for multi/single-objective Optimal Power Flow. Electr. Power Syst. Res. 2022, 206, 107796. [Google Scholar] [CrossRef]
  30. Nassef, A.M.; Abdelkareem, M.A.; Maghrabie, H.M.; Baroutaji, A. Metaheuristic-Based Algorithms for Optimizing Fractional-Order Controllers—A Recent, Systematic, and Comprehensive Review. Fractal Fract. 2023, 7, 553. [Google Scholar] [CrossRef]
  31. Chu, H.; Yi, J.; Yang, F. Chaos Particle Swarm Optimization Enhancement Algorithm for UAV Safe Path Planning. Appl. Sci. 2022, 12, 8977. [Google Scholar] [CrossRef]
  32. Valencia-Ponce, M.A.; González-Zapata, A.M.; de la Fraga, L.G.; Sanchez-Lopez, C.; Tlelo-Cuautle, E. Integrated Circuit Design of Fractional-Order Chaotic Systems Optimized by Metaheuristics. Electronics 2023, 12, 413. [Google Scholar] [CrossRef]
  33. Adeyemi, V.-A.; Tlelo-Cuautle, E.; Perez-Pinal, F.-J.; Nuñez-Perez, J.-C. Optimizing the Maximum Lyapunov Exponent of Fractional Order Chaotic Spherical System by Evolutionary Algorithms. Fractal Fract. 2022, 6, 448. [Google Scholar] [CrossRef]
  34. Wang, S.; Rao, H.; Wen, C.; Jia, H.; Wu, D.; Liu, Q.; Abualigah, L. Improved Remora Optimization Algorithm with Mutualistic Strategy for Solving Constrained Engineering Optimization Problems. Processes 2022, 10, 2606. [Google Scholar] [CrossRef]
  35. Wang, W.; Tian, J.; Wu, D. An Improved Crystal Structure Algorithm for Engineering Optimization Problems. Electronics 2022, 11, 4109. [Google Scholar] [CrossRef]
  36. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  37. Li, Y.; Lin, X.; Liu, J. An Improved Gray Wolf Optimization Algorithm to Solve Engineering Problems. Sustainability 2021, 13, 3208. [Google Scholar] [CrossRef]
  38. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  39. Fu, Y.; Zhou, M.; Guo, X.; Qi, L.; Sedraoui, K. Multiverse Optimization Algorithm for Stochastic Biobjective Disassembly Sequence Planning Subject to Operation Failures. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 1041–1051. [Google Scholar] [CrossRef]
  40. Duman, S.; Güvenç, U.; Sönmez, Y.; Yörükeren, N. Optimal power flow using gravitational search algorithm. Energy Convers. Manag. 2012, 59, 86–95. [Google Scholar] [CrossRef]
  41. Sindhu, R.; Ngadiran, R.; Yacob, Y.M.; Zahri, N.A.H.; Hariharan, M. Sine–cosine algorithm for feature selection with elitism strategy and new updating mechanism. Neural Comput. Appl. 2017, 28, 2947–2958. [Google Scholar] [CrossRef]
  42. Cui, Y.; Meng, X.; Qiao, J. A multi-objective particle swarm optimization algorithm based on two-archive mechanism. Appl. Soft Comput. 2022, 119, 108532. [Google Scholar] [CrossRef]
  43. Li, Y.; Zhu, X.; Liu, J. An Improved Moth-Flame Optimization Algorithm for Engineering Problems. Symmetry 2020, 12, 1234. [Google Scholar] [CrossRef]
  44. Bencherqui, A.; Daoui, A.; Karmouni, H.; Qjidaa, H.; Alfidi, M.; Sayyouri, M. Optimal reconstruction and compression of signals and images by Hahn moments and artificial bee Colony (ABC) algorithm. Multimed. Tools Appl. 2022, 81, 29753–29783. [Google Scholar] [CrossRef]
  45. Tahiri, M.A.; Bencherqui, A.; Karmouni, H.; Jamil, M.O.; Sayyouri, M.; Qjidaa, H. Optimal 3D object reconstruction and classification by separable moments via the Firefly algorithm. In Proceedings of the 2022 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 18–20 May 2022; pp. 1–8. [Google Scholar] [CrossRef]
  46. Tahiri, M.A.; Karmouni, H.; Azzayani, A.; Sayyouri, M.; Qjidaa, H. Fast 3D Image Reconstruction by Separable Moments based on Hahn and Krawtchouk Polynomials. In Proceedings of the 2020 Fourth International Conference on Intelligent Computing in Data Sciences (ICDS), Fez, Morocco, 21–23 October 2020. [Google Scholar]
  47. Tahiri, M.A.; Karmouni, H.; Bencherqui, A.; Daoui, A.; Sayyouri, M.; Qjidaa, H.; Hosny, K.M. New color image encryption using hybrid optimization algorithm and Krawtchouk fractional transformations. Vis. Comput. 2022, 39, 6395–6420. [Google Scholar] [CrossRef]
  48. Rabab, O.; Tahiri, M.A.; Bencherqui, A.; Amakdouf, H.; Jamil, M.O.; Qjidaa, H. Efficient Localization and Reconstruction Of 3D Objects Using The New Hybrid Squire Moment. In Proceedings of the 2022 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 18–20 May 2022; pp. 1–8. [Google Scholar] [CrossRef]
  49. Tahiri, M.A.; Karmouni, H.; Sayyouri, M.; Qjidaa, H. 2D and 3D image localization, compression and reconstruction using new hybrid moments. Multidimens. Syst. Signal Process. 2022, 33, 769–806. [Google Scholar] [CrossRef]
  50. Moody, G.B.; Mark, R.G. The impact of the MIT-BIH Arrhythmia Database. IEEE Eng. Med. Biol. Mag. 2001, 20, 45–50. [Google Scholar] [CrossRef]
Figure 1. Chaotic map values.
Figure 2. CAO algorithm flowchart.
Figure 3. Initial values generated using a chaotic map and a uniform random distribution.
Figure 4. 20D Convergence graphs.
Figure 5. 30D Convergence graphs.
Figure 6. 50D Convergence graphs.
Figure 7. Rank bar graphs for 20D, 30D, and 50D.
Figure 8. Convergence curves for CAO10 and other original algorithms.
Figure 9. WBD problem [36].
Figure 10. TCS problem [36].
Figure 11. PVD problem [36].
Figure 12. Reconstruction method using MMs and CAO.
Figure 13. PSNR values for reconstructed signals.
Figure 14. MSE and PSNR plot of reconstructed ‘Brain’ image.
Table 1. Chaotic maps.
Map Name | Function | Range
Chebyshev map | $x_{k+1} = \cos\left(k \cos^{-1}(x_k)\right)$ | (−1, 1)
Circle map | $x_{k+1} = \left(x_k + b - \frac{a}{2\pi}\sin(2\pi x_k)\right) \bmod 1$ | (0, 1)
Gauss map | $x_{k+1} = \begin{cases} 0, & x_k = 0 \\ \frac{1}{x_k} \bmod 1, & \text{otherwise} \end{cases}$ | (0, 1)
Iterative map | $x_{k+1} = \sin\left(\frac{a\pi}{x_k}\right),\ a = 0.7$ | (−1, 1)
Logistic map | $x_{k+1} = a x_k (1 - x_k),\ a = 4$ | (0, 1)
Piecewise map | $x_{k+1} = \begin{cases} \frac{x_k}{P}, & 0 \le x_k < P \\ \frac{x_k - P}{0.5 - P}, & P \le x_k < \frac{1}{2} \\ \frac{1 - P - x_k}{0.5 - P}, & \frac{1}{2} \le x_k < 1 - P \\ \frac{1 - x_k}{P}, & 1 - P \le x_k < 1 \end{cases}$ | (0, 1)
Sine map | $x_{k+1} = \frac{a}{4}\sin(\pi x_k),\ a = 4$ | (0, 1)
Singer map | $x_{k+1} = \mu\left(7.86 x_k - 23.31 x_k^2 + 28.75 x_k^3 - 13.302875 x_k^4\right),\ \mu = 1.07$ | (0, 1)
Sinusoidal map | $x_{k+1} = a x_k^2 \sin(\pi x_k),\ a = 2.3$ | (0, 1)
Tent map | $x_{k+1} = \begin{cases} \frac{x_k}{0.7}, & x_k < 0.7 \\ \frac{10}{3}(1 - x_k), & x_k \ge 0.7 \end{cases}$ | (0, 1)
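A minimal Python sketch of how such chaotic sequences can stand in for pseudorandom draws (illustrative only — the map definitions follow Table 1, but the function names and structure are assumptions, not the authors' code):

```python
def logistic_map(x0, n, a=4.0):
    """Iterate the logistic map x_{k+1} = a * x_k * (1 - x_k) from Table 1."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(a * xs[-1] * (1.0 - xs[-1]))
    return xs


def tent_map(x0, n):
    """Iterate the tent map from Table 1 (threshold 0.7)."""
    xs = [x0]
    for _ in range(n - 1):
        x = xs[-1]
        xs.append(x / 0.7 if x < 0.7 else (10.0 / 3.0) * (1.0 - x))
    return xs


# Sequences stay inside [0, 1] and can replace uniform rand() calls in the
# three CAO components: initialization, density/volume update, position update.
chaotic_sequence = logistic_map(0.7, 500)
```

Because each map is deterministic but non-repeating, seeding it with a different initial value per run reproduces the diversification effect described in the abstract without a pseudorandom generator.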
Table 2. Unimodal, multimodal, and multimodal with fixed dimensions test functions.
Function | Description | Dimensions | Range
Unimodal functions:
F1 | $f(x) = \sum_{i=1}^{n} x_i^2$ | 30, 100, 500, 1000 | [−100, 100]
F2 | $f(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30, 100, 500, 1000 | [−10, 10]
F3 | $f(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30, 100, 500, 1000 | [−100, 100]
F4 | $f(x) = \max_i \{ |x_i|,\ 1 \le i \le n \}$ | 30, 100, 500, 1000 | [−100, 100]
F5 | $f(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 30, 100, 500, 1000 | [−30, 30]
F6 | $f(x) = \sum_{i=1}^{n} ([x_i + 0.5])^2$ | 30, 100, 500, 1000 | [−100, 100]
F7 | $f(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | 30, 100, 500, 1000 | [−1.28, 1.28]
Multimodal functions:
F8 | $f(x) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$ | 30, 100, 500, 1000 | [−500, 500]
F9 | $f(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$ | 30, 100, 500, 1000 | [−5.12, 5.12]
F10 | $f(x) = -20 \exp\left( -0.2 \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \tfrac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e$ | 30, 100, 500, 1000 | [−32, 32]
F11 | $f(x) = 1 + \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right)$ | 30, 100, 500, 1000 | [−600, 600]
F12 | $f(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, with $y_i = 1 + \frac{x_i + 1}{4}$ | 30, 100, 500, 1000 | [−50, 50]
F13 | $f(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 30, 100, 500, 1000 | [−50, 50]
Multimodal functions with a fixed dimension:
F14 | $f(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | 2 | [−65, 65]
F15 | $f(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | 4 | [−5, 5]
F16 | $f(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | 2 | [−5, 5]
F17 | $f(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10$ | 2 | [−5, 5]
F18 | $f(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2) \right] \times \left[ 30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2) \right]$ | 2 | [−2, 2]
F19 | $f(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$ | 3 | [1, 2]
F20 | $f(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$ | 6 | [0, 1]
F21 | $f(x) = -\sum_{i=1}^{5} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 1]
F22 | $f(x) = -\sum_{i=1}^{7} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 1]
F23 | $f(x) = -\sum_{i=1}^{10} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 1]
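Several of the Table 2 benchmarks are short enough to state directly. The sketch below (illustrative, not the authors' test harness) implements the sphere (F1), Rastrigin (F9), and Ackley (F10) functions, each with global minimum 0 at the origin:

```python
import math


def sphere(x):
    """F1: sum of squares; smooth and unimodal."""
    return sum(v * v for v in x)


def rastrigin(x):
    """F9: highly multimodal with regularly spaced local minima."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)


def ackley(x):
    """F10: nearly flat outer region with a narrow central funnel."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2.0 * math.pi * v) for v in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e
```

All three return 0 at the origin, which is why the near-zero means in Tables 3–11 indicate near-optimal convergence.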
Table 3. Results on unimodal benchmark functions (F1 to F7) with Dim = 20.
Algorithms | Metric | F1 | F2 | F3 | F4 | F5 | F6 | F7
CAO1Mean3.1594 × 10−624.2767 × 10−451.4931 × 10−457.3686 × 10−430.00420.05149.7338 × 10−3
Std1.4107 × 10−619.2632 × 10−545.5336 × 10−443.7618 × 10−420.01860.11820.0319
CAO2Mean1.9363 × 10−471.7334 × 10−625.7325 × 10−441.7001 × 10−530.00230.08120.0013
Std1.1737 × 10−469.4942 × 10−624.3096 × 10−421.1469 × 10−510.00350.13450.0280
CAO3Mean1.6439 × 10−463.7819 × 10−486.1514 × 10−407.1135 × 10−430.00170.09050.0034
Std2.3888 × 10−452.5496 × 10−474.3359 × 10−381.7891 × 10−420.00350.16020.0198
CAO4Mean1.8571 × 10−511.5478 × 10−519.7073 × 10−414.7818 × 10−390.00170.08831.5364 × 10−4
Std1.5543 × 10−509.8913 × 10−512.4082 × 10−395.4813 × 10−380.00310.13910.0173
CAO5Mean1.2846 × 10−425.0386 × 10−431.0864 × 10−383.6411 × 10−470.00440.05800.0022
Std4.3618 × 10−412.2073 × 10−429.4501 × 10−371.8711 × 10−460.01810.11350.0167
CAO6Mean2.1626 × 10−416.9392 × 10−451.3965 × 10−388.9057 × 10−400.00130.08040.0044
Std8.1082 × 10−412.3261 × 10−434.0487 × 10−371.1046 × 10−380.00240.15020.0224
CAO7Mean2.5815 × 10−457.3256 × 10−447.4666 × 10−371.0667 × 10−400.00160.05430.0016
Std1.5931 × 10−426.6050 × 10−432.2757 × 10−352.3155 × 10−400.00320.09760.0130
CAO8Mean3.0509 × 10−427.5109 × 10−442.3251 × 10−373.0214 × 10−390.00120.07170.0017
Std2.4369 × 10−409.4416 × 10−422.9999 × 10−355.0822 × 10−380.00230.12290.0120
CAO9Mean1.3773 × 10−424.1022 × 10−435.3556 × 10−402.2169 × 10−390.00210.05620.0011
Std1.6850 × 10−411.4126 × 10−422.1538 × 10−382.3310 × 10−380.00340.10300.0205
CAO10Mean4.3124 × 10−682.5689 × 10−702.0973 × 10−541.0760 × 10−620.00710.08050.0012
Std4.7876 × 10−671.2543 × 10−681.5917 × 10−523.1530 × 10−620.02570.12080.0156
AOMean3.6324 × 10−412.9733 × 10−427.8430 × 10−369.6616 × 10−390.01250.31600.0132
Std3.3158 × 10−402.3904 × 10−417.8266 × 10−347.1005 × 10−380.02660.20630.0242
Table 4. Results on multimodal benchmark functions (F8 to F13) with Dim = 20.
Algorithms | Metric | F8 | F9 | F10 | F11 | F12 | F13
CAO1Mean1.128 × 1030.2274007.881 × 10−101.440 × 10−60.25110.0018
Std5.164 × 1030.8588097.519 × 10−109.349 × 10−60.40870.0028
CAO2Mean25.1341009.274 × 10−116.312 × 10−101.482 × 10−90.15230.0482
Std2.730 × 1027.059 × 10−96.498 × 10−104.489 × 10−90.28580.1142
CAO3Mean1.588 × 1023.333 × 10−92.921 × 10−56.270 × 10−100.19190.0670
Std5.571 × 1025.292 × 10−81.629 × 10−44.489 × 10−90.39590.2197
CAO4Mean47.8350001.316 × 10−93.727 × 10−91.117 × 10−90.22300.0337
Std4.622 × 1024.845 × 10−91.593 × 10−88.992 × 10−90.39030.1814
CAO5Mean5.288 × 1024.422 × 10−92.357 × 10−93.155 × 10−80.26830.0550
Std1.922 × 1031.084 × 10−82.638 × 10−81.044 × 10−80.40910.2154
CAO6Mean1.141 × 1026.606 × 10−98.621 × 10−95.130 × 10−90.16310.0227
Std4.698 × 1023.252 × 10−85.440 × 10−81.600 × 10−80.28990.1198
CAO7Mean71.1334000.08710885.638 × 10−93.205 × 10−70.23180.0557
Std3.352 × 1020.45357005.563 × 10−81.986 × 10−60.38530.1901
CAO8Mean84.3120661.953 × 10−91.111 × 10−71.465 × 10−60.18540.0345
Std3.613 × 1021.191 × 10−81.41 × 10−071.170 × 10−050.30950.1823
CAO9Mean29.9665090.0309003.551 × 10−098.202 × 10−090.18560.0650
Std2.804 × 1020.6808001.962 × 10−073.330 × 10−160.29570.2426
CAO10Mean2.30810903.420 × 10−124.984 × 10−104.897 × 10−100.08370.1403
Std2.080 × 1021.540 × 10−91.753 × 10−084.489 × 10−090.24730.1641
AOMean2.420 × 1040.34408440.646 × 10−041.465 × 10−060.30820.1432
Std1.328 × 1050.76844211.522 × 10−041.170 × 10−050.40690.3872
Table 5. Results on multimodal with fixed dimension benchmark functions (F14 to F23) with Dim = 20.
Algorithms | Metric | F14 | F15 | F16 | F17 | F18 | F19 | F20 | F21 | F22 | F23
CAO1Mean23.58150.71050.31210.68890.45040.60330.28103.75795.41313.9216
Std10.82770.80450.56150.67110.68930.21010.16780.12803.94821.1888
CAO2Mean23.87060.27340.30810.49110.45480.54830.32023.86513.72383.8291
Std11.45910.33550.55410.31810.61020.29390.15000.16511.32780.0517
CAO3Mean31.62991.42770.30790.67520.45240.50310.29543.75901.05923.8082
Std0.412291.61460.55560.66770.68410.35950.14920.16820.14210.3970
CAO4Mean8.407300.67660.30400.51980.47490.55480.29383.91123.97403.8283
Std33.67431.24770.55140.50110.68440.24720.13600.23990.12410.2028
CAO5Mean15.70130.93870.30930.43310.46830.52350.46893.80673.84583.9310
Std22.33900.76040.56390.40980.67460.31460.29380.16920.15531.1082
CAO6Mean24.06950.31850.31390.49190.48480.51480.29175.91600.87483.7528
Std10.57410.30320.56110.53310.06350.32760.19990.06010.19460.3190
CAO7Mean24.07370.78980.28910.70110.45670.48980.40053.90083.81133.8739
Std10.85160.76280.53150.79100.70090.37560.24550.14540.17212.2105
CAO8Mean23.76540.61290.31190.67710.45800.50750.32414.73853.75503.8911
Std11.42560.69600.5630.62120.70140.35390.34560.31290.46601.0406
CAO9Mean31.73410.60330.26290.71040.49750.54290.33435.83254.80773.8313
Std0.807120.59420.47560.74080.70690.31290.26592.28632.13320.234
CAO10Mean7.958170.12040.29990.21840.48820.49550.27833.66770.99783.7450
Std33.94010.08260.54750.27180.27180.38390.17710.10950.00390.1277
AOMean32.00597.49680.32060.80110.49980.63180.47107.79267.83873.9236
Std0.267789.01910.54940.82370.69970.18710.25490.20270.10190.2180
Table 6. Results on unimodal benchmark functions (F1 to F7) with Dim = 30.
Algorithms | Metric | F1 | F2 | F3 | F4 | F5 | F6 | F7
CAO1Mean6.9002 × 10−564.3889 × 10−462.8527 × 10−504.2177 × 10−430.00130.06800.0020
Std8.8970 × 10−551.6921 × 10−452.0478 × 10−462.9221 × 10−420.00210.10990.0301
CAO2Mean2.0179 × 10−481.7606 × 10−621.0364 × 10−505.7924 × 10−540.00120.06680.0023
Std1.1076 × 10−471.0465 × 10−615.6617 × 10−503.0722 × 10−530.00250.11580.0081
CAO3Mean4.9040 × 10−465.7033 × 10−492.5248 × 10−431.8416 × 10−440.00220.04390.0048
Std1.0077 × 10−444.3851 × 10−472.5420 × 10−413.2764 × 10−430.00360.06710.0174
CAO4Mean1.4185 × 10−492.7644 × 10−442.2256 × 10−412.4460 × 10−400.00130.09600.0024
Std3.0797 × 10−491.2080 × 10−436.0544 × 10−393.0646 × 10−390.00250.13860.0181
CAO5Mean7.9361 × 10−415.5580 × 10−451.0328 × 10−451.4898 × 10−410.00330.09600.0018
Std3.6388 × 10−408.9315 × 10−443.4739 × 10−438.7590 × 10−400.01090.06880.0114
CAO6Mean5.1744 × 10−444.4873 × 10−463.5064 × 10−411.8246 × 10−390.00170.09480.0012
Std5.1589 × 10−434.2547 × 10−451.4646 × 10−391.6917 × 10−380.00250.14910.0231
CAO7Mean2.9060 × 10−432.1857 × 10−455.0238 × 10−418.6015 × 10−410.00190.06700.0039
Std1.5628 × 10−426.0383 × 10−452.6723 × 10−393.1661 × 10−400.00260.10330.0190
CAO8Mean6.2145 × 10−419.7401 × 10−453.7685 × 10−394.3596 × 10−390.00520.07600.0038
Std6.3794 × 10−406.2932 × 10−431.8435 × 10−372.9606 × 10−380.02140.11700.0218
CAO9Mean7.5027 × 10−441.0543 × 10−511.5536 × 10−372.3961 × 10−460.00310.08356.0726 × 10−04
Std7.4475 × 10−432.3570 × 10−518.7301 × 10−363.2826 × 10−450.00400.13460.0274
CAO10Mean1.1623 × 10−664.0243 × 10−664.3410 × 10−594.8664 × 10−600.00190.15760.0013
Std6.2719 × 10−661.8822 × 10−652.3865 × 10−571.4679 × 10−590.00310.15150.0234
AOMean1.0825 × 10−406.6106 × 10−441.9768 × 10−372.9982 × 10−380.00690.31940.0051
Std2.2192 × 10−391.7556 × 10−437.6752 × 10−362.0650 × 10−370.00470.20030.0187
Table 7. Results on multimodal benchmark functions (F8 to F13) with Dim = 30.
Algorithms | Metric | F8 | F9 | F10 | F11 | F12 | F13
CAO1Mean52.6970013.294 × 10−081.850 × 10−095.011 × 10−090.13810.0649
Std3.006 × 10+022.894 × 10−076.374 × 10−072.336 × 10−080.27740.1940
CAO2Mean23.3300102.415 × 10−044.589 × 10−085.585 × 10−070.15740.0824
Std3.553 × 10+020.8493001.809 × 10−074.629 × 10−060.30110.1570
CAO3Mean1.214 × 10+021.329 × 10−059.265 × 10−081.992 × 10−080.17790.0556
Std2.600 × 10+023.123 × 10−043.094 × 10−071.677 × 10−070.33640.2147
CAO4Mean91.1730009.416 × 10−081.334 × 10−086.218 × 10−080.18650.0358
Std2.387 × 10+028.232 × 10−073.551 × 10−082.195 × 10−070.33470.1818
CAO5Mean1.819 × 10+020.23933429.295 × 10−089.967 × 10−070.25660.0655
Std2.982 × 10+021.43891103.808 × 10−074.127 × 10−060.38440.1811
CAO6Mean1.144 × 10+020.0630005.721 × 10−099.769 × 10−080.18620.0909
Std3.260 × 10+020.2066233.485 × 10−085.083 × 10−070.38130.2269
CAO7Mean40.2359004.242 × 10−097.161 × 10−102.315 × 10−070.17670.0773
Std2.829 × 10+025.530 × 10−082.708 × 10−086.464 × 10−040.33020.1732
CAO8Mean1.761 × 10+022.882 × 10−081.0661 × 10−083.582 × 10−050.15630.0540
Std1.606 × 10+031.866 × 10−075.102 × 10−081.908 × 10−060.34000.2567
CAO9Mean1.354 × 10+020.71061201.447 × 10−084.202 × 10−090.28290.0517
Std2.710 × 10+021.01481107.504 × 10−084.608 × 10−080.30360.1689
CAO10Mean17.0142126.319 × 10−102.465 × 10−115.004 × 10−090.25430.0524
Std2.604 × 10+021.191 × 10−081.913 × 10−111.923 × 10−080.34210.1734
AOMean2.994 × 10+020.7488003.492 × 10−073.396 × 10−050.34050.1537
Std1.582 × 10+030.8191001.674 × 10−063.876 × 10−050.36470.3426
Table 8. Results on multimodal with fixed dimension benchmark functions (F14 to F23) with Dim = 30.
Algorithms | Metric | F14 | F15 | F16 | F17 | F18 | F19 | F20 | F21 | F22 | F23
CAO1Mean31.9181.11660.30810.68890.48990.51710.40473.86995.36993.8539
Std0.06991.27250.56280.67110.69850.30290.25570.24494.86760.1391
CAO2Mean31.9360.97250.30800.49110.49860.53120.43973.87090.99483.8559
Std0.11600.97580.56000.31810.70260.30430.30330.12360.01230.1689
CAO3Mean32.0370.92630.31050.67520.49770.53680.32945.82063.90133.9448
Std0.01420.87180.56180.66770.14880.28680.17530.12370.08541.0730
CAO4Mean31.9730.13530.29100.51980.49760.46850.30225.94153.87043.9107
Std0.02110.63560.52500.50110.70810.35520.12990.08130.02460.9368
CAO5Mean31.9540.35390.30360.43310.48740.48800.32117.79623.78613.9607
Std0.06860.23550.55240.40980.69190.38010.18980.38820.25080.0660
CAO6Mean7.91940.82950.31340.49190.47790.35780.28717.84453.83443.8886
Std33.9450.72700.56400.53310.65020.59280.18340.08850.18830.2245
CAO7Mean31.9640.83880.26940.70110.48200.46900.40293.89974.87583.9378
Std0.18560.79190.49320.79100.69690.39920.26390.14992.17070.0595
CAO8Mean31.8851.42730.29950.67710.48300.54110.31483.92033.81313.8889
Std0.05222.18960.57180.62120.69410.27830.17010.13100.23180.1612
CAO9Mean15.9720.52390.30120.71040.47370.47950.33053.75373.88403.7793
Std22.6010.32700.54960.74080.70950.39920.16320.20410.14611.1065
CAO10Mean7.79020.19770.32220.21840.47990.50240.27533.70180.97583.7749
Std11.7300.67180.54970.27180.69830.36570.13870.27910.04531.0418
AOMean32.9981.44590.32360.80110.58680.59430.45187.98993.83595.1871
Std0.13887.13370.54390.82370.70670.36730.17220.46042.64511.8885
Table 9. Results on unimodal benchmark functions (F1 to F7) with Dim = 50.
Algorithms | Metric | F1 | F2 | F3 | F4 | F5 | F6 | F7
CAO1Mean1.4014 × 10−458.8678 × 10−571.1554 × 10−405.8719 × 10−420.00690.10930.0040
Std7.6487 × 10−452.1708 × 10−565.5669 × 10−395.3266 × 10−410.02320.16380.0291
CAO2Mean2.6731 × 10−512.4961 × 10−591.9667 × 10−437.4677 × 10−540.00210.18615.4088 × 10−04
Std2.0123 × 10−501.3672 × 10−581.0771 × 10−424.8434 × 10−530.00710.19620.0080
CAO3Mean1.8612 × 10−486.0549 × 10−503.6566 × 10−438.6094 × 10−430.00290.12530.0042
Std3.3861 × 10−464.6964 × 10−492.7153 × 10−412.2832 × 10−410.00310.17510.0226
CAO4Mean2.9703 × 10−432.7805 × 10−453.0920 × 10−385.7765 × 10−390.00540.09674.6365 × 10−04
Std1.6591 × 10−416.2944 × 10−451.6459 × 10−363.3849 × 10−380.00740.14820.0135
CAO5Mean1.4490 × 10−431.5213 × 10−451.6830 × 10−454.8799 × 10−400.00530.16530.0028
Std1.7810 × 10−423.7847 × 10−441.6462 × 10−438.1681 × 10−390.01210.16300.0239
CAO6Mean2.8062 × 10−433.2209 × 10−472.7807 × 10−418.7902 × 10−400.00500.11850.0018
Std1.1892 × 10−427.7997 × 10−461.5526 × 10−399.6235 × 10−390.01770.16970.0178
CAO7Mean1.2935 × 10−446.5273 × 10−493.7121 × 10−411.2178 × 10−390.00690.16650.0018
Std7.2688 × 10−442.4915 × 10−464.7472 × 10−398.3726 × 10−380.01670.18970.0201
CAO8Mean2.1431 × 10−503.2920 × 10−531.2780 × 10−383.6523 × 10−440.00310.11880.0014
Std8.4976 × 10−503.8756 × 10−521.4820 × 10−351.6212 × 10−430.02520.15460.0224
CAO9Mean4.8437 × 10−436.7968 × 10−461.4760 × 10−383.6295 × 10−410.00340.08418.9504 × 10−04
Std1.8334 × 10−429.7044 × 10−453.0308 × 10−366.4156 × 10−400.00400.13490.0152
CAO10Mean3.6296 × 10−653.1257 × 10−679.5705 × 10−509.8494 × 10−580.00330.07907.4727 × 10−05
Std8.2527 × 10−643.2552 × 10−664.1655 × 10−476.5074 × 10−570.01160.10220.0148
AOMean1.5736 × 10−419.6721 × 10−457.9429 × 10−351.1173 × 10−370.00800.41220.1962
Std4.8437 × 10−435.4167 × 10−444.8572 × 10−331.1009 × 10−360.00900.13550.0278
Table 10. Results on multimodal benchmark functions (F8 to F13) with Dim = 50.
Algorithms | Metric | F8 | F9 | F10 | F11 | F12 | F13
CAO1Mean50.5761808.948 × 10−107.102 × 10−118.376 × 10−100.19090.0332
Std4.529 × 10+026.276 × 10−097.895 × 10−094.557 × 10−090.32200.1311
CAO2Mean1.089 × 10+021.145 × 10−096.343 × 10−104.510 × 10−100.27460.0317
Std3.043 × 10+026.065 × 10−097.622 × 10−093.941 × 10−090.41750.0927
CAO3Mean2.454 × 10+022.308 × 10−091.864 × 10−062.348 × 10−090.18830.1515
Std8.856 × 10+021.333 × 10−085.692 × 10−061.071 × 10−080.30990.2685
CAO4Mean19.6856721.323 × 10−088.463 × 10−091.078 × 10−090.31170.0771
Std5.120 × 10+022.827 × 10−086.374 × 10−087.045 × 10−090.25850.2566
CAO5Mean9.737 × 10+033.089 × 10−081.829 × 10−092.185 × 10−090.35540.1723
Std5.333 × 10+041.630 × 10−072.002 × 10−081.132 × 10−080.38740.2590
CAO6Mean75.2241866.554 × 10−093.572 × 10−082.693 × 10−090.43540.0339
Std3.022 × 10+023.432 × 10−081.473 × 10−071.040 × 10−080.46230.1805
CAO7Mean1.046 × 10+021.120 × 10−073.017 × 10−097.572 × 10−100.15720.0428
Std3.382 × 10+022.570 × 10−062.594 × 10−083.839 × 10−090.26860.1845
CAO8Mean20.0615023.531 × 10−097.303 × 10−081.915 × 10−090.34180.1002
Std3.566 × 10+025.911 × 10−081.790 × 10−079.994 × 10−090.36680.2592
CAO9Mean24.9547096.108 × 10−063.673 × 10−098.857 × 10−100.20280.0707
Std3.791 × 10+022.172 × 10−049.125 × 10−094.306 × 10−090.27530.2037
CAO10Mean18.7340937.141 × 10−106.571 × 10−103.384 × 10−100.17500.0199
Std2.78390011.735 × 10−091.493 × 10−094.752 × 10−090.34430.1116
AOMean8.926 × 10+076.732 × 10−067.217 × 10−052.255 × 10−040.57380.2460
Std4.315 × 10+082.183 × 10−042.243 × 10−041.122 × 10−040.47490.3448
Table 11. Results on multimodal with fixed dimension benchmark functions (F14 to F23) with Dim = 50.
Algorithms | Metric | F14 | F15 | F16 | F17 | F18 | F19 | F20 | F21 | F22 | F23
CAO1Mean31.8740.93220.30550.16370.49950.48060.38143.85093.94483.9409
Std0.04650.86680.54780.04090.70690.39780.41400.17321.16021.1492
CAO2Mean31.9170.16380.30020.48120.46680.52820.45103.89103.79603.9572
Std0.06820.06460.55300.37170.69090.30890.25530.10701.31770.0726
CAO3Mean24.0430.29420.30401.05200.47910.49690.30173.91003.80415.9418
Std11.3340.17010.56580.91270.69360.37730.17100.13150.30230.0674
CAO4Mean31.9761.83980.31090.80810.45470.51240.31733.86383.88653.9464
Std0.00421.63690.56210.92900.70760.33910.15100.08870.09430.1587
CAO5Mean31.8761.32040.31470.72240.48000.50290.36873.86203.86933.8476
Std0.48201.51820.56410.60300.70080.35460.29670.17430.23810.1844
CAO6Mean31.9302.14730.31341.16400.49930.50750.33380.97623.92155.1173
Std0.14201.84190.56491.12490.70750.32620.18450.16840.09091.9043
CAO7Mean32.2870.91210.30931.34510.49350.51370.46353.83784.85094.8710
Std0.09700.80890.55601.23790.70430.34440.37370.27392.09682.3336
CAO8Mean31.8281.84640.30570.26260.49710.50740.29675.94503.89563.8634
Std0.00101.72140.54640.13010.70520.36330.16100.07080.14001.1657
CAO9Mean32.0340.16960.29980.94020.49550.49980.41303.86883.91003.9037
Std0.12500.04290.56200.96640.70460.36900.26280.12560.15630.9978
CAO10Mean16.0751.28540.20830.61320.44930.49950.28033.92910.99983.8473
Std22.6091.16610.56300.46420.67900.35340.15550.07380.06390.1782
AOMean32.9882.59930.31381.41860.49970.53260.47737.95955.84185.9869
Std0.09901.76760.53701.29840.14530.34960.24620.07070.15360.0108
Table 12. Parameter values for all algorithms.
Algorithm | Parameters
Proposed CAO | N (population size) = 30; t_max = 500; C1 (control variable 1) = 2; C2 = 6; C3 = 2; C4 = 0.5
WOA | a1 ∈ [0, 2]; a2 ∈ [−2, −1]; b = 1
GWO | a ∈ [0, 2]; r1 ∈ [0, 1]; r2 ∈ [0, 1]
MVO | existence probability ∈ [0.2, 1]; traveling distance rate ∈ [0.6, 1]
SSA | c1 ∈ [0, 1]; c2 ∈ [0, 1]
GSA | α = 20; G0 = 100
SCA | a = 2; r4 ∈ [0, 1]; r2 ∈ [0, 2]
PSO | c1 = 2; c2 = 2; v_max = 6
MFO | b = 1; t ∈ [−1, 1]; a ∈ [−2, −1]
Table 13. Comparison results on unimodal benchmark functions (F1 to F7).
Algorithms | Metric | F1 | F2 | F3 | F4 | F5 | F6 | F7
CAO10 | Mean | 0 | 0 | 0 | 0 | 0.0783 | 0.0033 | 2.563 × 10−04
CAO10 | Std | 0 | 0 | 0 | 0 | 0 | 0 | 0
WOA | Mean | 5.9699 × 10−73 | 3.2843 × 10−26 | 4.3199 × 10+04 | 48.6732 | 27.3683 | 0.2995 | 0.0010
WOA | Std | 0 | 0 | 0 | 0 | 0 | 0 | 0
GWO | Mean | 7.5948 × 10+02 | 6.0593 × 10+10 | 3.3671 × 10+03 | 4.2158 | 1.7138 × 10+06 | 5.9424 × 10+02 | 0.5919
GWO | Std | 5.7409 × 10+03 | 1.3549 × 10+12 | 1.4126 × 10+04 | 15.1590 | 1.6081 × 10+07 | 4.5826 × 10+03 | 6.4210
MVO | Mean | 0.6912 | 0.8424 | 6.1245 × 10+02 | 1.1334 | 44.0211 | 1.2652 | 0.0527
MVO | Std | 0 | 0 | 0 | 0 | 0 | 0 | 0
SSA | Mean | 9.3343 × 10−08 | 0.7420 | 1.8868 × 10+03 | 15.7646 | 1.0708 × 10+02 | 2.4597 × 10−07 | 0.1490
SSA | Std | 0 | 0 | 0 | 0 | 0 | 0 | 0
GSA | Mean | 2.3130 × 10+03 | 1.9539 × 10+04 | 5.2702 × 10+03 | 7.8291 | 7.9100 × 10+05 | 2.0677 × 10+03 | 1.1004
GSA | Std | 3.2912 × 10+03 | 3.7007 × 10+05 | 1.4597 × 10+04 | 11.2299 | 1.1827 × 10+07 | 5.7917 × 10+03 | 5.2359
SCA | Mean | 8.6473 × 10+03 | 1.7905 × 10+03 | 4.5324 × 10+04 | 71.8592 | 1.0117 × 10+08 | 1.6909 × 10+04 | 39.0650
SCA | Std | 1.8116 × 10+04 | 3.9504 × 10+04 | 4.8487 × 10+04 | 21.4367 | 1.2722 × 10+08 | 2.4662 × 10+04 | 50.2985
PSO | Mean | −3.4910 × 10−25 | 3.2843 × 10−26 | 2.3859 × 10−15 | −8.0173 × 10−20 | 0.5224 | −0.5000 | 0.0018
PSO | Std | 6.0413 × 10−24 | 2.6330 × 10−25 | 3.0418 × 10−14 | 6.4544 × 10−19 | 0.1985 | 0.0057 | 0.0294
MFO | Mean | 1.0003 × 10+04 | 40.0700 | 2.9998 × 10+04 | 69.6565 | 4.2170 × 10+02 | 3.9826 | 3.0054
MFO | Std | 0 | 0 | 0 | 0 | 0 | 0 | 0
Table 14. Comparison results on multimodal benchmark functions (F8 to F13).
Algorithms | Metric | F8 | F9 | F10 | F11 | F12 | F13
CAO10Mean7.167251608.881 × 10−1600.0258440.0017
Std3.770 × 10+0200000
WOAMean3.623 × 10+037.212 × 10−1031.43220−1.032 × 10−070.1771691.328 × 10+07
Std8.615 × 10+024.546 × 10−093.1099564.394 × 10−070.3449651.114 × 10+08
GWOMean1.254 × 10+041.475 × 10+029.7446731.777 × 10+021.173 × 10+070.1690040
Std01.014 × 10+023.2965001.165 × 10+025.037 × 10+070.0029300
MVOMean3.760 × 10+031.297 × 10+023.26564833.630711.394 × 10+074.145 × 10+07
Std1.170 × 10+0392.4001205.7646881.041 × 10+028.101 × 10+071.889 × 10+08
SSAMean7.170 × 10+032.190 × 10+0219.9039376.219964.983 × 10+075.291 × 10+07
Std000000
GSAMean7.084 × 10+031.748 × 10+023.9382901.3641977.1864293.5204640
Std000000
SCAMean2.755 × 10+031.399 × 10+0219.413032.184 × 10+023.305 × 10+085.687 × 10+08
Std3.936 × 10+0280.314652.9373722.278 × 10+022.700 × 10+084.470 × 10+08
PSOMean7.317 × 10+0387.713198.92802810.52426.6642015.272 × 10+02
Std000000
MFOMean88.4750084.342 × 10−071.174 × 10−06−8.520 × 10−050.7977390.5992800
Std3.571 × 10+021.643 × 10−053.876 × 10−050.0014450.9076480.4798860
Table 15. Comparison results on multimodal with fixed dimension benchmark functions (F14 to F23).
Algorithms | Metric | F14 | F15 | F16 | F17 | F18 | F19 | F20 | F21 | F22 | F23
CAO10Mean0.99806.237 × 10−040.310604.957 × 10−040.45830.49610.28380.92011.18330.8968
Std0000000000
WOAMean23.96870.817390.98770.86304.12133.847112.63892.63533.87733.9407
Std11.18620.836440.56340.77050.66060.38300.12990.12730.11740.0610
GWOMean4.712500.007291.03160.02323.06361−3.86273.197210.14847.92432.7740
Std27.28950.013190.15650.01376.67420.09450.26081.08192.79341.6997
MVOMean10.76400.002310.98530.02265.63693.84383.06371.787910.395210.535
Std0.002200.006490.35230.017519.17520.03400.13220.64762.84792.9281
SSAMean3.968200.001181.03160.00122.99993.86273.20315.10076.12715.5101
Std0000000000
GSAMean15.50380.001401.03160.01863.00023.86273.132610.14710.402210.536
Std0000000000
SCAMean8.419150.004200.97280.00334.26233.74412.72643.86672.751510.521
Std30.45920.008010.22430.00374.60390.54630.41350.48400.59610.1332
PSOMean4.950490.001521.03160.01923.03223.86263.111410.15341.18333.8351
Std0000000000
MFOMean31.97832.126770.31142.07950.50670.50760.34494.703154.30684.3554
Std9.963 × 10−062.059530.56742.01430.70710.37130.19005.501 × 10−056.311 × 10−046.316 × 10−04
Table 16. Comparison results for WBDP.
Algorithms | h | L | T | b | Optimal Cost
WOA | 0.203481 | 3.522134 | 9.034608 | 0.205832 | 1.728744
GWO | 0.210018 | 4.685682 | 9.612054 | 0.211448 | 2.129173
SSA | 0.216840 | 3.332410 | 8.801918 | 0.216850 | 1.764686
MVO | 0.205868 | 3.492594 | 9.020946 | 0.206450 | 1.730838
GSA | 0.203591 | 3.586508 | 9.298896 | 0.209935 | 2.445557
SCA | 0.206811 | 3.482429 | 9.734118 | 0.208347 | 1.870321
PSO | 0.207641 | 3.590249 | 9.370719 | 0.211037 | 1.812654
MFO | 0.211980 | 3.610641 | 9.512845 | 0.207155 | 2.236191
CAO10 | 0.205403 | 3.478066 | 9.036632 | 0.205732 | 1.725388
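The welded-beam costs in Table 16 can be sanity-checked against the objective commonly used in the literature (an assumption here, since the paper defers the model itself to [36]); under that formulation, the CAO10 design evaluates to approximately its tabulated cost:

```python
def wbd_cost(h, L, T, b):
    """Welded-beam design objective (assumed standard formulation:
    weld cost 1.10471*h^2*L plus bar cost 0.04811*T*b*(14 + L));
    the stress/deflection constraints are omitted in this sketch."""
    return 1.10471 * h**2 * L + 0.04811 * T * b * (14.0 + L)


# The CAO10 design from Table 16 evaluates to ~1.7254, matching its reported cost.
cost = wbd_cost(0.205403, 3.478066, 9.036632, 0.205732)
```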
Table 17. Comparison results for TCSP.
Algorithms | d | D | N | Optimal Weight
WOA | 0.054459 | 0.426290 | 8.154532 | 0.012838
GWO | 0.086479 | 1.300000 | 2.000000 | 0.030506
SSA | 0.071927 | 0.317342 | 14.05613 | 0.012738
MVO | 0.050450 | 0.327552 | 13.27398 | 0.012734
GSA | 0.053251 | 0.393056 | 9.575860 | 0.013263
SCA | 0.053562 | 0.311936 | 14.37716 | 0.014633
PSO | 0.067439 | 0.381298 | 9.674320 | 0.014429
MFO | 0.057192 | 0.418211 | 14.11433 | 0.023833
CAO10 | 0.050000 | 0.343364 | 12.11704 | 0.012671
Table 18. Comparison results for PVD.
Algorithms | Ts | Th | R | L | Optimal Cost
WOA | 0.787682 | 0.397850 | 40.79602 | 193.6284 | 5.93255 × 10+03
GWO | 1.210365 | 14.23855 | 52.61191 | 200.0000 | 8.04630 × 10+04
SSA | 1.252278 | 0.616931 | 64.64376 | 12.57938 | 7.29154 × 10+03
MVO | 0.792020 | 0.404503 | 41.00121 | 191.1295 | 5.96206 × 10+03
GSA | 0.872208 | 0.651819 | 40.57705 | 174.9799 | 7.40800 × 10+03
SCA | 1.340987 | 0.593397 | 61.53646 | 35.53712 | 8.21843 × 10+03
PSO | 0.912099 | 0.774210 | 42.83227 | 174.9999 | 6.41810 × 10+03
MFO | 1.462991 | 1.018712 | 62.88539 | 65.99552 | 8.78800 × 10+03
CAO10 | 0.781845 | 0.386473 | 40.51013 | 197.3649 | 5.89166 × 10+03
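Similarly, the pressure-vessel costs in Table 18 are consistent with the standard objective from the literature (again an assumption, as the paper cites [36] for the model rather than restating it):

```python
def pvd_cost(Ts, Th, R, L):
    """Pressure-vessel design objective (assumed standard formulation covering
    material, forming, and welding cost); design constraints are omitted."""
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)


# The CAO10 design from Table 18 evaluates to ~5891.7, matching 5.89166e+03.
cost = pvd_cost(0.781845, 0.386473, 40.51013, 197.3649)
```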
Table 19. “MIT-BIH record 111” signal reconstructed by MMs using the proposed CAO-based method, compared with other algorithms (reconstructed-signal plots omitted).
Methods | Optimal values of β and μ | MSE | PSNR
CAO10 | β = 700, μ = 0.90 | 2.341 × 10−4 | 78.978
AO | β = 55.781, μ = 0.192 | 0.1022 | 52.572
WOA | β = 270, μ = 0.230 | 0.1195 | 51.895
GWO | β = 120, μ = 0.1 | 0.0164 | 60.519
SSA | β = 11.920, μ = 0.320 | 0.0040 | 66.642
MVO | β = 650.874, μ = 0.893 | 0.00148 | 70.943
GSA | β = 20.815, μ = 0.452 | 0.00413 | 66.508
SCA | β = 461.533, μ = 0.919 | 0.00269 | 68.384
PSO | β = 661.22, μ = 0.413 | 0.0022 | 69.207
MFO | β = 600.541, μ = 0.733 | 0.00166 | 74.812
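Tables 19–21 report MSE and PSNR, which follow the usual definitions; the sketch below is a minimal version assuming the peak value is set by the signal's dynamic range (the paper does not state the peak it uses):

```python
import math


def mse(original, reconstructed):
    """Mean squared error between two equal-length signals."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)


def psnr(original, reconstructed, peak=1.0):
    """Peak signal-to-noise ratio in dB; `peak` is the assumed dynamic range."""
    e = mse(original, reconstructed)
    return float("inf") if e == 0 else 10.0 * math.log10(peak**2 / e)
```

For example, an MSE of 0.01 against a unit-peak signal corresponds to 20 dB; a lower MSE (higher PSNR) indicates a more faithful reconstruction.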
Table 20. Simulation results for medical image reconstruction of size 1024 (test-image thumbnails omitted).
Test image | CAO10 optimal values (β, μ) | Reconstruction errors
Image 1 | β = 750.980, μ = 0.198 | MSE = 4.341 × 10−5; PSNR = 84.852
Image 2 | β = 500.119, μ = 0.0154 | MSE = 3.013 × 10−5; PSNR = 85.093
Image 3 | β = 649.772, μ = 0.266 | MSE = 1.341 × 10−5; PSNR = 85.380
Image 4 | β = 420.883, μ = 0.933 | MSE = 8.026 × 10−5; PSNR = 84.177
Table 21. The comparison results of the 3D image of the “corona” with size 256 × 256 × 256.
The 3D image of the “Vertebra” with size 256 × 256 × 256 (rendering omitted).
Methods | Reconstruction errors | Optimal values of (β, μ)
CAO10 | MSE = 0.000784; PSNR = 79.1830 | β = 700, μ = 0.5
AO | MSE = 0.0902; PSNR = 55.881 | β = 55.781, μ = 0.192
WOA | MSE = 0.00155; PSNR = 70.017 | β = 530.144, μ = 0.991
GWO | MSE = 0.0103; PSNR = 61.991 | β = 120, μ = 0.1
SSA | MSE = 0.0031; PSNR = 67.822 | β = 11.920, μ = 0.320
MVO | MSE = 0.00118; PSNR = 72.733 | β = 650.874, μ = 0.893
GSA | MSE = 0.00491; PSNR = 66.440 | β = 20.815, μ = 0.452
SCA | MSE = 0.00201; PSNR = 69.118 | β = 461.533, μ = 0.919
PSO | MSE = 0.00198; PSNR = 69.355 | β = 661.22, μ = 0.413
MFO | MSE = 0.002; PSNR = 69.772 | β = 600.541, μ = 0.733
Bencherqui, A.; Tahiri, M.A.; Karmouni, H.; Alfidi, M.; El Afou, Y.; Qjidaa, H.; Sayyouri, M. Chaos-Enhanced Archimede Algorithm for Global Optimization of Real-World Engineering Problems and Signal Feature Extraction. Processes 2024, 12, 406. https://doi.org/10.3390/pr12020406