Article

Comprehensive Adaptive Enterprise Optimization Algorithm and Its Engineering Applications

1 School of Intelligent Manufacturing, Shanghai Zhongqiao Vocational and Technical University, Shanghai 201514, China
2 Engineering Technology Department, Shanghai Caoyang Vocational School, Shanghai 200065, China
3 School of Intelligent Manufacturing and Electronic Engineering, Wenzhou University of Technology, Wenzhou 325035, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(5), 302; https://doi.org/10.3390/biomimetics10050302
Submission received: 22 April 2025 / Revised: 6 May 2025 / Accepted: 7 May 2025 / Published: 9 May 2025

Abstract:
In this study, a brand-new algorithm called the Comprehensive Adaptive Enterprise Development Optimizer (CAED) is proposed to overcome the drawbacks of the Enterprise Development (ED) algorithm in complex optimization tasks. In particular, it aims to tackle the problems of slow convergence and low precision. To enhance the algorithm’s ability to break free from local optima, a lens imaging reverse learning approach is incorporated. This approach creates reverse solutions by utilizing the concepts of optical imaging. As a result, it expands the search range and boosts the probability of finding superior solutions beyond local optima. Moreover, an environmental sensitivity-driven adaptive inertial weight approach is developed. This approach dynamically modifies the equilibrium between global exploration, which enables the algorithm to search for new promising areas in the solution space, and local development, which is centered on refining the solutions close to the currently best-found areas. To evaluate the efficacy of the CAED, 23 benchmark functions from CEC2005 are chosen for testing. The performance of the CAED is contrasted with that of nine other algorithms, such as Particle Swarm Optimization (PSO), Gray Wolf Optimization (GWO), and the Ant Lion Optimizer (ALO). Experimental findings show that for unimodal functions, the standard deviation of the CAED is almost 0, which reflects its high accuracy and stability. In the case of multimodal functions, the optimal value obtained by the CAED is notably better than those of other algorithms, further emphasizing its outstanding performance. The CAED algorithm is also applied to engineering optimization challenges, like the design of cantilever beams and three-bar trusses. For the cantilever beam problem, the optimal solution achieved by the CAED is 13.3925, with a standard deviation of merely 0.0098.
For the three-bar truss problem, the optimal solution is 259.805047, and the standard deviation is an extremely small 1.11 × 10−7. These results are much better than those achieved by the traditional ED algorithm and the other comparative algorithms. Overall, through the coordinated implementation of multiple optimization strategies, the CAED algorithm exhibits high precision, strong robustness, and rapid convergence when searching in complex solution spaces. As such, it offers an efficient approach for solving various engineering optimization problems.

1. Introduction

In today’s modern world, optimization problems are ubiquitous across a diverse range of real-life systems. They are of vital importance in guaranteeing the efficient functioning and performance of these systems. For example, in the field of fault diagnosis, precisely determining the underlying causes of faults and optimizing the diagnostic procedure can result in prompt repairs and a reduction in downtime, thereby saving substantial costs for various industries. In energy management, optimizing the distribution and utilization of energy sources is fundamental for attaining sustainable development, minimizing environmental impacts, and increasing energy efficiency. Prediction, whether it involves forecasting market trends, natural disasters, or equipment malfunctions, necessitates advanced optimization methods to enhance the accuracy of predictions and enable proactive decision-making [1,2]. These real-world systems frequently pose complex optimization hurdles. They usually entail large-scale problems, where the solution space is extensive and difficult to thoroughly explore. Moreover, the existence of multiple extreme values makes it arduous to differentiate between local optima and the global optimum. Traditional mathematical programming techniques, like the conjugate gradient method and the quasi-Newton method, have been employed for a long time to solve optimization problems. Nevertheless, these methods depend on the gradient information of the objective function and make assumptions regarding the smoothness and convexity of the function. When confronted with complex real-world problems that have non-convex, discontinuous, or high-dimensional solution spaces, their performance is severely restricted. They often become stuck in local optima, are unable to converge to the global optimum, or demand excessive computational resources, rendering them ineffective in dealing with these challenging optimization tasks [3].
On the other hand, swarm intelligence optimization algorithms have emerged as a potent alternative. As a population-oriented evolutionary computing technology, they take inspiration from the collective behaviors of biological groups in the natural world, such as ants, birds, fish, and bees. These algorithms simulate the social interactions, communication, and cooperation among individuals within these groups to search for optimal solutions [4]. By maintaining a population of candidate solutions and evolving them over multiple generations through operations such as selection, crossover, and mutation, swarm intelligence algorithms can explore the solution space more comprehensively and avoid being trapped in local optima. They demonstrate greater adaptability to different problem characteristics and are capable of handling complex, nonlinear, and multimodal optimization problems [5].
There are several notable representatives of swarm intelligence optimization algorithms. The Particle Swarm Optimization algorithm (PSO) is inspired by the swarming behavior of birds or the schooling behavior of fish. In PSO, each particle represents a potential solution in the search space, and particles update their positions based on their own best-known position and the best-known position within the entire swarm. This straightforward yet effective mechanism enables PSO to rapidly converge to satisfactory solutions in numerous optimization problems [6]. The chicken swarm optimization algorithm (CSO) models the hierarchical social structure and foraging behavior of chickens. It divides the population into different subgroups, such as roosters, hens, and chicks, with each subgroup having its own behavioral rules. This hierarchical structure assists CSO in effectively balancing exploration and exploitation and enhancing the optimization performance [7]. The Gray Wolf Optimization algorithm (GWO) is inspired by the leadership hierarchy and hunting behavior of gray wolves in nature. It utilizes the concepts of alpha, beta, delta, and omega wolves to guide the search process and gradually approach the optimal solution [8]. The Harris Hawks Optimization algorithm (HHO) imitates the cooperative hunting behavior of Harris hawks. It combines various hunting strategies, such as surprise attacks, encirclement, and weakening the prey, to search for the global optimum in the solution space [9].
Currently, research in the field of swarm intelligence optimization algorithms mainly focuses on two directions. The first direction is the development of new algorithms with better performance. Researchers are constantly exploring new inspiration from nature and developing innovative algorithms that can handle complex optimization problems more effectively. These new algorithms often incorporate novel mechanisms and strategies to improve the search efficiency, convergence speed, and solution accuracy. The second direction is the improvement in the optimization efficiency of existing algorithms. This involves enhancing the update mechanisms of these algorithms, such as modifying the selection, crossover, and mutation operators, or introducing new strategies to balance exploration and exploitation. By improving the existing algorithms, researchers aim to make them more robust, efficient, and adaptable to different problem scenarios. These advancements in swarm intelligence optimization algorithms offer new ideas and tools for solving complex optimization problems in various fields and have the potential to revolutionize the way we approach optimization tasks [10,11,12].
The Enterprise Development Optimization (ED) algorithm is a unique intelligent optimization method inspired by the evolution mechanism of modern enterprises. In a dynamic market environment, enterprises strive for sustainable development through strategic adjustments, structural optimizations, technological upgrades, and personnel collaborations. The ED algorithm abstracts these real-world enterprise behaviors and decision-making processes into an optimization framework, aiming to find the best development paths for enterprises [13,14]. At the core of the ED algorithm is a dynamic equilibrium-based optimization framework. It simulates how enterprises balance market exploration (global search) and resource integration (local exploitation) under resource constraints. Market exploration helps enterprises identify new opportunities and expand their reach, while resource integration optimizes the allocation of internal resources to enhance operational efficiency. Additionally, the algorithm emphasizes maintaining organizational diversity through cross-departmental collaborations (population interactions) and strategic iterations (elite retention), mirroring the collaborative and adaptive nature of real-world enterprises [15,16]. However, like many other optimization algorithms, the ED algorithm has its limitations. Its solution accuracy can be relatively low when dealing with complex problems, often resulting in suboptimal solutions. Moreover, its convergence speed is often insufficient, which may hinder its practical application in time-sensitive scenarios. These drawbacks highlight the need for further research and improvement to enhance the algorithm’s performance and make it more applicable to real-world enterprise optimization tasks.
To solve the problems of low solution accuracy and slow convergence speed in the ED algorithm, this paper transforms the core behaviors of enterprises in response to the competitive environment into a mathematical model and proposes a Comprehensive Adaptive Enterprise Development Optimizer (CAED) that integrates an adaptive inertia weight strategy based on environmental sensitivity, a lens imaging reverse learning co-evolution strategy, and a chaos-perturbation-driven approach. By simulating the entire process of enterprises, dynamically adjusting their business structures, optimizing resource allocations, and responding to market fluctuations, the CAED enables efficient search in complex solution spaces and stable approximation to global optimal solutions. The algorithm shows significant advantages in decision-making problems with multi-dimensional constraints, such as organizational structure restructuring and supply chain optimization. The effectiveness and robustness of the CAED are verified by solving the optimal solutions of 23 typical test functions and the practical engineering cases of cantilever beam and three-bar truss designs.
This paper is organized as follows: Section 1 is the introduction, Section 2 is the research status of optimization algorithms, Section 3 introduces the enterprise optimization algorithm, Section 4 describes the improvement of the enterprise optimization algorithm, Section 5 presents the function performance test experiment and algorithm comparison, and Section 6 is the conclusion.

2. Research Status of Optimization Algorithms

Optimization algorithms tackle intricate engineering optimization problems by emulating the mechanisms present in natural ecosystems [17]. The relevant research can be broadly classified into three main categories. Firstly, there are general optimization algorithms. For example, the hill-climbing algorithm, which is grounded in the greedy strategy, steadily approaches the global optimum by way of local optimal solutions. Secondly, there are evolutionary algorithms, which mimic the mechanisms of biological evolution. Thirdly, swarm intelligence algorithms draw inspiration from the collaborative behaviors exhibited by group organisms [18,19]. The core concepts of these algorithms are essentially derived from the mathematical abstraction of natural laws, the process of biological evolution, and the concept of swarm intelligence [20]. Nevertheless, the first two types of algorithms tend to become stuck in local optimal solutions during the process of seeking solutions, and as a result, they are unable to attain the global optimum [21].
Swarm intelligence optimization algorithms address complex optimization problems by imitating the intelligent behaviors of biological groups. Their fundamental framework involves mapping optimization parameters to individual members of the group. Then, based on the collaborative rules of animals (such as foraging and migration behaviors), the states of these individuals are updated iteratively. The quality of the solutions is quantitatively evaluated through fitness functions. Ultimately, the global optimal solution is selected from the iterative process [22]. A distinctive characteristic of these algorithms is their utilization of group collaborative search strategies. This approach enables them to overcome the limitations of being trapped in local optima, making them particularly well suited for high-dimensional nonlinear optimization scenarios [23].
The ant colony optimization algorithm was initially introduced by the Italian scholar Marco Dorigo in his doctoral thesis in 1992. This algorithm is inspired by the way real ant colonies cooperate to find the shortest path by means of pheromones, and it serves as a typical example of swarm intelligence optimization algorithms [24]. In 1995, the American scholars James Kennedy and Russell Eberhart proposed the Particle Swarm Optimization algorithm. This algorithm models the collaborative behavior of bird flocks or fish schools when they are searching for food [25]. It considers the potential solutions of the optimization problem as “particles”, and these particles dynamically adjust their search directions by tracking the individual historical best solution (pbest) and the group historical best solution (gbest) in order to obtain the optimal solution. In 2003, the American scholar M. Eusuff put forward the shuffled frog-leaping algorithm (SFLA). Inspired by the collaborative foraging behavior of frog groups in wetlands [26], the SFLA combines the concepts of memetic algorithms and Particle Swarm Optimization. It divides the population into multiple “sub-groups” (Memeplex), where each subgroup conducts a local search independently, and then global information exchange is achieved through population shuffling to attain the optimization effect [27]. In 2014, Chinese scholars Meng et al. proposed the chicken swarm optimization algorithm. Inspired by the hierarchical structure and foraging behavior within chicken flocks [28], the CSO regards the solutions of the optimization problem as “chicken flock individuals” and devises interaction rules based on the social hierarchy of chicken flocks (roosters, hens, chicks) to achieve the optimization goal [29].
In 2024, the enterprise optimization algorithm was proposed. It is an intelligent optimization algorithm that takes inspiration from the development process of modern enterprises. Its core inspiration stems from the strategies that enterprises adopt to achieve sustainable development. These strategies include task planning, structural adjustment, technological innovation, and human resource collaboration in a dynamic market environment [30,31]. The algorithm abstracts four crucial aspects of Enterprise Development, namely task, structure, technology, and people, into a mathematical model. It views the organizational structure as a workflow. The new organizational structure is influenced by the structures of other workflows within the organization as well as the current optimal workflow. This algorithm has been compared with several of the latest and renowned algorithms, and experimental results demonstrate that it can effectively solve complex multi-dimensional optimization problems.

3. Enterprise Optimization Algorithm

3.1. Personnel Initialization

Similarly to other common intelligent optimization algorithms, the enterprise optimization algorithm also starts with initialization. In the algorithm framework, each enterprise individual is regarded as a potential solution in the search space, and the entire enterprise E can be expressed by Equation (1) [32]:
$E = \begin{bmatrix} x_1^1 & \cdots & x_1^{Dim} \\ \vdots & \ddots & \vdots \\ x_{pop}^1 & \cdots & x_{pop}^{Dim} \end{bmatrix}$  (1)
In Equation (1), pop represents the total number of solutions, and each set of solutions consists of a Dim-dimensional vector group. The vector group and each personnel individual are defined as shown in Equation (2):
$X_i = \left( x_i^1, \ldots, x_i^{Dim} \right), \qquad x_i^j = rand \times (UB_j - LB_j) + LB_j$  (2)
In Equation (2), $X_i$ is the i-th set of vector solutions, where i is in the range [1, pop]; $x_i^j$ is the j-th personnel individual in the i-th set of solutions, and j is in the range [1, Dim]; $UB_j$ and $LB_j$ are the upper and lower limits of the j-th personnel individual, respectively; rand is a random number in [0,1].
The solution effect of intelligent optimization algorithms is usually measured by the fitness function F. Taking the problem of minimizing the fitness function as an example, the goal of this problem is to solve a set of Xi such that the fitness function F reaches the minimum value.
At this time, Xi is also defined as the optimal variable Xbest, as shown in Equation (3):
$X_{best} = \arg\min_i F(X_i)$  (3)
In Equation (3), F(·) represents the fitness value of the problem to be solved.
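The initialization and best-solution selection of Equations (1)–(3) can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions (scalar bounds shared by all dimensions; the function names are ours, not the paper's):

```python
import numpy as np

def initialize_enterprise(pop, dim, lb, ub, rng=None):
    """Equation (2): each personnel individual x_i^j = rand * (UB_j - LB_j) + LB_j."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.random((pop, dim)) * (ub - lb) + lb

def best_solution(E, fitness):
    """Equation (3): X_best = argmin_i F(X_i)."""
    values = np.array([fitness(x) for x in E])
    k = int(np.argmin(values))
    return E[k], values[k]

# Usage: minimize the sphere function on [-5, 5]^4
E = initialize_enterprise(pop=20, dim=4, lb=-5.0, ub=5.0)
x_best, f_best = best_solution(E, lambda x: float(np.sum(x**2)))
```

Each row of E is one candidate solution $X_i$; the fitness function F is supplied by the problem to be solved.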

3.2. Enterprise Business

3.2.1. Optimal Rules for Enterprise Business Development

The Enterprise Development Optimization simulated by the algorithm needs to establish the following three core rules [33]: (1) A four-dimensional process dynamic switching mechanism: the ED process must include four core activities (task, structure, technology, and people) and establish dynamic switching logic among them. (2) A performance-oriented strong-correlation design: all activities must be directly linked to organizational performance, and high-performance organizations are preferentially strengthened in terms of plan adaptability. (3) A quantitative evaluation system: solutions must be systematically evaluated through a preset objective function to achieve an accurate measurement of organizational performance.

3.2.2. Enterprise Business (Task)

Enterprise businesses come in different forms, such as daily affairs and external affairs. To simulate the task activities of an enterprise, the worst-case business is defined as shown in Equation (4):
$x_{worst}(t) = rand \times (UB - LB) + LB$  (4)
In Equation (4), $x_{worst}$ represents the worst-case individual value in the solution space; UB and LB are the upper and lower limits of the solution space.

3.2.3. Enterprise Structure (Structure)

In the enterprise optimization algorithm, the organizational structure of the enterprise is regarded as a workflow. The new organizational structure of the enterprise is affected by the structures of other workflows in the organization and the current optimal workflow. This process is updated by Equations (5) and (6):
$x_i^s(t) = x_i^s(t-1) + rand(-1,1) \times \left( x_{best}(t-1) - x_c^s(t-1) \right)$  (5)
$x_c^s(t-1) = \dfrac{x_{rand1}^s(t-1) + x_{rand2}^s(t-1) + \cdots + x_{randm}^s(t-1)}{m}$  (6)
In Equations (5) and (6), $x_i^s(t)$ represents the new organizational structure of the enterprise; $x_{best}(t-1)$ is the current optimal solution; $x_c^s(t-1)$ is the central organizational structure, which usually affects other new structures; $x_{randm}^s(t-1)$ is a randomly selected structure in the enterprise. Experimental verification shows that setting m to 3 usually yields the optimal result in a relatively short computation time.
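The structure update of Equations (5) and (6) can be sketched as follows, assuming the population is stored as a NumPy matrix with one workflow per row (the function name and signature are ours, for illustration only):

```python
import numpy as np

def structure_update(E, i, x_best, m=3, rng=None):
    """Equations (5)-(6): move workflow i toward the current best,
    offset by the centre of m randomly chosen workflows."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(len(E), size=m, replace=False)
    x_c = E[idx].mean(axis=0)               # Equation (6): central structure
    r = rng.uniform(-1.0, 1.0)              # rand(-1, 1)
    return E[i] + r * (x_best - x_c)        # Equation (5)
```

With m = 3, the value the paper recommends, the central structure $x_c^s$ averages three randomly selected structures.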

3.2.4. Enterprise Technology (Technology)

The technological level of an enterprise is usually considered the core factor affecting enterprise optimization. In most cases, a change in the enterprise’s organizational structure does not directly affect the enterprise’s optimization and development but rather prompts the enterprise’s technology to make breakthroughs within the new organizational structure. From the perspective of actual enterprise operations, the organizational structure of an enterprise balances exploration and development efforts so as to obtain the necessary innovative technologies in fierce market competition in the fastest and most effective way. Equation (7) simulates this process of balancing exploration and development:
$x_i^\tau(t) = x_i^\tau(t-1) + rand_\alpha(0,1) \times \left( x_{best}(t-1) - x_i^\tau(t-1) \right) + rand_\beta(0,1) \times \left( x_{best}(t-1) - x_{rand1}^\tau(t-1) \right)$  (7)
In Equation (7), $x_i^\tau(t)$ is the individual after a technology update; $x_{best}(t-1) - x_i^\tau(t-1)$ represents the adjustment of the algorithm’s exploration ability; $x_{best}(t-1) - x_{rand1}^\tau(t-1)$ represents the adjustment of the algorithm’s development ability.
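Equation (7) can be sketched as a small NumPy helper (a minimal illustration with our own function name; the two random coefficients correspond to $rand_\alpha$ and $rand_\beta$):

```python
import numpy as np

def technology_update(E, i, x_best, rng=None):
    """Equation (7): an exploration pull measured from the individual's own
    position plus a development pull measured from a random peer."""
    rng = np.random.default_rng() if rng is None else rng
    x_rand = E[rng.integers(len(E))]
    return (E[i]
            + rng.random() * (x_best - E[i])      # exploration term
            + rng.random() * (x_best - x_rand))   # development term
```

Both terms pull toward the current best, but the second term's reference point (a random peer) keeps the step sizes diverse across the population.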

3.2.5. Enterprise People (People)

Enterprise personnel usually need to be trained before taking up their posts. Excellent enterprises can directly influence the value judgment and creative abilities of enterprise personnel through corporate culture. The corporate work culture also affects the work choices of enterprise personnel and determines the overall size of the enterprise. Assuming that the personnel characteristics are in one dimension, Equations (8) and (9) describe how to update the data characteristics of enterprise personnel by randomly selecting numbers to simulate the behavior of enterprise personnel:
$x_{i,d}^p(t) = x_{i,d}^p(t-1) + rand(-1,1) \times \left( x_{best,d}(t-1) - x_{c,d}^p(t-1) \right)$  (8)
$x_{c,d}^p(t-1) = \dfrac{x_{rand1,d}^p(t-1) + x_{rand2,d}^p(t-1) + \cdots + x_{randm,d}^p(t-1)}{m}$  (9)
In Equations (8) and (9), $x_{i,d}^p(t)$ is the advanced individual after personnel culture optimization; d is the randomly selected personnel characteristic (dimension), defined by Equation (10); m is the number of people affecting the personnel, and when m is set to 3, the optimization can be computed in a relatively short amount of time.
$d = rand(0,1) \times n_d$  (10)
In Equation (10), $n_d$ represents the dimension of the problem to be solved.
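Equations (8)–(10) can be sketched as below. Note that truncating d to an integer index is our reading of Equation (10), and the function name is ours:

```python
import numpy as np

def people_update(E, i, x_best, m=3, rng=None):
    """Equations (8)-(10): perturb a single randomly chosen dimension d of
    individual i toward the best, relative to the centre of m random peers."""
    rng = np.random.default_rng() if rng is None else rng
    n_d = E.shape[1]
    d = int(rng.random() * n_d)              # Equation (10), truncated to an index
    idx = rng.choice(len(E), size=m, replace=False)
    x_c_d = E[idx, d].mean()                 # Equation (9), dimension d only
    x_new = E[i].copy()
    x_new[d] += rng.uniform(-1.0, 1.0) * (x_best[d] - x_c_d)   # Equation (8)
    return x_new
```

Unlike the structure update, only one coordinate changes per call, modeling the single personnel characteristic assumed in the text.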

3.2.6. Conversion Mechanism

Enterprise optimization usually does not carry out the four steps simultaneously. Suppose an enterprise only focuses on one optimization step at a time; that is, only one of task, structure, technology, or people changes in the current iteration, and the switch is controlled by the conversion mechanism. The conversion mechanism c(t) is defined as shown in Equation (11):
$c(t) = 3 \times \left( 1 - rand(0,1) \times \dfrac{t}{Maxiter} \right)$  (11)
In Equation (11), t represents the t-th task iteration; Maxiter is the maximum number of iterations.
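A sketch of the conversion mechanism, reading Equation (11) as $c(t) = 3 \times (1 - rand(0,1) \times t / Maxiter)$ (our reconstruction of the garbled formula); how c(t) maps to the four activities is not fully specified in the text, so the integer-part interpretation in the comment is an assumption:

```python
import numpy as np

def conversion(t, max_iter, rng=None):
    """Equation (11): c(t) lies in (0, 3]; its integer part (0-3) could be
    used to select which of the four activities runs this iteration."""
    rng = np.random.default_rng() if rng is None else rng
    return 3.0 * (1.0 - rng.random() * t / max_iter)
```

Because t / Maxiter grows over the run, c(t) drifts downward on average, biasing the activity selection as the search matures.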

3.2.7. Optimization Algorithm Process

Combining data initialization and the four steps of enterprise optimization, the flow chart of the enterprise optimization algorithm is shown in Figure 1.

4. Comprehensive Adaptive Enterprise Optimization Algorithm

The enterprise optimization algorithm, a novel intelligent optimization approach, creates a distinctive population update mechanism by mimicking the adjustments in an enterprise’s development across business operations, organizational structure, technological aspects, and human resources. Nevertheless, this algorithm suffers from several issues, including the drawbacks of static step-size adjustment, the limitations of its one-way evolution mode, and inadequate responses to environmental changes. To address these problems and enhance its performance, this paper presents the Comprehensive Adaptive Enterprise Development Optimizer (CAED), which integrates multiple strategies. Firstly, the CAED utilizes the Tent chaotic map, which is based on random variables, for population initialization. By leveraging the mapping’s perturbation effect, it overcomes the periodic flaws of traditional sequences. This process generates uniformly distributed enterprise strategy solutions, effectively simulating the nonlinear nature of the market environment.
Secondly, the algorithm incorporates the lens imaging reverse learning strategy. Through performing interference jumps at the current implementation position, this strategy breaks free from the local-optimum constraints that typically limit traditional one-way adjustment methods. This enables the algorithm to explore a wider solution space and avoid becoming trapped in suboptimal solutions.
Moreover, the CAED constructs a dynamic search framework driven by an environmental sensitivity factor. This framework is integrated with an adaptive inertia weight strategy, allowing for the autonomous adjustment of the intensity between exploration and development. When the diversity of the population diminishes, the algorithm automatically boosts its global search capabilities, ensuring a more comprehensive exploration of the solution space and improving the overall optimization efficiency.

4.1. Tent Chaotic Map Based on Random Variables

The ED algorithm’s random assignment of initial population positions inevitably produces an uneven population distribution. Since a more uniform distribution of population individuals significantly increases the likelihood of discovering the optimal solution, the initial configuration of the population is a pivotal factor in the algorithm’s effectiveness. Chaotic maps have gained substantial traction and widespread application in population initialization due to their unique properties [34,35]. Among them, the Tent chaotic map stands out for its remarkably straightforward mathematical representation and robust ergodicity. Its formula is presented in Equation (12) [36].
$Y_{i+1} = \begin{cases} \dfrac{Y_i}{a}, & 0 \le Y_i < a \\[4pt] \dfrac{1 - Y_i}{1 - a}, & a \le Y_i \le 1 \end{cases}$  (12)
In Equation (12), $Y_i$ and $Y_{i+1}$ are the chaotic values at the i-th and (i + 1)-th iterations, respectively. When a ∈ [0,1] and $Y_i$ ∈ [0,1], the system is in a chaotic state. When a is set to 0.3, the initial value Y is set to 0.32, and the number of iterations is set to 500, the Tent map is distributed relatively evenly over the [0,1] interval. The Tent chaotic map sequence in the [0,1] interval is shown in Figure 2.
The chaotic sequence generated by the Tent map has good distribution and randomness. By selecting multiple different initial values, a chaotic sequence of [0,1] is obtained to replace the random quantity in the population initialization of the ED algorithm, as shown in Equation (13), to complete the population initialization.
$X_i^{new} = (ub_i - lb_i) \times Y_{i+1} + lb_i$  (13)
In Equation (13), $ub_i$ and $lb_i$ are the upper and lower bounds of the independent variables of the objective function; $X_i^{new}$ is the position of enterprise personnel updated by the Tent chaotic map.
The pseudo-randomness and deterministic ergodicity (Lyapunov exponent > 0) of the Tent chaotic map enable the initial solutions to be distributed with low differences in the solution space. Compared with random initialization, the uniformity of the point set has been significantly improved, laying a high-quality solution-space foundation for the subsequent search stage of the algorithm.
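The Tent-map initialization of Equations (12) and (13) can be sketched as follows, using the parameter values quoted in the text (a = 0.3, $Y_0$ = 0.32); the function names and the row-major filling order are our own choices:

```python
import numpy as np

def tent_sequence(n, a=0.3, y0=0.32):
    """Equation (12): Y_{i+1} = Y_i / a if Y_i < a, else (1 - Y_i) / (1 - a)."""
    ys = np.empty(n)
    y = y0
    for k in range(n):
        y = y / a if y < a else (1.0 - y) / (1.0 - a)
        ys[k] = y
    return ys

def tent_initialize(pop, dim, lb, ub, a=0.3, y0=0.32):
    """Equation (13): map the chaotic sequence from [0, 1] into [lb, ub]."""
    ys = tent_sequence(pop * dim, a, y0).reshape(pop, dim)
    return (ub - lb) * ys + lb
```

Because the Tent iterates never leave [0, 1], every generated individual lies inside the bounds by construction, with no clipping step needed.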

4.2. Lens Imaging Reverse Learning Strategy

In the later stage of the ED algorithm operation, the species diversity gradually decreases, and individuals tend to gather around the optimal individual. If the optimal individual falls into a local optimal solution, it will be difficult for the population individuals to jump out of the local optimum, thereby reducing the optimization accuracy of the algorithm [37]. To address this challenge, this paper introduces a reverse learning strategy based on the lens imaging principle to perform interference operations on individuals, aiming to enhance the population diversity. This method aims to improve the algorithm’s ability to break free from the shackles of local optimal solutions and search for the global optimal solution more accurately [38,39].
This paper combines the converging lens imaging principle from optics with the enterprise operation model. Suppose there is an individual P in the interval [lb, ub] with height h, whose projection on the X-axis is X (this X is equivalent to the global optimal solution). The midpoint of [lb, ub] is marked as O, where a convex lens with focal length F is placed. When the individual P is imaged through the convex lens, an inverted image P* of size h* is obtained. According to the convex lens imaging principle, the following conclusion can be drawn [40]:
$\dfrac{(ub + lb)/2 - X}{X^* - (ub + lb)/2} = \eta$  (14)
In Formula (14), $\eta = h / h^*$ is called the scaling factor. The reverse point can be obtained as follows:
$X^* = \dfrac{ub + lb}{2} + \dfrac{ub + lb}{2\eta} - \dfrac{X}{\eta}$  (15)
After extending to the D-dimensional search space and taking the scaling factor η = 1, the corresponding result can be obtained:
$X_j^* = ub_j + lb_j - X_j$  (16)
In Formula (16), $X_j$ and $X_j^*$ are the j-th dimensional components of X and X*, respectively, and $ub_j$ and $lb_j$ are the j-th dimensional bounds of the decision variables. At the same time, this paper introduces a greedy mechanism to select between an individual and its reverse-learned counterpart so as to retain the better one. The mathematical model of the greedy mechanism is shown in Formula (17):
$X_{new}(t) = \begin{cases} X^*, & f(X) \ge f(X^*) \\ X, & f(X) < f(X^*) \end{cases}$  (17)
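The reverse point of Equation (15) and the greedy selection of Equation (17) can be sketched as follows (for a minimization problem; function names are ours, and with η = 1 the reverse point reduces to Equation (16)):

```python
import numpy as np

def lens_reverse(X, lb, ub, eta=1.0):
    """Equation (15); with eta = 1 this reduces to Equation (16):
    X* = ub + lb - X."""
    return (ub + lb) / 2.0 + (ub + lb) / (2.0 * eta) - X / eta

def greedy_select(X, X_star, f):
    """Equation (17): keep the reverse point only when it is no worse."""
    return X_star if f(X) >= f(X_star) else X
```

Larger values of η shrink the reverse point toward the midpoint of the bounds, so the jump size can be tuned while the greedy step guarantees the population never degrades.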

4.3. Adaptive Inertia Weight Strategy

The inertia weight ω reflects the ability of an individual to inherit the position of the previous individual. A large inertia weight helps enhance the exploration ability [41], while a small inertia weight favors local exploitation. In the original update rule, the position of the i-th individual is updated based on its own position and that of a random individual, which creates a strong dependence on other individuals and makes the search prone to becoming trapped in local optima and stagnating. In order to better balance the exploration and exploitation capabilities of the enterprise optimization algorithm, a linearly decreasing inertia weight is introduced, which determines the influence of previous individuals on the current individual. The new position update formula for the technology stage is as follows [42]:
$x_i^\tau(t) = x_i^\tau(t-1) + rand_\alpha(0,1) \times \left( x_{best}(t-1) - x_i^\tau(t-1) \right) + \omega \times rand_\beta(0,1) \times \left( x_{best}(t-1) - x_{rand1}^\tau(t-1) \right)$  (18)
$\omega = (\omega_s - \omega_e) \times \dfrac{T - t}{T} + \omega_e$  (19)
In Formulas (18) and (19), $\omega_s$ is the initial inertia weight, $\omega_e$ is the inertia weight when iterating to the maximum number of iterations, t is the current iteration number, and T is the maximum number of iterations.
Through experimental verification, it was determined that the algorithm has the best performance when ω s = 1 and ω e = 0.4. As the iteration progresses, the inertia weight linearly decreases from 1 to 0.4. A larger inertia weight at the beginning of the iteration enables the algorithm to maintain a good exploration ability, while a smaller inertia weight in the later stage of the iteration helps the algorithm to have a better exploitation ability.
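The linear schedule of Formula (19), with the reported settings $\omega_s = 1$ and $\omega_e = 0.4$, amounts to a one-line function. The sketch below only illustrates the schedule; the function name is chosen here for exposition:

```python
def inertia_weight(t, T, w_s=1.0, w_e=0.4):
    """Linearly decreasing inertia weight (Formula (19)):
    omega = (w_s - w_e) * (T - t) / T + w_e.
    Decays from w_s at t = 0 to w_e at t = T."""
    return (w_s - w_e) * (T - t) / T + w_e
```

For example, with `T = 100` the weight is 1.0 at the first iteration, 0.7 at the midpoint, and 0.4 at the final iteration, matching the exploration-to-exploitation transition described above.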

4.4. Implementation Steps of the CAED

The Comprehensive Adaptive Enterprise Development Optimizer (CAED) introduces the Tent chaotic map to improve the distribution position of initialized individuals, the adaptive inertia weight strategy to optimize the individual update process, and the lens imaging inverse learning strategy to escape local optimal solutions in the later stage of the algorithm so as to find the global optimal solution more accurately. The implementation process and pseudo-code framework of the improved algorithm are as follows.
Step 1: Initialize relevant optimization parameters, set the dimension D of enterprise decision variables, the population size N, and the maximum number of iterations Maxiter. Generate a Tent chaotic sequence and map the chaotic sequence to the solution space.
Step 2: Perform Tent chaotic map population initialization, calculate the inertia weight and transformation mechanism c(t) of the current iteration, and integrate the weight coefficient during the iteration strategy update.
Step 3: Refer to the content in Section 2 to execute the enterprise optimization algorithm process, update and recombine strategies, simulate the business restructuring of enterprise departments, and integrate resources according to conditions. Trigger the adjustment of structure, technology, and personnel under corresponding conditions, and retain the top 10% elite strategies for the next iteration.
Step 4: Lens imaging inverse learning (triggered in the later stage of iteration): When t exceeds 0.7 Maxiter, perform inverse learning on the elite strategies, and select among the individuals after inverse learning through the greedy mechanism to obtain the optimal individual.
Step 5: Output the optimal result when Maxiter is reached.
The flow chart of the CAED algorithm is shown in Figure 3, and the pseudocode of the Comprehensive Adaptive Enterprise Development Optimizer is shown in Algorithm 1.
Algorithm 1: Comprehensive Adaptive Enterprise Development Optimizer
Step 1: Initialization
Objective function f(x), x = (x1, x2, …, xd)^T, population size (npop), and maximum iteration (Maxiter)
Search space, with ub and lb limits for initialization
Initialize time: t = 1
Initialize population xi (i = 1, 2, …, npop) by using Equation (13) *
Evaluate the fitness value based on the objective function (organization’s performance)
Find the organization currently with the best fitness value (xbest)
Repeat
Step 2: Calculate c(t) and ω according to Equations (11) and (19), go to Step 3
Step 3: If rand < 0.1
 New task is defined by Equation (4)
 Apply adaptive weighting according to Equation (18) and update individual positions *
Else
 Switch c(t)
    Case c(t) = 1
     New structure is defined by Equation (5)
     Apply adaptive weighting according to Equation (18) and update individual positions *
    Case c(t) = 2
     New technology is defined by Equation (7)
     Apply adaptive weighting according to Equation (18) and update individual positions *
    Case c(t) = 3
     New people are defined by Equation (8)
     Apply adaptive weighting according to Equation (18) and update individual positions *
 End of switch
End of if
Update the organization currently with the best fitness value (xbest)
Update the time: t = t + 1
If t ≤ 0.7 Maxiter, go to Step 3
Else if 0.7 Maxiter < t < Maxiter, go to Step 4
Else, go to Step 5
Step 4: Update the individual positions according to the lens imaging reverse learning strategy, Equations (15)–(17) *
Step 5: Output the optimal solution
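The pseudocode in Algorithm 1 can be rendered as a compact Python skeleton. This is an illustrative sketch only: the particular tent-map form, the generic best-guided move standing in for the department-specific updates of Equations (4)–(8), and all parameter values are assumptions, not the paper's exact operators:

```python
import numpy as np

def tent_map_init(n_pop, dim, lb, ub, a=0.7, x0=0.3):
    """Tent chaotic map initialization (Step 1). One common tent-map form
    is used here; the paper's exact Formula (13) may differ in parameters."""
    seq = np.empty((n_pop, dim))
    x = x0
    for i in range(n_pop):
        for j in range(dim):
            x = x / a if x < a else (1.0 - x) / (1.0 - a)
            seq[i, j] = x
    return lb + seq * (ub - lb)  # map chaotic values into [lb, ub]

def caed(f, dim, lb, ub, n_pop=30, max_iter=200, w_s=1.0, w_e=0.4, eta=1.5):
    """Skeleton of the CAED main loop (Steps 1-5), minimizing f."""
    lb = np.full(dim, float(lb))
    ub = np.full(dim, float(ub))
    X = tent_map_init(n_pop, dim, lb, ub)
    fit = np.array([f(x) for x in X])
    i_best = int(fit.argmin())
    best, best_fit = X[i_best].copy(), fit[i_best]
    for t in range(1, max_iter + 1):
        w = (w_s - w_e) * (max_iter - t) / max_iter + w_e   # Formula (19)
        for i in range(n_pop):
            r = np.random.randint(n_pop)
            # Generic best-guided move standing in for the task/structure/
            # technology/people updates, weighted per Formula (18).
            X[i] = np.clip(X[i]
                           + np.random.rand(dim) * (best - X[i])
                           + w * np.random.rand(dim) * (best - X[r]),
                           lb, ub)
            fit[i] = f(X[i])
        if t > 0.7 * max_iter:  # Step 4: lens imaging reverse learning
            mid = (ub + lb) / 2.0
            for i in range(n_pop):
                X_star = np.clip(mid + mid / eta - X[i] / eta, lb, ub)
                f_star = f(X_star)
                if f_star <= fit[i]:  # greedy mechanism, Formula (17)
                    X[i], fit[i] = X_star, f_star
        i_best = int(fit.argmin())
        if fit[i_best] < best_fit:
            best, best_fit = X[i_best].copy(), fit[i_best]
    return best, best_fit
```

For instance, `caed(lambda x: float(np.sum(x ** 2)), dim=5, lb=-5.0, ub=5.0)` drives the sphere function toward its optimum at the origin under this simplified update.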

5. Algorithm Performance Testing and Comparative Analysis

5.1. Experimental Design and Test Functions

To verify the search accuracy and robustness of the Comprehensive Adaptive Enterprise Development Optimizer (CAED) proposed in this paper when solving relevant optimization problems, the ED algorithm, CAED algorithm, PSO algorithm [43], GWO algorithm [44], AOA algorithm [45], DBO algorithm [46], GJO algorithm [47], SCSO algorithm [48], BKA algorithm [49], and SABO algorithm [50] were used to solve for the optimal values on 23 typical benchmark functions, and 50 independent experiments were carried out. The benchmark functions are shown in Table 1.

5.2. Experimental Results and Algorithm Analysis and Comparison

A comparison of the optimization results on benchmark functions F1–F23 is shown in Table 2, and the corresponding convergence curves are shown in Figure 4. The radar charts and average rankings of the 10 algorithms on the 23 test functions are shown in Figure 5 and Figure 6.
Based on the test results of the 23 benchmark functions from CEC2005, the CAED demonstrates remarkable superiority on the vast majority of problems. For unimodal functions (such as F1–F5), the minimum values (min) of the CAED nearly all approach the theoretical optimal values (for example, the min of both F1 and F3 is 0), and the standard deviations (std) are extremely low (e.g., the std of F1 is 0). This indicates that the CAED can not only accurately approximate the global optimal solution but also exhibit high stability. Moreover, on multimodal functions (such as F8 and F21–F23), the worst values of the CAED are significantly better than those of other algorithms (for instance, the worst value on F21 is −10.1532 for the CAED, versus −2.6303 for GWO), suggesting that it can effectively avoid becoming trapped in local optima within complex search spaces. Although the computational time of the CAED is slightly longer than that of some algorithms (e.g., 0.0897 on F1 for the CAED versus 0.0294 for PSO), its outstanding performance in optimization accuracy and robustness more than compensates for this drawback.
Other algorithms have their own advantages and disadvantages on different problems. DBO and BKA stand out in terms of stability: for example, the std of DBO on F1 is 3.73 × 10^−116, and the std of BKA on F4 is 6.38 × 10^−77, making them suitable for scenarios with strict requirements on result consistency. GWO and PSO perform moderately on some high-dimensional problems (such as F8 and F20), but GWO has a relatively fast convergence speed (e.g., its time on F9 is 0.0583), making it applicable to scenarios that demand quick responses. AOA and SABO perform poorly overall, especially on complex multimodal functions (for example, the mean value of SABO on F5 is 322.38, much higher than the CAED's 26.62). This may be due to their sensitivity to parameters or insufficient search mechanisms, resulting in unstable performance. SCSO and GJO perform impressively on specific functions (e.g., the min of GJO on F7 is 5.12 × 10^−6), but they have a high time cost (e.g., the time of SCSO on F6 is 0.6878), so a trade-off between efficiency and accuracy is necessary.
Overall, the CAED, with its global search ability and stability, is the preferred algorithm for solving the CEC2005 benchmark problems, especially in scenarios that require high precision and robustness. If computational time is a concern, DBO or BKA can be considered as alternatives. Their stability is comparable to that of the CAED, and they consume less time (e.g., the time of BKA in F4 is 0.0685). For simple unimodal problems, GWO or PSO can also be used as supplementary options due to their high computational efficiency (e.g., the time of GWO in F10 is 0.0579). It is advisable to avoid using AOA and SABO, as they perform poorly in most problems and may affect the optimization results. Ultimately, the choice should be made by considering the complexity of the actual problem, accuracy requirements, and computational resource limitations.

5.3. Analysis of Algorithm Time Complexity

Assume that the population size of the enterprise optimization algorithm (ED) is n, the decision-making dimension is j, and the maximum number of iterations is Tmax. At the beginning, the algorithm randomly generates a population, so the time complexity of initializing the population is O(n·j). After entering the loop to find the optimal solution, in the strategy update stage, each of the j dimensions of each individual needs to be updated, and the time complexity of a single iteration is O(n·j). After updating the values of each individual, it is necessary to calculate the fitness value. Since the time complexity required for each individual is O(j), the total time complexity is O(n·j). After calculating the fitness values, it is necessary to find the optimal individual, and the time complexity at this time is O(n). In summary, the total time complexity of the enterprise optimization algorithm is shown in Equation (20):
$T_{ED} = O(n \cdot j) + T_{\max} \cdot O(n \cdot j) = O(T_{\max} \cdot n \cdot j)$
In the Comprehensive Adaptive Enterprise Development Optimizer (CAED), the time complexity of generating the population with the Tent chaotic map is O(n·j), and that of the adaptive inertia weight is O(1). For the lens imaging inverse learning, since the strategy is triggered 0.3Tmax times, the time complexity of generating inverse solutions is O(0.3·n·j). The time complexity of selecting individuals with the greedy mechanism is O(n log n). After calculating the fitness values, the time complexity of finding the optimal individual is again O(n). In summary, after ignoring the lower-order terms, the total time complexity of the Comprehensive Adaptive Enterprise Development Optimizer (CAED) is shown in Equation (21):
$T_{CAED} = O(n \cdot j) + T_{\max} \cdot O(n \cdot j) + 0.3\,T_{\max} \cdot O(n \log n) = O\left(T_{\max} \cdot n \cdot (j + \log n)\right)$
Through the analysis of Equations (20) and (21), it can be seen that when solving high-dimensional problems, $j \gg \log n$. Therefore, the term $j + \log n$ in Equation (21) can be approximated by $j$, and the time complexity is then close to that of the original algorithm. From another perspective, even when the CAED is applied to high-dimensional problems, the additional time cost and the resulting performance improvement are not of the same order of magnitude: the CAED incurs only a marginally increased time cost (on the order of 0.1%) while delivering a significant performance boost.

5.4. Application of the CAED in Engineering

5.4.1. Optimization of Cantilever Beam Design

Cantilever beam design is a typical optimization problem in structural engineering. Its aim is to determine the geometric parameters (such as cross-sectional shape and size) and material properties of the beam to achieve specific engineering goals (such as lightweight and low cost) while meeting mechanical performance requirements (strength, stiffness, and stability). A cantilever beam is usually fixed at one end and free at the other end, bearing external loads (such as concentrated forces and distributed forces), and it is necessary to avoid failures (such as yielding and excessive deformation). The goals and significance of optimizing the cantilever beam design are as follows:
1. Minimize the weight to reduce material usage and lower the self-weight of the structure. This is applicable in fields such as aerospace and automotive, saving material costs, improving energy efficiency, and enhancing the flexibility of movable structures.
2. Minimize the maximum stress/deformation to ensure that the stress of the beam does not exceed the allowable value of the material and the deformation is within the allowable range, guaranteeing structural safety and avoiding fatigue failure or functional failure.
3. Minimize the manufacturing cost by comprehensively considering material costs, processing complexity, and manufacturing process limitations. This can improve economic efficiency and meet the requirements of large-scale production.
The cantilever beam design problem is a structural engineering problem concerning the weight optimization of a square-cross-section cantilever beam. One end of the beam is rigidly supported, and a vertical force acts on the free node of the cantilever. The beam consists of five hollow square blocks of constant thickness; the heights (or widths) of the blocks are the decision variables, and the thickness is fixed (here, 2/3). This problem can be expressed by the following equations:
  • Objective function:
$f(X) = 0.0624 \times (x_1 + x_2 + x_3 + x_4 + x_5)$
  • Constraint conditions:
$g(X) = \dfrac{61}{x_1^3} + \dfrac{37}{x_2^3} + \dfrac{19}{x_3^3} + \dfrac{7}{x_4^3} + \dfrac{1}{x_5^3} - 1 \le 0$
  • Boundary constraints:
$0.01 \le x_i \le 100, \quad i = 1, 2, \ldots, 5$
The schematic diagram of cantilever beam design structure is shown in Figure 7.
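Under the formulation above, the objective and constraint can be evaluated directly. The static-penalty wrapper below is one common way to hand the constrained problem to any of the compared optimizers; the penalty factor `mu` and the function names are illustrative assumptions:

```python
import numpy as np

def cantilever_objective(x):
    """Beam weight: f(X) = 0.0624 * (x1 + x2 + x3 + x4 + x5)."""
    return 0.0624 * float(np.sum(x))

def cantilever_constraint(x):
    """g(X) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 - 1 <= 0."""
    coeff = np.array([61.0, 37.0, 19.0, 7.0, 1.0])
    return float(np.sum(coeff / np.asarray(x, dtype=float) ** 3) - 1.0)

def cantilever_fitness(x, mu=1e6):
    """Static-penalty fitness: feasible designs keep their weight,
    while constraint violations are penalized quadratically."""
    x = np.clip(np.asarray(x, dtype=float), 0.01, 100.0)  # bounds
    g = cantilever_constraint(x)
    return cantilever_objective(x) + mu * max(0.0, g) ** 2
```

For example, the design x = (10, 10, 10, 10, 10) is feasible (g ≈ −0.875) with weight 3.12, whereas x = (1, 1, 1, 1, 1) violates the constraint and receives a large penalty, steering any minimizer back toward the feasible region.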
In the experimental simulation, 10 intelligent optimization algorithms were used, including the Comprehensive Adaptive Enterprise Development Optimizer (CAED), enterprise optimization algorithm (ED), Particle Swarm Optimization (PSO), Gray Wolf Optimizer (GWO), Arithmetic Optimization Algorithm (AOA), Dung Beetle Optimizer (DBO), Golden Jackal Optimizer (GJO), Sand Cat Swarm Optimizer (SCSO), Black-Winged Kite Optimizer (BKA), and Subtraction Average Optimizer (SABO). Simulation optimization experiments were carried out around the objective function of the cantilever beam design. The worst values, best values, standard deviations, average values, and median values of each optimization algorithm were recorded, as shown in Table 3, and the convergence curves are shown in Figure 8.
Based on the comprehensive analysis of the experimental results, in terms of the optimal solution (best), the CAED (13.3925) performs the best, outperforming the ED (13.4365), GWO (13.3661), etc., and is significantly better than the AOA (90.03) and SABO (18.8989). In terms of stability (std), the standard deviation of the CAED (0.0098) is the smallest, indicating the most stable results. The GWO (0.0013) and DBO (0.0077) follow. In terms of the average value (mean) and median value (median), the average value (13.3712) and median value (13.3696) of the CAED are both close to the optimal value, showing a balanced, comprehensive performance. In terms of the worst value (worst), the CAED (13.3605) performs the best among all algorithms. From the results, it can be seen that the CAED leads comprehensively in the cantilever beam design, especially in terms of stability and the quality of the optimal solution.

5.4.2. Optimization of Three-Bar Truss Design

Three-bar truss design is a classic optimization problem in structural engineering. The goal is to determine the geometric parameters (such as the cross-sectional area, length, and node positions of the bars) and material properties of the truss structure composed of three bars to achieve goals, such as lightweight, low cost, or high reliability, while meeting mechanical performance constraints (such as strength, stiffness, and stability). Three-bar trusses are usually used in simple support structures (such as bridges and roof brackets), and it is necessary to avoid failure modes (such as bar yielding, buckling, and excessive node displacement). The goals and significance of optimizing the three-bar truss design are as follows:
1. Minimize the structural weight to reduce material consumption and lower the manufacturing cost. This is suitable for weight-sensitive scenarios (such as aerospace and mobile structures).
2. Minimize the maximum stress to ensure that the stress of all the bars does not exceed the allowable value of the material, preventing yielding or fracture.
3. Minimize the node displacement to control the displacement of the key nodes and avoid functional failure or excessive structural deformation.
The goal of the three-bar truss design problem is to minimize its volume while satisfying the stress constraints on each side of the truss members. This problem can be described by the cross-sectional area (X = [x1,x2] = [A1,A2]), and its mathematical model is as follows:
  • Objective function:
$f(X) = \left(2\sqrt{2}\,A_1 + A_2\right) \times l$
  • Constraint conditions:
$g_1(X) = \dfrac{\sqrt{2}\,A_1 + A_2}{\sqrt{2}\,A_1^2 + 2 A_1 A_2}\,P - \sigma \le 0$
$g_2(X) = \dfrac{A_2}{\sqrt{2}\,A_1^2 + 2 A_1 A_2}\,P - \sigma \le 0$
$g_3(X) = \dfrac{1}{A_1 + \sqrt{2}\,A_2}\,P - \sigma \le 0$
  • Boundary constraints:
$0 \le A_1, A_2 \le 1$
where l = 100 cm, P = 2 kN/cm², and σ = 2 kN/cm².
The schematic diagram of the three-bar truss design structure is shown in Figure 9.
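The objective and the three stress constraints above translate directly into code. This is a sketch of the problem evaluation only; the candidate point used in the usage note below is a commonly cited near-feasible design for this formulation, given purely for illustration:

```python
import numpy as np

SQ2 = np.sqrt(2.0)

def truss_volume(A, l=100.0):
    """Objective: f(X) = (2*sqrt(2)*A1 + A2) * l, with l = 100 cm."""
    A1, A2 = A
    return (2.0 * SQ2 * A1 + A2) * l

def truss_constraints(A, P=2.0, sigma=2.0):
    """Stress constraints g1..g3 <= 0, with P = sigma = 2 kN/cm^2."""
    A1, A2 = A
    d = SQ2 * A1 ** 2 + 2.0 * A1 * A2  # shared denominator of g1 and g2
    g1 = (SQ2 * A1 + A2) / d * P - sigma
    g2 = A2 / d * P - sigma
    g3 = P / (A1 + SQ2 * A2) - sigma
    return np.array([g1, g2, g3])
```

As a quick check, the candidate A = (0.7887, 0.4082) satisfies all three constraints to within about 10⁻³; a penalty wrapper analogous to the cantilever case can then feed `truss_volume` plus constraint violations to any of the compared optimizers.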
In the experimental simulation, 10 intelligent optimization algorithms were used, including the Comprehensive Adaptive Enterprise Development Optimizer (CAED), enterprise optimization algorithm (ED), Particle Swarm Optimization (PSO), Gray Wolf Optimizer (GWO), Arithmetic Optimization Algorithm (AOA), Dung Beetle Optimizer (DBO), Golden Jackal Optimizer (GJO), Sand Cat Swarm Optimizer (SCSO), Black-Winged Kite Optimizer (BKA), and Subtraction Average Optimizer (SABO). Simulation optimization experiments were carried out around the objective function of the three-bar truss design. The worst values, best values, standard deviations, average values, and median values of each optimization algorithm were recorded, as shown in Table 4, and the convergence curves are shown in Figure 10.
Based on the comprehensive analysis of the experimental results, in terms of the best solution (best), the CAED (259.805047) shows the best performance, while the AOA and SABO perform the worst (262.65 and 260.39, respectively), indicating that they are prone to becoming trapped in local optima. Regarding stability (std), the CAED has a relatively low standard deviation (1.11 × 10^−7), outperforming most algorithms in terms of stability. In terms of the average value (mean) and median value (median), the mean and median values of the CAED, DBO, and BKA are the lowest (≈259.805), presenting the optimal comprehensive performance. Concerning the worst value (worst), the worst values of all the algorithms are close, but those of the AOA and SABO are significantly higher (259.85 and 259.82), suggesting that they perform poorly in the worst-case scenario. From these results, it can be seen that the CAED leads comprehensively in the three-bar truss design, excelling in optimization accuracy, stability, and convergence, and is thus the preferred algorithm for the three-bar truss design problem.

6. Conclusions

This paper puts forward the Comprehensive Adaptive Enterprise Development Optimizer (CAED), an innovative optimization algorithm designed to address the limitations of traditional enterprise optimization techniques. By integrating the Tent chaotic map initialization, lens imaging inverse learning, and a dynamic inertia weight strategy, the CAED effectively boosts the global search accuracy and convergence efficiency. The Tent chaotic map initialization helps in generating a diverse set of initial solutions, ensuring a more comprehensive exploration of the solution space from the start. The lens imaging inverse learning mechanism broadens the search scope by creating reverse solutions based on optical imaging principles, increasing the likelihood of escaping local optima. Meanwhile, the dynamic inertia weight strategy dynamically adjusts the balance between global exploration and local exploitation, enabling the algorithm to adaptively navigate through the different regions of the solution space.
A series of rigorous experiments were conducted to evaluate the performance of the CAED. The results clearly show that the CAED can closely approach the theoretical optimal solutions when applied to 23 benchmark functions. For instance, in the case of function F1, the standard deviation approaches zero, which strongly indicates its high precision and stability. In practical engineering applications, such as the design of a cantilever beam and a three-bar truss, the CAED significantly outperforms other comparative algorithms. For the cantilever beam problem, the CAED obtains an optimal value of 13.3925, while for the three-bar truss problem, it achieves an optimal value of 259.805047. These outstanding results firmly verify the robustness and practicality of the CAED in handling high-dimensional and complex problems, making it a promising solution for various real-world optimization tasks. However, like any algorithm, the CAED also has its limitations. In this paper, we thoroughly discuss the challenges that the algorithm may encounter when dealing with high-dimensional and complex problems. As the dimensionality of the problem increases, the computational complexity of the CAED rises significantly, which may lead to longer computation times and higher resource requirements. Additionally, the convergence speed of the algorithm may slow down, making it less efficient in finding the optimal solution within a reasonable time frame. We also analyze the possible failure scenarios of the algorithm in certain special situations. For example, when the objective function has a large number of local optimal solutions with a complex distribution, the CAED may struggle to distinguish the global optimal solution from the local ones, potentially resulting in suboptimal solutions.
Regarding options for scaling the algorithm, various strategies could enhance the performance of the CAED. One approach is to further optimize and refine the existing strategies, such as improving the parameters and mechanisms of the Tent chaotic map initialization, lens imaging inverse learning, and dynamic inertia weight strategy. Another promising direction is to combine the CAED with other powerful optimization algorithms, leveraging their respective advantages to create a more efficient and effective hybrid algorithm. Looking ahead, future research on the CAED can focus on several key aspects. Multi-objective expansion would enable the algorithm to handle problems with multiple conflicting objectives simultaneously, which is common in many real-world applications. Improving its adaptability to dynamic scenarios would allow the CAED to respond effectively to changes in the problem environment in real time. Moreover, developing intelligent parameter tuning techniques would help the algorithm automatically adjust its parameters according to the characteristics of different problems, further enhancing its performance and versatility. These research directions have the potential to significantly expand the application depth of the CAED in important fields such as intelligent manufacturing and real-time scheduling, opening up new possibilities for solving complex optimization problems in these domains.

Author Contributions

Conceptualization, S.W.; writing—review and editing, S.W.; validation, L.C.; data curation, Y.Z.; writing—original draft preparation, L.C.; methodology, M.X.; software, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon request. There are no restrictions on data availability.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  2. Zhu, D.; Wang, S.; Zhou, C.; Yan, S.; Xue, J. Human memory optimization algorithm: A memory-inspired optimizer for global optimization problems. Expert Syst. Appl. 2024, 237, 121597. [Google Scholar] [CrossRef]
  3. Aggarwal, S.; Tripathi, S. MODE/CMA-ES: Integrated multi-operator differential evolution technique with CMA-ES. Appl. Soft Comput. 2025, 176, 113177. [Google Scholar] [CrossRef]
  4. Sakovich, N.; Aksenov, D.; Pleshakova, E.; Gataullin, S. MAMGD: Gradient-based optimization method using exponential decay. Technologies 2024, 12, 154. [Google Scholar] [CrossRef]
  5. Wang, Y.; Li, J.; Tan, X. Chaos and elite reverse learning–Enhanced sparrow search algorithm for IIoT sensing communication optimization. Alex. Eng. J. 2025, 125, 663–676. [Google Scholar] [CrossRef]
  6. Yang, X.; Zeng, G.; Cao, Z.; Huang, X.; Zhao, J. Parameters estimation of complex solar photovoltaic models using bi-parameter coordinated updating L-SHADE with parameter decomposition method. Case Stud. Therm. Eng. 2024, 61, 104917. [Google Scholar] [CrossRef]
  7. Bodalal, R.; Shuaeib, F. Marine predators algorithm for sizing optimization of truss structures with continuous variables. Computation 2023, 11, 91. [Google Scholar] [CrossRef]
  8. Li, W.; Tang, J.; Wang, L. Many-objective evolutionary algorithm with multi-strategy selection mechanism and adaptive reproduction operation. J. Supercomput. 2024, 80, 24435–24482. [Google Scholar] [CrossRef]
  9. Hu, G.; Song, K.; Abdel-salam, M. Sub-population evolutionary particle swarm optimization with dynamic fitness-distance balance and elite reverse learning for engineering design problems. Adv. Eng. Softw. 2025, 202, 103866. [Google Scholar] [CrossRef]
  10. Hwang, J.; Kale, G.; Patel, P.P.; Vishwakarma, R.; Aliasgari, M.; Hedayatipour, A.; Rezaei, A.; Sayadi, H. Machine learning in chaos-based encryption: Theory, implementations, and applications. IEEE Access 2023, 11, 125749–125767. [Google Scholar] [CrossRef]
  11. Chen, C.; Cao, L.; Chen, Y.; Chen, B.; Yue, Y. A comprehensive survey of convergence analysis of beetle antennae search algorithm and its applications. Artif. Intell. Rev. 2024, 57, 141. [Google Scholar] [CrossRef]
  12. Yue, Y.; Cao, L.; Zhang, Y. Novel WSN Coverage Optimization Strategy Via Monarch Butterfly Algorithm and Particle Swarm Optimization. Wirel. Pers. Commun. 2024, 135, 2255–2280. [Google Scholar] [CrossRef]
  13. Truong, D.N.; Chou, J.S. Metaheuristic algorithm inspired by enterprise development for global optimization and structural engineering problems with frequency constraints. Eng. Struct. 2024, 318, 118679. [Google Scholar] [CrossRef]
  14. Cai, X.; Wang, W.; Wang, Y. Multi-strategy enterprise development optimizer for numerical optimization and constrained problems. Sci. Rep. 2025, 15, 10538. [Google Scholar] [CrossRef] [PubMed]
  15. Jawad, Z.N.; Balázs, V. Machine learning-driven optimization of enterprise resource planning (ERP) systems: A comprehensive review. Beni-Suef Univ. J. Basic Appl. Sci. 2024, 13, 4–20. [Google Scholar] [CrossRef]
  16. Simuni, G. Auto ML for Optimizing Enterprise AI Pipelines: Challenges and Opportunities. Int. IT J. Res. 2024, 2, 174–184. [Google Scholar]
  17. Akl, D.T.; Saafan, M.M.; Haikal, A.Y.; El-Gendy, E.M. IHHO: An improved Harris Hawks optimization algorithm for solving engineering problems. Neural Comput. Appl. 2024, 36, 12185–12298. [Google Scholar] [CrossRef]
  18. Hu, G.; Huang, F.; Chen, K.; Wei, G. MNEARO: A meta swarm intelligence optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2024, 419, 116664. [Google Scholar] [CrossRef]
  19. Özçelik, Y.B.; Altan, A. Overcoming nonlinear dynamics in diabetic retinopathy classification: A robust AI-based model with chaotic swarm intelligence optimization and recurrent long short-term memory. Fractal Fract. 2023, 7, 598. [Google Scholar] [CrossRef]
  20. Tawhid, M.A.; Ibrahim, A.M. An efficient hybrid swarm intelligence optimization algorithm for solving nonlinear systems and clustering problems. Soft Comput. 2023, 27, 8867–8895. [Google Scholar] [CrossRef]
  21. Tang, J.; Duan, H.; Lao, S. Swarm intelligence algorithms for multiple unmanned aerial vehicles collaboration: A comprehensive review. Artif. Intell. Rev. 2023, 56, 4295–4327. [Google Scholar] [CrossRef]
  22. Shitharth, S.; Yonbawi, S.; Manoharan, H.; Alahmari, S.; Yafoz, A.; Mujlid, H. Physical stint virtual representation of biomedical signals with wireless sensors using swarm intelligence optimization algorithm. IEEE Sens. J. 2023, 23, 3870–3877. [Google Scholar] [CrossRef]
  23. Chao, W.; Zhang, S.; Tianhang, M.A.; Yuetong, X.; Chen, M.Z.; Lei, W. Swarm intelligence: A survey of model classification and applications. Chin. J. Aeronaut. 2025, 38, 102982. [Google Scholar]
  24. Heng, H.; Ghazali, M.H.M.; Rahiman, W. Exploring the application of ant colony optimization in path planning for Unmanned Surface Vehicles. Ocean Eng. 2024, 311, 118738. [Google Scholar] [CrossRef]
  25. Nayak, J.; Swapnarekha, H.; Naik, B.; Dhiman, G.; Vimal, S. 25 years of particle swarm optimization: Flourishing voyage of two decades. Arch. Comput. Methods Eng. 2023, 30, 1663–1725. [Google Scholar] [CrossRef]
  26. Maaroof, B.B.; Rashid, T.A.; Abdulla, J.M.; Hassan, B.A.; Alsadoon, A.; Mohammadi, M.; Khishe, M.; Mirjalili, S. Current studies and applications of shuffled frog leaping algorithm: A review. Arch. Comput. Methods Eng. 2022, 29, 3459–3474. [Google Scholar] [CrossRef]
  27. Yang, Y.; Li, G.; Luo, T.; Al-Bahrani, M.; Al-Ammar, E.A.; Sillanpaa, M.; Ali, S.; Leng, X. The innovative optimization techniques for forecasting the energy consumption of buildings using the shuffled frog leaping algorithm and different neural networks. Energy 2023, 268, 126548. [Google Scholar] [CrossRef]
  28. Hu, B.; Zheng, X.; Lai, W. EPKO: Enhanced pied kingfisher optimizer for numerical optimization and engineering problems. Expert Syst. Appl. 2025, 278, 127416. [Google Scholar] [CrossRef]
  29. Reka, R.; Manikandan, A.; Venkataramanan, C.; Madanachitran, R. An energy efficient clustering with enhanced chicken swarm optimization algorithm with adaptive position routing protocol in mobile adhoc network. Telecommun. Syst. 2023, 84, 183–202. [Google Scholar] [CrossRef]
  30. Wang, Z.; Zhang, W.; Guo, Y.; Han, M.; Wan, B.; Liang, S. A multi-objective chicken swarm optimization algorithm based on dual external archive with various elites. Appl. Soft Comput. 2023, 133, 109920.
  31. Zhang, S. Innovative application of particle swarm algorithm in the improvement of digital enterprise management efficiency. Syst. Soft Comput. 2024, 6, 200151.
  32. Yin, L.; Tian, J.; Chen, X. Consistent African vulture optimization algorithm for electrical energy exchange in commercial buildings. Energy 2025, 318, 134741.
  33. Truong, D.N.; Chou, J.S. Multiobjective enterprise development algorithm for optimizing structural design by weight and displacement. Appl. Math. Model. 2025, 137, 115676.
  34. Akraam, M.; Rashid, T.; Zafar, S. An image encryption scheme proposed by modifying chaotic tent map using fuzzy numbers. Multimed. Tools Appl. 2023, 82, 16861–16879.
  35. Fu, Y.; Liu, D.; Fu, S.; Chen, J.; He, L. Enhanced Aquila optimizer based on tent chaotic mapping and new rules. Sci. Rep. 2024, 14, 3013.
  36. Zhang, Q.; Hongshun, L.; Jian, G.; Yifan, W.; Luyao, L.; Hongzheng, L.; Haoxi, C. Improved GWO-MCSVM algorithm based on nonlinear convergence factor and tent chaotic mapping and its application in transformer condition assessment. Electr. Power Syst. Res. 2023, 224, 109754.
  37. Ai, C.; He, S.; Fan, X. Parameter estimation of fractional-order chaotic power system based on lens imaging learning strategy state transition algorithm. IEEE Access 2023, 11, 13724–13737.
  38. Li, W.; Luo, H.; Wang, L. Multifactorial brain storm optimization algorithm based on direct search transfer mechanism and concave lens imaging learning strategy. J. Supercomput. 2023, 79, 6168–6202.
  39. Yuan, P.; Zhang, T.; Yao, L.; Lu, Y.; Zhuang, W. A hybrid golden jackal optimization and golden sine algorithm with dynamic lens-imaging learning for global optimization problems. Appl. Sci. 2022, 12, 9709.
  40. Liao, Y.J.; Tarng, W.; Wang, T.L. The effects of an augmented reality lens imaging learning system on students' science achievement, learning motivation, and inquiry skills in physics inquiry activities. Educ. Inf. Technol. 2024, 30, 5059–5104.
  41. Jena, J.J.; Satapathy, S.C. A new adaptive tuned Social Group Optimization (SGO) algorithm with sigmoid-adaptive inertia weight for solving engineering design problems. Multimed. Tools Appl. 2024, 83, 3021–3055.
  42. John, N.; Janamala, V.; Rodrigues, J. An adaptive inertia weight teaching–learning-based optimization for optimal energy balance in microgrid considering islanded conditions. Energy Syst. 2024, 15, 141–166.
  43. Jain, M.; Saihjpal, V.; Singh, N.; Singh, S.B. An overview of variants and advancements of PSO algorithm. Appl. Sci. 2022, 12, 8392.
  44. Liu, Y.; As'arry, A.; Hassan, M.K.; Hairuddin, A.A.; Mohamad, H. Review of the grey wolf optimization algorithm: Variants and applications. Neural Comput. Appl. 2024, 36, 2713–2735.
  45. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
  46. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336.
  47. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924.
  48. Lou, T.; Yue, Z.; Chen, Z.; Qi, R.; Li, G. A hybrid multi-strategy SCSO algorithm for robot path planning. Evol. Syst. 2025, 16, 54.
  49. Wang, J.; Wang, W.; Hu, X.; Qiu, L.; Zang, H. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98.
  50. Trojovský, P.; Dehghani, M. Subtraction-average-based optimizer: A new swarm-inspired metaheuristic algorithm for solving optimization problems. Biomimetics 2023, 8, 149.
Figure 1. Enterprise optimization algorithm flow chart.
Figure 2. Tent chaos mapping sequence.
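The tent chaotic sequence shown in Figure 2 is straightforward to reproduce. A minimal sketch follows; the control parameter a = 0.7 and the seed value are illustrative assumptions, not settings taken from the paper:

```python
def tent_map(x, a=0.7):
    # Piecewise-linear tent map on (0, 1); produces a chaotic, well-spread sequence
    return x / a if x < a else (1.0 - x) / (1.0 - a)

def tent_sequence(x0, n, a=0.7):
    # Generate n iterates, as typically used for chaotic population initialization
    seq, x = [], x0
    for _ in range(n):
        x = tent_map(x, a)
        seq.append(x)
    return seq

seq = tent_sequence(0.37, 200)
```

Sequences generated this way spread initial candidate solutions more evenly over the search range than plain uniform sampling, which is why tent mapping is a common initialization step in metaheuristics.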
Figure 3. Flow chart of the CAED (* represents the improved steps).
Figure 4. Benchmark function convergence curves.
Figure 5. Radar charts of the 10 algorithms on the 23 test functions.
Figure 6. Average rankings of the 10 algorithms on the 23 test functions.
Figure 7. Schematic diagram of cantilever beam design structure.
Figure 8. Cantilever beam convergence curve.
Figure 9. Schematic diagram of the three-bar truss design structure.
Figure 10. Three-bar truss convergence curve.
Table 1. Benchmark functions table.
Test Function | n | S | Fmin
$F_1(x)=\sum_{i=1}^{n}x_i^2$ | 50 | [−100,100]^n | 0
$F_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | 50 | [−10,10]^n | 0
$F_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | 50 | [−100,100]^n | 0
$F_4(x)=\max_i\{|x_i|,\ 1\le i\le n\}$ | 50 | [−100,100]^n | 0
$F_5(x)=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | 50 | [−30,30]^n | 0
$F_6(x)=\sum_{i=1}^{n}\left(\lfloor x_i+0.5\rfloor\right)^2$ | 50 | [−100,100]^n | 0
$F_7(x)=\sum_{i=1}^{n}i\,x_i^4+\mathrm{random}[0,1)$ | 50 | [−1.28,1.28]^n | 0
$F_8(x)=\sum_{i=1}^{n}-x_i\sin\left(\sqrt{|x_i|}\right)$ | 50 | [−500,500]^n | −12,569.5
$F_9(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 50 | [−5.12,5.12]^n | 0
$F_{10}(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | 50 | [−32,32]^n | 0
$F_{11}(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\tfrac{x_i}{\sqrt{i}}\right)+1$ | 50 | [−600,600]^n | 0
$F_{12}(x)=\tfrac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, with $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m,&x_i>a\\0,&-a\le x_i\le a\\k(-x_i-a)^m,&x_i<-a\end{cases}$ | 50 | [−50,50]^n | 0
$F_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 50 | [−50,50]^n | 0
$F_{14}(x)=\left[\tfrac{1}{500}+\sum_{j=1}^{25}\left(j+\sum_{i=1}^{2}(x_i-a_{ij})^6\right)^{-1}\right]^{-1}$ | 2 | [−65.536,65.536]^n | ≈0.998
$F_{15}(x)=\sum_{i=1}^{11}\left[a_i-\tfrac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\right]^2$ | 4 | [−5,5]^n | 0.000307
$F_{16}(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5,5]^n | −1.0316
$F_{17}(x)=\left(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\right)^2+10\left(1-\tfrac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5,10] × [0,15] | 0.398
$F_{18}(x)=\left[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\right]\left[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\right]$ | 2 | [−2,2]^n | 3
$F_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [0,1]^n | −3.86
$F_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0,1]^n | −3.32
$F_{21}(x)=-\sum_{i=1}^{5}\left[(x-a_i)(x-a_i)^{T}+c_i\right]^{-1}$ | 4 | [0,10]^n | −10
$F_{22}(x)=-\sum_{i=1}^{7}\left[(x-a_i)(x-a_i)^{T}+c_i\right]^{-1}$ | 4 | [0,10]^n | −10
$F_{23}(x)=-\sum_{i=1}^{10}\left[(x-a_i)(x-a_i)^{T}+c_i\right]^{-1}$ | 4 | [0,10]^n | −10
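As a sanity check, the simpler entries of Table 1 are easy to evaluate directly. A small sketch of F1 (Sphere), F9 (Rastrigin), and F10 (Ackley), each of which attains its minimum of 0 at the origin:

```python
import math

def sphere(x):            # F1
    return sum(v * v for v in x)

def rastrigin(x):         # F9
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def ackley(x):            # F10
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

origin = [0.0] * 50
```

Evaluating `sphere(origin)`, `rastrigin(origin)`, and `ackley(origin)` at n = 50 confirms the theoretical minima listed in the table, which is a useful check before benchmarking any optimizer against them.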
Table 2. Comparison of optimization results of benchmark functions F1–F23.
F1 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 5.00 × 10^−4 | 5.18 × 10^−29 | 2.32 × 10^−158 | 1.53 × 10^−194 | 1.21 × 10^−57 | 1.33 × 10^−124 | 2.29 × 10^−107 | 1.64 × 10^−201 | 4.60 × 10^−3 | 0.00
std | 2.07 × 10^−2 | 1.54 × 10^−27 | 8.09 × 10^−38 | 3.73 × 10^−116 | 2.84 × 10^−54 | 3.57 × 10^−108 | 2.27 × 10^−77 | 0.00 | 2.96 × 10^−1 | 0.00
avg | 8.89 × 10^−3 | 1.22 × 10^−27 | 1.48 × 10^−38 | 6.82 × 10^−117 | 1.96 × 10^−54 | 6.52 × 10^−109 | 4.15 × 10^−78 | 8.01 × 10^−195 | 2.56 × 10^−1 | 0.00
median | 3.72 × 10^−3 | 5.66 × 10^−28 | 3.48 × 10^−86 | 7.51 × 10^−140 | 7.10 × 10^−55 | 2.36 × 10^−119 | 3.04 × 10^−98 | 1.07 × 10^−197 | 1.07 × 10^−1 | 0.00
worst | 1.16 × 10^−1 | 5.74 × 10^−27 | 4.43 × 10^−37 | 2.05 × 10^−115 | 1.12 × 10^−53 | 1.95 × 10^−107 | 1.24 × 10^−76 | 2.37 × 10^−193 | 10.6 | 0.00
time | 2.94 × 10^−2 | 5.46 × 10^−2 | 3.51 × 10^−2 | 4.77 × 10^−2 | 7.36 × 10^−2 | 6.82 × 10^−1 | 4.31 × 10^−2 | 6.80 × 10^−2 | 6.70 × 10^−2 | 8.97 × 10^−2
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00
F2 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 2.16 × 10^−3 | 1.16 × 10^−17 | 0.00 | 5.80 × 10^−82 | 1.00 × 10^−33 | 1.54 × 10^−65 | 7.48 × 10^−54 | 7.23 × 10^−113 | 5.57 × 10^−2 | 0.00
std | 4.09 | 6.40 × 10^−17 | 0.00 | 1.63 × 10^−53 | 3.81 × 10^−32 | 9.08 × 10^−60 | 1.15 × 10^−44 | 1.39 × 10^−110 | 4.03 × 10^−1 | 0.00
avg | 2.03 | 8.16 × 10^−17 | 0.00 | 2.98 × 10^−54 | 2.20 × 10^−32 | 1.87 × 10^−60 | 2.09 × 10^−45 | 6.74 × 10^−111 | 3.10 × 10^−1 | 0.00
median | 2.04 × 10^−2 | 6.16 × 10^−17 | 0.00 | 6.59 × 10^−68 | 8.68 × 10^−33 | 9.08 × 10^−63 | 5.97 × 10^−50 | 1.41 × 10^−111 | 1.97 × 10^−1 | 0.00
worst | 10.3 | 3.31 × 10^−16 | 0.00 | 8.94 × 10^−53 | 2.04 × 10^−31 | 4.99 × 10^−59 | 6.27 × 10^−44 | 5.39 × 10^−110 | 22.6 | 0.00
time | 3.08 × 10^−2 | 5.62 × 10^−2 | 3.76 × 10^−2 | 5.13 × 10^−2 | 7.46 × 10^−2 | 6.82 × 10^−1 | 5.48 × 10^−2 | 6.87 × 10^−2 | 7.06 × 10^−2 | 9.13 × 10^−2
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00
F3 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 4.96 × 10^2 | 9.60 × 10^−9 | 8.20 × 10^−141 | 7.93 × 10^−143 | 3.67 × 10^−24 | 6.02 × 10^−115 | 4.20 × 10^−102 | 1.23 × 10^−87 | 1.53 × 10^3 | 0.00
std | 2.18 × 10^3 | 3.80 × 10^−5 | 7.86 × 10^−3 | 5.50 × 10^−53 | 3.44 × 10^−14 | 5.37 × 10^−98 | 2.09 × 10^−80 | 7.80 × 10^−45 | 1.24 × 10^3 | 0.00
avg | 2.51 × 10^3 | 1.52 × 10^−5 | 3.80 × 10^−3 | 1.00 × 10^−53 | 6.65 × 10^−15 | 1.60 × 10^−98 | 3.81 × 10^−81 | 1.48 × 10^−45 | 3.37 × 10^3 | 0.00
median | 1.60 × 10^3 | 3.19 × 10^−6 | 4.24 × 10^−36 | 1.88 × 10^−116 | 4.65 × 10^−20 | 4.71 × 10^−103 | 1.20 × 10^−96 | 8.67 × 10^−62 | 3.25 × 10^3 | 0.00
worst | 9.13 × 10^3 | 1.97 × 10^−4 | 2.57 × 10^−2 | 3.01 × 10^−52 | 1.89 × 10^−13 | 2.23 × 10^−97 | 1.14 × 10^−79 | 4.28 × 10^−44 | 6.37 × 10^3 | 0.00
time | 1.04 × 10^−1 | 1.29 × 10^−1 | 1.10 × 10^−1 | 1.25 × 10^−1 | 1.56 × 10^−1 | 7.55 × 10^−1 | 2.01 × 10^−1 | 1.42 × 10^−1 | 1.39 × 10^−1 | 2.34 × 10^−1
conv | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00
F4 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 42.1 | 1.60 × 10^−7 | 1.88 × 10^−66 | 8.79 × 10^−84 | 9.56 × 10^−19 | 6.62 × 10^−56 | 9.49 × 10^−53 | 8.11 × 10^−79 | 9.62 | 0.00
std | 15.8 | 2.77 × 10^−6 | 2.10 × 10^−2 | 1.20 × 10^−53 | 5.60 × 10^−15 | 3.16 × 10^−48 | 1.18 × 10^−42 | 6.38 × 10^−77 | 38.9 | 0.00
avg | 68.4 | 1.57 × 10^−6 | 2.35 × 10^−2 | 2.19 × 10^−54 | 1.72 × 10^−15 | 5.85 × 10^−49 | 2.18 × 10^−43 | 3.75 × 10^−77 | 22.6 | 0.00
median | 64.8 | 5.28 × 10^−7 | 3.70 × 10^−2 | 1.26 × 10^−66 | 6.97 × 10^−17 | 1.27 × 10^−52 | 7.73 × 10^−50 | 1.43 × 10^−77 | 22.6 | 0.00
worst | 99.6 | 1.35 × 10^−5 | 4.90 × 10^−2 | 6.58 × 10^−53 | 2.82 × 10^−14 | 1.73 × 10^−47 | 6.47 × 10^−42 | 2.63 × 10^−76 | 29.1 | 0.00
time | 3.02 × 10^−2 | 5.37 × 10^−2 | 3.52 × 10^−2 | 4.89 × 10^−2 | 7.29 × 10^−2 | 6.79 × 10^−1 | 5.16 × 10^−2 | 6.85 × 10^−2 | 6.94 × 10^−2 | 8.77 × 10^−2
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00
F5 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 29.4 | 25.2 | 27.0 | 25.2 | 26.5 | 26.0 | 26.0 | 27.9 | 93.9 | 25.9
std | 1.64 × 10^4 | 8.67 × 10^−1 | 3.72 × 10^−1 | 2.55 × 10^−1 | 6.32 × 10^−1 | 8.77 × 10^−1 | 9.87 × 10^−1 | 3.26 × 10^−1 | 2.50 × 10^2 | 4.02 × 10^−1
avg | 3.22 × 10^3 | 26.9 | 28.4 | 25.8 | 27.8 | 27.8 | 27.7 | 28.5 | 3.22 × 10^2 | 26.6
median | 87.8 | 27.0 | 28.5 | 25.7 | 28.0 | 28.0 | 27.9 | 28.6 | 2.50 × 10^2 | 26.7
worst | 9.01 × 10^4 | 28.7 | 28.9 | 26.5 | 28.8 | 28.8 | 28.9 | 28.9 | 1.43 × 10^3 | 27.3
time | 3.88 × 10^−2 | 6.31 × 10^−2 | 4.49 × 10^−2 | 5.68 × 10^−2 | 8.44 × 10^−2 | 6.92 × 10^−1 | 6.87 × 10^−2 | 7.69 × 10^−2 | 7.71 × 10^−2 | 1.03 × 10^−1
conv | 0.00 | 1.00 | 1.00 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 0.00
F6 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 3.71 × 10^−4 | 2.58 × 10^−3 | 26.2 | 3.26 × 10^−6 | 17.5 | 4.98 × 10^−1 | 9.75 × 10^−1 | 14.1 | 4.54 × 10^−3 | 2.60 × 10^−4
std | 1.15 × 10^−2 | 3.44 × 10^−1 | 2.40 × 10^−1 | 4.48 × 10^−2 | 4.73 × 10^−1 | 5.88 × 10^−1 | 13.9 | 6.37 × 10^−1 | 2.25 × 10^−1 | 5.51 × 10^−3
avg | 1.01 × 10^−2 | 8.13 × 10^−1 | 32.4 | 9.38 × 10^−3 | 25.9 | 19.0 | 19.4 | 26.5 | 2.21 × 10^−1 | 4.29 × 10^−3
median | 5.72 × 10^−3 | 7.58 × 10^−1 | 32.8 | 1.67 × 10^−4 | 25.1 | 19.9 | 14.0 | 25.3 | 1.67 × 10^−1 | 2.23 × 10^−3
worst | 5.61 × 10^−2 | 12.7 | 36.9 | 2.46 × 10^−1 | 37.3 | 27.9 | 67.7 | 39.3 | 7.66 × 10^−1 | 2.84 × 10^−2
time | 3.00 × 10^−2 | 5.34 × 10^−2 | 3.56 × 10^−2 | 4.75 × 10^−2 | 7.32 × 10^−2 | 6.88 × 10^−1 | 5.06 × 10^−2 | 6.82 × 10^−2 | 6.99 × 10^−2 | 8.27 × 10^−2
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F7 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 2.21 × 10^−2 | 1.04 × 10^−3 | 1.43 × 10^−6 | 3.15 × 10^−5 | 7.11 × 10^−5 | 5.13 × 10^−6 | 6.53 × 10^−6 | 1.12 × 10^−5 | 6.22 × 10^−2 | 2.01 × 10^−6
std | 1.49 × 10^−2 | 7.05 × 10^−4 | 7.42 × 10^−5 | 1.02 × 10^−3 | 1.40 × 10^−3 | 1.74 × 10^−4 | 2.31 × 10^−4 | 1.18 × 10^−4 | 4.50 × 10^−2 | 1.54 × 10^−4
avg | 5.05 × 10^−2 | 2.05 × 10^−3 | 8.34 × 10^−5 | 1.14 × 10^−3 | 6.64 × 10^−4 | 1.26 × 10^−4 | 2.60 × 10^−4 | 1.52 × 10^−4 | 1.25 × 10^−1 | 1.00 × 10^−4
median | 5.14 × 10^−2 | 1.89 × 10^−3 | 7.73 × 10^−5 | 1.02 × 10^−3 | 3.81 × 10^−4 | 9.24 × 10^−5 | 1.86 × 10^−4 | 1.19 × 10^−4 | 1.12 × 10^−1 | 5.26 × 10^−5
worst | 8.32 × 10^−2 | 3.85 × 10^−3 | 2.97 × 10^−4 | 4.15 × 10^−3 | 7.90 × 10^−3 | 9.41 × 10^−4 | 7.57 × 10^−4 | 4.06 × 10^−4 | 2.59 × 10^−1 | 8.11 × 10^−4
time | 8.17 × 10^−2 | 1.06 × 10^−1 | 8.93 × 10^−2 | 1.00 × 10^−1 | 1.31 × 10^−1 | 7.35 × 10^−1 | 1.53 × 10^−1 | 1.19 × 10^−1 | 1.19 × 10^−1 | 1.85 × 10^−1
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F8 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | −1.01 × 10^4 | −7.61 × 10^3 | −6.02 × 10^3 | −1.22 × 10^4 | −6.43 × 10^3 | −8.59 × 10^3 | −1.15 × 10^4 | −4.08 × 10^3 | −1.24 × 10^4 | −1.18 × 10^4
std | 6.40 × 10^2 | 9.57 × 10^2 | 4.15 × 10^2 | 1.61 × 10^3 | 1.16 × 10^3 | 7.83 × 10^2 | 1.54 × 10^3 | 3.13 × 10^2 | 1.32 × 10^3 | 8.87 × 10^2
avg | −8.31 × 10^3 | −5.95 × 10^3 | −5.30 × 10^3 | −8.51 × 10^3 | −4.26 × 10^3 | −6.85 × 10^3 | −8.45 × 10^3 | −3.06 × 10^3 | −1.03 × 10^4 | −9.40 × 10^3
median | −8.34 × 10^3 | −6.05 × 10^3 | −5.32 × 10^3 | −8.40 × 10^3 | −4.06 × 10^3 | −6.84 × 10^3 | −8.31 × 10^3 | −3.03 × 10^3 | −1.06 × 10^4 | −9.25 × 10^3
worst | −7.28 × 10^3 | −3.15 × 10^3 | −4.53 × 10^3 | −5.95 × 10^3 | −2.76 × 10^3 | −5.42 × 10^3 | −4.40 × 10^3 | −2.50 × 10^3 | −7.77 × 10^3 | −7.80 × 10^3
time | 4.06 × 10^−2 | 6.64 × 10^−2 | 4.89 × 10^−2 | 6.28 × 10^−2 | 8.63 × 10^−2 | 6.95 × 10^−1 | 7.57 × 10^−2 | 7.94 × 10^−2 | 8.29 × 10^−2 | 1.06 × 10^−1
conv | 1.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 | 1.00
F9 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 18.9 | 5.68 × 10^−14 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 58.0 | 0.00
std | 16.7 | 51.1 | 0.00 | 9.08 × 10^−1 | 0.00 | 0.00 | 0.00 | 0.00 | 82.2 | 0.00
avg | 51.0 | 3.74 | 0.00 | 1.66 × 10^−1 | 0.00 | 0.00 | 0.00 | 0.00 | 72.1 | 0.00
median | 47.8 | 15.6 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 72.7 | 0.00
worst | 94.6 | 23.4 | 0.00 | 49.7 | 0.00 | 0.00 | 0.00 | 0.00 | 86.0 | 0.00
time | 3.95 × 10^−2 | 5.83 × 10^−2 | 3.83 × 10^−2 | 5.64 × 10^−2 | 7.65 × 10^−2 | 6.83 × 10^−1 | 5.83 × 10^−2 | 6.98 × 10^−2 | 7.96 × 10^−2 | 9.06 × 10^−2
conv | 1.00 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00
F10 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 9.29 × 10^−3 | 7.55 × 10^−14 | 8.88 × 10^−16 | 8.88 × 10^−16 | 4.44 × 10^−15 | 8.88 × 10^−16 | 8.88 × 10^−16 | 4.44 × 10^−15 | 18.0 | 8.88 × 10^−16
std | 7.31 × 10^−1 | 1.64 × 10^−14 | 0.00 | 9.01 × 10^−16 | 1.53 × 10^−15 | 0.00 | 0.00 | 0.00 | 9.97 × 10^−1 | 0.00
avg | 8.62 × 10^−1 | 1.03 × 10^−13 | 8.88 × 10^−16 | 1.13 × 10^−15 | 7.16 × 10^−15 | 8.88 × 10^−16 | 8.88 × 10^−16 | 4.44 × 10^−15 | 35.2 | 8.88 × 10^−16
median | 11.6 | 1.00 × 10^−13 | 8.88 × 10^−16 | 8.88 × 10^−16 | 7.99 × 10^−15 | 8.88 × 10^−16 | 8.88 × 10^−16 | 4.44 × 10^−15 | 33.3 | 8.88 × 10^−16
worst | 21.3 | 1.36 × 10^−13 | 8.88 × 10^−16 | 4.44 × 10^−15 | 7.99 × 10^−15 | 8.88 × 10^−16 | 8.88 × 10^−16 | 4.44 × 10^−15 | 57.3 | 8.88 × 10^−16
time | 3.88 × 10^−2 | 5.79 × 10^−2 | 3.97 × 10^−2 | 5.35 × 10^−2 | 7.69 × 10^−2 | 6.85 × 10^−1 | 5.74 × 10^−2 | 7.06 × 10^−2 | 7.97 × 10^−2 | 9.54 × 10^−2
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00
F11 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 8.32 × 10^−4 | 0.00 | 2.29 × 10^−2 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5.41 × 10^−2 | 0.00
std | 2.68 × 10^−2 | 1.59 × 10^−2 | 1.43 × 10^−1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.31 × 10^−1 | 0.00
avg | 3.15 × 10^−2 | 5.66 × 10^−3 | 1.98 × 10^−1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 2.37 × 10^−1 | 0.00
median | 2.37 × 10^−2 | 0.00 | 1.74 × 10^−1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 2.17 × 10^−1 | 0.00
worst | 1.11 × 10^−1 | 7.61 × 10^−2 | 6.26 × 10^−1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 6.38 × 10^−1 | 0.00
time | 4.51 × 10^−2 | 6.44 × 10^−2 | 4.72 × 10^−2 | 5.99 × 10^−2 | 8.48 × 10^−2 | 6.88 × 10^−1 | 7.33 × 10^−2 | 7.68 × 10^−2 | 8.47 × 10^−2 | 1.07 × 10^−1
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F12 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 1.07 × 10^−5 | 1.33 × 10^−2 | 4.20 × 10^−1 | 8.57 × 10^−8 | 6.91 × 10^−2 | 3.70 × 10^−2 | 2.55 × 10^−2 | 8.75 × 10^−2 | 1.49 × 10^−1 | 2.49 × 10^−5
std | 1.85 × 10^−1 | 2.44 × 10^−2 | 4.87 × 10^−2 | 1.62 × 10^−3 | 1.24 × 10^−1 | 4.30 × 10^−2 | 1.89 × 10^−1 | 9.00 × 10^−2 | 17.6 | 2.50 × 10^−4
avg | 1.53 × 10^−1 | 4.75 × 10^−2 | 5.25 × 10^−1 | 4.31 × 10^−4 | 2.29 × 10^−1 | 9.36 × 10^−2 | 1.15 × 10^−1 | 2.32 × 10^−1 | 25.6 | 2.15 × 10^−4
median | 1.04 × 10^−1 | 4.05 × 10^−2 | 5.35 × 10^−1 | 3.95 × 10^−6 | 2.11 × 10^−1 | 9.14 × 10^−2 | 4.44 × 10^−2 | 2.27 × 10^−1 | 19.5 | 1.12 × 10^−4
worst | 7.28 × 10^−1 | 1.01 × 10^−1 | 6.11 × 10^−1 | 6.86 × 10^−3 | 7.56 × 10^−1 | 2.24 × 10^−1 | 7.27 × 10^−1 | 4.05 × 10^−1 | 58.4 | 1.09 × 10^−3
time | 1.63 × 10^−1 | 1.87 × 10^−1 | 1.70 × 10^−1 | 1.85 × 10^−1 | 2.37 × 10^−1 | 8.16 × 10^−1 | 3.20 × 10^−1 | 2.00 × 10^−1 | 1.94 × 10^−1 | 3.39 × 10^−1
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F13 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 3.30 × 10^−4 | 1.02 × 10^−1 | 25.7 | 1.06 × 10^−5 | 12.3 | 16.2 | 5.38 × 10^−1 | 14.4 | 4.83 | 4.32 × 10^−5
std | 1.61 × 10^−1 | 3.35 × 10^−1 | 9.29 × 10^−2 | 3.97 × 10^−1 | 1.90 × 10^−1 | 3.12 × 10^−1 | 4.84 × 10^−1 | 6.18 × 10^−1 | 13.4 | 3.41 × 10^−4
avg | 1.25 × 10^−1 | 6.23 × 10^−1 | 28.3 | 4.31 × 10^−1 | 16.7 | 24.3 | 17.3 | 24.1 | 22.6 | 3.39 × 10^−4
median | 6.61 × 10^−2 | 6.31 × 10^−1 | 28.4 | 3.71 × 10^−1 | 16.7 | 25.0 | 17.3 | 28.5 | 20.2 | 2.32 × 10^−4
worst | 6.95 × 10^−1 | 13.1 | 30.0 | 16.4 | 20.9 | 28.0 | 29.9 | 30.5 | 63.2 | 1.45 × 10^−3
time | 1.66 × 10^−1 | 1.87 × 10^−1 | 1.65 × 10^−1 | 1.87 × 10^−1 | 2.34 × 10^−1 | 8.10 × 10^−1 | 3.19 × 10^−1 | 2.01 × 10^−1 | 1.94 × 10^−1 | 3.36 × 10^−1
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F14 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 9.98 × 10^−1 | 9.98 × 10^−1 | 1.99 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 10.0 | 9.98 × 10^−1 | 9.98 × 10^−1
std | 5.83 × 10^−17 | 32.2 | 36.5 | 9.23 × 10^−1 | 41.8 | 30.9 | 5.03 × 10^−1 | 15.4 | 0.00 | 8.25 × 10^−17
avg | 9.98 × 10^−1 | 3.06 | 10.1 | 13.9 | 4.07 | 3.36 | 11.3 | 29.4 | 9.98 × 10^−1 | 9.98 × 10^−1
median | 9.98 × 10^−1 | 2.49 | 12.7 | 9.98 × 10^−1 | 2.98 | 2.98 | 9.98 × 10^−1 | 29.8 | 9.98 × 10^−1 | 9.98 × 10^−1
worst | 9.98 × 10^−1 | 12.7 | 12.7 | 49.5 | 12.7 | 10.8 | 29.8 | 61.8 | 9.98 × 10^−1 | 9.98 × 10^−1
time | 2.48 × 10^−1 | 2.45 × 10^−1 | 2.47 × 10^−1 | 2.74 × 10^−1 | 2.73 × 10^−1 | 2.92 × 10^−1 | 5.02 × 10^−1 | 2.66 × 10^−1 | 2.88 × 10^−1 | 5.24 × 10^−1
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F15 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 3.11 × 10^−4 | 3.09 × 10^−4 | 3.50 × 10^−4 | 3.07 × 10^−4 | 3.08 × 10^−4 | 3.07 × 10^−4 | 3.07 × 10^−4 | 3.18 × 10^−4 | 4.40 × 10^−4 | 3.07 × 10^−4
std | 7.42 × 10^−3 | 8.54 × 10^−3 | 2.90 × 10^−2 | 4.20 × 10^−4 | 6.07 × 10^−3 | 3.19 × 10^−4 | 5.06 × 10^−3 | 2.10 × 10^−3 | 2.39 × 10^−4 | 7.71 × 10^−9
avg | 4.08 × 10^−3 | 5.14 × 10^−3 | 2.05 × 10^−2 | 9.91 × 10^−4 | 2.47 × 10^−3 | 4.40 × 10^−4 | 1.78 × 10^−3 | 9.26 × 10^−4 | 1.08 × 10^−3 | 3.07 × 10^−4
median | 7.30 × 10^−4 | 4.30 × 10^−4 | 1.01 × 10^−2 | 1.22 × 10^−3 | 4.71 × 10^−4 | 3.08 × 10^−4 | 3.07 × 10^−4 | 4.80 × 10^−4 | 1.22 × 10^−3 | 3.07 × 10^−4
worst | 2.04 × 10^−2 | 2.04 × 10^−2 | 1.01 × 10^−1 | 1.66 × 10^−3 | 2.04 × 10^−2 | 1.60 × 10^−3 | 2.04 × 10^−2 | 1.20 × 10^−2 | 1.23 × 10^−3 | 3.08 × 10^−4
time | 1.73 × 10^−2 | 2.08 × 10^−2 | 2.01 × 10^−2 | 4.61 × 10^−2 | 4.22 × 10^−2 | 1.05 × 10^−1 | 4.29 × 10^−2 | 3.84 × 10^−2 | 6.69 × 10^−2 | 8.48 × 10^−2
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F16 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03
std | 6.52 × 10^−16 | 1.59 × 10^−8 | 1.42 × 10^−7 | 5.68 × 10^−16 | 1.87 × 10^−7 | 1.15 × 10^−9 | 5.76 × 10^−16 | 1.40 × 10^−2 | 6.52 × 10^−16 | 6.32 × 10^−16
avg | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.02 | −1.03 | −1.03
median | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03
worst | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −9.86 × 10^−1 | −1.03 | −1.03
time | 1.75 × 10^−2 | 1.88 × 10^−2 | 1.85 × 10^−2 | 4.35 × 10^−2 | 3.81 × 10^−2 | 6.12 × 10^−2 | 3.83 × 10^−2 | 3.64 × 10^−2 | 6.54 × 10^−2 | 8.52 × 10^−2
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F17 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.99 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1
std | 0.00 | 3.47 × 10^−4 | 9.23 × 10^−3 | 0.00 | 2.00 × 10^−3 | 3.86 × 10^−8 | 1.95 × 10^−15 | 1.27 × 10^−1 | 0.00 | 0.00
avg | 3.98 × 10^−1 | 3.98 × 10^−1 | 4.09 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 4.48 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1
median | 3.98 × 10^−1 | 3.98 × 10^−1 | 4.06 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 4.01 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1
worst | 3.98 × 10^−1 | 4.00 × 10^−1 | 4.34 × 10^−1 | 3.98 × 10^−1 | 4.09 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 1.00 | 3.98 × 10^−1 | 3.98 × 10^−1
time | 1.29 × 10^−2 | 1.45 × 10^−2 | 1.53 × 10^−2 | 4.15 × 10^−2 | 3.47 × 10^−2 | 5.72 × 10^−2 | 3.44 × 10^−2 | 3.17 × 10^−2 | 6.36 × 10^−2 | 7.69 × 10^−2
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F18 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00
std | 14.8 | 4.10 × 10^−5 | 11.0 | 49.3 | 1.99 × 10^−6 | 1.38 × 10^−5 | 2.31 × 10^−15 | 16.0 | 1.06 × 10^−15 | 1.59 × 10^−15
avg | 57.0 | 3.00 | 8.40 | 3.90 | 3.00 | 3.00 | 3.00 | 39.7 | 3.00 | 3.00
median | 30.0 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 32.6 | 3.00 | 3.00
worst | 84.0 | 3.00 | 30.0 | 30.0 | 3.00 | 3.00 | 3.00 | 87.7 | 3.00 | 3.00
time | 1.25 × 10^−2 | 1.39 × 10^−2 | 1.36 × 10^−2 | 3.90 × 10^−2 | 3.37 × 10^−2 | 5.62 × 10^−2 | 3.37 × 10^−2 | 3.15 × 10^−2 | 6.33 × 10^−2 | 7.62 × 10^−2
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F19 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86
std | 2.60 × 10^−15 | 2.15 × 10^−3 | 3.80 × 10^−3 | 3.21 × 10^−3 | 3.92 × 10^−3 | 3.20 × 10^−3 | 2.40 × 10^−15 | 2.41 × 10^−1 | 2.71 × 10^−15 | 2.71 × 10^−15
avg | −3.86 | −3.86 | −3.85 | −3.86 | −3.86 | −3.86 | −3.86 | −3.56 | −3.86 | −3.86
median | −3.86 | −3.86 | −3.85 | −3.86 | −3.86 | −3.86 | −3.86 | −3.61 | −3.86 | −3.86
worst | −3.86 | −3.85 | −3.84 | −3.85 | −3.85 | −3.85 | −3.86 | −2.98 | −3.86 | −3.86
time | 1.99 × 10^−2 | 2.26 × 10^−2 | 2.22 × 10^−2 | 4.88 × 10^−2 | 4.33 × 10^−2 | 8.59 × 10^−2 | 4.90 × 10^−2 | 4.06 × 10^−2 | 7.21 × 10^−2 | 9.41 × 10^−2
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F20 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | −3.32 | −3.32 | −3.15 | −3.32 | −3.32 | −3.32 | −3.32 | −3.32 | −3.32 | −3.32
std | 6.54 × 10^−2 | 1.00 × 10^−1 | 8.73 × 10^−2 | 1.05 × 10^−1 | 9.09 × 10^−2 | 1.18 × 10^−1 | 6.03 × 10^−2 | 1.60 × 10^−1 | 1.36 × 10^−15 | 1.42 × 10^−15
avg | −3.28 | −3.23 | −3.04 | −3.24 | −3.17 | −3.24 | −3.29 | −3.22 | −3.32 | −3.32
median | −3.32 | −3.26 | −3.05 | −3.32 | −3.13 | −3.32 | −3.32 | −3.31 | −3.32 | −3.32
worst | −3.14 | −3.02 | −2.84 | −2.85 | −3.02 | −2.84 | −3.12 | −2.59 | −3.32 | −3.32
time | 2.30 × 10^−2 | 2.72 × 10^−2 | 2.46 × 10^−2 | 5.01 × 10^−2 | 5.01 × 10^−2 | 1.53 × 10^−1 | 5.11 × 10^−2 | 4.43 × 10^−2 | 7.41 × 10^−2 | 9.30 × 10^−2
conv | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F21 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | −10.2 | −10.2 | −5.61 | −10.2 | −10.2 | −10.2 | −10.2 | −5.05 | −10.1 | −10.2
std | 3.59 | 2.37 | 7.32 × 10^−1 | 2.69 | 2.91 | 2.21 | 2.39 × 10^−6 | 5.69 × 10^−1 | 2.23 | 5.96 × 10^−15
avg | −6.93 | −8.89 | −3.53 | −7.54 | −7.87 | −5.38 | −10.2 | −4.79 | −7.74 | −10.2
median | −10.2 | −10.2 | −3.48 | −7.62 | −10.1 | −5.06 | −10.2 | −5.05 | −8.88 | −10.2
worst | −2.63 | −2.63 | −2.24 | −2.63 | −2.63 | −8.82 × 10^−1 | −10.2 | −2.88 | −5.06 | −10.2
time | 2.64 × 10^−2 | 2.84 × 10^−2 | 2.77 × 10^−2 | 5.39 × 10^−2 | 4.98 × 10^−2 | 1.13 × 10^−1 | 5.94 × 10^−2 | 4.79 × 10^−2 | 7.76 × 10^−2 | 1.01 × 10^−1
conv | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F22 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | −10.4 | −10.4 | −10.1 | −10.4 | −10.4 | −10.4 | −10.4 | −5.09 | −10.4 | −10.4
std | 2.97 | 1.08 × 10^−3 | 1.92 | 2.68 | 1.34 | 2.65 | 6.17 × 10^−5 | 4.66 × 10^−1 | 2.39 | 9.33 × 10^−16
avg | −8.68 | −10.4 | −4.44 | −8.56 | −10.0 | −6.99 | −10.4 | −4.80 | −7.90 | −10.4
median | −10.4 | −10.4 | −4.27 | −10.4 | −10.4 | −5.09 | −10.4 | −5.05 | −9.36 | −10.4
worst | −2.77 | −10.4 | −1.25 | −2.77 | −5.09 | −3.72 | −10.4 | −3.19 | −5.09 | −10.4
time | 3.06 × 10^−2 | 3.30 × 10^−2 | 3.22 × 10^−2 | 5.77 × 10^−2 | 5.42 × 10^−2 | 1.17 × 10^−1 | 6.78 × 10^−2 | 5.03 × 10^−2 | 8.02 × 10^−2 | 1.11 × 10^−1
conv | 1.00 | 0.00 | 1.00 | 1.00 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F23 | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO | ED | CAED
min | −10.5 | −10.5 | −6.37 | −10.5 | −10.5 | −10.5 | −10.5 | −9.75 | −10.5 | −10.5
std | 3.56 | 9.79 × 10^−1 | 1.23 | 2.74 | 2.18 | 2.61 | 1.64 | 1.10 | 2.23 | 1.14 × 10^−15
avg | −8.26 | −10.4 | −3.74 | −8.94 | −9.82 | −6.67 | −10.1 | −4.85 | −9.00 | −10.5
median | −10.5 | −10.5 | −3.79 | −10.5 | −10.5 | −5.13 | −10.5 | −4.87 | −10.2 | −10.5
worst | −2.42 | −5.17 | −1.76 | −2.81 | −2.42 | −2.81 | −3.33 | −2.80 | −5.13 | −10.5
time | 3.65 × 10^−2 | 3.94 × 10^−2 | 3.87 × 10^−2 | 6.44 × 10^−2 | 6.10 × 10^−2 | 1.24 × 10^−1 | 7.64 × 10^−2 | 5.72 × 10^−2 | 8.81 × 10^−2 | 1.23 × 10^−1
conv | 1.00 | 0.00 | 1.00 | 1.00 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
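Each block of Table 2 reports the same summary statistics over repeated independent runs. A sketch of how such a row is computed from raw run results, using Python's statistics module (whether the paper uses sample or population standard deviation is not stated, so that choice is an assumption):

```python
import statistics

def summarize(run_values):
    # Summary statistics over independent runs, matching the row labels of Table 2
    return {
        "min": min(run_values),
        "std": statistics.stdev(run_values),   # sample std; use pstdev() for population
        "avg": statistics.mean(run_values),
        "median": statistics.median(run_values),
        "worst": max(run_values),
    }

stats = summarize([1.0, 2.0, 2.0, 3.0])
```

Reporting the median alongside the mean is useful for metaheuristics because a single divergent run can dominate the mean while leaving the median unchanged.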
Table 3. Comparison of optimization results for cantilever beam design.
Cantilever Beam | CAED | ED | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO
worst | 13.3605 | 13.3621 | 13.3616 | 13.3619 | 22.8580 | 13.3622 | 13.3695 | 13.3604 | 13.3606 | 14.7054
best | 13.3925 | 13.4365 | 13.3737 | 13.3661 | 90.0301 | 13.3892 | 13.4033 | 13.3812 | 14.7070 | 18.8989
std | 0.0098 | 0.0248 | 0.0045 | 0.0013 | 19.7027 | 0.0077 | 0.0119 | 0.0065 | 0.4254 | 1.4752
mean | 13.3712 | 13.3830 | 13.3670 | 13.3636 | 42.1413 | 13.3707 | 13.3817 | 13.3647 | 13.4963 | 16.6254
median | 13.3696 | 13.3717 | 13.3672 | 13.3635 | 36.4074 | 13.3686 | 13.3775 | 13.3616 | 13.3609 | 16.3014
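The cantilever beam problem behind Table 3 is a standard five-segment benchmark: minimize the beam weight f(x) = c(x1 + x2 + x3 + x4 + x5) subject to g(x) = 61/x1³ + 37/x2³ + 19/x3³ + 7/x4³ + 1/x5³ − 1 ≤ 0. A sketch of the textbook formulation follows; note that the textbook coefficient c = 0.0624 yields optima near 1.34, so the paper's reported values near 13.39 imply a different scaling, which is why c is left as a parameter here:

```python
def cantilever_objective(x, c=0.0624):
    # Beam weight: c times the sum of the five segment heights
    # (c = 0.0624 is the textbook value; the paper appears to use another scaling)
    return c * sum(x)

def cantilever_constraint(x):
    # g(x) <= 0 means the design is feasible
    a = (61, 37, 19, 7, 1)
    return sum(ai / xi**3 for ai, xi in zip(a, x)) - 1.0

# A near-optimal design reported in the literature (approximate)
x = (6.016, 5.309, 4.494, 3.502, 2.153)
```

Evaluating this design gives a weight of about 1.34 with the constraint active (g ≈ 0), matching the known optimum of the textbook instance.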
Table 4. Comparison of optimization results for three-bar truss design.
Three-Bar Truss | CAED | ED | PSO | GWO | AOA | DBO | GJO | SCSO | BKA | SABO
worst | 259.8050467 | 259.8050467 | 259.8050467 | 259.8050675 | 259.8500987 | 259.8050467 | 259.8050759 | 259.8050484 | 259.8050467 | 259.8243953
best | 259.805047 | 259.8050477 | 259.8050467 | 259.8062248 | 262.649983 | 259.8050467 | 259.8120313 | 259.8052105 | 259.8050467 | 260.3863756
std | 1.1171 × 10^−7 | 3.21343 × 10^−7 | 1.15786 × 10^−11 | 0.000340549 | 0.809067725 | 6.71779 × 10^−13 | 0.00247157 | 4.89464 × 10^−5 | 4.96273 × 10^−13 | 0.17902373
mean | 259.8050467 | 259.8050469 | 259.8050467 | 259.8053782 | 260.4395624 | 259.8050467 | 259.8076083 | 259.8050817 | 259.8050467 | 260.0190289
median | 259.8050467 | 259.8050469 | 259.8050467 | 259.8053139 | 260.2590527 | 259.8050467 | 259.8069449 | 259.8050596 | 259.8050467 | 259.9452844
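The three-bar truss problem behind Table 4 is classically stated as: minimize the structure weight (2√2·x1 + x2)·L subject to three stress constraints, with L = 100, P = 2, and σ = 2. A sketch of that textbook formulation follows; the textbook instance has its optimum near 263.9, so the paper's reported optimum of about 259.805 indicates slightly different constants, and this code should be read as the generic formulation rather than the paper's exact instance:

```python
import math

def truss_weight(x1, x2, L=100.0):
    # Structure weight of the classic three-bar truss (textbook constants)
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L

def truss_constraints(x1, x2, P=2.0, sigma=2.0):
    # Each g <= 0 means the corresponding bar stress is within the limit
    r2 = math.sqrt(2.0)
    d = r2 * x1 * x1 + 2.0 * x1 * x2
    return (
        (r2 * x1 + x2) / d * P - sigma,   # stress in bar 1
        x2 / d * P - sigma,               # stress in bar 2
        P / (x1 + r2 * x2) - sigma,       # stress in bar 3
    )

# A near-optimal design reported in the literature (approximate)
x1, x2 = 0.7887, 0.4082
```

At this design the first constraint is active (g1 ≈ 0) and the weight is about 263.9, which is the known optimum of the textbook instance.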