Article

Cauchy Operator Boosted Artificial Rabbits Optimization for Solving Power System Problems

by
Haval Tariq Sadeeq
Artificial Intelligence Department, Technical College of Duhok, Duhok Polytechnic University, Duhok 42001, Kurdistan Region, Iraq
Eng 2025, 6(8), 174; https://doi.org/10.3390/eng6080174
Submission received: 29 June 2025 / Revised: 21 July 2025 / Accepted: 22 July 2025 / Published: 1 August 2025
(This article belongs to the Section Electrical and Electronic Engineering)

Abstract

The majority of the challenges faced in power system engineering are presented as constrained optimization functions, which are frequently characterized by their complicated architectures. Metaheuristics are mathematical techniques used to solve complicated optimization problems. One such technique, Artificial Rabbits Optimization (ARO), has been designed to address global optimization challenges. However, ARO has limitations in terms of search functionality, restricting its efficiency in dealing with constrained optimization environments. To improve ARO’s compatibility with a variety of challenging problems, this work proposes implementing the Cauchy mutation operator into the position-updating procedure during the exploration stage. Furthermore, a novel multi-mode control parameter is developed to facilitate a smooth transition between exploration and exploitation phases. The enhancements may boost the performance and serve as an effective optimization tool for tackling complex engineering tasks. The improved version is known as Cauchy Artificial Rabbits Optimization (CARO). The proposed CARO’s performance is evaluated using eleven power system challenges as part of the CEC2020 competition’s test set of real-world constrained problems. The experimental results demonstrate the practical applicability of the proposed CARO in engineering applications and provide areas for future investigation.

1. Introduction

Enhancing power system efficiency corresponds with the objectives of sustainable development. The increasing implementation of renewable energy sources and the substantial integration of contemporary ICT technologies have led to significant transformations in power systems [1]. Demand response programs, which incentivize users to modify their consumption behaviors at peak periods when electricity is costly or sourced from non-renewable resources, are facilitated by the integration of smart grid technologies into power systems. These programs assist consumers in utilizing power more effectively and promote environmental sustainability [2]. Nonetheless, such advances could lead to increased difficulties in system management and operation. Effective optimization approaches are necessary to tackle various power optimization challenges.
Metaheuristic algorithms have garnered significant interest over the past decades and have become a prevalent method for tackling complex real-world optimization challenges [3]. Their principal benefit over conventional mathematical programming methods is in their capacity to solve optimization problems by exclusively evaluating fitness functions and their constraints without the need for gradient information. Additionally, the search agents in these algorithms display stochastic tendencies that give them the ability to do global searches [4].
Optimization problems that arise from practical applications and exhibit high complexity are referred to as real-world problems. It is worth noting that non-convex and non-linear constraints are common in real-world optimization challenges [5]. The constrained nature of these problems, coupled with the relatively smaller feasible regions within the decision variable space, can potentially diminish the efficiency of any optimizer. Furthermore, most optimization methods are primarily intended to deal with unconstrained optimization problems. Consequently, constraints should be handled using appropriate techniques to address the equality and inequality restrictions inherent in real-world problems [6].
Metaheuristics are problem-solving techniques that replicate various natural phenomena, including biological processes, physical laws, and chemical reactions. These algorithms are generally categorized into four primary groups depending on their source of inspiration: (1) Evolutionary-based algorithms, (2) Swarm intelligence-based algorithms, (3) Natural science-based algorithms, and (4) Human-based algorithms [7]. Evolutionary-based methodologies are rooted in the principles of natural evolution. These methods typically initiate the search process by generating an initial population through randomization, thereafter exposing this population to a multi-generational evolutionary advancement. The scientific literature provides numerous examples of procedures classified as evolutionary-based. Notably, the Genetic Algorithm (GA) [8] stands out as one of the most renowned algorithms in this category and draws its inspiration from Darwin’s theory of evolution. Furthermore, this category encompasses other techniques like Differential Evolution (DE) [9].
The second category of methodologies comprises swarm intelligence-based algorithms. This category of metaheuristics is highly regarded, as it mimics the cooperative, adaptive, cognitive, and collective social behaviors found in natural communities and groupings. These natural assemblages include a diverse range of organisms, such as schools of fish, flocks of birds, herds of mammals, colonies of insects like bees, and several other species. The domain of swarm intelligence algorithms is extensive and varied. Among them, the most renowned is Particle Swarm Optimization (PSO) [10], which models the flocking behavior of birds. There are also other notable algorithms in this category, including but not limited to the Nutcracker Optimizer (NO) [11], Giant Trevally Optimizer (GTO) [12], and Draco Lizard Optimizer (DLO) [13].
Alternatively, algorithms grounded in natural science replicate particular physical phenomena or chemical principles, such as electrical charges, gravity, and ion movement, among others. The Black Hole Algorithm (BHA) [14] is a prominent algorithm in this domain, drawing inspiration from the behavior of black holes in astrophysics, namely their ability to attract and assimilate matter. Some other notable methods falling into this group include Archimedes Optimization Algorithm (AOA) [15], Energy Valley Optimizer (EVO) [16], and several others.
The last classification of these algorithms belongs to human-based methods. These methods take inspiration from human cooperation and community behavior. Some of the most widely recognized algorithms within this category include the Brain Storm Optimization (BSO) [17], Group Teaching Optimization Algorithm (GTOA) [18], among others.
Given their ability to offer precise and flexible optimization solutions for various complex real-world problems, metaheuristic algorithms have been extensively employed in tackling engineering optimization challenges in various domains. These applications encompass a diverse spectrum of fields, including applied mechanics and engineering [19,20], parameter estimation of photovoltaic models [21,22], maximum power point tracking of generation systems [23,24], classification problems [25,26], power system stabilizer [27], image segmentation [28], constrained engineering optimization [29,30], feature selection [31,32], global optimization [33,34], machine learning model optimization [35], optimum design of truss structures [36], optimize the deployment of distribution-static VAR compensator [37], internet of things systems [38], intrusion detection system [39], optimal placement of unified power flow controller [40], vehicle routing [41], large-scale optimization [42].
Metaheuristic algorithms are typically structured around two fundamental phases: exploration (diversification) and exploitation (intensification). Achieving an optimal equilibrium between these phases is imperative for the efficiency and robustness of metaheuristic algorithms, thereby ensuring superior outcomes within specific applications [7]. The exploration phase is centered on the exploration of remote regions within the solution space to detect improved candidate solutions. Following the exploration phase, a thorough examination of the search space is essential. The method navigates feasible areas of the search space and seeks to identify the optimal solution through several local convergence strategies [43].
Studies have demonstrated the exceptional ability of specific metaheuristic algorithms to rapidly achieve high-quality solutions, efficiently solving complicated issues. These advanced algorithms aim to improve objective values and effectively determine ideal solutions through the use of search agents. During exploration, it is crucial that candidate solutions demonstrate a high success rate in identifying the ideal selection [44]. Notably, some of these candidates may prematurely converge to local optima, inaccurately assuming them to be the global optimum. Consequently, there is an imperative need for a comprehensive exploration of candidate solutions within the solution space to ensure the discovery of the true global optimum. The vital prerequisites for metaheuristic algorithms to effectively address engineering optimization problems encompass striking the right balance between diversification and intensification, as well as guarding against entrapment in local optima. This equilibrium facilitates a comprehensive exploration of the solution space and, in parallel, ensures the successful acquisition of the optimal global solution [45].
The Artificial Rabbits Optimization (ARO), presented by Wang et al. [46], is a recent development in the field. ARO draws influence from the survival tactics of wild rabbits, including their methods of detour foraging and random concealment. In ARO, detour foraging represents the exploration phase, whereas random hiding signifies the exploitation phase. Although ARO demonstrates strong exploitation skills, it struggles with an inconsistent exploration phase, which sometimes results in convergence to local optima, particularly in complicated and high-dimensional challenges [47].
In this study, ARO was selected as the base algorithm due to its promising characteristics in balancing exploration and exploitation phases through detour foraging and random hiding strategies. Its simplicity and flexibility make it a suitable candidate for adaptation to complex, constrained optimization problems such as power system challenges. Moreover, recent empirical studies [46] have demonstrated its competitive performance in engineering applications. However, ARO’s limitations in handling exploration under strict constraints motivated the incorporation of the Cauchy mutation operator in this work, aiming to further enhance its performance for power system problems.
The motivation behind this design choice is based on both theoretical foundations and empirical evidence from prior metaheuristic research. Specifically:
(1)
Heavy-Tailed Distribution Advantage: The Cauchy distribution is characterized by its heavy tails and undefined mean and variance. This property allows the mutation to produce occasional large jumps in the search space, which enhances global exploration and mitigates the risk of premature convergence to local optima, a common limitation observed in standard ARO, especially in constrained high-dimensional problems.
(2)
Enhanced Exploration Capability: By introducing the Cauchy mutation in the exploration phase, the algorithm gains an increased ability to escape local optima and conduct broader searches across the solution space, leading to a more thorough investigation of potential optimal regions.
(3)
Convergence Acceleration: While promoting exploration, the probabilistic nature of the Cauchy operator also permits fine-tuning around promising areas due to the frequent occurrence of smaller steps near the distribution center, thus supporting a more balanced transition between exploration and exploitation.
(4)
Supporting Empirical Studies: Prior studies, such as the work on Cauchy mutation-enhanced Harris Hawks Optimization [48], have empirically demonstrated the effectiveness of this approach in improving both convergence speed and solution accuracy in various engineering optimization problems. This empirical background provided a solid basis for adopting a similar strategy in enhancing ARO.
Motivated by the advantages of the Cauchy mutation operator and recognizing the limitations of ARO, a novel algorithm called Cauchy Artificial Rabbits Optimization (CARO) is proposed in this study. Eleven challenging power system problems from the CEC2020 constrained benchmark functions have been used to assess the applicability of CARO to real-world problems. To evaluate CARO’s performance, the experimental results were compared with various well-known, recently developed algorithms.
The key contributions of the paper can be summarized as follows:
(1)
A novel optimization framework has been proposed by enhancing the Artificial Rabbits Optimization (ARO) algorithm with a Cauchy mutation operator and a multi-mode energy shrink control parameter, providing an effective structure for solving complex constrained engineering problems.
(2)
The main improvement in comparison to the standard ARO focuses on enhancing the exploration phase and reducing the probability of becoming trapped in local optima. This enhancement is achieved by introducing the Cauchy mutation operator. Additionally, a novel multi-mode control parameter is proposed that allows for a seamless transition between exploration and exploitation, avoiding premature convergence and ultimately increasing the exploration and exploitation potential of the search space.
(3)
The proposed CARO is applied to eleven power system problems derived from the IEEE CEC2020 constrained engineering benchmark functions. The superiority of CARO is demonstrated using comprehensive evaluation criteria, namely the best, mean, worst, and standard deviation values, together with spider plots.
The remainder of this paper is organized as follows. Section 2 explains the Artificial Rabbits Optimization (ARO). Section 3 describes the Proposed CARO. Section 4 is about power system problems. Performance evaluation and discussion of results when the CARO algorithm is applied over power system challenges are presented in Section 5. The conclusions and future works are presented in Section 6.

2. Artificial Rabbits Optimization (ARO)

This algorithm is inspired by the survival strategies observed in rabbits within their natural habitat. These strategies encompass the detour foraging approach, where rabbits explore the surroundings of their nests in search of sustenance. In addition, rabbits employ a random hiding strategy, constructing burrows around their nests and intermittently seeking shelter within them to evade potential threats from hunters and predators. These tactical transitions are influenced by fluctuations in their energy levels [46].

2.1. Detour Foraging (Exploration)

In this step, each rabbit in the swarm typically updates its position by shifting towards a randomly chosen member of the group, while also introducing a perturbation. The mathematical representation of the detour foraging process is given by the following equation:
$Rabb_{new} = x_j + R \times \left(x_i - x_j\right) + \mathrm{round}\left(0.5 \times \left(0.05 + r_1\right)\right) \times n_1, \quad i, j = 1, \dots, n \ \text{and} \ i \neq j$ (1)
$R = L \times m$ (2)
$L = \left(e - e^{\left(\frac{t-1}{T}\right)^2}\right) \times \sin\left(2\pi r_2\right)$ (3)
$m(k) = \begin{cases} 1 & \text{if } k = g(l) \\ 0 & \text{else} \end{cases}, \quad k = 1, \dots, d \ \text{and} \ l = 1, \dots, \lceil r_3 \cdot d \rceil$ (4)
$g = \mathrm{randperm}(d)$ (5)
$n_1 \sim N(0, 1)$ (6)
where $Rabb_{new}$ represents the candidate position of the ith rabbit in the subsequent iteration t + 1. $X_i$ and $X_j$ denote the current positions of the ith and jth rabbits, respectively, during iteration t. n corresponds to the population size, t is the current iteration count, T is the maximum allowable number of iterations, and d represents the dimensionality of the specific problem. The notation ⌈·⌉ indicates the ceiling function, round(·) signifies rounding to the nearest integer, and randperm(·) returns a random permutation of the integers from 1 to d. $r_1$, $r_2$, and $r_3$ are random numbers drawn from [0, 1]. $n_1$ is a random number drawn from the standard normal distribution. L denotes the distance covered in each step of the exploration stage.
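For illustration, a minimal Python/NumPy sketch of the detour foraging step (Equations (1)–(6)) is given below. It is not the paper’s implementation (the experiments were carried out in MATLAB); the (n × d) population layout and the helper-free structure are assumptions made for readability.

```python
import numpy as np

def detour_foraging(pop, t, T, rng=None):
    """Illustrative sketch of the ARO detour foraging step (Eqs. (1)-(6))."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = pop.shape                                   # pop assumed to be an (n, d) array
    new_pop = np.empty_like(pop)
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])        # random partner with j != i
        r1, r2, r3 = rng.random(3)
        L = (np.e - np.exp(((t - 1) / T) ** 2)) * np.sin(2 * np.pi * r2)   # step length (Eq. 3)
        m = np.zeros(d)                                        # mapping vector (Eqs. 4-5)
        m[rng.permutation(d)[: int(np.ceil(r3 * d))]] = 1
        R = L * m                                              # running operator (Eq. 2)
        n1 = rng.standard_normal()                             # perturbation n1 ~ N(0, 1) (Eq. 6)
        new_pop[i] = pop[j] + R * (pop[i] - pop[j]) + np.round(0.5 * (0.05 + r1)) * n1   # Eq. (1)
    return new_pop
```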

2.2. Random Hiding (Exploitation)

In the context of evading predators, rabbits typically dig burrows near their nests for concealment. The equation related to this behavior is as follows:
$b_{i,j} = X_i + H \times g \times X_i, \quad i = 1, \dots, n \ \text{and} \ j = 1, \dots, d$ (7)
$H = \frac{T - t + 1}{T} \times r_4$ (8)
$n_2 \sim N(0, 1)$ (9)
$g(k) = \begin{cases} 1 & \text{if } k = j \\ 0 & \text{else} \end{cases}, \quad k = 1, \dots, d$ (10)
In their quest for survival, rabbits must find secure shelter to evade threats. Consequently, they randomly choose one of their burrows to hide in and avoid capture. This representation of the random hiding strategy is expressed below:
$Rabb_{new} = X_i + R \times \left(r_4 \times b_{i,r} - X_i\right), \quad i = 1, \dots, n$ (11)
$g_r(k) = \begin{cases} 1 & \text{if } k = \lceil r_5 \cdot d \rceil \\ 0 & \text{else} \end{cases}, \quad k = 1, \dots, d$ (12)
$b_{i,r} = X_i + H \times g_r \times X_i$ (13)
Here, $b_{i,r}$ denotes the randomly chosen burrow of the ith rabbit from a set of d burrows used for hiding in the current iteration t. Additionally, $r_4$ and $r_5$ are two random numbers generated from the interval [0, 1], and $n_2$ is a random number drawn from the standard normal distribution.
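Analogously, a hedged Python/NumPy sketch of the random hiding step (Equations (7)–(13)) is shown below; again, the (n × d) population layout is an assumption, and the running operator R is reused from Equation (2) as in the exploration step.

```python
import numpy as np

def random_hiding(pop, t, T, rng=None):
    """Illustrative sketch of the ARO random hiding step (Eqs. (7)-(13))."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = pop.shape                                   # pop assumed to be an (n, d) array
    new_pop = np.empty_like(pop)
    for i in range(n):
        r2, r3, r4, r5 = rng.random(4)
        L = (np.e - np.exp(((t - 1) / T) ** 2)) * np.sin(2 * np.pi * r2)   # step length (Eq. 3)
        m = np.zeros(d)
        m[rng.permutation(d)[: int(np.ceil(r3 * d))]] = 1
        R = L * m                                       # running operator reused from Eq. (2)
        H = (T - t + 1) / T * r4                        # hiding parameter (Eq. 8)
        g_r = np.zeros(d)
        g_r[int(np.ceil(r5 * d)) - 1] = 1               # select one burrow at random (Eq. 12)
        b_ir = pop[i] + H * g_r * pop[i]                # randomly chosen burrow (Eq. 13)
        new_pop[i] = pop[i] + R * (r4 * b_ir - pop[i])  # candidate position (Eq. 11)
    return new_pop
```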
Following either the first or the second step of the algorithm, the position of the ith rabbit is updated as follows:
$X_i(t+1) = \begin{cases} X_i & \text{if } f(X_i) \leq f(Rabb_{new}) \\ Rabb_{new} & \text{if } f(X_i) > f(Rabb_{new}) \end{cases}$ (14)
According to the above equation, the rabbit abandons its current location and moves to the candidate position generated by either Equation (1) or Equation (11) only if the fitness of the candidate position is better (i.e., lower for minimization) than the fitness of the present position.

2.3. Energy Shrink (Switch from Exploration to Exploitation)

An energy factor is employed to signify the transition from the exploration phase to the exploitation phase and can be expressed in the following manner:
$A(t) = 4\left(1 - \frac{t}{T}\right)\ln\frac{1}{r}$ (15)
where r is a random number generated from [0, 1]. The energy coefficient A takes values within the interval [0, 2]. If A > 1, the rabbit possesses ample energy for extensive exploration of the foraging areas of other individuals, leading to the detour foraging phase, which is characterized as exploration. Conversely, when A ≤ 1, the rabbit has limited energy for physical activities; consequently, it resorts to random hiding as a means of evading predators, and this marks the exploitation phase of the algorithm.
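The factor can be computed directly; a short Python/NumPy sketch of Equation (15) is given below for reference.

```python
import numpy as np

def energy_factor(t, T, rng=None):
    """Energy shrink factor A(t) of Eq. (15): A > 1 triggers detour foraging
    (exploration), A <= 1 triggers random hiding (exploitation)."""
    if rng is None:
        rng = np.random.default_rng()
    r = rng.random()
    return 4 * (1 - t / T) * np.log(1 / r)
```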

3. Proposed Cauchy Artificial Rabbits Optimization (CARO)

The optimal direction in the swarm is guided by the best agent, making a proper balance between diversification and intensification vital for search algorithms. These factors are key in the search for problem optima. Typically, in each iteration, the standard ARO employs a detour foraging strategy to move closer to the optima, facilitating the solution of complex real-world problems. However, there are instances where this movement is insufficient to prevent the swarm from becoming trapped in local optima. To address this issue, this paper proposes an enhanced version of the original ARO algorithm by incorporating a Cauchy mutation operator. The algorithm incorporates the Cauchy operator to facilitate exploration of the search space by moving towards the best solution. The Cauchy operator employed here utilizes Cauchy random numbers. These numbers have the potential to trigger significant jumps from the current best agent, facilitating sudden transitions from local optima to exploring the entire search space.
This mechanism aims to expedite convergence and prevent premature convergence. The use of Cauchy-distributed random numbers has been proven effective in producing high-quality solutions [48].

3.1. Cauchy Distribution

The Cauchy distribution is a probability distribution used in statistics and probability theory to describe random variables with heavy tails and undefined mean and variance. Because of its heavy tails, it assigns a higher probability to extreme values than many other distributions. It is characterized by its probability density function (PDF), which has a distinctive shape:
$f(x; x_0, \gamma) = \frac{1}{\pi\gamma\left[1 + \left(\frac{x - x_0}{\gamma}\right)^2\right]}$ (16)
where $x_0$ is the location parameter, which represents the center of the distribution, and $\gamma$ is the scale parameter, which determines the spread of the distribution.
The Cauchy distribution’s cumulative distribution function (CDF) is described as follows:
$F(x; x_0, \gamma) = \frac{1}{2} + \frac{1}{\pi}\arctan\left(\frac{x - x_0}{\gamma}\right)$ (17)

3.2. Proposed Cauchy Operator

The use of the Cauchy operator involves generating a random number y from the interval ( 0 ,   1 ) , and then generating another random number ℘ using the Cauchy distribution by inverting Equation (17), which can be expressed as:
$\wp = x_0 + \gamma \tan\left(\pi\left(y - 0.5\right)\right)$ (18)
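In practice, ℘ can be generated by inverse-transform sampling; the following Python/NumPy sketch illustrates Equation (18), with the defaults x0 = 0 and γ = 1 (the standard Cauchy distribution) taken as an assumption since the paper does not state the values used.

```python
import numpy as np

def cauchy_random(size=None, x0=0.0, gamma=1.0, rng=None):
    """Inverse-transform sampling of Eq. (18): draw y ~ U(0, 1) and map it
    through the inverted Cauchy CDF of Eq. (17)."""
    if rng is None:
        rng = np.random.default_rng()
    y = rng.random(size)
    return x0 + gamma * np.tan(np.pi * (y - 0.5))
```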
Following this, the mutation strategy is applied to each rabbit in the swarm as follows:
$Rabb_{new} = Rabb_{Best} \times R + \wp \times \left(X_{r_1}^{j} - X_{r_2}^{j}\right)$ (19)
where $Rabb_{Best}$ is the jth dimensional position of the optimal solution obtained so far, $\wp$ is a random number that obeys the Cauchy distribution, and $X_{r_1}^{j}$ and $X_{r_2}^{j}$ are the jth dimensional positions of two rabbits randomly selected from the population. Note that these two rabbits are not equal to the current optimal rabbit.
The inclusion of the term X r 1 j X r 2 j serves the purpose of enhancing population diversity and regulating the extent of the Cauchy mutation within the boundaries of the search domain. If this term remains small, the algorithm primarily exploits the local neighborhood of the current optimal solution. Conversely, when the term becomes large, the algorithm explores the entire search space. The value of this term is stochastic, but during the final iterative search mechanism, as the population converges, this value tends not to be excessively large. Consequently, the population’s condition does not become overly random.
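A hedged Python/NumPy sketch of the mutation of Equation (19) is given below; the (n × d) population layout and the standard Cauchy parameters are assumptions, and the check that the two selected rabbits differ from the current best is omitted for brevity.

```python
import numpy as np

def cauchy_mutation(best, pop, R, x0=0.0, gamma=1.0, rng=None):
    """Sketch of Eq. (19): the best rabbit, scaled by the running operator R,
    is perturbed by a Cauchy-weighted difference of two randomly chosen rabbits."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = pop.shape                                          # pop assumed (n, d), best assumed (d,)
    i1, i2 = rng.choice(n, size=2, replace=False)             # two distinct rabbits
    w = x0 + gamma * np.tan(np.pi * (rng.random(d) - 0.5))    # Cauchy random numbers (Eq. 18)
    return best * R + w * (pop[i1] - pop[i2])                 # candidate position (Eq. 19)
```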
During the progression of the mutation strategy into the jth dimension, if the fitness of R a b b n e w is inferior to that of R a b b B e s t , the update of R a b b n e w in the subsequent iteration continues. However, once the fitness of R a b b n e w surpasses that of R a b b B e s t , the R a b b B e s t is set to R a b b n e w , and the process proceeds to update R a b b n e w in the next iteration. This strategy, particularly in proximity to the optimal solution, empowers the algorithm with enhanced exploitation capabilities during the final stages. It also bolsters the algorithm’s capacity to evade local optima traps.
When comparing the advantages of the Cauchy mutation over the T-distribution mutation and Gaussian distribution, the Cauchy mutation stands out for its superior ability to explore extreme regions of the search space. Unlike the Gaussian distribution, which has finite tails, the Cauchy distribution assigns significant probabilities to extreme values, facilitating effective exploration of distant solutions. Additionally, the Cauchy mutation offers simplicity in implementation and tuning compared to the T-distribution mutation, which requires additional parameters such as degrees of freedom. Furthermore, the heavy tails of the Cauchy distribution provide robustness against outliers and noise, which may be detrimental to the performance of algorithms relying on Gaussian-based mutations. Overall, the Cauchy mutation presents a compelling choice for evolutionary computation algorithms seeking efficient exploration and robust optimization in complex and challenging problem domains. Figure 1 compares between Cauchy, T-distribution, and Gaussian distribution.

3.3. The Proposed Cauchy Artificial Rabbits’ Optimization Design (CARO)

In the standard ARO, each individual’s position is initially updated using Equations (1) and (11), followed by the application of a statistical approach between the current and previous positions. In this approach, the position with the better fitness value is selected. This process continues until the specified stopping criteria are met.
In the proposed CARO, to boost convergence and to enhance the overall algorithm performance as previously discussed, Equation (1) is replaced by Equation (19). Additionally, two rabbits are randomly selected, and the Cauchy operator described in Section 3.2 is applied. Furthermore, a multi-mode energy factor is proposed to switch effectively between exploration and exploitation, replacing Equation (15).
$\varepsilon(t) = a\sin(\omega t) + b\,\eta(t)$ (20)
where a is the amplitude of the sinusoidal oscillations and is fixed at 0.2, b is the noise scaling coefficient, also fixed at 0.2, and ω is the frequency of the oscillations, calculated as ω = 2π/T. The term η(t) is a Gaussian component sampled independently at each iteration t. The proposed parameter ε governs the energy dynamics during the energy shrink step of CARO, i.e., the transition from the exploration phase, where the algorithm searches broadly across the search space, to the exploitation phase, where it fine-tunes solutions around promising areas. The parameter balances deterministic and stochastic behaviors through its two components: the sinusoidal term a sin(ωt) introduces periodic oscillations that help avoid premature convergence and encourage exploration, while the noise term bη(t) introduces randomness that prevents the algorithm from getting trapped in local minima. Table 1 presents the descriptions and settings of the parameters used in the CARO algorithm.
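A short Python/NumPy sketch of Equation (20) with the settings described above is given below for reference.

```python
import numpy as np

def multi_mode_energy(t, T, a=0.2, b=0.2, rng=None):
    """Multi-mode energy shrink parameter of Eq. (20): a sinusoidal oscillation
    plus Gaussian noise, with a = b = 0.2 and omega = 2*pi/T as stated in the text."""
    if rng is None:
        rng = np.random.default_rng()
    omega = 2 * np.pi / T
    return a * np.sin(omega * t) + b * rng.standard_normal()
```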
Figure 2 shows the differences between the standard and the proposed multi-mode energy shrink control parameters. The sinusoidal term provides a regular oscillation that allows the algorithm to periodically shift between exploration and exploitation throughout the iterations, ensuring that the system does not remain in a purely explorative phase for too long, which could lead to overfitting. The added noise injects occasional randomness and diversity, helping the search escape local optima when necessary. The balance between these components provides a more precise and controlled transition through the optimization phases, making the algorithm more robust in complex problem-solving environments. In this manner, the search is carried out using Cauchy-ARO. The flowchart of the proposed CARO is depicted in Figure 3, and its pseudo-code is shown in Algorithm 1.
Algorithm 1. Pseudo-code of CARO
Start CARO
  Initialize the population of rabbits X_i (i = 1, 2, ..., n) randomly within the search bounds.
  For t = 1 : T
    Calculate the multi-mode energy shrink factor ε (Equation (20)).
    For each rabbit X_i
      if ε > 1 (exploration phase) then
        Perform the Cauchy mutation operator (Equation (19)).
        Update the position of the ith rabbit (Equation (14)).
      else (exploitation phase)
        Perform the random hiding strategy (Equation (11)).
        Update the position of the ith rabbit (Equation (14)).
      end if
      Evaluate the fitness.
      if fitness(X_i_new) < fitness(X_i) then
        X_i = X_i_new
      end if
    End For
  End For
  Output the best solution obtained by CARO.
End CARO.
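To tie the pieces together, the following compact Python/NumPy sketch follows Algorithm 1 for an unconstrained minimization objective. It is an illustrative re-implementation, not the paper’s MATLAB code: the parameter defaults, the bound handling by clipping, and the greedy best-solution update are assumptions made for readability.

```python
import numpy as np

def caro(obj, lb, ub, n=30, T=500, a=0.2, b=0.2, seed=None):
    """Compact sketch of the CARO loop of Algorithm 1 for minimizing obj over box bounds."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    d = lb.size
    pop = lb + rng.random((n, d)) * (ub - lb)                     # random initialization
    fit = np.array([obj(x) for x in pop])
    best, best_fit = pop[fit.argmin()].copy(), fit.min()
    omega = 2 * np.pi / T
    for t in range(1, T + 1):
        eps = a * np.sin(omega * t) + b * rng.standard_normal()   # multi-mode factor (Eq. 20)
        for i in range(n):
            r2, r3 = rng.random(2)
            L = (np.e - np.exp(((t - 1) / T) ** 2)) * np.sin(2 * np.pi * r2)   # Eq. (3)
            m = np.zeros(d)
            m[rng.permutation(d)[: int(np.ceil(r3 * d))]] = 1
            R = L * m                                              # running operator (Eq. 2)
            if eps > 1:                                # exploration: Cauchy mutation (Eq. 19)
                i1, i2 = rng.choice(n, size=2, replace=False)
                w = np.tan(np.pi * (rng.random(d) - 0.5))          # standard Cauchy numbers (Eq. 18)
                cand = best * R + w * (pop[i1] - pop[i2])
            else:                                      # exploitation: random hiding (Eq. 11)
                r4, r5 = rng.random(2)
                H = (T - t + 1) / T * r4                           # Eq. (8)
                g_r = np.zeros(d)
                g_r[int(np.ceil(r5 * d)) - 1] = 1                  # Eq. (12)
                b_ir = pop[i] + H * g_r * pop[i]                   # Eq. (13)
                cand = pop[i] + R * (r4 * b_ir - pop[i])
            cand = np.clip(cand, lb, ub)               # keep candidates inside the bounds
            cand_fit = obj(cand)
            if cand_fit < fit[i]:                      # greedy replacement (Eq. 14)
                pop[i], fit[i] = cand, cand_fit
                if cand_fit < best_fit:
                    best, best_fit = cand.copy(), cand_fit
    return best, best_fit
```

As a usage example, `caro(lambda x: float(np.sum(x ** 2)), lb=-5 * np.ones(10), ub=5 * np.ones(10))` minimizes a 10-dimensional sphere function.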

3.4. Computational Complexity Analysis

The overall computational complexity of the proposed CARO algorithm consists of four main components: problem definition O ( 1 ) , initialization O ( n ) , fitness evaluation O ( T n ) , and position updating O ( T n d ) , where n , T , and d represent the population size, number of iterations, and problem dimensions, respectively. In CARO, the inclusion of the Cauchy mutation operator during exploration and the multi-mode energy shrink mechanism introduces additional position updating steps. As a result, the computational complexity of CARO remains governed by the position updating and fitness evaluation phases, resulting in a total complexity of O ( T n d + T n + n ) .

4. Power System Problems

Eleven power system cases from the 2020 IEEE Congress on Evolutionary Computation (CEC) real-world constrained benchmark [19,49,50] have been chosen to demonstrate the effectiveness of the proposed CARO algorithm.
(1)
Case 1: Optimal Sizing of Single-Phase Distributed Generation with reactive power support for Phase Balancing at Main Transformer/Grid. Practical distribution systems frequently experience imbalances, which can result in the production of negative and zero sequence currents. Rotating equipment may operate inefficiently as a result of this imbalance, and neutral conductor losses may also occur. In a balanced system, the flow of neutral current is zero, and the conductor that serves as neutral is designed to carry a smaller current under specific conditions.
Aside from the issues related to unbalanced phase currents, another significant concern is the overloading of the main substation transformer. When the phases are unbalanced, the phase with the highest load determines the capacity of the substation transformer. Even if the other two phases are underloaded, the transformer cannot accommodate additional load.
In the current context, Distribution Generators (DGs) are typically suitable for single-phase generation, and therefore, no special arrangements are needed to transfer the load from one phase to another. This case can be mathematically represented as a constrained optimization problem as follows [50]:
$\begin{aligned}\text{Minimize } f ={}& \left(I_{r,1}^{a} + I_{r,1}^{b} + I_{r,1}^{c}\right)^{2} + \left(I_{m,1}^{a} + I_{m,1}^{b} + I_{m,1}^{c}\right)^{2}\\ &+ \left(I_{r,1}^{a} - 0.5\left(I_{r,1}^{b} + I_{r,1}^{c}\right) - 0.5\sqrt{3}\left(I_{m,1}^{b} - I_{m,1}^{c}\right)\right)^{2}\\ &+ \left(I_{m,1}^{a} - 0.5\left(I_{m,1}^{b} + I_{m,1}^{c}\right) - 0.5\sqrt{3}\left(I_{r,1}^{b} - I_{r,1}^{c}\right)\right)^{2}\end{aligned}$
where $I_{r,1}^{s} = \sum_{k \in \{a,b,c\}} \sum_{i=1}^{N} \left(G_{1,i}^{sk} V_{r,i}^{k} - B_{1,i}^{sk} V_{m,i}^{k}\right)$ and $I_{m,1}^{s} = \sum_{k \in \{a,b,c\}} \sum_{i=1}^{N} \left(B_{1,i}^{sk} V_{r,i}^{k} + G_{1,i}^{sk} V_{m,i}^{k}\right)$ for $s \in \{a, b, c\}$.
(2)
Case 2: Optimal sizing of distributed generation for active power loss minimization. The main goal of this case involves determining the appropriate capacity and configuration of DG units within an electrical distribution system to reduce active power losses. Active power losses occur when electricity is dissipated as heat as it flows through power lines and components, resulting in a decrease in the overall efficiency of the distribution network. Optimally sizing DG units aims to mitigate these losses by strategically placing and sizing generators. This challenge case can be expressed as follows:
$\text{Minimize } f = \sum_{i=1}^{N} P_i$
(3)
Case 3: Optimal sizing of distributed generation (DG) and capacitors for reactive power loss minimization. The loads in power system, such as transformers, have inductive characteristics that consume reactive power, leading to reduced system performance and increased losses. To address this issue, shunt capacitors (SC) are employed to supply reactive power, thereby improving the Volt-Ampere Reactive of the system. Furthermore, DGs represent an efficient means of reducing active power losses in the system. Integrating SCs with DGs can further contribute to minimizing the power losses. Consequently, this case can be formulated as a constrained optimization problem.
$\text{Minimize } f = 0.5\sum_{i=1}^{N} P_i + 0.5\sum_{i=1}^{N} Q_i$
(4)
Case 4: Optimal power flow (minimization of active power loss). This engineering case is the subject of ongoing research interest. The problem can be framed as a single-objective constrained optimization benchmark, where the goal is to optimize various factors such as transmission losses, fuel costs, voltage stability, emissions, and more, while adhering to the constraint requirements. In this context, one of the primary objectives is to minimize active power losses, making it an integral part of the optimization problem:
$\text{Minimize } f = \sum_{i=1}^{N} \left(P_{g,i} - P_{l,i}\right)$
(5)
Case 5: Optimal power flow (minimization of fuel cost). In this scenario, the objective is to minimize fuel costs, which is formulated as another objective function within the constrained optimization problem framework.
$\text{Minimize } f = \sum_{i=1}^{N} \left(a_i + b_i P_{g,i} + c_i P_{g,i}^{2}\right)$
where a i , b i , and c i are the cost coefficient of the ith bus generator.
(6)
Case 6: Optimal power flow (minimization of active power loss and fuel cost). Balancing the trade-off between loss reduction and fuel cost minimization is a key challenge in this engineering case, and efficient optimization techniques are required to solve these complex optimization problems, thereby improving the economic and operational performance of power systems.
$\text{Minimize } f = \lambda_{1} \sum_{i=1}^{N} \left(a_i + b_i P_{g,i} + c_i P_{g,i}^{2}\right) + \lambda_{2} \sum_{i=1}^{N} \left(P_{g,i} - P_{l,i}\right)$
where $a_i$, $b_i$, and $c_i$ are the cost coefficients of the ith bus generator, and $\lambda_{i}$ represents the weight factors assigned to the fuel cost and active power loss terms.
(7)
Case 7: Microgrid power flow (islanded case). A proper power flow tool is required during the operational analysis of this case. Droop controllers manage the sharing of active and reactive power among Distributed Generators (DGs) in Droop-Based Islanded Microgrids (DBIMGs). For different types of buses, traditional power flow procedures typically involve four unknown variables: voltage angle, voltage magnitude, reactive power, and active power. Traditional approaches are not ideal for handling the power flow problem here because the operating frequency must be treated as an additional variable in this engineering case. The equations regarding this challenging case, formulated as a constrained benchmark, are outlined below:
$\text{Minimize } f = \sum_{i=1}^{N} \left[P_i - V_{r,i}\sum_{j=1}^{N}\left(G_{ij}V_{r,j} + B_{ij}V_{m,j}\right) + V_{m,i}\sum_{j=1}^{N}\left(B_{ij}V_{r,j} + G_{ij}V_{m,j}\right)\right]^2 + \sum_{i=1}^{N} \left[Q_i - V_{r,i}\sum_{j=1}^{N}\left(G_{ij}V_{r,j} + B_{ij}V_{m,j}\right) - V_{m,i}\sum_{j=1}^{N}\left(B_{ij}V_{r,j} + G_{ij}V_{m,j}\right)\right]^2$
(8)
Case 8: Microgrid power flow (grid-connected case)
The Power Flow Problem (PFP) in grid-connected microgrids is a significant challenge in steady-state power systems analysis. A range of methods have been employed to tackle the PFP issue since the 1950s. These methods have primarily relied on numerical methods designed to solve sets of nonlinear simultaneous equations, including variants of Newton’s Numerical Techniques (NNTs).
However, in the context of grid-connected microgrids, a problem arises during the PFP solution process with NNTs. Specifically, the Jacobian Matrix approaches near singularity or becomes singular, rendering NNTs incapable of providing solutions in such cases. To overcome this challenge, the PFP can be reformulated as an alternative constrained optimization problem, as presented below:
$\text{Minimize } f = \sum_{i=1}^{N} \left[P_i - V_{r,i}\sum_{j=1}^{N}\left(G_{ij}V_{r,j} + B_{ij}V_{m,j}\right) + V_{m,i}\sum_{j=1}^{N}\left(B_{ij}V_{r,j} + G_{ij}V_{m,j}\right)\right]^2 + \sum_{i=1}^{N} \left[Q_i - V_{r,i}\sum_{j=1}^{N}\left(G_{ij}V_{r,j} + B_{ij}V_{m,j}\right) - V_{m,i}\sum_{j=1}^{N}\left(B_{ij}V_{r,j} + G_{ij}V_{m,j}\right)\right]^2$
(9)
Case 9: Optimal setting of droop controller for minimization of active power loss in islanded microgrids (IMG). In the context of IMG applications, DGs play a critical role in supplying local loads while ensuring that bus voltages and the system frequency remain within acceptable limits. Additionally, it is crucial to maintain the flow of current within specified bounds across the grid lines. In IMGs, various droop control schemes are employed to increase the ability of DGs to share power. It is essential that these schemes are not only stable but also optimized for performance. To reduce active losses in IMGs, the droop settings need to be adjusted. This particular challenge can be framed as a complex constrained optimization problem.
$\text{Minimize } f = \sum_{i=1}^{N} P_i$
(10)
Case 10: Optimal setting of droop controller for minimization of reactive power loss in islanded microgrids. The optimal setting of a droop controller in islanded microgrids plays a critical role in minimizing reactive power losses and ensuring efficient operation. To reduce reactive power losses, the droop controller parameters, such as the droop slope and reference voltage/frequency, must be carefully adjusted. This case can be formulated:
$\text{Minimize } f = \sum_{i=1}^{N} Q_i$
(11)
Case 11: Wind farm layout problem (WFLP). WFLP is a complex optimization challenge. It involves determining the optimal arrangement of wind turbines within a designated area to maximize energy production while considering various constraints and objectives. The primary aim of the WFLP is to design an efficient layout that maximizes the wind farm’s energy output while minimizing costs.
$\text{Minimize } f = -\sum_{i=1}^{N} EP_i$

5. Experimental Analysis and Results

In the subsequent subsections, a number of numerical experiments are used to assess CARO and its competitor algorithms. CARO’s performance is evaluated against several recent, well-known metaheuristics. Well-studied constrained power system engineering problems are chosen to confirm the efficiency of the CARO optimizer in solving complicated constrained engineering optimization problems in real-world applications. Below is a comprehensive description of the experimental analysis and findings.

5.1. Experimental Setup

To evaluate the effectiveness of the proposed approach, a comparison is made with the following algorithms: (1) standard Artificial Rabbits Optimization (ARO); (2) Artificial Hummingbird Algorithm (AHA) [51]; (3) Beluga Whale Optimization (BWO) [52]; (4) Dwarf Mongoose Optimization Algorithm (DMO) [53]; (5) Dandelion Optimizer (DO) [54]; (6) Golden Jackal Optimization (GJO) [55]; (7) Honey Badger Algorithm (HBA) [56]; (8) Sand Cat Swarm Optimization (SCSO) [57]; (9) Snake Optimizer (SO) [58]; (10) Marine Predators Algorithm (MPA) [59]; (11) Autonomous Groups Particle Swarm Optimization (AGPSO) [60]; (12) IMODE [61] (CEC2020 winner); and (13) LSHADE_SPACMA [62] (CEC2017 winner). The experimental parameters were defined in accordance with the CEC experimental standards, with the termination condition based on the maximum number of function evaluations [63]. Specifically, the maximum number of evaluations was set to dim × 10,000.
These experiments were conducted within a MATLAB R2022b environment, utilizing a computer with a 2.00 GHz Intel Core i7 processor and 8.00 GB of RAM. As presented in Table 2, the selected benchmark problems encompass a wide range of complexities, reflected by the varying numbers of decision variables and constraints. The number of decision variables ranges from 30 to 158, while the equality and inequality constraints vary between 0 and 148. Additionally, the optimal known objective function values differ significantly across the cases, indicating the diverse nature of these power system challenges. This diversity ensures a comprehensive evaluation of the proposed CARO algorithm, testing its robustness and adaptability in solving both small-scale and large-scale constrained optimization problems within the power system domain.
In this context, D signifies the total count of design variables, while h and g represent the quantities of equality and inequality constraints, respectively. Furthermore, f ( x ) represents the most optimal known cost function value.

5.2. CARO for Power System Problems

In real-world constrained optimization problems, it is essential to acknowledge that constraints pose challenges to algorithms in both the objective functions and the constraint conditions. In the real-world engineering benchmark, constraints can be handled through penalty terms, which linearly combine the objective function and the nonlinear constraint violations. This weighting method provides greater flexibility and convenience in dealing with optimization problems. Typically, a constrained minimization engineering problem can be formally defined as follows:
$\text{Minimize } f(z), \quad z = \left(z_1, z_2, \dots, z_n\right)$
$\text{subject to} \quad h_j(z) = 0, \; j = 1, 2, \dots, l; \qquad g_i(z) \leq 0, \; i = 1, 2, \dots, m$
Here, l represents the number of equality constraints, m represents the number of inequality constraints, and z denotes a candidate solution. In the case involving boundary constraints, each variable $z_k$ has both upper and lower bounds:
$low_k \leq z_k \leq up_k, \quad k = 1, 2, \dots, n$
where $low_k$ and $up_k$ denote the lower and upper limits of the variable $z_k$, and n indicates the number of variables.
The constrained engineering problem is then represented by linearly weighting the objective function and the nonlinear constraint violations as follows:
$F(z) = f(z) + \Psi \sum_{i=1}^{m} \max\left(g_i(z), 0\right) + \Phi \sum_{j=1}^{l} \left|h_j(z)\right|$
Here, Ψ represents the penalty weight assigned to the inequality constraints, and Φ denotes the penalty weight for the equality constraints. In this study, significantly high values are assigned to these weights so that any solution violating the constraints incurs a substantial penalty. This approach deliberately discourages the algorithm from generating infeasible solutions throughout the optimization process.
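As an illustration of this penalty scheme, the following Python sketch builds a penalized objective from user-supplied constraint functions; the weight values and the helper name penalized_objective are illustrative assumptions rather than the paper’s implementation.

```python
def penalized_objective(f, ineq=(), eq=(), psi=1e6, phi=1e6):
    """Penalty formulation sketched above: the raw objective f(z) is augmented with
    weighted inequality violations max(g_i(z), 0) and equality violations |h_j(z)|.
    psi and phi are illustrative large weights; the paper only states that the
    weights are set significantly high."""
    def F(z):
        g_viol = sum(max(g(z), 0.0) for g in ineq)
        h_viol = sum(abs(h(z)) for h in eq)
        return f(z) + psi * g_viol + phi * h_viol
    return F

# Example: minimize f(z) = z1^2 + z2^2 subject to z1 + z2 - 1 = 0 and z1 - 0.8 <= 0.
F = penalized_objective(lambda z: z[0] ** 2 + z[1] ** 2,
                        ineq=[lambda z: z[0] - 0.8],
                        eq=[lambda z: z[0] + z[1] - 1.0])
```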

5.3. Statistical Results Analysis

Table 3 compares CARO’s statistical findings to the aforementioned algorithms throughout the CEC2020 benchmarks. The data in bold represents the best results among all algorithms. The outcomes presented in Table 3, when considering the mean solution, indicate that CARO has outperformed other metaheuristics in nine out of the eleven functions: case 1, case 2, case 4, case 5, case 6, case 7, case 8, case 9, and case 11. Additionally, CARO has achieved the best optimization outcomes for eight of the tested functions, specifically for case 1, case 4, case 5, case 6, case 7, case 8, case 9, and case 11. Nevertheless, for two functions, namely case 3 and case 10, ARO and BWO have obtained the best mean solution, respectively. This suggests that CARO surpasses ARO and BWO in the remaining functions.
In the latter section of Table 3, standard deviation values for all algorithms regarding the CEC2020 functions can be observed. From these data, it can be concluded that the proposed CARO demonstrates superior performance compared to the other algorithms, ranking first, followed by AGPSO in second place and DO in third. In addition, to provide a brief summary of CARO’s performance, Figure 4 displays a spider plot based on the mean value of each algorithm across all cases. Based on these findings, CARO exhibits remarkable performance. Furthermore, Figure 5 depicts the convergence curves of the comparative algorithms.

6. Conclusions

This study introduces the Cauchy mutation operator to enhance the ARO algorithm, resulting in the proposed CARO algorithm. CARO was extensively evaluated against many contemporary metaheuristics, including ARO. Furthermore, CARO was applied to tackle eleven real-world power system problems from the IEEE CEC2020 test suite. The comprehensive experimental results and comparative analyses indicate that CARO is highly efficient in addressing practical engineering problems and holds significant practical utility. Therefore, the incorporation of the Cauchy mutation operator mechanism and the proposed multi-mode control parameter accelerates algorithm convergence, broadens the exploration of valuable solution regions, and excels in resolving complex optimization problems.
In future research, there are several promising areas for exploration. The single-objective optimization framework developed here can be adapted into a binary version to handle complex discrete problems, such as COVID-19 modeling. Additionally, CARO can be extended to address more challenging real-world problems and diverse applications, including PV parameter estimation, industrial applications, path planning, image segmentation, resource management, Internet of Things systems, electronic circuits, task scheduling, smart home systems, and various other domains.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CARO	Cauchy Artificial Rabbits Optimization
ARO	Artificial Rabbits Optimization
AHA	Artificial Hummingbird Algorithm
BWO	Beluga Whale Optimization
DMO	Dwarf Mongoose Optimization
DO	Dandelion Optimizer
GJO	Golden Jackal Optimization
HBA	Honey Badger Algorithm
SCSO	Sand Cat Swarm Optimization
SO	Snake Optimizer
MPA	Marine Predators Algorithm
AGPSO	Autonomous Groups Particle Swarm Optimization
IMODE	Improved Multi-Operator Differential Evolution
LSHADE-SPACMA	LSHADE with Semi-Parameter Adaptation Hybridized with CMA-ES
LS_SP	LSHADE-SPACMA (short form)
CEC2020	2020 IEEE Congress on Evolutionary Computation (CEC)
DG	Distributed Generation
SC	Shunt Capacitors
DBIMGs	Droop-Based Islanded Microgrids
PFP	Power Flow Problem
NNTs	Newton’s Numerical Techniques
IMG	Islanded Microgrids
WFLP	Wind Farm Layout Problem

References

  1. Dutta, R.; Das, S.; De, S. Multi Criteria Decision Making with Machine-Learning Based Load Forecasting Methods for Techno-Economic and Environmentally Sustainable Distributed Hybrid Energy Solution. Energy Convers. Manag. 2023, 291, 117316. [Google Scholar] [CrossRef]
  2. Byles, D.; Mohagheghi, S. Sustainable Power Grid Expansion: Life Cycle Assessment, Modeling Approaches, Challenges, and Opportunities. Sustainability 2023, 15, 8788. [Google Scholar] [CrossRef]
  3. Sadeeq, H.T.; Abdulazeez, A.M. Car Side Impact Design Optimization Problem Using Giant Trevally Optimizer. Structures 2023, 55, 39–45. [Google Scholar] [CrossRef]
  4. Sadeeq, H.T.; Abdulazeez, A.M. Improved Northern Goshawk Optimization Algorithm for Global Optimization. In Proceedings of the 2022 4th International Conference on Advanced Science and Engineering (ICOASE), Zakho, Iraq, 21–22 September 2022; pp. 89–94. [Google Scholar]
  5. Das, S.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems. Technical Report. 2011, pp. 341–359. Available online: https://al-roomi.org/multimedia/CEC_Database/CEC2011/CEC2011_TechnicalReport.pdf (accessed on 21 July 2025).
  6. Lagaros, N.D.; Kournoutos, M.; Kallioras, N.A.; Nordas, A.N. Constraint Handling Techniques for Metaheuristics: A State-of-the-Art Review and New Variants. Optim. Eng. 2023, 24, 2251–2298. [Google Scholar] [CrossRef]
  7. Sadeeq, H.T.; Abdulazeez, A.M. Metaheuristics: A Review of Algorithms. Int. J. Online Biomed. Eng. 2023, 19, 142–164. [Google Scholar] [CrossRef]
  8. Koza, J.R. Genetic Programming, on the Programming of Computers by Means of Natural Selection; A Bradford Book; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  9. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  10. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  11. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Nutcracker Optimizer: A Novel Nature-Inspired Metaheuristic Algorithm for Global Optimization and Engineering Design Problems. Knowl.-Based Syst. 2023, 262, 110248. [Google Scholar] [CrossRef]
  12. Sadeeq, H.T.; Abdulazeez, A.M. Giant Trevally Optimizer (GTO): A Novel Metaheuristic Algorithm for Global Optimization and Challenging Engineering Problems. IEEE Access 2022, 10, 121615–121640. [Google Scholar] [CrossRef]
  13. Wang, X. Draco Lizard Optimizer: A Novel Metaheuristic Algorithm for Global Optimization Problems. Evol. Intell. 2024, 18, 10. [Google Scholar] [CrossRef]
  14. Hatamlou, A. Black Hole: A New Heuristic Optimization Approach for Data Clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  15. Hashim, F.A.; Hussain, K.; Houssein, E.; Mabrouk, M.; Al-Atabany, W. Archimedes Optimization Algorithm: A New Metaheuristic Algorithm for Solving Optimization Problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  16. Azizi, M.; Aickelin, U.; A Khorshidi, H.; Baghalzadeh Shishehgarkhaneh, M. Energy Valley Optimizer: A Novel Metaheuristic Algorithm for Global and Engineering Optimization. Sci. Rep. 2023, 13, 226. [Google Scholar] [CrossRef] [PubMed]
  17. Shi, Y. Brain Storm Optimization Algorithm. In Advances in Swarm Intelligence; Lecture Notes in Computer Science; International Conference in Swarm Intelligence; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6728 LNCS, pp. 303–309. [Google Scholar] [CrossRef]
  18. Zhang, Y.; Jin, Z. Group Teaching Optimization Algorithm: A Novel Metaheuristic Method for Solving Global Optimization Problems. Expert Syst. Appl. 2020, 148, 113246. [Google Scholar] [CrossRef]
  19. Hu, G.; Guo, Y.; Zhong, J.; Wei, G. IYDSE: Ameliorated Young’s Double-Slit Experiment Optimizer for Applied Mechanics and Engineering. Comput. Methods Appl. Mech. Eng. 2023, 412, 116062. [Google Scholar] [CrossRef]
  20. Duzgun, E.; Acar, E.; Yildiz, A.R. A Novel Chaotic Artificial Rabbits Algorithm for Optimization of Constrained Engineering Problems. Mater. Test. 2024, 66, 1449–1462. [Google Scholar] [CrossRef]
  21. Zaid, S.A.; Bakeer, A.; Albalawi, H.; Alatwi, A.M.; AbdelMeguid, H.; Kassem, A.M. Optimal Fractional-Order Controller for the Voltage Stability of a DC Microgrid Feeding an Electric Vehicle Charging Station. Fractal Fract. 2023, 7, 677. [Google Scholar] [CrossRef]
  22. Albalawi, H.; Zaid, S.A.; Alatwi, A.M.; Moustafa, M.A. Application of an Optimal Fractional-Order Controller for a Standalone (Wind/Photovoltaic) Microgrid Utilizing Hybrid Storage (Battery/Ultracapacitor) System. Fractal Fract. 2024, 8, 629. [Google Scholar] [CrossRef]
  23. Ravi, S.; Premkumar, M.; Abualigah, L. Comparative Analysis of Recent Metaheuristic Algorithms for Maximum Power Point Tracking of Solar Photovoltaic Systems under Partial Shading Conditions. Int. J. Appl. Power Eng. 2023, 12, 196–217. [Google Scholar] [CrossRef]
  24. Pervez, I.; Pervez, A.; Tariq, M.; Sarwar, A.; Chakrabortty, R.K.; Ryan, M.J. Rapid and Robust Adaptive Jaya (Ajaya) Based Maximum Power Point Tracking of a PV-Based Generation System. IEEE Access 2021, 9, 48679–48703. [Google Scholar] [CrossRef]
  25. Amine Tahiri, M.; Zohra El hlouli, F.; Bencherqui, A.; Karmouni, H.; Amakdouf, H.; Sayyouri, M.; Qjidaa, H. White Blood Cell Automatic Classification Using Deep Learning and Optimized Quaternion Hybrid Moments. Biomed. Signal Process. Control 2023, 86, 105128. [Google Scholar] [CrossRef]
  26. Saranya, R.; Jaichandran, R. A Dense Kernel Point Convolutional Neural Network for Chronic Liver Disease Classification with Hybrid Chaotic Slime Mould and Giant Trevally Optimizer. Biomed. Signal Process. Control 2025, 102, 107219. [Google Scholar] [CrossRef]
  27. Aribowo, W. A Novel Improved Sea-Horse Optimizer for Tuning Parameter Power System Stabilizer. J. Robot. Control 2023, 4, 12–22. [Google Scholar] [CrossRef]
  28. Emam, M.M.; Houssein, E.H.; Ghoniem, R.M. A Modified Reptile Search Algorithm for Global Optimization and Image Segmentation: Case Study Brain MRI Images. Comput. Biol. Med. 2023, 152, 106404. [Google Scholar] [CrossRef] [PubMed]
  29. Wang, Y.; Huang, L.; Zhong, J.; Hu, G. LARO: Opposition-Based Learning Boosted Artificial Rabbits-Inspired Optimization Algorithm with Lévy Flight. Symmetry 2022, 14, 2282. [Google Scholar] [CrossRef]
  30. Sadeeq, H.T.; Abrahim, A.; Hameed, T.; Kako, N.; Mohammed, R.; Ahmed, D. An Improved Pelican Optimization Algorithm for Function Optimization and Constrained Engineering Design Problems. Decis. Sci. Lett. 2025, 14, 623–640. [Google Scholar] [CrossRef]
  31. Tiwari, A.; Chaturvedi, A. A Hybrid Feature Selection Approach Based on Information Theory and Dynamic Butterfly Optimization Algorithm for Data Classification. Expert Syst. Appl. 2022, 196, 116621. [Google Scholar] [CrossRef]
  32. Houssein, E.H.; Oliva, D.; Samee, N.A.; Mahmoud, N.F.; Emam, M.M. Liver Cancer Algorithm: A Novel Bio-Inspired Optimizer. Comput. Biol. Med. 2023, 165, 107389. [Google Scholar] [CrossRef] [PubMed]
  33. Deng, L.; Shu, T.; Xia, J. Multi-Strategy Improved Artificial Rabbit Algorithm for QoS-Aware Service Composition in Cloud Manufacturing. Algorithms 2025, 18, 107. [Google Scholar] [CrossRef]
Figure 1. Comparison between the Cauchy, Student's t, and Gaussian distributions.
Figure 2. Energy shrink parameter: (a) behavior of the standard energy shrink parameter over 1000 iterations; (b) behavior of the proposed multi-mode energy shrink parameter over 1000 iterations.
Figure 3. Flowchart of the proposed CARO.
Figure 4. Spider plot based on the mean values of the competing algorithms across all cases.
Figure 5. Convergence curves for the eleven CEC2020 cases.
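Figure 1 contrasts the tail behavior of the three candidate mutation distributions; the heavy tails of the Cauchy density are what allow the Cauchy mutation to generate occasional long jumps away from the current position during exploration. The following minimal Python sketch reproduces this kind of comparison; it is an illustration only and not the code used to produce Figure 1, and the degrees of freedom chosen for the t-distribution are an assumption, since the figure does not state them.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import cauchy, norm, t

# Evaluate the three probability density functions on a common grid.
x = np.linspace(-6.0, 6.0, 1000)

plt.plot(x, cauchy.pdf(x), label="Cauchy(0, 1)")           # heaviest tails
plt.plot(x, t.pdf(x, df=3), label="Student's t (df = 3)")  # intermediate tails (df assumed)
plt.plot(x, norm.pdf(x), label="Gaussian(0, 1)")           # lightest tails
plt.xlabel("x")
plt.ylabel("Probability density")
plt.legend()
plt.show()
```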
Table 1. Parameter Descriptions and Configurations in the CARO Algorithm.

Parameter | Description | Value/Range
n | Number of rabbits (population size) | 30–100
t | Iteration counter | Current iteration
T | Maximum number of iterations | 1000–3000
d | Number of dimensions | Problem-dependent
r4, r5 | Random numbers | [0, 1]
℘ | Random number | Drawn from the Cauchy distribution
a | Amplitude of the sinusoidal oscillations | 0.2
b | Scaling coefficient | 0.2
ω | Frequency of the oscillations | ω = 2π/T
Ɛ | Energy shrink control parameter | 2 → 0.2
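To make the roles of ℘, a, ω, and Ɛ from Table 1 concrete, the sketch below shows one way these quantities could be generated at each iteration: the Cauchy factor via standard inverse-transform sampling, and the energy shrink parameter as a 2 → 0.2 decay modulated by a sinusoid of amplitude a = 0.2 and frequency ω = 2π/T. The decay-plus-sinusoid form is an assumption made here for illustration (cf. Figure 2b); it is not necessarily the exact multi-mode expression used in CARO.

```python
import numpy as np

def cauchy_factor(rng: np.random.Generator) -> float:
    """Standard Cauchy(0, 1) sample for the mutation factor (inverse-transform sampling)."""
    u = rng.random()                         # u ~ U(0, 1)
    return float(np.tan(np.pi * (u - 0.5)))  # heavy-tailed random step

def energy_shrink(t: int, T: int, a: float = 0.2,
                  eps_max: float = 2.0, eps_min: float = 0.2) -> float:
    """Hypothetical multi-mode energy shrink parameter: a linear decay from
    eps_max to eps_min modulated by a sinusoid of amplitude `a` and frequency
    omega = 2*pi/T (parameter values taken from Table 1). Illustrative only,
    not the paper's exact formula."""
    omega = 2.0 * np.pi / T
    linear = eps_max - (eps_max - eps_min) * t / T
    return linear + a * np.sin(omega * t)

# Example: evaluate the schedule over T = 1000 iterations (cf. Figure 2b).
rng = np.random.default_rng(seed=42)
T = 1000
schedule = [energy_shrink(t, T) for t in range(1, T + 1)]
print(f"start = {schedule[0]:.3f}, end = {schedule[-1]:.3f}, "
      f"sample Cauchy factor = {cauchy_factor(rng):.3f}")
```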
Table 2. A summary of the power system benchmarks derived from CEC2020.

Case | D | h | g | f(x)
1 | 118 | 108 | 0 | 0.0000
2 | 153 | 148 | 0 | 0.0890
3 | 158 | 148 | 0 | 0.0720
4 | 126 | 116 | 0 | 0.0219
5 | 126 | 116 | 0 | 2.7766
6 | 126 | 116 | 0 | 2.8677
7 | 76 | 76 | 0 | 0.0000
8 | 74 | 74 | 0 | 0.0000
9 | 86 | 76 | 0 | 0.0862
10 | 86 | 76 | 0 | 0.0804
11 | 30 | 0 | 91 | −6260.7000
Table 3. Comparison of the results on the CEC2020 real-world power system problems.

(a) CARO compared with ARO, AHA, BWO, DMO, DO, GJO, and HBA.

Case | Index | CARO | ARO | AHA | BWO | DMO | DO | GJO | HBA
1 | Best | 8.561×10^0 | 1.791×10^1 | 1.011×10^1 | 1.441×10^1 | 1.262×10^1 | 1.351×10^1 | 1.101×10^1 | 1.411×10^1
1 | Mean | 1.831×10^1 | 1.952×10^1 | 2.442×10^1 | 1.922×10^1 | 2.543×10^1 | 2.112×10^1 | 2.112×10^1 | 2.143×10^1
1 | Worst | 1.922×10^1 | 3.131×10^1 | 3.872×10^1 | 3.121×10^1 | 4.202×10^1 | 3.292×10^1 | 4.113×10^1 | 4.232×10^1
1 | Std | 1.778×10^0 | 1.997×10^0 | 2.891×10^0 | 1.942×10^0 | 3.074×10^0 | 2.289×10^0 | 2.289×10^0 | 2.344×10^0
2 | Best | 2.311×10^1 | 1.621×10^2 | 2.392×10^1 | 2.962×10^2 | 4.392×10^2 | 1.741×10^2 | 1.773×10^2 | 1.813×10^2
2 | Mean | 2.351×10^1 | 2.912×10^2 | 2.463×10^1 | 4.241×10^2 | 7.791×10^2 | 2.773×10^2 | 2.411×10^2 | 2.711×10^2
2 | Worst | 3.112×10^1 | 4.841×10^2 | 3.161×10^1 | 6.242×10^1 | 1.421×10^3 | 5.142×10^2 | 4.893×10^2 | 5.123×10^2
2 | Std | 1.880×10^0 | 5.071×10^1 | 2.081×10^0 | 7.500×10^1 | 1.398×10^2 | 5.017×10^1 | 4.159×10^1 | 4.706×10^1
3 | Best | 1.421×10^2 | 9.741×10^1 | 4.111×10^2 | 4.292×10^2 | 3.871×10^2 | 1.981×10^2 | 1.951×10^2 | 4.192×10^2
3 | Mean | 1.851×10^2 | 1.721×10^2 | 4.111×10^2 | 5.422×10^2 | 4.742×10^2 | 2.212×10^2 | 2.412×10^2 | 5.221×10^2
3 | Worst | 1.982×10^2 | 1.961×10^2 | 4.111×10^2 | 7.120×10^2 | 6.010×10^2 | 3.702×10^2 | 3.412×10^2 | 6.321×10^2
3 | Std | 1.599×10^1 | 1.362×10^1 | 5.725×10^1 | 8.117×10^1 | 6.875×10^1 | 2.256×10^1 | 2.621×10^1 | 7.752×10^1
4 | Best | 1.222×10^0 | 6.300×10^0 | 1.788×10^0 | 3.852×10^0 | 6.300×10^0 | 3.521×10^0 | 3.912×10^0 | 1.682×10^0
4 | Mean | 3.361×10^0 | 6.300×10^0 | 5.813×10^0 | 5.972×10^0 | 6.300×10^0 | 4.571×10^0 | 5.983×10^0 | 4.731×10^0
4 | Worst | 5.443×10^0 | 6.303×10^0 | 1.234×10^1 | 6.891×10^0 | 6.300×10^0 | 7.301×10^0 | 6.933×10^0 | 1.355×10^1
4 | Std | 3.907×10^−1 | 9.274×10^−1 | 8.380×10^−1 | 8.672×10^−1 | 9.274×10^−1 | 6.116×10^−1 | 8.690×10^−1 | 6.408×10^−1
5 | Best | 3.372×10^0 | 7.881×10^0 | 1.074×10^1 | 1.344×10^1 | 1.161×10^1 | 5.092×10^0 | 1.385×10^1 | 4.021×10^0
5 | Mean | 8.801×10^0 | 1.212×10^1 | 1.294×10^1 | 1.389×10^1 | 1.451×10^1 | 9.300×10^0 | 1.412×10^1 | 9.800×10^0
5 | Worst | 1.242×10^1 | 1.322×10^1 | 1.424×10^1 | 1.434×10^1 | 1.671×10^1 | 1.443×10^1 | 1.712×10^1 | 1.454×10^1
5 | Std | 9.913×10^−1 | 1.593×10^0 | 1.739×10^0 | 1.831×10^0 | 2.032×10^0 | 1.082×10^0 | 1.959×10^0 | 1.173×10^0
6 | Best | 2.122×10^0 | 8.822×10^0 | 7.232×10^0 | 1.677×10^1 | 1.313×10^1 | 3.124×10^0 | 9.829×10^0 | 9.861×10^0
6 | Mean | 8.253×10^0 | 1.611×10^1 | 9.443×10^0 | 1.877×10^1 | 1.452×10^1 | 9.616×10^0 | 1.586×10^1 | 1.662×10^1
6 | Worst | 1.686×10^1 | 1.678×10^1 | 1.278×10^1 | 1.997×10^1 | 1.673×10^1 | 1.515×10^1 | 1.794×10^1 | 1.871×10^1
6 | Std | 1.119×10^0 | 2.552×10^0 | 1.336×10^0 | 2.661×10^0 | 2.260×10^0 | 1.367×10^0 | 2.497×10^0 | 2.643×10^0
7 | Best | 1.688×10^2 | 4.679×10^2 | 2.334×10^3 | 9.154×10^3 | 4.525×10^2 | 1.717×10^2 | 7.515×10^3 | 2.011×10^3
7 | Mean | 4.022×10^2 | 6.133×10^2 | 3.815×10^3 | 2.103×10^4 | 7.664×10^2 | 4.424×10^2 | 4.262×10^4 | 3.754×10^3
7 | Worst | 5.436×10^2 | 1.200×10^3 | 5.917×10^3 | 3.853×10^4 | 1.353×10^3 | 6.338×10^2 | 1.022×10^5 | 5.853×10^3
7 | Std | 4.253×10^1 | 8.161×10^1 | 6.649×10^2 | 3.730×10^3 | 1.091×10^2 | 5.002×10^1 | 7.746×10^3 | 6.539×10^2
8 | Best | 2.291×10^−1 | 2.341×10^1 | 3.332×10^1 | 2.322×10^−1 | 3.665×10^1 | 3.284×10^1 | 3.781×10^1 | 3.000×10^1
8 | Mean | 3.001×10^1 | 1.432×10^2 | 1.432×10^2 | 3.222×10^1 | 1.037×10^2 | 1.135×10^2 | 1.232×10^2 | 1.017×10^2
8 | Worst | 7.522×10^1 | 1.263×10^3 | 3.557×10^2 | 7.689×10^1 | 3.584×10^2 | 3.385×10^2 | 3.448×10^2 | 3.183×10^2
8 | Std | 5.435×10^0 | 2.606×10^1 | 2.606×10^1 | 5.837×10^0 | 1.876×10^1 | 2.058×10^1 | 2.241×10^1 | 1.839×10^1
9 | Best | −1.198×10^−1 | −2.456×10^−1 | −2.164×10^−1 | −2.144×10^−1 | −1.400×10^3 | −1.299×10^1 | −1.355×10^3 | −2.441×10^−1
9 | Mean | −1.868×10^−1 | −2.255×10^−1 | −2.067×10^−1 | −2.044×10^−1 | −5.811×10^2 | −2.233×10^0 | −5.879×10^2 | −2.286×10^−1
9 | Worst | −7.615×10^−2 | −2.186×10^−1 | −2.056×10^−1 | −2.036×10^−1 | 2.827×10^1 | −4.449×10^−1 | 2.773×10^1 | −2.194×10^−1
9 | Std | 1.223×10^−2 | 1.935×10^−2 | 1.588×10^−2 | 1.551×10^−2 | 1.060×10^2 | 3.854×10^−1 | 1.071×10^2 | 1.990×10^−2
10 | Best | −7.121×10^1 | −9.181×10^−2 | 3.854×10^0 | −8.564×10^−2 | −9.198×10^−2 | 3.864×10^0 | −6.422×10^2 | −1.223×10^−1
10 | Mean | 2.811×10^1 | 1.891×10^−1 | 1.901×10^1 | −8.480×10^−2 | 1.891×10^−1 | 1.842×10^1 | −2.221×10^2 | −1.181×10^−1
10 | Worst | 1.431×10^2 | 7.271×10^−1 | 2.891×10^1 | −8.712×10^−2 | 7.892×10^−1 | 2.742×10^1 | 2.392×10^2 | −1.134×10^−1
10 | Std | 5.147×10^0 | 5.090×10^−2 | 3.485×10^0 | 1.278×10^−3 | 5.126×10^−2 | 3.376×10^0 | 4.051×10^1 | 4.783×10^−3
11 | Best | −6.411×10^3 | −5.822×10^3 | −5.589×10^3 | −5.687×10^3 | −5.866×10^3 | −5.814×10^3 | −5.743×10^3 | −5.978×10^3
11 | Mean | −6.321×10^3 | −5.951×10^3 | −5.222×10^3 | −5.346×10^3 | −5.910×10^3 | −5.711×10^3 | −5.553×10^3 | −5.867×10^3
11 | Worst | −5.871×10^3 | −5.851×10^3 | −4.982×10^3 | −4.992×10^3 | −5.668×10^3 | −5.567×10^3 | −5.388×10^3 | −5.756×10^3
11 | Std | 1.643×10^1 | 8.398×10^1 | 2.172×10^2 | 1.953×10^2 | 9.128×10^1 | 1.278×10^2 | 1.570×10^2 | 1.004×10^2

(b) CARO compared with SCSO, SO, MPA, AGPSO, IMODE, and LSHADE_SPACMA.

Case | Index | CARO | SCSO | SO | MPA | AGPSO | IMODE | LSHADE_SPACMA
1 | Best | 8.561×10^0 | 1.432×10^1 | 1.221×10^1 | 9.73×10^0 | 1.22×10^1 | 1.09×10^1 | 1.07×10^1
1 | Mean | 1.831×10^1 | 2.342×10^1 | 1.890×10^1 | 1.98×10^1 | 1.98×10^1 | 1.97×10^1 | 1.89×10^1
1 | Worst | 1.922×10^1 | 4.712×10^1 | 3.561×10^1 | 3.87×10^1 | 4.49×10^1 | 3.57×10^1 | 3.48×10^1
1 | Std | 1.778×10^0 | 2.709×10^0 | 1.887×10^0 | 2.05×10^0 | 2.05×10^0 | 2.03×10^0 | 1.89×10^0
2 | Best | 2.311×10^1 | 2.562×10^2 | 1.320×10^1 | 1.52×10^2 | 1.92×10^2 | 4.90×10^2 | 4.54×10^2
2 | Mean | 2.351×10^1 | 1.001×10^3 | 2.711×10^1 | 2.81×10^2 | 1.34×10^3 | 9.02×10^2 | 8.06×10^2
2 | Worst | 3.112×10^1 | 1.422×10^3 | 4.391×10^1 | 7.11×10^2 | 1.39×10^3 | 1.24×10^3 | 1.26×10^3
2 | Std | 1.880×10^0 | 1.801×10^2 | 2.537×10^0 | 4.89×10^1 | 2.42×10^2 | 1.62×10^2 | 1.45×10^2
3 | Best | 1.421×10^2 | 1.952×10^2 | 5.991×10^2 | 9.61×10^1 | 1.56×10^2 | 3.99×10^2 | 3.98×10^2
3 | Mean | 1.851×10^2 | 6.640×10^2 | 6.023×10^2 | 2.15×10^2 | 9.64×10^2 | 6.46×10^2 | 6.08×10^2
3 | Worst | 1.982×10^2 | 1.011×10^3 | 6.053×10^2 | 4.30×10^2 | 1.10×10^3 | 8.87×10^2 | 8.80×10^2
3 | Std | 1.599×10^1 | 1.034×10^2 | 9.212×10^1 | 2.17×10^1 | 1.58×10^2 | 1.00×10^2 | 9.35×10^1
4 | Best | 1.222×10^0 | 3.811×10^0 | 6.262×10^0 | 5.22×10^0 | 3.85×10^0 | 1.81×10^0 | 1.85×10^0
4 | Mean | 3.361×10^0 | 6.011×10^0 | 6.301×10^0 | 8.25×10^0 | 7.11×10^0 | 5.89×10^0 | 5.92×10^0
4 | Worst | 5.443×10^0 | 6.301×10^0 | 6.362×10^0 | 1.49×10^1 | 8.40×10^0 | 9.13×10^0 | 9.16×10^0
4 | Std | 3.907×10^−1 | 8.745×10^−1 | 9.274×10^−1 | 1.28×10^0 | 1.08×10^0 | 8.53×10^−1 | 8.58×10^−1
5 | Best | 3.372×10^0 | 7.901×10^0 | 1.091×10^1 | 4.62×10^0 | 1.35×10^1 | 5.53×10^0 | 5.53×10^0
5 | Mean | 8.801×10^0 | 1.288×10^1 | 1.282×10^1 | 1.44×10^1 | 1.39×10^1 | 1.51×10^1 | 1.61×10^1
5 | Worst | 1.242×10^1 | 1.355×10^1 | 1.722×10^1 | 2.18×10^1 | 1.49×10^1 | 2.38×10^1 | 2.50×10^1
5 | Std | 9.913×10^−1 | 1.721×10^0 | 1.721×10^0 | 2.01×10^0 | 1.92×10^0 | 2.14×10^0 | 2.32×10^0
6 | Best | 2.122×10^0 | 9.414×10^0 | 9.793×10^0 | 2.57×10^0 | 9.57×10^0 | 8.83×10^0 | 9.84×10^0
6 | Mean | 8.253×10^0 | 1.604×10^1 | 1.553×10^1 | 1.82×10^1 | 1.85×10^1 | 2.26×10^1 | 2.42×10^1
6 | Worst | 1.686×10^1 | 1.675×10^1 | 1.762×10^1 | 3.17×10^1 | 1.94×10^1 | 3.67×10^1 | 4.13×10^1
6 | Std | 1.119×10^0 | 2.534×10^0 | 2.442×10^0 | 2.94×10^0 | 2.99×10^0 | 3.74×10^0 | 4.03×10^0
7 | Best | 1.688×10^2 | 4.422×10^2 | 2.061×10^3 | 4.68×10^3 | 6.21×10^1 | 5.37×10^4 | 6.46×10^4
7 | Mean | 4.022×10^2 | 7.554×10^2 | 3.294×10^3 | 3.14×10^4 | 2.58×10^2 | 1.71×10^5 | 1.61×10^5
7 | Worst | 5.436×10^2 | 1.044×10^3 | 5.824×10^3 | 1.06×10^5 | 5.99×10^2 | 3.71×10^5 | 3.65×10^5
7 | Std | 4.253×10^1 | 1.071×10^2 | 5.699×10^2 | 5.70×10^3 | 1.62×10^1 | 3.12×10^4 | 2.94×10^4
8 | Best | 2.291×10^−1 | 2.300×10^−1 | 9.372×10^−1 | 1.44×10^3 | 2.81×10^−1 | 3.98×10^4 | 2.60×10^3
8 | Mean | 3.001×10^1 | 3.023×10^1 | 2.838×10^3 | 2.78×10^4 | 1.08×10^2 | 1.19×10^5 | 1.38×10^4
8 | Worst | 7.522×10^1 | 7.487×10^1 | 7.149×10^3 | 2.64×10^5 | 2.85×10^2 | 2.42×10^5 | 2.23×10^5
8 | Std | 5.435×10^0 | 5.471×10^0 | 5.166×10^2 | 5.08×10^3 | 1.97×10^1 | 2.17×10^4 | 2.52×10^3
9 | Best | −1.198×10^−1 | −2.233×10^−1 | −2.094×10^2 | −1.216×10^2 | −2.73×10^−1 | −1.16×10^3 | −1.28×10^3
9 | Mean | −1.868×10^−1 | −2.057×10^−1 | −7.699×10^1 | −3.344×10^2 | −3.52×10^−1 | −8.55×10^2 | −4.45×10^2
9 | Worst | −7.615×10^−2 | −1.976×10^−1 | 1.991×10^0 | −4.455×10^2 | −4.16×10^−1 | −9.67×10^2 | −6.94×10^2
9 | Std | 1.223×10^−2 | 1.570×10^−2 | 1.401×10^1 | 6.10×10^1 | 4.24×10^−2 | 1.56×10^2 | 8.12×10^1
10 | Best | −7.121×10^1 | −1.141×10^−1 | −7.181×10^1 | −6.848×10^1 | −1.04×10^−1 | −4.75×10^2 | −4.77×10^2
10 | Mean | 2.811×10^1 | −1.041×10^−1 | 2.862×10^1 | 3.13×10^1 | 3.17×10^−1 | −5.31×10^2 | −5.11×10^2
10 | Worst | 1.431×10^2 | −9.776×10^−2 | 1.298×10^2 | 7.49×10^1 | 1.55×10^0 | 2.11×10^2 | 2.09×10^2
10 | Std | 5.147×10^0 | 2.227×10^−3 | 5.238×10^0 | 5.73×10^0 | 7.46×10^−2 | 9.69×10^1 | 9.33×10^1
11 | Best | −6.411×10^3 | −5.636×10^3 | −5.973×10^3 | −6.016×10^3 | −6.12×10^3 | −5.14×10^3 | −5.27×10^3
11 | Mean | −6.321×10^3 | −5.442×10^3 | −5.842×10^3 | −5.745×10^3 | −5.95×10^3 | −5.01×10^3 | −5.12×10^3
11 | Worst | −5.871×10^3 | −5.271×10^3 | −5.732×10^3 | −5.514×10^3 | −5.67×10^3 | −4.73×10^3 | −4.91×10^3
11 | Std | 1.643×10^1 | 1.770×10^2 | 1.040×10^2 | 1.21×10^2 | 8.40×10^1 | 2.56×10^2 | 2.36×10^2