Article

Multi-Strategy Fusion Improved Walrus Optimization Algorithm for Coverage Optimization in Wireless Sensor Networks

1 School of Electronic Information and Artificial Intelligence, West Anhui University, Lu’an 237012, China
2 School of Electrical and Optoelectronic Engineering, West Anhui University, Lu’an 237012, China
3 Department of Experimental and Practical Training Teaching Management, West Anhui University, Lu’an 237012, China
* Author to whom correspondence should be addressed.
Biomimetics 2026, 11(1), 72; https://doi.org/10.3390/biomimetics11010072
Submission received: 18 December 2025 / Revised: 12 January 2026 / Accepted: 13 January 2026 / Published: 15 January 2026
(This article belongs to the Section Biological Optimisation and Management)

Abstract

The Walrus Optimization (WO) algorithm, a metaheuristic inspired by walrus behavior, is known for its competitive convergence speed and effectiveness in solving high-dimensional and practical engineering optimization problems. However, it suffers from a tendency to converge to local optima and exhibits instability during the iterative process. To overcome these limitations, this study proposes an improved WO (IMWO) algorithm based on the integration of Differential Evolution/best/1 (DE/best/1) mutation, Logistic–Sine–Cosine (LSC) Mapping, and the Beta Opposition-Based Learning (Beta-OBL) strategy. These strategies work synergistically to enhance the algorithm’s global exploration capability, improve its search stability, and accelerate convergence with higher precision. The performance of the IMWO algorithm was comprehensively evaluated using the CEC2017 and CEC2022 benchmark test suites, where it was compared against the original WO algorithm and six other state-of-the-art metaheuristics. Experimental data revealed that the IMWO algorithm achieved average fitness rankings of 1.66 and 1.33 in the two test suites, ranking first among all compared algorithms. The coverage optimization problem in Wireless Sensor Networks (WSNs) aims to maximize the monitored area while reducing perception blind spots under limited node resources and energy constraints, and is a typical complex optimization problem with multiple constraints. In a practical application addressing this problem, the IMWO algorithm attained average coverage rates of 95.86% and 96.48% in two sets of coverage experiments, outperforming both the original WO and the other compared algorithms. These results confirm the practical utility and robustness of the IMWO algorithm in solving complex real-world engineering problems.

1. Introduction

Against the backdrop of the rapidly evolving Internet of Things (IoT) technology, WSNs, as the core carrier of information perception and transmission, have been deeply integrated into many key areas, such as smart agriculture and smart cities [1,2,3]. The coverage quality of WSNs directly impacts the comprehensiveness of data collection and the credibility of decision making. Their core optimization goal is to maximize the coverage of monitoring areas under limited node resources and energy constraints, while reducing perceived blind spots. However, traditional random deployment or static layout strategies often encounter problems such as uneven coverage and rapid energy consumption when facing large-scale networks or complex terrain, making it difficult to fully meet the application requirements of practical scenarios [4]. Therefore, optimizing the deployment strategy of sensor nodes through the application of intelligent optimization algorithms has become a core research topic for enhancing the performance of WSNs.
Metaheuristic algorithms, with their powerful global optimization capabilities, have been widely applied in the field of WSN coverage optimization [5]. Swarm intelligence optimization algorithms represent an important branch of metaheuristic algorithms inspired by the collaborative behaviors of swarm organisms in nature (such as ant colonies, bee colonies, and bird flocks) [6]. Their core characteristic is the achievement of global optimization through local interactions among multiple individuals, with information sharing and collaboration among individuals being key. As research deepens, new algorithms continue to emerge. In classic algorithms, Kennedy and Eberhart proposed the particle swarm optimization (PSO) algorithm [7] inspired by the foraging behavior of birds, but it tends to fall into local optima due to the decline in population diversity in the later stage. The cuckoo search (CS) algorithm proposed by Yang relies on Lévy flight to update the position [8], but the convergence accuracy is significantly affected by the flight step size. The gray wolf optimizer (GWO) [9] algorithm and whale optimization algorithm (WOA) [10] proposed by Mirjalili et al. imitate gray wolf hunting and humpback whale bubble net hunting, respectively, but they have the problem of low search efficiency in the later stage. Dorigo et al.’s ant colony optimization (ACO) algorithm [11] and Karaboga’s artificial bee colony (ABC) algorithm [12] perform well in combinatorial optimization, but exhibit insufficient adaptability when applied to WSN coverage optimization in continuous space. 
In emerging algorithms, Xiao et al.’s artificial lemming algorithm (ALA) [13], Almutairi and Shaheen’s kangaroo escape optimization technique (KET) [14], Martínez Gámez et al.’s Tetragonula carbonaria optimization algorithm (TGCOA) [15], and Sánchez Cortez et al.’s mantis shrimp optimization algorithm (MShOA) [16], although imitating unique natural behaviors, focus on improving a single search mechanism and fail to effectively balance the dynamic relationship between exploration and exploitation. Xiao et al. proposed the multi-strategy boosted snow ablation optimizer (MSAO) [17] and the joint opposite selection-based arithmetic artificial rabbits optimization algorithm (MAROAOA) [18], which imitate unique natural behaviors and integrate multiple strategies. However, they still have limitations: MSAO tends to suffer from insufficient local exploitation precision in the late iteration stage, while MAROAOA incurs excessive computational overhead due to its complex opposite selection mechanism, making it difficult to balance the dynamic trade-off between exploration and exploitation. Meanwhile, the WO algorithm [19] proposed by Han et al. searches for the optimal solution through two stages, migration (exploration) and reproduction (exploitation). Its structure is simple and easy to implement, but it has significant flaws. First, the population update relies on the historical optimal information of individuals, which easily leads to a loss of population diversity in the later iterations, making it difficult to escape local optima in multi-peak coverage scenarios. Second, the danger factor decreases linearly in a fixed pattern, resulting in insufficient exploration in the early stage and a tendency to stagnate in the later stage.
In recent years, many swarm intelligence optimization algorithms have been applied to research on coverage optimization in WSNs. For example, Liao et al. used the glowworm swarm optimization (GSO) algorithm [20] to expand the post-deployment coverage area, but failed to address the premature convergence problem. Saravanan et al. introduced the WO algorithm into WSN node coverage enhancement technology [21], but did not mitigate the inherent defects of the WO algorithm itself. Das et al. proposed the termite colony optimization (TCO) algorithm [22] to balance coverage and the number of sensors. Deif et al. combined the local search ACO algorithm [23] to optimize deployment cost and reliability. Mohar et al. improved coverage based on the bat algorithm (BA) [24]. However, these studies were mostly based on the direct application of basic algorithms and lacked ideas for systematic improvement. To overcome the bottleneck of traditional algorithms, scholars have proposed a variety of multi-strategy improvement schemes, but obvious shortcomings remain. Chen et al. proposed a multi-strategy improved sparrow search algorithm (IM-DTSSA) [25] for WSN coverage optimization, but its strategy fusion lacks synergy and its global exploration breadth is insufficient. Li et al. proposed the virtual force-guided improved sand cat swarm optimization (VF-ISCSO) [26]; although it can reduce coverage blind spots, it relies too heavily on the virtual force mechanism and has limited local exploitation accuracy. Chang et al. proposed a variant of the tuna swarm optimization algorithm based on behavior evaluation and a simplex strategy (SITSO) [27]; it has slow convergence and poor adaptability to multimodal scenarios. Wang et al. introduced a novel self-adaptive multi-strategy artificial bee colony (SaMABC) [28] that enhances optimization performance but has insufficient population diversity maintenance capability and is prone to late convergence stagnation. Liang et al. designed a new adaptive Cauchy variant butterfly optimization algorithm (ACBOA) [29], which improves network coverage, but its dynamic balance control of exploration and exploitation is insufficient. Deepa et al. combined the Lévy flight mechanism with the whale optimization algorithm (LWOA) [30], which enhances global exploration capability but ignores the collaborative improvement of local fine search and convergence stability.
In view of the shortcomings of the WO algorithm and those of existing improved algorithms, this study proposes an IMWO algorithm that integrates multiple enhancement strategies and applies it to WSN coverage optimization. The core improvements include three collaborative strategies: LSC Mapping [31] is adopted to increase the randomness through the ergodicity and pseudo-randomness of chaotic sequences, which introduces disturbance to expand population diversity. This helps maintain the global search intensity in the early stages while preserving local development disturbance in later stages, thereby balancing exploration and exploitation and improving the algorithm’s convergence accuracy and stability. The DE/best/1 mutation strategy [32] is introduced, drawing on the mutation mechanism based on the optimal individual in the differential evolution algorithm to enhance the algorithm’s ability to escape local optima and improve global search performance. It integrates Beta-OBL [33], which generates opposite solutions of current solutions to provide high-quality candidate individuals, thereby accelerating the convergence speed and improving coverage optimization efficiency.
Compared with existing multi-strategy algorithms, the distinctiveness of IMWO lies not in the simple superposition of strategies, but in the collaborative interaction of its three mechanisms to address core algorithmic deficiencies. Specifically, the DE/best/1 mutation compensates for the insufficient global exploration of algorithms such as IM-DTSSA and SaMABC. The LSC Mapping overcomes the weak maintenance of population diversity in algorithms such as SaMABC. The Beta-OBL strategy mitigates the slow convergence and poor stability observed in algorithms such as VF-ISCSO and LWOA, ultimately achieving a dynamic balance between exploration and exploitation capabilities. Research has found that not only does the IMWO algorithm show better overall performance in function optimization, but its joint improvement strategy also effectively improves global search capability and convergence efficiency. In WSN coverage optimization, the coverage rate achieved by IMWO is significantly higher than that of the comparison algorithms, validating its superiority in solving practical problems. By improving the WO algorithm and applying it to WSN coverage optimization, this study provides an efficient solution for improving the perception performance of WSNs.
The main contributions of this study are summarized as follows:
(1) An IMWO algorithm is proposed by integrating the DE/best/1 mutation, LSC Mapping, and Beta-OBL strategies, which effectively remedies the defects of the original WO algorithm, such as its tendency to fall into local optima and its insufficient stability.
(2) A complete mathematical model for WSN coverage optimization is established, including the objective function, constraint equations, and encoding scheme, providing a theoretical basis for applying intelligent optimization algorithms in this field.
(3) Comprehensive experiments are conducted on the CEC2017/CEC2022 benchmark suites and WSN coverage scenarios, verifying the superiority of IMWO in terms of scalability, robustness, and practical application value.
The remainder of this paper is organized as follows: Section 2 introduces the basic principles of WO. Section 3 presents the basic principles of IMWO. Section 4 validates the effectiveness of IMWO using the CEC2017 and CEC2022 standard function test suites. Section 5 introduces the wireless sensor network coverage model and provides experimental analysis and discussion on the application of IMWO to the WSN coverage problem. Section 6 concludes the paper and outlines future work.

2. Walrus Optimization Algorithm, WO

In 2024, M. Han et al. [19], inspired by the natural behaviors of walrus populations, proposed the WO algorithm. Upon receiving key signals, such as danger and safety signals, walruses make behavioral choices including migration, reproduction, habitat selection, foraging, aggregation, and escape; the WO algorithm draws its ideas from these behaviors. In the mathematical model of the algorithm, the search space is the range within which the algorithm explores for optimal solutions, typically a multi-dimensional space defined by the decision variables of the problem. The solution space encompasses all potential solutions, with each solution analogous to the position of a walrus in the search space. A solution can be represented as a vector, each component of which corresponds to the value of a decision variable determined by the specific problem. The core task of the WO algorithm is to find the solution in the search space that minimizes or maximizes the objective function of the optimization problem.

2.1. Danger Signal and Safety Signal

The walrus swarm dynamically adjusts its behavior through an alert mechanism. The danger signal (DS) in WO is defined as shown in formula (1).
DS = O × K    (1)
O = 2 × (1 − t/T)    (2)
K = 2 × α₁ − 1    (3)
The safety signal (SS) is defined as shown in formula (4).
SS = α₂    (4)
Here, O and K are danger factors, t denotes the current iteration number, and T denotes the maximum number of iterations. Both α₁ and α₂ are random values uniformly generated in the interval (0, 1).
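As a concrete illustration, the two signals can be computed as follows (a minimal Python sketch of formulas (1)–(4); the paper’s reference implementation is in MATLAB, and the function name here is ours):

```python
import numpy as np

def danger_safety_signals(t, T, rng=np.random.default_rng()):
    """Compute the WO danger signal DS (formulas (1)-(3)) and
    safety signal SS (formula (4)).

    t : current iteration number, T : maximum number of iterations.
    alpha1 and alpha2 are drawn uniformly from (0, 1).
    """
    alpha1, alpha2 = rng.uniform(0.0, 1.0, size=2)
    O = 2.0 * (1.0 - t / T)   # danger factor, decays linearly from 2 to 0
    K = 2.0 * alpha1 - 1.0    # random danger factor in (-1, 1)
    DS = O * K                # danger signal, shrinks in magnitude over time
    SS = alpha2               # safety signal in (0, 1)
    return DS, SS
```

At t = 0 the danger signal ranges over (−2, 2), encouraging exploration; as t approaches T it shrinks toward 0, shifting the algorithm toward exploitation.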

2.2. Migration (Exploration)

When the danger signal value exceeds 1, the walrus herd migrates to areas more conducive to the survival of the population. During this migration phase, the position update rule is given in formula (5).
W_{i,j}^{t+1} = W_{i,j}^{t} + MS    (5)
Here, W_{i,j}^{t+1} and W_{i,j}^{t} denote the new position and current position of the i-th walrus in the j-th dimension, respectively.
MS represents the movement step length of walruses, which determines the magnitude of positional change. In specific calculations, it is obtained by multiplying the positional difference between two randomly selected guardian walruses by the control factor and the square of a random number within the interval (0,1).
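The migration update of formula (5) can be sketched as follows (a Python sketch following the verbal description above; `migrate` and `control_factor` are illustrative names, not from the paper):

```python
import numpy as np

def migrate(W, i, control_factor, rng=np.random.default_rng()):
    """Exploration move of formula (5): W_i^{t+1} = W_i^t + MS.

    MS is the positional difference between two randomly chosen
    'guardian' walruses, scaled by a control factor and the square
    of a random number in (0, 1).
    """
    N = W.shape[0]
    g1, g2 = rng.choice(N, size=2, replace=False)  # two guardian walruses
    r = rng.uniform(0.0, 1.0)
    MS = (W[g1] - W[g2]) * control_factor * r**2   # movement step length
    return W[i] + MS
```

Because the step is built from the difference of two population members, its magnitude automatically contracts as the population converges.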

2.3. Reproduction (Exploitation)

The reproduction behavior of walrus herds mainly involves two behavior patterns: roosting on land and foraging underwater.
(1) Roosting behavior
When the danger signal is less than 1 and the safety signal is greater than 0.5, walrus groups inhabit land. The walrus population is divided into males, females, and juvenile individuals, and reproductive efficiency is enhanced via differentiated strategies.
Male walruses employ the Halton sequence from the quasi-Monte Carlo method to update their positions. The position update of female walruses is influenced by both male walruses and the leading walrus. Juvenile walruses, being at the edge of the population, are vulnerable to predators and update their positions through Lévy flight to evade danger.
(2) Foraging behavior
When the danger signal is less than 1 and the safety signal is less than 0.5, the walrus herd forages underwater, exhibiting fleeing and gathering behaviors. In the fleeing behavior, walruses leave the current area in response to danger signals transmitted by their companions when the danger signal is greater than 0.5. In the gathering behavior, walruses collaboratively locate food-rich areas when the danger signal is less than 0.5.
The flow chart of WO is shown in Figure 1.

3. Improved Walrus Optimization Algorithm, IMWO

The WO algorithm, as a metaheuristic algorithm inspired by oceanic phenomena, exhibits advantages such as strong stability and excellent performance in handling high-dimensional and practical problems. However, it suffers from drawbacks such as susceptibility to local optima and insufficient population diversity. To address these issues, this study proposes the IMWO algorithm, which integrates the DE/best/1 mutation, LSC Mapping, and Beta-OBL to achieve multi-strategy collaborative optimization. The incorporation of multiple strategies enables the algorithm to explore the search space from multiple dimensions, effectively compensating for the shortcomings of the WO algorithm. Compared to single-strategy approaches, IMWO enhances global search capability, improves convergence accuracy, and accelerates the convergence through interaction and switching between strategies.

3.1. LSC Mapping

LSC Mapping is a hybrid chaotic map that enhances the complexity and randomness of chaotic behavior by combining the nonlinear characteristics of the Logistic map, Sine map, and Cosine map [31]. The recursive formula for the LSC Mapping is shown in (6).
m_{t+1} = cos(π(4r·m_t(1 − m_t) + (1 − r)·sin(πm_t) − 0.5))    (6)
Here, r is a random number uniformly distributed within the interval [0, 1], and m_t represents the chaotic value at the t-th iteration. During each iteration, the LSC Mapping dynamically adjusts the contributions of the Logistic-map component 4r·m_t(1 − m_t) and the sine component (1 − r)·sin(πm_t) based on the value of r.
Utilizing the randomness and ergodicity of chaotic sequences, individuals are able to explore the search space more extensively, thereby enhancing the algorithm’s global search capability and increasing the likelihood of escaping local optima. In the traditional WO algorithm, the danger factor O is adjusted linearly, which often results in convergence to local optima during later search stages. To address this issue, this study introduces a chaotic sequence to dynamically adjust the danger factor O.
In each iteration t, O is adaptively regulated by the chaotic sequence. The dynamic adjustment formula for parameter O is shown in (7).
O = (2 − 2t/T) × m_{t+1}    (7)
where (2 − 2t/T) is a coefficient that decreases linearly as the iteration number t increases. The term m_{t+1} introduces chaotic characteristics, endowing parameter O with randomness and ergodicity and further enriching the search behavior of the algorithm. During the early optimization stage, when t is small and O is relatively large, the chaotic randomness of the LSC Mapping gives the position update step of the walruses greater diversity, enabling exploration across the entire search space. In the later iterations, O gradually decreases as t increases; however, the subtle chaotic fluctuations of the LSC Mapping prevent the walrus population from converging entirely to a single region, maintaining local traversability.
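A minimal Python sketch of the LSC iteration in formula (6) and the chaotic danger factor in formula (7) (function names are ours; the paper’s implementation is in MATLAB):

```python
import numpy as np

def lsc_next(m, rng=np.random.default_rng()):
    """One step of the LSC map, formula (6). For m in [0, 1] the inner
    term is a convex-like combination lying in [-0.5, 0.5], so the
    cosine output stays in [0, 1]."""
    r = rng.uniform(0.0, 1.0)
    inner = 4.0 * r * m * (1.0 - m) + (1.0 - r) * np.sin(np.pi * m) - 0.5
    return np.cos(np.pi * inner)

def chaotic_danger_factor(t, T, m_next):
    """Formula (7): the linear decay (2 - 2t/T) of the original danger
    factor, perturbed by the chaotic sequence value m_{t+1}."""
    return (2.0 - 2.0 * t / T) * m_next
```

Iterating `lsc_next` yields the ergodic chaotic sequence that replaces the purely linear schedule of O in the original WO algorithm.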

3.2. DE/best/1

DE is a population-based stochastic optimization algorithm proposed by Storn and Price in 1995, designed primarily for solving global optimization problems in continuous spaces. The core idea of DE is to generate new candidate solutions through differential information between individuals and then retain better individuals through selection operations, iterating until termination conditions are satisfied. Its basic procedure consists of four steps: Initialization, Mutation, Crossover, and Selection [32].
In the DE algorithm, DE/best/1 is a classic and commonly used mutation strategy, characterized by utilizing the best individual in the current population to guide the mutation direction, thereby enhancing the algorithm’s local search capability and convergence speed. In the reproduction (exploitation) phase, after each position update of the walruses, the DE/best/1 mutation is integrated into the IMWO algorithm. Taking the position of the current population’s best walrus as a benchmark, local perturbation is provided by combining the positional differences in two randomly selected individuals, enabling mutated individuals to generate small fluctuations near the optimal solution and achieve fine-grained search. The DE/best/1 mutation operator formula is shown in (8).
W_{i,j}^{t+1} = W_{best}^{t} + G × (W_{r1,j}^{t} − W_{r2,j}^{t})    (8)
where W_{best}^{t} represents the best individual in the population at the t-th iteration, G is the scaling factor that controls the magnitude of the difference vector (a pseudo-random number generated within (0, 1) by the rand function), and W_{r1,j}^{t} and W_{r2,j}^{t} are two randomly selected distinct individuals (r1 ≠ r2 ≠ i).
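The DE/best/1 operator of formula (8) can be sketched as follows (a Python sketch assuming a minimization problem; the function name is ours):

```python
import numpy as np

def de_best_1(W, fitness, i, rng=np.random.default_rng()):
    """DE/best/1 mutation, formula (8):
    W_i^{t+1} = W_best^t + G * (W_r1^t - W_r2^t),
    with G ~ U(0, 1) and r1 != r2 != i (minimization assumed)."""
    N = W.shape[0]
    best = int(np.argmin(fitness))                   # current best individual
    candidates = [k for k in range(N) if k != i]
    r1, r2 = rng.choice(candidates, size=2, replace=False)
    G = rng.uniform(0.0, 1.0)                        # scaling factor
    return W[best] + G * (W[r1] - W[r2])
```

The mutant is anchored at the best individual, so the random difference vector produces small perturbations around the current optimum, realizing the fine-grained local search described above.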

3.3. Beta-OBL

Opposition-Based Learning (OBL), proposed by Tizhoosh in 2005 [33], enhances search efficiency by simultaneously evaluating the original solution and its opposite solution. However, traditional OBL employs a deterministic symmetric opposition strategy, which lacks flexibility in complex spaces. Beta-OBL breaks through the deterministic limitations of traditional OBL by incorporating the probabilistic characteristics of the Beta distribution, achieving adaptive exploration of the search space. Its core idea is to dynamically adjust the distribution pattern of opposite solutions based on population diversity, controlling the generation probability of opposite solutions through the shape parameters (α, β) of the Beta distribution, and balancing the capabilities of global exploration and local exploitation. This study proposes an IMWO algorithm that incorporates a Beta-OBL strategy. In the migration (exploration) phase, the Beta-OBL strategy is introduced after updating the position of the walruses. In the reproduction (exploitation) phase, the Beta-OBL strategy is introduced after applying the DE/best/1 mutation. By generating probabilistic opposition points, it balances global exploration and local exploitation, enhancing the algorithm’s optimization performance in complex spaces.
(1) OBL
Traditional OBL assumes a uniformly distributed search space; for any solution W_{i,j}^{t} ∈ [lb_j, ub_j], the opposite solution is constructed as shown in (9).
W̃_{i,j}^{t} = lb_j + ub_j − W_{i,j}^{t}    (9)
(2) Beta-OBL
Beta-OBL replaces the uniform distribution with a Beta distribution, controlling the opposite solution’s distribution pattern via shape parameters α and β , allowing the inverse solution to better fit the real search space of the problem. The constructed inverse solution formula is shown in (10).
W̃_{i,j}^{t} = (d_max,j − d_min,j) × Beta(α, β) + d_min,j    (10)
α = { spread × peak, md < 0.5;  spread, otherwise }    (11)
β = { spread, md < 0.5;  spread × peak, otherwise }    (12)
spread = { (1 − normDiv) × (1 + N(0, 0.5)), r₃ < 0.5;  0.1 × normDiv + 0.9, otherwise }    (13)
peak = { ((spread − 2) × md + 1) / (spread × (1 − md)), md < 0.5;  (2 − spread)/spread + (spread − 1)/(spread × md), otherwise }    (14)
md = { (d_max,j − W_{i,j}^{t}) / (d_max,j − d_min,j), r₃ < 0.5;  (W_{i,j}^{t} − d_min,j) / (d_max,j − d_min,j), otherwise }    (15)
normDiv = (1/N) Σ_{i=1}^{N} sqrt( (1/D) Σ_{j=1}^{D} ((W_{i,j}^{t} − W̄_j) / (d_max,j − d_min,j))² )    (16)
Beta-OBL uses the dynamic boundaries of the population instead of the fixed boundaries used in the original algorithm, where d_max,j and d_min,j represent the current upper and lower bounds of exploration in the j-th dimension, respectively. Beta(α, β) is a random variable that follows a Beta distribution with parameters α and β, and it outputs values within the interval [0, 1] according to this distribution. r₃ is a random number uniformly distributed over the interval (0, 1). N denotes the population size, D represents the number of decision variables, and W̄_j stands for the mean value of all individuals in the j-th dimension. After the opposite solutions are generated, they participate in fitness evaluation alongside the candidate solutions from the original IMWO population, and only the solutions with better fitness are retained for the next iteration.
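One component of a Beta-OBL opposite solution, following formula (10) together with the spread, peak, and md definitions, can be generated as follows (a Python sketch only: the clipping of md, spread, and the Beta parameters to small positive values is our numerical safeguard, not part of the paper, and `norm_div` is assumed to be precomputed over the whole population):

```python
import numpy as np

def beta_opposite_scalar(w, d_min, d_max, norm_div, rng=np.random.default_rng()):
    """Generate one Beta-OBL opposite component per formula (10).

    w            : current value of one decision variable
    d_min, d_max : dynamic lower/upper exploration bounds of this dimension
    norm_div     : normalized population diversity (precomputed)
    """
    eps = 1e-6
    r3 = rng.uniform(0.0, 1.0)
    if r3 < 0.5:
        md = (d_max - w) / (d_max - d_min)       # distance to the upper bound
        spread = (1.0 - norm_div) * (1.0 + rng.normal(0.0, 0.5))
    else:
        md = (w - d_min) / (d_max - d_min)       # distance to the lower bound
        spread = 0.1 * norm_div + 0.9
    md = float(np.clip(md, eps, 1.0 - eps))      # numerical safeguard (ours)
    spread = max(spread, eps)
    if md < 0.5:
        # peak places the Beta mode near md; alpha = spread*peak, beta = spread
        peak = ((spread - 2.0) * md + 1.0) / (spread * (1.0 - md))
        a, b = spread * peak, spread
    else:
        peak = (2.0 - spread) / spread + (spread - 1.0) / (spread * md)
        a, b = spread, spread * peak
    a, b = max(a, eps), max(b, eps)              # Beta parameters must be > 0
    # Map a Beta(a, b) draw from [0, 1] back into the dynamic bounds
    return (d_max - d_min) * rng.beta(a, b) + d_min
```

Opposite components generated this way always fall within the dynamic bounds; the opposite solutions are then evaluated together with the originals, and only the fitter of each pair survives.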
The flow chart of IMWO is shown in Figure 2.

3.4. Time Complexity Analysis

Time complexity is the core indicator for evaluating the efficiency of optimization algorithms. The IMWO algorithm adds three new strategies—the LSC Mapping, DE/best/1 mutation, and Beta-OBL strategies—to the WO algorithm framework.
The time complexity of IMWO is still determined by three core processes: initialization, fitness evaluation, and solution update. Here, N is the population size, T is the maximum number of iterations, and D is the problem dimension. In the initialization phase, the IMWO algorithm follows the population initialization logic of the WO algorithm, and the complexity remains O(N × D). In the fitness evaluation stage, the IMWO algorithm calculates the fitness values of N search agents in each of T iterations, with a complexity of O(N × T). None of the three improvement strategies changes this core process, so the complexity is essentially unchanged. The solution update stage is the core difference between the IMWO and WO algorithms, but it does not raise the order of complexity: the chaotic sequence parameter calculation costs O(1) per iteration, the DE/best/1 mutation traverses the population in O(N × D), and the Beta-OBL position adjustment also costs O(N × D). Accumulated over T iterations, the total complexity is still dominated by the term O(N × T × D).
In summary, the IMWO algorithm improves optimization performance while maintaining the time complexity of the WO algorithm, ensuring that the algorithm has good operating efficiency and efficient optimization.

4. Experimental Results and Analysis

4.1. Experimental Environment

The simulation platform runs the Windows 11 operating system on an Intel Core Ultra 7 series processor clocked at 3.80 GHz with 32 GB of RAM. All algorithms are implemented in MATLAB R2024a.

4.2. CEC2017 Test Functions

This section adopts the CEC2017 function set as the test suite. Notably, this suite does not include the F2 function, which was officially removed due to unstable behavior, leaving a total of 29 test functions. Among them, F1 and F3 are unimodal functions with a single global optimum, which can effectively evaluate the convergence capability of an algorithm. F4–F10 are simple multimodal functions, characterized by multiple local optima and a single global optimum; they test two key capabilities of algorithms, namely avoiding local optimum traps and finding the global optimum. F11–F20 are hybrid functions, constructed by combining three or more CEC2017 benchmark functions after applying rotations and translations, with each subfunction assigned a certain weight; they are primarily used to test the performance of algorithms on complex hybrid structural problems. F21–F30 are composition functions, formed by combining at least three hybrid functions or CEC2017 benchmark functions after rotations and translations. Each subfunction has not only a weight but also an additional bias value, which further increases the optimization difficulty.
This study provides a detailed comparison of the performance of eight algorithms on the CEC2017 test functions, including IMWO, WO, the Greylag Goose Optimization (GGO) algorithm [34], the Chinese Pangolin Optimizer (CPO) algorithm [35], the Pigeon-inspired Optimization (PIO) algorithm [36], the Dung Beetle Optimizer (DBO) algorithm [37], the Pelican Optimization algorithm (POA) [38], and the Aquila Optimizer (AO) algorithm [39]. The population size is set to 30, the dimensionality to 10, and the number of iterations to 600. Each experiment is independently executed 30 times, and the average fitness values of the algorithms are then ranked. Table 1 lists the results for unimodal and simple multimodal functions, Table 2 the results for hybrid functions, and Table 3 the results for composition functions. In the tables, min, std, avg, median, and worst denote the optimal value, standard deviation, average value, median, and worst value, respectively.
As shown by the data in Table 1, the average value of IMWO is significantly lower than that of the other seven comparison algorithms on eight functions, including F1, F3–F8, and F10. It has the smallest standard deviation among F1, F3, F4, and F6, with standard deviations on the remaining functions also maintained at a low level. Moreover, the average values and standard deviations of these eight functions are all better than those of the WO algorithm. The DE/best/1 mutation strategy addresses the defect that the WO algorithm is prone to falling into local optima. Through a globally guided mutation step size based on the optimal individual, it enables the IMWO algorithm to jump out of local extreme value regions on functions such as F1 and F5, achieving more accurate global search. The LSC Mapping chaotic perturbation makes up for the deficiency of population convergence in the later stage of the WO algorithm. Leveraging the ergodicity and pseudo-randomness of chaotic sequences, it continuously injects diversity into the population, maintaining exploration vitality in the late iteration stage on functions such as F3 and F7. This ensures that the standard deviation of IMWO remains consistently low, and its stability is far superior to that of the WO algorithm.
As shown by the data in Table 2, in terms of solution accuracy, the IMWO algorithm has significantly lower average values than the other comparison algorithms on functions F16, F17, and F20, while maintaining relatively low average values on the remaining functions. In terms of stability, it achieves the smallest standard deviations on functions F11, F16, and F17, with relatively low standard deviations on other functions. Across all functions F11–F20, the IMWO algorithm exhibits comprehensively lower average values and standard deviations than the original WO algorithm, which mitigates the issues of unstable convergence and insufficient solution accuracy that the WO algorithm is prone to encounter in hybrid functions. The Beta-OBL strategy generates high-quality opposition solutions for the current solutions, constructs a two-way search mechanism, and accelerates the convergence process. This enables the IMWO algorithm to approach the global optimum faster on functions such as F11 and F20, with the convergence speed and solution accuracy far exceeding those of the WO algorithm. The DE/best/1 mutation strategy further enhances the global exploration capability, avoiding the performance bottleneck of the WO algorithm caused by falling into local optima in the complex search space of hybrid functions, and ensuring that the IMWO algorithm maintains stable and excellent performance across the entire set of functions.
As shown by the data in Table 3, the IMWO algorithm achieves significantly lower average values than the other comparison algorithms on five functions, including F21, F23, and F27–F29, and maintains relatively low average values on the remaining functions. In terms of stability, it achieves the smallest standard deviations on functions F21, F22, F27, F29, and F30, with relatively low standard deviations on the other functions. Compared with the WO algorithm, the IMWO algorithm has lower average values on seven functions, including F21, F23, F24, and F27–F30, and smaller standard deviations on six functions, including F21, F22, F26, F27, F29, and F30. Even for the few functions where the WO algorithm has a slight advantage, the overall performance of the IMWO algorithm is more balanced. The chaotic perturbation of LSC Mapping effectively alleviates the problem of the excessively rapid decay of population diversity in the WO algorithm when addressing composition functions. It maintains population vitality during the iteration of F21 and F30, thereby ensuring stability. The DE/best/1 mutation strategy and Beta-OBL strategy operate synergistically: the mutation guided by the optimal individual improves the global search accuracy, while the opposition solutions accelerate convergence. This solves the defects of the WO algorithm in composite functions, such as the imbalance between exploration and exploitation and insufficient convergence accuracy, and achieves the dual optimization of accuracy and stability.
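For reference, one published form of the Logistic–Sine–Cosine map combines a logistic term and a sine term inside a cosine. The sketch below follows that formulation, which may differ in detail from the variant used in IMWO; the control parameter r = 0.6 and the initial value are arbitrary illustrative choices.

```python
import math

def lsc_map(x, r=0.6, steps=1):
    """One formulation of the Logistic-Sine-Cosine map: the logistic term
    4*r*x*(1-x) and the sine term (1-r)*sin(pi*x) are combined inside a
    cosine, yielding a chaotic sequence in [-1, 1]."""
    for _ in range(steps):
        x = math.cos(math.pi * (4.0 * r * x * (1.0 - x)
                                + (1.0 - r) * math.sin(math.pi * x) - 0.5))
    return x
```

Successive iterates wander through [−1, 1] without settling, which is the property exploited above to keep injecting diversity into the population in late iterations.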
Figure 3 displays the convergence curves of all functions from F1 to F30. As observed from the figure, the improved algorithm IMWO outperforms the WO algorithm on most functions, with a faster average fitness decrease and a lower final value, indicating significant improvements in convergence speed and accuracy for IMWO. For example, on functions such as F1 and F5, the IMWO curve lies below that of WO. Compared with other algorithms, IMWO also demonstrates strong competitiveness, leading or ranking among the top performers in most function tests. On functions like F16 and F21, the downward trend of the IMWO fitness curve is significantly better than that of other algorithms. This suggests that the improvement strategies of IMWO are effective and enable better searching for optimal solutions.
Figure 4 presents the distribution of fitness values for the algorithms, including WO and IMWO, across the test functions F1 to F30. As evident from the box plots, IMWO outperforms WO on most functions. The boxes corresponding to IMWO are, in most cases, shorter and positioned lower, indicating a more concentrated data distribution and better stability, which enables it to consistently obtain better solutions across different runs. The other comparative algorithms have their own strengths and weaknesses across different test functions; some, such as POA, outperform IMWO in specific scenarios (e.g., test functions F12 and F13). Overall, however, IMWO surpasses these algorithms in most cases. This suggests that the improvement strategies effectively enhance the algorithm's performance, rendering IMWO more advantageous in solving various optimization problems and more stable in finding high-quality solutions.
Figure 5 presents a performance comparison of WO, IMWO, and other representative algorithms on the CEC2017 benchmark functions, including a radar chart and a ranking bar chart. As can be observed from the radar chart (a), the IMWO algorithm performs outstandingly across multiple functions, with its coverage area surpassing that of WO and the other algorithms in numerous function dimensions, indicating more balanced and superior performance across different types of functions. The ranking bar chart (b) further quantifies the strengths and weaknesses of each algorithm. With an average ranking of 1.66, IMWO ranks first among all algorithms, significantly outperforming the original WO algorithm (average ranking of 3.17). This confirms that the improved IMWO algorithm has substantially enhanced overall performance. Compared to the other algorithms, IMWO also shows strong competitiveness, although it is slightly inferior to certain algorithms on a few functions. Overall, the improvement strategies have effectively enhanced the problem-solving capability of the IMWO algorithm on the benchmark functions, yielding excellent convergence accuracy and stability. For high-dimensional problems with 30 and 50 dimensions, the comprehensive performance of the IMWO algorithm is also superior to that of the WO algorithm and the other comparative algorithms. Detailed statistical data are presented in Figures S1–S4 of the Supplementary Material.

4.3. CEC2022 Test Functions

This section adopts the CEC2022 function set as the test suite. The CEC2022 suite comprises 12 single-objective test functions with boundary constraints, where F1 is a unimodal function, F2–F5 are multimodal functions, F6–F8 are hybrid functions, and F9–F12 are composition functions. The population size is set to 30, the dimensionality to 10, and the number of iterations to 600. The experiment is run 30 times, and the average fitness values are collected for subsequent analysis. Table 4 presents the results of the eight algorithms on the CEC2022 test functions.
According to the data in Table 4, the IMWO algorithm demonstrates excellent performance in both solution accuracy and stability. In terms of solution accuracy, IMWO has significantly lower average values than the other seven comparison algorithms in nine functions, namely F1–F3, F6, F7, and F9–F12. In terms of stability, IMWO performs equally well. It has the smallest standard deviation in nine functions, namely F1–F3, F6–F10, and F12. Most importantly, compared to the WO algorithm, IMWO has comprehensively lower average values and standard deviations in 11 functions, namely F1–F4 and F6–F12. This demonstrates significant improvements over the original algorithm and achieves dual optimization of accuracy enhancement and stability improvement across most test functions.
By introducing the DE/best/1 mutation strategy, IMWO incorporates a perturbation mechanism based on the best individual from differential evolution, which effectively mitigates the WO algorithm's tendency to fall into local optima. Leveraging globally guided mutation steps, it successfully escapes local extrema on functions such as F1 and F5, significantly improving the quality of both the average and worst solutions. The LSC chaotic mapping utilizes the ergodicity of chaotic sequences to continuously enhance population diversity, sustaining exploratory vigor in the later stages on functions such as F3 and F7, thereby maintaining a low standard deviation and enhancing stability. Beta opposition-based learning generates high-quality opposite candidate solutions, enabling bidirectional search on functions such as F2 and F8 and further accelerating convergence. Consequently, IMWO achieves significantly better minimum and average values than the competing algorithms across multiple functions.
Figure 6 displays the variation in the average fitness of eight algorithms across different test functions F1 to F12 with the number of iterations. As can be clearly observed from the figure, IMWO demonstrates superior performance compared to WO and other algorithms in most test functions. Its average fitness value decreases rapidly in the early iterations and stabilizes at a lower level in the later stages, indicating that IMWO has advantages in terms of convergence speed and optimization accuracy. Other algorithms exhibit varying performance across different functions, but overall, the advantage of IMWO is more prominent. This suggests that the improvement strategy effectively enhances the algorithm’s optimization capability, enabling IMWO to quickly converge toward the optimal solution and rendering it more competitive in addressing complex optimization problems.
Figure 7 displays the fitness values of eight algorithms across 12 different functions. As can be clearly observed from the figure, the IMWO algorithm outperforms the original algorithm WO on most functions; its median fitness values and overall distribution characteristics are consistently superior, indicating that the proposed improvement measures have effectively enhanced the algorithm’s performance. Other comparative algorithms exhibit varying performances across different functions. For instance, POA demonstrates good performance on functions such as F4 and F8, but its overall stability is inferior to that of IMWO. There are notable differences in the advantages of different algorithms across various functions, with no single algorithm performing optimally on all functions. IMWO demonstrates strong overall competitiveness, which fully validates the effectiveness and robustness of the proposed improvement strategies.
Figure 8 presents a comprehensive performance comparison of the competing algorithms on the CEC2022 benchmark functions. Figure 8a is a radar chart in which lines of different colors represent the performance of different algorithms across the functions; the closer a line is to the center, the better the algorithm's performance. It can be observed that IMWO performs outstandingly on most functions. Figure 8b combines a bar chart and a line chart to present the average ranking of each algorithm, with lower values indicating better performance. The average ranking of WO is 2.67, while that of IMWO is 1.33, a significant improvement that confirms the proposed strategies effectively enhance the algorithm's optimization capability. Other algorithms perform less well: CPO has an average ranking of 6.58, POA ranks 3.25, and AO ranks 4.50, all lagging behind IMWO in overall performance. This demonstrates that IMWO outperforms WO and the other compared algorithms on the CEC2022 benchmark functions, achieving better results across multiple functions and leading in average ranking, which reflects the effectiveness of the improvements. For 20-dimensional optimization problems, the comprehensive performance of the IMWO algorithm also outperforms that of the WO algorithm and the other comparative algorithms. Relevant detailed statistical data can be found in Figures S5 and S6 of the Supplementary Material.

5. Application of IMWO Algorithm in Coverage Optimization of WSNs

5.1. WSN Coverage Model

To simplify the mathematical optimization model of the objective function, this study adopts the Boolean perception model as the node sensing model for the WSN coverage optimization problem in a two-dimensional plane. The WSN Boolean sensing coverage model is an important model used in WSNs to describe the sensing capability of sensor nodes and regional coverage, characterized by its simplicity and intuitive expression. In this model, the sensing range of a sensor node is a circular area centered at the node with a sensing radius as the radius. Only target points that fall within this area are considered to be covered by the node, exhibiting a clear binary characteristic of “yes” or “no”; hence, it is also known as the 0/1 sensing model.
Assuming that the WSN monitoring area is a two-dimensional plane, the monitoring area is digitized into an L × M grid, with each grid cell considered as a target point. There are N homogeneous sensor nodes deployed, denoted as the set Z = {Z_1, Z_2, …, Z_N}. The coordinates of node Z_i are (x_i, y_i), and the coordinates of target point S_j are (x_j, y_j). According to the distance formula between two points in two-dimensional space, the distance between node Z_i and target point S_j is given by formula (17), which is used to determine whether the target point lies within the sensing range of the node.
$$d(Z_i, S_j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} \tag{17}$$
P(Z_i, S_j) denotes the perception reliability of node Z_i with respect to target point S_j. If d(Z_i, S_j) ≤ R_s, then P(Z_i, S_j) = 1, indicating that the target point can be effectively perceived by the node; otherwise, P(Z_i, S_j) = 0, indicating that it cannot.
To improve the probability of target perception, multiple sensors often need to collaborate in detection. The joint perception probability of the WSN for target point S_j, denoted Q(Z, S_j), is calculated using formula (18), which combines the perception quality of each node toward the target point into the perception probability under the joint effect of multiple nodes.
$$Q(Z, S_j) = 1 - \prod_{i=1}^{N} \left(1 - P(Z_i, S_j)\right) \tag{18}$$
Coverage is an important indicator for measuring the coverage performance of WSNs, denoted by COV. Coverage is defined as the ratio of the number of target points covered by all sensor nodes to the total number of target points in the area, and the coverage COV formula is shown in (19). A higher coverage indicates more effective coverage of the target area by the network, enabling more comprehensive monitoring of information within the target region.
$$COV = \frac{\sum_{j=1}^{L \times M} Q(Z, S_j)}{L \times M} \tag{19}$$
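Formulas (17)–(19) translate compactly into code. The sketch below (function and variable names are illustrative) computes the distance d(Z_i, S_j), the Boolean perception P, the joint perception probability Q, and the coverage rate COV over an L × M grid.

```python
import numpy as np

def coverage_rate(nodes, L, M, r_s):
    """Boolean-model coverage: a target point is perceived (P = 1) when
    some node lies within the sensing radius r_s; Q is the joint
    perception probability and COV the fraction of covered grid points."""
    xs, ys = np.meshgrid(np.arange(1, L + 1), np.arange(1, M + 1))
    targets = np.stack([xs.ravel(), ys.ravel()], axis=1)       # L*M target points
    # d(Z_i, S_j): pairwise node-to-target distances, Eq. (17)
    d = np.linalg.norm(nodes[:, None, :] - targets[None, :, :], axis=2)
    p = (d <= r_s).astype(float)                               # P(Z_i, S_j), 0/1
    q = 1.0 - np.prod(1.0 - p, axis=0)                         # Q(Z, S_j), Eq. (18)
    return q.sum() / (L * M)                                   # COV, Eq. (19)
```

For instance, a single node placed at the center of a small grid with a sensing radius spanning the whole area covers every target point, while a vanishing radius covers none.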

5.2. Mathematical Formulation of WSN Coverage Optimization Problem

To reformulate the WSN coverage optimization problem into a form amenable to solution by intelligent optimization algorithms, this section establishes a comprehensive mathematical model and elaborates on its encoding scheme, constraint handling method, and discretization strategy.

5.2.1. Objective Function

Under the Boolean sensing model, the coverage optimization problem can be formulated as maximizing the coverage rate of the monitored area. To conform to the common formulation of optimization algorithms (i.e., minimizing the objective function), the problem is transformed into minimizing the uncovered rate, and the objective function is given by Equation (20).
$$\min f(X) = 1 - COV = 1 - \frac{\sum_{j=1}^{L \times M} Q(Z, S_j)}{L \times M} \tag{20}$$
where f(X) denotes the uncovered rate, which is the fitness function to be minimized.

5.2.2. Constraint Conditions

The deployment of sensor nodes must satisfy the boundary constraints of the monitored area; that is, the node coordinates must fall within the range of the area. The constraint conditions are given by Equation (21).
$$1 \le x_i \le L, \quad 1 \le y_i \le M, \quad i = 1, 2, \ldots, N \tag{21}$$

5.2.3. Individual Encoding Scheme

In the optimization algorithm, each individual is directly encoded as a real-valued vector of length 2N. The individual encoding is given by Equation (22).
$$X = [x_1, x_2, \ldots, x_N, y_1, y_2, \ldots, y_N] \tag{22}$$
where x_i ∈ [1, L] and y_i ∈ [1, M] denote the abscissa and ordinate of the i-th node, respectively.
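As a small illustration (the function name is hypothetical), the 2N-vector of Equation (22) can be split back into node coordinates as follows:

```python
import numpy as np

def decode(X):
    """Split X = [x_1, ..., x_N, y_1, ..., y_N] into an (N, 2) array of
    node coordinates (x_i, y_i)."""
    N = X.size // 2
    return np.stack([X[:N], X[N:]], axis=1)
```

An optimizer then evaluates the fitness f(X) = 1 − COV of Equation (20) on the decoded node positions.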

5.2.4. Constraint Handling Technique

For boundary constraint handling, the boundary truncation method is adopted. If the coordinate values in the encoding vector exceed the range of [1, L] or [1, M], they are truncated to the corresponding boundary values.
(1)
If x_i < 1, set x_i = 1; if x_i > L, set x_i = L.
(2)
If y_i < 1, set y_i = 1; if y_i > M, set y_i = M.

5.2.5. Discretization of Continuous Optimization Algorithms

Because the monitored area has been digitized into an L × M grid (where the target points have integer coordinates), sensor node positions must correspond to integer grid points. For optimization algorithms that output continuous values, a flooring operation is adopted to achieve discretization: after the algorithm outputs the continuous coordinate vector X, the floor function is applied to each coordinate to convert the continuous values into integers.
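The truncation rule of Section 5.2.4 and the flooring step of Section 5.2.5 can be sketched together as a single repair operator (the function name is illustrative):

```python
import numpy as np

def repair(X, L, M):
    """Boundary truncation followed by flooring: clip each coordinate
    into [1, L] or [1, M], then round down to the integer grid."""
    N = X.size // 2
    X = X.copy()
    X[:N] = np.clip(X[:N], 1, L)   # abscissas truncated into [1, L]
    X[N:] = np.clip(X[N:], 1, M)   # ordinates truncated into [1, M]
    return np.floor(X)
```

Applying this after every position update guarantees that each candidate layout remains a feasible deployment on the grid.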

5.3. WSNs Coverage Experiment

To verify the performance of the IMWO algorithm in WSN coverage optimization, it was compared with seven other algorithms: WO, GGO, CPO, PIO, DBO, POA, and AO. Two sets of experiments were conducted with parameter adjustments. The experiments were simulated using MATLAB R2024a.

5.3.1. Environment 1

The simulation environment consists of a 100 × 100 monitoring area with a Boolean communication model. The number of sensor nodes is N = 30, the sensing radius R_s = 6, and the communication radius R_c = 12. The population size for all algorithms is set to 30, with a maximum of 600 iterations. To avoid result bias due to experimental randomness, each algorithm is independently run 30 times, and the average of the runs is taken as the final result. The coverage curve for the 30th run is shown in Figure 9, and the coverage optimization distribution comparison of the 30th run is shown in Figure 10. As can be seen from Figure 9 and Figure 10, the node distribution of IMWO in a single run is more uniform, and it achieves the highest coverage rate, surpassing the original WO algorithm and the other six comparison algorithms. The average coverage results of the eight algorithms over 30 runs are shown in Table 5 and Figure 11. IMWO's average coverage of 0.9586 is the highest among all algorithms, exceeding WO's 0.9186. Its maximum value of 0.9688 is also the best, higher than POA's 0.9605 and AO's 0.9439, and its minimum value of 0.9454 is the highest, far above WO's 0.7635. IMWO thus has clear advantages over WO: its average coverage is 4.0 percentage points higher, its maximum coverage is 0.88 percentage points higher, and its minimum coverage leads by 18.19 percentage points. This indicates that IMWO not only performs better overall but also shows a significant improvement in stability.

5.3.2. Environment 2

The simulation environment consists of a 150 × 150 monitoring area with a Boolean communication model. The number of sensor nodes is N = 30, the sensing radius R_s = 9, and the communication radius R_c = 18. The population size for all algorithms is set to 30, with a maximum of 600 iterations. As before, each algorithm is independently run 30 times, and the average of the runs is taken as the final result. The coverage curve for the 30th run is shown in Figure 12, and the comparison of the coverage optimization distribution for the 30th run is shown in Figure 13. As can be seen from Figure 12 and Figure 13, the node distribution of IMWO in a single run is more uniform, and it achieves the highest coverage rate, surpassing the original WO algorithm and the other six comparison algorithms. The average coverage results of the eight algorithms over 30 runs are shown in Table 6 and Figure 14, from which the IMWO algorithm demonstrates significant advantages. Its average coverage of 0.9648 is higher than that of all the other algorithms, such as WO's 0.9319 and GGO's 0.8893, and it exhibits the best stability and overall performance. Its maximum value is 0.9803 and its minimum value is 0.9464, both the highest among all algorithms, with minimal fluctuation (the difference between the maximum and minimum is only 0.0339), far superior to WO's fluctuation of 0.1831. This indicates that IMWO maintains high efficiency and stability across different scenarios. Compared to the original WO algorithm, IMWO not only improves the average coverage by approximately 3.29 percentage points but also remedies WO's excessively low minimum value, making it more reliable in extreme cases. The comprehensive advantages of IMWO are evident.

5.4. Feasibility Analysis of Node Deployment

The node layout scheme obtained by the IMWO algorithm in WSN coverage optimization exhibits excellent deployment feasibility in engineering practice, mainly reflected in the following aspects:
(1)
Feasibility of node positions
All node coordinates optimized by the IMWO algorithm fall within the two-dimensional monitoring area, without overlapping or exceeding the boundary. In actual deployment, sensor nodes can be accurately placed via GPS positioning or ground marking, which complies with the spatial constraints of practical deployment.
(2)
Guarantee of communication connectivity
The communication radius R_c and sensing radius R_s adopted in the experiments satisfy the common connectivity condition R_c ≥ 2R_s, ensuring that the optimized network maintains global connectivity while achieving coverage optimization. This avoids isolated nodes and satisfies the networking requirements of practical WSNs.
(3)
Considerations for energy consumption and deployment cost
By optimizing coverage to reduce redundant nodes, the IMWO algorithm minimizes the number of nodes while ensuring a good coverage rate, thereby lowering deployment costs and long-term maintenance energy consumption. The uniform distribution of nodes avoids regional energy hotspots, which helps extend the network lifetime.

6. Conclusions and Future Work

In summary, based on a combined strategy of DE/best/1 mutation, LSC Mapping, and Beta-OBL, this study proposes the IMWO algorithm, which aims to address the shortcomings of the WO algorithm, including its tendency to become trapped in local optima, insufficient stability, and inadequate population diversity. The experimental results show that, in CEC2017 and CEC2022 standard function tests, IMWO achieves average rankings of 1.66 and 1.33, ranking first among eight compared algorithms, outperforming WO’s rankings of 3.17 and 2.67; the IMWO algorithm exhibits smaller performance fluctuations and stronger stability. In WSN coverage optimization, IMWO’s average coverage rates are 95.86% and 96.48% in two experiments, compared with WO’s 91.86% and 93.19%; the IMWO algorithm achieves a higher coverage rate. However, the algorithm still faces challenges: its convergence speed significantly decreases in ultra-high-dimensional optimization scenarios. Furthermore, in dynamic problems where the solution space changes over time, it lacks the capability of real-time adaptive adjustment in search strategies, leading to performance degradation.
Subsequent research will explore hybrid strategies combining IMWO with other intelligent optimization algorithms. By integrating the search advantages of different algorithms, we aim to further enhance global optimization capabilities and improve the algorithm’s stability in complex scenarios. Meanwhile, the algorithm will be extended to fields such as energy-efficient routing in WSNs and complex engineering design to verify its universality, reduce application costs and resource consumption, and provide efficient optimization solutions for engineering problems.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biomimetics11010072/s1, Figure S1: Figure of convergence curve comparison of various algorithms on CEC2017 functions (30-dimensional); Figure S2: (a) Figure of radar chart of various algorithms on CEC2017 functions, (b) Figure of ranking chart of various algorithms on CEC2017 functions (30-dimensional); Figure S3: Figure of convergence curve comparison of various algorithms on CEC2017 functions (50-dimensional); Figure S4: (a) Figure of radar chart of various algorithms on CEC2017 functions, (b) Figure of ranking chart of various algorithms on CEC2017 functions (50-dimensional); Figure S5: Figure of convergence curve comparison of various algorithms on CEC2022 functions (20-dimensional); Figure S6: (a) Figure of radar chart of various algorithms on CEC2022 functions, (b) Figure of ranking chart of various algorithms on CEC2022 functions (20-dimensional).

Author Contributions

Conceptualization, L.L. and X.Z. (Xiancun Zhou); Methodology, L.L. and X.Z. (Xuemei Zhu); Software, X.Z. (Xuemei Zhu); Validation, L.L., Z.W. and J.Z.; Formal analysis, W.P.; Investigation, Y.D. and W.P.; Resources, X.Z. (Xiancun Zhou) and C.J.; Data curation, Y.D. and X.Z. (Xuemei Zhu); Writing—original draft, L.L., Y.D., Z.W. and J.Z.; Writing—review & editing, C.J.; Visualization, Z.W.; Supervision, X.Z. (Xiancun Zhou); Project administration, C.J.; Funding acquisition, W.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Major Projects for Natural Science Research in Anhui Universities (Grant No. KJ2021ZD0116), the Startup Funding Project for High-Level Talents at West Anhui University (Grant Nos. WGKQ2023003, WGKQ2024011), the Open Research Projects of Anhui Dabie Mountain Traditional Chinese Medicine Research Institute (Grant No. TCMADM-2024-07), and the Key Projects of Natural Science Research in Anhui Universities (Grant No. 2024AH051994). The APC was funded by the Key Projects of Natural Science Research in Anhui Universities (Grant No. 2024AH051994).

Data Availability Statement

The original contributions presented in this study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

WO: Walrus Optimization
IMWO: Improved Walrus Optimization
DE: Differential Evolution
LSC: Logistics–Sine–Cosine
Beta-OBL: Beta Opposition-Based Learning
WSNs: Wireless Sensor Networks
GGO: Greylag Goose Optimization
CPO: Chinese Pangolin Optimizer
PIO: Pigeon-inspired Optimization
DBO: Dung Beetle Optimizer
POA: Pelican Optimization Algorithm
AO: Aquila Optimizer

Figure 2. Flow chart of IMWO.
Figure 3. Convergence curve comparison of various algorithms on CEC2017 functions.
Figure 4. Boxplot comparison of various algorithms on CEC2017 functions.
Figure 5. (a) Radar chart of various algorithms on CEC2017 functions; (b) ranking chart of various algorithms on CEC2017 functions.
Figure 6. Convergence curve comparison of various algorithms on CEC2022 functions.
Figure 7. Boxplot comparison of various algorithms on CEC2022 functions.
Figure 8. (a) Radar chart of various algorithms on CEC2022 functions; (b) ranking chart of various algorithms on CEC2022 functions.
Figure 9. Comparison of algorithm coverage rates in the 30th run (Environment 1).
Figure 10. Coverage optimization comparison chart for the 30th run (Environment 1).
Figure 11. Bar chart of average coverage rate from 30 runs (Environment 1).
Figure 12. Comparison of algorithm coverage rates in the 30th run (Environment 2).
Figure 13. Coverage optimization comparison chart for the 30th run (Environment 2).
Figure 14. Bar chart of average coverage rate from 30 runs (Environment 2).
Table 1. Performance comparison of various algorithms on unimodal functions F1 and F3 and simple multimodal functions F4–F10.
F   Results   IMWO   WO   GGO   CPO   PIO   DBO   POA   AO
F1min229.58991514.110410110687.571240.5765271163486565880212.476023.2086299262.5395
std5502.066376485.629810264102753149.02952853477401116835479343198165.58643719.105
avg6120.892254625.22645936896825023.0217751810422.72021031126151397289.55376114.64
median4561.006317709.7228195560412.54861.328715005873.918162160246094448.7932668760.003
worse19609.1082267807.7199457611824215372.091912583297444322879855144769590039599068.44
F3min300.5217321.7214499.2726396.03011228.4376764.9891302.5064437.0015
std80.9322172.32062667.4569689.52621775.62081526.963951.7175831.6011
avg345.9616461.96254188.4581182.58374365.41682971.3503865.59252034.6042
median319.5065417.76243796.10031055.38834449.80372692.4853485.6821942.4904
worse661.93651108.823210224.89663241.39298164.6896491.30063936.24714226.7034
F4min403.0676406.4399408.4511400.1846418.045444.1223400.2534405.7822
std15.431331.4659.461328.129525.532853.50328.904929.3232
avg411.8398422.9193481.5752421.2145449.7287516.417418.4694430.2193
median407.8287408.1152475.8696407.5816443.1113499.3778407.7155410.2051
worse471.4807506.3395636.3441477.5156523.2452615.8136510.0154494.7409
F5min511.9395507.0788511.9397517.9136539.5475533.7721518.0556512.5957
std9.0618.96877.394629.12296.04610.163214.446912.0387
avg521.7371530.989525.1663570.3853552.7717555.0003541.9866532.9793
median518.4093522.4507524.0469569.6486553.6671554.2658539.5077533.2233
worse542.3019568.7045538.6793632.3305566.9499578.5985578.1521561.4106
F6min600.064600.1256603.7472630.0116615.0625615.6831611.3747607.0817
std1.60151.80615.598113.76494.4174.32128.43396.1913
avg601.2462601.4716611.3701653.5707621.3962624.7477622.2958618.4894
median600.5418600.9138609.7271658.0198620.4371624.6354622.7134618.3326
worse605.4175608.239623.8229682.827635.0634634.4149638.9088632.8283
F7min718.7804719.3968732.982744.1282796.8374768.1144739.4003726.5341
std13.532724.449810.723130.930512.903111.538419.160114.0051
avg735.6147745.4903752.5627795.7616825.6774792.7125768.5038752.9987
median733.6222738.4014749.1514808.6529825.2444796.2198768.8613754.2325
worse767.7707800.5921771.5405838.5385846.4149815.0133805.6343780.0947
F8min809.9496809.9724813.529829.8504851.6821824.272811.9712812.4217
std7.647615.54769.15148.23095.05617.69497.03866.9384
avg820.8548827.2489829.7674836.2229861.0853837.7822.101825.6821
median819.9042821.4297829.397834.8244860.3997840.0864822.8976826.1937
worse835.7039861.9793845.7413868.7378868.6984851.9244834.8741845.239
F9min900.0022900.0503916.49281247.93791080.50721005.9067912.7557917.3138
std105.15942.788296.5387240.9497133.130981.2858110.2729115.4174
avg926.2266902.43481021.09181757.06071295.76731097.27251063.74861047.0945
median900.8962901.7309995.19491778.26551248.3491082.93381035.9951003.8728
worse1372.0391911.86261258.91282215.23211576.28451282.39951283.45171385.8568
F10min1252.07991277.05611379.23801624.19172207.13081805.64691147.58641241.2476
std224.127492.2836333.2166689.4722150.1691252.9751227.9144356.083
avg1618.12772052.23982198.0632745.87012552.91552242.7511652.27281992.3633
median1610.72312090.38612152.91872716.56552578.19712201.98131661.91111944.8884
worse1985.59762771.25392852.3083962.68922882.86402638.84922011.12632640.1488
Table 2. Performance comparison of various algorithms on hybrid functions F11–F20.
F   Results   IMWO   WO   GGO   CPO   PIO   DBO   POA   AO
F11min1105.04011106.48451134.20511111.12861158.56831177.84961104.01491137.394
std7.467746.634368.378278.109852.9441131.258123.29664.6975
avg1118.70621126.8221214.95571210.78241247.09351319.60131143.84661221.8512
median1117.52891117.0021189.6571170.33751237.62531302.77731139.92361204.5332
worse1134.54781322.24061384.28621342.96821363.05181681.92311194.25831371.6339
F12min27348.257346356.8737235846.214214407.12344454153.385374046.79163936.76625087.7879
std1087639.392292405.313778282.2183719510.9199500037.6846014236.701469267.11195155460.59
avg1032929.8261619650.4463975088.9783214690.84422725738.816535394.385267695.68514935249.927
median755841.533856456.04522517387.2721822050.19721151447.955552206.15855065.34213301671.905
worse3153999.8569682730.72813357573.4113542585.6637679991.621545629.341915631.87518742703.11
F13min1738.08681639.65993901.65422151.27832485.7583122.24491421.45453145.4109
std8129.73710534.180910455.029210887.4728112060.623616426.2679875.634510265.1175
avg9067.012311264.839118848.596314693.0921111505.882921316.38682178.266316835.5751
median7048.527806.257117449.987710976.605585699.706416966.71121881.784614929.2067
worse31673.122336324.4338565.435539527.3009406801.187666799.24645038.100444257.0383
F14min1460.871491.7071478.69131518.1811484.59971593.68171428.79631532.2278
std357.3378753.867348.71424380.5527123.14171507.2924.4006686.1676
avg1665.41032085.43541638.47125824.15831583.35822479.04671459.96962159.1357
median1526.59961665.97451537.78643749.58061539.42621933.40261457.46821744.1782
worse3028.52814035.35373089.991315169.04421924.05478042.35261519.40663597.89
F15min1586.89662088.65621733.33965306.13251729.34792075.9931540.59182029.2618
std1778.54082181.52933860.151125471.54471953.68932109.7022113.56223533.8271
avg2933.84225367.53775769.298237374.91364207.67325017.73361669.63917203.601
median1950.5334944.76714190.038134875.72353833.94564339.29031629.4717638.7217
worse7731.72038926.768316161.7631115410.639910713.20539328.23431989.814312303.0979
F16min1601.93631604.69341611.47431772.65971621.23251631.69251603.50871628.6066
std57.0825135.0853129.6932214.2007114.0101119.8398145.8043117.0683
avg1663.96561734.38021802.25842147.26881804.08061795.30711831.57891817.0119
median1641.99591684.41581768.89262234.81451794.37301758.30411867.86761836.5927
worse1735.43082039.99122028.16632603.52731985.71321989.17712022.57632068.2059
F17min1719.28551723.13861736.6071752.42411765.32481754.83021738.78651748.5383
std10.731737.388823.1437122.396633.219417.323114.940823.4107
avg1742.37371756.52631769.41251862.23621821.7221788.24891755.69891783.482
median1741.77831744.76841765.05091810.64851820.4951786.4871753.18021776.4348
worse1766.11771889.3531823.65592140.87121894.29901838.45551800.57591832.8222
F18min2382.75844001.10333876.94393144.11328183.261312349.54971873.29376594.704
std12327.813813419.658812593.630513079.3196221604.052563870.4902767.619029043.4261
avg13699.983921548.935218531.99318447.4066185126.943262538.33612225.204834108.2616
median9639.117517029.143117230.76315288.3768122130.830345415.1891991.410230008.1689
worse51091.542948990.619251885.692544616.3889987679.913277304.60695338.0863144154.2336
F19min1973.00252021.18021949.47973845.51652018.30832012.19231907.22032062.3777
std3753.64678587.381734360.95732875.8073531.344412346.546638.3979141737.9888
avg4448.53619329.843127483.455626485.27684560.89317649.2621949.130758059.5065
median3094.36885356.64621244.760317978.68663041.55493243.21061935.088911388.2765
worse15919.891231064.9079121488.0559145236.867514917.241750845.43882048.3450594439.9826
F20min2019.80322021.08162068.42602077.55572066.93072044.78202026.25472060.0220
std20.053268.911144.1455105.219119.411545.312033.730865.5272
avg2042.80152085.73542150.47432278.09712103.90872121.80432082.86272141.9518
median2041.87042048.13122166.82532266.03542097.67422119.42722080.08212115.6644
worse2092.14502243.81742205.47862431.69262145.66502183.77302164.86232278.1432
Table 3. Performance comparison of various algorithms on composite functions F21–F30.
F   Results   IMWO   WO   GGO   CPO   PIO   DBO   POA   AO
F21min2200.15962201.7182219.4412204.15952205.88952207.3812202.55772206.0188
std2.040053.545536.403046.091666.38579.229864.879453.6815
avg2204.04452235.26222315.99462347.90902311.03272219.53582276.54352296.7503
median2203.58112206.58582326.15012351.00242351.04132217.79892321.56312321.8994
worse2208.37372365.3722346.82872406.33742362.55612245.06792353.86642347.8017
F22min2301.10312215.58472243.85152301.15982287.28572290.00092229.29892308.5467
std1.228520.136982.8172367.597733.889586.754328.1696.8472
avg2302.63252300.18152341.82552421.27242396.72792513.32582308.61972313.8048
median2302.15862303.49712321.18852305.97402400.37122515.59682306.07922312.2466
worse2305.27212313.32212628.61573754.59212438.7242722.82092370.16532338.9287
F23min2300.63082611.0242614.4022628.77652633.09562644.84852622.06802621.1898
std99.68914.662211.630773.95438.714110.846521.524612.3309
avg2590.5482632.40792626.10432738.95352648.21922665.67172657.30922645.7939
median2617.77072628.78962622.18592729.07692648.04802665.28762656.95252644.7848
worse2645.57682657.06432657.06212879.41252666.14822686.88082704.13972667.918
F24min2500.01982500.7062571.40552781.05332560.6752556.35132506.23532504.2126
std112.742581.067457.453680.69688.626571.5396129.159497.8817
avg2692.74882731.95062741.192874.81792725.48652656.47712675.11782731.2863
median2751.58572747.16022759.16532855.5962778.00512630.25872755.67992769.9843
worse2774.89762789.70762772.02343082.77672795.76382795.71262809.40172791.1542
F25min2898.87792899.45872950.27652898.65512960.42332944.38062899.1942900.482
std19.131117.833242.058421.031814.754870.406430.710122.9014
avg2941.98762940.97132987.2322931.19772980.07123030.50172924.00632936.3274
median2947.47622947.59512964.06832943.78902983.31653003.96512915.11572948.0836
worse2971.36062951.6853099.1992946.80053018.30353171.60763015.92332957.1229
F26min2800.27022800.5162928.47732815.71082991.55222720.51262612.32712831.0166
std86.729789.8092370.3839479.236343.5265150.6896368.9634166.8563
avg2964.16072946.78293246.32013793.59623052.31973196.51513046.12333119.2933
median2988.4712990.09613091.23883863.63693043.07753196.12452971.75463094.1377
worse3095.65433034.45954057.61854619.8233196.42773396.04984188.64393462.145
F27min3089.41083090.21113099.39193100.53433097.23233097.89913090.59513094.3498
std2.50292.930336.883894.09784.18219.870220.321211.8237
avg3093.79573094.30353139.90023198.27973102.79553113.07153106.40083109.8237
median3093.66283093.61243123.66583194.44693101.51583111.27953098.69773107.7196
worse3098.53643100.73893210.94133489.53373110.13683135.68463164.90493139.0721
F28min3100.04773101.72913148.16313171.9423204.71443225.70183164.82543147.6581
std115.7014106.1308174.169126.737771.0156122.1159108.430396.4327
avg3258.2533343.29893527.9433427.96383261.08953325.59353272.99123388.1083
median3208.60713407.25863540.60673417.98693233.47473273.73343223.20543414.324
worse3411.82663455.13983776.23883731.81293431.32713654.62273457.69253526.2244
F29min3164.47993158.45643176.00473210.56653177.28863183.59053138.59893171.1203
std37.965755.515975.7172192.654847.324649.622959.150871.1785
avg3205.19033221.69573306.52313461.61913249.95843264.80443218.92083268.1494
median3199.2643204.96383310.41563452.1693235.91653257.85123213.82913257.996
worse3272.62183361.19133469.05873851.15523355.92573376.38743379.57623472.2724
F30min30252.599066213.6551582215.02194267512.7744135508.965234862.082334315.27965129057.08821
std428988.9833568380.87892339630.32639092962.951056614.115843596.2531649208.30271247700.773
avg483785.2761638542.69622093586.08414076373.831175731.911037324.466264744.04961256386.523
median543179.61613118.48051508209.0032841227.959627601.5527673404.001124487.66634667294.2432
worse1337783.6261761182.42710204750.19177048702.64046653.2382496848.8542670235.0213657021.434
Table 4. Performance comparison of various algorithms on CEC2022 functions.
F   Results   IMWO   WO   GGO   CPO   PIO   DBO   POA   AO
F1min301.2319308.81271407.63232847.89521183.12282378.2323326.6935923.2229
std30.050879.11582430.26147886.33881069.19841441.3639513.28462038.0502
avg333.7049400.39156860.732413716.27283581.03444621.1677563.21723957.3425
median325.4441399.89516471.106713833.74993333.85564402.5659418.28213487.0106
worse410.9764642.256812779.438629256.59495607.01027228.99312634.15659775.7761
F2min400.1661407.6442411.5287400.3287424.5453460.0716400.1088402.8532
std19.835633.226541.152735.623326.2288144.95170.511656.6608
avg413.5831430.0402467.9772431.727462.556591.7204447.4026446.3143
median408.917408.9525476.8998408.9487458.1857525.3774417.1407422.4101
worse475.4617493.0827556.8243495.7052506.7973970.1802711.1684622.2135
F3min600.0385600.0983603.3525616.8075615.3295613.8086602.4643609.2133
std1.34072.05036.246114.27283.07295.327810.31766.8029
avg601.004601.9845611.9256644.0827622.2025625.5383619.674619.1271
median600.5784601.6579612.1653640.0381622.0826625.9717622.9509618.6235
worse605.6112607.8397626.6052671.8257629.0297635.0863639.5044633.8353
F4min807.962807.0038812.5257815.9245840.7054828.579806.2814810.3431
std7.134118.16246.47633.77948.79924.8586.18087.8419
avg824.8245834.1466825.8031830.7508854.7241835.5376819.5077822.2671
median827.3614829.8489826.4992831.8396853.5553835.35819.6611822.231
worse832.8337858.2505837.349832.8971870.0457846.9005829.8586837.9829
F5min900.1791900.1281904.11841284.52141000.60131020.6565911.7111921.4213
std35.152911.766661.5948169.5416128.549492.1580112.711696.0731
avg923.9841909.83980.01111587.51241260.21051152.19321070.84191028.8835
median909.6108906.7935963.50061514.90581240.03281117.83121065.31741024.7388
worse1019.7571946.92831111.57181836.99331591.24451334.2451286.50361338.875
F6min1873.03191892.95031933.80121996.6269360888.77549780.40731839.17013064.4658
std1055.79191567.54126796.3631980.73642059366.7152494344.1071851.580029735.7878
avg2719.8652923.146721885.00383567.68642938778.369996113.44173074.054228722.8259
median2159.5032083.70829678.1242830.56332813182.175216950.04272049.331718418.2677
worse5177.37117215.9152114474.71148107.77658241636.61611169295.278117.5537125798.9876
F7min2001.63662018.25342032.8942043.02662042.70652030.28682018.38552024.1328
std6.945213.95714.700547.40439.68516.965612.699818.444
avg2023.13932032.21022050.98432123.87512061.72012063.07782035.8092047.7646
median2022.35472026.00842048.1472128.00612062.43752060.20152034.18882043.1636
worse2037.47022067.09892083.6512230.42022085.88262099.86672072.05952085.8293
F8min2216.89622221.30732219.76822220.4892230.18682224.94712204.93862223.9326
std2.40664.55643.039572.52513.30914.56696.40733.5558
avg2223.77192226.38142229.00512290.61092234.62342231.93772221.16052228.5565
median2223.842224.45372229.53732261.70842233.77052231.47832222.87452227.489
worse2229.1272239.39342233.92562443.43252240.7852245.70192229.06512237.9449
F9min2529.28442529.28442546.40462537.26392542.02342573.93422529.32172569.9105
std12.519118.257751.650538.270413.572450.686520.543323.3337
avg2535.12272544.50912657.02352589.49962558.66112646.67582543.61522599.7523
median2529.28732533.80822658.89122574.4982557.31662650.91842532.19452598.0737
worse2573.29532578.71412747.36652650.17192601.8972755.01182603.08212641.5812
F10min2500.29232500.45822500.46452500.73162501.21972501.04402500.47132500.7794
std0.207466.136963.7310545.407752.438053.246652.071357.6326
avg2500.68582557.90702562.83682892.09442520.37682529.05542527.06842586.4009
median2500.68442501.08502559.95842663.74992503.17912507.03022500.82702617.7033
worse2501.24732663.73072635.65114427.13922677.72962663.61622648.11792647.5973
F11min2600.30722602.32022737.01972600.44523377.79942758.43992612.51962622.2337
std148.1347202.0257287.7346388.8202484.777292.4758194.2474102.6919
avg2728.88112747.27803146.83922890.90264653.54512855.91182883.46712751.4343
median2750.52132746.89113249.25892910.27354842.55832843.51442777.11492751.0351
worse3202.96443354.73013570.92754422.62425109.01453178.15543226.00652930.2359
F12min2862.56742862.56742871.22092871.63282866.74512868.89502862.69652865.0492
std1.49034.112847.331670.27661.516218.50124.94768.7370
avg2864.26952865.53472916.48212948.49622868.80182885.73622868.03902872.1634
median2863.75962864.29382904.57432930.22092868.53322884.10212866.71112869.2006
worse2500.29232500.45822500.46452500.73162501.21972501.04402500.47132500.7794
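Each function row in Tables 1–4 condenses the final fitness values from repeated independent runs into five statistics: minimum, standard deviation, average, median, and worst value. A minimal sketch of how such summary rows are derived for a minimization problem (the run values below are illustrative placeholders, not data from the paper):

```python
import statistics

def summarize_runs(final_fitness):
    """Summarize per-run best fitness values into the five statistics
    reported per benchmark function. For minimization, the largest
    value is the worst result (labeled "worse" in the tables)."""
    return {
        "min": min(final_fitness),
        "std": statistics.stdev(final_fitness),   # sample standard deviation
        "avg": statistics.fmean(final_fitness),
        "median": statistics.median(final_fitness),
        "worst": max(final_fitness),
    }

# Illustrative fitness values from five runs:
row = summarize_runs([300.52, 321.72, 305.10, 310.00, 302.31])
print(row["min"], row["avg"], row["worst"])
```

In the paper's experiments these statistics are computed over 30 runs per algorithm and function; a low std alongside a low avg is what indicates the stability that the boxplots in Figures 4 and 7 visualize.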
Table 5. Statistical results of coverage rate from 30 runs (Environment 1).
Algorithm   IMWO     WO       GGO      CPO      PIO      DBO      POA      AO
average     0.9586   0.9186   0.8870   0.7732   0.8438   0.9071   0.9423   0.9155
maximum     0.9688   0.9600   0.9234   0.8102   0.8632   0.9440   0.9605   0.9439
minimum     0.9454   0.7635   0.8570   0.7411   0.8300   0.8720   0.9085   0.8721
Table 6. Statistical results of coverage rate from 30 runs (Environment 2).
Algorithm   IMWO     WO       GGO      CPO      PIO      DBO      POA      AO
average     0.9648   0.9319   0.8893   0.7806   0.8515   0.9187   0.9460   0.9176
maximum     0.9803   0.9712   0.9316   0.8220   0.8646   0.9501   0.9583   0.9420
minimum     0.9464   0.7881   0.8557   0.7517   0.8356   0.8755   0.9163   0.8907
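The coverage rates in Tables 5 and 6 can be approximated with the standard grid-based scheme for WSN coverage under a Boolean disc sensing model: sample the field on a uniform grid and count the fraction of points within the sensing radius of at least one node. A minimal sketch (the field size, radius, and node positions below are illustrative assumptions, not the paper's experimental settings):

```python
import math

def coverage_rate(nodes, radius, width, height, grid=100):
    """Approximate the area coverage of circular-disc sensors on a
    width x height field. A grid point counts as covered if it lies
    within `radius` of any node in `nodes` (list of (x, y) tuples)."""
    covered = 0
    for i in range(grid):
        for j in range(grid):
            # sample at the center of grid cell (i, j)
            x = (i + 0.5) * width / grid
            y = (j + 0.5) * height / grid
            if any(math.hypot(x - nx, y - ny) <= radius for nx, ny in nodes):
                covered += 1
    return covered / (grid * grid)

# One sensor of radius 25 centered on a 50 x 50 field covers
# roughly pi/4 of the area (about 78.5%):
print(coverage_rate([(25.0, 25.0)], 25.0, 50.0, 50.0))
```

Under this formulation, coverage optimization searches over the node coordinates to maximize the returned rate, which is how the deployments compared in Figures 9–14 are scored.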
Li, L.; Ding, Y.; Zhou, X.; Zhu, X.; Wu, Z.; Peng, W.; Zhang, J.; Jia, C. Multi-Strategy Fusion Improved Walrus Optimization Algorithm for Coverage Optimization in Wireless Sensor Networks. Biomimetics 2026, 11, 72. https://doi.org/10.3390/biomimetics11010072
