Article

A Fusion Multi-Strategy Gray Wolf Optimizer for Enhanced Coverage Optimization in Wireless Sensor Networks

1 School of Communication and Electronic Engineering, Jishou University, Jishou 416000, China
2 School of Transportation, Changsha University of Science and Technology, Changsha 410114, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(17), 5405; https://doi.org/10.3390/s25175405
Submission received: 2 July 2025 / Revised: 21 August 2025 / Accepted: 25 August 2025 / Published: 2 September 2025
(This article belongs to the Section Sensor Networks)

Abstract

Wireless sensor networks (WSNs) are fundamental to applications in the Internet of Things, smart cities, and environmental monitoring, where coverage optimization is critical for maximizing monitoring efficacy under constrained resources. Conventional approaches often suffer from low global coverage efficiency, high computational overhead, and a tendency to converge to local optima. To address these challenges, this study proposes the fusion multi-strategy gray wolf optimizer (FMGWO), an advanced variant of the Gray Wolf Optimizer (GWO). FMGWO integrates various strategies: electrostatic field initialization for uniform population distribution, dynamic parameter adjustment with nonlinear convergence and differential evolution scaling, an elder council mechanism to preserve historical elite solutions, alpha wolf tenure inspection and rotation to maintain population vitality, and a hybrid mutation strategy combining differential evolution and Cauchy perturbations to enhance diversity and global search capability. Ablation studies validate the efficacy of each strategy, while simulation experiments demonstrate FMGWO’s superior performance in WSN coverage optimization. Compared to established algorithms such as PSO, GWO, CSA, DE, GA, FA, OGWO, DGWO1, and DGWO2, FMGWO achieves higher coverage rates with fewer nodes—up to 98.63% with 30 nodes—alongside improved convergence speed and stability. These results underscore FMGWO’s potential as an effective solution for efficient WSN deployment, offering significant implications for resource-constrained optimization in IoT and edge computing systems.

1. Introduction

In recent years, significant advancements in the fields of artificial intelligence (AI) and microelectronics have contributed to the pervasive adoption of wireless sensor networks (WSNs) in various applications, including the Internet of Things (IoT), smart cities, and environmental monitoring [1]. WSNs facilitate the operation of intelligent systems by collecting real-time data, including temperature, humidity, and chemical concentration, from distributed sensor nodes [2]. Moreover, in dynamic scenarios such as disaster response, military reconnaissance, or adaptive environmental monitoring, WSNs must rapidly adjust to changing conditions, necessitating mobile sensor nodes capable of dynamic deployment. However, the coverage optimization problem persists as a pivotal challenge in WSN design, namely, how to maximize the coverage of the monitoring area while ensuring connectivity and energy efficiency under the constraints of node number, energy, and computational resources. This problem directly impacts the accuracy of monitoring and the efficiency of resources in WSNs, especially in dynamic and heterogeneous environments, thereby limiting their reliability in intelligent networks [3]. The optimization of WSN coverage is imperative to enhance network performance and facilitate the extensive implementation of IoT applications [4].
Conventional optimization methodologies often prove challenging when it comes to addressing the intricacies inherent in the WSN coverage problem. Static deployment strategies, while effective in controlled environments, struggle to adapt to environmental changes or node failures, limiting their flexibility in real-world applications. In contrast, dynamic deployment enables sensor nodes to reposition themselves, enhancing network flexibility and robustness under resource constraints. Swarm intelligence algorithms offer a novel approach to addressing this challenge. The efficacy of Particle Swarm Optimization (PSO) in optimizing node positions is contingent upon swarm collaboration, yet its performance is constrained by the speed of convergence [5]. Ant Colony Optimization (ACO) is predicated on the principle of colony collaboration to optimize node locations; however, it is constrained by high computational overhead [6]. Deep reinforcement learning has demonstrated remarkable proficiency in dynamic optimization; however, its substantial resource requirements impose significant limitations on its practical applications [7]. The deficiencies of these methodologies underscore the pressing necessity for efficient and robust algorithms to address the high-dimensional search space and real-time constraints in WSN coverage problems.
In order to address these practical issues, a thorough analysis of the particular requirements for optimizing the coverage of WSNs was conducted. This analysis revealed that the gray wolf optimization (GWO) algorithm offers an effective optimization framework by emulating the collaborative behavior of wolves. However, the GWO algorithm is subject to certain limitations, including an uneven initial population distribution arising from random initialization, an imbalance between exploration and exploitation, and a tendency to become trapped in local optima [8]. Therefore, in response to the pressing need to address the challenge of optimizing network coverage, we propose a fusion multi-strategy gray wolf optimizer (FMGWO), which significantly improves search efficiency and stability through electrostatic field initialization, dynamic parameter tuning, an elder council mechanism, head wolf tenure checking and a rotation mechanism, and a hybrid mutation strategy. This approach directly addresses the inefficiency and instability of existing techniques in complex WSN scenarios and achieves higher coverage with fewer nodes, outperforming algorithms such as PSO, GWO, CSA, DE, GA, FA, OGWO, DGWO1, and DGWO2. FMGWO not only improves the performance of WSN deployments but also provides a reference framework for optimization in the IoT, edge computing, and efficient smart systems, demonstrating its potential for broader application to optimization problems.

2. Related Work

The issue of coverage in WSNs represents a fundamental challenge in various application areas, including the IoT, smart cities, and environmental monitoring. In these contexts, the objective is to optimize the coverage area while ensuring network connectivity and energy efficiency within the limitations imposed by the number of nodes, energy resources, and computational capabilities [9]. A plethora of studies have been conducted on this issue, proposing a variety of optimization methods, primarily focusing on two strategies: deterministic and random deployment [10].
Deterministic deployment is a methodical process that ensures comprehensive area coverage and network connectivity by meticulously positioning sensor nodes at predefined locations. This approach is particularly well suited for scenarios where environmental information is known, such as industrial monitoring or smart buildings [11]. Boualem A et al. have proposed a minimum semi-deterministic deployment model for mobile wireless sensor networks based on Pick’s theorem. This model effectively reduces node mobility and enhances network lifetime and coverage efficiency [12]. Zrelli A et al. investigated the k-coverage and connectivity of WSNs in border monitoring. They explored the optimal model under deterministic deployment to determine the minimum number of sensors. They also proposed a hybrid WSN scheme for Tunisian–Libyan border monitoring to achieve efficient connectivity and event detection [13]. However, deterministic deployment is contingent on detailed environmental information obtained in advance and is challenging to implement in dynamic or unknown environments, thereby limiting its flexibility and practical application [14].
Conversely, random deployment does not necessitate any a priori environmental data, and the deployment process is straightforward and adaptable, making it well suited to large-scale or emergency scenarios, such as forest fire monitoring or battlefield reconnaissance [15]. The random scattering of nodes, a deployment strategy that aims to reduce complexity, often results in coverage blindness or excessive node clustering due to an uneven distribution of nodes. This, in turn, affects coverage and connectivity [16]. In order to address these issues, researchers have devised post-optimization techniques to enhance the coverage effect by adjusting node positions. Furthermore, they have conducted theoretical analyses of the optimal and worst coverage scenarios and have proposed various improvement algorithms. These algorithms have the potential to alleviate the coverage unevenness problem to a certain extent [17]. However, random deployment continues to encounter challenges, including node redundancy and energy expenditure. Consequently, the development of more efficient deployment optimization schemes is imperative [18].
Beyond static placement problems, recent research has increasingly addressed dynamic coverage scenarios where sensors are mobile or environmental conditions change over time [19]. Approaches to dynamic coverage often rely on adaptive re-deployment, online control policies, or evolutionary algorithms that can handle time-varying objectives [20]. In particular, multi-objective evolutionary algorithms and multi-task learning frameworks have been proposed to handle conflicting goals and exploit knowledge transfers between related tasks or time instances [21]. Such methods can include interval-based uncertainty handling, inverse-mapping techniques for rapid re-initialization, or learning-based surrogates to accelerate solution updates [22]. While these dynamic and multi-objective methods are powerful for mobile or uncertain WSNs, they typically involve more complex objective formulations, additional communication for sensor mobility, or a higher computational burden [23].
In light of the limitations of conventional optimization methodologies, swarm intelligence algorithms offer a novel approach to WSN coverage optimization due to their capacity to generate complex intelligent behaviors through local interactions of simple individuals. Song J et al. proposed an enhanced algorithm, NEHPO, based on the prey–predator optimization (HPO) algorithm to address the challenges posed by the low search accuracy of HPO and its propensity to converge on a local optimum, with the objective of enhancing WSN coverage performance [24]. Although NEHPO improves the overall search capability, its hybrid structure introduces additional parameter-tuning complexity and increases computational overhead during each iteration. Furthermore, in large-scale WSN deployments, the algorithm may still suffer from unstable convergence behavior due to sensitivity to initial population distribution, and the improvements in local exploitation do not fully eliminate the tendency to stagnate when facing multimodal search landscapes.
Wang J et al. proposed an algorithm for optimizing network coverage based on the Improved Salpa Swarm Intelligence Algorithm (ATSSA). This algorithm enhances the global and local searching ability through the use of a tent chaotic sequence initialization of the population, a T-distribution variation, and an adaptive position-updating formula. The purpose of these elements is to improve WSN coverage and reduce node mobile energy consumption [25]. Despite these enhancements, ATSSA’s reliance on chaotic sequence initialization may result in excessive randomness when the search space is very large, potentially causing inefficient early-stage exploration. The incorporation of a T-distribution variation, while beneficial for avoiding premature convergence, can produce large and unpredictable position jumps that slow convergence when the search approaches near-optimal regions. Additionally, the adaptive update mechanism adds computational complexity and requires careful parameter calibration to maintain a suitable balance between exploration and exploitation across different network scales.
Chen X et al. proposed a hybrid butterfly–predator optimization (HBPO) algorithm with a dynamic quadratic parameter adaptive strategy, and the hybrid butterfly–beluga optimization algorithm (NHBBWO), which combines the advantages of beluga and butterfly optimization algorithms to improve WSN node coverage area and reduce redundancy [26]. Although HBPO and NHBBWO benefit from hybridization by integrating complementary search mechanisms, this combination inherently increases algorithmic complexity, leading to longer computation times per iteration. The performance gain is also highly dependent on appropriate parameter weighting between the two embedded metaheuristics, which reduces robustness in dynamically changing WSN environments. Moreover, hybrid algorithms may suffer from redundant search behavior when the optimization process lacks effective cooperation control, thus causing wasted computational effort and reduced scalability in large-scale deployments.
Despite the advances made by metaheuristic algorithms such as HPO and the salp swarm algorithm, persistent challenges remain across most swarm intelligence-based coverage optimization methods. The majority encounter performance degradation in high-dimensional or large-scale networks, with slow convergence rates and an increased computational overhead from complex parameter tuning. Their search behaviors tend to collapse prematurely toward local optima, especially in rugged or multimodal landscapes. In mobile node scenarios, frequent relocation amplifies energy consumption, limiting the real-world applicability of these solutions. Notably, population diversity often decreases rapidly in the early search stages, reducing the algorithm’s global search capacity. These limitations impede effective node deployment optimization and have prompted researchers to investigate more efficient swarm intelligence algorithms to address the complexity of WSN coverage optimization.
To address these limitations, the GWO, introduced by Mirjalili et al., has emerged as a competitive alternative [27]. Inspired by the social hierarchy and hunting behavior of grey wolf packs, GWO provides an efficient metaheuristic optimization framework by simulating the wolf pack collaboration mechanism [27]. In comparison with algorithms such as HPO and Salpa, GWO demonstrates superiority in the optimization of WSN coverage with its simplicity and reduced computational cost, thereby effectively enhancing the coverage rate [28]. However, the performance of GWO is limited due to the uneven distribution of the initial population, the lack of flexibility in the linear convergence strategy, and the tendency to fall into local optimization in complex, high-dimensional search spaces [29].
Specifically, populations that are randomly initialized may result in inadequate coverage of the solution space [30], linear convergence factors encounter challenges in achieving a balance between global exploration and local exploitation [31], and a single-head wolf guidance mechanism may lead to convergence stagnation in multi-peak optimization scenarios [32]. These issues are more prominent in the WSN optimization coverage problem, which affects the robustness of the algorithm and the actual deployment effect to a certain extent.
In summary, deterministic and random deployment strategies offer their own advantages in terms of WSN coverage optimization. However, the limitations of these strategies highlight the necessity of advanced optimization techniques. As a class of advanced optimization techniques, swarm intelligence algorithms provide effective solutions for coverage optimization through global search and adaptive mechanisms. We chose the GWO framework as the baseline for developing FMGWO for several pragmatic and theoretical reasons. First, GWO is a population-based metaheuristic whose leadership hierarchy and hunting mechanism provide an intuitive balance between exploration and exploitation, which aligns well with the nature of continuous node-placement problems in WSN coverage. Second, the canonical GWO possesses a relatively small number of tunable parameters and a simple update scheme, which makes it straightforward to analyze and hybridize with auxiliary mechanisms. Third, the literature shows that GWO exhibits competitive performance on many continuous optimization benchmarks, and it has been successfully applied to placement and coverage problems; thus, it represents a suitable and well-understood starting point for targeted algorithmic improvements. At the same time, the standard GWO has known limitations—most notably a tendency toward premature convergence and diminished population diversity in later stages of optimization. These limitations motivate the targeted enhancements presented in this work, each designed to alleviate specific weaknesses without substantially complicating the algorithm’s structure. The findings of the present study propose the implementation of FMGWO, with the objective of achieving substantial enhancement in the coverage and resource efficiency of WSNs, thereby addressing the prevailing challenges in the context of practical applications.

3. Wireless Sensor Network Coverage Problem and Standard GWO

3.1. Sensor Network Node Coverage Model

In WSNs, the sensing capability of sensor nodes is a pivotal component in ensuring the efficacy of network monitoring. In this study, the sensing range of a node is defined as a circular area centered on the node with a sensing radius of R. It is demonstrated that only monitoring points within this area can be effectively sensed. The sensing radius, R, a pivotal parameter, directly determines the coverage efficiency of wireless sensor networks. In order to facilitate an analysis of complex problems, this study simplifies the WSN coverage area into a two-dimensional plane and develops the study based on the following idealized assumptions.
Assumption 1.
The sensing range of all nodes is a perfect circular area and is not affected by any obstacles or environmental factors.
Assumption 2.
All sensor nodes have the same hardware structure and sensing capability to ensure a consistent sensing range.
Assumption 3.
All sensor nodes are mobile and can sense other nodes within their sensing range in real time and obtain the precise location information of these nodes.
Based on the above assumptions, the WSN coverage model is as follows.
In this paper, the monitoring area is defined as a two-dimensional rectangular plane whose length is denoted as Y and whose width is denoted as Z, while the total area is Y × Z. In the planar rectangular coordinate system, the four vertices of this rectangular area have the following coordinates: (0, 0), (0, Z), (Y, 0), and (Y, Z). In the discretization process, the monitoring area is divided into n equal-area and symmetric grids. The center point of each grid is designated as the monitoring point, and its set is represented as follows.
J = \{ J_j = (x_j, y_j) \mid j = 1, 2, \ldots, n \}
Next, v sensor nodes are randomly deployed in the monitoring area, the set of which is represented as follows.
O = \{ O_i = (x_i, y_i) \mid i = 1, 2, \ldots, v \}
All nodes adhere to a Boolean perception model, with each node possessing a perception radius of R.
The calculation of the Euclidean distance between the sensor node, Oi, and the monitoring point, Jj, is achieved through the following equation.
d(O_i, J_j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}
In Equation (3), d ( O i , J j ) denotes the Euclidean distance between sensor node Oi and monitoring point Jj. The Boolean perception function, P(Oi,Jj), is utilized to ascertain whether the sensor node is capable of detecting the target, the definition of which is as follows.
P(O_i, J_j) = \begin{cases} 1, & \text{if } d(O_i, J_j) \le R \\ 0, & \text{otherwise} \end{cases}
In the event that this distance is less than or equal to the sensing radius, R, the monitoring point, Jj, is within the sensing range of the sensor node, Oi. This indicates that the grid where the monitoring point, Jj, is located is covered by the WSN. As the same monitoring point, designated as Jj, may be repeatedly sensed via multiple sensor nodes, the joint probability of sensing monitoring point J via all sensor nodes is given by the following formula.
\eta(O_{\mathrm{all}}, J_j) = 1 - \prod_{i=1}^{v} \left( 1 - P(O_i, J_j) \right)
In Equation (5), η ( O all , J j ) denotes the joint probability that monitoring point Jj is sensed via at least one node in the monitoring area, Oall denotes all sensor nodes, and v is the total number of sensor nodes.
The evaluation of the performance of a WSN is contingent upon the utilization of an appropriate metric. One such metric is coverage, which is of paramount importance in this regard. In this model, the coverage ratio is defined as the ratio of the coverage area to the total area of the monitoring area. The coverage area is calculated by the product of the sum of the joint perception probabilities of all the monitoring points and the area of each grid. The calculation of the coverage ratio is as follows:
C_r = \frac{\sum_{j=1}^{n} \eta(O_{\mathrm{all}}, J_j) \times \kappa}{Y \times Z}
\kappa = \frac{Y \times Z}{n}
In Equations (6) and (7), Cr denotes the coverage ratio, Y × Z denotes the total area of the monitoring area, κ is the area of a single grid, and n is the number of grids. The coverage of the WSN is obtained by calculating the ratio of the area of the covered grid to the total area of the monitoring area.
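For concreteness, the coverage model of Equations (1)–(7) can be expressed in a few lines of code. The following is a minimal Python sketch, not the authors' implementation; it assumes a square layout of grid_n × grid_n monitoring points, and the function name coverage_ratio and the example values are illustrative only.

```python
import numpy as np

def coverage_ratio(nodes, Y, Z, R, grid_n):
    """Boolean-sensing coverage ratio C_r of Equations (1)-(7).

    nodes  : (v, 2) array of sensor coordinates O_i = (x_i, y_i)
    Y, Z   : length and width of the rectangular monitoring area
    R      : sensing radius
    grid_n : grid cells per axis, so n = grid_n**2 monitoring points (assumed layout)
    """
    # Grid-cell centers J_j = (x_j, y_j) of the discretized area
    xs = (np.arange(grid_n) + 0.5) * (Y / grid_n)
    ys = (np.arange(grid_n) + 0.5) * (Z / grid_n)
    gx, gy = np.meshgrid(xs, ys)
    points = np.stack([gx.ravel(), gy.ravel()], axis=1)                 # (n, 2)

    # Euclidean distances d(O_i, J_j), Equation (3)
    d = np.linalg.norm(points[:, None, :] - nodes[None, :, :], axis=2)  # (n, v)

    # Boolean perception P(O_i, J_j), Equation (4), and joint probability, Equation (5)
    covered = d <= R
    eta = 1.0 - np.prod(1.0 - covered, axis=1)

    # Coverage ratio, Equations (6)-(7): covered grid area over total area
    kappa = (Y * Z) / points.shape[0]
    return float(eta.sum() * kappa / (Y * Z))

# Example: 30 randomly scattered nodes in a 50 x 50 area with R = 5
rng = np.random.default_rng(0)
nodes = rng.uniform(0, 50, size=(30, 2))
print(coverage_ratio(nodes, Y=50, Z=50, R=5, grid_n=50))
```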

3.2. Coverage Optimization Model

In order to achieve the optimal solution for the coverage performance of WSN in dynamic deployment, this study takes the coverage Cr as the optimization objective and constructs the corresponding objective function with the following mathematical expression.
\max \; C_r \quad \text{s.t.} \quad O_i \in G, \; J_j \in G
In Equation (8), Cr denotes the coverage rate of the WSN, which is determined by Equation (6). Oi and Jj represent the coordinates of the i-th sensor node and the j-th monitoring point in the monitoring area, respectively, and G is the monitoring area.
The present study proposes a solution to the aforementioned optimization problem by introducing the GWO algorithm and embedding the objective function Cr into the fitness function of GWO. In the implementation of the algorithm, the monitoring area, G, is defined as the search space, and the coordinates of all v sensor nodes are mapped as dimension parameters of the gray wolf individuals. The first v dimensions correspond to the x-axis coordinates of all deployed nodes, and the last v dimensions correspond to their y-axis coordinates. The GWO iteratively optimizes the population positions through a high-frequency updating mechanism, and constraints within the search space ensure the feasibility of the solutions. Consequently, the optimal individuals resulting from the convergence of the algorithm correspond to the optimal node distribution scheme of the WSN, thereby maximizing the coverage. Note that the proposed FMGWO does not always guarantee finding the global optimal placement of sensor nodes, as it is a metaheuristic algorithm that approximates optimal solutions in complex, NP-hard problems like WSN coverage optimization. The optimal placement would ideally cover the rectangle with minimal overlap and no gaps, but due to the problem’s complexity, FMGWO provides near-optimal solutions, as validated through comparisons with other heuristics in the experiments.
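To illustrate how the coverage objective is embedded in the optimizer, the sketch below decodes a 2v-dimensional individual into node coordinates following the encoding described above (first v dimensions are x-coordinates, last v are y-coordinates) and returns the negative coverage ratio, so that a minimizing metaheuristic maximizes Cr. The negation and the helper name wsn_fitness are illustrative assumptions, and the sketch reuses the coverage_ratio function from the previous example.

```python
import numpy as np

def wsn_fitness(wolf, v, Y, Z, R, grid_n):
    """Decode a 2v-dimensional gray wolf individual (first v entries: x-coordinates,
    last v entries: y-coordinates) and return the negative coverage ratio, so that
    minimizing the fitness maximizes C_r (sign convention assumed here)."""
    nodes = np.stack([wolf[:v], wolf[v:]], axis=1)
    return -coverage_ratio(nodes, Y, Z, R, grid_n)   # coverage_ratio from the sketch above

# Example: a 60-dimensional candidate encoding 30 nodes in a 50 x 50 area
wolf = np.random.default_rng(1).uniform(0, 50, size=60)
print(-wsn_fitness(wolf, v=30, Y=50, Z=50, R=5, grid_n=50))
```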

3.3. Standard GWO

GWO is a metaheuristic optimization method inspired by the social behavior of grey wolves in nature. GWO’s design is based on the efficient simulation of the hierarchical structure of gray wolf packs with hunting strategies. The algorithm divides the pack into four hierarchical roles: alpha (α), beta (β), delta (δ), and omega (ω), which represent leaders, sub-leaders, secondary decision makers, and ordinary members of the pack, respectively. The wolves labeled α, β, and δ play the dominant roles in optimization and are responsible for guiding the direction of the search, whereas the ω wolves adjust their behaviors based on the positional information of these dominant wolves.
The GWO algorithm is predicated on simulating the hunting behavior of gray wolves. This behavior can be divided into three phases: encircling the prey, tracking the prey, and executing the attack. The algorithm delineates the collaborative behavior of the wolf pack through a mathematical model and iteratively updates the positions of individuals step by step to approximate the global optimal solution. Within the search space, the position of each wolf individual denotes a potential solution. The following formula is employed to model hunting behavior in the GWO algorithm.
X_{t+1} = X_p - A \cdot D, \quad D = \left| C \cdot X_p - X_t \right|
In Equation (9), Xt+1 denotes the location of the next generation of wolves, Xt denotes the location of the contemporary wolves, Xp denotes the location of the prey, and A and D denote the coefficients and distances, respectively.
A = 2 \cdot a \cdot r_1 - a
C = 2 \cdot r_2
a = 2 - 2 \cdot \frac{t}{T}
In the formula, C is the perturbation parameter for correcting the prey position, which is determined via Equation (13). r1 and r2 are random numbers between (0, 1), and a is the convergence factor, whose value decreases linearly from 2 to 0, which is calculated through Equation (14). t denotes the number of current iterations, and T denotes the total number of iterations.
In the GWO algorithm, the core of the leadership hierarchy is reflected in the role of wolves α, β, and δ in guiding the search direction of the entire wolf pack. Among them, wolf α, wolf β, and wolf δ represent the optimal solution, the second-best solution, and the third-best solution in the current iteration, respectively. During the iteration process, wolf ω dynamically adjusts its position based on the information provided by the three dominant wolves, and its trajectory is described by Equations (15) and (16). The following is the position update formula for the leadership level.
D_\alpha = \left| C_1 \cdot X_\alpha - X_t \right|, \quad D_\beta = \left| C_2 \cdot X_\beta - X_t \right|, \quad D_\delta = \left| C_3 \cdot X_\delta - X_t \right|
X_1 = X_\alpha - A_1 \cdot D_\alpha, \quad X_2 = X_\beta - A_2 \cdot D_\beta, \quad X_3 = X_\delta - A_3 \cdot D_\delta
In Equations (13) and (14), Dα, Dβ, and Dδ represent the distances between the specified gray wolf and wolves α, β, and δ, respectively. A1, A2, and A3 are the parameter control vectors that determine the direction of motion for the individual gray wolf and are calculated using Equation (11). C1, C2, and C3 are the perturbation coefficients that correct the positions of wolf α, wolf β, and wolf δ, which are computed in Equation (13). Xα, Xβ, and Xδ are the position vectors of wolf α, wolf β, and wolf δ, and X1, X2, and X3 are temporary positions.
X_{t+1} = \frac{X_1 + X_2 + X_3}{3}
During the execution of the algorithm, the roles of the α, β, and δ wolves are defined. Specifically, these wolves are responsible for recognizing the location of the prey and moving in the direction of the prey to pursue it. Concurrently, the three dominant wolves direct the other wolves to assume their positions, thereby facilitating the encirclement and capture of the prey through a series of coordinated actions. This collaborative effort ultimately culminates in the attainment of the optimal solution. The position update process of the wolf pack is governed by Equation (15). The time complexity of the standard GWO is O(N × D × T), where N is the population size, D is the problem dimension, and T is the number of iterations, primarily due to fitness evaluations and position updates in each iteration.
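As a reference point for the improvements that follow, the sketch below condenses the standard GWO update loop described in this subsection into Python. It is a minimal illustration rather than the authors' code: the objective objf, the bounds, and the default parameters are assumptions, and boundary handling is done by simple clipping.

```python
import numpy as np

def standard_gwo(objf, dim, lb, ub, N=30, T=500, seed=0):
    """Minimal standard GWO: the alpha, beta, and delta wolves guide the pack,
    and each wolf moves to the mean of three leader-driven candidate points."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(N, dim))          # random initial pack
    fit = np.apply_along_axis(objf, 1, X)

    for t in range(T):
        order = np.argsort(fit)
        leaders = X[order[:3]].copy()               # alpha, beta, delta
        a = 2 - 2 * t / T                           # linear convergence factor

        for i in range(N):
            candidates = []
            for leader in leaders:
                A = 2 * a * rng.random(dim) - a     # A = 2*a*r1 - a
                C = 2 * rng.random(dim)             # C = 2*r2
                D = np.abs(C * leader - X[i])       # distance to the leader
                candidates.append(leader - A * D)
            X[i] = np.clip(np.mean(candidates, axis=0), lb, ub)
            fit[i] = objf(X[i])

    best = int(np.argmin(fit))
    return X[best], fit[best]

# Example: minimize the sphere function in 10 dimensions
x_best, f_best = standard_gwo(lambda x: float(np.sum(x**2)), dim=10, lb=-10, ub=10)
print(f_best)
```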

4. FMGWO

The standard GWO algorithm demonstrates considerable application potential in a variety of optimization problems, particularly in addressing the WSN coverage problem, exhibiting certain advantages over other swarm intelligence algorithms. However, its intrinsic mechanism still exhibits limitations in complex, high-dimensional, or multi-peak optimization scenarios. Specifically, GWO’s random initialization results in uneven node distributions, producing coverage gaps in WSNs. Its linear convergence factor lacks adaptability, hindering the balance between global exploration of diverse node layouts and local exploitation to minimize overlaps. The reliance on current alpha wolf information ignores historical high-quality node configurations, reducing robustness. The alpha wolf selection mechanism risks diversity loss, leading to convergence stagnation in multi-peak WSN landscapes. Additionally, GWO’s simplistic position update struggles to navigate the complex, multi-peak search space, often trapping the algorithm in local optima. These shortcomings collectively impair standard GWO’s performance in WSN coverage optimization. To address these limitations, this paper proposes an FMGWO that incorporates multiple strategies to effectively overcome the aforementioned shortcomings. The specific improvements are described as follows.

4.1. Electrostatic Field Initialization

To address the problem of the random distribution of nodes in WSNs tending to lead to coverage blindness, an electrostatic field model is used to generate a uniform initial population to improve the coverage efficiency. The standard GWO algorithm utilizes random generation during the initialization of the population. This approach is straightforward but may result in an imbalanced distribution of initial solutions within high-dimensional or intricate search spaces. This lack of initial diversity weakens global exploration, as nodes cluster in suboptimal regions, reducing the coverage rate. Consequently, a more scientific initialization method is required to enhance the diversity and coverage of the population.
In order to address the issue of the imbalanced distribution of initial populations, this paper proposes an electrostatic field initialization strategy. The strategy under discussion draws inspiration from the theoretical framework of electrostatic fields as they apply in the physical sciences. In scenarios characterized by a low population number, the charged particles will tend to be uniformly distributed under the action of mutual repulsive force. By emulating the electrostatic force between particles, the initial population can be naturally dispersed in the search space, thereby providing a superior starting point for subsequent optimization. Furthermore, in comparison with the random initialization distribution, the electrostatic field initialization can better ensure enhanced stability during the initialization process. As illustrated in Figure 1, the initialization schematic following the application of the electrostatic field is presented (the population size is 50).
The electrostatic field initialization aims to produce an initial population that is more uniformly distributed across the search space and less likely to cluster prematurely. By using repulsive-like interactions during initialization, candidate solutions tend to avoid high-density clusters, which improves early exploration and provides better coverage of the solution manifold. The practical effect on coverage is that more diverse initial placements yield a higher probability of finding well-distributed sensor configurations that cover sparse regions of the area. The fundamental principle underlying the initialization of the electrostatic field is the dynamic adjustment of the population’s position, a process determined by the calculation of distance and electrostatic force between individuals. First, the initial population location is randomly generated using a uniform distribution within the lower and upper bounds of the search space [L, U] with a scale of (N, m), where N denotes the population size and m denotes the problem dimension. For each pair of individuals, Pi and Pj, in the population, the Euclidean distance, d, between them is computed. To avoid the potential division by zero error that can result from this calculation, a tiny value of 10−6 is introduced. This step ensures that the computation is stable even when the distance between individuals is extremely small. The formula is as follows.
d = \max\left( \left\| P_i - P_j \right\|, 10^{-6} \right)
Subsequently, in accordance with the inverse square law of electrostatic force, the magnitude of the force F is calculated. The force is inversely proportional to the square of the distance between the individuals; therefore, as the distance between them decreases, the force of repulsion increases, pushing them apart. Furthermore, the direction vector of the force, σ, denoting the unit vector pointing from Pj to Pi, must be calculated to ensure that the force acts along the line connecting the individuals. The formula is as follows.
F = \frac{1}{d^2}
\sigma = \frac{P_i - P_j}{d}
In Equations (17) and (18), F denotes the magnitude of the electrostatic force, and σ denotes the direction vector of the force.
The individual positions are updated through the application of electrostatic forces, and a regulation factor, designated as k, is incorporated to govern the step size.
P_i = P_i + k \cdot F \cdot \sigma
P_j = P_j - k \cdot F \cdot \sigma
In Equations (19) and (20), Pi and Pj represent the positions of the ith and jth individuals in the population, respectively. These positions are moved in opposite directions, thereby simulating the repulsion effect. The parameter k is the electrostatic force regulation coefficient, which serves to balance the magnitude of movement with stability.
Ultimately, the updated position is cropped to guarantee that the search space is not exceeded.
P_i = \mathrm{clip}(P_i, L, U)
In Equation (21), L denotes the lower bound of the search space and U denotes the upper bound of the search space.
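The initialization procedure of Equations (16)–(21) can be sketched as follows. This is an illustrative Python version only: the regulation coefficient k and the number of repulsion sweeps are not specified in the text and are chosen here as plausible assumptions.

```python
import numpy as np

def electrostatic_init(N, m, L, U, k=0.1, sweeps=5, seed=0):
    """Electrostatic field initialization: start from a uniform random population
    and repeatedly push pairs of individuals apart with an inverse-square
    repulsive force so that the initial pack spreads over the search space.
    k (regulation coefficient) and sweeps (repulsion passes) are assumed values."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(L, U, size=(N, m))

    for _ in range(sweeps):
        for i in range(N):
            for j in range(i + 1, N):
                diff = P[i] - P[j]
                d = max(np.linalg.norm(diff), 1e-6)   # Equation (16): avoid division by zero
                F = 1.0 / d**2                        # Equation (17): inverse-square repulsion
                sigma = diff / d                      # Equation (18): unit direction
                P[i] = P[i] + k * F * sigma           # Equation (19)
                P[j] = P[j] - k * F * sigma           # Equation (20)
        P = np.clip(P, L, U)                          # Equation (21): stay inside [L, U]
    return P

# Example: 50 individuals in a 60-dimensional search space over [0, 50]
print(electrostatic_init(50, 60, 0.0, 50.0).shape)
```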
This strategy is particularly effective for WSN coverage optimization, as it ensures a uniform initial node distribution, reducing coverage gaps in large-scale monitoring areas.

4.2. Dynamic Parameter Adjustment

In order to accommodate the constrained computational and energy resources of WSNs, this study adopts a balanced optimization strategy by integrating a nonlinear convergence factor with a differential evolution scaling parameter. The conventional GWO employs a linearly decreasing convergence factor (a), which monotonically decreases from 2 to 0 throughout the iterations. Although this approach offers simplicity, it exhibits inadequate adaptability when confronted with the complex and multimodal nature of WSN coverage optimization. A linear decay imposes a uniform reduction in search step size across all iterations, often resulting in excessive exploration in the early stages and insufficient exploitation toward the end of the search process. This rigidity impairs the algorithm’s ability to dynamically adjust its search behavior to problem-specific terrain, adversely affecting the convergence speed, precision, and the balance between exploration and exploitation [33].
In many biological systems, including the cooperative hunting of gray wolves, behavioral adjustments naturally exhibit nonlinear dynamics, where hunting speed, pursuit radius, and coordination strategies evolve according to environmental stimuli. Motivated by this observation, a nonlinear convergence factor is introduced to modulate the search radius more adaptively throughout the optimization process. At the early stage of iterations, the proposed nonlinear curve decreases more rapidly than its linear counterpart, enabling a swift contraction of the search range to intensify candidate solution refinement and accelerate convergence in promising regions. In the latter stage, the decay becomes gentler, preserving a certain degree of spatial exploration to reduce the probability of premature convergence to a local optimum. This nonlinear modulation effectively maintains a dynamic balance between intensification and diversification, thereby enhancing overall optimization accuracy, adaptability, and robustness in the presence of high-dimensional and irregular WSN topologies.
From a theoretical standpoint, this adjustment alters the search dynamics by providing a variable exploration radius proportional to both iteration progress and the curvature of the nonlinear decay function. This facilitates a more problem-aware transition from exploration to exploitation, which is particularly beneficial for multimodal landscapes with irregular objective value distributions. Comparative results against the standard GWO (see Figure 2) demonstrate that the proposed nonlinear convergence factor yields a more desirable trade-off profile between the convergence rate and the solution quality. The nonlinear convergence factor (a) is formulated as follows.
a = 2 \cdot \left( 1 - \left( \frac{t}{T} \right)^{0.5} \right)
In Equation (22), “a” denotes the convergence factor, “t” indicates the number of current iterations, and “T” signifies the maximum number of iterations.
Furthermore, the differential evolution scaling factor was recalibrated in this study. It controls the weighting magnitude of the differential evolution mutation, thereby achieving a balance between global exploration and local exploitation.
f(t) = 0.5 \cdot \left( 1 - \frac{t}{T} \right)
In Equation (23), the differential evolutionary scaling factor is denoted as f ( t ) .
The mathematically nonlinear nature of the square root function enables the algorithm to adaptively adjust the search behavior. This adaptability renders the algorithm more flexible and suitable for complex optimization scenarios compared to linear decay.
By adaptively balancing exploration and exploitation, this approach addresses the dynamic and resource-constrained nature of WSNs, enabling efficient node placement.
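Under the reconstructions of Equations (22) and (23) given above, the two schedules reduce to the following one-line functions; the printout simply illustrates how the convergence factor decays quickly at first and more gently later, while the scaling factor shrinks linearly. Function names are illustrative.

```python
import numpy as np

def convergence_factor(t, T):
    """Nonlinear convergence factor, Equation (22): falls from 2 to 0 along a
    square-root curve, dropping quickly in early iterations and more gently later."""
    return 2.0 * (1.0 - np.sqrt(t / T))

def de_scaling_factor(t, T):
    """Differential evolution scaling factor, Equation (23): decreases linearly
    from 0.5 to 0 so that mutation steps shrink as the search converges."""
    return 0.5 * (1.0 - t / T)

# Example: early-, mid-, and late-stage values over T = 500 iterations
for t in (0, 100, 400, 499):
    print(t, round(convergence_factor(t, 500), 3), round(de_scaling_factor(t, 500), 3))
```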

4.3. Elder Council Mechanism

The standard GWO utilizes only the head wolf positions from the current iteration to guide the population update, with the alpha wolf serving as the principal leader in determining the search direction. However, high-quality solutions from earlier iterations, such as previous alpha positions with superior objective function values, are not preserved once replaced. This omission means that, if the current alpha is suboptimal, the algorithm is deprived of the ability to revert to historically superior solutions. As a result, the search process becomes more susceptible to premature convergence, potential quality degradation, and reduced robustness in WSN coverage optimization. In optimization theory, this represents a lack of temporal elitism retention, a concept well recognized in memory-based metaheuristics, where preservation of elite solutions can enhance both convergence stability and solution quality.
In order to address the problem of lost historical information and enhance the stability of the algorithm, the Council of Elders mechanism is proposed. The strategy draws inspiration from the elder system in human societies, wherein experienced elders are retained within the group to impart wisdom and guidance. In the context of wolf packs, this phenomenon can be conceptualized as analogous to the experience of the alpha wolf, who serves as the reference point for the pack. By periodically storing historical head wolf positions, the committee of elders provides a resource of candidate solutions for operations such as head wolf rotation, thereby avoiding the permanent loss of quality solutions.
The Council of Elders mechanism is implemented by periodically recording head wolf positions and limiting storage capacity. First, a storage set is created for alpha, beta, and delta, respectively, initially empty, for the purpose of recording historical positions. In each iteration, it is imperative to ascertain whether the current iteration number meets the update condition (i.e., every three generations). In the event that the specified condition is met, the current locations of alpha, beta, and delta are appended to the relevant storage collections.
E_a = E_a \cup \{A\}, \quad E_b = E_b \cup \{B\}, \quad E_d = E_d \cup \{D\}, \quad \text{if } t \bmod 3 = 0
In Equation (24), the variables Ea, Eb, and Ed represent a set of historical alpha, beta, and delta positions, respectively, which are stored in the Council of Elders. Initially, this set is empty. The variables A, B, and D correspond to the position vectors of the current alpha, beta, and delta, respectively. The variable t denotes the current iteration count.
In order to circumvent the accumulation of excessive historical data, the quantity of head wolves retained is constrained to a maximum of three per category. In the event that the limit is exceeded subsequent to the addition of new locations, the three most recent locations are retained, while the earliest records are removed, in order to ensure the currency of historical information.
E_x = \begin{cases} E_x, & \text{if } |E_x| \le 3 \\ \{ e_i \mid e_i \in E_x, \; i > |E_x| - 3 \}, & \text{if } |E_x| > 3 \end{cases}, \quad x \in \{\alpha, \beta, \delta\}
In Equation (25), x is the category identifier for elders, which takes values in the range of {α, β, δ} and corresponds to alpha, beta, and delta, respectively. |Ex| denotes the number of elements in the set Ex. ei denotes the ith element in the set Ex, sorted in the order of additions. m denotes the problem dimension.
The primary function of the Council of Elders is to provide historical head wolf positions as candidate solutions for the head wolf rotation mechanism; its members are not directly involved in routine position updates.
This mechanism enhances the stability of WSN coverage optimization by preserving high-quality node layouts, preventing the loss of effective configurations in complex scenarios.
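A minimal sketch of the elder council bookkeeping is given below, assuming the update period of three generations and the capacity of three archived positions per category stated above; the dictionary-based storage and the function name are illustrative choices.

```python
def update_elder_council(elders, alpha, beta, delta, t, period=3, cap=3):
    """Elder council update, Equations (24)-(25): every `period` generations the
    current alpha/beta/delta positions are archived, and each archive keeps only
    its `cap` most recent entries."""
    if t % period == 0:
        elders["alpha"].append(alpha.copy())
        elders["beta"].append(beta.copy())
        elders["delta"].append(delta.copy())
        for key in elders:
            elders[key] = elders[key][-cap:]   # discard the oldest records
    return elders

# elders is created once, before the main loop, as {"alpha": [], "beta": [], "delta": []}
```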

4.4. Alpha Wolf Tenure Inspection and Rotation

To mitigate premature convergence and preserve population diversity, this study introduces a tenure check and rotation mechanism for leader wolves in the GWO. In the standard GWO, the alpha, beta, and delta wolves are selected purely based on the instantaneous fitness ranking of the current population. While this procedure is computationally efficient, it cannot detect temporal stagnation; if the alpha wolf’s position remains unchanged or exhibits negligible fitness improvement across successive iterations, the search direction becomes overly dependent on a single, potentially suboptimal reference. This overreliance accelerates population clustering, reduces positional variance, and shrinks the effective exploration radius, often causing the algorithm to be trapped in a local optimum—a limitation particularly pronounced in WSN coverage optimization. Therefore, a strategy is required to detect and resolve this stagnant state.
This strategy draws inspiration from the phenomenon of leadership turnover observed among wolves in their natural habitat. In a wolf pack, when the incumbent leader is unable to effectively guide the pack forward due to advanced age or diminished capabilities, a new, strong individual typically assumes the leadership role. This turnover mechanism is instrumental in ensuring the continuity of the group’s adaptation to its environment. In a similar vein, the diversity of the algorithm and the capacity to circumvent local optima can be augmented by monitoring the performance of the alpha wolf and opting for a new alpha wolf from historical information and random perturbations when the need arises. The mechanism for monitoring and regulating the tenure of the alpha wolf is implemented through the use of counters and triggers. This mechanism is designed to initiate a rotation when a period of stagnation reaches a predetermined threshold.
First, a counter is initialized to keep track of the number of consecutive iterations without improvement in the alpha fitness. In each iteration, the fitness of the current alpha is compared with the fitness of the previous-generation alpha. If no improvement is detected, the counter is incremented; conversely, if an improvement is detected, the counter is reset to zero. This process quantifies the degree of stagnation of the alpha wolf.
n_{t+1} = \begin{cases} n_t + 1, & \text{if } S_A(t) \ge S_L \\ 0, & \text{if } S_A(t) < S_L \end{cases}
S_L = S_A(t), \quad \text{if } S_A(t) < S_L
As indicated in Equations (26) and (27), nt is the counter value at the t-th iteration, denoting the number of consecutive unimproved generations of alpha scores, with an initial value of 0. S A ( t ) signifies the objective function score of alpha at the t-th iteration, while S L denotes the objective function score of alpha in the previous generation, with an initial value of ∞.
The rotation mechanism is activated once the counter reaches a preset threshold, which is set to five generations. The set of candidate solutions is to be constructed, including three types of sources: first, historical alpha, beta, and delta positions stored in the Council of Elders, which represent past quality solutions; second, alpha, beta, and delta positions of the current iteration, which retains the existing optimal solutions; and third, randomly perturbed solutions generated based on the current alpha positions, which use the normal distribution to introduce randomness to enhance the diversity.
C = E_a \cup E_b \cup E_d \cup \{A, B, D\} \cup \{ A + \lambda(0, a, m) \}, \quad \text{if } n_t \ge 5
In Equation (28), C is the set of candidate solutions containing the historical head wolves, the current head wolves, and the randomly perturbed solution. Ea, Eb, and Ed are the sets of historical alpha, beta, and delta positions stored in the Council of Elders. A, B, and D are the position vectors of the current alpha, beta, and delta. λ(0, a, m) is a normally distributed random vector with a mean of 0, a standard deviation equal to the convergence factor a, and a dimension of m.
The objective function value is computed for each solution in the set of candidate solutions, and the solution with the best fitness is selected as the new alpha.
S = \{ \mathrm{objf}(p) \mid p \in C \}
i = \arg\min \{ S \}
A = C_i
S_A = S[i]
As illustrated in Equations (29)–(32), S denotes the set of objective function scores for the candidate solutions. An element, p, in the set of candidate solutions, C, denotes a specific candidate position vector. The index, i, indicates the candidate solution with the optimal score. The position vector of the alpha, A, and the objective function score of the alpha, S A , are also defined.
n_t = 0
E_a = E_b = E_d = \emptyset
Once the rotation is complete, we reset the counters and empty the Council of Elders to ensure that the history is updated from the new alpha and to avoid old data interfering with subsequent optimizations.
This strategy mitigates the risk of local optima in WSN coverage problems, ensuring diverse node placements that maximize monitoring efficiency.
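The tenure check and rotation of Equations (26)–(34) can be sketched as follows. The helper assumes minimization, the stagnation threshold of five generations stated above, and a Gaussian perturbation whose standard deviation equals the current convergence factor; the signature and state handling are illustrative rather than the authors' implementation.

```python
import numpy as np

def check_and_rotate_alpha(alpha, beta, delta, s_alpha, s_last, counter,
                           elders, a, m, objf, rng, threshold=5):
    """Alpha tenure inspection and rotation, Equations (26)-(34): count stagnant
    generations and, once the threshold is reached, re-elect alpha from the elder
    archives, the current leaders, and a Gaussian perturbation of alpha."""
    if s_alpha < s_last:                      # improvement: reset the counter
        counter, s_last = 0, s_alpha
    else:                                     # stagnation: increment the counter
        counter += 1

    if counter >= threshold:
        # Candidate set C of Equation (28)
        candidates = (elders["alpha"] + elders["beta"] + elders["delta"]
                      + [alpha, beta, delta]
                      + [alpha + rng.normal(0.0, a, size=m)])
        scores = [objf(p) for p in candidates]            # Equation (29)
        best = int(np.argmin(scores))                     # Equation (30)
        alpha, s_alpha = candidates[best], scores[best]   # Equations (31)-(32)
        counter = 0                                       # Equation (33)
        elders = {"alpha": [], "beta": [], "delta": []}   # Equation (34)

    return alpha, s_alpha, s_last, counter, elders
```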

4.5. Hybrid Mutation Strategy

The position update of the standard GWO algorithm relies exclusively on the guidance of the head wolves (alpha, beta, delta) to compute the new position via a linear combination. This single strategy has limitations when facing multi-peak or high-dimensional optimization problems. Due to the lack of additional diversity generation mechanisms, the population tends to converge to the local optimum prematurely, leading to an insufficient search capability. Furthermore, the standard GWO’s simplistic position update mechanism struggles to navigate the multi-peak search space of WSN coverage optimization, often trapping the algorithm in local optima. Consequently, a more effective mutation strategy is required to promote population diversity and enhance global search capability.
In order to enhance the global search capability and the ability to jump out of local optima, this paper introduces a hybrid mutation strategy that combines population difference learning via differential evolution [34] and random perturbation through the Cauchy distribution [35]. The exploration and development capability of the algorithm is enhanced through multi-level variation.
The hybrid mutation strategy is realized via multi-stage position updating, which gradually enhances the diversity and adaptability of the population.
Initially, the position of each individual is calculated based on the positions of the head wolves. The distances to alpha, beta, and delta are calculated independently for each individual, and the step size is adjusted via random coefficients. Ultimately, the average of the three is taken as the preliminary solution. This stage preserves the bootstrapping properties of GWO.
\Delta_\alpha = \left| C_1 \cdot P_\alpha - P_i(t) \right|, \quad \Delta_\beta = \left| C_2 \cdot P_\beta - P_i(t) \right|, \quad \Delta_\delta = \left| C_3 \cdot P_\delta - P_i(t) \right|
X_1 = P_\alpha - H_1 \cdot \Delta_\alpha, \quad X_2 = P_\beta - H_2 \cdot \Delta_\beta, \quad X_3 = P_\delta - H_3 \cdot \Delta_\delta
Q(t) = \frac{X_1 + X_2 + X_3}{3}
In Equations (35)–(37), Δα, Δβ, and Δδ represent the distance vectors between the gray wolf individuals and wolves α, β, and δ, respectively. C1, C2, and C3 are the perturbation coefficients that correct the positions of wolves α, β, and δ. Pi(t) is the current position vector of the i-th individual. H1, H2, and H3 are the parameter control vectors that determine the direction of movement for the gray wolf individuals. Pα, Pβ, and Pδ denote the position vectors of wolf α, wolf β, and wolf δ, respectively. X1, X2, and X3 are the temporary position vectors. Q(t) is the preliminary new position vector of the i-th individual in the t-th iteration.
Based on the preliminary position vector, the differential evolution mechanism is introduced in the t-th iteration. Two different individuals are randomly selected from the population, and the difference in their position vectors is calculated, weighted by a dynamic scaling factor, and superimposed on the preliminary position vector to form a variant position vector. This step utilizes inter-population differences to enhance diversity.
V(t) = Q(t) + f(t) \cdot \left( P_{k_1}(t) - P_{k_2}(t) \right)
k_1 \ne k_2 \ne i, \quad k_1, k_2 \in \{1, 2, \ldots, N\}
As indicated by Equations (38) and (39), V(t) denotes the variant position vector of the i-th individual in the t-th iteration, while f(t) signifies the differential evolution scaling factor in the t-th iteration, which is determined by Equation (23). The position vectors of two randomly selected individuals in the population in the t-th iteration are represented by Pk1(t) and Pk2(t), respectively, where the random indices k1 and k2 are selected as specified in Equation (39).
In the t-th iteration, a random perturbation is applied to the variant position vector. This perturbation is based on the Cauchy distribution, and it is applied with a certain probability. The long-tailed nature of the Cauchy distribution generates larger jump steps, which facilitate the escape from local optima. The magnitude of the perturbation is associated with the range of the search space, thereby ensuring that the stochasticity remains moderate.
V(t) = \begin{cases} V(t) + W(t), & \text{if } u < 0.2 \\ V(t), & \text{otherwise} \end{cases}
W(t) = \mathrm{Cauchy}(0, 1, m) \cdot 0.1 \cdot (U - L)
In Equations (40) and (41), W(t) is defined as the Cauchy perturbation vector in the t-th iteration. u is a uniformly distributed random number with values in the range [0, 1], which is employed to regulate the Cauchy perturbation probability. Cauchy(0, 1, m) is a standard Cauchy-distributed random vector with a location parameter of 0, a scale parameter of 1, and a dimensionality of m. L denotes the lower bound of the search space, and U denotes the upper bound of the search space.
Prior to the conclusion of the t-th iteration, boundary constraints are imposed on the variant position vector. The fitness of the variant position vector is then compared with that of the preliminary position vector. The superior one is selected as the final position vector for that individual in this iteration. This process is designed to ensure that the quality of the individual is improved or maintained at each iteration.
V(t) = \mathrm{clip}(V(t), L, U)
P_i(t+1) = \begin{cases} V(t), & \text{if } f_{\mathrm{obj}}(V(t)) < f_{\mathrm{obj}}(Q(t)) \\ Q(t), & \text{otherwise} \end{cases}
As indicated by Equations (42) and (43), P i ( t + 1 ) denotes the final updated position vector of the i-th individual in the t + 1st iteration. The objective function, f obj , is employed to assess the fitness of the solution.
The hybrid mutation strategy is tailored to the multi-peak nature of WSN coverage optimization, enabling the algorithm to explore diverse node arrangements and achieve higher coverage rates.
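A compact Python sketch of the hybrid mutation step, following Equations (35)–(43) for a single individual, is given below. It assumes the preliminary GWO position Q has already been computed, uses the perturbation probability of 0.2 and the scale 0.1(U − L) stated above, and keeps the better of the mutant and the preliminary position; the function name and argument layout are illustrative.

```python
import numpy as np

def hybrid_mutation(Q, population, i, f_t, L, U, objf, rng, p_cauchy=0.2):
    """Hybrid mutation, Equations (35)-(43), for one individual: add a differential
    evolution step to the preliminary GWO position Q, optionally apply a Cauchy
    perturbation, clip to the bounds, and keep the better of the mutant and Q."""
    N, m = population.shape
    k1, k2 = rng.choice([k for k in range(N) if k != i], size=2, replace=False)
    V = Q + f_t * (population[k1] - population[k2])       # Equations (38)-(39)

    if rng.random() < p_cauchy:                           # Equation (40)
        V = V + rng.standard_cauchy(m) * 0.1 * (U - L)    # Equation (41)

    V = np.clip(V, L, U)                                  # Equation (42)
    return V if objf(V) < objf(Q) else Q                  # Equation (43): greedy selection
```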

4.6. Algorithm Steps

Based on the above improvements, the implementation of the algorithm proposed in this paper can be divided into the following eight steps, as illustrated in Figure 3. A condensed code sketch of the resulting main loop is given after the step list.
Step 1: determine the relevant parameters of the algorithm, including the population size, N, the maximum number of iterations T, and the search space.
Step 2: initialize the wolf pack and generate the initial position of the wolf pack in the search space using the electrostatic field initialization method.
Step 3: compute the fitness value and update wolf α, wolf β, and wolf δ.
Step 4: update the Council of Elders according to Equations (24) and (25).
Step 5: monitor the fitness of Wolf α and perform tenure checking and the rotation of Wolf α according to Equations (26)–(34).
Step 6: calculate the convergence factor and the differential evolutionary scaling factor.
Step 7: update the position of the mixed-variant strategy computed using Equations (35)–(43).
Step 8: Determine whether the current algorithm satisfies the optimal solution or the maximum number of iterations. If so, terminate the algorithm, and output the optimal solution. Otherwise, proceed to Step 3.
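Putting the pieces together, the skeleton below outlines how Steps 1–8 could be composed from the helper sketches given in the preceding subsections (electrostatic_init, convergence_factor, de_scaling_factor, update_elder_council, check_and_rotate_alpha, and hybrid_mutation). It is a simplified, illustrative reading of the flowchart in Figure 3, not the authors' implementation, and it terminates on the iteration budget only.

```python
import numpy as np

def fmgwo(objf, m, L, U, N=30, T=500, seed=0):
    """Condensed FMGWO loop following Steps 1-8 (illustrative composition of the
    helper sketches from Sections 4.1-4.5)."""
    rng = np.random.default_rng(seed)
    X = electrostatic_init(N, m, L, U, seed=seed)                   # Step 2
    fit = np.apply_along_axis(objf, 1, X)
    elders = {"alpha": [], "beta": [], "delta": []}
    counter, s_last = 0, np.inf

    for t in range(T):                                              # Step 8 loop
        order = np.argsort(fit)                                     # Step 3
        alpha, beta, delta = (X[order[0]].copy(), X[order[1]].copy(),
                              X[order[2]].copy())
        s_alpha = float(fit[order[0]])

        elders = update_elder_council(elders, alpha, beta, delta, t)          # Step 4
        alpha, s_alpha, s_last, counter, elders = check_and_rotate_alpha(     # Step 5
            alpha, beta, delta, s_alpha, s_last, counter, elders,
            convergence_factor(t, T), m, objf, rng)

        a, f_t = convergence_factor(t, T), de_scaling_factor(t, T)            # Step 6
        for i in range(N):                                                    # Step 7
            Q = np.zeros(m)
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(m) - a
                C = 2 * rng.random(m)
                Q += (leader - A * np.abs(C * leader - X[i])) / 3.0
            X[i] = hybrid_mutation(np.clip(Q, L, U), X, i, f_t, L, U, objf, rng)
            fit[i] = objf(X[i])

    best = int(np.argmin(fit))
    return X[best], fit[best]

# Example: optimize the 60-dimensional WSN coverage fitness sketched in Section 3.2
# best_wolf, best_fit = fmgwo(lambda w: wsn_fitness(w, 30, 50, 50, 5, 50),
#                             m=60, L=0.0, U=50.0)
```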

5. Experimental Design and Analysis

To verify the effectiveness of the algorithm improvement and its application performance in WSN coverage optimization, ablation experiments and application simulations using the algorithm were conducted. The experimental execution environment was an Intel (R) Core i9-12900H CPU with 2.90 GHz, 16 GB RAM, the Windows 11 64-bit operating system, and the Python 3.12 integrated development environment.

5.1. Design of Ablation Experiments

To verify the effectiveness of the improved strategies, ablation experiments were performed using the improved FMGWO algorithm. The comparison algorithms used in the experiments are listed in Table 1. A total of 33 benchmark functions, given in Table 2, Table 3 and Table 4, were selected as test functions for the experiments. These functions were chosen because they represent a range of optimization challenges, encompassing high-dimensional, multimodal, and intricate search spaces, characteristics that emulate those encountered in WSN coverage problems. Applying FMGWO to these functions serves two primary objectives: first, to confirm its robustness on high-dimensional, nonlinear optimization problems; and second, to validate its suitability for the specific challenges posed by the WSN coverage problem. The population size of the algorithms was set to 30, and the number of iterations was set to 500. To ensure the stability of the experimental data, each algorithm was run independently 30 times, and the optimal, average, standard deviation, and worst values were taken as the performance comparison indices. The experimental results are shown in Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15 and Table 16. The convergence curves are shown in Figure 4, Figure 5 and Figure 6.
In the course of analyzing the data for this study, it was found that certain values came extremely close to a specific value during data processing and computation. Consequently, these values were rounded directly to that value when Excel formulas were used for the calculations. In reality, however, these data were not strictly equal to that value but only infinitely close to it.
This phenomenon can be attributed to a confluence of factors, including the inherent characteristics of the data itself and the precision limitations inherent to Excel’s calculation capabilities. Excel employs distinct numerical precision and rounding rules when executing formula calculations. In instances where the discrepancy between the data and a specific value falls below Excel’s computational precision thresholds, the value is approximated as that specific value.
It is important to note that, although the data are shown as specific values in Excel, based on our in-depth understanding of the experimental process, data characteristics, and calculation methods, these data are actually infinitely close to that specific value rather than strictly equal. The data can be found with the standard deviation not being equal to zero. In the subsequent analysis of the data and discussion of the results, this characteristic was given full consideration. The data were processed and interpreted in a reasonable manner to ensure the accuracy and reliability of the research results.
When evaluating the convergence performance of the algorithms, some fitness values are non-positive and therefore cannot be plotted directly on a logarithmic scale. A constant shift is therefore applied to the affected fitness values before plotting. This shift does not change the relative differences between fitness values and preserves the shape of the convergence curve, so the iterative behavior of each algorithm is still shown faithfully. The shifted data are then plotted on a logarithmic scale, with the amount of shift indicated in the corresponding figure description.
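A minimal version of this shifting step is shown below; the small constant eps and the plotting details are illustrative, and the applied shift should be stated in the figure description as noted above.

```python
import numpy as np
import matplotlib.pyplot as plt

# Shift convergence histories that contain non-positive fitness values so they
# can be drawn on a logarithmic axis. The constant offset changes only the
# absolute level of the curves, not their relative differences or shape.

def plot_log_convergence(histories, labels, eps=1e-12):
    lowest = min(np.min(h) for h in histories)
    shift = (eps - lowest) if lowest <= 0 else 0.0      # make all values strictly positive
    for h, lab in zip(histories, labels):
        plt.plot(np.asarray(h) + shift, label=lab)
    plt.yscale("log")
    plt.xlabel("Iteration")
    plt.ylabel(f"Fitness (shifted by {shift:.3g})")
    plt.legend()
    plt.show()
```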
The experimental results demonstrate that the FMGWO algorithm outperforms the other algorithms across the test functions, including the basic unimodal functions, classic multimodal functions, and complex composite functions. These results are analyzed as follows:
All four strategy groups contribute to the overall performance of the FMGWO algorithm.
The strategy in Section 4.1 enhances the initialization stage by combining a uniform distribution with controlled random perturbations. This ensures that the initial population is evenly dispersed across the search space, preventing local aggregation that often occurs in the standard GWO. By using a more balanced initialization, a sufficient number of candidate solutions are distributed across most feasible regions, laying a solid foundation for subsequent global exploration. Experimental results on unimodal functions F2 and F6 indicate that the standard GWO’s initial distribution can result in insufficient search coverage. When the Section 4.1 strategy is removed, GWO1 shows larger fluctuations across independent runs and, in several cases, fails to reach the global optimum. In contrast, FMGWO with the Section 4.1 strategy preserves higher diversity at the start of the search, avoids early local traps, and achieves better optimal and mean fitness values. This confirms that improving uniformity and stochasticity at initialization significantly enhances stability, the breadth of early exploration, and overall optimization robustness.
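The effect described above can be illustrated with a small sketch: starting from purely random positions, a few repulsion-style adjustment steps (a stand-in for the electrostatic field rule of Section 4.1, not its exact equations) typically raise the minimum pairwise spacing of the initial population and thus its spatial uniformity.

```python
import numpy as np

# Illustrative repulsion-style spreading of an initial population (a stand-in
# for the Section 4.1 initialization, not the paper's exact equations).

def repulsion_spread(points, lb, ub, steps=30, step=1.0):
    pts = points.copy()
    for _ in range(steps):
        diff = pts[:, None, :] - pts[None, :, :]              # pairwise displacement vectors
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        force = (diff / dist[..., None] ** 3).sum(axis=1)     # Coulomb-style 1/d^2 repulsion
        norm = np.linalg.norm(force, axis=1, keepdims=True) + 1e-9
        pts = np.clip(pts + step * force / norm, lb, ub)      # move each point a fixed step apart
    return pts

rng = np.random.default_rng(1)
random_pts = rng.uniform(0, 100, size=(30, 2))
spread_pts = repulsion_spread(random_pts, 0.0, 100.0)

min_dist = lambda p: np.min(np.linalg.norm(p[:, None] - p[None, :], axis=-1)
                            + np.eye(len(p)) * 1e9)
print(f"min pairwise distance: random {min_dist(random_pts):.2f} -> spread {min_dist(spread_pts):.2f}")
```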
The strategy described in Section 4.2 is intended to sharpen local search while keeping the depth and breadth of the global search in balance. In practice, the search process faces a trade-off between global exploration and local exploitation: focusing only on local search makes it easy to fall into a local optimum, while an overly dispersed search makes convergence difficult. In FMGWO, this strategy dynamically calibrates the search step size and applies a local learning mechanism, so the algorithm can quickly consolidate its advantage near the optimal region while retaining some ability to jump out of local optima during the global stage. The experimental data, for example on the F6 test function, show that the standard GWO exhibits weaker local convergence, and a clear gap appears between GWO2 and FMGWO on specific functions: on F6, the mean value of GWO2 is 1.44 × 10^−7, whereas FMGWO reaches 1.67 × 10^−8. A similar trend is observed on the complex composite functions. Although GWO2 performs marginally better than FMGWO on a few of the tested functions, FMGWO ultimately achieves faster convergence and higher accuracy in most cases through this strategy. The Section 4.2 strategy therefore effectively encourages individuals to seek high-quality solutions within their local region, and its balancing mechanism mitigates the loss of diversity caused by overly fast convergence, preserving the sensitivity of the local search without sacrificing the effectiveness of the global search. Taken together, these mechanisms and data comparisons indicate that this strategy offers substantial advantages in improving solution accuracy and reducing convergence time, playing a pivotal role in the overall performance of FMGWO.
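The intuition behind the dynamic parameters can be seen by comparing convergence-factor schedules, in the spirit of the comparison curve in Figure 2. The nonlinear form used below is only one plausible shape, assumed for illustration rather than taken from the paper's equations: it keeps the factor larger for longer to favor exploration and then decays quickly to favor fine local search.

```python
import numpy as np
import matplotlib.pyplot as plt

# Linear GWO schedule versus an illustrative nonlinear convergence factor.
T = 500
t = np.arange(T)
a_linear = 2 * (1 - t / T)               # standard GWO: a decreases linearly from 2 to 0
a_nonlinear = 2 * (1 - (t / T) ** 2)     # assumed nonlinear variant: slower early, faster late

plt.plot(t, a_linear, label="linear (GWO)")
plt.plot(t, a_nonlinear, label="nonlinear (illustrative)")
plt.xlabel("Iteration")
plt.ylabel("Convergence factor a")
plt.legend()
plt.show()
```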
Strategies 4.3 and 4.4 focus on addressing search stagnation by preserving high-quality solutions from historical iterations and integrating them into future searches. This prevents excessive reliance on the current best individual, thereby reducing the risk of local entrapment. The use of historical information gives the population a “memory effect,” guiding it toward unexplored yet promising regions. Comparative tests—particularly on F2, F11, and other functions—show that GWO3 consistently underperforms FMGWO, especially on unimodal, multimodal, and composite function benchmarks. Without these “memory” capabilities, the algorithm is prone to forgetting good solutions and repeatedly converging to suboptimal areas in complex landscapes. The combined effect of Section 4.3 and Section 4.4 is twofold: accelerating early search through historical guidance, and maintaining adaptability to escape local optima later, as reflected in smoother convergence curves. This long-term perspective greatly enhances the algorithm’s resilience and supports fine-tuning accuracy in challenging optimization tasks.
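A compact way to realize such a memory effect is an elite archive with tenure-based rotation, sketched below. The archive size, tenure length, and rotation rule are illustrative assumptions, not the exact mechanisms of Equations (24)–(34); in the full algorithm, the archive would be updated once per iteration with the current alpha, and rotation would be triggered when alpha stagnates beyond its tenure.

```python
import numpy as np

# Sketch of an elder archive with tenure-based rotation of the alpha wolf.
# Archive size, tenure, and the rotation rule are illustrative assumptions.

class ElderCouncil:
    def __init__(self, size=5, tenure=30):
        self.size, self.tenure, self.stagnation = size, tenure, 0
        self.archive = []                       # list of (fitness, position) elites

    def record(self, alpha_fit, alpha_pos):
        self.archive.append((float(alpha_fit), np.array(alpha_pos, copy=True)))
        self.archive.sort(key=lambda e: e[0])
        del self.archive[self.size:]            # keep only the best historical leaders

    def maybe_rotate(self, alpha_fit, alpha_pos, improved, rng):
        self.stagnation = 0 if improved else self.stagnation + 1
        if self.stagnation >= self.tenure and self.archive:
            self.stagnation = 0
            fit, pos = self.archive[rng.integers(len(self.archive))]
            return fit, pos.copy()              # rotate a historical elite in as the new alpha
        return alpha_fit, alpha_pos
```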
Section 4.5 introduces mutations and perturbations to maintain population diversity, particularly vital in high-dimensional, multi-peak problems. Without sufficient diversity, traditional algorithms tend to concentrate in a few local regions while neglecting potentially optimal areas. Data from composite functions F32 and F33 show that the standard GWO’s best values stagnate earlier at higher levels, and in some cases GWO4 even underperforms the original GWO. In contrast, FMGWO with Section 4.5 maintains wide coverage early through random perturbations, and it prevents over-convergence later in the search. This ensures the continuous exploration of new regions and improves global search performance, as confirmed by convergence curves showing a gradual, stable descent on multi-peak problems. Overall, this strategy significantly delays premature convergence, enhances population diversity, and provides the algorithm with a stronger global perspective in complex search spaces.
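A hybrid mutation of this kind can be sketched as a DE/rand/1 differential step combined with a heavy-tailed Cauchy perturbation and greedy acceptance; the scale factors below are assumptions chosen for illustration rather than the paper's calibrated values.

```python
import numpy as np

# Sketch of a hybrid mutation: DE/rand/1 step plus a Cauchy perturbation,
# accepted greedily. Scale factors are illustrative assumptions.

def hybrid_mutation(X, fit, fitness, lb, ub, F=0.5, cauchy_scale=0.1, rng=None):
    rng = rng or np.random.default_rng()
    n, dim = X.shape
    for i in range(n):
        r1, r2, r3 = rng.choice(n, size=3, replace=False)
        trial = X[r3] + F * (X[r1] - X[r2])                            # differential step
        trial += cauchy_scale * (ub - lb) * rng.standard_cauchy(dim)   # heavy-tailed jump
        trial = np.clip(trial, lb, ub)
        f_trial = fitness(trial)
        if f_trial < fit[i]:                                           # keep only improvements
            X[i], fit[i] = trial, f_trial
    return X, fit
```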

5.2. Wireless Sensor Network Coverage Experiment

The effectiveness of FMGWO in optimizing the WSN coverage problem is evaluated by using the coverage rate Cr (see Equation (6)) as the fitness value. The experimental parameters are listed in Table 17. PSO [36], GWO [27], CSA [37], DE [34], GA [38], FA [39], OGWO [40], DGWO1 [41], DGWO2 [41], and FMGWO are each applied to the WSN coverage optimization problem, and the superiority of FMGWO is verified by comparing the coverage rate Cr achieved by each algorithm. The parameter settings of these algorithms are given in Table 18.
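For reference, a grid-based coverage rate of the kind used as the fitness value can be computed as sketched below. The snippet assumes the binary disc sensing model and the settings of Table 17 (a 100 m × 100 m area discretized into 100 × 100 grid points, sensing radius r = 12); it is an illustration consistent with those parameters, not a verbatim implementation of Equation (6).

```python
import numpy as np

# Grid-based coverage rate under a binary disc sensing model: a grid point is
# covered if at least one sensor node lies within the sensing radius r.

def coverage_rate(nodes, area=100.0, grid=100, r=12.0):
    xs = (np.arange(grid) + 0.5) * area / grid          # grid-point centres
    gx, gy = np.meshgrid(xs, xs)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)    # (grid*grid, 2) monitoring points
    d2 = ((pts[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=-1)
    covered = (d2 <= r ** 2).any(axis=1)
    return covered.mean()                               # fraction of covered grid points

rng = np.random.default_rng(0)
nodes = rng.uniform(0.0, 100.0, size=(30, 2))           # 30 randomly deployed nodes
print(f"coverage Cr = {coverage_rate(nodes):.2%}")
```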
For practical implementation, we assume that each sensor node is equipped with localization capabilities to determine its exact position, as is common in many modern WSN applications. The computation of FMGWO is performed centrally at a sink node or base station that has sufficient computational resources and energy. Sensor nodes periodically report their positions to the sink via multi-hop communication, assuming a connected network. The sink then computes the optimized positions and broadcasts repositioning commands back to the nodes. This centralized approach minimizes the computational burden on energy-constrained sensor nodes, though it requires reliable communication links and assumes nodes have mobility capabilities for repositioning. These requirements introduce additional energy overhead for communication and movement, which should be considered in resource-limited deployments.
In the simulation setup, sensor nodes are initially deployed at random locations within the 100 m × 100 m monitoring area, as per the random deployment strategy described in Section 4.1. The algorithms then iteratively optimize node positions to maximize coverage, with performance examined through metrics such as best coverage, mean coverage, and standard deviation over 30 independent runs, ensuring statistical reliability.
These parameters were selected based on standard practices in the swarm intelligence literature for similar optimization problems: a population size of 30 balances exploration and computational cost, 500 iterations ensure sufficient convergence without excessive runtime, and algorithm-specific values follow the original proposals or empirical tuning for WSN scenarios. To examine how each algorithm approaches the highest coverage with the minimum number of nodes, simulation experiments were conducted for three scenarios with 20, 25, and 30 nodes. Each comparison algorithm was run independently 30 times, and the best, mean, and standard deviation of the final coverage were used as comparison data. The experimental results are shown in Table 19.
The FMGWO algorithm demonstrates notable efficacy across the tested node counts. With 20 nodes, the optimal coverage is 84.39% and the average coverage is 82.47%, both higher than those obtained by the other algorithms, such as PSO and GWO, while the standard deviation of 0.0092 demonstrates the stability of the algorithm. As the number of nodes increases from 25 to 30, the optimal coverage rate rises from 94.85% to 98.63%; the average coverage rate remains higher than that of the other algorithms, and although the standard deviation increases slightly, coverage remains high. Overall, FMGWO is superior to the comparison algorithms in key performance indicators such as optimal coverage, average coverage, and stability, reflecting its ability to adaptively reposition nodes while ensuring connectivity and coverage, and it achieves high coverage with a reduced number of nodes.
A closer comparison confirms the superiority of FMGWO. Against conventional algorithms such as PSO, CSA, DE, GA, and FA, FMGWO attains higher coverage rates across all node counts, indicating that the proposed improvements to the standard GWO yield a better balance between exploration and exploitation and help avoid local optima. Against the advanced variants OGWO, DGWO1, and DGWO2, FMGWO with 30 nodes achieves an optimal coverage rate of 98.63% and an average coverage rate of 96.57%, which substantiates the efficacy of the implemented improvements. In practical applications, FMGWO achieves high coverage with fewer nodes, effectively reducing deployment cost, and thus has significant application value and broad application prospects. To provide a more intuitive demonstration of the efficacy of FMGWO on the WSN coverage problem, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16 illustrate the node distributions obtained by each comparative algorithm. The asterisks (*) in the figures mark the locations of the sensor nodes, and the circles depict the sensing range of each node.
As demonstrated in Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16, the FMGWO algorithm outperforms the PSO, GWO, CSA, DE, GA, FA, OGWO, DGWO1, and DGWO2 algorithms for 20, 25, and 30 sensor nodes. The comparative algorithms generally suffer from two problems: incomplete coverage of the monitoring area and, more significantly, redundant overlapping coverage in the optimized layouts. In contrast, FMGWO produces a more uniform node distribution, reducing the redundant monitoring area and enhancing the network coverage.
Figure 17, Figure 18 and Figure 19 show the coverage convergence curves of each algorithm over 500 iterations for the 20-, 25-, and 30-node cases. The convergence accuracy and speed of the algorithms provide further insight into their performance on the WSN coverage problem.
As demonstrated in Figure 17, Figure 18 and Figure 19, FMGWO improves both the early convergence speed and the final convergence accuracy of WSN coverage optimization while preserving the benefits inherent to GWO. The curves stabilize after approximately 450–500 iterations, as evidenced by the flattening of fitness values, indicating effective convergence to near-optimal solutions. The other improved GWO variants, i.e., OGWO, DGWO1, and DGWO2, are not as effective as FMGWO in optimizing WSN coverage, although they do improve on GWO. Therefore, compared with the comparison algorithms, FMGWO achieves stronger overall competitiveness in terms of search accuracy, convergence speed, and stability.

6. Conclusions

To achieve maximum WSN coverage, this paper has proposed an improved gray wolf optimization algorithm, FMGWO, which incorporates electrostatic field initialization, dynamic parameter tuning, an elder council mechanism, an alpha wolf tenure-checking and rotation mechanism, and a hybrid mutation strategy. The efficacy of the proposed strategies was first validated through ablation experiments, which demonstrate that these strategies enhance the accuracy and stability of the algorithm. Simulation experiments further demonstrate that FMGWO has a distinct advantage in addressing the WSN coverage problem: it effectively improves the coverage quality of the network nodes while achieving good stability. In summary, FMGWO is well suited to dynamic WSN deployment, offering a practical solution for real-time optimization under mobility and communication constraints. It helps maximize network monitoring capability under limited resources, improving coverage efficiency and reducing resource waste in practical application scenarios, which is of great significance for WSN deployment.

Author Contributions

Conceptualization, Z.L. and Y.O.; methodology, Z.L. and Z.Y.; formal analysis, Y.O.; experiments, Z.L.; writing—original draft, Z.L. and Z.Y.; writing—review and editing, Y.O. and Z.L.; data analysis, Z.L. and S.W.; visualization, Z.L. and S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (62266019 and 62066016) and the Natural Science Foundation of Hunan Province of China (2024JJ7395 and 2025JJ60926).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Osamy, W.; Khedr, A.M.; Salim, A.; Al Ali, A.I.; El-Sawy, A.A. Coverage, deployment and localization challenges in wireless sensor networks based on artificial intelligence techniques: A review. IEEE Access 2022, 10, 30232–30257. [Google Scholar] [CrossRef]
  2. Ouni, R.; Saleem, K. Framework for sustainable wireless sensor network based environmental monitoring. Sustainability 2022, 14, 8356. [Google Scholar] [CrossRef]
  3. Anusuya, P.; Vanitha, C.N.; Cho, J.; Easwaramoorthy, S.V. A comprehensive review of sensor node deployment strategies for maximized coverage and energy efficiency in wireless sensor networks. PeerJ Comput. Sci. 2024, 10, e2407. [Google Scholar] [CrossRef]
  4. Priyadarshi, R.; Vikram, R.; Huang, Z.; Yang, T.; Rathore, R.S. Multi-Objective Optimization for Coverage and Connectivity in Wireless Sensor Networks. In Proceedings of the 2024 13th International Conference on Modern Circuits and Systems Technologies (MOCAST), Sofia, Bulgaria, 26–28 June 2024; IEEE: New York, NY, USA, 2024; pp. 1–7. [Google Scholar]
  5. Sohail, M.S.; Saeed, M.O.B.; Rizvi, S.Z.; Shoaib, M.; Sheikh, A.U.H. Low-complexity particle swarm optimization for time-critical applications. arXiv 2014, arXiv:1401.0546. [Google Scholar]
  6. Nayar, N.; Gautam, S.; Singh, P.; Mehta, G. Ant colony optimization: A review of literature and application in feature selection. In Inventive Computation and Information Technologies: Proceedings of ICICIT 2020; Springer: Singapore, 2021; pp. 285–297. [Google Scholar]
  7. Löppenberg, M.; Yuwono, S.; Diprasetya, M.R.; Schwung, A. Dynamic robot routing optimization: State–space decomposition for operations research-informed reinforcement learning. Robot. Comput.-Integr. Manuf. 2024, 90, 102812. [Google Scholar] [CrossRef]
  8. Ou, Y.; Qin, F.; Zhou, K.-Q.; Yin, P.-F.; Mo, L.-P.; Zain, A.M. An improved grey wolf optimizer with multi-strategies coverage in wireless sensor networks. Symmetry 2024, 16, 286. [Google Scholar] [CrossRef]
  9. Egwuche, O.S.; Singh, A.; Ezugwu, A.E.; Greeff, J.; Olusanya, M.O.; Abualigah, L. Machine learning for coverage optimization in wireless sensor networks: A comprehensive review. Ann. Oper. Res. 2023, 1–67. [Google Scholar] [CrossRef]
  10. Liao, W.H.; Kao, Y.; Li, Y.S. A sensor deployment approach using glowworm swarm optimization algorithm in wireless sensor networks. Expert Syst. Appl. 2011, 38, 12180–12188. [Google Scholar] [CrossRef]
  11. Guo, X.; Zhao, C.; Yang, X.; Sun, C. A deterministic sensor node deployment method with target coverage and node connectivity. In Proceedings of the International Conference on Artificial Intelligence and Computational Intelligence, Taiyuan, China, 24–25 September 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 201–207. [Google Scholar]
  12. Boualem, A.; Ayaida, M.; De Runz, C. Semi-deterministic deployment based area coverage optimization in mobile WSN. In Proceedings of the 2021 IEEE Global Communications Conference (GLOBECOM), Madrid, Spain, 7–11 December 2021; IEEE: New York, NY, USA, 2021; pp. 1–6. [Google Scholar]
  13. Zrelli, A.; Ezzedine, T. A new approach of WSN deployment, K-coverage and connectivity in border area. Wirel. Pers. Commun. 2021, 121, 3365–3381. [Google Scholar] [CrossRef]
  14. Kaur, J.; Bindal, A.K. Deployment Constraints based Routing over Wireless Sensor Networks. In Proceedings of the 2018 Fifth International Conference on Parallel, Distributed and Grid Computing (PDGC), Solan, India, 20–22 December 2018; IEEE: New York, NY, USA, 2018; pp. 265–270. [Google Scholar]
  15. Rout, M.; Roy, R. Optimal wireless sensor network information coverage using particle swarm optimisation method. Int. J. Electron. Lett. 2017, 5, 491–499. [Google Scholar] [CrossRef]
  16. Farias, W.A.; Freire, R.C.S.; Carvalho, E.Á.N.; Filho, J.G.N.d.C.; Molina, L.; Freire, E.O. Search and integration of nodes in hybrid wireless sensor networks by exploring the connectivity frontier. IEEE Sens. J. 2024, 24, 8615–8627. [Google Scholar] [CrossRef]
  17. Karpurasundharapondian, P.; Selvi, M. A comprehensive survey on optimization techniques for efficient cluster based routing in WSN. Peer-to-Peer Netw. Appl. 2024, 17, 3080–3093. [Google Scholar] [CrossRef]
  18. Xu, K.; Hassanein, H.; Takahara, G.; Wang, Q. Relay node deployment strategies in heterogeneous wireless sensor networks. IEEE Trans. Mob. Comput. 2009, 9, 145–159. [Google Scholar] [CrossRef]
  19. Quintão, F.P.; Nakamura, F.G.; Mateus, G.R. Evolutionary algorithm for the dynamic coverage problem applied to wireless sensor networks design. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; IEEE: New York, NY, USA, 2005; Volume 2, pp. 1589–1596. [Google Scholar]
  20. Fan, F.; Chu, S.C.; Pan, J.S.; Yang, Q.; Zhao, H. Parallel sine cosine algorithm for the dynamic deployment in wireless sensor networks. J. Internet Technol. 2021, 22, 499–512. [Google Scholar]
  21. Dao, T.C.; Tam, N.T.; Binh, H.T.T. Node depth representation-based evolutionary multitasking optimization for maximizing the network lifetime of wireless sensor networks. Eng. Appl. Artif. Intell. 2024, 128, 107463. [Google Scholar] [CrossRef]
  22. Sahmoud, S.; Topcuoglu, H.R. Exploiting characterization of dynamism for enhancing dynamic multi-objective evolutionary algorithms. Appl. Soft Comput. 2019, 85, 105783. [Google Scholar] [CrossRef]
  23. Moh’d Alia, O. Dynamic relocation of mobile base station in wireless sensor networks using a cluster-based harmony search algorithm. Inf. Sci. 2017, 385, 76–95. [Google Scholar] [CrossRef]
  24. Song, J.; Hu, Y.; Luo, Y. Wireless Sensor Network Coverage Optimization Based on the Novel Enhanced Hunter-Prey Optimization Algorithm. IEEE Sens. J. 2024, 24, 31172–31187. [Google Scholar] [CrossRef]
  25. Wang, J.; Zhu, Z.; Zhang, F.; Liu, Y. An improved salp swarm algorithm for solving node coverage optimization problem in WSN. Peer-to-Peer Netw. Appl. 2024, 17, 1091–1102. [Google Scholar] [CrossRef]
  26. Chen, X.; Zhang, M.; Yang, M.; Wang, D. NHBBWO: A novel hybrid butterfly-beluga whale optimization algorithm with the dynamic strategy for WSN coverage optimization. Peer-to-Peer Netw. Appl. 2025, 18, 1–25. [Google Scholar] [CrossRef]
  27. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  28. Subburathinam, K.; Bakthavatchalam, V.; Pandian, R.K.C.; Subramaniam, K.M. Enhancing wireless sensor network connectivity and coverage using Hybrid GWO-HSA algorithm. Int. J. Commun. Syst. 2024, 37, e5858. [Google Scholar] [CrossRef]
  29. Ahmed, R.; Rangaiah, G.P.; Mahadzir, S.; Mirjalili, S.; Hassan, M.H.; Kamel, S. Memory, evolutionary operator, and local search based improved Grey Wolf Optimizer with linear population size reduction technique. Knowl.-Based Syst. 2023, 264, 110297. [Google Scholar] [CrossRef]
  30. Ebrahimi, S.M.; Hasanzadeh, S.; Khatibi, S. Parameter identification of fuel cell using repairable grey wolf optimization algorithm. Appl. Soft Comput. 2023, 147, 110791. [Google Scholar] [CrossRef]
  31. He, Z.; Yang, S. Improved gray wolf optimization algorithm based on a nonlinear convergence factor. In Proceedings of the 2023 4th International Conference on Computer Science and Management Technology, Xi’an, China, 13–15 October 2023; pp. 598–601. [Google Scholar]
  32. Kishor, A.; Singh, P.K. Empirical study of grey wolf optimizer. In Proceedings of the Fifth International Conference on Soft Computing for Problem Solving: SocProS 2015, Roorkee, India, 18–20 December 2015; Springer: Singapore, 2016; Volume 1, pp. 1037–1049. [Google Scholar]
  33. Ye, S.; Zhou, K.; Zain, A.M.; Wang, F.; Yusoff, Y. A modified harmony search algorithm and its applications in weighted fuzzy production rule extraction. Front. Inf. Technol. Electron. Eng. 2023, 24, 1574–1590. [Google Scholar] [CrossRef]
  34. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  35. Freidlin, M.I.; Wentzell, A.D. Random Perturbations; Springer: New York, NY, USA, 1998. [Google Scholar]
  36. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; IEEE: New York, NY, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  37. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  38. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  39. Yang, X.S. Firefly algorithms for multimodal optimization. In International Symposium on Stochastic Algorithms; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178. [Google Scholar]
  40. Yu, X.; Xu, W.Y.; Li, C.L. Opposition-based learning grey wolf optimizer for global optimization. Knowl.-Based Syst. 2021, 226, 107139. [Google Scholar] [CrossRef]
  41. Zhang, X.; Zhang, Y.; Ming, Z. Improved dynamic grey wolf optimizer. Front. Inf. Technol. Electron. Eng. 2021, 22, 877–890. [Google Scholar] [CrossRef]
Figure 1. (a) Random initialization; (b) After electrostatic adjustment.
Figure 2. Convergence factor comparison curve.
Figure 3. FMGWO algorithm flow chart.
Figure 4. Convergence curves of basic unimodal functions, F1–F6.
Figure 5. Convergence curves of classic multimodal functions, F7–F17.
Figure 6. Convergence curves of complex composite functions, F18–F33.
Figure 7. Optimizing WSN coverage with PSO.
Figure 8. Optimizing WSN coverage with GWO.
Figure 9. Optimizing WSN coverage with CSA.
Figure 10. Optimizing WSN coverage with DE.
Figure 11. Optimizing WSN coverage with GA.
Figure 12. Optimizing WSN coverage with FA.
Figure 13. Optimizing WSN coverage with OGWO.
Figure 14. Optimizing WSN coverage with DGWO1.
Figure 15. Optimizing WSN coverage with DGWO2.
Figure 16. Optimizing WSN coverage with FMGWO.
Figure 17. Convergence curves of each algorithm (Nodes 20).
Figure 18. Convergence curves of each algorithm (Nodes 25).
Figure 19. Convergence curves of each algorithm (Nodes 30).
Table 1. Comparison algorithm name for ablation experiment.
Algorithm Name | Function Description
GWO | Standard gray wolf algorithm
GWO1 | Only remove the strategy in Section 4.1 of this article
GWO2 | Only remove the strategy in Section 4.2 of this article
GWO3 | Only remove the strategies in Section 4.3 and Section 4.4 of this article
GWO4 | Only remove the strategy in Section 4.5 of this article
FMGWO | Combining all strategies in this article
Table 2. Basic unimodal functions, F1–F6.
Function | n | Range | fmin
F1 = Σ_{i=1}^{n} (x_i + 0.5)^2 | 30 | [−100, 100] | 0
F2 = 0.26(x_1^2 + x_2^2) − 0.48x_1x_2 | 2 | [−10, 10] | 0
F3 = Σ_{i=1}^{n} (x_i − 1)^2 − Σ_{i=2}^{n} x_i x_{i−1} | 6 | [−36, 36] | −50
F4 = Σ_{i=1}^{n} (x_i − 1)^2 − Σ_{i=2}^{n} x_i x_{i−1} | 10 | [−100, 100] | −210
F5 = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] | 30 | [−30, 30] | 0
F6 = (x_1 + 2x_2 − 7)^2 + (2x_1 + x_2 − 5)^2 | 2 | [−10, 10] | 0
Table 3. Classic multimodal functions, F7–F17.
Function | n | Range | fmin
F7 = sin^2(3πx_1) + (x_1 − 1)^2[1 + sin^2(3πx_2)] + (x_2 − 1)^2[1 + sin^2(2πx_2)] | 2 | [−10, 10] | 0
F8 = sin^2(πw_1) + Σ_{i=1}^{n−1} (w_i − 1)^2[1 + 10sin^2(πw_i + 1)] + (w_n − 1)^2[1 + sin^2(2πw_n)], w_i = 1 + (x_i − 1)/4, i = 1, 2, …, n | 30 | [−10, 10] | 0
F9 = −cos(x_1)cos(x_2)exp(−(x_1 − π)^2 − (x_2 − π)^2) | 2 | [−100, 100] | −1
F10 = x_1^2 + 2x_2^2 − 0.3cos(3πx_1) − 0.4cos(4πx_2) + 0.7 | 2 | [−100, 100] | 0
F11 = Σ_{i=1}^{n} [x_i^2 − 10cos(2πx_i) + 10] | 30 | [−5.12, 5.12] | 0
F12 = −Σ_{i=1}^{n} x_i sin(√|x_i|) | 30 | [−500, 500] | −12,569.5
F13 = −Σ_{i=1}^{n} sin(x_i)(sin(i·x_i^2/π))^20 | 2 | [0, π] | −1.8013
F14 = −Σ_{i=1}^{n} sin(x_i)(sin(i·x_i^2/π))^20 | 5 | [0, π] | −4.6877
F15 = −Σ_{i=1}^{n} sin(x_i)(sin(i·x_i^2/π))^20 | 10 | [0, π] | −9.6602
F16 = 0.5 + [sin^2(x_1^2 + x_2^2) − 0.5]/[1 + 0.001(x_1^2 + x_2^2)]^2 | 2 | [−100, 100] | 0
F17 = 4x_1^2 − 2.1x_1^4 + (1/3)x_1^6 + x_1x_2 − 4x_2^2 + 4x_2^4 | 2 | [−5, 5] | −1.03163
Table 4. Complex composite functions, F18–F33.
Function | n | Range | fmin
F18 = (1.5 − x_1 + x_1x_2)^2 + (2.25 − x_1 + x_1x_2^2)^2 + (2.625 − x_1 + x_1x_2^3)^2 | 5 | [−4.5, 4.5] | 0
F19 = 100(x_1^2 − x_2)^2 + (x_1 − 1)^2 + (x_3 − 1)^2 + 90(x_3^2 − x_4)^2 + 10.1[(x_2 − 1)^2 + (x_4 − 1)^2] + 19.8(x_2 − 1)(x_4 − 1) | 4 | [−10, 10] | 0
F20 = [1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (x_i − a_ij)^6)]^−1 | 2 | [−65.536, 65.536] | 0.998
F21 = x_1^2 + 2x_2^2 − 0.3cos(3πx_1)cos(4πx_2) + 0.3 | 2 | [−100, 100] | 0
F22 = x_1^2 + 2x_2^2 − 0.3cos(3πx_1 + 4πx_2) + 0.3 | 2 | [−100, 100] | 0
F23 = [Σ_{i=1}^{5} i·cos((i + 1)x_1 + i)]·[Σ_{i=1}^{5} i·cos((i + 1)x_2 + i)] | 2 | [−10, 10] | −186.7309
F24 = [1 + (x_1 + x_2 + 1)^2(19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1x_2 + 3x_2^2)] × [30 + (2x_1 − 3x_2)^2(18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1x_2 + 27x_2^2)] | 2 | [−2, 2] | 3
F25 = Σ_{i=1}^{11} [a_i − x_1(b_i^2 + b_i x_2)/(b_i^2 + b_i x_3 + x_4)]^2 | 4 | [−5, 5] | 0.00031
F26 = −Σ_{i=1}^{10} [(x − a_i)(x − a_i)^T + c_i]^−1 | 4 | [0, 10] | −10.4028
F27 = Σ_{k=1}^{n} [Σ_{i=1}^{n} (i^k + β)((x_i/i)^k − 1)]^2 | 4 | [−4, 4] | 0
F28 = Σ_{k=1}^{n} [(Σ_{i=1}^{n} x_i^k) − b_k]^2 | 4 | [0, 4] | 0
F29 = −Σ_{i=1}^{4} c_i exp(−Σ_{j=1}^{3} a_ij(x_j − p_ij)^2) | 3 | [0, 1] | −3.86
F30 = −Σ_{i=1}^{4} c_i exp(−Σ_{j=1}^{6} a_ij(x_j − p_ij)^2) | 6 | [0, 1] | −3.32
F31 = Σ_{i=1}^{n} [Σ_{j=1}^{n} (a_ij sin α_j + b_ij cos α_j) − Σ_{j=1}^{n} (a_ij sin x_j + b_ij cos x_j)]^2 | 2 | [−π, π] | 0
F32 = Σ_{i=1}^{n} [Σ_{j=1}^{n} (a_ij sin α_j + b_ij cos α_j) − Σ_{j=1}^{n} (a_ij sin x_j + b_ij cos x_j)]^2 | 5 | [−π, π] | 0
F33 = Σ_{i=1}^{n} [Σ_{j=1}^{n} (a_ij sin α_j + b_ij cos α_j) − Σ_{j=1}^{n} (a_ij sin x_j + b_ij cos x_j)]^2 | 10 | [−π, π] | 0
Table 5. Optimal values of each algorithm tested on F1–F6 functions.
Function | GWO | GWO1 | GWO2 | GWO3 | GWO4 | FMGWO
F1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F2 | 1.49 × 10^−121 | 2.00 × 10^−147 | 2.90 × 10^−236 | 3.70 × 10^−144 | 0.00 | 1.30 × 10^−153
F3 | −5.00 × 10^1 | −5.00 × 10^1 | −5.00 × 10^1 | −5.00 × 10^1 | −5.00 × 10^1 | −5.00 × 10^1
F4 | −2.10 × 10^2 | −2.10 × 10^2 | −2.10 × 10^2 | −2.10 × 10^2 | −2.10 × 10^2 | −2.10 × 10^2
F5 | 2.60 × 10^1 | 2.60 × 10^1 | 2.61 × 10^1 | 2.53 × 10^1 | 2.58 × 10^1 | 2.56 × 10^1
F6 | 5.79 × 10^−9 | 1.11 × 10^−8 | 7.75 × 10^−8 | 6.82 × 10^−9 | 1.44 × 10^−8 | 3.39 × 10^−9
Table 6. Average of each algorithm tested on F1–F6 functions.
Function | GWO | GWO1 | GWO2 | GWO3 | GWO4 | FMGWO
F1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F2 | 2.30 × 10^−100 | 9.60 × 10^−114 | 5.40 × 10^−228 | 1.70 × 10^−116 | 1.02 × 10^−72 | 1.00 × 10^−134
F3 | −4.93 × 10^1 | −5.00 × 10^1 | −5.00 × 10^1 | −5.00 × 10^1 | −5.00 × 10^1 | −5.00 × 10^1
F4 | −1.69 × 10^2 | −2.10 × 10^2 | −2.10 × 10^2 | −1.75 × 10^2 | −2.10 × 10^2 | −2.10 × 10^2
F5 | 2.71 × 10^1 | 2.71 × 10^1 | 2.62 × 10^1 | 2.70 × 10^1 | 2.78 × 10^1 | 2.60 × 10^1
F6 | 7.07 × 10^−7 | 1.21 × 10^−7 | 1.44 × 10^−7 | 8.22 × 10^−8 | 1.21 × 10^−7 | 1.67 × 10^−8
Table 7. Standard deviation of each algorithm tested on F1–F6 functions.
Function | GWO | GWO1 | GWO2 | GWO3 | GWO4 | FMGWO
F1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F2 | 1.24 × 10^−99 | 4.80 × 10^−113 | 0.00 | 5.50 × 10^−116 | 5.46 × 10^−72 | 3.10 × 10^−134
F3 | 3.76 | 5.34 × 10^−5 | 2.17 × 10^−5 | 6.47 × 10^−5 | 3.08 × 10^−5 | 1.23 × 10^−5
F4 | 4.68 × 10^1 | 1.24 × 10^−2 | 3.31 × 10^−3 | 5.56 × 10^1 | 1.18 × 10^−2 | 1.07 × 10^−3
F5 | 6.40 × 10^−1 | 1.00 | 3.88 × 10^−2 | 1.01 | 9.11 × 10^−1 | 1.59 × 10^−1
F6 | 7.30 × 10^−7 | 1.27 × 10^−7 | 4.06 × 10^−8 | 7.48 × 10^−8 | 1.02 × 10^−7 | 8.80 × 10^−9
Table 8. Worst values of each algorithm tested on F1–F6 functions.
Function | GWO | GWO1 | GWO2 | GWO3 | GWO4 | FMGWO
F1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F2 | 6.91 × 10^−99 | 2.70 × 10^−112 | 5.60 × 10^−227 | 2.90 × 10^−115 | 3.04 × 10^−71 | 1.40 × 10^−133
F3 | −2.91 × 10^1 | −5.00 × 10^1 | −5.00 × 10^1 | −5.00 × 10^1 | −5.00 × 10^1 | −5.00 × 10^1
F4 | −5.58 × 10^1 | −2.10 × 10^2 | −2.10 × 10^2 | −3.02 × 10^1 | −2.10 × 10^2 | −2.10 × 10^2
F5 | 2.80 × 10^1 | 2.95 × 10^1 | 2.62 × 10^1 | 2.88 × 10^1 | 2.91 × 10^1 | 2.62 × 10^1
F6 | 3.04 × 10^−6 | 5.03 × 10^−7 | 2.23 × 10^−7 | 3.61 × 10^−7 | 4.34 × 10^−7 | 3.21 × 10^−8
Table 9. Optimal values of each algorithm tested on F7–F17 functions.
Function | GWO | GWO1 | GWO2 | GWO3 | GWO4 | FMGWO
F7 | 2.38 × 10^−9 | 7.56 × 10^−8 | 1.14 × 10^−7 | 9.16 × 10^−9 | 1.70 × 10^−8 | 3.69 × 10^−11
F8 | 3.28 × 10^−8 | 5.07 × 10^−10 | 6.25 × 10^−8 | 1.74 × 10^−10 | 1.93 × 10^−9 | 2.15 × 10^−10
F9 | −1.00 | −1.00 | −1.00 | −1.00 | −1.00 | −1.00
F10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F11 | −5.29 × 10^2 | −4.04 × 10^3 | −6.26 × 10^3 | −5.42 × 10^2 | −3.31 × 10^3 | −4.08 × 10^3
F12 | −7.04 × 10^3 | −7.66 × 10^3 | −7.29 × 10^3 | −8.31 × 10^3 | −8.66 × 10^3 | −8.40 × 10^3
F13 | −1.80 | −1.80 | −1.80 | −1.80 | −1.80 | −1.80
F14 | −4.69 | −4.65 | −4.65 | −4.69 | −4.69 | −4.69
F15 | −9.10 | −9.58 | −8.64 | −9.26 | −9.06 | −9.16
F16 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F17 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03
Table 10. Average of each algorithm tested on F7–F17 functions.
Function | GWO | GWO1 | GWO2 | GWO3 | GWO4 | FMGWO
F7 | 5.24 × 10^−7 | 1.12 × 10^−7 | 2.24 × 10^−7 | 1.53 × 10^−7 | 2.23 × 10^−7 | 1.84 × 10^−8
F8 | 5.07 × 10^−7 | 1.22 × 10^−7 | 1.05 × 10^−7 | 1.63 × 10^−7 | 1.30 × 10^−7 | 2.00 × 10^−8
F9 | −1.00 | −1.00 | −1.00 | −1.00 | −1.00 | −1.00
F10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F11 | −4.35 × 10^2 | −2.73 × 10^3 | −5.82 × 10^3 | −4.51 × 10^2 | −2.44 × 10^3 | −3.17 × 10^3
F12 | −5.87 × 10^3 | −6.15 × 10^3 | −6.70 × 10^3 | −6.61 × 10^3 | −6.21 × 10^3 | −7.73 × 10^3
F13 | −1.80 | −1.80 | −1.80 | −1.80 | −1.77 | −1.80
F14 | −4.23 | −4.41 | −4.60 | −4.44 | −4.37 | −4.66
F15 | −7.37 | −7.96 | −8.26 | −7.95 | −7.65 | −8.84
F16 | 5.01 × 10^−3 | 3.57 × 10^−3 | 0.00 | 2.59 × 10^−3 | 8.85 × 10^−3 | 0.00
F17 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03
Table 11. Standard deviation of each algorithm tested on F7–F17 functions.
Function | GWO | GWO1 | GWO2 | GWO3 | GWO4 | FMGWO
F7 | 5.19 × 10^−7 | 2.63 × 10^−8 | 7.74 × 10^−8 | 1.22 × 10^−7 | 1.94 × 10^−7 | 1.23 × 10^−8
F8 | 4.15 × 10^−7 | 9.95 × 10^−8 | 3.48 × 10^−8 | 1.74 × 10^−7 | 1.26 × 10^−7 | 1.31 × 10^−8
F9 | 9.09 × 10^−7 | 2.03 × 10^−7 | 4.95 × 10^−8 | 1.33 × 10^−7 | 2.72 × 10^−7 | 1.28 × 10^−8
F10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F11 | 3.88 × 10^1 | 6.48 × 10^2 | 2.42 × 10^2 | 8.10 × 10^1 | 4.23 × 10^2 | 3.40 × 10^2
F12 | 7.92 × 10^2 | 1.21 × 10^3 | 5.39 × 10^2 | 1.46 × 10^3 | 1.19 × 10^3 | 3.02 × 10^2
F13 | 2.54 × 10^−6 | 1.61 × 10^−6 | 4.93 × 10^−7 | 1.90 × 10^−6 | 1.44 × 10^−1 | 8.22 × 10^−8
F14 | 3.71 × 10^−1 | 3.04 × 10^−1 | 5.72 × 10^−2 | 3.05 × 10^−1 | 3.58 × 10^−1 | 2.01 × 10^−2
F15 | 1.05 | 1.27 | 2.40 × 10^−1 | 8.45 × 10^−1 | 8.36 × 10^−1 | 1.65 × 10^−1
F16 | 4.77 × 10^−3 | 4.68 × 10^−3 | 0.00 | 4.30 × 10^−3 | 2.64 × 10^−3 | 0.00
F17 | 3.13 × 10^−8 | 4.04 × 10^−9 | 1.98 × 10^−9 | 3.24 × 10^−9 | 7.35 × 10^−9 | 6.74 × 10^−10
Table 12. Worst values of each algorithm tested on F7–F17 functions.
Function | GWO | GWO1 | GWO2 | GWO3 | GWO4 | FMGWO
F7 | 2.10 × 10^−6 | 1.59 × 10^−7 | 3.65 × 10^−7 | 4.31 × 10^−7 | 7.67 × 10^−7 | 3.74 × 10^−8
F8 | 1.55 × 10^−6 | 3.26 × 10^−7 | 1.95 × 10^−7 | 7.10 × 10^−7 | 4.98 × 10^−7 | 4.16 × 10^−8
F9 | −1.00 | −1.00 | −1.00 | −1.00 | −1.00 | −1.00
F10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F11 | −3.30 × 10^2 | −1.24 × 10^3 | −5.41 × 10^3 | −2.15 × 10^2 | −1.69 × 10^3 | −2.77 × 10^3
F12 | −4.37 × 10^3 | −4.02 × 10^3 | −5.49 × 10^3 | −3.50 × 10^3 | −3.57 × 10^3 | −7.30 × 10^3
F13 | −1.80 | −1.80 | −1.80 | −1.80 | −1.00 | −1.80
F14 | −3.50 | −3.65 | −4.50 | −3.50 | −3.54 | −4.65
F15 | −4.48 | −5.22 | −7.90 | −5.22 | −5.67 | −8.53
F16 | 9.72 × 10^−3 | 9.72 × 10^−3 | 0.00 | 9.72 × 10^−3 | 9.72 × 10^−3 | 0.00
F17 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03
Table 13. Optimal values of each algorithm tested on F18–F33 functions.
Function | GWO | GWO1 | GWO2 | GWO3 | GWO4 | FMGWO
F18 | 1.50 × 10^−9 | 3.25 × 10^−9 | 3.26 × 10^−8 | 6.11 × 10^−10 | 5.86 × 10^−10 | 5.23 × 10^−12
F19 | 7.48 × 10^−5 | 1.52 × 10^−4 | 2.35 × 10^−3 | 2.80 × 10^−6 | 1.33 × 10^−3 | 1.14 × 10^−5
F20 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1
F21 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F22 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F23 | −1.87 × 10^2 | −1.87 × 10^2 | −1.87 × 10^2 | −1.87 × 10^2 | −1.87 × 10^2 | −1.87 × 10^2
F24 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00
F25 | 3.16 × 10^−4 | 3.08 × 10^−4 | 3.08 × 10^−4 | 3.08 × 10^−4 | 3.08 × 10^−4 | 3.08 × 10^−4
F26 | −1.04 × 10^1 | −1.04 × 10^1 | −1.04 × 10^1 | −1.04 × 10^1 | −1.04 × 10^1 | −1.04 × 10^1
F27 | 3.36 × 10^−3 | 3.24 × 10^−4 | 1.32 × 10^−2 | 1.81 × 10^−3 | 9.30 × 10^−4 | 5.36 × 10^−4
F28 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F29 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86
F30 | −3.32 | −3.32 | −3.32 | −3.32 | −3.32 | −3.32
F31 | 7.77 × 10^−5 | 2.85 × 10^−7 | 4.43 × 10^−4 | 1.64 × 10^−5 | 1.74 × 10^−5 | 4.42 × 10^−6
F32 | 7.83 × 10^−2 | 1.79 × 10^−2 | 5.42 × 10^−1 | 1.31 × 10^−2 | 3.84 × 10^−2 | 8.44 × 10^−3
F33 | 1.16 × 10^2 | 9.51 × 10^1 | 6.59 × 10^2 | 6.77 × 10^1 | 2.47 × 10^2 | 2.06 × 10^1
Table 14. Average of each algorithm tested on F18–F33 functions.
Function | GWO | GWO1 | GWO2 | GWO3 | GWO4 | FMGWO
F18 | 2.31 × 10^−7 | 3.74 × 10^−8 | 5.67 × 10^−8 | 4.72 × 10^−8 | 5.08 × 10^−2 | 5.19 × 10^−9
F19 | 1.15 | 1.65 | 2.40 × 10^−2 | 7.66 × 10^−1 | 3.39 | 2.34 × 10^−3
F20 | 5.82 | 1.04 | 9.98 × 10^−1 | 1.21 | 6.28 | 9.98 × 10^−1
F21 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F22 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F23 | −1.87 × 10^2 | −1.87 × 10^2 | −1.87 × 10^2 | −1.86 × 10^2 | −1.87 × 10^2 | −1.87 × 10^2
F24 | 3.00 | 3.00 | 3.00 | 3.00 | 8.43 | 3.00
F25 | 9.10 × 10^−3 | 1.13 × 10^−3 | 3.08 × 10^−4 | 1.16 × 10^−3 | 2.24 × 10^−3 | 3.08 × 10^−4
F26 | −1.04 × 10^1 | −1.04 × 10^1 | −1.04 × 10^1 | −9.80 | −9.62 | −1.04 × 10^1
F27 | 9.17 | 2.52 × 10^−1 | 3.68 × 10^−2 | 2.06 × 10^−1 | 1.27 | 7.73 × 10^−3
F28 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F29 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86
F30 | −3.25 | −3.25 | −3.32 | −3.28 | −3.28 | −3.32
F31 | 2.57 × 10^−2 | 8.93 × 10^−3 | 7.38 × 10^−4 | 2.14 × 10^−3 | 2.75 × 10^−4 | 5.90 × 10^−5
F32 | 8.57 × 10^1 | 7.29 × 10^1 | 3.61 × 10^1 | 2.30 × 10^2 | 1.26 × 10^2 | 2.35 × 10^−1
F33 | 5.23 × 10^3 | 2.98 × 10^3 | 1.04 × 10^3 | 3.27 × 10^3 | 3.01 × 10^3 | 2.89 × 10^2
Table 15. Standard deviation of each algorithm tested on F18–F33 functions.
Function | GWO | GWO1 | GWO2 | GWO3 | GWO4 | FMGWO
F18 | 1.96 × 10^−7 | 3.87 × 10^−8 | 1.62 × 10^−8 | 3.82 × 10^−8 | 1.90 × 10^−1 | 4.44 × 10^−9
F19 | 1.31 | 2.13 | 3.06 × 10^−2 | 8.13 × 10^−1 | 2.92 | 2.94 × 10^−3
F20 | 4.42 | 1.78 × 10^−1 | 1.26 × 10^−11 | 6.45 × 10^−1 | 4.27 | 5.64 × 10^−12
F21 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F22 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F23 | 2.35 × 10^−1 | 1.43 × 10^−1 | 4.58 × 10^−4 | 1.02 | 1.67 × 10^−1 | 2.44 × 10^−5
F24 | 4.35 × 10^−5 | 1.32 × 10^−5 | 4.50 × 10^−7 | 9.44 × 10^−6 | 2.02 × 10^1 | 2.54 × 10^−7
F25 | 9.85 × 10^−3 | 2.28 × 10^−3 | 9.82 × 10^−8 | 3.62 × 10^−3 | 4.11 × 10^−3 | 5.58 × 10^−9
F26 | 9.42 × 10^−4 | 2.54 × 10^−4 | 1.57 × 10^−4 | 1.85 | 2.03 | 7.08 × 10^−5
F27 | 4.06 × 10^1 | 3.18 × 10^−1 | 1.78 × 10^−2 | 2.23 × 10^−1 | 1.38 | 6.97 × 10^−3
F28 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F29 | 1.49 × 10^−3 | 2.07 × 10^−4 | 3.43 × 10^−6 | 3.05 × 10^−4 | 3.04 × 10^−3 | 3.40 × 10^−7
F30 | 8.00 × 10^−2 | 5.92 × 10^−2 | 5.43 × 10^−6 | 5.78 × 10^−2 | 9.75 × 10^−2 | 4.51 × 10^−7
F31 | 5.57 × 10^−2 | 2.84 × 10^−2 | 1.85 × 10^−4 | 8.85 × 10^−3 | 2.19 × 10^−4 | 4.07 × 10^−5
F32 | 8.44 × 10^1 | 9.52 × 10^1 | 5.76 × 10^1 | 4.88 × 10^2 | 2.04 × 10^2 | 2.42 × 10^−1
F33 | 8.44 × 10^3 | 4.15 × 10^3 | 3.33 × 10^2 | 4.23 × 10^3 | 3.59 × 10^3 | 1.85 × 10^2
Table 16. Worst values of each algorithm tested on F18–F33 functions.
Function | GWO | GWO1 | GWO2 | GWO3 | GWO4 | FMGWO
F18 | 8.45 × 10^−7 | 1.48 × 10^−7 | 8.94 × 10^−8 | 1.48 × 10^−7 | 7.62 × 10^−1 | 1.36 × 10^−8
F19 | 6.86 | 7.84 | 1.08 × 10^−1 | 2.09 | 7.88 | 1.08 × 10^−2
F20 | 1.27 × 10^1 | 1.99 | 9.98 × 10^−1 | 3.97 | 1.55 × 10^1 | 9.98 × 10^−1
F21 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F22 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F23 | −1.86 × 10^2 | −1.86 × 10^2 | −1.87 × 10^2 | −1.81 × 10^2 | −1.86 × 10^2 | −1.87 × 10^2
F24 | 3.00 | 3.00 | 3.00 | 3.00 | 8.40 × 10^1 | 3.00
F25 | 2.04 × 10^−2 | 8.97 × 10^−3 | 3.08 × 10^−4 | 2.04 × 10^−2 | 1.29 × 10^−2 | 3.08 × 10^−4
F26 | −1.04 × 10^1 | −1.04 × 10^1 | −1.04 × 10^1 | −2.77 | −2.77 | −1.04 × 10^1
F27 | 2.28 × 10^2 | 1.27 | 6.66 × 10^−2 | 7.71 × 10^−1 | 5.45 | 2.40 × 10^−2
F28 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F29 | −3.85 | −3.86 | −3.86 | −3.86 | −3.85 | −3.86
F30 | −3.08 | −3.20 | −3.32 | −3.20 | −2.84 | −3.32
F31 | 2.11 × 10^−1 | 1.46 × 10^−1 | 1.06 × 10^−3 | 4.91 × 10^−2 | 8.11 × 10^−4 | 1.38 × 10^−4
F32 | 2.14 × 10^2 | 3.42 × 10^2 | 1.62 × 10^2 | 2.37 × 10^3 | 1.09 × 10^3 | 7.57 × 10^−1
F33 | 4.08 × 10^4 | 2.07 × 10^4 | 1.59 × 10^3 | 1.90 × 10^4 | 1.83 × 10^4 | 5.70 × 10^2
Table 17. Parameter setting of wireless sensor network coverage experiment.
Parameter | Value
Monitoring area | 100 m × 100 m
Number of nodes | 20/25/30
Node perception radius r | 12
Total number of iterations T | 500
Number of grids | 100 × 100
Table 18. Initialization parameters of all algorithms.
Algorithms | Parameter Settings
PSO | N = 30, ω = 0.8, c1 = 2, c2 = 2
GWO | N = 30
CSA | AP = 0.1, FL = 2
DE | F = 0.5, CR = 0.9
GA | Pc = 0.8, Pm = 0.1
FA | α = 0.2, β0 = 1, γ = 0.00001
OGWO | N = 30, μ = 2, Jr = 0.05
DGWO1 | N = 30
DGWO2 | N = 30
FMGWO | N = 30
Table 19. Comparison data of WSN application experiments.
Algorithms | Performance Index | Nodes 20 | Nodes 25 | Nodes 30
PSO | Best | 78.98% | 85.85% | 91.55%
PSO | Mean | 75.32% | 83.24% | 88.92%
PSO | Std | 0.014766 | 0.014151 | 0.013767
GWO | Best | 83.17% | 94.56% | 98.07%
GWO | Mean | 81.37% | 91.48% | 95.90%
GWO | Std | 0.011705 | 0.014196 | 0.030498
CSA | Best | 72.61% | 81.86% | 86.25%
CSA | Mean | 70.38% | 78.98% | 84.46%
CSA | Std | 0.009383 | 0.01174 | 0.008077
DE | Best | 84.03% | 87.94% | 91.36%
DE | Mean | 70.78% | 77.64% | 84.39%
DE | Std | 0.040835 | 0.025091 | 0.028479
GA | Best | 75.11% | 83.64% | 89.98%
GA | Mean | 72.37% | 81.00% | 87.03%
GA | Std | 0.011797 | 0.012978 | 0.010928
FA | Best | 68.92% | 79.48% | 85.26%
FA | Mean | 66.43% | 75.46% | 82.55%
FA | Std | 0.013747 | 0.01946 | 0.012655
OGWO | Best | 82.53% | 92.67% | 98.29%
OGWO | Mean | 70.66% | 80.27% | 85.58%
OGWO | Std | 0.060519 | 0.069433 | 0.065456
DGWO1 | Best | 83.98% | 94.03% | 98.15%
DGWO1 | Mean | 81.89% | 91.69% | 96.09%
DGWO1 | Std | 0.013674 | 0.010031 | 0.027406
DGWO2 | Best | 82.67% | 92.95% | 96.98%
DGWO2 | Mean | 78.47% | 88.56% | 94.46%
DGWO2 | Std | 0.020177 | 0.023526 | 0.014734
FMGWO | Best | 84.39% | 94.85% | 98.63%
FMGWO | Mean | 82.47% | 92.24% | 96.57%
FMGWO | Std | 0.0092 | 0.025269 | 0.031685
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
