2.1. Sensing Scheduling Strategies
The research on energy-saving sensing scheduling strategies operates at two levels. The first is adaptive duty-cycle scheduling, which tunes the active, sleep, and idle-listening periods so as to best match event arrival patterns. The second, collaborative sensing, considers sensing coverage: how multiple sensor nodes (rather than a single node) at different spatiotemporal coordinates in a shared area can cooperate to achieve adequate coverage, which enables global optimization of energy consumption.
Sensors consume the most energy when on duty and the least when sleeping. As a result, almost all scheduling strategies, such as ELECTION [6] and additive increase/multiplicative decrease (AIMD) [7], exploit the energy-saving character of the sleep mode. DANCE [8] improves on AIMD [7] by taking the behavior of neighboring nodes into consideration: a sensor abandons its task if a neighbor has already performed it, thereby reducing wasted energy. The authors also argue that the data sampling rate determines the computing and communication load on the central server. Keeping the sampling rate within a specific range through Kalman filtering (with the central server and nodes renegotiating the range whenever it is exceeded) is believed to tackle this problem [9].
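The idea of Kalman-filter-based rate control can be pictured with a minimal sketch. All names, thresholds, and the rate-adjustment rule below are illustrative assumptions, not the actual design of [9]: a scalar Kalman filter tracks the sensed quantity, and the node lowers its sampling rate while the prediction error stays small, clamped to a range negotiated with the central server.

```python
# Hypothetical sketch: a scalar Kalman filter tracks a slowly varying
# reading; the node lowers its sampling rate while the innovation
# (prediction error) stays small, and raises it otherwise.

class ScalarKalman:
    def __init__(self, q=1e-3, r=0.1):
        self.x = 0.0   # state estimate
        self.p = 1.0   # estimate variance
        self.q = q     # process-noise variance
        self.r = r     # measurement-noise variance

    def update(self, z):
        self.p += self.q                  # predict
        k = self.p / (self.p + self.r)    # Kalman gain
        innovation = z - self.x
        self.x += k * innovation          # correct
        self.p *= (1.0 - k)
        return abs(innovation)

def adapt_rate(rate, err, lo=1, hi=32, threshold=0.5):
    """Halve the rate when the filter tracks well, double it otherwise,
    clamped to the range [lo, hi] negotiated with the central server."""
    rate = rate // 2 if err < threshold else rate * 2
    return max(lo, min(hi, rate))

kf = ScalarKalman()
rate = 8
for z in [20.0, 20.1, 20.05, 19.9, 25.0]:   # a jump at the end
    err = kf.update(z)
    rate = adapt_rate(rate, err)
```

The halve/double rule is deliberately AIMD-flavored; any monotone adjustment policy would fit the same loop.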
Jothiraj et al. [
10] recommend dynamically fine-tuning the sensing frequency of each sensor: as the number of sensory readings increases, detection accuracy improves. The challenge, however, lies in modeling the real world. Maintaining the greatest possible coverage is a primary goal when designing a sensor network. Chen et al. [11] defined a sensing-coverage metric that measures a wireless sensor network’s QoS (quality of service), and proposed a polynomial-time optimization algorithm based on graph theory and computational geometry to achieve optimum coverage. The study in [12] broadens the work in [11] by enhancing the algorithm. The first work that considers both energy consumption and sensing coverage was completed by [
13], which introduces an energy-efficient surveillance system (ESS). This was followed by lightweight deployment-aware scheduling (LDAS) [
14,
15], probing environment and adaptive sensing (PEAS) [
16], probing environment and collaborating adaptive sleeping (PECAS) [
17], randomized independent scheduling (RIS) [
18], and so forth. These studies share two common assumptions: (1) each sensor is power-constrained, and (2) the network is expected to run for a long time. Other assumptions include:
Network Structure. The network structure can be flat or hierarchical.
Sensor Placement. The sensing coverage is usually affected by how sensors are initially placed. In most cases, the sensors follow a random distribution or two-dimensional (2-D) Poisson distribution [
12,
19].
Sensing Area. The sensing area can be 2-D circular or 3-D spherical.
Time Synchronization. The sensors are synchronized in time, and can be woken up simultaneously for the next scheduling round.
Failure Model. Almost all the studies assume that sensors fail when energy is exhausted.
Sensor Mobility. Most studies have assumed that the sensor is immobile.
Location Information. These studies typically associate location information with whether, or how much, a sensor’s sensing area overlaps with its neighbor’s.
Distance Information. The distance information can be inferred from the location information.
Randomized independent scheduling (RIS) [
18] can extend the sensor lifetime and obtain an asymptotic
k-coverage, and it is simple: RIS requires no location or distance information, no adjustable transmission range, and no mobility. However, it rests on strict distributional assumptions, for example, that the sensors are Poisson, uniformly, or grid distributed, that the sensing range follows a uniform distribution, and that the network is flat and two-dimensional.
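The RIS rule itself fits in a few lines. The fragment below is an illustrative sketch (function names and parameters are mine, not from [18]): each sensor independently stays awake for a round with probability p_active, and a point overlapped by n deployed sensors is then covered in that round with probability 1 − (1 − p)ⁿ.

```python
import random

def ris_round(num_sensors, p_active, rng):
    """One RIS round: each sensor independently chooses to stay active
    with probability p_active (no coordination, no location or
    distance information needed)."""
    return [rng.random() < p_active for _ in range(num_sensors)]

def expected_coverage(sensors_per_point, p_active):
    """Probability that a point overlapped by n deployed sensors is
    covered by at least one active sensor in a given round."""
    return 1.0 - (1.0 - p_active) ** sensors_per_point

rng = random.Random(42)
schedule = ris_round(100, 0.3, rng)
```

The closed-form coverage probability is what makes RIS amenable to the asymptotic k-coverage analysis cited above.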
Lightweight deployment-aware scheduling (LDAS) [
15], probing environment and adaptive sensing (PEAS) [
16], probing environment and collaborating adaptive sleeping (PECAS) [
17], and so forth, make relatively loose assumptions. LDAS assumes that the sensor nodes own no positioning device, such as GPS, and therefore forces them to achieve the desired coverage by static sensing. PEAS consists of two mechanisms, probing and adaptive sleeping, and requires a high sensor density. PECAS is an updated version of PEAS: a node announces its remaining working time in its replies to neighbors to avoid misperceptions, which, however, increases communication energy consumption. Balanced-energy scheduling (BS) [20] is a scheduling model designed for dense sensor networks; it distributes the sensing and communication tasks across all sensor nodes in a cluster. The assumptions made by the reviewed studies are listed in
Table 1.
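Among the schemes in Table 1, the probing mechanism of PEAS is perhaps the easiest to sketch. The fragment below is a simplified illustration with names of my own choosing (the real protocol also adapts the wake-up rate): a node that wakes up goes back to sleep whenever some working node answers its probe within the probing range, and otherwise starts working itself.

```python
import math

def peas_wakeup(node_pos, working_nodes, probing_range):
    """PEAS-style probe (hypothetical sketch): a node that wakes up
    starts working only if no working node answers within the probing
    range; otherwise it returns to sleep."""
    for w in working_nodes:
        if math.dist(node_pos, w) <= probing_range:
            return "sleep"      # a working neighbor already covers this spot
    return "work"
```

PECAS would differ here only in the reply content: working nodes would also report their remaining working time.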
Applications differ in their necessities. Hence, the served sensor networks have diverse design objectives and priorities. We summarize these design objectives as follows:
Maximizing Network Lifetime. Nearly every scheduling scheme takes this goal into account.
Sensing Coverage. A network achieves k-coverage if any event occurs within the sensing range of at least k sensors; 1-coverage is the minimum requirement for a WSN.
Network Connectivity. Developers favor a model that provides the particular degree of network connectivity the application needs, but this requires a very high sensor density.
Balanced Energy Usage. Because sensing coverage is breached when one node runs out of power before the others, some studies endeavor to spread energy utilization evenly across the nodes.
Simplicity. Sensors have exceptionally restricted memory space and limited computation power. For this reason, simple schemes are more popular.
Robustness. Robustness measures how well a network can withstand downtime and crashes.
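The k-coverage objective above can be checked mechanically for a concrete deployment. Below is a small sketch, assuming 2-D circular sensing areas and a region approximated by sample points (all names are illustrative):

```python
import math

def coverage_degree(point, sensors, sensing_range):
    """Number of sensors whose 2-D circular sensing area contains the point."""
    return sum(1 for s in sensors if math.dist(point, s) <= sensing_range)

def is_k_covered(points, sensors, sensing_range, k):
    """A region (approximated by sample points) is k-covered if every
    point lies within the sensing range of at least k sensors."""
    return all(coverage_degree(p, sensors, sensing_range) >= k
               for p in points)

sensors = [(0, 0), (1, 0), (0, 1)]
```

Sampling the region on a grid turns the geometric definition into a finite check, at the cost of resolution.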
Adaptive self-configuring sEnsor networks topologies (ASCENT) [
18], PEAS, PECAS, Low-energy adaptive clustering hierarchy (LEACH-GA) [
23], IBLEACH [
24], energy-efficient surveillance system (ESS) [
25] and BS [
20] do not take complete coverage of the region as their primary goal. Cooperative spectrum sensing (CSS) [
22], by contrast, does. The coverage configuration protocol (CCP) [
21] introduced the concept of interconnected sensor coverage. It devised an approximation algorithm to construct a topology with near-optimal sensor coverage. The central controller periodically selects sensors along a trajectory until the target area is fully covered.
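The controller's periodic selection can be pictured with a generic greedy set-cover fragment. This is an illustration of the selection idea only, not CCP's actual eligibility algorithm, and all names are mine:

```python
import math

def greedy_cover(points, sensors, sensing_range):
    """Greedy set-cover sketch: repeatedly activate the sensor that
    covers the most still-uncovered sample points, until the target
    area is fully covered or no sensor adds coverage."""
    uncovered = set(range(len(points)))
    active = []
    while uncovered:
        best, gain = None, set()
        for i, s in enumerate(sensors):
            if i in active:
                continue
            g = {j for j in uncovered
                 if math.dist(points[j], s) <= sensing_range}
            if len(g) > len(gain):
                best, gain = i, g
        if best is None:          # remaining points cannot be covered
            break
        active.append(best)
        uncovered -= gain
    return active, uncovered
```

Greedy selection gives the classical logarithmic approximation guarantee for set cover, which is one reason such near-optimal topologies are obtainable in polynomial time.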
Most schemes strive to attain an energy balance [
28], for example, the distributed self-spreading algorithm (DSS) [
26] and intelligent deployment and clustering algorithm (IDCA) [
27]. In DSS, the sensor nodes are deployed randomly and then move under the influence exerted by nearby nodes. In IDCA, by contrast, a node's remaining energy level determines whether it moves. The idea behind both is to reduce the residual-energy differential between nodes.
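The spreading behavior of DSS can be sketched with a simple repulsive-force step. This is a rough illustration with an assumed force law; DSS's actual partial forces and termination criteria differ in detail:

```python
import math

def dss_step(positions, comm_range, step=0.1):
    """One DSS-style iteration (hypothetical sketch): each node moves
    away from neighbors within communication range, so the deployment
    spreads toward a more even coverage."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        fx = fy = 0.0
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            d = math.dist((x, y), (ox, oy))
            if 0 < d < comm_range:
                # repulsive force, decaying as neighbors get farther away
                fx += (x - ox) / d * (comm_range - d)
                fy += (y - oy) / d * (comm_range - d)
        new_positions.append((x + step * fx, y + step * fy))
    return new_positions
```

Iterating this step drives initially clustered nodes apart; an IDCA-style variant would additionally gate the movement on each node's remaining energy.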
Many studies consider network connectivity jointly with sensing coverage, for example, [
29,
30,
31]. When the transmission range of the sensor node is at least twice its sensing range,
k-coverage leads to
k-connectivity [
29,
30]. Typically, high connectivity ensures high robustness, but it also means that data conflicts between nodes can seriously degrade data transfer rates. More recent results presented in [32] drop the assumption that the transmission range of the sensor nodes is at least twice their sensing range. Dhumal et al. [
31] considered a tiered sensor network containing sensors that may fail, and discussed sensing coverage, network connectivity, and network diameter. In [
19], the authors proposed an optimal deployment strategy that achieves full coverage together with two-connectivity for the given communication and sensing ranges.
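The connectivity side of these results is straightforward to verify for a concrete deployment. Below is a BFS check of the communication graph (illustrative code, not from the cited works); setting the transmission range to twice the sensing range mirrors the condition under which coverage implies connectivity.

```python
import math
from collections import deque

def is_connected(positions, transmission_range):
    """Check 1-connectivity of the communication graph, where two
    nodes are linked if they lie within transmission range."""
    if not positions:
        return True
    seen = {0}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in range(len(positions)):
            if j not in seen and \
               math.dist(positions[i], positions[j]) <= transmission_range:
                seen.add(j)
                queue.append(j)
    return len(seen) == len(positions)
```

Checking k-connectivity rather than 1-connectivity would require repeating the search after removing each set of k − 1 nodes, or a max-flow formulation.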
The design objectives of the reviewed studies are summarized in
Table 2.
2.2. Optimization Algorithms for Multi-Modal Functions
Population-based or single-solution search-based optimization algorithms (viz., meta-heuristics and hyper-heuristics) have strong global search ability. Typical examples include, inter alia, evolutionary and swarm intelligence algorithms, such as the genetic algorithm (GA) [
33], differential evolution (DE) algorithm [
34], particle swarm optimization (PSO) algorithm [
35], comprehensive learning particle swarm optimization (CLPSO) [
36] and artificial bee colony (ABC) algorithm [
37]. These algorithms are essentially trial-and-error methods and can take many thousands or even millions of iterations to converge. Newly emerging algorithms, such as the multi-layered gravitational search algorithm (MLGSA) [
38] and quantum tabu search (QTS) algorithm [
39], improve on the classical versions by avoiding premature convergence and entrapment in local optima. In practical problems we usually search for the global extremum of a complicated or unknown function, but even finding a single local minimum of a relatively simple yet very high-dimensional function can be a formidable challenge, as in multi-modal function optimization (MFO) [
40]. Given a multi-modal problem, the optimization task is to identify as many optimal solutions (global and local) as possible, helping the decision-maker better understand the problem at hand.
Many techniques have been developed to locate multiple optima; these are termed “niching” methods. A niching method can be built into a standard search-based optimization algorithm to identify several optimal or sub-optimal solutions sequentially or concurrently. Sequential approaches discover optimal solutions gradually over time, while concurrent methods encourage and maintain many stable sub-populations within a single population. Conventional niching techniques include crowding, fitness sharing, derating, covariance matrix adaptation, clearing, species conserving, and so forth. More recently, some variants of meta-heuristic algorithms, such as an ant colony system with nonlinear pheromone update (ACS-NP) [
41], and comprehensive learning particle swarm optimization with local search (CLPSO-LS) [
42], have incorporated niching methods. Although niching methods first appeared over 30 years ago (in the 1980s), they, and multi-modal optimization in general, are re-emerging as an increasingly important research subject, attracting researchers from a broad spectrum of areas, including Evolutionary Computation (EC) [
43] and Swarm Intelligence (SI) [
42].
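Of the niching techniques above, crowding lends itself to a compact sketch: each offspring competes only with the nearest existing solution, so sub-populations sitting on different peaks are preserved rather than overrun. The replacement rule below is a simplified deterministic-crowding illustration; the fitness function and numbers are hypothetical:

```python
def crowding_replace(parents, offspring, fitness):
    """Deterministic-crowding replacement (sketch): each offspring
    competes with the nearest member of the population and replaces
    it only if fitter, preserving niches on separate peaks."""
    pop = list(parents)
    for child in offspring:
        nearest = min(range(len(pop)), key=lambda i: abs(pop[i] - child))
        if fitness(child) > fitness(pop[nearest]):
            pop[nearest] = child
    return pop

# A bimodal toy fitness with peaks at x = -1 and x = +1.
bimodal = lambda x: -min((x - 1) ** 2, (x + 1) ** 2)
```

Plain replacement of the globally worst individual would, by contrast, let the population collapse onto a single peak.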
The Fibonacci tree optimization algorithm (FTO) [
44] is a sophisticated optimization algorithm. It approaches the optimal solution by alternating iterations of global scanning and local scanning, and makes full use of computer memory to record the optimization process. FTO can provide a reasonable approximation of the global optimum of a function over an ample search space. In each iteration, golden-ratio separation is used to compress the search space, so local optimal solutions can also be reached. It is particularly suited to multi-peak/multi-modal function optimization.
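FTO's golden-ratio compression builds on the classical golden-section search, which is easy to sketch. The code below illustrates only that interval-compression step, not FTO's alternating global/local scanning:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Classical golden-section search: each iteration shrinks the
    bracket [a, b] by the factor 1/phi (about 0.618) while keeping
    the minimum of a unimodal f inside it."""
    inv_phi = (math.sqrt(5) - 1) / 2
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                  # keep the left sub-interval
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                  # keep the right sub-interval
            d = a + inv_phi * (b - a)
    return (a + b) / 2
```

On a multi-modal function this only refines whichever basin the bracket starts in, which is why FTO pairs it with a global scanning phase.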
The comparison of features of the aforementioned optimization algorithms is shown in
Table 3.