Article

Toward a Distributed Potential Game Optimization to Sensor Area Coverage Problem

The College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China
*
Author to whom correspondence should be addressed.
Current address: No. 460, Huangshan Road, Shushan District, Hefei 230037, China.
Mathematics 2025, 13(17), 2709; https://doi.org/10.3390/math13172709
Submission received: 20 June 2025 / Revised: 16 August 2025 / Accepted: 17 August 2025 / Published: 22 August 2025
(This article belongs to the Special Issue Dynamic Analysis and Decision-Making in Complex Networks)

Abstract

The sensor coverage problem is a well-known combinatorial optimization problem that continues to attract the attention of many researchers. Existing game-based algorithms mainly pursue a feasible solution to this problem. In this paper, the problem is formulated as a potential game, and a dynamic fictitious play (DFP) algorithm is proposed that is guaranteed to converge to a Nash equilibrium. Compared with existing representative algorithms, our proposed algorithm performs best in terms of average coverage, best value, and standard deviation within a suitable time. In addition, increasing the memory length helps to reach a better Nash equilibrium.

1. Introduction

The sensor area coverage problem is a fundamental optimization problem in wireless sensor systems, which has been proven to be NP-hard; its goal is to efficiently utilize limited sensors to cover the largest area at minimal cost [1]. The problem also arises in many engineering applications, such as military reconnaissance [2,3], unmanned autonomous intelligent systems [4,5], the Internet of Things [6,7], and disaster relief [8,9].
Due to its wide applicability, the sensor area coverage problem has attracted many researchers committed to developing optimization approaches for it. Early well-known approaches include the genetic algorithm [10,11], the particle swarm algorithm [12,13], the simulated annealing algorithm [14,15], and the ant colony algorithm [16,17]. These algorithms have long been proven capable of solving optimization problems and guarantee that at least one feasible solution is obtained. However, with the rapid development of science and technology, sensors are now mass-produced; in particular, when hundreds or thousands of sensors cooperate on this problem, the above algorithms are no longer applicable [18]. The reason is that they are essentially centralized algorithms, requiring a central manager to make decisions for all sensors [19]. This requirement not only places a heavy computational burden on the sensor system, but also leads to high communication requirements and poor system robustness [20].
Therefore, developing distributed approaches to the sensor area coverage problem is the current research trend, and the most common one is the local search algorithm [21]. This algorithm starts from an initial solution, generates neighboring solutions through neighborhood actions, evaluates their quality, and repeats this process until a termination condition is reached [22]. Although the local search algorithm is simple and easy to implement, it is prone to getting stuck in local optima, and the quality of the final solution depends heavily on the choice of the initial solution. Note that a local optimum is usually not a global optimum and may not even be an approximate optimum [23]. Therefore, in recent years, researchers have turned their attention to game theory.
Game theory studies how individuals form reasonable strategies in complex interactions [24,25]. By establishing a game model for an optimization problem and designing a corresponding game algorithm, an equilibrium solution can be obtained [26]. As a special type of game, the potential game has been widely applied. In a potential game there exists a potential function, and the change in an individual's utility caused by changing its strategy maps onto this potential function; by optimizing the potential function, the equilibrium state of the game (i.e., a Nash equilibrium) can be reached [27]. For optimization problems that can be modeled as potential games, various game algorithms have emerged, such as the log-linear learning algorithm [28], the binary log-linear learning algorithm [29], and the fictitious play algorithm [30]. These algorithms share a common point: at each time step, each player uses only the information from the most recent time step (i.e., local information) to make decisions. Their goal is to obtain a feasible solution, but this feasible solution is usually far from an approximate optimal solution.
To this end, for the sensor area coverage problem, we present a potential game model in which the potential function reflects the global objective and each player's utility function is the wonderful life utility. Then, we propose a dynamic fictitious play (DFP) algorithm, in which each player has a memory that stores strategies from past steps and makes decisions based on local information. Subsequently, we verify that the proposed DFP algorithm converges to a Nash equilibrium under any memory length. Numerical simulations further demonstrate its value.
The main contributions of this paper are summarized as follows.
(1) We present a potential game for the sensor area coverage problem, where each player's utility function is the wonderful life utility.
(2) We propose a game-based distributed algorithm, i.e., the DFP algorithm, which converges to a Nash equilibrium under any memory length.
(3) Numerical simulations demonstrate the effectiveness and superiority of the proposed DFP algorithm in comparison with existing representative optimization algorithms. In addition, the simulations show that increasing the memory length yields better sensor area coverage.
The rest of this paper is organized as follows. Section 2 describes the sensor area coverage problem and presents a potential game model for it. Section 3 presents the proposed DFP algorithm in detail and provides a theoretical analysis of its convergence. Section 4 provides numerical simulations to verify the performance of the proposed algorithm. Section 5 summarizes this article and outlines future research directions. The main notations of this article are listed in Abbreviations.

2. Preliminaries and Game Model

2.1. Sensor Area Coverage Problem

Consider a set of sensors V = {1, 2, …, n} randomly deployed to monitor (cover) an area Θ. When working, each sensor i ∈ V covers a circular effective area Θ_i(r_i) with coverage radius r_i. The location of the i-th sensor is denoted by loc_i. Without loss of generality, assume that all sensors have the same communication radius c. The set Γ_i of neighbors of sensor i is defined as
Γ_i = { j ∈ V : ‖loc_i − loc_j‖ ≤ c }.
Then, the sensor area coverage problem is for each sensor to choose a coverage decision that collectively maximizes the global objective function
f(x) = |⋃_{i∈V} Θ_i(r_i)| − Σ_{i∈V} C_i(r_i),
where ⋃_{i∈V} Θ_i(r_i) is the union of all areas covered by the sensors and C_i(r_i) is the cost function of sensor i under coverage radius r_i, expressed as
C_i(r_i) = k S_max(r_i),
where S_max(r_i) is the maximum area covered by sensor i, k ∈ (0, 1) is the penalty value, and S_max(r_i) is expressed as
S_max(r_i) = π r_i².
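As a concrete reading of the objective above, the following sketch evaluates f(x) on the unit-grid discretization used later in Section 4 (the function names and the grid assumption are illustrative, not part of the original formulation):

```python
import math

def covered_cells(loc, r, d):
    """Grid cells of {0,...,d} x {0,...,d} within Euclidean distance r of a
    sensor at loc; an off sensor (r = 0) covers nothing."""
    if r <= 0:
        return set()
    x0, y0 = loc
    return {(x, y) for x in range(d + 1) for y in range(d + 1)
            if (x - x0) ** 2 + (y - y0) ** 2 <= r ** 2}

def global_objective(locs, radii, d, k=0.2):
    """Grid-discretized f(x): the size of the union of all covered regions
    minus the summed costs C_i(r_i) = k * pi * r_i^2."""
    union = set()
    for loc, r in zip(locs, radii):
        union |= covered_cells(loc, r, d)
    cost = sum(k * math.pi * r ** 2 for r in radii)
    return len(union) - cost
```

Note that overlapping sensors enlarge the cost term but not the union term, which is exactly why switching redundant sensors off can raise f(x).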

2.2. Potential Game

In this subsection, we present the sensor area coverage problem as a game and prove that this game is a potential game.
Definition 1.
The sensor area coverage problem can be defined as a game G = (V, X, F), where
(i) V = {1, 2, …, n} is the set of all players (sensors);
(ii) X = ∏_{i=1}^n X_i is the strategy space, where X_i = {r_0, r_1, …, r_m} is the strategy set of player i, with r_0 = 0 and r_z > 0 for z ∈ {1, …, m};
(iii) F = {f_1, f_2, …, f_n} is the set of utility (i.e., payoff) functions, where f_i ∈ F is the utility function of player i, i ∈ V.
Based on Definition 1, x = (x_1, x_2, …, x_n) is the strategy profile, x_i is the strategy of player i, and x_{−i} = (x_1, …, x_{i−1}, x_{i+1}, …, x_n). Thus, we have x = (x_i, x_{−i}). Next, we provide the definition of a potential game.
Definition 2
([31]). Consider a game G = (V, X, F) with individual utility functions f_i. If there exists a function φ satisfying
f_i(x_i′, x_{−i}) − f_i(x_i, x_{−i}) = φ(x_i′, x_{−i}) − φ(x_i, x_{−i}), ∀ x_i, x_i′ ∈ X_i,
then the game G = (V, X, F) is called a potential game, and φ is the corresponding potential function.
Since whenever the coverage areas of a pair of sensors overlap they are within each other's communication range, the communication radius is set to c = 2r_m. The utility function of player i is designed as the wonderful life utility
f_i(x_i, x_{−i}) = ( |Θ_i(x_i) ∖ ⋃_{j∈Γ_i} Θ_j(x_j)| − C_i(x_i) ) / S_max(r_m).
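A minimal sketch of this utility on the Section 4 grid discretization (the function names are illustrative; the neighbor lists are assumed precomputed from the communication radius):

```python
import math

def covered_cells(loc, r, d):
    # grid cells within distance r of loc; an off sensor (r = 0) covers nothing
    if r <= 0:
        return set()
    x0, y0 = loc
    return {(x, y) for x in range(d + 1) for y in range(d + 1)
            if (x - x0) ** 2 + (y - y0) ** 2 <= r ** 2}

def wonderful_life_utility(i, locs, radii, neighbors, d, k=0.2, r_m=5):
    """f_i: cells covered by player i and by none of its neighbors, minus the
    cost C_i(r_i) = k * pi * r_i^2, normalized by S_max(r_m) = pi * r_m^2."""
    own = covered_cells(locs[i], radii[i], d)
    taken = set().union(*(covered_cells(locs[j], radii[j], d) for j in neighbors[i]))
    return (len(own - taken) - k * math.pi * radii[i] ** 2) / (math.pi * r_m ** 2)
```

A player is rewarded only for the area it alone covers within its neighborhood, so a fully overlapped sensor earns a negative utility and prefers to switch off.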
Theorem 1.
A game G = (V, X, F) with individual utility functions f_i in (6) is a potential game, and the corresponding potential function is expressed as
φ(x) = f(x)/Δ,
where Δ = S_max(r_m).
Proof. 
By (2), we have
f(x_i, x_{−i}) = |Θ_i(x_i) ∖ ⋃_{j∈Γ_i} Θ_j(x_j)| + |⋃_{j∈V∖{i}} Θ_j(x_j)| − C_i(x_i) − Σ_{j∈V∖{i}} C_j(x_j).
Further, for x_i, x_i′ ∈ X_i, we have
φ(x_i′, x_{−i}) − φ(x_i, x_{−i}) = ( |Θ_i(x_i′) ∖ ⋃_{j∈Γ_i} Θ_j(x_j)| − C_i(x_i′) ) / Δ − ( |Θ_i(x_i) ∖ ⋃_{j∈Γ_i} Θ_j(x_j)| − C_i(x_i) ) / Δ = f_i(x_i′, x_{−i}) − f_i(x_i, x_{−i}).
Thus, the game G = (V, X, F) with individual utility functions f_i in (6) is a potential game, and the corresponding potential function is (7). This completes the proof of Theorem 1. □
Remark 1.
The sensor coverage problem is an important application problem in sensor systems; (2) transforms it into a combinatorial optimization problem whose optimal solution is equivalent to an optimal sensor area coverage solution. In addition, (2) serves as the bridge that casts this problem into a potential game, which can then be solved within a distributed optimization framework. Note that for any optimization problem that can be modeled as a potential game, there must exist at least one Nash equilibrium.
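Theorem 1 can be sanity-checked numerically on the grid-discretized instance of Section 4: a unilateral strategy change should shift the normalized objective φ = f/Δ by exactly the change in the deviating player's utility. A minimal sketch, assuming Δ = S_max(r_m) and illustrative function names:

```python
import math

def covered_cells(loc, r, d):
    # grid cells within distance r of loc; an off sensor (r = 0) covers nothing
    if r <= 0:
        return set()
    x0, y0 = loc
    return {(x, y) for x in range(d + 1) for y in range(d + 1)
            if (x - x0) ** 2 + (y - y0) ** 2 <= r ** 2}

def potential(radii, locs, d, k=0.2, r_m=5):
    """phi(x) = f(x)/S_max(r_m): union coverage minus summed costs, normalized."""
    union = set().union(*(covered_cells(l, r, d) for l, r in zip(locs, radii)))
    cost = sum(k * math.pi * r ** 2 for r in radii)
    return (len(union) - cost) / (math.pi * r_m ** 2)

def utility(i, radii, locs, d, k=0.2, r_m=5, c=10):
    """Wonderful life utility with neighbors taken within distance c = 2 r_m."""
    nbrs = [j for j in range(len(locs)) if j != i and math.dist(locs[i], locs[j]) <= c]
    own = covered_cells(locs[i], radii[i], d)
    taken = set().union(*(covered_cells(locs[j], radii[j], d) for j in nbrs))
    return (len(own - taken) - k * math.pi * radii[i] ** 2) / (math.pi * r_m ** 2)
```

The identity holds exactly because any sensor whose disk can overlap player i's lies within the communication radius c = 2r_m, so the neighborhood term in f_i accounts for every overlap that φ sees.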

3. Distributed Algorithm Design and Analysis

3.1. Algorithm Design

For an optimization problem that can be modeled as a potential game, it is crucial to design a suitable distributed algorithm, which requires a reasonable balance between the quality of the solution the algorithm converges to and its runtime. To this end, we propose a dynamic fictitious play (DFP) distributed algorithm, in which each player can dynamically adjust its own strategy in parallel based on local information. The procedure of the proposed DFP distributed algorithm is as follows.
Step 1: At each time step t, each player randomly either keeps its current strategy for the next time step or chooses to update its strategy.
Step 2: Each player i involved in updating calculates its utility f_i(x_i^t, x_{−i}^t) and best response BR_i^t, then records the strategies x_i^t and BR_i^t in a set Λ_i^t.
Step 3: Each player i involved in updating chooses a strategy x̂_i^t from the set Λ_i^t for the next time step based on the dynamic selection rule, i.e., x_i^{t+1} = x̂_i^t.
In Step 2, BR_i^t is expressed as
BR_i^t = argmax_{x_i ∈ X_i} f_i(x_i, x_{−i}^t).
In Step 3, the dynamic selection rule is
x̂_i^t = { BR_i^t, w.p. δ_i^t; x_i^t, w.p. 1 − δ_i^t },
where δ_i^t is expressed as
δ_i^t = e^{f_i(BR_i^t, x_{−i}^t)} / Σ_{x_i ∈ Λ_i^t} e^{f_i(x_i, x_{−i}^t)}.
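The selection step can be sketched as follows (illustrative names; the memory set Λ_i^t is passed as a list of the utilities of the strategies it records):

```python
import math
import random

def dfp_select(memory_utilities, u_best, x_current, x_best, rng=random):
    """One application of the dynamic selection rule: play the best response
    with probability delta_i^t, a softmax weight of the best response's utility
    over the utilities of all strategies recorded in the memory set;
    otherwise keep the current strategy."""
    delta = math.exp(u_best) / sum(math.exp(u) for u in memory_utilities)
    chosen = x_best if rng.random() < delta else x_current
    return chosen, delta
```

When the recorded strategies are nearly as good as the best response, delta shrinks and the player keeps exploring; when the best response clearly dominates the memory, delta approaches 1.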
Remark 2.
(1) There exist multiple distributed algorithms for solving potential game optimization problems, such as the log-linear learning algorithm [28] and the binary log-linear learning algorithm [29]. These algorithms have one point in common: at each time step, only one player has the opportunity to update its strategy, while the remaining players keep theirs unchanged. This sequential update mechanism can cause those algorithms to need a long time to obtain a satisfactory solution in large-scale systems.
(2) Some algorithms do allow every player to update its strategy at every time step, such as the fictitious play algorithm [30]. In that algorithm, δ_i^t is set to a fixed value, which can cause it to easily fall into an inefficient local optimum. In contrast, in our proposed DFP algorithm, δ_i^t is not limited to a fixed value but changes dynamically based on local information.
(3) Our proposed DFP distributed algorithm is simple and easy to understand, and each player can independently and synchronously update its strategy based on local information at each time step. The algorithm is suitable for multiple application fields, especially hostile environments in which multiple drones collaborate in combat: the sensors on each drone autonomously monitor the target, and sensors consume energy during operation. Therefore, to maximize efficiency and quickly execute monitoring tasks, we want each sensor to make independent decisions, for which the DFP algorithm is suitable.
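Putting the three steps together, a minimal end-to-end sketch of the DFP loop on the Section 4 grid setting (assumptions: an update probability of 1/2 in Step 1, a memory set holding only the current strategy and the best response, and illustrative function names):

```python
import math
import random

def covered_cells(loc, r, d):
    # grid cells within distance r of loc; an off sensor (r = 0) covers nothing
    if r <= 0:
        return set()
    x0, y0 = loc
    return {(x, y) for x in range(d + 1) for y in range(d + 1)
            if (x - x0) ** 2 + (y - y0) ** 2 <= r ** 2}

def utility(i, radii, locs, nbrs, d, k, r_m):
    own = covered_cells(locs[i], radii[i], d)
    taken = set().union(*(covered_cells(locs[j], radii[j], d) for j in nbrs[i]))
    return (len(own - taken) - k * math.pi * radii[i] ** 2) / (math.pi * r_m ** 2)

def dfp(locs, start, X, d, k=0.2, c=10, r_m=5, steps=200, seed=0):
    """Run the DFP loop: every player, in parallel, either keeps its strategy
    (Step 1) or evaluates its best response and switches to it with the
    dynamic probability delta_i^t (Steps 2-3)."""
    rng = random.Random(seed)
    n = len(locs)
    nbrs = [[j for j in range(n) if j != i and math.dist(locs[i], locs[j]) <= c]
            for i in range(n)]
    x = list(start)
    for _ in range(steps):
        new_x = list(x)
        for i in range(n):
            if rng.random() < 0.5:         # Step 1: player skips updating
                continue
            us = {s: utility(i, x[:i] + [s] + x[i + 1:], locs, nbrs, d, k, r_m)
                  for s in X}               # Step 2: evaluate candidate strategies
            br = max(us, key=us.get)
            memory = [us[x[i]], us[br]]     # memory set Lambda_i^t
            delta = math.exp(us[br]) / sum(math.exp(u) for u in memory)
            if rng.random() < delta:        # Step 3: dynamic selection rule
                new_x[i] = br
        x = new_x
    return x
```

Because each player reads only its neighbors' strategies, the inner loop over players is trivially parallelizable across sensors.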

3.2. Convergence Analysis

For an algorithm, convergence is crucial: without it, the algorithm is difficult to apply in practice. Thus, in this subsection, we further analyze the convergence of the DFP algorithm. First, we present the definition of Nash equilibrium.
Definition 3
([32]). A Nash equilibrium x* = (x_1*, …, x_n*) is a strategy profile in which no player can unilaterally change its strategy to increase its utility, i.e.,
f_i(x_i*, x_{−i}*) ≥ f_i(x_i, x_{−i}*), ∀ x_i ∈ X_i.
Theorem 2.
Consider a potential game G = (V, X, F) with individual utility functions f_i in (6). Our proposed DFP distributed algorithm converges to a Nash equilibrium with probability 1.
Proof. 
Since the number of sensors is finite and each player i has a finite strategy set, i.e., |X_i| < +∞, the number of Nash equilibria is finite; they form a set X^ne = {x^{ne,1}, …, x^{ne,g}}, where g < |X| is the number of Nash equilibria. We prove the convergence of the proposed DFP algorithm in two parts. First, we prove that the algorithm ensures that the strategy profile reaches a Nash equilibrium; then we prove that once a Nash equilibrium is reached, the profile locks onto it.
Part I: Suppose that at time step t the strategy profile x^t = (x_1^t, …, x_n^t) is not a Nash equilibrium. Then there must exist at least one player i with x_i^t ≠ BR_i^t. According to Steps 2 and 3 of the proposed DFP algorithm, there is a positive probability that player i updates its strategy x_i^t to BR_i^t while all of its neighbors keep their strategies, x_j^{t+1} = x_j^t, j ∈ Γ_i, i.e.,
p(x_i^t → BR_i^t, x_j^t → x_j^t) = δ_i^t ∏_{j∈Γ_i} (1 − δ_j^t) > 0.
Similarly, there is a positive probability that the strategy profile x^t transforms into the strategy profile x^{t+1}, i.e.,
p(x^t → x^{t+1}) = ∏_{i∈C^t} δ_i^t ∏_{j∈V∖C^t} (1 − δ_j^t) > 0,
where C^t = { i ∈ V : x_i^t ≠ BR_i^t } is the set of players involved in updating strategies, while their neighbors keep their strategies.
Further, we have
φ(x^{t+1}) − φ(x^t) = Σ_{i∈C^t} [ f_i(x_i^{t+1}, x_{−i}^t) − f_i(x_i^t, x_{−i}^t) ] > 0,
where x_{−i}^{t+1} = x_{−i}^t.
Thus, the value of the potential function φ strictly increases with positive probability over time steps t. Since the numbers of players and strategies are finite, the potential function takes only finitely many values. Thus, there must exist a time step t_ne such that φ(x^t) = φ(x^{t_ne}) for all t ≥ t_ne, where x^{t_ne} ∈ X^ne. This means that the proposed DFP algorithm ensures that the strategy profile reaches a Nash equilibrium with positive probability.
If an event E occurs with positive probability at every time step t, i.e., p_t(E) > 0, then E eventually occurs, since p(E) = 1 − ∏_{z=1}^{+∞} (1 − p_z(E)) = 1. This means that the proposed DFP algorithm ensures that the strategy profile reaches a Nash equilibrium in infinite time.
Part II: At time step t, if the strategy profile x^t = (x_1^t, …, x_n^t) is a Nash equilibrium, then x_i^t = BR_i^t and x̂_i^t = x_i^t for all i ∈ V. Thus, x_i^{t+1} = x_i^t, and similarly x_i^{t+τ} = x_i^t for all τ ≥ 0. This means that once the strategy profile reaches a Nash equilibrium, it locks onto it.
Based on the analysis of Parts I and II, our proposed DFP distributed algorithm converges to a Nash equilibrium with probability 1. This completes the proof of Theorem 2. □
Remark 3.
(1) Theorem 2 shows that our proposed DFP distributed algorithm guarantees that a Nash equilibrium is obtained. A Nash equilibrium may also be reached in other ways; for example, a non-equilibrium profile may transform into a Nash equilibrium within a single time step.
(2) Common distributed game algorithms for sensor coverage problems, such as the log-linear learning algorithm [28] and the binary log-linear learning algorithm [29], can usually obtain a feasible solution but cannot converge theoretically. For the FP algorithm, δ_i^t is a fixed value, so its decision-making does not adapt to the strategies recorded during the run.

4. Numerical Simulation

To evaluate the performance of our proposed DFP distributed algorithm, we compare it numerically with representative existing algorithms. Consider the following scenario: in a wireless sensor network composed of sensors carried by multiple unmanned aerial vehicles, the main requirement is for some sensors to monitor (i.e., sense) the target area and collect effective information. Sensors consume energy when working, and the larger the monitored area, the more energy is consumed. Therefore, deciding whether each sensor works or turns off so as to maximize the global objective value is exactly the sensor coverage problem of this paper. For fairness, each algorithm runs independently 1000 times, and the average value is calculated under each setting. All numerical simulations are realized in MATLAB R2023a on the same computer with a 3.20 GHz CPU and 16.0 GB RAM.

4.1. Simulation Setup

In this subsection, the area Θ is discretized as a two-dimensional grid represented by the Cartesian product {0, 1, …, d} × {0, 1, …, d}. The area length d is set from 20 to 200, and the number n of agents is set to 20, 50, 100, and 200, respectively. The strategy set of each player is set to {0, 5}, i.e., X_i = {0, 5}, i ∈ V, and the communication radius c = 2r_m = 10. In other words, if sensor i does not work, its sensing radius x_i and communication radius are both 0; if sensor i is working, its sensing radius x_i is 5 and its communication radius is 10. A visual diagram is shown in Figure 1, where n is 15 and d is 20. In Figure 1, each solid circle represents a sensor: the black dots indicate sensors in the turned-off state (strategy 0), and the orange-red solid dots represent sensors in the working state (strategy 5). We can clearly see that, within the given area, active (working) sensors near the center cover a complete circular area, while active sensors near the periphery do not. Because the coverage areas of active sensors may overlap, it can be seen from (6) that the benefit of each active sensor is its own unique coverage area, while its cost depends only on its coverage radius. Therefore, our proposed DFP algorithm lets each active sensor access the strategies of neighboring sensors for decision-making, with the expectation of maximizing the global objective. The penalty value k is set to 0.2. For the 1000 independent runs of each algorithm, we calculate the corresponding average value f̄, best value f_best, standard deviation σ, and average runtime T̄.
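The reported per-setting statistics can be computed as in the following sketch (illustrative names; uniform random deployment over the area is an assumption, as the original does not state the deployment distribution precisely):

```python
import math
import random
import statistics

def random_deployment(n, d, rng=random):
    """Drop n sensors uniformly at random over the d x d area."""
    return [(rng.uniform(0, d), rng.uniform(0, d)) for _ in range(n)]

def batch_statistics(values):
    """The per-setting statistics reported in Table 1: average value, best
    value, and standard deviation over a batch of independent runs."""
    return {"mean": statistics.fmean(values),
            "best": max(values),
            "std": statistics.pstdev(values)}
```

Each of the 1000 runs would contribute one final objective value f(x) to the batch, and the three statistics summarize the batch.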

4.2. Compared with the Existing Representative Algorithms

The typical representative algorithms that participate in the numerical comparison include the log-linear learning (LLL) algorithm [28], the fictitious play (FP) algorithm [30], the best response (BR) algorithm [33], and the genetic algorithm (GA) [34]. Brief descriptions of these algorithms are as follows:
LLL algorithm: This is a typical distributed algorithm for problems that can be modeled as potential games. The main idea is that, at each time step t, a randomly selected player i chooses a strategy x_i from its strategy set X_i with probability p_{x_i} for the next time step, where
p_{x_i}(α) = e^{α f_i(x_i, x_{−i}^t)} / Σ_{x̃_i ∈ X_i} e^{α f_i(x̃_i, x_{−i}^t)},
with α ≥ 0. When α = 0, player i randomly selects a strategy from its strategy set X_i for the next time step; as α tends to positive infinity, player i selects the best response BR_i^t. Note that, regardless of the value of α, the LLL algorithm cannot converge. Therefore, for the LLL algorithm, we sample at the 100n-th time step, and α is set to 1000.
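The LLL selection probabilities form a softmax over the candidate strategies' utilities. A small sketch (illustrative name; shifting by the maximum is a standard numerical-stability choice, not part of the original description, and matters for the large α = 1000 used here):

```python
import math

def lll_probabilities(utilities, alpha):
    """Strategy-selection probabilities of log-linear learning: a softmax over
    the candidate strategies' utilities, sharpened by alpha >= 0. Subtracting
    the maximum exponent keeps exp() from overflowing for large alpha."""
    m = max(alpha * u for u in utilities)
    weights = [math.exp(alpha * u - m) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]
```

At α = 0 every strategy is equally likely; at α = 1000 essentially all probability mass sits on the best response, matching the two limiting behaviors described above.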
FP algorithm: This is a commonly used distributed algorithm for problems that can be modeled as potential games, characterized by simplicity and fast convergence. The main idea is that, at each time step t, each player i keeps its strategy x_i^t with probability ω ∈ (0, 1) or selects the best response BR_i^t with probability 1 − ω for the next time step. Since a small ω induces a better solution, ω is set to 0.1. This algorithm converges to a Nash equilibrium.
BR algorithm: This is also a distributed algorithm for problems that can be modeled as potential games. The main idea is that, at each time step t, a randomly selected player i chooses its best response BR_i^t for the next time step. This algorithm converges to a Nash equilibrium.
GA: This is a classic metaheuristic algorithm, a centralized optimization algorithm that simulates natural selection and genetic mechanisms and belongs to the branch of evolutionary computation. This algorithm has been applied to various fields, with main operations including selection, crossover, and mutation. In this section, the fitness function adopts (2).
The numerical simulation results are shown in Table 1. From this Table, we can see the following:
(1) Regarding solution quality, i.e., the average value f̄ and standard deviation σ, our proposed DFP algorithm performs best, followed by the GA, the LLL algorithm, the BR algorithm, and the FP algorithm. Specifically, the reason the proposed DFP algorithm outperforms the FP algorithm here is that it uses a dynamic probability in strategy selection, whereas in the FP algorithm each player tends to choose the best response.
(2) Regarding the best value f_best, under all experimental settings (i.e., all n and d), our proposed DFP algorithm achieves the best values. In some cases, other algorithms also achieve the best values; for example, in small-scale systems (n = 20, 50), the GA does as well. It is worth noting that, through extensive simulation testing, we found that for every setting of n and d, our proposed algorithm reaches its best value within each batch of 1000 runs.
(3) Regarding the average runtime T̄, the FP algorithm performs best, followed by the GA, the DFP algorithm, the BR algorithm, and the LLL algorithm. The reason the LLL algorithm performs poorly in average runtime is that its update mechanism is essentially serial, so its average running time grows with the system size (i.e., the number of sensors), whereas the other algorithms essentially use parallel update mechanisms. Note that the average runtime of our proposed DFP algorithm is very close to that of the FP algorithm.
In sum, our proposed DFP algorithm can obtain a satisfactory solution within a suitable time.

5. Conclusions and Future Research Directions

This paper has investigated the multi-agent area coverage problem and addressed it within a distributed framework. We treated each sensor as a player and solved the problem with a game-theoretic approach: we established a potential game for this problem and designed a novel dynamic fictitious play distributed algorithm, proving theoretically that it guarantees a Nash equilibrium. Simulation results have shown that, compared with typical optimization algorithms, our proposed algorithm obtains the best results in terms of average coverage, best value, and standard deviation within a suitable time.
For future research, we will consider the following three directions: (i) developing a novel distributed optimization method to achieve the optimal solution of the sensor area coverage problem; (ii) studying the sensor area coverage problem in more complex environments, e.g., where sensors can move, exit, or join; and (iii) exploring the application of the developed optimization approaches to real-world scenarios.

Author Contributions

Software, X.X.; Validation, J.C.; Formal analysis, R.D.; Investigation, J.H.; Resources, J.H. and S.X.; Data curation, X.X. and S.X.; Writing—original draft, J.H.; Writing—review & editing, J.C. and R.D.; Project administration, R.D.; Funding acquisition, S.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the high-level talent fund No. 22-TDRCJH-02-013.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Notation: Definition
Θ_i(r_i): the area covered by sensor i under coverage radius r_i
C_i(r_i): the cost function of sensor i under coverage radius r_i
V: the set of players (sensors)
n: the number of players
F: the set of utility functions
X: the strategy space
X_i: the strategy set of player i
x: the strategy profile (solution)
x*: the Nash equilibrium
X^ne: the set of all Nash equilibria
x_i: the strategy of player i
x_{−i}: the strategy profile of all players except player i
BR_i^t: the best response of player i at time step t
r_i: the coverage radius of player i
c: the communication radius of all sensors
f_i: the utility of player i
f_best: the best value in a set of data
φ: the potential function
Γ_i: the neighbor set of player i

References

  1. Zhou, E.; Liu, Z.; Zhou, W.; Lan, P.; Dong, Z. Position and orientation planning of the uav with rectangle coverage area. IEEE Trans. Veh. Technol. 2025, 74, 1719–1724. [Google Scholar] [CrossRef]
  2. Li, X.; Lu, X.; Chen, W.; Ge, D.; Zhu, J. Research on uavs reconnaissance task allocation method based on communication preservation. IEEE Trans. Consum. Electron. 2024, 70, 684–695. [Google Scholar] [CrossRef]
  3. Shaofei, M.; Jiansheng, S.; Qi, Y.; Wei, X. Analysis of detection capabilities of leo reconnaissance satellite constellation based on coverage performance. J. Syst. Eng. Electron. 2018, 29, 98–104. [Google Scholar] [CrossRef]
  4. Chen, J.; Ling, F.; Zhang, Y.; You, T.; Liu, Y.; Du, X. Coverage path planning of heterogeneous unmanned aerial vehicles based on ant colony system. Swarm Evol. Comput. 2022, 69, 101005. [Google Scholar] [CrossRef]
  5. Hu, B.-B.; Zhang, H.-T.; Liu, B.; Ding, J.; Xu, Y.; Luo, C.; Cao, H. Coordinated navigation control of cross-domain unmanned systems via guiding vector fields. IEEE Trans. Control Syst. Technol. 2024, 32, 550–563. [Google Scholar] [CrossRef]
  6. Zhu, C.; Fan, X.; Deng, X.; Liu, S.; Yin, D.; Gao, H.; Yang, L.T. Area coverage reliability evaluation for collaborative intelligence and meta-computing of decentralized industrial internet of things. IEEE Internet Things J. 2025, 12, 13734–13745. [Google Scholar] [CrossRef]
  7. Yang, F.; Shu, L.; Duan, N.; Yang, X.; Hancke, G.P. Complete area c-probability coverage in solar insecticidal lamps internet of things. IEEE Internet Things J. 2023, 10, 22764–22774. [Google Scholar] [CrossRef]
  8. Gui, J.; Cai, F. Coverage probability and throughput optimization in integrated mmwave and sub-6 ghz multi-uav-assisted disaster relief networks. IEEE Trans. Mob. Comput. 2024, 23, 10918–10937. [Google Scholar] [CrossRef]
  9. Shi, K.; Peng, X.; Lu, H.; Zhu, Y.; Niu, Z. Application of social sensors in natural disasters emergency management: A review. IEEE Trans. Comput. Soc. Syst. 2023, 10, 3143–3158. [Google Scholar] [CrossRef]
  10. Cao, Y.; Feng, W.; Quan, Y.; Bao, W.; Dauphin, G.; Song, Y.; Ren, A.; Xing, M. A two-step ensemble-based genetic algorithm for land cover classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 409–418. [Google Scholar] [CrossRef]
  11. Hanh, N.T.; Binh, H.T.T.; Hoai, N.X.; Palaniswami, M.S. An efficient genetic algorithm for maximizing area coverage in wireless sensor networks. Inf. Sci. 2019, 488, 58–75. [Google Scholar] [CrossRef]
  12. Wang, S.; Zhou, A. Leader prediction for multiobjective particle swarm optimization. IEEE Trans. Evol. Comput. 2024, 29, 1356–1370. [Google Scholar] [CrossRef]
  13. Bonnah, E.; Ju, S.; Cai, W. Coverage maximization in wireless sensor networks using minimal exposure path and particle swarm optimization. Sens. Imaging 2020, 21, 4. [Google Scholar] [CrossRef]
  14. He, Y.; Huang, J.; Li, W.; Zhang, L.; Wong, S.-W.; Chen, Z.N. Hybrid method of artificial neural network and simulated annealing algorithm for optimizing wideband patch antennas. IEEE Trans. Antennas Propag. 2024, 72, 944–949. [Google Scholar] [CrossRef]
  15. Wu, X.; Yang, Y.; Xie, Y.; Ma, Q.; Zhang, Z. Multiregion mission planning by satellite swarm using simulated annealing and neighborhood search. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 1416–1439. [Google Scholar] [CrossRef]
  16. Dai, L.-L.; Pan, Q.-K.; Miao, Z.-H.; Suganthan, P.N.; Gao, K.-Z. Multi-objective multi-picking-robot task allocation: Mathematical model and discrete artificial bee colony algorithm. IEEE Trans. OnIntelligent Transp. Syst. 2024, 25, 6061–6073. [Google Scholar] [CrossRef]
  17. Xie, X.; Yan, Z.; Zhang, Z.; Qin, Y.; Jin, H.; Xu, M. Hybrid genetic ant colony optimization algorithm for full-coverage path planning of gardening pruning robots. Intell. Serv. Robot. 2024, 17, 661–683. [Google Scholar] [CrossRef]
  18. Aga, R.S.; Duncan, L.; Davidson, L.; Ouchen, F.; Aga, R.; Heckman, E.M.; Bartsch, C.M. Design and fabrication of a metal resistance strain sensor with enhanced sensitivity. IEEE Sens. Lett. 2024, 8, 2504004. [Google Scholar] [CrossRef]
  19. Zhou, X.; Rao, W.; Liu, Y.; Sun, S. A decentralized optimization algorithm for multi-agent job shop scheduling with private information. Mathematics 2024, 12, 971. [Google Scholar] [CrossRef]
  20. Gao, Y.; Yang, S.; Li, F.; Trajanovski, S.; Zhou, P.; Hui, P.; Fu, X. Video content placement at the network edge: Centralized and distributed algorithms. IEEE Trans. Mob. Comput. 2023, 22, 6843–6859. [Google Scholar] [CrossRef]
  21. Luo, C.; Xing, W.; Cai, S.; Hu, C. NuSC: An effective local search algorithm for solving the set covering problem. IEEE Trans. Cybern. 2024, 54, 1403–1416. [Google Scholar] [CrossRef]
  22. Seyedkolaei, A.A.; Nasseri, S.H. Facilities location in the supply chain network using an iterated local search algorithm. Fuzzy Inf. Eng. 2023, 15, 14–25. [Google Scholar] [CrossRef]
  23. Zeng, L.; Chiang, H.-D.; Liang, D.; Xia, M.; Dong, N. Trust-tech source-point method for systematically computing multiple local optimal solutions: Theory and method. IEEE Trans. Cybern. 2022, 52, 11686–11697. [Google Scholar] [CrossRef]
  24. Guo, C.; Song, Y. Multi-subject decision-making analysis in the public opinion of emergencies: From an evolutionary game perspective. Mathematics 2025, 13, 1547. [Google Scholar] [CrossRef]
  25. Liu, S.; Li, L.; Zhang, L.; Shen, W. Game theory based dynamic event-driven service scheduling in cloud manufacturing. IEEE Trans. Autom. Sci. Eng. 2024, 21, 618–629. [Google Scholar] [CrossRef]
  26. Zhang, Y.; Xiang, Z. Nash equilibrium solutions for switched nonlinear systems: A fuzzy-based dynamic game method. IEEE Trans. Fuzzy Syst. 2025, 33, 2006–2015. [Google Scholar] [CrossRef]
  27. Varga, B.; Inga, J.; Hohmann, S. Limited information shared control: A potential game approach. IEEE Trans. Hum.-Mach. Syst. 2023, 53, 282–292. [Google Scholar] [CrossRef]
  28. Li, Z.; Liu, C.; Tan, S.; Liu, Y. A timestamp-based log-linear algorithm for solving locally-informed multi-agent finite games. Expert Syst. Appl. 2024, 249, 123677. [Google Scholar] [CrossRef]
  29. Yan, K.; Xiang, L.; Yang, K. Cooperative target search algorithm for UAV swarms with limited communication and energy capacity. IEEE Commun. Lett. 2024, 28, 1102–1106. [Google Scholar] [CrossRef]
  30. Dumitrescu, R.; Leutscher, M.; Tankov, P. Linear programming fictitious play algorithm for mean field games with optimal stopping and absorption. ESAIM: Math. Model. Numer. Anal. 2023, 57, 953–990. [Google Scholar] [CrossRef]
  31. Monderer, D.; Shapley, L.S. Potential games. Games Econ. Behav. 1996, 14, 124–143. [Google Scholar] [CrossRef]
  32. Nash, J.F., Jr. Equilibrium points in n-person games. Proc. Natl. Acad. Sci. USA 1950, 36, 48–49. [Google Scholar] [CrossRef]
  33. Thiran, G.; Stupia, I.; Vandendorpe, L. Best response dynamics convergence for generalized nash equilibrium problems: An opportunity for autonomous multiple access design in federated learning. IEEE Internet Things J. 2024, 11, 18463–18482. [Google Scholar] [CrossRef]
  34. Kim, Y.; Khir, R.; Lee, S. Enhancing genetic algorithm with explainable artificial intelligence for last-mile routing. IEEE Trans. Evol. Comput. 2025, 1–17. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of sensor coverage. The coordinates of the fifteen sensors are (13.43,24.27), (16.63,53.01), (14.91,6.64), (43.73,50.18), (31.02,47.42), (50.31,1.78), (13.98,2.13), (38.30,9.81), (59.58,46.66), (29.84,28.08), (31.09,35.00), (59.85,23.66), (22.81, 45.58), (29.25,46.00), and (24.75,17.84), respectively.
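For readers who want to experiment with the geometry of Figure 1, coverage of a region by a set of sensing disks can be estimated by Monte Carlo sampling. The sketch below uses the fifteen coordinates listed in the caption; the region bounds, the sensing radius, and the binary disk model are illustrative assumptions not stated in this excerpt, so they may differ from the paper's actual coverage model.

```python
import random

# Sensor positions taken from Figure 1. The 60 x 55 region bounds and
# the sensing radius below are assumptions for illustration only.
SENSORS = [
    (13.43, 24.27), (16.63, 53.01), (14.91, 6.64), (43.73, 50.18),
    (31.02, 47.42), (50.31, 1.78), (13.98, 2.13), (38.30, 9.81),
    (59.58, 46.66), (29.84, 28.08), (31.09, 35.00), (59.85, 23.66),
    (22.81, 45.58), (29.25, 46.00), (24.75, 17.84),
]

def coverage_ratio(sensors, radius, width=60.0, height=55.0,
                   samples=20000, seed=0):
    """Monte Carlo estimate of the fraction of the rectangular region
    covered by the union of sensing disks (binary disk model)."""
    rng = random.Random(seed)
    r2 = radius * radius
    hit = 0
    for _ in range(samples):
        x, y = rng.uniform(0.0, width), rng.uniform(0.0, height)
        # A point is covered if it lies within any sensor's disk.
        if any((x - sx) ** 2 + (y - sy) ** 2 <= r2 for sx, sy in sensors):
            hit += 1
    return hit / samples

print(f"estimated coverage: {coverage_ratio(SENSORS, radius=8.0):.2%}")
```

With a fixed seed the same sample points are reused, so the estimate increases monotonically with the assumed sensing radius, which makes the routine convenient for quick what-if comparisons.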
Table 1. Comparison results among different algorithms: Average value f̄ / Best value f_best / Standard deviation σ / Average runtime T̄ (Unit: Seconds).
Each cell reports f̄ / f_best / σ / T̄ for the corresponding algorithm.

| n/l | DFP | GA | LLL | BR | FP |
|---|---|---|---|---|---|
| 20/20 | 279.7 / 282.2 / 0.80 / 0.006 | 278.8 / 282.2 / 0.96 / 0.591 | 278.1 / 281.8 / 1.35 / 0.469 | 276.9 / 281.4 / 0.88 / 0.010 | 276.8 / 281.4 / 0.85 / 0.004 |
| 20/30 | 555.2 / 558.0 / 1.24 / 0.004 | 553.9 / 558.0 / 1.51 / 0.590 | 553.0 / 557.4 / 1.88 / 0.469 | 552.7 / 557.0 / 1.34 / 0.009 | 551.8 / 557.0 / 1.30 / 0.003 |
| 20/40 | 785.2 / 789.6 / 1.88 / 0.003 | 783.7 / 789.6 / 2.33 / 0.591 | 782.6 / 788.6 / 2.70 / 0.468 | 781.6 / 788.0 / 1.99 / 0.009 | 781.5 / 788.0 / 1.95 / 0.002 |
| 50/40 | 1018.1 / 1023.0 / 1.62 / 0.006 | 1016.9 / 1023.0 / 1.90 / 1.476 | 1016.0 / 1023.0 / 2.31 / 1.171 | 1014.5 / 1022.4 / 1.72 / 0.024 | 1014.4 / 1022.4 / 1.68 / 0.004 |
| 50/60 | 1824.2 / 1829.2 / 1.88 / 0.005 | 1822.4 / 1829.2 / 2.27 / 1.477 | 1821.1 / 1828.4 / 2.73 / 1.171 | 1819.6 / 1827.6 / 1.98 / 0.024 | 1819.5 / 1827.6 / 1.93 / 0.003 |
| 50/80 | 2271.5 / 2276.6 / 2.21 / 0.004 | 2269.7 / 2276.6 / 2.75 / 1.475 | 2268.2 / 2276.0 / 3.15 / 1.170 | 2267.0 / 2275.4 / 2.32 / 0.023 | 2266.8 / 2275.4 / 2.26 / 0.002 |
| 100/80 | 3220.2 / 3226.6 / 1.70 / 0.006 | 3218.8 / 3225.8 / 2.18 / 2.951 | 3217.8 / 3225.8 / 2.65 / 2.341 | 3216.1 / 3225.0 / 1.81 / 0.047 | 3215.9 / 3225.0 / 1.75 / 0.004 |
| 100/100 | 4008.8 / 4016.6 / 1.94 / 0.006 | 4006.0 / 4015.6 / 2.49 / 2.950 | 4005.4 / 4015.6 / 3.04 / 2.340 | 4003.0 / 4014.8 / 2.06 / 0.047 | 4002.9 / 4014.8 / 2.00 / 0.004 |
| 100/150 | 5070.2 / 5078.0 / 2.23 / 0.005 | 5069.9 / 5078.0 / 2.84 / 2.950 | 5066.1 / 5077.2 / 3.40 / 2.340 | 5063.9 / 5076.4 / 2.38 / 0.047 | 5063.8 / 5076.4 / 2.30 / 0.003 |
| 200/150 | 8433.7 / 8441.2 / 2.23 / 0.007 | 8430.4 / 8438.6 / 2.78 / 5.896 | 8428.0 / 8437.0 / 3.30 / 4.678 | 8425.3 / 8436.2 / 2.35 / 0.094 | 8425.2 / 8436.2 / 2.29 / 0.005 |
| 200/200 | 9850.3 / 9860.2 / 2.52 / 0.006 | 9848.3 / 9859.0 / 4.10 / 5.896 | 9846.8 / 9858.2 / 3.75 / 4.677 | 9844.1 / 9857.4 / 2.62 / 0.094 | 9843.9 / 9857.4 / 2.57 / 0.004 |
| 200/300 | 11,429.5 / 11,440.8 / 2.87 / 0.005 | 11,427.0 / 11,438.8 / 3.51 / 5.895 | 11,424.8 / 11,437.4 / 4.24 / 4.675 | 11,422.2 / 11,436.6 / 2.98 / 0.093 | 11,422.0 / 11,436.6 / 2.92 / 0.003 |
Bold values indicate the best value after comparison.

Share and Cite

MDPI and ACS Style

Huang, J.; Chen, J.; Dong, R.; Xiong, X.; Xu, S. Toward a Distributed Potential Game Optimization to Sensor Area Coverage Problem. Mathematics 2025, 13, 2709. https://doi.org/10.3390/math13172709


