Node Deployment Optimization for Wireless Sensor Networks Based on Virtual Force-Directed Particle Swarm Optimization Algorithm and Evidence Theory

Wireless sensor network deployment should be optimized to maximize network coverage. The D-S evidence theory is an effective means of information fusion that can handle not only uncertainty and inconsistency, but also ambiguity and instability. This work develops a node sensing probability model based on D-S evidence theory. When there are major evidence conflicts, a priority factor is introduced to reassign the sensing probability, in order to address the issue that the traditional D-S evidence theory aggregation rule does not conform to the actual scenario and produces erroneous results. For optimizing node deployment, a virtual force-directed particle swarm optimization approach is proposed, with the optimization goal of maximizing network coverage. The approach employs the virtual force algorithm, whose virtual forces are fine-tuned by the sensing probability. The sensing probability is fused by D-S evidence theory to drive particle swarm evolution and accelerate convergence. The simulation results show that the virtual force-directed particle swarm optimization approach improves network coverage while taking less time.


Introduction
Wireless sensor networks (WSNs) are commonly employed in harsh, high-risk, and distant unmanned environments. The sensor nodes in WSNs installed underwater (oceans, shallow seas, and lakes) are sometimes difficult to locate, and they often swing or drift under the influence of ocean currents, waves, weather, ships, fish, and other factors. Physical causes of displacement or failure also have an impact on underwater acoustic detection equipment. Furthermore, monitoring signals attenuate quickly in an underwater environment, and different types of noise interference often occur, resulting in quite significant energy loss of sensor nodes. As a result, while deploying wireless sensor network nodes, it is important to consider not only network coverage, but also the node displacement distance.
Many heuristic techniques, such as the genetic algorithm [1], the fish optimization algorithm [2], the ant colony algorithm [3], and the particle swarm optimization algorithm [4], are employed to optimize the placement of sensor nodes. These algorithms necessitate several repetitive computations. Wang et al. [5] suggested a virtual force-directed particle swarm algorithm (VF-PSO) deployment approach in which the moving distance is dictated by the connection between nodes, which is highly influenced by node density. Here, virtual force (VF) refers to the degree of mutual interference measured by the distance between nodes.
In this paper, a novel node deployment optimization strategy for WSNs is suggested. The virtual force-directed particle swarm (VF-PSO) algorithm and the D-S evidence theory are employed in the strategy. We provide the following contributions:

1.
A sensing probabilistic model is constructed. To aggregate the sensing probability in the circular area with radius R around the sensor node, the D-S evidence theory is applied. When there is a considerable evidence conflict, the typical D-S evidence theory aggregation rule does not conform to the real circumstances, and the result is not convincing. To address the concern, the priority factor is introduced to reassign the sensing probability.

2.
A virtual force-directed particle swarm optimization algorithm with sensing probabilities aggregated by D-S evidence (VF-PSO-DS for short) is proposed. The approach improves the virtual force-directed particle swarm optimization (VF-PSO) algorithm by considering sensing probability. The sensing probability is aggregated by D-S evidence, which guides the particle swarm evolution process and accelerates convergence.
The rest of this paper is organized as follows. Section 2 reviews related work. Section 3 presents and improves the D-S evidence theory. Section 4 presents the construction of the system model. Section 5 proposes the improved virtual force-directed particle swarm optimization algorithm (VF-PSO-DS). Section 6 presents the simulation and explains the results. The conclusion is reached in Section 7.

Related Work
Recently, deployment optimization in WSNs has been discussed by many researchers. Most of them consider this problem under different constraints and deployment goals. Dou et al. [6], for example, examined the optimization of sensing coverage and regional connectivity in both deterministic and random deployment. Hosseinzadeh et al. [7] offered a multi-agent distributed solution for WSN deployment, in which the optimization is considered a linear programming problem with time-invariant box constraints and potentially time-varying inequality constraints. Kang [8] discovered that the average number of clusters and the transmission range of control messages have a substantial influence on network lifetime. Liu et al. [9] presented an optimization approach for WSN node deployment based on an ant lion optimization algorithm to address difficulties in WSN node deployment such as uneven distribution of nodes and incomplete coverage. Other influential works include [7][8][9][10][11]. Table 1 lists the above-mentioned work by constraints.
The virtual force (VF) approach has been a prominent WSN node deployment strategy for the past 15 years. In 2007, Wang et al. [5] suggested a virtual force-directed PSO (VF-PSO) algorithm for WSN deployment in an early work in this area. Yu et al. [15] proposed a self-deployment algorithm for nodes with mobility based on virtual force (VF). Xiaoping et al. [16] investigated the characteristics of four VF models and then assessed these models with several indicators: coverage increment scale, iterative number, coverage efficiency, and the average movement distance of nodes. They proved that the VF model with three parameters was superior, as was the VF model that uses only the direction of the virtual force rather than its magnitude. To solve the problems of connectivity maintenance and node stacking in VF models, an extended VF-based approach was studied to achieve optimal deployment [17]. Yu et al. [18] proposed a sensor deployment strategy for mobile WSNs based on the van der Waals force. It is critical to improve sensing coverage and network connectivity to provide reliable communication [19]; therefore, a hybrid local virtual force algorithm (HLVFA) that incorporates the constraints of sensing coverage and network connectivity was proposed [20]. Theoretical analysis has shown that a hexagonal structure is the best two-dimensional network for providing the greatest coverage area with the fewest sensor nodes and the least amount of energy usage [21]. Yu et al. [22] reported a novel VF model based on virtual spring force (VFA-SF) and examined the associated efficiency in detail using statistical analysis. To place the self-consistent nodes in large-scale WSNs, a hybrid optimization algorithm based on two separate VF algorithms inspired by the interactions among physical sensor nodes was presented in [23]. Afterward, a method based on a 3D VF model driven by self-adaptive deployment (named 3DVFSD) was proposed in [24].
The application of D-S evidence theory in node deployment for WSNs has long been a hot topic. Zhao et al. [25] sought to reduce the influence of incomplete original or subjective parameters in evaluating the reliability of a distribution system. For WSNs, many factors, such as mutual interference of wireless links, battlefield applications, and nodes exposed to the environment without good physical protection, result in the sensor nodes being more vulnerable to attacks. To ensure network security, NBBTE (node behavioral strategies banding belief theory of the trust evaluation algorithm) has been developed, which integrates the approach of node behavioral strategies with evidence theory [26]. Miao et al. [27] proposed a trustworthiness evaluation method for sensor nodes that can perceive multi-dimensional data based on D-S evidence theory at the data level. The Zigbee network was used to monitor the environmental parameters of aquiculture water [28]. Because artificial surveys have low accuracy, a gas pipeline leakage diagnosis system based on BP neural networks and D-S theory was presented by introducing WSN and information fusion theory [29]. Zhao et al. [30] proposed a novel fleet deployment approach based on route risk evaluation to fully use navigation resources while reducing risks. Song et al. [31] investigated many evidence theory-based approaches for node deployment optimization in WSNs. To improve the security in WSNs, Sun et al. [32] proposed a trust ant colony routing algorithm by including a node trust evaluation model based on D-S evidence theory into the ant colony routing protocol. Song et al. [33] presented a novel node deployment scheme that was based on the evidence theory approach and catered to 3D USWNs. To eliminate dependence on network training, an improved D-S evidence theory was applied to the host assessment [34].

D-S Evidence Theory
The Dempster-Shafer evidence theory, often known as the D-S evidence theory, is a widely used technique in the field of multi-sensor information fusion. D-S evidence theory, which is an inexact derivation of probability theory based on Bayesian reasoning, can work without a priori information or conditional probabilities [35]. D-S evidence theory begins with the assumption of a complete and mutually exclusive element set (also called a recognition framework), denoted by Θ. In this paper, the recognition framework is Θ = {H_0, H_1}, where H_0 means "was not perceived by the sensor node" and H_1 means "was perceived by the sensor node".
Let m_1 and m_2 be basic probability assignment functions in the same recognition framework Θ, with focal elements A_i (i = 1, 2, · · · , k) and B_j (j = 1, 2, · · · , k). The aggregation formula for the two pieces of evidence is

$$m(A) = (m_1 \oplus m_2)(A) = \frac{1}{1-K} \sum_{A_i \cap B_j = A} m_1(A_i)\, m_2(B_j), \quad A \neq \emptyset,$$

where $K = \sum_{A_i \cap B_j = \emptyset} m_1(A_i)\, m_2(B_j)$ indicates the degree of evidence conflict, and $\oplus$ denotes the orthogonal-sum (combination) operation. The aggregation formula for N pieces of evidence is

$$m = m_1 \oplus m_2 \oplus \cdots \oplus m_N.$$

Then, a basic probability function of the joint action of multiple pieces of evidence is obtained.
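As a concrete sketch, Dempster's combination rule for two mass functions over Θ = {H_0, H_1} can be implemented as follows; the mass values and the dictionary encoding of focal elements are illustrative:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal elements
    to masses) with Dempster's rule; fails on total conflict (K = 1)."""
    combined = {}
    K = 0.0  # degree of evidence conflict
    for A, mA in m1.items():
        for B, mB in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mA * mB
            else:
                K += mA * mB  # mass falling on empty intersections
    if abs(1.0 - K) < 1e-12:
        raise ValueError("total conflict: K = 1, rule is undefined")
    return {A: v / (1.0 - K) for A, v in combined.items()}

# Recognition framework Theta = {H0, H1}
H0, H1 = frozenset({"H0"}), frozenset({"H1"})
Theta = H0 | H1
m1 = {H1: 0.6, H0: 0.3, Theta: 0.1}
m2 = {H1: 0.7, H0: 0.2, Theta: 0.1}
m = dempster_combine(m1, m2)  # remaining mass renormalized by 1 - K
```

For N pieces of evidence the rule is applied pairwise, e.g. `functools.reduce(dempster_combine, masses)`.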

The Problem with D-S Evidence Theory
A basic probability assignment function of n pieces of evidence is obtained by applying the aggregation rule repeatedly. It can be seen from Formula (4) that if the conflict coefficient satisfies K = 1, the first piece of evidence completely conflicts with the other n − 1 pieces of evidence. When calculating the basic probability assignment after fusion, the denominator 1 − K is 0, so the calculation cannot be carried out normally, and the general evidence theory cannot easily solve this problem.
Furthermore, according to Equation (5), when one piece of evidence conflicts with the other n − 1 pieces, which are identical to one another (i.e., all equal to m_2), the fusion result is determined by the single conflicting piece of evidence and has nothing to do with the other n − 1 pieces. In effect, the first piece of evidence vetoes the n − 1 pieces of evidence, which is wrong.
In summary, when one piece of evidence completely conflicts with the other pieces of evidence, a zero denominator appears in the aggregation rule, making the D-S aggregation rule unusable; even when the fused result can be expressed, it may be counterintuitive. Therefore, the inconsistent results produced by the D-S evidence theory when synthesizing conflicting pieces of evidence should be avoided.

Introduction of Priority Factors
The priority factor is usually a correction method used in system decision-making. In this paper, the priority factor method is introduced into evidence aggregation rules, and the evidence provided by information sources is divided according to the priority level, to further solve the problems of invalidation and conflict in the D-S evidence theory aggregation rules.

Definition of Priority Factors
If there are n objective factors G_1, G_2, . . . , G_n that need to be considered when making a decision, these factors may be too many and too inconsistent to be judged directly, and the priority factor can be used to solve this problem. First, we compare these objective factors pairwise, such as objective factors G_i and G_j: if G_i is considered more important, it is recorded as G_i > G_j; otherwise, it is recorded as G_i < G_j. Then, we count the times that G_i appears on the left side of the inequality symbol and arrange the target factors in descending order according to these counts, so that the priority order of the target factors is obtained. Finally, the system makes decisions according to this order.
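The counting procedure described above can be sketched as follows; the function name and the pair encoding of the comparisons are illustrative:

```python
def rank_factors(comparisons, n):
    """Order n objective factors G_0..G_{n-1}: count how many times each
    appears on the left (more important) side of a pairwise comparison,
    then sort in descending order of that count."""
    wins = [0] * n
    for i, _j in comparisons:  # the pair (i, j) records the judgment G_i > G_j
        wins[i] += 1
    return sorted(range(n), key=lambda i: -wins[i])

# G_2 > G_1, G_2 > G_0, G_0 > G_1  =>  priority order G_2, G_0, G_1
order = rank_factors([(2, 1), (2, 0), (0, 1)], 3)
```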
Considering that different sources of information provide different priority levels of evidence, the priority factor P_i (i = 1, 2, . . . , L) is defined to indicate the priority level of each piece of evidence. The priority factor is just a symbol. Whenever a piece of evidence is listed as the most important piece of evidence, it is assigned the priority factor P_1. If the priority level of a piece of evidence is in the k-th place, its priority factor is P_k. A priority factor can also be regarded as a special positive number and a weight coefficient with special meaning for the evidence. The relationship between the priority factors P_i (i = 1, 2, . . . , L) is P_1 > P_2 > . . . > P_L. If, for any positive number M, $P_i > M \cdot P_{i+1}$ holds, then P_i is said to be far greater than P_{i+1}, written $P_i \gg P_{i+1}$, which indicates that P_i and P_{i+1} are not quantities of the same order.

Determination of Priority Factors
To determine the priority factor of evidence, it is necessary to define the amount of self-conflict in the evidence. The amount of self-conflict of evidence i is the sum of the conflicts between evidence i and every other piece of evidence j (j = 1, 2, . . . , i − 1, i + 1, . . . , n), and its relationship is as follows:

$$\varphi_i = \sum_{j=1,\, j \neq i}^{n} k_{ij},$$

where n represents the number of pieces of evidence and $k_{ij}$ is the amount of conflict between evidence i and evidence j.
The steps to determine the priority factor are as follows: First, all pieces of evidence are collected, and the self-conflict amount ϕ_i (i = 1, 2, . . . , n) of each piece of evidence is calculated; then the self-conflict amounts ϕ_i are sorted from largest to smallest. The priority factor P_1 is assigned to the piece of evidence with the smallest self-conflict amount, and the priority factor P_L to the piece of evidence with the largest self-conflict amount. The divergence of some evidence usually leads to conflicts between pairs of evidence, so these conflicting pieces of evidence have relatively large self-conflict amounts ϕ_i. If two pieces of evidence have the same self-conflict amount ϕ_i, their priority factors take the same value. The calculated priority factors are arranged according to the size of the subscript, which is convenient for the subsequent synthesis.
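These steps can be sketched as follows; the mass values are illustrative, and the pairwise conflict k_ij is computed as the product mass falling on empty intersections:

```python
def pairwise_conflict(mi, mj):
    """k_ij: total product mass on focal-element pairs with empty intersection."""
    return sum(a * b for A, a in mi.items() for B, b in mj.items() if not (A & B))

def priority_order(evidence):
    """Indices sorted by self-conflict phi_i = sum_{j != i} k_ij, ascending:
    position 0 receives priority factor P_1 (least conflicting evidence)."""
    n = len(evidence)
    phi = [sum(pairwise_conflict(evidence[i], evidence[j])
               for j in range(n) if j != i) for i in range(n)]
    return sorted(range(n), key=lambda i: phi[i]), phi

H0, H1 = frozenset({"H0"}), frozenset({"H1"})
evidence = [{H1: 0.9, H0: 0.1},   # agrees with the majority
            {H1: 0.8, H0: 0.2},   # agrees with the majority
            {H1: 0.1, H0: 0.9}]   # conflicts with the other two
order, phi = priority_order(evidence)
```

The conflicting third piece receives the largest self-conflict amount and therefore the lowest priority.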

D-S Evidence Aggregation Rules with Priority Factors
According to the above definition, the priority factor method of evidence synthesis is introduced. Firstly, the information about each piece of evidence is collected, and then the self-conflict amount of each piece of evidence is calculated. According to the self-conflict amount, the priority factors P_i (i = 1, 2, . . . , L) are determined, where the value range of L is n ≥ L ≥ 1. The pieces of evidence are then fused by priority factor P_k in turn, with k = 1, 2, . . . , L; when only a single piece of evidence remains, no further fusion with the priority factors is needed. The number of priority levels is L.
When the number of priority levels L satisfies L ≥ 2, conflicts have occurred in the fusion of the pieces of evidence and need to be dealt with. The evidence with less conflict is fused directly using the aggregation rule of D-S evidence theory, while the aggregation method with a priority factor is used to deal with the evidence with the highest conflict, thus reducing the conflict between pieces of evidence.
This method reassigns the conflicting mass to different propositions, so the assignment relationship is

$$m(A) = \sum_{A_i \cap B_j = A} m_1(A_i)\, m_2(B_j) + K \cdot f(A), \quad (7)$$

In Equation (7), K is the conflicting basic probability mass and f(A) is the weight assigned to proposition A. By using a voting method, the basic probability assignment of highly conflicting evidence is weighted, so f(A) can be taken as follows:

$$f(A) = \frac{n_A}{n}, \quad (8)$$

where n_A is given by

$$n_A = \sum_{i=1}^{n} \mathbf{1}\!\left[m_i(A) \ge m_{th}\right]. \quad (9)$$

In Equations (8) and (9), all supported propositions are counted: if the given threshold m_th is less than or equal to the basic probability assignment m_i(A), then 1 is added to the value of n_A; the statistical total of the different propositions is denoted as n, and the initial value of the threshold m_th is assigned to 1/2.
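The voting step can be sketched as follows; note that taking n as the number of evidence pieces is an assumption, since the text leaves the normalization ambiguous:

```python
def vote_weight(evidence, m_th=0.5):
    """f(A) = n_A / n: n_A counts pieces of evidence whose basic probability
    assignment for proposition A reaches the threshold m_th (initially 1/2);
    here n is taken as the number of evidence pieces (an assumption)."""
    props = set()
    for m in evidence:
        props |= set(m)  # collect all supported propositions
    n = len(evidence)
    return {A: sum(1 for m in evidence if m.get(A, 0.0) >= m_th) / n
            for A in props}

H0, H1 = frozenset({"H0"}), frozenset({"H1"})
f = vote_weight([{H1: 0.8, H0: 0.2},
                 {H1: 0.6, H0: 0.4},
                 {H1: 0.1, H0: 0.9}])
```

Here two of three pieces support H_1 above the threshold, so H_1 receives twice the reassigned conflict mass of H_0.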
According to the synthesis method determined above, new evidence is synthesized as follows: Step 1. Let the two pieces of evidence with priority factors P_1 and P_2 be E_1 and E_2. The amount of mutual conflict k_ij, i.e., the sum of conflict between evidence i and evidence j, can be expressed as

$$k_{ij} = \sum_{A \cap B = \emptyset} m_i(A)\, m_j(B), \quad i = 1, 2, \ldots, n, \; j = 1, 2, \ldots, n. \quad (11)$$

Equation (12) shows the relationship between the amount of mutual conflict k_ij and the amount of self-conflict ϕ_i:

$$\varphi_i = \sum_{j=1,\, j \neq i}^{n} k_{ij}. \quad (12)$$

Step 2. Calculate the conflict value between them.
Let k_12 be the amount of mutual conflict between the evidence E_1 and the evidence E_2, and let k_th be the conflict threshold. If k_12 < k_th, the aggregation rule of D-S evidence theory is used when synthesizing the evidence. If k_12 ≥ k_th, the priority factors P_1 and P_2 are added when the evidence is aggregated, and the formula is m = P_1 m_1(E_1) ⊕ P_2 m_2(E_2).
Therefore, the method of introducing the priority factor is used in evidence synthesis. Step 3. Take the piece of evidence E_3 with the next priority factor P_3, and then re-synthesize it with the result obtained in the previous step according to the synthesis rules in Step 2. Step 4. Use the above method to continuously synthesize the newly generated results. When the priority factor P_L is reached, the improved result is obtained through fusion calculation. Finally, a reasonable judgment condition is established, and the result is decided.
The synthesis method based on the evidence classification strategy is the evidence synthesis method with a priority factor, and its application process is shown in Figure 1.


Sensing Probability Model
If the two-dimensional plane monitoring area Λ is discretized into m × n pixels, we define the event r_ij that the i-th pixel p_i is sensed by the j-th sensor s_j, with probability P(r_ij) = P(p_i, s_j):

$$P(p_i, s_j) = \begin{cases} 1, & d(p_i, s_j) \le R_s - R_e \\ e^{-\lambda \left(d(p_i, s_j) - (R_s - R_e)\right)^{\beta}}, & R_s - R_e < d(p_i, s_j) < R_s + R_e \\ 0, & d(p_i, s_j) \ge R_s + R_e \end{cases}$$

where d(p_i, s_j) is the distance between the i-th pixel p_i and the j-th sensor s_j, λ and β are parameters related to the sensing probability, R_s is the sensing radius, and R_e is the sensing error range of the sensor node s_j. By introducing the D-S evidence theory with the priority factor to synthesize the sensing probabilities, the probability P(p_i) that this pixel is perceived by the wireless sensor network is obtained. The evidence is aggregated only if the sensing probability P(p_i, s_j) > 0.
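A minimal sketch of such a sensing probability model, assuming a piecewise-exponential form built from the parameters named here (λ, β, R_s, R_e, with the values used later in the simulation section):

```python
import math

def sensing_probability(d, Rs=10.0, Re=2.0, lam=0.2, beta=2.0):
    """Probabilistic sensing model (assumed piecewise-exponential form):
    detection is certain within Rs - Re, impossible beyond Rs + Re, and
    decays exponentially inside the error band in between."""
    if d <= Rs - Re:
        return 1.0
    if d >= Rs + Re:
        return 0.0
    return math.exp(-lam * (d - (Rs - Re)) ** beta)
```

The per-node probabilities produced this way are the pieces of evidence that the priority-factor D-S aggregation then fuses into P(p_i).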

Area Coverage Model
There are m × n pixels in the two-dimensional monitoring area Λ, and the area of each pixel is expressed as ∆x × ∆y (assuming that the area of each pixel is 1 in this paper). The probability of the i-th pixel being sensed by the wireless sensor network is P(p_i); when P(p_i) ≥ P_µ (P_µ denotes the lowest sensing probability allowed by the wireless sensor network), this pixel can be regarded as being sensed by the wireless sensor network.
Whether the i-th pixel is sensed by the sensor network is marked as P_COV(p_i); i.e.,

$$P_{COV}(p_i) = \begin{cases} 1, & P(p_i) \ge P_{\mu} \\ 0, & P(p_i) < P_{\mu} \end{cases}$$

In this paper, the coverage R_area of the monitoring area is defined as the ratio of the perceived pixel area Λ_area to the total area Λ_s of the monitoring area, namely

$$R_{area} = \frac{\Lambda_{area}}{\Lambda_s} = \frac{\sum_{i=1}^{m \times n} P_{COV}(p_i) \cdot \Delta x \Delta y}{\Lambda_s}.$$
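The coverage computation can be sketched as follows; the grid values and threshold are illustrative:

```python
def coverage_ratio(P, P_mu=0.9):
    """R_area: the fraction of pixels whose fused sensing probability
    P(p_i) reaches the minimum allowed probability P_mu (each pixel is
    taken to have unit area, as in the text)."""
    flat = [p for row in P for p in row]
    covered = sum(1 for p in flat if p >= P_mu)  # pixels with P_COV = 1
    return covered / len(flat)

# 2 of 4 pixels reach the threshold, so coverage is 0.5
grid = [[1.0, 0.5],
        [0.95, 0.2]]
r = coverage_ratio(grid)
```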

Virtual Force
By calculating the sensing probability of pixels in a circular area with the sensor node s j as the center and R as the radius, the movement of the sensor node s j is determined.
For any pixel p_i and sensor node s_j, the force exerted on the sensor node s_j by the pixel p_i can be expressed as follows:

$$F(p_i, s_j) = \frac{k \left(1 - P(i)\right)}{d(p_i, s_j)^{\alpha}},$$

where P(i) is the comprehensive probability that the i-th pixel p_i is perceived by the wireless sensor network, k and α denote the gain coefficients, and d(p_i, s_j) is the distance between the pixel p_i and the sensor node s_j. The force direction points from the pixel p_i to the sensor node s_j, and the X-axis component F_x(p_i, s_j) and the Y-axis component F_y(p_i, s_j) can be obtained after decomposition.
The resultant force in the X-axis direction is $F_x(s_j) = \sum_i F_x(p_i, s_j)$, and the resultant force in the Y-axis direction is $F_y(s_j) = \sum_i F_y(p_i, s_j)$, so the resultant force generated by the pixels in the circular area with radius R on the sensor node is $F_{xy}(s_j) = \sqrt{F_x^2(s_j) + F_y^2(s_j)}$.
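A sketch of the resultant-force computation; the magnitude k(1 − P(i))/d^α is an assumed form built from the gain coefficients named here, and the direction (pulling the node toward poorly covered pixels) is one plausible reading:

```python
import math

def virtual_force(node, pixels, probs, k=2.0, alpha=2.0):
    """Resultant virtual force on a sensor node from the pixels in its
    disc of radius R. Poorly covered pixels (small P(i)) pull harder;
    the magnitude k * (1 - P(i)) / d**alpha is an assumed form."""
    Fx = Fy = 0.0
    for (px, py), P in zip(pixels, probs):
        dx, dy = px - node[0], py - node[1]
        d = math.hypot(dx, dy)
        if d == 0.0:
            continue  # a pixel at the node position exerts no net force
        F = k * (1.0 - P) / d ** alpha
        Fx += F * dx / d  # X-axis component
        Fy += F * dy / d  # Y-axis component
    return Fx, Fy, math.hypot(Fx, Fy)

# one completely uncovered pixel at distance 5 from the node
Fx, Fy, Fxy = virtual_force((0.0, 0.0), [(3.0, 4.0)], [0.0])
```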

Optimization Problem Formulation
The goal of deployment optimization is to maximize the aware coverage

$$\max \; R_{area} = \frac{\Lambda_{area}}{\Lambda_s}, \quad (20)$$

which can be obtained by the following equations:

$$F_{xy}(s_j) = \sqrt{F_x^2(s_j) + F_y^2(s_j)}, \qquad F_x(p_i, s_j) = F(p_i, s_j) \sin\theta, \qquad F_y(p_i, s_j) = F(p_i, s_j) \cos\theta, \quad (21)$$

where θ measures the angle between the sensor s_j and the pixel p_i. The constraints are listed as follows. It can be seen from (21) that d(p_i, s_j) (i.e., the distance between the sensor s_j and the pixel p_i) is the variable that needs to be determined. From (22), we can say that the optimization problem is, in fact, adjusting the locations of the sensor set {s_j}. What needs to be done is to adjust the lateral movement distance d_x(o, s_j) and the longitudinal movement distance d_y(o, s_j) relative to the central point o of the circular area with radius R, so as to maximize the coverage.
The distribution of pixel set {p_i} is known. For an example, see Figure 2.

The Process of Particle Swarm Optimization
Particle swarm optimization (PSO) regards each sensor node as a massless particle flying at a certain speed in the search space. This speed is dynamically adjusted according to the flight experience of the particle itself and its companions. Reference [10] discusses in detail that the optimal position and velocity of the node particles in particle swarm optimization depend on the initial random position and velocity of the nodes and the evolution formula. Because the virtual force algorithm can effectively guide the distribution process of mobile nodes, and the swarm intelligence optimization strategy has a strong global optimization ability, this paper adds the influence of virtual force to the evolution formula of the swarm intelligence algorithm. Its evolution formula is as follows:

$$v_{q,Dim}(t+1) = w(t)\, v_{q,Dim}(t) + c_1\, \mathrm{Rand}()\, \left(L^*_{q,Dim} - L_{q,Dim}(t)\right) + c_2\, \mathrm{Rand}()\, \left(L^*_{Dim} - L_{q,Dim}(t)\right) + c_3\, \mathrm{Rand}()\, \frac{F^q_{Dim}}{F^q_{xy}},$$

$$L_{q,Dim}(t+1) = L_{q,Dim}(t) + v_{q,Dim}(t+1),$$

where the particle swarm size is Q; v_{q,Dim}(t) is the velocity of the particle q in the Dim-th dimension in the t-th generation; Dim = 1 expresses the transverse direction (X-axis); Dim = 2 expresses the longitudinal direction (Y-axis); w(t) is the inertia weight; c_1 and c_2 are acceleration constants; c_3 is the acceleration factor used to adjust the influence of virtual force; and Rand() denotes the random function varying in the range of [0, 1]. L^*_{q,Dim} is the best position of the particle q in the Dim-th dimension, the best position of all particles in the Dim-th dimension is L^*_{Dim}, and the current position of the particle q in the Dim-th dimension is L_{q,Dim}. The inertia weight should gradually decrease with the increase of iteration times, so we have

$$w(t) = w_{max} - \left(w_{max} - w_{min}\right) \cdot \frac{t}{MaxNumber},$$

where MaxNumber is the maximum number of iterations and t denotes the number of iterations.
Driven by virtual force, the lateral movement distance ∆x_q(t) of the particle q in the t-th generation is

$$\Delta x_q(t) = \begin{cases} \dfrac{F^q_x}{F^q_{xy}} \cdot MaxStep, & F^q_{xy} \ge F_{th} \\[4pt] 0, & F^q_{xy} < F_{th} \end{cases}$$

The longitudinal moving distance ∆y_q(t) of the particle q in the t-th generation is

$$\Delta y_q(t) = \begin{cases} \dfrac{F^q_y}{F^q_{xy}} \cdot MaxStep, & F^q_{xy} \ge F_{th} \\[4pt] 0, & F^q_{xy} < F_{th} \end{cases}$$

where MaxStep is the maximum allowable moving distance of the sensor node, F^q_xy is the virtual force acting on the particle q, F^q_x and F^q_y are the virtual force components of the particle q on the X-axis and the Y-axis, and F_th is the threshold value of the virtual force.
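One velocity/position update with the extra virtual-force term can be sketched as follows; the exact way the c_3 term enters, and the clamping of the displacement to MaxStep, are assumptions:

```python
import random

def pso_step(v, x, pbest, gbest, F, Fxy, w=0.7, c1=1.0, c2=1.0, c3=1.0,
             F_th=1.0, max_step=5.0):
    """One evolution step for a single particle in two dimensions.
    The virtual-force term only acts when the resultant force reaches
    the threshold F_th, and the displacement is clamped to max_step."""
    new_v, new_x = [], []
    for dim in range(2):
        vf_dir = F[dim] / Fxy if (Fxy >= F_th and Fxy > 0.0) else 0.0
        vel = (w * v[dim]
               + c1 * random.random() * (pbest[dim] - x[dim])
               + c2 * random.random() * (gbest[dim] - x[dim])
               + c3 * random.random() * vf_dir)
        vel = max(-max_step, min(max_step, vel))  # clamp to MaxStep
        new_v.append(vel)
        new_x.append(x[dim] + vel)
    return new_v, new_x

random.seed(1)
v2, x2 = pso_step(v=(0.0, 0.0), x=(0.0, 0.0), pbest=(10.0, 10.0),
                  gbest=(20.0, 20.0), F=(1.0, 1.0), Fxy=2.0)
```

With the particle below both its personal and global bests and a positive virtual force, every term pulls in the positive direction, so the clamped step moves the particle toward the bests.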

Deployment Strategy
The algorithm steps are as follows: Step 1. Each wireless sensor node in the area Λ broadcasts information, which includes the index and location information of the node, and enters step 2.
Step 2. If the wireless sensor node s j receives the broadcast information of the neighbor node, it updates the neighbor list information and enters step 3.
Step 3. By introducing the D-S evidence theory with the priority factor, the sensing probability P(p_i) of each pixel p_i in the circular area with radius R = (R_s + R_e) around the sensor node s_j is aggregated.
Step 4. Using the particle swarm optimization algorithm, calculate the optimal distance ∆x(t) that sensor node s j needs to move horizontally and the optimal distance ∆y(t) that sensor node s j needs to move vertically, and enter step 5.
Step 5. If the new position to be moved is outside the monitoring area Λ, the moving process will not be carried out, and the wireless sensor node will not move, so go to step 6. Otherwise, move to a new position and go to step 6.
Step 6. If the preset number of cycles is reached, the algorithm ends. Otherwise, go to step 1 and start the next moving process.
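The six steps can be sketched as a loop; the per-node candidate move here is a random-direction stand-in for the PSO/VF computation of steps 3 and 4:

```python
import math
import random

AREA = 100.0        # 100 m x 100 m monitoring area (from the simulation setup)
MAX_STEP = 5.0      # maximum movement per round

def deploy(nodes, rounds=3):
    """Skeleton of the six-step strategy. The candidate move is a
    random-direction stand-in for steps 3-4; step 5's boundary check
    and step 6's preset cycle count are as described in the text."""
    for _ in range(rounds):                           # step 6: preset cycles
        for i, (x, y) in enumerate(nodes):            # steps 1-2: per node
            ang = random.uniform(0.0, 2.0 * math.pi)  # steps 3-4: stand-in
            nx = x + MAX_STEP * math.cos(ang)
            ny = y + MAX_STEP * math.sin(ang)
            if 0.0 <= nx <= AREA and 0.0 <= ny <= AREA:  # step 5: reject
                nodes[i] = (nx, ny)                      # moves outside
    return nodes

random.seed(42)
nodes = [(random.uniform(0.0, AREA), random.uniform(0.0, AREA))
         for _ in range(20)]
nodes = deploy(nodes)
```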

Simulation and Discussion
The Python language was used to program the above deployment model. It is assumed that N wireless sensor nodes are randomly deployed in a monitoring area Λ of 100 m × 100 m, with the sensing radius of all nodes R_s = 10 m, sensing error range R_e = 2 m, and communication radius R_c = 3R_s = 30 m; m = 100 and n = 100.
The parameters of the VF algorithm are as follows: k_1 = 1, α_1 = 2, F_th = 1, MaxStep = 5 m. The parameters of this algorithm are as follows: sensing probability parameters, β = 2, λ = 0.2; gain coefficients, k = 2, α = 2; VF threshold, F_th = 1; minimum sensing probability allowed by a wireless sensor network, P_µ = 1; maximum moving distance allowed by a sensor, MaxStep = 5 m; sensing radius, R = 16 m. The parameters related to the optimization of the particle swarm are set as follows: acceleration factors of the particle swarm, c_1 = c_2 = c_3 = 1; maximum number of iterations, MaxNumber = 300; velocity interval of particles, [v_min, v_max] = 0.2 × [0, 50] = [0, 10]. To prevent wireless sensor nodes from moving out of the monitoring area Λ, if the new position to be moved to is located within the 5 m wide edge inside the monitoring area, the nodes will not move and will remain in the original position.

The Deployment Results
The fixed positions of the sensor nodes after the initial random arrangement are shown in Figure 3a. Figure 3b shows the deployment of sensor nodes using the VF algorithm. The strong attraction of fixed nodes to ordinary nodes limits the movement of nodes, resulting in a local accumulation of nodes. Figure 3c shows the optimized deployment effect of particle swarm optimization. The local aggregation phenomenon has been alleviated, and the average coverage rate of the network has reached 83.62%. Figure 3d shows the sensor node deployment optimized by the proposed VF-PSO-DS strategy. The blind area of regional coverage is reduced, and the local aggregation phenomenon almost disappears. This shows that the combination of the two algorithms plays an effective role in the uniform coverage of the network, and the effective coverage of the optimized network reaches 89.38%.

To further verify the performance of the three algorithms, the convergence rates were compared through experiments, as shown in Figure 4. The original VF strategy does not take moving distance optimization into account, and most work determines the moving distance based on the amount of the virtual force (see [36]). As a result, the VF algorithm provides the lowest coverage rate. In addition, the PSO algorithm does not account for virtual force, and the convergence time is substantially slower; the final optimization result, however, is gratifying.
One hundred independent simulation experiments were conducted. The average effective coverage, average calculation time, and average iteration times of the three algorithms are shown in Table 2. Experimental results show that the proposed VF-PSO-DS strategy can quickly and effectively optimize the deployment of wireless sensor networks, and its calculation time is 85.1% and 41.6% of that of the VF algorithm and particle swarm optimization algorithm, respectively, while the effective coverage of the network increases by 19.34% and 3.37%, respectively. Compared with the VF algorithm and particle swarm optimization algorithm, the proposed algorithm not only has a better deployment optimization effect for wireless sensor networks, but also has a faster convergence speed and requires less calculation time.

The Adaptability Tests
To verify the adaptability of the proposed VF-PSO-DS strategy to hot spots and obstacles, it is assumed that the area to be measured contains one hot spot and several obstacles, and the wireless sensor network contains 16 fixed sensor nodes and 80 mobile sensor nodes, while other network parameters and algorithm parameters remain unchanged. The initial deployment of the network is shown in Figure 5a,b. Based on the measurement requirements of different areas in the deployment of the network, the network can effectively avoid obstacles, and the density of sensor nodes in hot spots can be improved. Thus, the VF-PSO-DS strategy can meet the needs in a complex environment, and the algorithm is robust.
The stability of the deployment strategy was tested for the case of the num sensor nodes increasing. At each of the values of = 300,350,400,450,500, the sim was carried out 20 times, and the results were averaged. From Tables 3 and 4, it concluded that with the increase in the number of sensor nodes, the coverage of th itoring area increases, and the moving distance of the nodes also increases. Beca VF algorithm does not control the position of nodes' movement in the dense nod moving distance is too large, the moving effect is not good, and it is less helpful to im the coverage rate. One hundred independent simulation experiments were conducted. The average effective coverage, average calculation time, and average iteration times of the three algorithms are shown in Table 2. Experimental results show that the proposed VF-PSO-DS strategy can quickly and effectively optimize the deployment of wireless sensor networks, and its calculation time is 85.1% and 41.6% of that of the VF algorithm and particle swarm optimization algorithm, respectively, while the effective coverage of the network increases by 19.34% and 3.37% respectively. Compared with the VF algorithm and particle swarm optimization algorithm, the algorithm not only has a better deployment optimization effect for wireless sensor networks, but also has a faster convergence speed and requires less calculation time.

The Adaptability Tests
To verify the adaptability of the proposed VF-PSO-DS strategy to hot spots and obstacles, it is assumed that the area to be measured contains one hot spot and several obstacles, and that the wireless sensor network contains 16 fixed sensor nodes and 80 mobile sensor nodes, while the other network and algorithm parameters remain unchanged. The initial deployment of the network is shown in Figure 5a,b. Driven by the different measurement requirements of the areas in the monitoring region, the deployed network effectively avoids the obstacles, and the density of sensor nodes in the hot spot is increased. Thus, the VF-PSO-DS strategy can meet the needs of a complex environment, and the algorithm is robust.
The stability of the deployment strategy was also tested as the number of sensor nodes increases. For each value of N = 300, 350, 400, 450, 500, the simulation was carried out 20 times and the results were averaged. From Tables 3 and 4, it can be concluded that as the number of sensor nodes increases, both the coverage of the monitoring area and the moving distance of the nodes increase. Because the VF algorithm does not constrain node movement in dense regions, the moving distance becomes excessive, the movement is ineffective, and it contributes little to improving the coverage rate.

Conclusions
In this paper, a node sensing probability model is proposed, and the sensing probabilities are fused by D-S evidence theory. Because the traditional D-S combination rule is inconsistent with the actual situation and produces invalid results when there is a large conflict between pieces of evidence, a priority factor is introduced to reassign the sensing probability where the evidence conflict occurs. On this basis, a virtual force-directed particle swarm optimization algorithm based on D-S evidence theory (VF-PSO-DS) is proposed for node deployment. In this approach, the effective coverage rate of the network is taken as the optimization goal, the globally optimal scheme is searched for by particle swarm optimization, and the virtual force, fine-tuned by the sensing probability obtained from D-S evidence fusion, is employed to guide particle evolution and speed up convergence. Experimental results show that the approach can quickly and effectively achieve global optimization. Compared with the virtual force algorithm and the particle swarm optimization algorithm, the VF-PSO-DS approach takes less time, converges faster, and increases the sensing coverage of wireless sensor networks.
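As background for the fusion step summarized above, Dempster's classical combination rule for two basic probability assignments (BPAs) can be sketched as follows. The conflict coefficient `K` is the quantity that signals a large evidence conflict; when `K` approaches 1, the paper's priority factor reassigns the sensing probability instead of applying the rule directly (that reassignment is specific to the paper and is not reproduced here).

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs (dicts mapping frozenset focal elements to mass)
    with Dempster's rule; returns (fused BPA, conflict coefficient K)."""
    K = 0.0
    fused = {}
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + va * vb
        else:
            K += va * vb            # mass falling on the empty set: conflict
    if K >= 1.0:
        raise ValueError("total conflict; Dempster's rule is undefined")
    norm = 1.0 - K
    return {s: v / norm for s, v in fused.items()}, K

# Zadeh's classic high-conflict example: the rule assigns all mass to B,
# the hypothesis both sources consider nearly impossible
A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
fused, K = dempster_combine({A: 0.99, B: 0.01}, {C: 0.99, B: 0.01})
print(K, fused)
```

The printed `K` is close to 1 and the fused mass concentrates entirely on `B`, which is exactly the counterintuitive behavior that motivates conflict-handling modifications such as the priority factor.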