An Enhanced Food Digestion Algorithm for Mobile Sensor Localization

Mobile sensors extend the range of monitoring, overcome the limitations of static sensors, and are increasingly used in real-life applications. Since Monte Carlo Localization (MCL) can produce significant errors in mobile sensor localization, this paper improves the food digestion algorithm (FDA) and applies the improved algorithm to the mobile sensor localization problem to reduce localization error and improve localization accuracy. First, this paper proposes three inter-group communication strategies, based on the topologies that exist between groups, to speed up the convergence of the algorithm. Second, a compact strategy is introduced to reduce the algorithm's running time and memory usage. Finally, the improved algorithm is applied to the mobile sensor localization problem, reducing the localization error and achieving good localization results.

Although metaheuristics are excellent at solving problems with real-world applications, they are not a panacea, and, as stated in the No Free Lunch Theorem [18], each optimization algorithm may be good at solving different problems. Therefore, researchers are constantly exploring new optimization algorithms. For example, Holland proposed the Genetic Algorithm (GA) in 1975 based on Darwinian evolutionary theory [19]. Dorigo et al. proposed Ant Colony Optimization (ACO) in 1992 [20]. Storn et al. proposed Differential Evolution (DE) in 1995 [21]. Kennedy and Eberhart proposed the Particle Swarm Optimization (PSO) algorithm in 1995 [22]. Karaboga et al. proposed the Artificial Bee Colony algorithm (ABC) in 2005 [23]. Yang et al. proposed Cuckoo Search (CS) in 2009 [24]. Rashedi et al. proposed the Gravitational Search Algorithm in 2009 [25]. Yang et al. proposed the Bat Algorithm (BA) in 2010 [26]. Mirjalili et al. proposed the Grey Wolf Optimizer (GWO) in 2014 [27]. Mirjalili et al. proposed the Sine Cosine Algorithm (SCA) in 2016 [28]. Abualigah et al. proposed the Aquila Optimizer (AO) in 2021 [29]. Song et al. proposed the Phasmatodea Population Evolution algorithm (PPE) in 2021 [30]. Pan et al. proposed the Gannet Optimization Algorithm (GOA) in 2022 [31].
Numerous researchers have dedicated their efforts to enhancing the performance of metaheuristic algorithms. Among the various approaches, parallel and compact strategies have gained significant attention due to their simplicity and effectiveness. The parallel strategy emphasizes the grouping of populations, facilitating the exchange of information between groups to accelerate the algorithm's convergence and enhance its ability to discover optimal solutions accurately. On the other hand, the compact strategy involves mapping the population onto a probabilistic model and performing operations on the entire population through manipulations of this model. This approach offers notable benefits such as reduced computational time and memory usage. In this study, we propose a novel approach that combines both parallel and compact strategies to enhance the performance of the food digestion algorithm. We expect that this integrated methodology will effectively enhance the algorithm's ability to seek optimal solutions in the optimization process, leading to improved outcomes.
Numerous researchers have combined these two strategies to improve metaheuristic algorithms. In reference [32], the authors combine the parallel and compact strategies to enhance DE and utilize the enhanced algorithm for image segmentation, yielding superior outcomes. In reference [33], the authors first introduce six enhancements to the compact CS, then select the variant with the most favorable results and incorporate the parallel strategy. Ultimately, the authors apply the improved algorithm to underwater robot path planning, which yields promising results.
Wireless sensor networks (WSNs) are self-organized communication systems consisting of multiple nodes that enable the monitoring of specific areas through multi-hop communication. In a static WSN, the nodes are randomly distributed, and their locations remain fixed once determined. However, in practical environments, mobile sensor nodes are in greater demand. For instance, in target tracking applications, real-time positioning of moving targets is essential [34,35]. The mobility of sensor nodes allows for an extended monitoring range, overcoming coverage gaps that may occur due to the failure of static nodes. Furthermore, the movement of nodes enables the network to discover and observe events more effectively, while also enhancing the communication quality among the sensor nodes [36]. Despite the importance of mobile node localization, there is a relative scarcity of research in this area. Most localization methods developed for static sensor nodes are unsuitable for the mobile sensor localization problem, making the study of mobile sensor localization a current research focal point [37]. Additionally, the study of outdoor mobile sensors holds particular significance due to the complex and ever-changing nature of the outdoor environment.
Based on the above reasons, this paper uses parallel and compact strategies to improve the food digestion algorithm and applies it to the outdoor mobile sensor localization problem. Section 2 introduces the food digestion algorithm and mobile sensor localization techniques. Section 3 presents the implementation of the Parallel Compact Food Digestion Algorithm (PCFDA). Section 4 tests the performance of PCFDA. Section 5 uses PCFDA to optimize the error in mobile sensor localization. Section 6 gives the conclusion of this paper.

Related Works
This section mainly introduces the food digestion algorithm and the mobile sensor localization problem.

Food Digestion Algorithm
The food digestion algorithm mainly covers the process of food digestion in the mouth, stomach, and small intestine. This section describes the modeling processes at these three sites in detail.

Digestion in the Oral Cavity
The digestion of food in the mouth involves both physical and chemical digestion. The process of physical digestion mainly consists of the action of forces, which are represented as follows: F1 denotes the force on the food in the mouth, iter denotes the current number of iterations, Max_iter denotes the maximum number of iterations, and a is used to adjust the size of F1, with a value of 1.5574. F1_d denotes forces with different sizes and directions, where rand is a random value in the range [0, 1].
The chemical digestion of food in the mouth is dominated by the digestion of starch by salivary amylase, and, considering the effect of substrate concentration on the enzymatic reaction, the modeling process is as follows: Em denotes the enzyme in the oral cavity, obtained by randomly setting half of the dimension values to 0 and the other half to 1, where r = randperm(D) denotes that the values of the D dimensions are randomly shuffled. Equation (4) is the Michaelis-Menten equation, which reflects the relationship between substrate concentration and reaction rate [38]. V represents the rate of the enzymatic reaction; V_max represents the maximum reaction rate, and its value is 2. S represents the substrate concentration, which we express as a sine function, given by Equation (5), where rand is a number in the range [0, 1] and π is the mathematical constant pi. K_m is a characteristic constant of the enzyme; in the oral cavity, the value of K_m is 0.8. Therefore, the particle update equation in the oral cavity is as follows: Food_i^(t+1) denotes the ith particle at generation t + 1. Food_k^t denotes the kth particle at generation t, where k is a randomly selected particle from among the N particles. Food_R^t denotes the Rth particle of the tth generation, and R is chosen as shown in Equation (8). b is a constant with a value of 1.5. Food_i^t denotes the ith particle of the tth generation. Best_p represents the global optimal value. C1 and C2 are two random numbers that change with the number of iterations. ceil denotes rounding toward positive infinity, and randi is a random integer function.
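The enzyme mask Em and the Michaelis-Menten rate described above can be sketched as follows. This is an illustrative sketch, not the paper's code: the helper names are invented, and since the exact sine form of Equation (5) for S is not reproduced here, S is passed in directly.

```python
import numpy as np

def enzyme_mask(D, rng):
    # Em: randomly set half of the D dimension values to 1 and the
    # other half to 0 (r = randperm(D) in the paper's notation).
    em = np.zeros(D)
    r = rng.permutation(D)
    em[r[: D // 2]] = 1.0
    return em

def reaction_rate(S, V_max=2.0, K_m=0.8):
    # Michaelis-Menten equation (Eq. 4): V = V_max * S / (K_m + S),
    # with V_max = 2 and K_m = 0.8 in the oral cavity.
    return V_max * S / (K_m + S)
```

Note that at S = K_m the rate is exactly half of V_max, which is the defining property of the Michaelis constant.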

Digestion in the Stomach
The digestion of food in the stomach also involves two processes: physical and chemical digestion. Physical digestion is primarily governed by the forces generated by the contraction and relaxation of the stomach as well as peristalsis. The forces are expressed as follows: F2 represents the force on the food in the stomach. F2_d represents a directed force, which takes values in the range [−2, 2]. The chemical digestion modeling process in the stomach is similar to that in the oral cavity; the difference is that a different enzyme Em and a different characteristic constant K_m are selected for each iteration. In the stomach, the value of K_m is 0.9. Therefore, the particle update equation in the stomach is as follows: Food_m^(t+1) selects particles according to Equation (13). If the optimal fitness value of the first one-third of the updated particles is less than the global optimum, then we select this particle; otherwise, we perturb the globally optimal particle and select the perturbed particle. The selection condition is therefore min(fitness_i^(t+1)) < Best_f. Mean is calculated according to Equation (12).

Digestion in the Small Intestine
The digestion of food in the small intestine also involves two processes: physical and chemical digestion. Physical digestion is primarily governed by forces generated by peristalsis of the small intestine, which is expressed as follows: F3 represents the force on the food in the small intestine. a is a constant with a value of 1.5574, and a1 is used to regulate the magnitude of the force, with a value of 1. F3_d represents a directed force, which is a random value in the range [−2, 2]. Thus, the equation for particles updated in the small intestine is given in Equation (16).
The judgment condition for Food_i^(t+1) is calculated from Equation (17). Levy(D) denotes Lévy flight, which is calculated as follows: µ and δ are random numbers in the range [0, 1], and β is a constant whose value is 1.5. The food digestion algorithm simulates the process of food digestion at the three main digestive sites in the human body to construct the particle optimization process. In the oral cavity, particles always follow a random particle to update their positions, which promotes the diversity of particles. As the number of iterations increases, particles gradually select particles with better fitness values to update their positions. This selection enhances population diversity in the early iterations and facilitates rapid convergence in later stages.
In the stomach, particles follow the optimal particle from the previous site, or a perturbed version of it, to update their positions, which accelerates convergence. Additionally, particles follow the mean particle to update their positions, promoting particle diversity and preventing them from getting trapped in local optima.
In the small intestine, particles update their positions by following the global optimum, enabling quick convergence. Furthermore, particles update their positions using the Lévy flight strategy, which helps avoid falling into local optima.
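The Lévy-flight step used in the small intestine can be generated with the standard Mantegna procedure. This is a common formulation with β = 1.5 and may differ in detail from the paper's own Lévy equations, so treat it as a sketch:

```python
import math
import numpy as np

def levy(D, beta=1.5, rng=None):
    # Mantegna's algorithm: step = u / |v|^(1/beta), where u and v are
    # zero-mean Gaussians and sigma_u is chosen so that step lengths
    # follow a heavy-tailed Levy-stable distribution.
    if rng is None:
        rng = np.random.default_rng()
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, D)
    v = rng.normal(0.0, 1.0, D)
    return u / np.abs(v) ** (1 / beta)
```

The occasional very long steps produced by the heavy tail are what let particles escape local optima.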
Algorithm 1 provides a detailed description of the FDA.

Algorithm 1 Food digestion algorithm (FDA).
Backup the initialized population and its fitness values;
6: Calculate the values of F1, F2, and F3 according to Equations (1), (9), and (…);
7: Calculate the value of R according to Equation (8);
8: Calculate the values of C1 and C2;
9: Calculate the values of Em and S according to Equations (3) and (5);
11: Calculate the values of F1_d and V according to Equations (2) and (4);
13: Update the particle according to Equation (6);
14: Calculate the fitness value of the particle;
15: if i == N/3 then
16: Find the minimum fitness value in the oral cavity, fitness_m;
17: Update the particle according to Equation (13);
…
if The historical optimal fitness value of the particle < the updated particle's fitness value then
37: Replace the updated particle's position and fitness value with the particle's historical optimal position and fitness value;
end if
end for
40: Backup the particle's historical optimal position and its fitness value;
41: Update the global optimal position and the global optimal value;
42: iter = iter + 1;
43: end while

Mobile Sensor Localization Problem
This section introduces a localization method for mobile sensor networks called Monte Carlo Localization (MCL), as described in references [39,40]. In wireless sensor networks, Monte Carlo localization methods typically involve fixed anchor nodes. These anchor nodes serve as reference points in the localization algorithm, and their positions are known in advance and remain unchanged over time. During the localization process, anchor nodes send signals to the mobile node and receive signals back from it, aiding in determining the mobile node's position.
The Monte Carlo localization method is a probabilistic, statistics-based algorithm used to estimate the location of a mobile node through multiple random simulations. It calculates the position of the mobile node using measurements such as received signal strength, arrival time, or other relevant data. The algorithm relies on important parameters, among which the pre-known positions of the anchor nodes play a crucial role.
In Monte Carlo localization methods, the use of multiple fixed anchor nodes provides additional measurements for estimating the position of the mobile node, which in turn improves the accuracy of the localization process. The fixed positions of the anchor nodes, along with reliable measurement data, form the foundation for the effectiveness of the Monte Carlo localization method in achieving accurate localization.
The MCL method consists of three main phases: initialization, prediction, and filtering [41]. In the initialization phase, each node is assigned a motion region and a maximum motion speed. During the prediction phase, a preliminary estimate of the mobile node's location is calculated. This estimate corresponds to a circular region whose center is the last known position of the node and whose radius is the product of the velocity and the positioning interval time. Figure 1 illustrates the execution flow of the MCL algorithm.
The filtering phase plays a crucial role in MCL. Initially, MCL calculates the set of single-hop beacon nodes, denoted as S1, and the set of two-hop beacon nodes, denoted as S2, based on their distances to other nodes. Subsequently, MCL randomly selects points within the feasible region and checks whether they are consistent with the unknown node's observations by verifying whether they fall within the range of the single-hop or two-hop beacon nodes. Specifically, a selected point is accepted if its nearest anchor is within the range of S1, or if both its closest and next closest anchors fall within the range of S2.
Points that fail to satisfy these criteria are filtered out; the filtering condition is expressed in Equation (20). As shown in Figure 2, the unknown node L senses the information of the surrounding anchor nodes at moment t, where S1 is its one-hop anchor node and S2 is its two-hop anchor node. An estimated coordinate sample of the unknown node L is valid only if it satisfies the filtering condition that its distance from S1 is less than R and its distance from S2 is between R and 2R; Lt in the figure meets the filtering condition and is retained as a reasonable sample particle. After the filtering phase, many sample particles' coordinates are eliminated, which can leave the sample set with too few samples. Hence, the prediction and filtering phases are executed iteratively until a sufficiently large number of samples remain in the sample set. Eventually, the arithmetic mean of the sample coordinates is calculated, serving as the estimate of the final node coordinates and concluding the localization at the current moment. Equation (21) is employed to estimate the locations of the unknown nodes based on the filtered reference points.
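The filtering condition and the final averaging step can be sketched as follows. Equations (20) and (21) themselves are not reproduced here, so the code paraphrases the textual description and its exact form should be treated as an assumption:

```python
import math

def valid_sample(p, one_hop, two_hop, R):
    # A candidate position p is kept only if it lies within R of every
    # one-hop anchor (set S1) and between R and 2R of every two-hop
    # anchor (set S2) -- the filtering condition of Eq. (20).
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (all(dist(p, s) < R for s in one_hop)
            and all(R <= dist(p, s) <= 2 * R for s in two_hop))

def estimate_position(samples):
    # Eq. (21): the node estimate is the arithmetic mean of the
    # coordinates of the surviving samples.
    n = len(samples)
    return (sum(x for x, _ in samples) / n,
            sum(y for _, y in samples) / n)
```

For example, with R = 10, a one-hop anchor at (0, 0), and a two-hop anchor at (15, 0), the candidate (5, 0) passes the filter (distances 5 and 10), while (12, 0) fails the one-hop condition.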

Enhanced Food Digestion Algorithm
This section introduces three inter-group communication strategies and a compact strategy to enhance the food digestion algorithm.

Design of Parallel Strategies
This section proposes three parallel strategies to speed up the convergence of the algorithm and to improve its optimization accuracy. The three strategies use different topologies, which are shown in Figure 3. The first parallel strategy uses a star topology. First, we choose one group as the central group and the others as subgroups. Particles in the central group exchange information with particles in the subgroups, and there is no communication between subgroups. The pseudo-code for this strategy is shown in Algorithm 2.
The second parallel strategy uses a unidirectional ring topology. This structure allows each subgroup to communicate with only one neighboring group, and every group communicates with its neighbor in the same direction around the ring. Algorithm 3 shows the details of the communication strategy.
Algorithm 2 Parallel strategy for star topology.
1: Calculate the average position of the first three groups' optimal particles and its fitness value;
2: if The fitness value of the average position < The fitness value of the optimal particle in the central group then
3: Replace the position of the central group's optimal particle and its fitness value with the average position and its fitness value;
4: end if
5: Perturb the central group's optimal particle and calculate its fitness value;
6: if The fitness value of the perturbed particle < The fitness value of the first group's optimal particle then
7: Replace the position of the first group's optimal particle and its fitness value with the position of the perturbed particle and its fitness value;
8: end if
9: if The fitness value of the perturbed particle < The fitness value of the second group's optimal particle then
10: Replace the position of the second group's optimal particle and its fitness value with the position of the perturbed particle and its fitness value;
11: end if
12: if The fitness value of the perturbed particle < The fitness value of the third group's optimal particle then
13: Replace the position of the third group's optimal particle and its fitness value with the position of the perturbed particle and its fitness value;
14: end if

Algorithm 3 Parallel strategy for unidirectional ring topology.
1: for g = 1 : 4 do
2: Take the remainder of (g + 1) divided by 4 and record it as sg;
…
6: if The fitness value of the optimal particle in group g > The fitness value of the optimal particle in group sg then
7: Replace the optimal particle position and its fitness value of group g with the optimal particle position and its fitness value of group sg;
8: end if
Perturb the optimal particle of group g and calculate its fitness value;
11: if The fitness value of the perturbed particle < The fitness value of the optimal particle in group g then
12: Replace the optimal particle position and its fitness value of group g with the perturbed particle position and its fitness value;
13: end if
14: end for
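The star-topology exchange of Algorithm 2 can be sketched as follows. The sphere fitness, the 0.1 perturbation scale, and the convention that group 0 is the central group with groups 1-3 as subgroups are all illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def sphere(x):
    # Simple stand-in fitness function for the sketch.
    return float(np.sum(x ** 2))

def star_communication(best_pos, best_fit, rng, fitness=sphere):
    # Step 1: average the three subgroup bests; adopt the average as the
    # central best if it has a better (smaller) fitness value.
    avg = np.mean(best_pos[1:4], axis=0)
    if fitness(avg) < best_fit[0]:
        best_pos[0], best_fit[0] = avg, fitness(avg)
    # Step 2: perturb the central best and offer the perturbed particle
    # to each subgroup, again keeping it only when it is better.
    perturbed = best_pos[0] + 0.1 * rng.standard_normal(best_pos[0].shape)
    pf = fitness(perturbed)
    for g in (1, 2, 3):
        if pf < best_fit[g]:
            best_pos[g], best_fit[g] = perturbed.copy(), pf
    return best_pos, best_fit
```

Because every replacement is guarded by a fitness comparison, no group's best can get worse during communication.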
The third parallel strategy uses a bi-directional ring topology. This structure allows each subgroup to exchange information with both of its neighboring groups in the ring. Implementation details are given in Algorithm 4.

Algorithm 4 Parallel strategy for bi-directional ring topology.
1: for g = 1 : 4 do
2: Take the remainder of (g + 1) divided by 4 and record it as sg;
…
Calculate the average position of the optimal particles in group sg and group vg and its fitness value;
11: if The fitness value of the average position < The fitness value of the optimal particle in group g then
12: Replace the position of the optimal particle in group g and its fitness value with the average position and its fitness value;
13: end if
14: end for
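A minimal sketch of the bi-directional ring exchange follows. Since the definition of vg is lost in the extracted listing, it is assumed here to be the neighbor on the opposite side of the ring from sg, and the fitness function is supplied by the caller:

```python
import numpy as np

def ring_communication(best_pos, best_fit, fitness):
    # Each group g averages the bests of its two ring neighbours (sg and
    # vg, assumed to be the groups on either side) and adopts the average
    # only when it improves on the group's own best.
    G = len(best_pos)
    new_pos = [p.copy() for p in best_pos]
    new_fit = list(best_fit)
    for g in range(G):
        sg, vg = (g + 1) % G, (g - 1) % G  # the two ring neighbours
        avg = (best_pos[sg] + best_pos[vg]) / 2.0
        f = fitness(avg)
        if f < best_fit[g]:
            new_pos[g], new_fit[g] = avg, f
    return new_pos, new_fit
```

As with the star strategy, the guarded replacement guarantees that communication never degrades any group's best solution.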

Design of Compact Strategy
This section describes the principles of the compact mechanism and the detailed process for improving the food digestion algorithm using the compact mechanism.

Principles of the Compact Mechanism
The Estimation of Distribution Algorithm (EDA) is a method based on probabilistic models [42]. It maps the population into a probability model and realizes operations on the population by operating on the probability model [43]. Compact algorithms are a type of EDA. They dramatically reduce the use of memory space and speed up the algorithm's operation by using a probabilistic model to characterize the distribution of the entire population. A compact algorithm uses a virtual population instead of the actual population. This virtual population is encoded in a PV vector, which is an N × 2 matrix in compact differential evolution (CDE) [43] and real-valued compact genetic algorithms (RCGAs) [44].
µ and δ denote the mean and standard deviation of the PV, respectively, and t denotes the current number of iterations. Each pair of mean and standard deviation in the PV corresponds to a Probability Density Function (PDF), which is truncated to [−1, 1] with its amplitude normalized so that the area equals 1 [45]. The calculation of the PDF is given by Equation (23).
erf is the error function. By constructing Chebyshev polynomials, the PDF can be mapped to a Cumulative Distribution Function (CDF) with values ranging from 0 to 1 [46,47]. The CDF is calculated as shown in Equation (24), in which x takes values in the range [−1, 1]. The CDF can also be expressed as Equation (25), and it returns values in the range [0, 1]. To sample a design variable X_i from the PV vector, a random number R is first generated from a uniform distribution, and the corresponding inverse of the CDF is then evaluated to obtain a new value. This newly generated value is compared with another value; the one with the better fitness value is the winner, the one with the worse fitness value is the loser, and both are retained for updating the PV vector. The update equations for the mean and standard deviation are given in Equations (26) and (27).
Np denotes the size of the virtual population, which is a typical parameter of compact algorithms; its value is usually several times the size of the actual population [44].
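The per-dimension sampling and winner/loser update can be sketched as follows. Two caveats: sampling via the inverse truncated CDF requires erfinv, so for brevity this sketch draws a Gaussian and clips it to [−1, 1], which is only an approximation; and the update rules below are the standard real-valued compact GA formulas of [44], assumed to match Equations (26) and (27):

```python
import numpy as np

def sample(mu, delta, rng):
    # Approximate draw from the truncated PDF: a Gaussian sample
    # clipped to [-1, 1] (the exact scheme inverts the truncated CDF).
    return np.clip(rng.normal(mu, delta), -1.0, 1.0)

def update_pv(mu, delta, winner, loser, Np):
    # Standard rcGA update of the PV vector (assumed form of Eqs. 26-27):
    #   mu'      = mu + (winner - loser) / Np
    #   delta'^2 = delta^2 + mu^2 - mu'^2 + (winner^2 - loser^2) / Np
    mu_new = mu + (winner - loser) / Np
    var_new = (delta ** 2 + mu ** 2 - mu_new ** 2
               + (winner ** 2 - loser ** 2) / Np)
    return mu_new, np.sqrt(np.maximum(var_new, 1e-12))
```

The mean drifts toward the winner at a rate of 1/Np, which is why the virtual population size controls how quickly the model converges.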

Compact Food Digestion Algorithm
Compact algorithms reduce memory usage and speed up algorithms, but they also reduce population diversity and tend to fall into local optima. To address this problem, a solution is generated by sampling from the probabilistic model during each iteration. Then, three solutions are generated using the sampled solution in conjunction with the characteristics of the FDA: these three solutions are produced by the particle update formulae in the oral cavity, stomach, and small intestine, respectively. Since the extent of the sampling space differs from that of the actual space, it is essential to map the generated solution Food_1^t to the actual computational space once it has been sampled from the probabilistic model; we use Equation (28) to complete this process.
ub and lb are the upper and lower bounds of the actual space, respectively. The update equations for the three solutions are given by Equations (29)-(31).
Food_2^t is the particle generated using the particle update equation in the oral cavity, where Food_1^t is the particle generated by sampling from the probabilistic model, Best_p is the global optimal particle, and group(g).Best_p is the optimal particle of the gth group. Food_3^t is the particle generated using the particle update equation in the stomach, and Mean is the particle obtained by averaging Food_1^t and Food_2^t. Food_4^t is the particle generated using the particle update equation in the small intestine. The meaning of the other variables in these three equations is the same as in the FDA in Section 2. The pseudo-code of the FDA with the parallel compact strategy is shown in Algorithm 5; in each iteration, it finds the particles with the best and worst fitness values among the four particles, denoted winner and loser, uses winner and loser to update the PV, finds the global optimal solution Best_p and its fitness value Best_f, and sets iter = iter + 1 until the maximum number of iterations is reached.
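The mapping from the sampling space to the actual search space can be sketched as follows. The exact form of Equation (28) is not reproduced in the text, so the standard linear map from [−1, 1] onto [lb, ub] is assumed here:

```python
def to_actual_space(x, lb, ub):
    # Assumed form of Eq. (28): linearly map a sampled value
    # x in [-1, 1] onto the actual search interval [lb, ub].
    return lb + (x + 1.0) * (ub - lb) / 2.0
```

For the benchmark setting lb = −100 and ub = 100 used later in the paper, a sampled value of 0 maps to the center of the search range.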

Numerical Experimental Results and Analysis
This section not only compares the PCFDA with the original FDA but also compares it with the PCSCA [28]. In reference [28], the authors propose three parallel communication strategies, which are applied to solving unimodal, multimodal, and mixed-function problems. This section verifies the effectiveness of PCFDA by comparing against these algorithms.

Parameter Settings
In this section, experiments are conducted using a Lenovo computer manufactured in Shanghai, China, equipped with an Intel(R) Core(TM) i3-8100 CPU at 3.60 GHz, 24 GB of RAM, a 64-bit Windows 10 operating system, and MATLAB2018a.
This section uses the CEC2013 test set for the experiments. The test set consists of 28 test functions, including five unimodal, fifteen multimodal, and eight mixed functions. Unimodal functions have only one global optimal solution and are used to test the exploitation ability of the algorithm. Multimodal functions have multiple local optimal solutions and are mainly used to test the ability of the algorithm to escape from local optima. Mixed functions are extremely complex: they have the characteristics of both unimodal and multimodal functions and can test both the exploitation ability of the algorithm and its ability to escape from local optima, making them the functions that best reflect the ability of the algorithm to solve complex problems. Testing a metaheuristic algorithm with these three types of functions can effectively assess its performance and reliability and improve its practical application value.
To ensure the fairness of the experiments and reduce the effect of algorithmic instability, we run all algorithms ten times on the 28 test functions for 1000 iterations each. Finally, the mean and standard deviation of the runs on each function are compared. The dimension of each particle is set to 30, and the particle search range is [−100, 100]. The number of groups in the algorithm is set to 4, and the initial mean and standard deviation are set to 0 and 10, respectively. The number of particles in the FDA is set to 20. K_m takes three different values, namely the three characteristic constants of the algorithm in the oral cavity, stomach, and small intestine, which are 0.8, 0.9, and 1, respectively. The parameter settings of PCSCA follow the original paper, and its three algorithms are denoted PCSCAS1, PCSCAS2, and PCSCAS3. For the experiments in this section, we use PCFDA1, PCFDA2, and PCFDA3 to denote the FDA enhanced with Algorithms 2-4, respectively.

Comparison with the Original FDA
In this section, we compare PCFDA with the original FDA, mainly comparing the mean and standard deviation of their runs on each function as well as their time cost and memory usage to determine the performance of PCFDA. The mean and standard deviation comparison results are shown in Tables 1 and 2.
In Tables 1 and 2, the data in the last row indicate the number of functions on which each PCFDA variant is better than the FDA. On the unimodal functions f1-f5, PCFDA1 has better search ability on the first three functions and is more stable on f2 and f3. On the multimodal functions f6-f20, all the algorithms show good search ability and stability on f8 and f20. PCFDA3's search ability is poor on the multimodal functions. FDA, PCFDA2, and PCFDA3 each performed best on different multimodal functions, with comparable overall performance. On the mixed functions f21-f28, PCFDA2 has better search ability and stability on four functions, while PCFDA3 only performs better on f26. Overall, PCFDA1 and PCFDA2 are comparable to the original FDA in optimization ability but are more stable than the FDA. PCFDA3 improves performance on a few functions, but its overall performance is not as good as the FDA's. To statistically verify the effectiveness of the improved algorithm, this paper uses the Wilcoxon rank-sum test to check for significant differences between the improved algorithms and the original algorithm. The significance level alpha is set to 0.05. Table 3 displays the p-values for the comparison results, with p-values less than 0.05 highlighted in red. From the data in the table, it can be observed that the improved algorithms hold a significant advantage.
When improving algorithms with compact strategies, the main concerns are time cost and memory footprint. Table 4 shows the time cost and memory usage of each algorithm.
In Table 4, the average running time indicates the average time over 10 runs of each algorithm on the 28 functions, the memory usage indicates the memory space occupied by the particles in each algorithm, * is used as a multiplication sign, and D indicates the particle dimension. (20 + 1) * D denotes the memory occupied by the 20 particles in the FDA and one global optimal particle. In the last three columns of Table 4, (2 * 4) * D denotes the memory occupied by µ and δ in the four groups. The following two 4s represent the memory occupied by the four particles obtained from each update (one sampled particle and three generated particles) and the optimal particles of the four groups, respectively. The last 1 denotes the memory occupied by a temporary particle needed in the communication strategy. Combining these with the results in Tables 1 and 2 leads to the conclusion that the improved algorithms are better in terms of both time cost and memory space.

Comparison with PCSCA
This section compares the improved FDA with PCSCA. Both algorithms use parallel and compact strategies for improvement, so we only compare their search ability and stability here. Tables 5 and 6 show the mean and standard deviation comparison results.
The red font in Tables 5 and 6 indicates the best mean and standard deviation obtained among the algorithms on each function. As seen from the tables, on f20, all algorithms show good search ability and stability. On f8, all algorithms have the same search ability, but PCSCAS3 is more stable. On the other functions, PCFDA outperforms PCSCA in terms of search ability.
In this section, the Wilcoxon rank-sum test was also used for the significance analysis of the proposed algorithms. We conducted a significance analysis of the three algorithms proposed in this paper against the parallel compact SCA algorithms. Tables 7-9 display the comparison results, with red font indicating data with p-values greater than 0.05. From the data in the tables, it can be observed that the proposed algorithms outperform the parallel compact SCA algorithms on most functions.

Convergence Analysis
This section evaluates the performance of the algorithms by comparing the convergence curves of the PCFDA and PCSCA algorithms on the three classes of functions. Figures 4-6 show the corresponding experimental results. From the convergence curves, on the unimodal functions, the convergence speed of the algorithms does not differ much; only on f1 do PCFDA1 and PCFDA2 converge faster in the early stage. On the multimodal functions f8 and f20, although the convergence speeds of the algorithms differ considerably, they have similar optimization capabilities based on the data in Tables 1 and 5. On f6, f7, f10, and f19, the convergence speed of each algorithm is similar. Due to the instability of each algorithm's search on the other multimodal functions, the convergence speeds and accuracies differ. On the mixed functions f23, f24, f25, and f27, PCFDA2 converges faster and has the best optimization accuracy. On f22, the FDA has better convergence speed and accuracy than the other algorithms.

Application of PCFDA in Mobile Sensor Localization Problem
This section discusses the PCFDA algorithm for mobile sensor localization and compares it with the original MCL algorithm under different numbers of anchor nodes and communication radii. Locations with large errors are first obtained by the MCL localization technique, and the PCFDA algorithm is then applied for further optimization around the obtained locations to reduce the localization error. The error function is defined in Equation (32): Z represents the total number of unknown nodes, and N represents the total number of anchor nodes. (x_l, y_l) denotes the estimated location of the unknown node l, and (x_k, y_k) denotes the location of anchor node k. D_lk represents the measured distance between unknown node l and anchor node k; this section assumes that anchor node k can obtain this distance from the signal strength it receives from unknown node l. The smaller the error value, the higher the positioning accuracy.
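A sketch of the error function follows. Since Equation (32) itself is not reproduced in the text, the mean-squared form below is an assumption consistent with the variable definitions above:

```python
import math

def localization_error(estimates, anchors, D):
    # estimates: list of (x_l, y_l) estimated unknown-node positions (Z nodes)
    # anchors:   list of (x_k, y_k) known anchor positions (N nodes)
    # D[l][k]:   measured distance between unknown node l and anchor k
    Z, N = len(estimates), len(anchors)
    total = 0.0
    for l, (xl, yl) in enumerate(estimates):
        for k, (xk, yk) in enumerate(anchors):
            d = math.hypot(xl - xk, yl - yk)   # estimated-to-anchor distance
            total += (d - D[l][k]) ** 2        # mismatch with the measurement
    return total / (Z * N)
```

This is the objective that PCFDA minimizes around each MCL estimate: a perfect estimate makes every geometric distance equal its measured counterpart, driving the error to zero.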

Experimental Analysis of Different Numbers of Anchor Nodes
In this section, 300 nodes are randomly distributed within a 300 × 300 area. The number of anchor nodes is set to 10, 20, 30, 40, and 50, and the communication radius is fixed at 50. Experiments were performed with the MCL localization algorithm, the FDA, and the PCFDA. To reduce the effect of randomness, each algorithm is run 10 times and the average of the 10 runs is taken as the final result. The experimental results are shown in Table 10, where Ave and Std denote the mean and standard deviation of the run results, respectively. Table 10 shows that, for a fixed communication radius, the more anchor nodes there are, the smaller the localization error and the more accurate the localization. Compared with the MCL algorithm, the FDA improves localization accuracy considerably, but it is extremely unstable. The cAPSO [48] algorithm has localization accuracy comparable to the FDA but is more stable. Under the same experimental conditions, the PCFDA performs remarkably well: it surpasses the FDA in both localization accuracy and stability, and its localization accuracy is far better than that of the MCL algorithm.
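The experimental protocol above (random deployment in a square area, repeated independent runs, and reporting Ave and Std) can be sketched as follows. `make_deployment` and `evaluate` are hypothetical helpers introduced here for illustration, and `run_localizer` stands in for any of the compared algorithms (MCL, FDA, PCFDA); this is a sketch of the protocol, not the paper's implementation.

```python
import random
import statistics

def make_deployment(n_nodes=300, n_anchors=50, side=300.0, seed=None):
    """Randomly place n_nodes in a side x side area; the first n_anchors
    act as anchor nodes and the remainder as unknown nodes."""
    rng = random.Random(seed)
    nodes = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n_nodes)]
    return nodes[:n_anchors], nodes[n_anchors:]

def evaluate(run_localizer, n_anchors, runs=10):
    """Run a localizer on `runs` independent deployments and report the
    mean (Ave) and standard deviation (Std) of its localization error."""
    errors = []
    for r in range(runs):
        anchors, unknowns = make_deployment(n_anchors=n_anchors, seed=r)
        errors.append(run_localizer(anchors, unknowns))
    return statistics.mean(errors), statistics.pstdev(errors)
```

Sweeping `n_anchors` over 10, 20, 30, 40, and 50 with a fixed radius reproduces the structure of the comparison reported in Table 10.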

Experimental Analysis of Different Communication Radius
This section also uses 300 nodes, distributed in a 300 × 300 area. The number of anchor nodes is set to 50, and the communication radius is set to 20, 40, 60, and 80, respectively. Each algorithm is run 10 times, and the mean and standard deviation of the 10 runs are used for the experimental analysis. The experimental results are shown in Table 11, which shows that, when the number of anchor nodes is fixed, the larger the communication radius, the smaller the localization error and the more accurate the localization. The localization accuracy of the FDA is better than that of the MCL algorithm, but its stability is poor. The cAPSO algorithm is comparable to the FDA in localization accuracy but has better stability. The improvement of the PCFDA is more significant, with good results in both localization accuracy and operational stability.

Conclusions
This paper proposes three inter-group communication strategies to improve the food digestion algorithm. The three strategies use different topologies, which improve the efficiency of particle communication and speed up the algorithm's convergence. This paper also applies a compact strategy to the food digestion algorithm, reducing its running time and saving memory space. The resulting PCFDA algorithm was tested on the CEC2013 test set and achieved good results. Finally, the improved algorithm was used to solve the mobile sensor localization problem, reducing the localization error and improving the localization accuracy.
In the future, other inter-group communication strategies can be explored to further improve the FDA's search accuracy. We will also consider applying the improved algorithm to other localization problems in wireless sensor networks. Since the algorithm's design does not account for issues such as the communication obstacles that mobile sensors face in real environments, these factors can be incorporated in future research.

Figure 1. Flowchart of the MCL algorithm.


Figure 4. Convergence curves on the unimodal functions.
Algorithm 1. Compact Food Digestion Algorithm (pseudocode excerpt). Input: population size N_p, dimension D, maximum number of iterations Max_iter, lower boundary lb, upper boundary ub. Output: global optimal position Best_p, global optimal fitness value Best_f. The algorithm initializes the populations and calculates their fitness values, records the global optimum position Best_p, initializes the parameters a, b, a_1, K_m, V_max, iter, the number of groups, and the mean µ and standard deviation δ of each group, and then iterates while iter < Max_iter.

Table 1. The average of the running results of the improved FDA and the original FDA.

Table 2. The standard deviation of the running results of the improved FDA and the original FDA.

Table 4. The average running time and memory usage of each algorithm.

Table 5. The average of the running results of each algorithm.

Table 6. The standard deviation of the running results of each algorithm.

Table 7. The comparison results between PCFDA1 and three improved SCA algorithms.

Table 8. The comparison results between PCFDA2 and three improved SCA algorithms.

Table 10. Experimental results of the localization error for different numbers of anchor nodes.

Table 11. Experimental results of the localization error for different communication radii.