Remora Optimization Algorithm with Enhanced Randomness for Large-Scale Measurement Field Deployment Technology

In the large-scale measurement field, deployment planning usually uses the Monte Carlo method for simulation analysis, which has high algorithmic complexity. At the same time, traditional station planning is inefficient and unable to calculate overall accessibility due to the occlusion of tooling. To solve this problem, in this study, we first introduced a Poisson-like randomness strategy and an enhanced randomness strategy to improve the remora optimization algorithm (ROA), yielding the PROA. Its convergence speed and robustness were verified in different dimensions using the CEC benchmark functions: 67.5-74% of the results converge faster than the ROA, and 66.67-75% are more robust than the ROA. Second, a deployment model was established for the large-scale measurement field to obtain the maximum visible area of the target to be measured. Finally, the PROA was used as the optimizer to solve the optimal deployment planning, and its performance was verified by simulation analysis. In the case of six stations, the maximum visible area of the PROA reaches 83.02%, which is 18.07% higher than that of the ROA. Compared with the traditional method, this model shortens the deployment time and calculates the overall accessibility, which is of practical significance for improving assembly efficiency in large-scale measurement field environments.


Introduction
Research on the deployment planning of digital measuring instruments in large-scale measurement fields mainly falls into two categories. The first considers the influence of measurement uncertainty [1] at a specific point on the measurement, determines the admissible interval of the station, and optimizes the station according to the uncertainty. The second takes whether the measurement target is measurable as the planning condition and obtains the actual station position of the measuring instrument through accessibility judgment. Such judgments can be divided into two categories, accessibility analysis and visibility methods, which are mainly used in the fields of contact and visual measurement, respectively. Accessibility analysis [2] determines the smallest conical region spanned by a series of observable directions. It is mainly used to analyze whether a probe can reach the surface of a measured object without collision, and it is suitable for the contact measurement of small and medium parts on a three-coordinate measuring machine. The principle of the visibility graph method [3] is similar to that of accessibility analysis: the detection region derived from the geometry is the collection of position points from which an optical instrument's line of sight can reach the target point. For a planar measurement point, the visibility graph is a hemispherical collection. A single measuring instrument cannot measure all target points in a large-scale measurement field; therefore, multiple instruments need to work together to build a measurement network covering the entire assembly space. Due to the discrete nature of station locations, Monte Carlo simulation is usually used to solve for the effects of the number of stations, their spacing, and the uniformity of their distribution on the overall measurement field [4]. However, the reference points to be measured in the traditional method are fixed points, which imposes certain limitations.
Meanwhile, the traditional method of station deployment cannot calculate the overall accessibility. In this study, we transform deployment planning into an optimization problem and obtain the best deployment plan by optimizing the station coordinates with an optimization algorithm, using line-of-sight detection as the rule.
Optimization algorithms can be divided into traditional and meta-heuristic optimization algorithms based on the process of solving optimization problems. Meta-heuristic optimization algorithms include evolutionary algorithms, swarm intelligence algorithms, intelligent bionic optimization algorithms, and other intelligent optimization algorithms. In the field of evolutionary algorithms, Dong et al. [5] proposed a novel multi-objective, evolutionary-based probabilistic transformation inspired by the genetic algorithm. Wan et al. [6] introduced Gaussian chaos mapping and other evolutionary strategies to improve the black widow spider optimization algorithm. Wu et al. [7] combined the Bernstein operator with the differential evolution algorithm and proposed refracted oppositional-mutual learning. Pang et al. [8] used a differential evolution algorithm and multitask learning to predict photovoltaic power. In the field of swarm intelligence algorithms, Opoku et al. [9] combined an ant colony optimization algorithm with iterative conditional patterns for computing estimates of neural source activity. To optimize wireless sensor node deployment, Wu et al. [10] proposed a virtual-force-directed particle swarm optimization approach, where the optimization objective is to maximize network coverage. Dai et al. [11] solved the problem of gravity anomaly matching using an artificial bee colony algorithm based on a radiation transformation. Dong et al. [12] combined time-shift multi-scale weighted permutation entropy with a gray-wolf-optimized support vector machine to classify the faults of rolling bearings. In the field of intelligent bionic optimization, Zhou et al. [13] used the immune fruit fly optimization algorithm to search for the combined parameters k and α in variational mode decomposition. Lu et al. [14] optimized the extreme learning machine for better classification performance using the chaotic bat algorithm. Tong et al. [15] improved the cuckoo algorithm to support continuous, integer, and mixed hyper-parameters. Deb et al. [16] reviewed the variants and applications of flock optimization algorithms. In other areas of intelligent optimization algorithms, Kuo et al. [17] used simulated annealing to reduce the complexity of a fully connected network. Shang et al. [18] used an artificial immune algorithm to solve the multi-objective clustering problem and obtain a Pareto-optimal solution set. Liao et al. [19] used a firefly algorithm to reduce energy costs. Goh et al. [20] proposed the use of harmony search to form a hybrid HS-SVM, performing feature selection and hyperparameter tuning simultaneously, and a hybrid HS-RF to tune hyperparameters.
The remora optimization algorithm (ROA) [21] is a relatively new meta-heuristic optimization algorithm inspired by the parasitic properties of the remora. The algorithm combines the whale optimization algorithm (WOA) [22] and sailfish optimization algorithm (SFO) [23], and the population is updated by switching the two strategies. Almalawi et al. [24] focused on the design of remora optimization and a deep learning heavy metal adsorption rate prediction model for biochar. Raamesh et al. [25] proposed a combination of battle royale optimization and remora optimization to address the selection of software test cases. In this study, different improvements were used. Based on the original ROA, the Poisson-like randomness strategy and enhanced randomness strategy were added such that the population individuals have more changes. In addition, an optimization model was established for the engineering problem of deployment planning in large-scale surveying fields, and high-dimensional parameters were obtained through the improved remora algorithm (PROA) and converted into effective station parameters.
To test the proposed PROA, we used 45 CEC benchmark functions for testing on the base dimension and selected four other meta-heuristic optimization algorithms for performance comparison. Simultaneously, to test the performance of the algorithm in optimizing high-dimensional parameters, we selected 12 CEC benchmark functions with scalable dimensions for testing and comparison. Finally, the improved algorithm was tested and compared to the engineering problem of deployment planning in a large-scale measurement field, and its usability was verified.

Original ROA
The original ROA performs optimization by exploiting the parasitic behavior of the remora. Initialization is performed first: the individuals of the population start at random positions within the upper and lower boundaries. Subsequently, the fitness of each individual is calculated, and the optimal position and fitness are updated. A new position is then attempted using

R_att = R_i^t + (R_i^t − R_pre) × rand_1, (1)

where R_att is the attempted new position, R_i^t is the i-th individual in the course of the t-th iteration, R_pre is the last historical position, and rand_1 is a normally distributed random number between [0, 1]. The fitness f(R_att) of the attempted new position and the fitness f(R_i^t) of the current individual are calculated and compared. When the latter is greater than the former, the host feeds as follows:

R_i^{t+1} = R_i^t + B × (R_i^t − C × R_best), (2)
B = 2 × V × rand_2 − V, V = 2 × (1 − t/max_iter), (3)

where R_i^{t+1} is the i-th individual in the (t+1)-th iteration, R_best is the global optimal position, rand_2 is a random number between [0, 1], max_iter is the maximum number of iterations, t is the current iteration number, V is the host feeding range, and C is a fixed coefficient of 0.1. Otherwise, the host is changed, and either the WOA or the SFO strategy is used to update the location. The WOA strategy formula is

R_i^{t+1} = D × e^α × cos(2πα) + R_i^t, D = rand_3 × |R_best − R_i^t|, (4)

where rand_3 is a random number between [0, 1] and α is a random number between [−1, 1]. The formula for the SFO strategy is

R_i^{t+1} = R_best − (rand_4 × (R_best + R_m^t)/2 − R_m^t), (5)

where rand_4 is a random number between [0, 1] and R_m^t is a random individual in the population. Finally, the above steps are repeated until the maximum number of iterations is reached.
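The branching structure of one ROA iteration can be sketched in a few lines of NumPy. This is an illustrative reading of the update rules described above, not the authors' implementation; the function name `roa_step` and its signature are our own.

```python
import numpy as np

def roa_step(R, R_pre, R_best, fitness, t, max_iter, lb, ub, C=0.1):
    """One iteration of a simplified ROA update (sketch; variable names
    follow the text, equation numbers refer to the descriptions above)."""
    n, d = R.shape
    R_new = R.copy()
    for i in range(n):
        # Experienced attempt around the current and historical positions (Eq. (1))
        R_att = R[i] + (R[i] - R_pre[i]) * np.random.randn(d)
        if fitness(R[i]) > fitness(R_att):
            # Host feeding (Eqs. (2)-(3)): a shrinking move around the best position
            V = 2 * (1 - t / max_iter)
            B = 2 * V * np.random.rand(d) - V
            R_new[i] = R[i] + B * (R[i] - C * R_best)
        elif np.random.rand() < 0.5:
            # WOA strategy (Eq. (4)): spiral move toward the best position
            alpha = np.random.uniform(-1, 1, d)
            D = np.random.rand(d) * np.abs(R_best - R[i])
            R_new[i] = D * np.exp(alpha) * np.cos(2 * np.pi * alpha) + R[i]
        else:
            # SFO strategy (Eq. (5)): move relative to a random individual
            m = np.random.randint(n)
            R_new[i] = R_best - (np.random.rand() * (R_best + R[m]) / 2 - R[m])
    return np.clip(R_new, lb, ub)
```

Clipping to [lb, ub] corresponds to the boundary amendment step of the algorithm.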

Poisson-like Randomness Strategy
In the original ROA, a new position is attempted using Equation (1). However, this attempt is only related to the population individuals and their historical positions; the search space is limited, and the algorithm easily falls into a local optimum. Therefore, this study introduces a Poisson-like randomness strategy, obtained by deforming the probability density function of the Poisson distribution. The Poisson probability density function is

P(X = k) = (λ^k × e^(−λ)) / k!,

where k = 0, 1, 2, . . .. Figure 1 shows the probability density function curve for λ ∈ [1, 6]. The horizontal axis is x, and the vertical axis represents the probability density.
In this study, we set λ = 6 for two reasons:
1. The slope is gentle, and there is no sudden change in the function value.
2. The peak and its surrounding area are close to one side, which is opposite to the trend of the change in the strength of the search strategy.
The steps to obtain the two parameter curves of the Poisson-like randomness strategy are as follows:
1. Horizontally mirror the probability density function curve of λ = 6 in Figure 1 such that it conforms to the trend of the search strategy strength changes.
2. Parameter curve r_1 is obtained by stretching the x-axis according to the maximum number of iterations of the optimization algorithm.
3. Because the two parameters have opposite trends, 1 − r_1 is the parameter curve r_2.
Considering a maximum number of iterations of 500 as an example, the two parameter curves are shown in Figure 2. The entire iterative process is divided into three phases: the yellow area is Phase 1, which implies global search; the green area is Phase 2, which is close to the optimal solution; and the blue area is Phase 3, which implies local search. Finally, the two changing parameters are used to adjust the influence of the optimal position and of other individuals in the population on the attempted new position:

R_att = r_1^t × R_best + r_2^t × R_m^t, (8)

where R_att is the attempted new location, R_m^t is a random individual in the population, and r_1^t and r_2^t are the parameter values during the t-th iteration.
As shown in Figure 2, the new positions that were tried in the global search phase gradually approached the global optimal solution, and the distance was closest in Phase 2. However, the new locations that were tried during the local search phase were closer to other individuals in the population. From the perspective of the overall search process, Phase 1 enhances the spatial search ability of individual populations. In Phase 3, the individuals are all close to the global optimum, and each individual increases the diversity of local search directions by approaching other individuals.
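The construction of the two parameter curves can be sketched numerically. The sampling grid (`k_max = 20`) and the normalization of the mirrored curve to a unit peak are assumptions made so that r_1 and r_2 behave as weights; the original paper's exact scaling may differ.

```python
import math

def poisson_pdf(k, lam=6.0):
    """Poisson probability mass function evaluated at integer k."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def parameter_curves(max_iter=500, lam=6.0, k_max=20):
    """Build r1 by horizontally mirroring the lambda = 6 curve and
    stretching it over max_iter iterations; r2 = 1 - r1 (sketch of the
    construction described in the text)."""
    mirrored = [poisson_pdf(k, lam) for k in range(k_max + 1)][::-1]
    peak = max(mirrored)
    mirrored = [v / peak for v in mirrored]      # scale the peak to 1 (assumption)
    r1 = []
    for t in range(max_iter):
        pos = t * k_max / (max_iter - 1)         # stretch onto the iteration axis
        k0 = int(pos)
        hi = mirrored[min(k0 + 1, k_max)]
        r1.append(mirrored[k0] + (pos - k0) * (hi - mirrored[k0]))
    r2 = [1.0 - v for v in r1]
    return r1, r2
```

With these curves, r_1 starts near zero (weak pull toward R_best, i.e., global search), peaks late in the run (Phase 2, closest to the optimal solution), and then yields to r_2, which pulls attempts toward other individuals (Phase 3, local search), matching the three-phase behavior described for Figure 2.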

Enhanced Randomness Strategy
In the original ROA, the SFO strategy is associated with only one individual in the population, so the diversity of replacement hosts is not high. Therefore, this study uses three enhanced randomness strategies to replace the original SFO strategy:

R_i^{t+1} = R_best − (rand_6 × (R_best + R_b^t)/2 − R_c^t),
R_i^{t+1} = R_best − (rand_7 × (R_best + R_d^t)/2 − R_e^t),
R_i^{t+1} = R_best − (rand_8 × (R_best + R_f^t)/2 − R_i^t),

where R_b^t, R_c^t, R_d^t, R_e^t, and R_f^t are other random individuals in the iterative process and rand_6, rand_7, and rand_8 are random numbers between [0, 1]. Compared with the original single strategy, the enhanced randomness strategy strengthens the connection with other individuals in the population, strengthens the connection with the optimal individual, and increases the diversity of the replacement hosts.
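The essential idea, replacing the single SFO-style update with one of several randomized SFO-like variants drawn over different random individuals, can be sketched as follows. The exact pairing of the random individuals (here named `b` through `f`) with the three formulas is an assumption; the function name `enhanced_randomness_update` is our own.

```python
import numpy as np

def enhanced_randomness_update(R, R_best, i):
    """One plausible reading of the enhanced randomness strategy: the single
    SFO-style update is replaced by one of three SFO-like variants, each
    drawing on different random individuals and random numbers."""
    n = len(R)
    b, c, d, e, f = np.random.randint(n, size=5)  # five random individuals
    r6, r7, r8 = np.random.rand(3)                # rand_6 .. rand_8 in [0, 1]
    variants = [
        R_best - (r6 * (R_best + R[b]) / 2 - R[c]),
        R_best - (r7 * (R_best + R[d]) / 2 - R[e]),
        R_best - (r8 * (R_best + R[f]) / 2 - R[i]),
    ]
    # pick one variant at random, increasing the diversity of replacement hosts
    return variants[np.random.randint(3)]
```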


Steps to the PROA
In the proposed PROA, the original trial strategy was replaced with a Poisson-like randomness strategy. Simultaneously, the direction of free travel was extended using the enhanced randomness strategy. A flowchart is shown in Figure 3, and the pseudocode is presented in Algorithm 1.

Algorithm 1: Pseudocode of the PROA
1: Initialize the population and the pre-population dataset R_pre;
2: While t < max_iter do
3:   Amend agent if out of bound [lb, ub];
4:   Calculate f(R_i^t) of each agent;
5:   Update R_best and f(R_best^t);
6:   For each agent indexed by i do
7:     Use Equation (8) to make an experienced attempt R_att with the Poisson-like distribution;
8:     Calculate f(R_att) and f(R_i^t);
9:     If f(R_i^t) > f(R_att) then
10:      Perform host feeding by Equation (2);
11:    Else
12:      If random(i) = 1 then
13:        Update the position using Equation (4) (WOA strategy);
14:      Else
15:        Update the position using the enhanced randomness strategy;
16:      End if
17:    End if
18:  End for
19: End while
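The loop in Algorithm 1 can be condensed into a short, self-contained sketch. Two simplifications are ours: a linear schedule stands in for the Poisson-like parameter curves, and a single SFO-like move stands in for the three enhanced-randomness variants; the function name `proa` and its signature are assumptions.

```python
import numpy as np

def proa(fitness, dim, lb, ub, n_pop=20, max_iter=500, C=0.1):
    """Minimal sketch of the PROA loop in Algorithm 1 (simplified)."""
    R = np.random.uniform(lb, ub, (n_pop, dim))
    fit = np.array([fitness(x) for x in R])
    best = int(np.argmin(fit))
    R_best, f_best = R[best].copy(), fit[best]
    for t in range(max_iter):
        r1 = t / max_iter          # placeholder for the Poisson-like curve r_1
        r2 = 1.0 - r1              # r_2 = 1 - r_1
        for i in range(n_pop):
            m = np.random.randint(n_pop)
            # experienced attempt with the Poisson-like strategy (Eq. (8))
            R_att = r1 * R_best + r2 * R[m]
            if fit[i] > fitness(R_att):
                # host feeding (Eqs. (2)-(3))
                V = 2 * (1 - t / max_iter)
                B = 2 * V * np.random.rand(dim) - V
                R_new = R[i] + B * (R[i] - C * R_best)
            elif np.random.rand() < 0.5:
                # WOA strategy (Eq. (4))
                alpha = np.random.uniform(-1, 1, dim)
                D = np.random.rand(dim) * np.abs(R_best - R[i])
                R_new = D * np.exp(alpha) * np.cos(2 * np.pi * alpha) + R[i]
            else:
                # enhanced randomness strategy (single SFO-like variant here)
                R_new = R_best - (np.random.rand() * (R_best + R[m]) / 2 - R[m])
            R_new = np.clip(R_new, lb, ub)   # amend out-of-bound agents
            f_new = fitness(R_new)
            R[i], fit[i] = R_new, f_new
            if f_new < f_best:
                R_best, f_best = R_new.copy(), f_new
    return R_best, f_best
```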

Experimental Configuration
To examine the convergence speed and robustness of the PROA, we selected 45 benchmark functions proposed in the IEEE CEC competitions as fitness functions for testing [26] (refer to the Supplementary Materials for details). In addition, we compared the PROA with the artificial electric field algorithm (AEFA) [27], the white shark optimizer (WSO) [28], the sooty tern optimization algorithm (STOA) [29], the squirrel search algorithm (SSA) [30], and the original ROA. To ensure the comparability of the six algorithms' data at minimum computational expense, the population size was set to 20 and the maximum number of iterations was set to 500. Each algorithm was run 50 times under this configuration. In addition, we selected 12 benchmark functions with scalable dimensions for high-dimensional (D = 100/500/1000) testing, with the same configuration as the standard dimension.
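The experimental protocol (50 independent runs per algorithm, population 20, 500 iterations, seeded for reproducibility) can be expressed as a small harness. The optimizer signature assumed here is hypothetical; any of the compared algorithms would be wrapped to match it.

```python
import numpy as np

def benchmark(optimizer, fitness, dim, lb, ub, runs=50):
    """Repeat an optimizer run `runs` times and report the mean, standard
    deviation (robustness), and best result, mirroring the experimental
    configuration in the text (optimizer signature is an assumption)."""
    results = []
    for seed in range(runs):
        np.random.seed(seed)                      # reproducible runs
        _, f_best = optimizer(fitness, dim, lb, ub, n_pop=20, max_iter=500)
        results.append(f_best)
    results = np.array(results)
    return {"mean": results.mean(), "std": results.std(), "best": results.min()}
```

The mean and standard deviation returned here correspond to the per-function entries reported in Tables 2-5.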

Comparison of Standard Dimension Results
The experimental results for the standard dimensions are listed in Table 2. From the experimental results, it can be observed that the PROA achieves the best results when optimizing F16, F17, F22, F41, F43, and F44, both in terms of the average optimization result and robustness. Compared with the ROA, the average value of the PROA improved by two orders of magnitude in the experimental results of optimizing F16 and F43, and by one order of magnitude when optimizing F44. In terms of robustness, the standard deviation of the PROA was reduced to 1% of the ROA in the experimental results of optimizing F16, F17, and F44; in the experimental results of optimizing F43, it was reduced to 1/20 of the ROA. In terms of the convergence speed, the experimental results are shown in Figures 4-9. The horizontal axis represents the number of iterations, and the vertical axis represents the fitness function value. It can be observed that, compared with the ROA, the PROA reaches the optimal result two iterations in advance when optimizing F16 and four iterations ahead when optimizing F17, F22, and F41; in the optimization of F43 and F44, the iterative process was advanced by one round.


Comparison of Results under Dimension 100
The experimental results listed in Table 3 are from when the optimization parameter dimension was 100. From the experimental results, it can be observed that the PROA still performs well in high dimensions. When optimizing F16, the average result of multiple optimizations of the PROA is 0.1% of the ROA, and the standard deviation is 1% of the ROA. When optimizing F17, the average result of multiple optimizations is only 1/20 of the ROA; however, its robustness is still 100 times that of the ROA. When the PROA is optimized with F43 as the objective function, the result is 100 times better than that of the ROA, regardless of whether the mean or the standard deviation of the optimized results is considered. When optimizing F44, the result reduces to 0.01% of the ROA result. From the experimental results in Figures 10-15, it can be observed that the convergence speed of the PROA still has certain advantages when the dimension is 100. The ROA requires two additional iterations to reach the minimum when optimizing F16 and F17; when optimizing F22 and F41, five additional iterations are required; only when optimizing F43 and F44 is a single additional iteration required to achieve the same results as the PROA.


Comparison of Results under Dimension 500
The experimental results listed in Table 4 are from when the number of optimized parameters was 500. When the PROA optimizes F16, both the mean and the standard deviation of the optimized results are reduced to 0.1% of the ROA. When optimizing F17, the PROA's results were more ordinary, with the mean only reduced to 1/2. When the PROA and the ROA were optimized for F22 and F41, the same results were achieved. However, better results were obtained when F43 and F44 were optimized: when optimizing F43, the mean and standard deviation of the PROA were 4% and 3% of those of the ROA, respectively, and when optimizing F44, they reached 0.1% and 0.15%, respectively. The convergence when the dimension was 500 is shown in Figures 16-21. When the PROA optimized F16, F22, and F44, the optimal result was achieved two iterations ahead of the ROA, while for F17 and F41 it was advanced by five iterations. When the PROA optimizes F43, the advantage is not obvious, and it leads the iterative process by only one round.

Comparison of Results under Dimension 1000
The experimental results shown in Table 5 are from when the number of optimized parameters reached 1000. Compared with the ROA, the average optimization result of the PROA was reduced by three orders of magnitude, and the standard deviation was reduced by two orders of magnitude when optimizing F16. However, the PROA's performance in optimizing F17 was average: the average value only dropped by 1/2, and the standard deviation was similar to that of the ROA. The comparison results of the PROA and the ROA when optimizing F22 and F41 were the same as those in the other dimensions. The PROA obtained better results when F43 and F44 were optimized: in terms of the mean, the results were 1% and 0.1% of the ROA results, respectively, and the robustness reached 1% of the ROA. The convergence results for 1000 dimensions are shown in Figures 22-27. From the experimental results, it can be observed that, compared with the ROA, the PROA achieves the optimal result two iterations ahead of time when optimizing F16, with further advances when optimizing F17 and F41, and reaches the minimum value within only five iterations when optimizing F22. The PROA achieved average results in optimizing F43 and F44, leading by one iteration.


Results, Statistics, and Performance Analysis
According to the convergence curves (refer to the Supplementary Materials for details), we calculated the convergence rate statistics, as shown in Figure 28. Each ring represents the experimental result of one dimension; green indicates that the PROA has a better convergence curve than the ROA; light yellow indicates that the convergence speeds of the two algorithms are ambiguous; and orange indicates that the convergence speed of the PROA is worse than that of the ROA. From the statistical results, it can be observed that the PROA converges faster than the ROA on 67.5-74% of the CEC benchmark functions. In all dimensions, the rate of slower convergence was 7.5-8%. In addition, there are cases in which the convergence curves of the ROA and the PROA are entangled with each other, but the results of the high-dimensional tests are much better than those of the standard dimension. The statistical results based on the standard deviation obtained from the experiments (refer to the Supplementary Materials for details) are shown in Figure 29. The horizontal axis represents the difference between the standard deviations of the PROA and the ROA, and the vertical axis represents the proportion of the difference in the overall results. It can be observed from the figure that better results than the original ROA were obtained on approximately 75% of the CEC benchmark functions. In the test results for the standard dimension, the proportion of performance degradation was less than 9%. In addition, the part whose standard-deviation difference from the ROA was less than 10^-6 only accounted for 0-2%, whereas differences of more than 10^-6 and less than 10^-3 accounted for 8-16%. The experimental results show that the PROA can converge faster than the original ROA on most CEC benchmark functions, whether for the standard dimension or high-dimensional parameters.
This is because, in the global search stage, the PROA speeds up the search and improves the search ability through the Poisson-like randomness strategy, making it more directional than the original, ordinary random attempt. Furthermore, the subsequent enhanced randomness strategy enables individuals of the population to reach more hosts during the free-travel phase. The local search ability near the optimal solution and the overall robustness of the algorithm are thereby enhanced.

Deployment Planning Model
In large-scale measurements, the target to be measured must have many features and a wide distribution range. One station cannot measure all feature points. Therefore, multiple stations must be determined for planning and measurement. Adjacent stations require at least three public transfer points to complete the coordinate system fitting. With an increase in the number of transfer points, the fitting variance of the coordinate system decreases continuously, but when the number exceeds seven, the reduction speed of the error slows down. Therefore, in the actual measurement process, 5-7 public transfer points are selected for measurement, and the coordinate system is fitted [37]. The following principles should be followed in the deployment of the measuring instruments [38]:

1. A single station can directly measure most features and cover tooling or ground transfer points as much as possible. Simultaneously, priority should be given to selecting transfer points that are far apart and at the edge of the venue.
2. The location of the station should avoid areas with frequent changes in temperature and airflow; excessive fluctuations directly affect the measurement accuracy of the entire measurement field.
3. The accuracy of the measuring instrument is closely related to the measurement distance. Within the established range, minimizing the distance between the station and the feature to be measured can reduce the measurement error.
4. In the case of tooling occlusion, the sum of the fields of view of all the stations should be as large as possible and enclose the entire measurement space.
Based on these principles, it is necessary to first set the planning range of the station. For the target to be measured, the side of its bounding box is the limit of the planning range, and arranging the measuring equipment there risks collision with parts. Therefore, the bounding box is first enlarged by b_zoom_in, and the side is then divided into k areas, where q = ⌊k/4⌋ and p = ⌈k/4⌉. As shown in Figure 30, the translucent blue area is the bounding box of the target to be measured, and the yellow translucent area is the enlarged bounding box, which is also the definition domain of the measurement device.
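The construction of the station domain can be sketched as follows. The function name `station_domain` and the return convention (corner points of the enlarged box plus the per-side area counts q and p) are assumptions for illustration.

```python
import numpy as np

def station_domain(points, b_zoom_in, k):
    """Enlarge the axis-aligned bounding box of the measured target by
    b_zoom_in on each side and split the perimeter into k candidate areas,
    with q = floor(k/4) and p = ceil(k/4) areas per pair of sides
    (sketch of the construction described in the text)."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)   # original bounding box
    lo_e, hi_e = lo - b_zoom_in, hi + b_zoom_in  # enlarged bounding box
    q = k // 4            # floor(k / 4)
    p = -(-k // 4)        # ceil(k / 4)
    return lo_e, hi_e, q, p
```

The enlarged box [lo_e, hi_e] is the definition domain within which the optimizer places the station coordinates.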

Based on these principles, it is necessary to first set the planning range of the stations. For the target to be measured, the side of its bounding box is the limiting planning range, and arranging the measuring equipment there risks collisions with parts. Therefore, the bounding box is first enlarged by a scale factor b_zoom_in, and each side is then divided into k areas, where q = ⌊k/4⌋ and p = ⌈k/4⌉. As shown in Figure 30, the translucent blue area is the bounding box of the target to be measured, and the translucent yellow area is the enlarged bounding box, which is also the definition domain of the measuring devices. Next, we converted the deployment principles into a mathematical model. Station deployment has the following constraints:

1. Between two adjacent stations, the number of observable public transfer points on the target to be tested cannot be less than c₁; that is, the constraint C₁(x) ≥ c₁.

2. The number of public transfer points on the tooling that can be observed between two adjacent stations cannot be less than c₂; that is, the constraint C₂(x) ≥ c₂.

3. The number of public transfer points on the ground that can be observed between two adjacent stations cannot be less than c₃; that is, the constraint C₃(x) ≥ c₃.

4. The number of reference points that can be observed from all stations should account for more than c₄ of the total number of key points; that is, the constraint C₄(x) ≥ c₄.
Here, c₁, c₂, and c₃ are integers, and c₄ is a fraction in the interval [0, 1]; C₁(x), C₂(x), C₃(x), and C₄(x) are the corresponding constraint functions. All of the constraints filter the visible part of the object under test using a hidden point removal operator [39]. Each constraint takes the form C(x) ≥ c, where c is the constraint threshold and C(x) is the constraint function. Finally, the objective function of the deployment model is set. The ultimate goal of this placement model is to minimize the area rendered invisible by tooling occlusion. Because the model has multiple constraints, we introduced a large penalty factor σ following the exterior penalty function approach. In the objective function, V_j is the visible point cloud seen by the j-th station, V_total is the overall point cloud of the object to be tested, V_ratio is the ratio of the visible area to the total area, and C_i is the i-th station constraint.
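Because the extracted text drops the paper's formulas, the enlarged-box station domain and the exterior-penalty objective described above can be sketched as follows. This is a minimal illustration under assumed representations (boolean per-station visibility masks, toy constraint values), not the paper's actual implementation:

```python
import numpy as np

def enlarge_bbox(bbox_min, bbox_max, b_zoom_in):
    """Scale the target's axis-aligned bounding box about its center by
    b_zoom_in; the enlarged box is the station definition domain."""
    bbox_min, bbox_max = np.asarray(bbox_min, float), np.asarray(bbox_max, float)
    center = (bbox_min + bbox_max) / 2.0
    half = (bbox_max - bbox_min) / 2.0 * b_zoom_in
    return center - half, center + half

def penalized_objective(visible_masks, C, c, sigma=1e6):
    """Visible-area ratio V_ratio = |union_j V_j| / |V_total| minus a large
    exterior penalty sigma for every violated constraint C_i(x) >= c_i."""
    union = np.any(np.asarray(visible_masks, bool), axis=0)  # union of the V_j
    V_ratio = union.mean()                                   # |union| / |V_total|
    violation = sum(max(0.0, ci - Ci) for Ci, ci in zip(C, c))
    return V_ratio - sigma * violation

# Toy data: 3 stations, each seeing a random half of 1000 target points.
rng = np.random.default_rng(0)
visible_masks = rng.random((3, 1000)) < 0.5
c = [2, 1, 2, 0.75]      # thresholds c_1..c_4 (illustrative)
C_ok = [7, 2, 3, 0.8]    # feasible constraint values C_i(x)
C_bad = [1, 2, 3, 0.8]   # violates C_1(x) >= c_1

# Side subdivision of the enlarged box, as in the text:
k = 10
q, p = k // 4, -(-k // 4)   # q = floor(k/4), p = ceil(k/4)

print(penalized_objective(visible_masks, C_ok, c) >
      penalized_objective(visible_masks, C_bad, c))  # expected: True
```

A feasible station layout scores its visible-area ratio, while any constraint violation is pushed far below zero by σ, so the optimizer is steered back into the feasible region.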

Table 6. Cont.

6. Number of public transfer points on the target to be tested visible between two adjacent stations, c₁: 2
7. Number of public transfer points on the tooling visible between two adjacent stations, c₂: 1
8. Number of public transfer points on the ground visible between two adjacent stations, c₃: 2
9. Proportion of key points visible from all stations, c₄: 0.75
10. Penalty factor, σ: 10^6

Simulation Results and 3D Visualization
To verify the feasibility of the station deployment model and the stability of the accessibility calculation, we ran the ROA and the PROA as optimizers 30 times each under the configuration in Table 6. In the simulation experiment, the target point cloud to be measured, the tooling point cloud, and the ground point cloud contained 51,433, 30,100, and 6847 points, respectively. Figure 31 shows the averaged convergence history of the maximum visible area of the deployment plan, where the horizontal axis is the number of iterations and the vertical axis is the ratio of the visible area. The statistical results of all of the experiments are shown in Table 7. From the results in Figure 31, it can be observed that the convergence of the ROA slows from the 10th iteration onward. At the 100th iteration, the maximum visible area obtained by the ROA was 64.95%, whereas the PROA had already reached 81.7% after rapid convergence. Over the subsequent 100–500 iteration interval, the ROA curve flattened, while the PROA increased to 83.02%, a further gain of 1.32%. By the final iteration, the PROA outperformed the ROA by 18.07%.
Moreover, as shown by the statistical results in Table 7, the maximum visible area obtained by the PROA is better than that of the ROA in terms of the maximum, minimum, and mean values. In terms of robustness, the stability of the PROA improved by one-third. The simulation experiments thus verify that the PROA is superior to the ROA in both convergence speed and robustness.
Finally, we used PyVista to render the stations' historical and optimal positions in 3D, as shown in Figure 32. The light blue grid is the ground, the translucent brown region is the station definition domain, dark blue is the tooling, the red spheres are the key points, green marks the visible area, orange marks the invisible area, the black dots are the positions explored by the population during optimization, and the red point marked with a red box is the optimal deployment position after the final iteration.

Conclusions
In this paper, we propose the PROA. The algorithm introduces a Poisson-like randomness strategy to enhance the global search ability of the population and an enhanced randomness strategy to improve its local search ability and the robustness of the algorithm. The ROA and the PROA were tested in different dimensions (D = standard/100/500/1000) on the CEC benchmark functions. The PROA produced better convergence curves than the ROA on 67.5–74% of the functions and better robustness results on 66.67–75%. This study also establishes a deployment optimization model for the large-scale measurement field layout planning problem. The PROA was applied to the deployment planning model, and the performance of the PROA and the feasibility of the model were verified through simulation experiments. Compared with the ROA, performance improved by 18.07%, and the maximum visible area reached by the PROA was 83.02%. Compared with traditional station planning methods, the model improves computational efficiency and enables calculation of the overall accessibility. In future work, we will deepen the deployment optimization model with respect to cooperative target point measurement accuracy and station transfer accuracy and explore more complex station configuration modes to solve the station deployment optimization problem [40].

Institutional Review Board Statement: Not applicable.

Data Availability Statement:
Owing to the size of the dataset, only the complete code has been uploaded to GitHub at https://github.com/YDM-Cloud/PROA. The dataset used in this study is available from the corresponding author on request.

Conflicts of Interest:
The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this study.
