Article

Remora Optimization Algorithm with Enhanced Randomness for Large-Scale Measurement Field Deployment Technology

Dongming Yan, Yue Liu, Lijuan Li, Xuezhu Lin and Lili Guo

1 School of Optoelectronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
2 Zhongshan Institute, Changchun University of Science and Technology, Zhongshan 528400, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(3), 450; https://doi.org/10.3390/e25030450
Submission received: 1 December 2022 / Revised: 5 January 2023 / Accepted: 11 January 2023 / Published: 4 March 2023

Abstract

In the large-scale measurement field, deployment planning usually uses the Monte Carlo method for simulation analysis, which has high algorithmic complexity. At the same time, traditional station planning is inefficient and unable to calculate the overall accessibility when the tooling causes occlusion. To solve this problem, in this study, we first introduced a Poisson-like randomness strategy and an enhanced randomness strategy to improve the remora optimization algorithm (ROA), yielding the PROA. Its convergence speed and robustness were verified in different dimensions using the CEC benchmark functions: the convergence speed of 67.5–74% of the results is better than that of the ROA, and the robustness of 66.67–75% of the results is better than that of the ROA. Second, a deployment model was established for the large-scale measurement field to obtain the maximum visible area of the target to be measured. Finally, the PROA was used as the optimizer to solve the optimal deployment planning problem, and its performance was verified by simulation analysis. In the case of six stations, the maximum visible area obtained by the PROA reaches 83.02%, which is 18.07% higher than that of the ROA. Compared with the traditional method, this model shortens the deployment time and calculates the overall accessibility, which is of practical significance for improving assembly efficiency in large-scale measurement field environments.

1. Introduction

Research on the deployment planning of digital measuring instruments in large-scale measurement fields mainly falls into two categories. One considers the influence of measurement uncertainty [1] at specific points to determine the interval between stations and optimizes the stations according to that uncertainty. The other takes whether the measurement target is measurable as the planning condition and obtains the actual station positions of the measuring instruments through accessibility judgment. Accessibility judgment can, in turn, be divided into two categories, accessibility analysis and visibility graph methods, which are mainly used in contact and visual measurement, respectively. Accessibility analysis [2] determines the smallest conical region containing a series of observable directions. It is mainly used to analyze whether a probe can reach the surface of a measured object without collision and is suitable for the contact measurement of small and medium parts on a coordinate measuring machine. The principle of the visibility graph method [3] is similar to that of accessibility analysis: the detection map derived from the geometry is the set of positions from which an optical instrument can receive light from the target point. For a planar measurement point, the visibility graph is a hemisphere. A single measuring instrument cannot measure all target points in a large-scale measurement field, so multiple instruments are needed to work together to build a measurement network covering the entire assembly space. Because of the discrete nature of station locations, Monte Carlo simulation is usually used to analyze the effects of the number of stations, the distance between them, and the uniformity of their distribution on the overall measurement field [4]. However, the reference points to be measured in the traditional method are fixed points, which is a limitation, and the traditional station deployment method cannot calculate the overall accessibility. In this study, we transform deployment planning into an optimization problem and obtain the best deployment plan by optimizing the station coordinates with an optimization algorithm, using light detection as the rule.
Optimization algorithms can be divided into traditional and metaheuristic optimization algorithms according to how they solve optimization problems. Metaheuristic optimization algorithms include evolutionary algorithms, swarm intelligence algorithms, intelligent bionic optimization algorithms, and other intelligent optimization algorithms. In the field of evolutionary algorithms, Dong et al. [5] proposed a novel multi-objective, evolutionary-based probabilistic transformation inspired by a genetic algorithm. Wan et al. [6] introduced Gaussian chaos mapping and other evolutionary strategies to improve the black widow spider optimization algorithm. Wu et al. [7] combined the Bernstein operator and the differential evolution algorithm and proposed refracted oppositional-mutual learning. Pang et al. [8] used a differential evolution algorithm and multitask learning to predict photovoltaic power. In the field of swarm intelligence algorithms, Opoku et al. [9] combined an ant colony optimization algorithm with iterative conditional patterns for computing estimates of neural source activity. To optimize wireless sensor node deployment, Wu et al. [10] proposed a virtual force-directed particle swarm optimization approach whose optimization objective is to maximize network coverage. Dai et al. [11] solved the gravity anomaly matching problem using an artificial bee colony algorithm based on an affine transformation. Dong et al. [12] combined time-shift multi-scale weighted permutation entropy with a gray-wolf-optimized support vector machine to classify the faults of rolling bearings. In the field of intelligent bionic optimization, Zhou et al. [13] used the immune fruit fly optimization algorithm to search the combined parameters k and α in variational mode decomposition. Lu et al. [14] optimized the extreme learning machine for better classification performance using the chaotic bat algorithm. Tong et al. [15] improved the cuckoo search algorithm to support continuous, integer, and mixed hyper-parameters. Deb et al. [16] reviewed the variants and applications of the chicken swarm optimization algorithm. In other areas of intelligent optimization algorithms, Kuo et al. [17] used simulated annealing to reduce the complexity of a fully connected network. Shang et al. [18] used an artificial immune algorithm to solve the multi-objective clustering problem and obtain a Pareto optimal solution set. Liao et al. [19] used a firefly algorithm to reduce energy costs. Goh et al. [20] proposed using harmony search to form a hybrid HS-SVM that performs feature selection and hyperparameter tuning simultaneously and a hybrid HS-RF that tunes hyperparameters.
The remora optimization algorithm (ROA) [21] is a relatively new metaheuristic optimization algorithm inspired by the parasitic behavior of the remora. The algorithm combines the whale optimization algorithm (WOA) [22] and the sailfish optimization algorithm (SFO) [23], and the population is updated by switching between the two strategies. Almalawi et al. [24] designed a remora optimization and deep learning model for predicting the heavy metal adsorption rate of biochar. Raamesh et al. [25] proposed a combination of battle royale optimization and remora optimization to address the selection of software test cases. In this study, a different set of improvements is used: based on the original ROA, a Poisson-like randomness strategy and an enhanced randomness strategy are added so that the population individuals exhibit greater variation. In addition, an optimization model was established for the engineering problem of deployment planning in large-scale measurement fields, and high-dimensional parameters were obtained through the improved remora optimization algorithm (PROA) and converted into effective station parameters.
To test the proposed PROA, we used 45 CEC benchmark functions for testing in the standard dimension and selected four other metaheuristic optimization algorithms for performance comparison. Simultaneously, to test the performance of the algorithm in optimizing high-dimensional parameters, we selected 12 CEC benchmark functions with scalable dimensions for testing and comparison. Finally, the improved algorithm was tested and compared on the engineering problem of deployment planning in a large-scale measurement field, and its usability was verified.

2. Original ROA

The original ROA performs optimization by exploiting the parasitic behavior of the remora. Initialization is performed first, and the individuals of the population randomly take their initial positions within the upper and lower boundaries. Subsequently, the fitness of each individual is calculated, and the optimal position and fitness are updated. A new position is then attempted using the following formula:
R_att = R_i^t + (R_i^t − R_pre) × rand_1,   (1)
where R_att is the attempted new position, R_i^t is the position of the i-th individual at the t-th iteration, R_pre is the last historical position of that individual, and rand_1 is a random number between [0, 1]. The fitness f(R_att) of the attempted new position and the fitness f(R_i^t) of the current individual are calculated and compared. When the latter is greater than the former, host feeding is performed as follows:
R_i^{t+1} = R_i^t + (2V × rand_2 − V) × (R_i^t − C × R_best)   (2)
V = 2 × (1 − t/max_iter),   (3)
where R_i^{t+1} is the updated position of the i-th individual, R_best is the global optimal position, rand_2 is a random number between [0, 1], max_iter is the maximum number of iterations, t is the current iteration number, V is the host feeding range, and C is a fixed coefficient of 0.1. Otherwise, the host is changed, and the WOA or SFO strategy is used to update the position. The WOA strategy formula is as follows:
R_i^{t+1} = |R_best − R_i^t| × e^α × cos(2πα) + R_i^t   (4)
α = rand_3 × (−(1 + t/max_iter) − 1) + 1,   (5)
where rand_3 is a random number between [0, 1] and α is a random number between [−1, 1]. The formula for the SFO strategy is as follows:
R_i^{t+1} = R_best − (rand_4 × (R_best + R_m^t)/2 − R_m^t),   (6)
where rand_4 is a random number between [0, 1] and R_m^t is a random individual in the population. Finally, the above steps are repeated until the maximum number of iterations is reached.
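To make the update rules above concrete, the following Python sketch reimplements one ROA iteration for a population stored as a NumPy array. It is an illustrative, unofficial reconstruction of Equations (1)–(6); the function name, the 50/50 choice between the WOA and SFO strategies, and the minimization convention are our assumptions rather than details taken from the reference implementation.

```python
import numpy as np

def roa_iteration(R, R_pre, R_best, f, t, max_iter, C=0.1):
    """One illustrative ROA iteration (Eqs. (1)-(6)), assuming minimization.

    R      : (n, d) current population positions
    R_pre  : (n, d) previous positions of each individual
    R_best : (d,)   global best position found so far
    f      : fitness function mapping a (d,) vector to a scalar
    """
    n, _ = R.shape
    V = 2.0 * (1.0 - t / max_iter)                    # Eq. (3): host feeding range
    R_new = R.copy()
    for i in range(n):
        # Eq. (1): attempted position built from the individual and its history
        R_att = R[i] + (R[i] - R_pre[i]) * np.random.rand()
        if f(R[i]) > f(R_att):
            # Eq. (2): host feeding, a small step around the current host
            B = 2.0 * V * np.random.rand() - V
            R_new[i] = R[i] + B * (R[i] - C * R_best)
        elif np.random.rand() < 0.5:
            # Eqs. (4)-(5): WOA-style spiral move around the best position
            a = np.random.rand() * (-(1.0 + t / max_iter) - 1.0) + 1.0
            R_new[i] = np.abs(R_best - R[i]) * np.exp(a) * np.cos(2 * np.pi * a) + R[i]
        else:
            # Eq. (6): SFO-style move relative to a random individual R_m
            R_m = R[np.random.randint(n)]
            R_new[i] = R_best - (np.random.rand() * (R_best + R_m) / 2.0 - R_m)
    return R_new
```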

3. Proposed PROA

3.1. Poisson-like Randomness Strategy

In the original ROA, a new position was attempted using Equation (1). However, this attempt is only related to the population individuals and their historical positions; the search space is limited, and it easily falls into a local optimum. Therefore, this study introduces a Poisson-like randomness strategy that is obtained by deforming the probability density function of the Poisson distribution. The Poisson probability density function formula is as follows:
P(X = k) = (λ^k / k!) × e^{−λ},   (7)
where k = 0, 1, 2, …. Figure 1 shows the probability density function curves for λ ∈ [1, 6]. The horizontal axis is x, and the vertical axis represents the probability density.
In this study, we set λ = 6 for two reasons:
  • The slope is gentle, and there is no sudden change in the function value.
  • The peak and surrounding area are close to one side, which is the opposite of the trend of the change in the strength of the search strategy.
The steps to obtain the two parametric curves of a Poisson-like randomness strategy are as follows:
  • Horizontally mirror the probability density function curve of λ = 6 in Figure 1 such that it conforms to the trend of the search strategy strength changes.
  • Parameter curve r 1 is obtained by stretching the x-axis according to the maximum number of iterations of the optimization algorithm.
  • Because the two parameters have opposite trends, the parameter curve r_2 is taken as 1 − r_1.
Taking a maximum of 500 iterations as an example, the two parameter curves are shown in Figure 2. The entire iterative process is divided into three phases: the yellow area is phase 1, which implies global search; the green area is phase 2, which is close to the optimal solution; and the blue area is phase 3, which implies local search.
Finally, the two changing parameters are used to weight the positions of other individuals in the population and the optimal position when generating the attempted new position. The formula is as follows:
R_att = R_r^t + r_2^t × (R_r^t − R_i^t) + r_1^t × (R_best − R_r^t),   (8)
where R_att is the attempted new position, R_r^t is another randomly selected individual of the population, and r_1^t and r_2^t are the parameter values at the t-th iteration.
As shown in Figure 2, the new positions that were tried in the global search phase gradually approached the global optimal solution, and the distance was closest in Phase 2. However, the new locations that were tried during the local search phase were closer to other individuals in the population. From the perspective of the overall search process, Phase 1 enhances the spatial search ability of individual populations. In Phase 3, the individuals are all close to the global optimum, and each individual increases the diversity of local search directions by approaching other individuals.
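To illustrate how the two parameter curves might be generated in practice, the Python sketch below follows the three steps listed above: it mirrors the λ = 6 Poisson curve, stretches it over max_iter iterations to obtain r_1, takes r_2 = 1 − r_1, and uses both in the attempt of Equation (8). The sampling range of k, the normalization to [0, 1], and the random choice of the individual R_r are our assumptions.

```python
import numpy as np
from math import factorial

def poisson_like_curves(max_iter, lam=6, k_max=20):
    """Build the r1/r2 parameter curves from a mirrored Poisson curve (Eq. (7))."""
    k = np.arange(k_max + 1)
    pmf = np.array([lam ** int(ki) * np.exp(-lam) / factorial(int(ki)) for ki in k])
    mirrored = pmf[::-1]                             # step 1: horizontal mirror of the lambda = 6 curve
    x_iter = np.linspace(0, k_max, max_iter)         # step 2: stretch over the iteration axis
    r1 = np.interp(x_iter, k, mirrored)
    r1 = (r1 - r1.min()) / (r1.max() - r1.min())     # assumed normalization to [0, 1]
    r2 = 1.0 - r1                                    # step 3: the two parameters have opposite trends
    return r1, r2

def poisson_like_attempt(R, R_best, i, r1_t, r2_t, rng=np.random.default_rng()):
    """Eq. (8): attempt a new position guided by another individual and the best one."""
    R_r = R[rng.integers(R.shape[0])]                # another randomly chosen individual
    return R_r + r2_t * (R_r - R[i]) + r1_t * (R_best - R_r)
```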

3.2. Enhanced Randomness Strategy

In the original ROA, the SFO strategy is associated with only one random individual in the population, so the diversity of replacement hosts is not high. Therefore, this study uses three enhanced randomness strategies to replace the original SFO strategy. The formulas are as follows:
R_i^{t+1} = R_i^t + rand_5 × (R_i^t − (R_k^t + R_h^t)/2)   (9)
R_i^{t+1} = R_best + R_d^t + rand_6 × (R_e^t − R_f^t)   (10)
R_i^{t+1} = rand_7 × R_i^t + rand_8 × (R_best − R_i^t),   (11)
where R_k^t, R_h^t, R_d^t, R_e^t, and R_f^t are other random individuals in the population at the t-th iteration, and rand_5, rand_6, rand_7, and rand_8 are random numbers between [0, 1].
Compared with the original single strategy, the enhanced randomness strategy strengthens the connection with other individuals in the population, strengthens the connection with the optimal individual, and increases the diversity of the replacement hosts.
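The following minimal sketch implements the three replacement updates of Equations (9)–(11). How the PROA selects among the three formulas for a given individual is not spelled out above, so the uniform random choice and the helper name below are assumptions.

```python
import numpy as np

def enhanced_randomness_update(R, R_best, i, rng=np.random.default_rng()):
    """Replace the single SFO move with one of three randomized updates (Eqs. (9)-(11))."""
    n = R.shape[0]
    k, h, d, e, g = rng.integers(n, size=5)    # indices of other random individuals
    choice = rng.integers(3)                   # assumed: pick one of the three strategies uniformly
    if choice == 0:
        # Eq. (9): step away from the midpoint of two random individuals
        return R[i] + rng.random() * (R[i] - (R[k] + R[h]) / 2.0)
    if choice == 1:
        # Eq. (10): combine the best position, a random individual, and a differential term
        return R_best + R[d] + rng.random() * (R[e] - R[g])
    # Eq. (11): shrink the current position toward the best one
    return rng.random() * R[i] + rng.random() * (R_best - R[i])
```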

3.3. Steps to the PROA

In the proposed PROA, the original attempt strategy is replaced with the Poisson-like randomness strategy. Simultaneously, the direction of free travel is extended using the enhanced randomness strategy. A flowchart is shown in Figure 3, and the pseudocode is presented in Algorithm 1.
Algorithm 1: Pseudocode for the PROA.
Input: population positions R_i (i = 1, 2, …, n), the maximum number of iterations max_iter, fitness function f, and bounds [lb, ub].
Output: best position, best fitness, and fitness history.
  1: Initialize the pre-population dataset R_pre;
  2: While t < max_iter do
  3:  Amend any agent that is out of the bounds [lb, ub];
  4:  Calculate f(R_i^t) of each agent;
  5:  Update R_best and f(R_best^t);
  6:  For each agent indexed by i do
  7:   Use Equation (8) to attempt a new position R_att with the Poisson-like randomness strategy;
  8:   Calculate f(R_att) and f(R_i^t);
  9:   If f(R_i^t) > f(R_att) then
 10:    Perform host feeding by Equation (2);
 11:   Else
 12:    If random(i) = 1 then
 13:     Use Equations (4) and (5) to update the position with the WOA strategy;
 14:    Else if random(i) ∈ {2, 3, 4} then
 15:     Use Equations (9)–(11) to update the position with the enhanced randomness SFO strategy;
 16:    End if
 17:   End if
 18:   Add the current position to R_pre;
 19:  End for
 20:  t = t + 1;
 21: End while
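Putting the pieces together, a compact Python sketch of the main loop in Algorithm 1 could look as follows. It reuses the helper functions sketched in Sections 3.1 and 3.2, assumes minimization, and simplifies details that the pseudocode leaves open (boundary handling via clipping and a host-switching variable random(i) drawn uniformly from {1, 2, 3, 4}).

```python
import numpy as np

def proa(f, lb, ub, dim, n=20, max_iter=500, rng=np.random.default_rng()):
    """Illustrative PROA driver following Algorithm 1 (minimization)."""
    R = rng.uniform(lb, ub, size=(n, dim))          # random initial population
    R_pre = R.copy()                                # pre-population dataset
    r1, r2 = poisson_like_curves(max_iter)          # parameter curves from Section 3.1
    best = min(range(n), key=lambda j: f(R[j]))
    R_best, f_best, history = R[best].copy(), f(R[best]), []
    for t in range(max_iter):
        R = np.clip(R, lb, ub)                      # amend agents that leave [lb, ub]
        V = 2.0 * (1.0 - t / max_iter)              # Eq. (3)
        for i in range(n):
            fit = f(R[i])
            if fit < f_best:                        # update the global best
                R_best, f_best = R[i].copy(), fit
            R_att = poisson_like_attempt(R, R_best, i, r1[t], r2[t], rng)   # Eq. (8)
            if fit > f(R_att):                      # host feeding, Eq. (2)
                B = 2.0 * V * rng.random() - V
                new = R[i] + B * (R[i] - 0.1 * R_best)
            elif rng.integers(1, 5) == 1:           # random(i) = 1: WOA strategy, Eqs. (4)-(5)
                a = rng.random() * (-(1.0 + t / max_iter) - 1.0) + 1.0
                new = np.abs(R_best - R[i]) * np.exp(a) * np.cos(2 * np.pi * a) + R[i]
            else:                                   # random(i) in {2, 3, 4}: Eqs. (9)-(11)
                new = enhanced_randomness_update(R, R_best, i, rng)
            R_pre[i] = R[i]
            R[i] = new
        history.append(f_best)
    return R_best, f_best, history
```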

4. Performance Comparison under the CEC Benchmark Function

4.1. Experimental Configuration

To examine the convergence speed and robustness of the PROA, we selected 45 benchmark functions proposed in the IEEE CEC competitions as fitness functions for testing [26] (refer to the Supplementary Materials for details). In addition, we compared the PROA with the artificial electric field algorithm (AEFA) [27], white shark optimizer (WSO) [28], sooty tern optimization algorithm (STOA) [29], squirrel search algorithm (SSA) [30], and the original ROA. To ensure a consistent comparison across the six algorithms at minimal computational cost, the population size was set to 20 and the maximum number of iterations to 500. Each algorithm was run 50 times under this configuration. In addition, we selected 12 benchmark functions with scalable dimensions for high-dimensional (D = 100/500/1000) testing with the same configuration as the standard dimension.
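For reference, a comparison protocol of this kind can be scripted along the following lines. The sketch below only illustrates the stated configuration (population size 20, 500 iterations, 50 runs per algorithm): the benchmark function shown, the helper names, and the reported statistics are placeholders rather than the authors' actual test harness.

```python
import numpy as np

def rastrigin(x):
    """F22 (Rastrigin), one of the scalable CEC benchmark functions used in Section 4.2."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def benchmark(optimizer, f, dim, lb, ub, runs=50, n=20, max_iter=500, seed=0):
    """Run an optimizer several times and report the mean and std of the best fitness."""
    rng = np.random.default_rng(seed)
    best_values = [optimizer(f, lb, ub, dim, n=n, max_iter=max_iter, rng=rng)[1]
                   for _ in range(runs)]
    return np.mean(best_values), np.std(best_values)

# Example: PROA on Rastrigin in the standard dimension (D = 30)
# mean, std = benchmark(proa, rastrigin, dim=30, lb=-5.12, ub=5.12)
```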

4.2. Comparison of Experimental Results

Because of the large number of CEC benchmark functions compared, we present the full experimental results in the Supplementary Materials. In Sections 4.2.1, 4.2.2, 4.2.3, and 4.2.4, we present only the optimization results for six of the CEC benchmark functions. Information on Rosenbrock (F16) [31], Dixon–Price (F17) [32], Rastrigin (F22) [33], Griewank (F41) [34], Penalized (F43) [35], and Penalized2 (F44) [36] is shown in Table 1.

4.2.1. Comparison of Standard Dimension Results

The experimental results for the standard dimension are listed in Table 2. From these results, it can be observed that the PROA achieves the best results when optimizing F16, F17, F22, F41, F43, and F44, in terms of both the average optimization result and robustness. Compared with the ROA, the mean result of the PROA improved by roughly two orders of magnitude when optimizing F16 and F44 and by one order of magnitude when optimizing F43. In terms of robustness, the standard deviation of the PROA was reduced to about 1% of that of the ROA when optimizing F16, F17, and F44, and to about 1/20 of that of the ROA when optimizing F43.
In terms of convergence speed, the experimental results are shown in Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9. The horizontal axis represents the number of iterations, and the vertical axis represents the fitness function value. It can be observed that, compared with the ROA, the PROA reaches the optimal result two rounds ahead when optimizing F16 and four rounds ahead when optimizing F17, F22, and F41. When optimizing F43 and F44, the iterative process is also advanced by one round.

4.2.2. Comparison of Results under Dimension 100

The experimental results listed in Table 3 were obtained when the optimization parameter dimension was 100. It can be observed that the PROA still performs well in high dimensions. When optimizing F16, the average result of the PROA over multiple runs is 0.1% of that of the ROA, and the standard deviation is 1% of that of the ROA. When optimizing F17, the average result is only 1/20 of that of the ROA, but its robustness is still 100 times better than that of the ROA. When F43 is the objective function, both the mean and the standard deviation of the PROA results are roughly 100 times better than those of the ROA. When optimizing F44, the result is reduced to 0.01% of the ROA result.
From the experimental results in Figure 10, Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15, it can be observed that the convergence speed of the PROA retains a clear advantage when the dimension is 100. The ROA requires two additional iterations to reach the minimum when optimizing F16 and F17 and five additional iterations when optimizing F22 and F41. Only when optimizing F43 and F44 does the ROA need just one additional iteration to achieve the same results as the PROA.

4.2.3. Comparison of Results under Dimension 500

The experimental results listed in Table 4 were obtained when the number of optimized parameters was 500. When the PROA optimizes F16, both the mean and standard deviation of its results are reduced to 0.1% of those of the ROA. When optimizing F17, the PROA's advantage is more modest, and the results are only reduced by about one half. When optimizing F22 and F41, the PROA and the ROA achieve the same results. However, better results are obtained when F43 and F44 are optimized: when optimizing F43, the mean and standard deviation of the PROA are 4% and 3% of those of the ROA, respectively, and when optimizing F44, they reach 0.1% and 0.15% of those of the ROA, respectively.
The convergence curves for dimension 500 are shown in Figure 16, Figure 17, Figure 18, Figure 19, Figure 20 and Figure 21. When optimizing F16, F22, and F44, the PROA reached the optimal result two rounds ahead of the ROA; when optimizing F17 and F41, it was five rounds ahead. When optimizing F43, the advantage is not obvious, and the PROA leads the iterative process by only one round.

4.2.4. Comparison of Results under Dimension 1000

The experimental results shown in Table 5 were obtained when the number of optimized parameters reached 1000. Compared to the ROA, the average optimization result of the PROA was reduced by three orders of magnitude and the standard deviation by two orders of magnitude when optimizing F16. The PROA's performance when optimizing F17 was average: the mean dropped by only about one half, and the standard deviation was similar to that of the ROA. The comparison between the PROA and the ROA when optimizing F22 and F41 was the same as in the other dimensions. The PROA obtained better results when optimizing F43 and F44: in terms of the mean, its results were 1% and 0.1% of the ROA results, respectively, and its standard deviations reached about 1% of those of the ROA.
The convergence results for 1000 dimensions are shown in Figure 22, Figure 23, Figure 24, Figure 25, Figure 26 and Figure 27. Compared with the ROA, the PROA obtained the optimal result two rounds ahead of time when optimizing F16 and three rounds ahead when optimizing F17 and F41. When optimizing F22, the PROA reached its minimum value within only five iterations, roughly five times faster than the ROA. The PROA achieved average results when optimizing F43 and F44, leading by only one round.

4.3. Results, Statistics, and Performance Analysis

According to the convergence curves (refer to the Supplementary Materials for details), we calculated the convergence rate statistics shown in Figure 28. Each ring represents the experimental results for one dimension; green indicates that the PROA has a better convergence curve than the ROA, light yellow indicates that the convergence speeds of the two algorithms are indistinguishable, and orange indicates that the convergence speed of the PROA is worse than that of the ROA. From these statistics, it can be observed that the PROA converges faster than the ROA on 67.5–74% of the CEC benchmark functions. Across all dimensions, the proportion of slower convergence was 7.5–8%. In addition, there are cases in which the convergence curves of the ROA and the PROA are entangled with each other, but the high-dimensional results are much better than those of the standard dimension.
The statistical results based on the standard deviations obtained from the experiments (refer to the Supplementary Materials for details) are shown in Figure 29. The horizontal axis represents the difference between the standard deviations of the PROA and the ROA, and the vertical axis represents the proportion of that difference in the overall results. It can be observed from the figure that better results than the original ROA were obtained on approximately 75% of the CEC benchmark functions. In the test results for the standard dimension, the proportion of performance degradation was less than 9%. In addition, cases in which the standard deviation differed from that of the ROA by less than 10^−6 accounted for only 0–2%, whereas differences greater than 10^−6 but less than 10^−3 accounted for 8–16%.
The experimental results show that the PROA converges faster than the original ROA on most CEC benchmark functions, for both the standard dimension and high-dimensional parameters. This is because, in the global search stage, the Poisson-like randomness strategy accelerates the search and improves the search ability, making it more directional than the original plain randomness. Furthermore, the subsequent enhanced randomness strategy enables individuals of the population to reach more hosts during the free travel phase. The local search ability near the optimal solution and the overall robustness of the algorithm are thereby enhanced.

5. PROA Applied to Deployment Planning

5.1. Deployment Planning Model

In large-scale measurement, the target to be measured usually has many features distributed over a wide range, and a single station cannot measure all feature points. Therefore, multiple stations must be planned for the measurement. Adjacent stations require at least three public transfer points to complete the coordinate system fitting. As the number of transfer points increases, the fitting variance of the coordinate system decreases continuously, but when the number exceeds seven, the error reduction slows down. Therefore, in the actual measurement process, 5–7 public transfer points are selected for measurement, and the coordinate system is fitted [37]. The following principles should be followed in the deployment of the measuring instruments [38]:
  • A single station can directly measure most features and cover tooling or ground transfer points as much as possible. Simultaneously, priority should be given to selecting transfer points with a large distance and at the edge of the venue.
  • The location of the station should avoid areas with frequent changes in temperature and airflow. Excessive fluctuations directly affect the measurement accuracy of the entire measurement field.
  • The accuracy of the measuring instrument is closely related to the measurement distance. Within the established range, minimizing the distance between the station and the features to be measured reduces the measurement error.
  • In the case of tool occlusion, the sum of the fields of view of all the stations should be as large as possible and enclose the entire measurement space.
Based on these principles, the planning range of the stations must first be set. For the target to be measured, the faces of its bounding box define the limiting planning range, and placing measuring equipment on this boundary risks collision with the parts. Therefore, the bounding box is first enlarged by the factor b_zoom_in, and its sides are then divided into k areas, where q = k/4 and p = k/4. As shown in Figure 30, the translucent blue region is the bounding box of the target to be measured, and the translucent yellow region is the enlarged bounding box, which is also the definition domain of the measurement devices.
Next, we convert the principles to be followed for station deployment into a mathematical model. Station deployment has the following constraints:
  • The number of public transfer points on the target to be measured that can be observed by two adjacent stations cannot be less than c_1; that is, the constraint is C_1(x) ≥ c_1.
  • The number of public transfer points on the tooling that can be observed by two adjacent stations cannot be less than c_2; that is, the constraint is C_2(x) ≥ c_2.
  • The number of public transfer points on the ground that can be observed by two adjacent stations cannot be less than c_3; that is, the constraint is C_3(x) ≥ c_3.
  • The proportion of reference points that can be observed from all stations should be above c_4 of the total number of key points; that is, the constraint is C_4(x) ≥ c_4.
Here, c_1, c_2, and c_3 are integers, and c_4 is a fraction in the interval [0, 1]. C_1(x), C_2(x), C_3(x), and C_4(x) are the constraint functions. All of the constraints filter the visible part of the object under test using a hidden point removal operator [39]. The constraint violation is calculated as follows:
C(x) = { c − C(x),  C(x) < c
       { 0,         C(x) ≥ c,   (12)
where c is the constraint value and C(x) is the constraint function; that is, when a constraint is not satisfied, its value is replaced by the violation c − C(x), and otherwise it contributes zero.
Finally, the objective function of the deployment model is set. The ultimate goal of this deployment model is to minimize the area that is invisible because it is obscured by tooling. Because the model has multiple constraints, we introduce a large penalty factor σ following the exterior-point penalty function method. The objective function is then expressed as:
F(x) = 1 − V_ratio + σ × Σ_{i=1}^{4} C_i^2(x)   (13)
V_ratio = (Σ_{j=1}^{k} V_j) / V_total,   (14)
Here, V_j is the visible point cloud seen by the j-th station, V_total is the total point cloud of the object to be measured, V_ratio is the ratio of the visible area to the total area, and C_i(x) is the i-th station constraint.
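A minimal sketch of this objective is given below, assuming the station coordinates are flattened into the optimizer's decision vector. The visibility test is stubbed out as a user-supplied function (the paper uses the hidden point removal operator of [39]), and the constraint-counting helpers, names, and thresholds are hypothetical placeholders rather than the authors' implementation.

```python
import numpy as np

SIGMA = 1e6   # large penalty factor sigma (cf. Table 6)

def violation(value, threshold):
    """Eq. (12): amount by which a constraint value falls short of its threshold."""
    return threshold - value if value < threshold else 0.0

def deployment_objective(x, target_cloud, constraints, thresholds, count_visible, k=6):
    """Eqs. (13)-(14): penalized invisible-area objective for k stations.

    x             : flattened decision vector of length 3k (station coordinates)
    target_cloud  : (N, 3) point cloud of the object to be measured
    constraints   : list of functions C_i(stations) returning the measured count/ratio
    thresholds    : list of constraint values c_i
    count_visible : function(station, cloud) -> number of target points visible from it
                    (e.g., via hidden point removal; stubbed here as an assumption)
    """
    stations = x.reshape(k, 3)
    visible = sum(count_visible(s, target_cloud) for s in stations)     # sum_j V_j
    v_ratio = visible / len(target_cloud)                               # Eq. (14)
    penalty = sum(violation(C(stations), c) ** 2
                  for C, c in zip(constraints, thresholds))
    return 1.0 - v_ratio + SIGMA * penalty                              # Eq. (13)
```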

5.2. Simulation Results and 3D Visualization

To verify the feasibility of the station deployment model and the stability of the accessibility calculation, the model was run 30 times with the ROA and the PROA as optimizers under the configuration in Table 6. In the simulation experiment, the numbers of points in the target point cloud to be measured, the tooling point cloud, and the ground point cloud were 51,433, 30,100, and 6847, respectively. Figure 31 shows the average history of the maximum visible area of the deployment plan, where the horizontal axis is the number of iterations and the vertical axis is the ratio of the visible area. The statistical results of all of the experiments are shown in Table 7.
From the experimental results shown in Figure 31, it can be observed that, from the 10th iteration, the convergence of the ROA becomes slower. At the 100th iteration, the maximum visible area obtained by the ROA was 64.95%, whereas the PROA reached 81.7% after rapid convergence. In the subsequent iteration interval of 100–500, the ROA optimization trend stabilizes, while the PROA increases to 83.02%, a gain of 1.32% within this range. When the final iteration completes the optimization process, the performance of the PROA is 18.07% higher than that of the ROA.
Furthermore, as shown by the statistical results in Table 7, the maximum visible area obtained by the PROA optimization is better than that of the ROA in three respects: maximum value, minimum value, and mean value. In terms of robustness, the stability of the PROA improved by about one-third. The simulation experiment verifies that the PROA is superior to the ROA in terms of both convergence speed and robustness.
Finally, we used PyVista to render the stations' historical and optimal positions in 3D space, as shown in Figure 32. The light blue grid is the ground, the translucent brown region is the station definition domain, dark blue is the tooling, the red spheres are the key points, green is the visible area, orange is the invisible area, the black dots are the historical positions explored by the population during optimization, and the red point marked with a red box is the optimal station position of the deployment plan after the iterations are completed.
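A minimal PyVista sketch of this kind of scene is shown below. The geometry (a plane for the ground, boxes for the definition domain and tooling, and random points standing in for the visible and invisible point clouds, the search history, and the optimal stations) is entirely synthetic; it only illustrates the rendering calls and is not the real assembly data.

```python
import numpy as np
import pyvista as pv

rng = np.random.default_rng(0)
plotter = pv.Plotter()
plotter.add_mesh(pv.Plane(i_size=20, j_size=20), color="lightblue")               # ground grid
plotter.add_mesh(pv.Cube(x_length=12, y_length=12, z_length=6),
                 color="brown", opacity=0.3)                                      # station definition domain
plotter.add_mesh(pv.Cube(x_length=6, y_length=4, z_length=2), color="darkblue")   # tooling
plotter.add_points(rng.uniform(-5, 5, (200, 3)), color="green", point_size=3)     # visible area
plotter.add_points(rng.uniform(-5, 5, (80, 3)), color="orange", point_size=3)     # invisible area
plotter.add_points(rng.uniform(-6, 6, (50, 3)), color="black", point_size=5)      # exploration history
plotter.add_points(np.array([[4.0, 4.0, 2.0]]), color="red", point_size=12,
                   render_points_as_spheres=True)                                 # optimal station
plotter.show()
```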

6. Conclusions

In this paper, we propose the PROA. The algorithm introduces a Poisson-like randomness strategy to enhance the global search ability of the population individuals and an enhanced randomness strategy to improve the local search ability of the population and the robustness of the algorithm. The ROA and the PROA were tested in different dimensions (D = standard/100/500/1000) using the CEC benchmark functions. The convergence curves of the PROA are better than those of the ROA on 67.5–74% of the functions, and its robustness results are better on 66.67–75%. This study also establishes a deployment optimization model for the layout planning problem in a large-scale measurement field. The PROA was applied to the deployment planning model, and the performance of the PROA and the feasibility of the model were verified through simulation experiments. Compared to the ROA, the performance improved by 18.07%, and the maximum visible area of the PROA reaches 83.02%. Compared with traditional station planning methods, the model improves computational efficiency and calculates the overall accessibility. In future work, we will study the deployment optimization model more deeply with respect to the measurement accuracy of cooperative target points and station transfer accuracy and explore more complex position configuration modes to solve the station deployment optimization problem [40].

Supplementary Materials

The following supporting information can be downloaded at: https://github.com/YDM-Cloud/PROA/tree/main/documents.

Author Contributions

Conceptualization, D.Y.; methodology, D.Y.; software, D.Y.; validation, D.Y.; formal analysis, D.Y.; investigation, D.Y.; resources, D.Y.; data curation, D.Y.; writing—original draft preparation, D.Y.; writing—review and editing, D.Y.; visualization, D.Y.; supervision, Y.L., L.L., X.L. and L.G.; project administration, D.Y. and X.L.; funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Key Research and Development Project of the Jilin Province Science and Technology Development Program (No. 20200401019GX) and the Zhongshan Social Public Welfare Science and Technology Research Project (No. 2022B2013).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Owing to the large dataset, we only uploaded the entire code to GitHub at https://github.com/YDM-Cloud/PROA. The dataset in this study is available on request from the corresponding author.

Conflicts of Interest

The authors declare that they have no known competing financial interest or personal relationships that could have influenced the work reported in this study.

References

  1. Muelaner, J.E.; Wang, Z.; Martin, O.; Jamshidi, J.; Maropoulos, P.G. Estimation of uncertainty in three-dimensional coordinate measurement by comparison with calibrated points. Meas. Sci. Technol. 2010, 21, 025106. [Google Scholar] [CrossRef] [Green Version]
  2. Suthunyatanakit, K.; Bohez, E.L.; Annanon, K. A new global accessibility algorithm for a polyhedral model with convex polygonal facets. Comput. Des. 2009, 41, 1020–1033. [Google Scholar] [CrossRef]
  3. Nuñez, A.; Lacasa, L.; Valero, E.; Gómez, J.P.; Luque, B. Detecting series periodicity with horizontal visibility graphs. Int. J. Bifurc. Chaos 2012, 22, 1250160. [Google Scholar] [CrossRef] [Green Version]
  4. Lin, X. Based on the Full 3D Model, the Measurement Method and Experimental Research of Large Aircraft Parts Assembly Docking. Doctoral Dissertation, Changchun University of Science and Technology, Changchun, China, 2016. Available online: https://kns.cnki.net/kcms2/article/abstract?v=3uoqIhG8C447WN1SO36whLpCgh0R0Z-iTEMuTidDzndci_h58Y6oubBYhL_o8y-To6aH2TABovVpVwv0SYvb-fpIBBby6sz-&uniplatform=NZKPT (accessed on 3 January 2023).
  5. Dong, Y.; Cao, L.; Zuo, K. Genetic algorithm based on a new similarity for probabilistic transformation of belief functions. Entropy 2022, 24, 1680. [Google Scholar] [CrossRef] [PubMed]
  6. Wan, C.; He, B.; Fan, Y.; Tan, W.; Qin, T.; Yang, J. Improved black widow spider optimization algorithm integrating multiple strategies. Entropy 2022, 24, 1640. [Google Scholar] [CrossRef] [PubMed]
  7. Wu, F.; Zhang, J.; Li, S.; Lv, D.; Li, M. An enhanced differential evolution algorithm with bernstein operator and refracted oppositional-mutual learning strategy. Entropy 2022, 24, 1205. [Google Scholar] [CrossRef] [PubMed]
  8. Pang, S.; Liu, J.; Zhang, Z.; Fan, X.; Zhang, Y.; Zhang, D.; Hwang, G.H. A photovoltaic power predicting model using the differential evolution algorithm and multi-task learning. Front. Mater. 2022, 9, 938167. [Google Scholar] [CrossRef]
  9. Opoku, E.; Ahmed, S.; Song, Y.; Nathoo, F. Ant colony system optimization for spatiotemporal modelling of combined EEG and MEG data. Entropy 2021, 23, 329. [Google Scholar] [CrossRef]
  10. Wu, L.; Qu, J.; Shi, H.; Li, P. Node deployment optimization for wireless sensor networks based on virtual force-directed particle swarm optimization algorithm and evidence theory. Entropy 2022, 24, 1637. [Google Scholar] [CrossRef]
  11. Dai, T.; Miao, L.; Shao, H.; Shi, Y. Solving gravity anomaly matching problem under large initial errors in gravity aided navigation by using an affine transformation based artificial bee colony algorithm. Front. Neurorobotics 2019, 13, 19. [Google Scholar] [CrossRef]
  12. Dong, Z.; Zheng, J.; Huang, S.; Pan, H.; Liu, Q. Time-shift multi-scale weighted permutation entropy and GWO-SVM based fault diagnosis approach for rolling bearing. Entropy 2019, 21, 621. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Zhou, J.; Guo, X.; Wang, Z.; Du, W.; Han, X.; He, G.; Xue, H.; Kou, Y. Research on fault extraction method of variational mode decomposition based on immunized fruit fly optimization algorithm. Entropy 2019, 21, 400. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Lu, S.; Wang, S.-H.; Zhang, Y.-D. Detection of abnormal brain in MRI via improved AlexNet and ELM optimized by chaotic bat algorithm. Neural Comput. Appl. 2021, 33, 10799–10811. [Google Scholar] [CrossRef]
  15. Tong, Y.; Yu, B. Research on hyper-parameter optimization of activity recognition algorithm based on improved cuckoo search. Entropy 2022, 24, 845. [Google Scholar] [CrossRef]
  16. Deb, S.; Gao, X.-Z.; Tammi, K.; Kalita, K.; Mahanta, P. Recent studies on chicken swarm optimization algorithm: A review (2014–2018). Artif. Intell. Rev. 2020, 53, 1737–1765. [Google Scholar] [CrossRef]
  17. Kuo, C.L.; Kuruoglu, E.E.; Chan, W.K.V. Neural network structure optimization by simulated annealing. Entropy 2022, 24, 348. [Google Scholar] [CrossRef]
  18. Shang, R.; Zhang, W.; Li, F.; Jiao, L.; Stolkin, R. Multi-objective artificial immune algorithm for fuzzy clustering based on multiple kernels. Swarm Evol. Comput. 2019, 50, 100485. [Google Scholar] [CrossRef]
  19. Liao, Y.; Liu, Y.; Chen, C.; Zhang, L. Green building energy cost optimization with deep belief network and firefly algorithm. Front. Energy Res. 2021, 9, 805206. [Google Scholar] [CrossRef]
  20. Goh, R.; Lee, L.; Seow, H.-V.; Gopal, K. Hybrid harmony search—Artificial intelligence models in credit scoring. Entropy 2020, 22, 989. [Google Scholar] [CrossRef]
  21. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  22. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  23. Shadravan, S.; Naji, H.; Bardsiri, V. The sailfish optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Eng. Appl. Artif. Intell. 2019, 80, 20–34. [Google Scholar] [CrossRef]
  24. Almalawi, A.; Khan, A.I.; Alqurashi, F.; Abushark, Y.B.; Alam, M.; Qaiyum, S. Modeling of remora optimization with deep learning enabled heavy metal sorption efficiency prediction onto biochar. Chemosphere 2022, 303, 135065. [Google Scholar] [CrossRef] [PubMed]
  25. Raamesh, L.; Radhika, S.; Jothi, S. A cost-effective test case selection and prioritization using hybrid battle royale-based remora optimization. Neural Comput. Appl. 2022, 34, 22435–22447. [Google Scholar] [CrossRef]
  26. Chou, J.-S.; Nguyen, N.-M. FBI inspired meta-optimization. Appl. Soft Comput. 2020, 93, 106339. [Google Scholar] [CrossRef]
  27. Anita; Yadav, A. AEFA: Artificial electric field algorithm for global optimization. Swarm Evol. Comput. 2019, 48, 93–108. [Google Scholar] [CrossRef]
  28. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White shark optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl.-Based Syst. 2022, 243, 108457. [Google Scholar] [CrossRef]
  29. Dhiman, G.; Kaur, A. STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng. Appl. Artif. Intell. 2019, 82, 148–174. [Google Scholar] [CrossRef]
  30. Jain, M.; Singh, V.; Rani, A. A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm Evol. Comput. 2019, 44, 148–175. [Google Scholar] [CrossRef]
  31. Tan, C.; Chang, S.; Liu, L. Hierarchical genetic-particle swarm optimization for bistable permanent magnet actuators. Appl. Soft Comput. 2017, 61, 1–7. [Google Scholar] [CrossRef]
  32. Qiao, W.; Yang, Z. An improved dolphin swarm algorithm based on kernel fuzzy C-means in the application of solving the optimal problems of large-scale function. IEEE Access 2019, 8, 2073–2089. [Google Scholar] [CrossRef]
  33. Wang, H.; Jin, Y.; Doherty, J. Committee-based active learning for surrogate-assisted particle swarm optimization of expensive problems. IEEE Trans. Cybern. 2017, 47, 2664–2677. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Kan, G.; Zhang, M.; Liang, K.; Wang, H.; Jiang, Y.; Li, J.; Ding, L.; He, X.; Hong, Y.; Zuo, D.; et al. Improving water quantity simulation & forecasting to solve the energy-water-food nexus issue by using heterogeneous computing accelerated global optimization method. Appl. Energy 2018, 210, 420–433. [Google Scholar] [CrossRef]
  35. Chen, K.; Zhou, F.; Liu, A. Chaotic dynamic weight particle swarm optimization for numerical function optimization. Knowl.-Based Syst. 2018, 139, 23–40. [Google Scholar] [CrossRef]
  36. Samma, H.; Sama, A.S.B. Rules embedded harris hawks optimizer for large-scale optimization problems. Neural Comput. Appl. 2022, 34, 13599–13624. [Google Scholar] [CrossRef] [PubMed]
  37. Prasad, S.; Kumar, D.V. Trade-offs in PMU and IED deployment for active distribution state estimation using multi-objective evolutionary algorithm. IEEE Trans. Instrum. Meas. 2018, 67, 1298–1307. [Google Scholar] [CrossRef]
  38. Agrawal, B.N.; Platzer, M.F. Standard Handbook for Aerospace Engineers; McGraw-Hill Education: New York, NY, USA, 2018; ISBN 978-1-259-58517-3. [Google Scholar]
  39. Katz, S.; Tal, A.; Basri, R. Direct visibility of point sets. In ACM SIGGRAPH 2007 Papers; Association for Computing Machinery: New York, NY, USA, 2007; p. 24-es. [Google Scholar] [CrossRef]
  40. Li, Y.; Zhang, X.; Zhao, J.; Yang, X.; Xi, M. Position deployment optimization of maneuvering conventional missile based on improved whale optimization algorithm. Int. J. Aerosp. Eng. 2022, 2022, 4373879. [Google Scholar] [CrossRef]
Figure 1. Poisson distribution probability density function.
Figure 2. Parameter curves of r_1 and r_2.
Figure 3. The flowchart of the PROA.
Figure 4. F16's convergence curve (standard D).
Figure 5. F17's convergence curve (standard D).
Figure 6. F22's convergence curve (standard D).
Figure 7. F41's convergence curve (standard D).
Figure 8. F43's convergence curve (standard D).
Figure 9. F44's convergence curve (standard D).
Figure 10. F16's convergence curve (D = 100).
Figure 11. F17's convergence curve (D = 100).
Figure 12. F22's convergence curve (D = 100).
Figure 13. F41's convergence curve (D = 100).
Figure 14. F43's convergence curve (D = 100).
Figure 15. F44's convergence curve (D = 100).
Figure 16. F16's convergence curve (D = 500).
Figure 17. F17's convergence curve (D = 500).
Figure 18. F22's convergence curve (D = 500).
Figure 19. F41's convergence curve (D = 500).
Figure 20. F43's convergence curve (D = 500).
Figure 21. F44's convergence curve (D = 500).
Figure 22. F16's convergence curve (D = 1000).
Figure 23. F17's convergence curve (D = 1000).
Figure 24. F22's convergence curve (D = 1000).
Figure 25. F41's convergence curve (D = 1000).
Figure 26. F43's convergence curve (D = 1000).
Figure 27. F44's convergence curve (D = 1000).
Figure 28. Statistics for convergence speed.
Figure 29. Statistics for the std.
Figure 30. Divide domain.
Figure 31. Deployment history.
Figure 32. Three-dimensional search result of deployment.
Table 1. Part of CEC benchmark functions.

No. | Function | D ^1 | Range | Formulation
F16 | Rosenbrock | 30 | [−30, 30] | f(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2]
F17 | Dixon–Price | 30 | [−10, 10] | f(x) = (x_1 − 1)^2 + Σ_{i=2}^{n} i(2x_i^2 − x_{i−1})^2
F22 | Rastrigin | 30 | [−5.12, 5.12] | f(x) = Σ_{i=1}^{n} [x_i^2 − 10cos(2πx_i) + 10]
F41 | Griewank | 30 | [−600, 600] | f(x) = (1/4000) Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i/√i) + 1
F43 | Penalized | 30 | [−50, 50] | f(x) = (π/n){10 sin^2(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)^2 [1 + 10 sin^2(πy_{i+1})] + (y_n − 1)^2} + Σ_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m for x_i > a; 0 for −a ≤ x_i ≤ a; k(−x_i − a)^m for x_i < −a
F44 | Penalized2 | 30 | [−50, 50] | f(x) = 0.1{sin^2(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)^2 [1 + sin^2(3πx_{i+1})] + (x_n − 1)^2 [1 + sin^2(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4)
1 Dimension of parameters.
Table 2. Results of solving CEC benchmark functions (standard D).

Function | Metric | AEFA | WSO | STOA | SSA | ROA | PROA
F16 | Mean | 5.45 × 10^4 | 1.21 × 10^5 | 2.85 × 10^1 | 3.23 × 10^6 | 1.08 × 10^0 | 2.05 × 10^−2
F16 | Std | 1.25 × 10^5 | 1.08 × 10^5 | 4.08 × 10^−1 | 1.58 × 10^7 | 4.18 × 10^0 | 5.20 × 10^−2
F17 | Mean | 2.87 × 10^3 | 8.46 × 10^2 | 6.73 × 10^−1 | 5.63 × 10^4 | 3.99 × 10^−1 | 2.49 × 10^−1
F17 | Std | 1.19 × 10^4 | 8.98 × 10^2 | 4.06 × 10^−2 | 1.02 × 10^5 | 2.85 × 10^−1 | 4.18 × 10^−3
F22 | Mean | 1.15 × 10^0 | 2.17 × 10^0 | 0.00 × 10^0 | 7.86 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F22 | Std | 2.87 × 10^0 | 1.20 × 10^0 | 0.00 × 10^0 | 1.21 × 10^1 | 0.00 × 10^0 | 0.00 × 10^0
F41 | Mean | 1.66 × 10^1 | 7.85 × 10^0 | 4.88 × 10^−2 | 1.64 × 10^1 | 0.00 × 10^0 | 0.00 × 10^0
F41 | Std | 5.84 × 10^0 | 4.13 × 10^0 | 6.15 × 10^−2 | 4.35 × 10^1 | 0.00 × 10^0 | 0.00 × 10^0
F43 | Mean | 2.67 × 10^1 | 3.67 × 10^0 | 1.52 × 10^−1 | 1.32 × 10^2 | 1.69 × 10^−4 | 9.50 × 10^−6
F43 | Std | 9.25 × 10^0 | 4.90 × 10^0 | 5.98 × 10^−2 | 1.22 × 10^2 | 3.80 × 10^−4 | 1.81 × 10^−5
F44 | Mean | 6.25 × 10^2 | 1.07 × 10^3 | 2.04 × 10^0 | 3.04 × 10^3 | 6.76 × 10^−3 | 4.68 × 10^−5
F44 | Std | 2.03 × 10^2 | 2.72 × 10^3 | 2.44 × 10^−1 | 7.28 × 10^2 | 1.35 × 10^−2 | 1.55 × 10^−4
Table 3. Results of solving CEC benchmark functions (D = 100).

Function | Metric | AEFA | WSO | STOA | SSA | ROA | PROA
F16 | Mean | 6.68 × 10^6 | 1.58 × 10^6 | 1.10 × 10^2 | 4.17 × 10^8 | 1.38 × 10^1 | 8.67 × 10^−2
F16 | Std | 2.57 × 10^6 | 7.11 × 10^5 | 1.46 × 10^1 | 3.75 × 10^8 | 3.19 × 10^1 | 1.80 × 10^−1
F17 | Mean | 1.41 × 10^6 | 3.70 × 10^4 | 1.42 × 10^0 | 3.71 × 10^6 | 6.14 × 10^−1 | 2.54 × 10^−1
F17 | Std | 4.84 × 10^5 | 2.14 × 10^4 | 5.09 × 10^−1 | 2.06 × 10^6 | 3.58 × 10^−1 | 6.42 × 10^−3
F22 | Mean | 1.20 × 10^2 | 1.88 × 10^1 | 4.34 × 10^−5 | 9.55 × 10^1 | 0.00 × 10^0 | 0.00 × 10^0
F22 | Std | 3.22 × 10^1 | 4.32 × 10^0 | 7.00 × 10^−5 | 3.88 × 10^1 | 0.00 × 10^0 | 0.00 × 10^0
F41 | Mean | 1.26 × 10^2 | 6.46 × 10^1 | 4.80 × 10^−2 | 3.84 × 10^2 | 0.00 × 10^0 | 0.00 × 10^0
F41 | Std | 1.69 × 10^1 | 1.34 × 10^1 | 6.57 × 10^−2 | 1.41 × 10^2 | 0.00 × 10^0 | 0.00 × 10^0
F43 | Mean | 2.42 × 10^2 | 9.56 × 10^2 | 3.96 × 10^−1 | 5.12 × 10^6 | 2.70 × 10^−4 | 5.84 × 10^−6
F43 | Std | 6.28 × 10^2 | 5.54 × 10^3 | 5.02 × 10^−1 | 3.62 × 10^7 | 1.43 × 10^−3 | 1.24 × 10^−5
F44 | Mean | 7.87 × 10^4 | 1.64 × 10^5 | 1.10 × 10^1 | 3.28 × 10^7 | 1.17 × 10^−1 | 7.19 × 10^−5
F44 | Std | 1.19 × 10^5 | 2.59 × 10^5 | 7.92 × 10^−1 | 1.12 × 10^8 | 5.78 × 10^−1 | 8.77 × 10^−5
Table 4. Results of solving CEC benchmark functions (D = 500).

Function | Metric | AEFA | WSO | STOA | SSA | ROA | PROA
F16 | Mean | 4.94 × 10^7 | 2.03 × 10^7 | 2.04 × 10^4 | 7.24 × 10^9 | 1.32 × 10^2 | 2.61 × 10^−1
F16 | Std | 7.66 × 10^6 | 4.27 × 10^6 | 1.87 × 10^4 | 2.29 × 10^8 | 1.96 × 10^2 | 3.29 × 10^−1
F17 | Mean | 4.59 × 10^7 | 2.42 × 10^6 | 2.74 × 10^3 | 8.77 × 10^8 | 8.16 × 10^−1 | 3.37 × 10^−1
F17 | Std | 4.95 × 10^6 | 6.15 × 10^5 | 2.05 × 10^3 | 3.49 × 10^7 | 3.20 × 10^−1 | 1.81 × 10^−1
F22 | Mean | 1.54 × 10^3 | 1.71 × 10^2 | 2.72 × 10^−2 | 3.98 × 10^3 | 0.00 × 10^0 | 0.00 × 10^0
F22 | Std | 1.08 × 10^2 | 1.71 × 10^1 | 2.53 × 10^−2 | 1.02 × 10^2 | 0.00 × 10^0 | 0.00 × 10^0
F41 | Mean | 1.82 × 10^3 | 6.01 × 10^2 | 2.94 × 10^−1 | 1.39 × 10^4 | 0.00 × 10^0 | 0.00 × 10^0
F41 | Std | 6.10 × 10^1 | 5.11 × 10^1 | 2.04 × 10^−1 | 2.70 × 10^2 | 0.00 × 10^0 | 0.00 × 10^0
F43 | Mean | 2.75 × 10^5 | 2.13 × 10^5 | 5.12 × 10^0 | 8.25 × 10^9 | 1.18 × 10^−4 | 5.00 × 10^−6
F43 | Std | 2.63 × 10^5 | 2.15 × 10^5 | 4.48 × 10^0 | 6.39 × 10^8 | 3.04 × 10^−4 | 1.06 × 10^−5
F44 | Mean | 8.40 × 10^6 | 4.99 × 10^6 | 3.83 × 10^2 | 1.51 × 10^10 | 2.98 × 10^−1 | 5.35 × 10^−4
F44 | Std | 3.12 × 10^6 | 3.17 × 10^6 | 2.87 × 10^2 | 9.20 × 10^8 | 7.00 × 10^−1 | 1.05 × 10^−3
Table 5. Results of solving CEC benchmark functions (D = 1000).

Function | Metric | AEFA | WSO | STOA | SSA | ROA | PROA
F16 | Mean | 8.39 × 10^7 | 5.33 × 10^7 | 3.09 × 10^5 | 1.50 × 10^10 | 1.82 × 10^2 | 6.81 × 10^−1
F16 | Std | 8.79 × 10^6 | 1.06 × 10^7 | 4.38 × 10^5 | 2.93 × 10^8 | 3.35 × 10^2 | 1.09 × 10^0
F17 | Mean | 4.24 × 10^7 | 1.30 × 10^7 | 8.78 × 10^4 | 3.63 × 10^9 | 9.07 × 10^−1 | 4.89 × 10^−1
F17 | Std | 3.96 × 10^6 | 2.49 × 10^6 | 8.20 × 10^4 | 1.01 × 10^8 | 2.27 × 10^−1 | 2.91 × 10^−1
F22 | Mean | 2.66 × 10^3 | 3.92 × 10^2 | 1.47 × 10^−1 | 8.21 × 10^3 | 0.00 × 10^0 | 0.00 × 10^0
F22 | Std | 1.20 × 10^2 | 3.76 × 10^1 | 1.08 × 10^−1 | 1.48 × 10^2 | 0.00 × 10^0 | 0.00 × 10^0
F41 | Mean | 8.15 × 10^3 | 1.32 × 10^3 | 8.81 × 10^−1 | 2.83 × 10^4 | 0.00 × 10^0 | 0.00 × 10^0
F41 | Std | 1.42 × 10^2 | 1.48 × 10^2 | 4.75 × 10^−1 | 4.13 × 10^2 | 0.00 × 10^0 | 0.00 × 10^0
F43 | Mean | 1.14 × 10^6 | 1.13 × 10^6 | 3.13 × 10^2 | 1.75 × 10^10 | 1.01 × 10^−4 | 2.16 × 10^−6
F43 | Std | 6.52 × 10^5 | 1.30 × 10^6 | 2.10 × 10^3 | 6.93 × 10^8 | 1.83 × 10^−4 | 3.84 × 10^−6
F44 | Mean | 2.42 × 10^7 | 1.65 × 10^7 | 2.72 × 10^3 | 3.18 × 10^10 | 2.18 × 10^−1 | 9.81 × 10^−4
F44 | Std | 4.93 × 10^6 | 5.46 × 10^6 | 2.52 × 10^3 | 1.32 × 10^9 | 4.93 × 10^−1 | 1.83 × 10^−3
Table 6. Simulation configuration.

No. | Parameter | Symbol | Value
1 | Key points (x, y, z) | / | (220, 24, −640); (−230, 24, −640); (19, 62, −590); (−19, 60, −290)
2 | Number of laser trackers | k | 6
3 | Bounding box magnification factor | b_zoom_in | 0.2
4 | Number of populations | N_P | 20
5 | Maximum number of iterations | max_iter | 500
6 | Number of public transfer points on the target to be measured visible between two adjacent stations | c_1 | 2
7 | Number of public transfer points on the tooling visible between two adjacent stations | c_2 | 1
8 | Number of public transfer points on the ground visible between two adjacent stations | c_3 | 2
9 | Proportion of key points visible from all stations | c_4 | 0.75
10 | Penalty factor | σ | 10^6
Table 7. Deployment result (30 times).

Algorithm | Min | Max | Mean | Std
ROA | 64.95% | 80.18% | 75.82% | 0.0350
PROA | 73.51% | 83.02% | 79.63% | 0.0225