Article

An Enhanced Starfish Optimization Algorithm via Joint Strategy and Its Application in Ultra-Wideband Indoor Positioning

School of Electronics and Information Engineering, West Anhui University, Lu’an 237012, China
*
Author to whom correspondence should be addressed.
Biomimetics 2025, 10(5), 338; https://doi.org/10.3390/biomimetics10050338
Submission received: 8 April 2025 / Revised: 15 May 2025 / Accepted: 17 May 2025 / Published: 20 May 2025

Abstract

The starfish optimization algorithm (SFOA) is a bio-inspired metaheuristic with strong global search capability and adaptability, but its local search is not sufficiently accurate and its convergence is slow. To further enhance its convergence and global optimization abilities, an enhanced starfish optimization algorithm (SFOAL) is proposed that combines sine chaotic mapping, t-distribution mutation, and logarithmic spiral opposition-based learning. The SFOAL remarkably strengthens both the global and local search capabilities of the algorithm, leading to a more rapid convergence speed and greater stability. In total, 23 benchmark functions and the CEC2021 test suite were used to evaluate the exploitation, exploration, and convergence capabilities of the SFOAL, and the SFOAL was compared in detail with other algorithms. The experimental results demonstrated that the overall performance of the SFOAL was better than that of the other algorithms and that the joint strategy could effectively balance exploitation and exploration to obtain stronger global and local optimization capabilities. To address a practical problem, the SFOAL was used to optimize a back propagation (BP) neural network for the ultra-wideband line-of-sight positioning problem. The results showed that the SFOAL-BP neural network had a smaller average position error than the randomly initialized BP neural network and the SFOA-BP neural network, so it can be used to solve practical application problems.

1. Introduction

In many subject areas, such as engineering, science, and economics, a large number of complex optimization problems have arisen, such as large-scale production scheduling, complex system control, and nonlinear function optimization [1,2,3,4,5]. These problems are highly nonlinear, multimodal, and constrained. Traditional optimization algorithms face many difficulties in dealing with these problems, such as their proneness to local optimal solutions and inefficiency in computation, which makes it difficult to meet actual needs. Therefore, a more effective optimization method is needed. People have observed and studied phenomena such as biological evolution and group behavior in nature, finding that organisms have formed efficient adaptation and optimization mechanisms in the long-term evolution process [6,7,8,9,10]. For example, the inheritance, variation, and natural selection mechanisms of organisms enable species to continuously adapt to environmental changes and evolve in a better direction; for example, ant colonies can find the optimal path from the ant nest to a food source by transmitting pheromones during foraging, bird flocks can achieve the optimal group flight posture through cooperation and information sharing between individuals during flight, etc. [11,12,13]. These natural phenomena provide a rich source of inspiration for designing intelligent optimization algorithms.
Swarm intelligence algorithms have developed rapidly since the beginning of the 21st century [14,15,16,17]. On the one hand, different intelligent optimization algorithms are integrated with each other, for instance, combining the genetic algorithm (GA) with the particle swarm optimization algorithm (PSO), or the ant colony algorithm with others. This integration capitalizes on their respective strengths, yielding more potent hybrid optimization algorithms [18,19]. Pan et al. developed a surrogate-assisted hybrid optimization (SAHO) algorithm to tackle computationally expensive optimization problems [20]. SAHO combines teaching-learning-based optimization and differential evolution, obtaining stronger solving capabilities. Sangeetha et al. introduced a sentiment analysis method called the Taylor-Harris hawk optimization-driven bidirectional long short-term memory network (THHO-BiLSTM) [21]. THHO-BiLSTM was formed by incorporating the Taylor series into Harris hawk optimization (HHO), which can improve the performance of BiLSTM classifiers by selecting the optimal weights for the hidden layers. Yıldız et al. used a new hybrid optimizer, AOA-NM (arithmetic optimization algorithm-Nelder-Mead), to solve engineering design and manufacturing problems [22]. To address the AOA's tendency to become trapped in local optima and to boost solution quality, they incorporated the Nelder-Mead local search method into the AOA's basic framework. This hybrid approach optimized the AOA's exploration and exploitation during the search.
On the other hand, these algorithms are combined with technologies in other fields, such as applying intelligent optimization algorithms to model selection and hyperparameter optimization in machine learning, as well as combining them with deep learning, big data processing, and other technologies, promoting the development of related fields [23,24,25]. Zheng et al. put forward a path prediction model integrating the GA, ant colony algorithm (ACO), and BP neural network, named GA-ACO-BP [26]. This model first conducts in-depth preprocessing on the original AIS data. It takes the BP neural network as the core prediction model. Leveraging the complementary nature of the GA and ACO, the GA determines the initial pheromone concentration of the ant colony. This effectively improves the convergence speed and performance of the traditional BP neural network. Cheng et al. proposed an approach for monitoring tool wear, which optimized the BP neural network with the firefly algorithm (FA) to enhance the accuracy of online tool wear prediction [27]. It did so by using the FA to modify the weights and thresholds of the BP neural network, which improved its performance. The experimental results validated the accuracy and reliability of this method.
In addition, new intelligent optimization algorithms continue to emerge, such as the artificial bee colony algorithm, the bat algorithm, the firefly algorithm, and the moth-flame optimization algorithm, which further enrich the family of intelligent optimization algorithms [28,29,30]. The starfish optimization algorithm (SFOA) is an intelligent optimization algorithm developed from the foraging behavior of starfish [31]. It aims to solve various optimization problems, especially complex problems that are difficult to handle with traditional methods. The SFOA simulates the wide-ranging foraging activities of starfish, allowing the algorithm to conduct a more extensive search of the entire solution space, increasing the chance of finding the global optimal solution and avoiding falling into a local optimum. During the search process, the movement strategy of an individual starfish can be adaptively adjusted according to the search situation: when approaching the optimal solution area, the step size may automatically decrease for a more refined search, while in the early stages of the search, a larger step size enables the algorithm to quickly explore different areas. Although the starfish algorithm performs well, it still has problems, such as an insufficient local search capability and a slow convergence speed, and its global search capability needs to be improved. Therefore, this study modifies the SFOA through the joint strategy of sine chaotic mapping, t-distribution mutation, and logarithmic spiral opposition-based learning to obtain an enhanced starfish optimization algorithm (SFOAL). The SFOAL is found to have stronger local and global search capabilities, a faster convergence speed, and greater stability than the SFOA.
The subsequent parts of this study are arranged as follows: in Section 2, the principles underlying the SFOA and SFOAL are expounded upon in detail. In Section 3, the algorithms are rigorously evaluated, and the experimental results are dissected meticulously. In Section 4, the practical implications of the algorithms for engineering problems are explored. Finally, a summary of the key findings is presented, encapsulating the main outcomes of this study. Additionally, suggestions for future research directions are put forward, highlighting potential areas for further exploration and improvement.

2. SFOA

The exploration, predation, and regeneration behaviors of starfish inspired the SFOA, which accordingly has its own exploration and exploitation phases. Most popular algorithms use a vector search mode in the exploration phase and perform well on separable functions but are inefficient on inseparable functions, whereas a one-dimensional search mode is efficient on inseparable functions but may fall into local convergence or converge slowly. During the exploration stage, the SFOA capitalizes on the five-armed structure of the starfish (whose eyes are located on the arms) to integrate five-dimensional and one-dimensional search strategies. Depending on the dimension d, it employs distinct search patterns: a five-dimensional search when d > 5 and a one-dimensional search when d ≤ 5. This approach is designed to address the previously mentioned drawbacks. In the exploitation stage, the SFOA is designed around predation and regeneration strategies. The predation strategy is the main update process, and a parallel bidirectional search using the information of two starfish is used to encourage the candidate solution to move to a better position.

3. SFOAL

The SFOAL modifies the SFOA by introducing a multi-strategy scheme of sine-chaotic-map population initialization, t-distribution mutation, and logarithmic spiral opposition-based learning to update the positions. By introducing multiple strategies, the search space can be explored from different angles, increasing the possibility of the algorithm finding the optimal solution in the global scope. With a single strategy, the algorithm easily falls into a local optimum and cannot escape, so the final result may not be the global optimum. The multi-strategy modification gives the algorithm more opportunities to jump out of local optima by switching between different strategies.

3.1. Initialization with Sine Chaotic Map

Initial conditions have a significant impact on the sine chaotic map. Even the slightest change in these initial conditions may result in chaotic sequences that are completely different from one another. This feature can be used to initialize multiple different populations or search directions in intelligent optimization algorithms, thereby increasing the diversity of the algorithm. For complex multi-peak optimization problems, different initial conditions allow the algorithm to start searching from different starting points, increasing the probability of finding the global optimal solution. In the initialization stage, the positions of the starfish are randomly produced. xij denotes the position of the ith starfish within the jth dimension and is expressed in matrix form. The size of the matrix is n × d , where n represents the population size and d represents the dimension of the design variable.
Sine chaotic mapping is a classic one-dimensional mapping method, which the SFOAL uses to optimize the initial population of starfish. Its mathematical expression is shown in Equation (1), as follows:
$$ x_{n+1} = \frac{\delta \sin(\pi x_n)}{4}\,(ub_j - lb_j) + lb_j $$
where δ is a constant between 0 and 4. When it is greater than 3.8, the mapping system shows a chaotic state, and the closer it is to 4, the more obvious this chaotic state is. Here, δ is equal to four. ubj and lbj are the upper and lower boundaries of the variables, respectively. xn and xn+1 represent the current position of the starfish and its position after sine chaotic mapping, respectively.
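As an illustration, a minimal NumPy sketch of sine-map initialization is given below. It is not the authors' MATLAB implementation; the number of warm-up iterations of the map and the scalar bounds are assumed values chosen only to make the example runnable.

```python
import numpy as np

def sine_map_init(n, d, lb, ub, delta=4.0, seed=0):
    """Generate an n-by-d starfish population using the sine chaotic map (Equation (1))."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, d))              # independent chaotic seeds in (0, 1)
    for _ in range(20):                 # iterate the map so the sequence enters the chaotic regime
        x = delta / 4.0 * np.sin(np.pi * x)
    return lb + x * (ub - lb)           # map the chaotic values onto [lb, ub]

population = sine_map_init(n=30, d=10, lb=-100.0, ub=100.0)
print(population.shape)                 # (30, 10)
```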
After the position is initialized, the positions of all starfish are evaluated to obtain the fitness value of each starfish, which are saved and updated using the vector F of size n × 1, as seen in Equation (2), as follows:
$$ F = \begin{bmatrix} F_1 \\ F_2 \\ \vdots \\ F_n \end{bmatrix}_{n \times 1} $$

3.2. Exploration

After the position initialization is completed, a random number in the range (0, 1) is compared with Gp; when the number is not greater than Gp, the exploration phase begins. Then, the dimension d is checked: if d is greater than five, the starfish moves its five arms to explore the environment, and its movement is updated using the information of the best position found so far. The model for this stage is presented in Equation (3), as follows:
$$ P_{i,p}^{t} = \begin{cases} x_{i,p}^{t} + a_1\,(x_{best,p}^{t} - x_{i,p}^{t})\cos\theta, & rand < 0.5 \\ x_{i,p}^{t} + a_1\,(x_{best,p}^{t} - x_{i,p}^{t})\sin\theta, & rand \geq 0.5 \end{cases} $$
where P_{i,p}^{t} denotes the updated position; x_{i,p}^{t} and x_{best,p}^{t} denote the current position and the current best position in dimension p, respectively; and p indexes five dimensions randomly selected from the d dimensions. a_1 and θ are obtained using Equations (4) and (5), respectively, as follows:
$$ a_1 = (2r - 1)\,\pi $$
$$ \theta = \frac{\pi}{2}\cdot\frac{t}{t_{max}} $$
where t and t_max represent the current and maximum iteration numbers, respectively. During the exploration phase, a_1 is randomly generated and used to update the position, while θ varies with the number of iterations. These two parameters jointly weight the influence of the distance between the best and current positions in the chosen update dimensions. If an updated position falls outside the boundary, the previous position is kept instead of moving to the updated position, as shown in Equation (6), as follows:
$$ x_{i,p}^{t+1} = \begin{cases} P_{i,p}^{t}, & lb \leq P_{i,p}^{t} \leq ub \\ x_{i,p}^{t}, & \text{otherwise} \end{cases} $$
where p denotes the update dimension, and lb and ub denote the lower and upper bounds of the design variables, respectively.
If d ≤ 5, one arm of the starfish will move to search for food sources while using the position information of other starfish. The formula for the position is shown in Equation (7), as follows:
$$ P_{i,p}^{t} = E_t\, x_{i,p}^{t} + b_1\,(x_{k1,p}^{t} - x_{i,p}^{t}) + b_2\,(x_{k2,p}^{t} - x_{i,p}^{t}) $$
where x_{k1,p}^{t} and x_{k2,p}^{t} are the positions of two randomly selected starfish in dimension p, b_1 and b_2 are two random numbers in the range (−1, 1), and p is a dimension randomly chosen from the d dimensions. E_t is the energy of the starfish, obtained via Equation (8), as follows:
$$ E_t = \frac{t_{max} - t}{t_{max}}\,\cos\theta $$
θ is calculated via Equation (5), and positions outside the bounds are handled in the same way as before.
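The exploration phase can be summarized in code. The sketch below is a hedged NumPy reading of Equations (3)-(8), not the authors' implementation; in particular, applying the boundary rule of Equation (6) elementwise and the choice of random generator are assumptions.

```python
import numpy as np

def explore(X, best, t, t_max, lb, ub, rng):
    """One SFOA exploration step following Equations (3)-(8) (illustrative sketch)."""
    n, d = X.shape
    new_X = X.copy()
    theta = np.pi / 2 * t / t_max                        # Equation (5)
    for i in range(n):
        P = X[i].copy()
        if d > 5:                                        # five-dimensional search, Equation (3)
            dims = rng.choice(d, size=5, replace=False)
            a1 = (2 * rng.random() - 1) * np.pi          # Equation (4)
            trig = np.cos(theta) if rng.random() < 0.5 else np.sin(theta)
            P[dims] = X[i, dims] + a1 * (best[dims] - X[i, dims]) * trig
        else:                                            # one-dimensional search, Equation (7)
            p = rng.integers(d)
            k1, k2 = rng.choice(n, size=2, replace=False)
            b1, b2 = rng.uniform(-1.0, 1.0, size=2)
            Et = (t_max - t) / t_max * np.cos(theta)     # Equation (8)
            P[p] = Et * X[i, p] + b1 * (X[k1, p] - X[i, p]) + b2 * (X[k2, p] - X[i, p])
        inside = (P >= lb) & (P <= ub)                   # Equation (6): keep the old value out of bounds
        new_X[i] = np.where(inside, P, X[i])
    return new_X

rng = np.random.default_rng(1)
X = rng.uniform(-100, 100, size=(30, 10))
X = explore(X, best=X[0].copy(), t=1, t_max=500, lb=-100.0, ub=100.0, rng=rng)
```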

3.3. Exploitation

During the exploitation phase, two update strategies are designed for the SFOA. The SFOA uses a parallel bidirectional search strategy that requires information from other starfish and from the current best position in the population. First, five distances between the best position and other starfish are calculated; then, two of these distances are randomly selected as guidance information, and the position of each starfish is updated using the parallel bidirectional search strategy. The distance is obtained via Equation (9), as follows:
$$ d_o = x_{best}^{t} - x_{op}^{t}, \quad o = 1, 2, 3, 4, 5 $$
where d_o is the distance between the best position and another starfish, and x_{op}^{t} denotes the positions of the five randomly chosen starfish.
Therefore, the position update rule for each starfish for the predation behavior can be obtained via Equation (10), as follows:
$$ P_i^{t} = x_i^{t} + r_1 d_{o1} + r_2 d_{o2} $$
where r1 and r2 are random numbers between 0 and 1, and do1 and do2 are randomly selected from do.
In addition, starfish are vulnerable to attacks from other predators during the predation process. When a predator attempts to catch a starfish, the starfish might sever and jettison one of its arms to elude capture. In the starfish optimization algorithm, this concept is reflected in the regeneration phase, which is carried out solely on the last starfish within the population. The formula for updating the position during this regeneration phase is presented in Equation (11), as follows:
$$ P_i^{t} = \exp\!\left(-\,\frac{t \times n}{t_{max}}\right) x_i^{t} $$
For out-of-bounds positions, the following update rules are used (Equation (12)):
$$ x_i^{t+1} = \begin{cases} P_i^{t}, & lb \leq P_i^{t} \leq ub \\ lb, & P_i^{t} < lb \\ ub, & P_i^{t} > ub \end{cases} $$
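A compact NumPy sketch of the exploitation phase is shown below. It follows Equations (9)-(12) as reconstructed above; the negative exponent in Equation (11) and the use of clipping for Equation (12) reflect this reconstruction and should be treated as assumptions rather than the authors' exact code.

```python
import numpy as np

def exploit(X, best, t, t_max, lb, ub, rng):
    """One SFOA exploitation step: predation (Equations (9)-(10)), regeneration of the
    last starfish (Equation (11)), and boundary handling (Equation (12))."""
    n, _ = X.shape
    new_X = np.empty_like(X)
    idx = rng.choice(n, size=5, replace=False)
    D = best - X[idx]                                    # five distances to the best, Equation (9)
    for i in range(n):
        d1, d2 = D[rng.choice(5, size=2, replace=False)]
        r1, r2 = rng.random(2)
        new_X[i] = X[i] + r1 * d1 + r2 * d2              # parallel bidirectional search, Equation (10)
    new_X[-1] = np.exp(-t * n / t_max) * X[-1]           # regeneration of the last starfish, Equation (11)
    return np.clip(new_X, lb, ub)                        # Equation (12)
```

In a complete run, the exploration and exploitation steps would be selected in each iteration according to the probability Gp described in Section 3.2.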

3.4. T-Distribution Mutation

The basic idea of updating the population positions with t-distribution mutation is to introduce perturbations drawn from a t-distribution, which brings a controlled degree of randomness and robustness to the search process. After each position update of the starfish, the SFOAL applies the t-distribution perturbation, which accelerates the convergence of the SFOA and enhances its stability and accuracy. The position update is established in Equations (13) and (14), as follows:
$$ P_n^{t} = P_{best}^{t} + P_{best}^{t} \cdot t(iter) $$
$$ x_{i,p}^{t+1} = \begin{cases} P_n^{t}, & F_{new} < F_i \\ x_{i,p}^{t+1}, & \text{otherwise} \end{cases} $$
where P_n^{t} represents the new position after the t-distribution mutation disturbance, P_{best}^{t} represents the best position found so far, t(iter) represents a random number drawn from a t-distribution whose degrees of freedom are given by the current iteration number, x_{i,p}^{t+1} represents the position of the ith starfish, F_new represents the fitness value of the new position, and F_i represents the fitness value of the ith starfish. When F_new < F_i, the new position obtained after the t-distribution mutation disturbance is better than the current starfish position, and the starfish moves to the new position.
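The following sketch shows one way to realize the t-distribution perturbation of Equations (13) and (14) in Python; perturbing the current best position and accepting the trial greedily is one reading of the equations, and the sphere objective is only a stand-in.

```python
import numpy as np

def t_mutation(best, f_best, obj, t, rng):
    """Perturb the best position with t-distributed noise (Equation (13)) and accept
    the trial only if its fitness improves (Equation (14))."""
    trial = best + best * rng.standard_t(df=max(t, 1), size=best.shape)
    f_trial = obj(trial)
    return (trial, f_trial) if f_trial < f_best else (best, f_best)

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x ** 2))
best = rng.uniform(-1.0, 1.0, size=10)
best, f_best = t_mutation(best, sphere(best), sphere, t=3, rng=rng)
print(f_best)
```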

3.5. Logarithmic Spiral Opposition-Based Learning

Logarithmic spiral opposition-based learning (LSOBL) focuses specifically on generating the opposite solution of the best individual within the search boundaries. As the iterations progress, the position of the best individual changes continuously, and the search space covered by ordinary opposition-based learning (OBL) shrinks. LSOBL, however, operates in a smaller region around the best solution, which is particularly valuable in later iterations when population diversity decreases: it can prevent the algorithm from converging prematurely and ensures that the global optimum is not missed. The LSOBL model is defined in Equations (15) and (16), as follows:
$$ P_{op}^{t} = r_1\, ub_j + r_2\, lb_j - P_{best}^{t} $$
$$ x_{i,p}^{t+1} = \left| P_{best}^{t} - P_{op}^{t} \right| \cdot e^{bl} \cdot \cos(2\pi l) + P_{best}^{t} $$
Here, P_{op}^{t} is the opposite position generated by the spiral opposition-based learning, l is a random number between −1 and 1, and b is a constant that defines the shape of the logarithmic spiral. Flowcharts of the SFOA and SFOAL are presented in Figure 1.
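A short sketch of LSOBL under the reconstruction of Equations (15) and (16) is given below; the greedy acceptance, the boundary clipping, and the value b = 1 are assumptions added only to make the example self-contained.

```python
import numpy as np

def lsobl(best, f_best, lb, ub, obj, rng, b=1.0):
    """Logarithmic spiral opposition-based learning applied to the best individual."""
    r1, r2 = rng.random(2)
    opposite = r1 * ub + r2 * lb - best                            # Equation (15)
    l = rng.uniform(-1.0, 1.0)
    trial = np.abs(best - opposite) * np.exp(b * l) * np.cos(2 * np.pi * l) + best  # Equation (16)
    trial = np.clip(trial, lb, ub)
    f_trial = obj(trial)
    return (trial, f_trial) if f_trial < f_best else (best, f_best)

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x ** 2))
best = rng.uniform(-100.0, 100.0, size=10)
best, f_best = lsobl(best, sphere(best), lb=-100.0, ub=100.0, obj=sphere, rng=rng)
print(f_best)
```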

4. Experimental Results and Analysis

4.1. Experimental Environment

The simulation platform is configured with Windows 11, a 12th-generation Core i7 processor with a 2.10 GHz frequency, and 16 GB of dynamic memory. All algorithms are realized in MATLAB R2023a.

4.2. Benchmark Functions

This section describes the use of the benchmark functions to evaluate the performances of all the algorithms, including the SFOAL, the SFOA, the grey wolf optimizer (GWO) [32], the goose algorithm (GOOSE) [33], the GA [34], the pied kingfisher optimizer (PKO) [35], the dream optimization algorithm (DOA) [36], the crayfish optimization algorithm (COA) [6], the beluga whale optimization algorithm (BWO) [37], and the escape algorithm (ESC) [38]. Functions F1 to F7 in Appendix A (Table A1) are unimodal functions used to verify the convergence speed of an evolutionary algorithm. Functions F8 to F23 in Appendix A (Table A2 and Table A3) are multimodal functions with multiple local optima that test the ability to avoid premature convergence.
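For reference, the sketch below shows how the min/std/avg statistics over 20 independent runs (as reported in Tables 1-4) can be collected for a benchmark such as the sphere function F1 or the Rastrigin function F9; the plain random-search optimizer is only a placeholder, not the SFOAL.

```python
import numpy as np
from functools import partial

def sphere(x):      # F1, unimodal
    return float(np.sum(x ** 2))

def rastrigin(x):   # F9, multimodal
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def random_search(obj, d=30, lb=-100.0, ub=100.0, evals=10_000, seed=0):
    """Placeholder optimizer used only to make the harness runnable."""
    rng = np.random.default_rng(seed)
    return min(obj(x) for x in rng.uniform(lb, ub, size=(evals, d)))

def run_stats(optimizer, obj, runs=20):
    """Best fitness over independent runs, summarized as (min, std, avg)."""
    bests = np.array([optimizer(obj, seed=s) for s in range(runs)])
    return bests.min(), bests.std(), bests.mean()

print(run_stats(random_search, sphere))
print(run_stats(partial(random_search, lb=-5.12, ub=5.12), rastrigin))
```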

4.2.1. Unimodal Functions

Table 1 lists the results of the different methods on the unimodal benchmark functions over 20 independent runs. The statistics include the best fitness value (min), the standard deviation (std), and the mean fitness value (avg). Table 1 shows that, for functions F1 to F4, the SFOAL and SFOA can both find the exact optimal solutions. For functions F5 and F6, the SFOAL has the best optimal value, standard deviation, and average value. For function F7, the SFOAL ranks second, behind only the COA, but it still improves on the SFOA. Therefore, the SFOAL can effectively exploit the search space to produce satisfactory results for unimodal functions and has a strong search capability.

4.2.2. Multimodal Functions

Multimodal functions F8 to F13 are used to evaluate the exploration ability of the evolutionary algorithm, and their results are presented in Table 2. From the data in the table, we can see that for functions F8, F12, and F13, the SFOAL sees a significant improvement over the SFOA, with better optimal values and a smaller standard deviation. Among all algorithms, the SFOAL ranks in the top three. The SFOAL, which represents an improvement on the joint strategy, can effectively avoid premature convergence and falling into a local optimal solution. For functions F9 to F11, SFOAL ranks first and has a smaller standard deviation and better optimal values compared to other algorithms, showing that the SFOAL has a satisfactory ability to find the global optimal solution.

4.2.3. Fixed Dimension Multimodal Functions

The results for the fixed-dimension multimodal functions (F14–F23) are presented in Table 3 and Table 4. As the tables show, for multimodal functions of fixed dimensions, the SFOAL shows excellent global optimization ability and stability compared with the other algorithms. For functions F14 to F23, the SFOAL has the smallest mean and a small standard deviation. This shows that combining sine chaotic mapping, t-distribution mutation, and the logarithmic spiral opposition-based learning strategy effectively avoids falling into local convergence and enhances the stability of the algorithm.
Figure 2 shows the convergence curves for all functions, from which we can see that the SFOAL has obvious advantages. For functions F1 to F7, the convergence speed is greatly improved compared with the SFOA, indicating that the joint-strategy SFOAL can find the optimal solution of a unimodal function faster and has a strong search ability. For functions F8 to F13, the SFOAL is significantly improved compared with the SFOA, and it also has the fastest convergence speed among all compared algorithms. For F14 to F23, the SFOAL is again significantly improved compared with the SFOA and has a faster convergence speed and a better average fitness value than the other algorithms. From the convergence curve analysis, the SFOAL modified with the joint strategy greatly improves the stability and global optimization ability of the SFOA, and it can obtain a faster convergence speed and a smaller fitness value.
Figure 3 shows the ANOVA tests of all functions. From the ANOVA test diagrams, it can be seen that, for the unimodal functions (F1–F7), the SFOAL shows lower fitness values and smaller fluctuations than the other algorithms, indicating that it has advantages in terms of convergence speed, solution quality, and stability. For F8, the fitness value of the SFOAL is about −12,569, while that of the SFOA is about −7380, so the SFOAL is significantly lower. For F9 to F13, the fitness values of the SFOAL and SFOA are mostly zero or close to zero, and the two perform similarly, showing a good performance and the ability to quickly converge to a better solution or avoid poor solution areas. Compared with the other algorithms, the SFOAL performs better overall: it maintains low fitness values on multiple functions, showing a strong ability to find the optimal or a near-optimal solution, and it also has certain advantages in terms of stability. For F15 to F23, the SFOAL has a relatively stable overall performance under different fitness evaluations; in most cases, the fitness value is low and the range of variation is small, indicating high stability. Its fitness values are close to those of the SFOA, and in most cases the difference between the two is small. Compared with the other algorithms, the SFOAL has a better optimization ability and stability. As mentioned above, the SFOAL modified by the joint strategy has better stability and global optimization ability, and it can avoid falling into local convergence too early.
The results of the Wilcoxon rank-sum test are presented in Table 5. In statistics, 0.05 is usually used as the significance-level threshold: if the p value is less than 0.05, there is a significant difference between the two algorithms. According to the statistical results, the SFOAL differs significantly from the GA, PKO, and DOA on all 23 test functions, indicating that their performances are very different. It differs significantly from the GOOSE, BWO, and ESC on 22 test functions and from the GWO on 21 test functions, which also shows quite significant performance differences. The number of significant differences with the SFOA and COA is relatively small, but differences still appear for most test functions. In summary, the SFOAL differs significantly from the algorithms in the table for most test functions, which shows that the SFOAL has clear performance differences from the other algorithms.
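The test in Table 5 can be reproduced in principle with SciPy's rank-sum test; the sketch below uses synthetic fitness samples purely for illustration, since the actual per-run results are not reproduced here.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical best-fitness values from 20 independent runs of two algorithms.
fitness_sfoal = rng.normal(loc=1e-3, scale=5e-4, size=20)
fitness_other = rng.normal(loc=1e-2, scale=5e-3, size=20)

stat, p_value = ranksums(fitness_sfoal, fitness_other)
print(f"p = {p_value:.3g}, significant at the 0.05 level: {p_value < 0.05}")
```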
In the radar chart (Figure 4a), the distribution of the SFOAL (blue dots) for F1 to F23 is concentrated in the inner circle, which shows that the SFOAL performs better on these functions than the other algorithms. From the ranking chart, the average ranking of the SFOAL is 1.87, the best average ranking among all algorithms. This shows that, considering all evaluation functions (F1 to F23), the SFOAL has a clear advantage in overall performance. In summary, for the benchmark functions, using the joint strategy to improve the SFOA is a feasible approach: the SFOAL improves the convergence speed and global optimization ability of the algorithm, obtains more stable results, and avoids falling into local convergence.

4.3. CEC2021 Test Functions

CEC2021 covers unimodal functions, multimodal functions, and composite functions, which can comprehensively evaluate the performances of optimization algorithms.
Table 6 depicts the results for the CEC2021 test set. For this test set, the SFOA already has a strong large-scale optimization ability and stability, and the SFOAL modified via the joint strategy retains this global optimization ability and stability; both algorithms can find accurate optimal solutions. Compared with the other algorithms, the SFOAL has a clear advantage. Convergence curves are shown in Figure 5. The figure shows that, although the SFOA can obtain an accurate optimal solution, its convergence speed is worse than that of several of the competing algorithms. The SFOAL improved via the joint strategy shows a clear advantage over the other algorithms in terms of convergence speed. For functions f1–f10, the SFOAL converges faster and accurately finds the best solution, showing a powerful global search ability while avoiding premature convergence.
The ANOVA test results for CEC2021 are presented in Figure 6. The figure shows that, for functions f1–f10, the SFOAL shows extremely strong stability and can accurately find the best solution. Among the compared algorithms, only the original SFOA is comparable to it in terms of stability, and the other algorithms do not show such strong stability on all functions. In summary, for the CEC2021 test set, the SFOAL with the joint strategy maintains the stability and global optimization ability of the SFOA while greatly improving the convergence speed, indicating that the joint strategy can enhance the search ability of the algorithm.

5. Ultra-Wideband Indoor Localization

5.1. The Ultra-Wideband Localization Principle

Ultra-wideband (UWB) indoor positioning is achieved by arranging four positioning base stations with known coordinates indoors, as shown in Figure 7. The person or piece of equipment to be positioned carries a positioning tag. The tag transmits pulses at a certain frequency and continuously measures its distances to the four base stations with known positions.
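To make the principle concrete, the sketch below estimates a tag position from four measured ranges using classical linear least squares (trilateration). This is only an illustration of the geometry, not the paper's method, which refines the measured positions with a BP neural network; the 2-D layout and noise-free ranges are assumptions.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position from ranges to known anchors: subtracting the first
    anchor's range equation from the others yields a linear system in the position."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0, 0), (0, 5), (5, 0), (5, 5)]      # base-station layout as in Section 5.4 (2-D view)
true_pos = np.array([2.0, 3.0])                 # hypothetical tag position
ranges = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(trilaterate(anchors, ranges))             # approximately [2. 3.]
```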

5.2. Back Propagation Neural Network

The BP neural network is a feedforward neural network trained by error back propagation, consisting of an input layer, several hidden layers, and an output layer. It involves the following two processes: forward propagation and back propagation. During forward propagation, the input data start from the input layer, successively undergo calculations in the hidden layers, and finally arrive at the output layer. At each layer, a neuron computes a weighted sum of its input signals and then applies a nonlinear activation function to generate an output, which is passed to the next layer. In back propagation, the error at the output layer, i.e., the difference between the predicted value and the actual value, is computed. The error is then propagated backwards from the output layer to the input layer, and the connection weights between the neurons in each layer are adjusted according to the error so that it gradually decreases. Here, a simplified three-layer network model is utilized, as shown in Figure 8.
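A minimal NumPy sketch of the forward and backward passes described above follows. The 4-16-2 architecture (four measured ranges in, a 2-D position out), the tanh activation, and the learning rate are assumptions for illustration, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.1, (4, 16)), np.zeros(16)   # input -> hidden
W2, b2 = rng.normal(0, 0.1, (16, 2)), np.zeros(2)    # hidden -> output

def forward(X):
    H = np.tanh(X @ W1 + b1)          # hidden-layer activations
    return H, H @ W2 + b2             # linear output layer

def train_step(X, Y, lr=0.01):
    """One forward pass plus one back propagation step with mean-squared-error loss."""
    global W1, b1, W2, b2
    H, Y_hat = forward(X)
    err = Y_hat - Y                                   # output-layer error
    gW2, gb2 = H.T @ err / len(X), err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)                # backpropagate through tanh
    gW1, gb1 = X.T @ dH / len(X), dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return float((err ** 2).mean())

X, Y = rng.random((32, 4)), rng.random((32, 2))       # toy ranges and coordinates
for _ in range(200):
    loss = train_step(X, Y)
print(f"final training MSE: {loss:.4f}")
```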

5.3. Hybrid Positioning Algorithm

The hybrid BP algorithm is a new method that combines the SFOA or SFOAL with the BP neural network. It combines the broad yet targeted exploration capability of the starfish algorithm with the nonlinear fitting and approximation capability of the BP neural network to enhance the optimization ability and prediction accuracy of the algorithm.
The algorithm follows the following steps:
  • Position initialization: The initialization method of the starfish algorithm is used to determine the starting position.
  • Input data determination: The starfish position is used as the input data for the BP neural network training and prediction.
  • BP neural network training: The difference between the expected output and the actual output is used as the loss function. Back propagation is used to adjust the weights and bias of the network to minimize the error.
  • Starfish position update: The position of the starfish is updated according to the update strategy of the starfish algorithm and the results of the BP network training.
  • Iterative execution: Steps 2 to 4 are repeated until the termination condition is met.
The hybrid algorithm effectively uses the global search capability to find the initial position of the optimal solution by combining the SFOA or SFOAL with the BP neural network, and it improves accuracy and precision through BP network training. This hybrid algorithm provides an excellent optimization and prediction performance for complex problems. Using hybrid algorithms for UWB positioning can improve positioning accuracy and stability.
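The sketch below illustrates the hybrid idea in simplified form: a candidate solution encodes the BP network's weights and thresholds, and its fitness is the training error. The population-based perturbation search used here is only a stand-in for the SFOA/SFOAL update rules, and the synthetic range-coordinate data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train, Y_train = rng.random((64, 4)), rng.random((64, 2))   # synthetic ranges -> coordinates

shapes = [(4, 8), (8,), (8, 2), (2,)]                          # W1, b1, W2, b2 of a small BP network
n_params = sum(int(np.prod(s)) for s in shapes)

def unpack(vec):
    parts, i = [], 0
    for s in shapes:
        k = int(np.prod(s))
        parts.append(vec[i:i + k].reshape(s))
        i += k
    return parts

def fitness(vec):
    """Training MSE of the BP network whose weights and thresholds are encoded in vec."""
    W1, b1, W2, b2 = unpack(vec)
    Y_hat = np.tanh(X_train @ W1 + b1) @ W2 + b2
    return float(((Y_hat - Y_train) ** 2).mean())

# Stand-in for the SFOA/SFOAL search over initial weights (NOT the starfish update rules).
population = rng.normal(0.0, 0.5, (30, n_params))
best = min(population, key=fitness)
for _ in range(300):
    candidate = best + rng.normal(0.0, 0.1, n_params)
    if fitness(candidate) < fitness(best):
        best = candidate
print(f"MSE of the optimized initial weights: {fitness(best):.4f}")
# 'best' would then seed ordinary BP training (step 3 of the list above).
```

In the actual SFOAL-BP model, the starfish updates of Sections 2 and 3 would replace the simple perturbation loop, and gradient-based BP training would then refine the returned weights.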
Figure 9 describes the workflow of the hybrid algorithm. First, the dataset is divided into a training set and a validation set. Subsequently, random values are assigned to the weights and thresholds of the BP neural network, and the training set is input into the network for training. The SFOA (or SFOAL) searches for the globally optimal weights and thresholds through its random and local search strategies. Finally, a BP neural network model optimized by the starfish algorithm is obtained. The test dataset is then input into the final model to predict its results. To measure the accuracy and reliability of the model, the predicted results are compared with the true coordinates, and the prediction error is calculated to evaluate the performance of the model. Through these steps, the hybrid BP model is trained, optimized, and evaluated, making it suitable for predicting and optimizing positioning data.

5.4. Line-of-Sight (LOS) Scenario

To verify the accuracy of UWB positioning and the effectiveness of the algorithm and provide necessary data for subsequent research, an experimental environment was built, systematic experiments were conducted, and the performance and practical feasibility of UWB positioning were evaluated. A line-of-sight (LOS) scenario was built here.
To simulate the indoor positioning scenario, four UWB base stations were installed in the designated area, each with a height of 2.2 m and located at different coordinates, (0,0), (0,5), (5,0), and (5,5). These base stations were used to provide known location information for indoor positioning. The height of the tag was set to 1.8 m. In this environment, the UWB base station provided a stable reference system for determining the location of the target object. By cooperating with the upper-level computer software, the real-time location of the UWB tag could be obtained, thereby recording and analyzing the indoor positioning data. Figure 10 shows the experimental scenario.
Figure 11a,b show the test results of traditional UWB positioning technology. The black asterisks in Figure 11a are the actual positions, and the red diamonds are the UWB positioning measurements. The figure shows that the overlap between the two is low, indicating that traditional UWB positioning has a large positioning error. First, we made a detailed comparison between the positioning measurements and the actual coordinate positions to confirm that the algorithm needed to be optimized to improve the positioning accuracy. As Figure 11b shows, there are obvious errors in the UWB positioning results, and the positioning accuracy of some points does not reach the centimeter level. This phenomenon is very common in traditional UWB positioning and indicates large fluctuations or deviations. To address this problem, the sample set was divided into a training dataset and a test dataset, which were then input into the BP neural network model and the hybrid BP models for position prediction. Figure 11c,d show the test error and the average position error of the sample points, respectively. In Figure 11c, the blue line is the BP network test result, the green line is the SFOAL-BP hybrid model test result, and the red line is the SFOA-BP hybrid model test result. For most sample points, the position error of the SFOAL-BP hybrid model is the smallest, with only a few points having large errors. The error of SFOA-BP is greater than that of the SFOAL-optimized model, but it still has a clear advantage over the BP model. As shown in Figure 11d, the average position error of SFOAL-BP is 0.0814 m, the smallest among the three models, and the SFOA-BP model ranks second (with an average position error of 0.0136 m). This shows that the SFOAL with joint-strategy optimization can effectively jump out of local optima and obtain a stronger global optimization capability.
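The average position error reported in Figure 11d is presumably the mean Euclidean distance between predicted and true coordinates; a small sketch with hypothetical values follows.

```python
import numpy as np

def mean_position_error(pred, true):
    """Average Euclidean distance between predicted and true 2-D positions."""
    pred, true = np.asarray(pred, dtype=float), np.asarray(true, dtype=float)
    return float(np.linalg.norm(pred - true, axis=1).mean())

# Hypothetical predictions vs. ground truth for three test points (metres).
pred = [[1.02, 2.05], [3.10, 0.95], [4.01, 4.08]]
true = [[1.00, 2.00], [3.00, 1.00], [4.00, 4.00]]
print(f"average position error: {mean_position_error(pred, true):.4f} m")
```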

6. Conclusions and Future Work

In summary, this study modifies the SFOA by combining sine chaotic mapping, t-distribution mutation, and the logarithmic spiral opposition-based learning strategy to obtain strong global optimization ability and stability. A total of 23 basic functions, the CEC2021 function set, and an ultra-wideband indoor positioning problem are used to evaluate the exploitation, exploration, and convergence abilities of the SFOAL. Testing on the unimodal functions shows that the joint strategy enhances the local optimization ability of the algorithm, and the SFOAL has the fastest convergence speed compared with the SFOA and the other algorithms. Testing on the multimodal functions and the fixed-dimension multimodal functions clearly shows that the SFOAL has a stronger global optimization ability and a smaller standard deviation than the other algorithms, indicating that the joint strategy prevents the SFOAL from falling into local convergence too early and effectively improves stability. The results on the UWB line-of-sight positioning problem show that the SFOAL-BP neural network has the smallest average position error compared with the randomly initialized BP neural network and the SFOA-BP neural network, and it can be used to solve practical application problems.
In future studies, by mixing the SFOA with other algorithms, it may be possible to further improve the algorithm’s global optimization ability, avoid premature convergence, improve the algorithm’s stability, and use it for solving more functions. The enhanced SFOAL can be applied to artificial intelligence and machine learning. Through reinforcement learning algorithms, different network layers, the number of neurons, the convolution kernel size, and other architectural parameters can be explored to find the neural network architecture with the best performance in specific tasks (such as image classification, speech recognition, etc.), reducing the time and workload required for manually designing network architectures.

Author Contributions

Conceptualization, Y.L. and Z.L.; methodology, H.L. and X.Z.; software, C.J.; formal analysis, Y.L.; investigation, Y.Y.; writing—original draft preparation, Y.L.; writing—review and editing, L.L.; supervision, Z.L.; funding acquisition, Y.L., C.J., Z.L., W.P. and M.F. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Natural Science Key Project of West Anhui University (WXZR202307, WXZR202312), the Open Project of Anhui Dabieshan Academy of Traditional Chinese Medicine (TCMADM-2024-07), the High-level Talent Start-up Project of West Anhui University (WGKQ2023003, WGKQ2021052), the Horizontal Project of West Anhui University (0045022006, 0045022007, 0045023151), and the University Innovation Team Project of the Department of Education Anhui Province (2023AH010078).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All relevant data are within the paper.

Conflicts of Interest

The authors declare that they have no competing financial interests or personal relationships that could have appeared to influence the work reported in this study.

Appendix A

Table A1. Unimodal functions.
$F_1(x) = \sum_{i=1}^{D} x_i^2$; Range: [−100, 100]; f_min = 0
$F_2(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$; Range: [−10, 10]; f_min = 0
$F_3(x) = \sum_{i=1}^{D} \left( \sum_{j=1}^{i} x_j \right)^2$; Range: [−100, 100]; f_min = 0
$F_4(x) = \max_i \{ |x_i| \}$; Range: [−100, 100]; f_min = 0
$F_5(x) = \sum_{i=1}^{D-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$; Range: [−30, 30]; f_min = 0
$F_6(x) = \sum_{i=1}^{D} \left( \lfloor x_i + 0.5 \rfloor \right)^2$; Range: [−100, 100]; f_min = 0
$F_7(x) = \sum_{i=1}^{D} i\, x_i^4 + \mathrm{random}[0, 1)$; Range: [−1.28, 1.28]; f_min = 0
Table A2. Multimodal functions.
$F_8(x) = \sum_{i=1}^{D} -x_i \sin\!\left(\sqrt{|x_i|}\right)$; Range: [−500, 500]; f_min = −418.9829 × D
$F_9(x) = \sum_{i=1}^{D} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$; Range: [−5.12, 5.12]; f_min = 0
$F_{10}(x) = -20\exp\!\left(-0.2\sqrt{\tfrac{1}{D}\sum_{i=1}^{D} x_i^2}\right) - \exp\!\left(\tfrac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right) + 20 + e$; Range: [−32, 32]; f_min = 0
$F_{11}(x) = \tfrac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\!\left(\tfrac{x_i}{\sqrt{i}}\right) + 1$; Range: [−600, 600]; f_min = 0
$F_{12}(x) = \tfrac{\pi}{D}\left\{ 10\sin^2(\pi y_1) + \sum_{i=1}^{D-1} (y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_D - 1)^2 \right\} + \sum_{i=1}^{D} u(x_i, 10, 100, 4)$, where $y_i = 1 + \tfrac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \leq x_i \leq a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$; Range: [−50, 50]; f_min = 0
$F_{13}(x) = 0.1\left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{D-1}(x_i - 1)^2\left[1 + \sin^2(3\pi x_{i+1})\right] + (x_D - 1)^2\left[1 + \sin^2(2\pi x_D)\right] \right\} + \sum_{i=1}^{D} u(x_i, 5, 100, 4)$; Range: [−50, 50]; f_min = 0
Table A3. Fixed-dimension multimodal functions.
$F_{14}(x) = \left( \tfrac{1}{500} + \sum_{j=1}^{25} \tfrac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6} \right)^{-1}$; Range: [−65, 65]; f_min = 1
$F_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \tfrac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$; Range: [−5, 5]; f_min = 0.0003
$F_{16}(x) = 4x_1^2 - 2.1x_1^4 + \tfrac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$; Range: [−5, 5]; f_min = −1.0316
$F_{17}(x) = \left( x_2 - \tfrac{5.1}{4\pi^2}x_1^2 + \tfrac{5}{\pi}x_1 - 6 \right)^2 + 10\left(1 - \tfrac{1}{8\pi}\right)\cos x_1 + 10$; Range: [−5, 5], [10, 15]; f_min = 0.3979
$F_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2) \right] \times \left[ 30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2) \right]$; Range: [−2, 2]; f_min = 3
$F_{19}(x) = -\sum_{i=1}^{4} c_i \exp\!\left( -\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2 \right)$; Range: [0, 1]; f_min = −3.86
$F_{20}(x) = -\sum_{i=1}^{4} c_i \exp\!\left( -\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2 \right)$; Range: [0, 1]; f_min = −3.32
$F_{21}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$; Range: [0, 10]; f_min = −10.1532
$F_{22}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$; Range: [0, 10]; f_min = −10.4029
$F_{23}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$; Range: [0, 10]; f_min = −10.5364

References

  1. Funke, J. Solving complex problems: Exploration and control of complex systems. In Complex Problem Solving; Psychology Press: New York, NY, USA, 2014; pp. 185–222. [Google Scholar]
  2. Herskovits, J. A view on nonlinear optimization. In Advances in Structural Optimization; Springer: Berlin/Heidelberg, Germany, 1995; pp. 71–116. [Google Scholar]
  3. Lootsma, F.A.; Ragsdell, K.M. State-of-the-art in parallel nonlinear optimization. Parallel Comput. 1988, 6, 133–155. [Google Scholar] [CrossRef]
  4. Schlenkrich, M.; Parragh, S.N. Solving large scale industrial production scheduling problems with complex constraints: An overview of the state-of-the-art. Procedia Comput. Sci. 2023, 217, 1028–1037. [Google Scholar] [CrossRef]
  5. Sujith, R.I.; Unni, V.R. Dynamical systems and complex systems theory to study unsteady combustion. Proc. Combust. Inst. 2021, 38, 3445–3462. [Google Scholar] [CrossRef]
  6. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovský, P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023, 259, 110011. [Google Scholar] [CrossRef]
  7. Houssein, E.H.; Oliva, D.; Samee, N.A.; Mahmoud, N.F.; Emam, M.M. Liver Cancer Algorithm: A novel bio-inspired optimizer. Comput. Biol. Med. 2023, 165, 107389. [Google Scholar] [CrossRef]
  8. Liu, Y.; Hu, L.; Ma, Z. A new adaptive differential evolution algorithm fused with multiple strategies for robot path planning. Arab. J. Sci. Eng. 2024, 49, 11907–11924. [Google Scholar] [CrossRef]
  9. Peraza-Vázquez, H.; Peña-Delgado, A.F.; Echavarría-Castillo, G.; Morales-Cepeda, A.B.; Velasco-Álvarez, J.; Ruiz-Perez, F. A bio-inspired method for engineering design optimization inspired by dingoes hunting strategies. Math. Probl. Eng. 2021, 2021, 9107547. [Google Scholar] [CrossRef]
  10. Trojovská, E.; Dehghani, M.; Trojovský, P. Zebra optimization algorithm: A new bio-inspired optimization algorithm for solving optimization algorithm. IEEE Access 2022, 10, 49445–49473. [Google Scholar] [CrossRef]
  11. Alreffaee, M.A.-A.A.T.R. Exploring Ant Lion optimization Algorithm to Enhance the Choice of an Appropriate Software Reliability Growth Model. Int. J. Comput. Appl. 2018, 182, 1–8. [Google Scholar]
  12. Wu, H.; Gao, Y.; Wang, W.; Zhang, Z. A hybrid ant colony algorithm based on multiple strategies for the vehicle routing problem with time windows. Complex Intell. Syst. 2021, 9, 2491–2508. [Google Scholar] [CrossRef]
  13. Yang, X.-S. Firefly algorithms for multimodal optimization. In Proceedings of the International Symposium on Stochastic Algorithms, Sapporo, Japan, 26–28 October 2009; pp. 169–178. [Google Scholar]
  14. Ang, K.M.; Lim, W.H.; Isa, N.A.M.; Tiang, S.S.; Wong, C.H. A constrained multi-swarm particle swarm optimization without velocity for constrained optimization problems. Expert Syst. Appl. 2020, 140, 112882. [Google Scholar] [CrossRef]
  15. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  16. Tang, J.; Liu, G.; Pan, Q. A Review on Representative Swarm Intelligence Algorithms for Solving Optimization Problems: Applications and Trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643. [Google Scholar] [CrossRef]
  17. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  18. Lien, L.-C.; Cheng, M.-Y. A hybrid swarm intelligence based particle-bee algorithm for construction site layout optimization. Expert Syst. Appl. 2012, 39, 9642–9650. [Google Scholar] [CrossRef]
  19. Zhang, W.; Wang, N.; Yang, S. Hybrid artificial bee colony algorithm for parameter estimation of proton exchange membrane fuel cell. Int. J. Hydrogen Energy 2013, 38, 5796–5806. [Google Scholar] [CrossRef]
  20. Pan, J.-S.; Liu, N.; Chu, S.-C.; Lai, T. An efficient surrogate-assisted hybrid optimization algorithm for expensive optimization problems. Inf. Sci. 2021, 561, 304–325. [Google Scholar] [CrossRef]
  21. Sangeetha, J.; Kumaran, U. A hybrid optimization algorithm using BiLSTM structure for sentiment analysis. Meas. Sens. 2023, 25, 100619. [Google Scholar] [CrossRef]
  22. Yıldız, B.S.; Kumar, S.; Panagant, N.; Mehta, P.; Sait, S.M.; Yildiz, A.R.; Pholdee, N.; Bureerat, S.; Mirjalili, S. A novel hybrid arithmetic optimization algorithm for solving constrained optimization problems. Knowl.-Based Syst. 2023, 271, 110554. [Google Scholar] [CrossRef]
  23. Li, T.; Sun, J.; Wang, L. An intelligent optimization method of motion management system based on BP neural network. Neural Comput. Appl. 2021, 33, 707–722. [Google Scholar] [CrossRef]
  24. Wang, L.; Pan, Z.; Wang, J. A review of reinforcement learning based intelligent optimization for manufacturing scheduling. Complex Syst. Model. Simul. 2021, 1, 257–270. [Google Scholar] [CrossRef]
  25. Yan, P.; Shang, S.; Zhang, C.; Yin, N.; Zhang, X.; Yang, G.; Zhang, Z.; Sun, Q. Research on the Processing of Coal Mine Water Source Data by Optimizing BP Neural Network Algorithm With Sparrow Search Algorithm. IEEE Access 2021, 9, 108718–108730. [Google Scholar] [CrossRef]
  26. Zheng, Y.; Lv, X.; Qian, L.; Liu, X. An optimal BP neural network track prediction method based on a GA–ACO hybrid algorithm. J. Mar. Sci. Eng. 2022, 10, 1399. [Google Scholar] [CrossRef]
  27. Cheng, Y.-N.; Jin, Y.-B.; Gai, X.-Y.; Guan, R.; Lu, M.-D. Prediction of tool wear in milling process based on BP neural network optimized by firefly algorithm. Proc. Inst. Mech. Eng. Part E J. Process Mech. Eng. 2024, 238, 2387–2403. [Google Scholar] [CrossRef]
  28. Agarwal, T.; Kumar, V. A systematic review on bat algorithm: Theoretical foundation, variants, and applications. Arch. Comput. Methods Eng. 2022, 29, 2707–2736. [Google Scholar] [CrossRef]
  29. Fister, I.; Fister, I., Jr.; Yang, X.-S.; Brest, J. A comprehensive review of firefly algorithms. Swarm Evol. Comput. 2013, 13, 34–46. [Google Scholar] [CrossRef]
  30. Sarumaha, Y.A.; Firdaus, D.R.; Moridu, I. The Application of Artificial Bee Colony Algorithm to Optimizing Vehicle Routes Problem. J. Inf. Syst. Technol. Eng. 2023, 1, 11–15. [Google Scholar] [CrossRef]
  31. Zhong, C.; Li, G.; Meng, Z.; Li, H.; Yildiz, A.R.; Mirjalili, S. Starfish optimization algorithm (SFOA): A bio-inspired metaheuristic algorithm for global optimization compared with 100 optimizers. Neural Comput. Appl. 2025, 37, 3641–3683. [Google Scholar] [CrossRef]
  32. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  33. Hamad, R.K.; Rashid, T.A. GOOSE algorithm: A powerful optimization tool for real-world engineering challenges and beyond. Evol. Syst. 2024, 15, 1249–1274. [Google Scholar] [CrossRef]
  34. Mirjalili, S.; Mirjalili, S. Genetic algorithm. In Evolutionary Algorithms and Neural Networks: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 43–55. [Google Scholar]
  35. Bouaouda, A.; Hashim, F.A.; Sayouti, Y.; Hussien, A.G. Pied kingfisher optimizer: A new bio-inspired algorithm for solving numerical optimization and industrial engineering problems. Neural Comput. Appl. 2024, 36, 15455–15513. [Google Scholar] [CrossRef]
  36. Lang, J.Y.; Gao, Y. Dream Optimization Algorithm (DOA): A novel metaheuristic optimization algorithm inspired by human dreams and its applications to real-world engineering problems. Comput. Methods Appl. Mech. Eng. 2025, 436, 117718. [Google Scholar] [CrossRef]
  37. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  38. Ouyang, K.; Fu, S.; Chen, Y.; Cai, Q.; Heidari, A.A.; Chen, H. Escape: An optimization method based on crowd evacuation behaviors. Artif. Intell. Rev. 2024, 58, 19. [Google Scholar] [CrossRef]
Figure 1. The flowcharts of the SFOA and SFOAL. (a) SFOA flowchart and (b) SFOAL flowchart.
Figure 2. Convergence graphs of all functions.
Figure 3. ANOVA test of all functions.
Figure 4. Radar and ranking charts for benchmark functions. (a) Radar chart and (b) ranking chart.
Figure 5. Convergence graphs of CEC2021 test functions.
Figure 6. ANOVA test of CEC2021 test functions.
Figure 7. The diagram of the UWB positioning principle.
Figure 8. BP neural network structure diagram.
Figure 9. Flowchart of the hybrid algorithm optimizing the BP neural network.
Figure 10. Actual test scene diagram.
Figure 11. Line-of-sight positioning test results. (a) UWB measured position, (b) positioning error, (c) prediction errors, and (d) average positioning errors.
Table 1. Results of unimodal benchmark functions.
fResultsSFOALSFOAGWOGOOSEGAPKODOACOABWOESC
F1min008.73 × 10−290.00594850.20.00020.022800.00070.0001
std001.75 × 10−2757.474627.30.07190.025600.00477.84 × 10−05
avg009.93 × 10−2813.2612,149.30.06820.059500.00600.0001
F2min001.50 × 10−170.463426.050.00020.05507.1 × 10−1920.00860.0235
std006.96 × 10−17781.59.200.07330.014200.01950.0081
avg009.44 × 10−17243.539.180.03250.0824.31 × 10−1830.03170.0351
F3min001.89 × 10−091868.221,483.7594.721679.800.0743609.5
std003.06 × 10−052035.39322.81241.69957.801.991168.7
avg001.36 × 10−054701.835,837.12181.473238.602.292820.6
F4min003.59 × 10−080.147259.5262.722.502.38 × 10−1950.00551.13
std008.80 × 10−0721.796.083.2270.9200.00584.73
avg006.66 × 10−0723.6274.155.964.141.28 × 10−1790.01475.81
F5min1.09 × 10−0728.159525.8325.5841,406.628.2029.981.32 × 10−060.001027.55
std0.00020.2571790.7512154.0761,2151.4288.7945.050.40990.0182109.4
avg0.000128.631226.96121.8448,6956.2177.56108.640.18650.0243105.1
F6min2.47 × 10−070.2856450.25030.00583390.227.70 × 10−050.03270.00010.02896.16 × 10−05
std0.00060.37360.365060.606788.070.05750.02510.01040.51320.0001
avg0.00050.80340.862617.6610,953.750.03930.06920.00810.56190.0001
F7min1.07 × 10−051.29 × 10−050.00070.04440.15390.00790.15193.30 × 10−062.23 × 10−050.0041
std0.00010.00180.00110.05731.090.02620.26740.00010.00030.0043
avg0.000140.00160.00200.11591.070.04400.41210.00010.00060.0106
Table 2. Results of multimodal functions.
fResultsSFOALSFOAGWOGOOSEGAPKODOA COABWOESC
F8min−12,569−7380−7879−7966−3092−7951−12,332−12,569−12,569−11,868
std778.1856.948703.4572.1588.66363.33280.40.18645.66425.1
avg−12,033−5800−6308−6945−2115−6984−11,900−12,569−12,562−11,146
F9min005.68 × 10−1487.4201.04.741.030700.000210.8
std003.9737.7631.2421.31.8400.00184.67
avg002.52165.83256.233.04.2500.001917.9
F10min4.44 × 10−164.44 × 10−167.50 × 10−140.097219.40.00180.04794.44 × 10−160.00150.0064
std002.11 × 10−149.080.330510.00.013500.01020.8686
avg4.44 × 10−164.44 × 10−161.06 × 10−1312.6420.111.90.07334.44−160.01731.04
F11min0000.000527.74.47 × 10−050.062600.00171.35 × 10−05
std000.0074209.062.80.09020.038600.01550.1716
avg000.0038274.8114.30.06520.130200.01650.1161
F12min1.25 × 10−070.00810.01295.331.578.72 × 10−058.02 × 10−051.28 × 10−075.87 × 10−051.08 × 10−06
std1.21 × 10−050.018550.11925.654.822.660.04630.00010.00050.1026
avg1.06 × 10−050.03060.068715.799.971.470.01059.99 × 10−050.00070.0366
F13min1.39 × 10−070.34180.29930.002139.30.00160.00116.22 × 10−061.95 × 10−051.19 × 10−05
std0.00010.60500.213010.71969,20715.10.00410.00330.00010.6936
avg0.0001050.99000.5839095.46548,6057.220.00350.00120.00010.3416
Table 3. Results of multimodal functions with fixed variables (F14–F18).
fResultsSFOALSFOAGWOGOOSEGAPKODOACOABWOESC
F14min0.9980.9980.9981.990.9980.9980.9980.9980.9980.998
std2.48−151.64 × 10−145.055.463.33 × 10−1002.62 × 10−142.19−102.0555.09 × 10−17
avg0.99800.9985.7811.50.9980.9980.9980.99805.060.998
F15min0.00030.00030.00030.00030.00080.00050.00030.00030.00030.0003
std2.33−100.00020.00020.01510.01370.00020.00430.00025.03 × 10−050.0001
avg0.00030.00030.00040.00750.01420.00090.00190.00050.00040.0007
F16min−1.03−1.03−1.03−1.03−1.03−1.03−1.03−1.03−1.03−1.03
std7.41 × 10−099.74 × 10−111.48−080.29890.1142.27 × 10−162.86−103.80 × 10−060.00031.76 × 10−16
avg−1.03−1.03−1.03−0.9092−0.9720−1.03−1.03−1.03−1.03−1.031
F17min0.39780.39780.39780.39780.39780.39780.39780.39780.39820.3978
std7.79 × 10−085.13 × 10−063.37 × 10−063.07 × 10−107.72 × 10−0500.00013.10 × 10−100.00600
avg0.39780.39780.39780.39780.39790.39780.39790.39780.40410.3978
F18min333333333.253
std4.18 × 10−132.43 × 10−1518.125.199154170.00047.13 × 10−166.030.001211.14.66 × 10−16
avg337.0512.4334.35318.73
Table 4. Statistical results of multimodal functions with fixed variables (F19–F23).
fResultsSFOALSFOAGWOGOOSEGAPKODOACOABWOESC
F19min−3.86−3.86−3.86−3.86−3.83−3.86−3.86−3.86−3.80−3.86
std2.39 × 10−102.30 × 10−150.00211.04 × 10−060.43532.21 × 10−150.00010.01820.06822.27 × 10−15
avg−3.86−3.86−3.86−3.86−3.3−3.86−3.86−3.85−3.71−3.86
F20min−3.32−3.32−3.32−3.32−2.41−3.32−3.32−3.2−3.15−3.32
std0.02650.04810.07100.06060.46050.04870.00140.39490.40790.0581
avg−3.31−3.29−3.26−3.24−1.69−3.29−3.31−2.72−2.75−3.28
F21min−10.15−10.15−10.15−10.15−2.73−10.15−10.07−10.15−10.15−10.15
std3.69 × 10−071.121.562.260.50362.092.370.00010.01623.39
avg−10.15−9.90−9.64−4.50−1.57−9.13−7.51−10.15−10.13−7.23
F22min−10.40−10.40−10.40−10.40−3.88−10.40−10.33−10.40−10.39−10.40
std2.98 × 10−074.08 × 10−061.702.700.94402.181.990.00010.02331.51
avg−10.40−10.40−10.01−4.57−1.87−9.33−8.89−10.40−10.37−9.99
F23min−10.53−10.53−10.53−10.53−3.8−10.53−10.51−10.53−10.53−10.53
std7.06 × 10−060.00020.00103.600.71171.202.438.45 × 10−050.01552.59
avg−10.53−10.53−10.53−4.48−1.88−10.26−8.59−10.53−10.51−9.27
Table 5. Wilcoxon rank sum test results.
fSFOAGWOGOOSEGAPKODOACOABWOESC
F118.00 × 10−098.00 × 10−098.00 × 10−098.00 × 10−098.00 × 10−0918.00 × 10−098.00 × 10−09
F218.00 × 10−098.00−098.00 × 10−098.00 × 10−098.00 × 10−098.00 × 10−098.00 × 10−098.00 × 10−09
F318.00 × 10−098.00 × 10−098.00 × 10−098.00 × 10−098.00 × 10−0918.00 × 10−098.00 × 10−09
F418.00 × 10−098.00−098.00 × 10−098.00 × 10−098.00 × 10−098.00 × 10−098.00 × 10−098.00 × 10−09
F56.79 × 10−086.79 × 10−086.79 × 10−086.79 × 10−086.79 × 10−086.79 × 10−082.35 × 10−066.79 × 10−086.79 × 10−08
F66.79 × 10−086.79 × 10−086.79−086.79 × 10−080.01236.79 × 10−080.00016.79 × 10−080.0531
F70.00016.79 × 10−086.79 × 10−086.79 × 10−086.7 × 10−086.79 × 10−080.92455.89 × 10−056.7 × 10−08
F86.79 × 10−086.79 × 10−086.79 × 10−086.79 × 10−086.79 × 10−080.08104.53 × 10−0710.0001
F917.71 × 10−098.00 × 10−098.00 × 10−098.00 × 10−098.00 × 10−0918.00 × 10−098.00 × 10−09
F1017.78 × 10−098.00 × 10−098.00 × 10−098.00 × 10−098.00 × 10−0918.00 × 10−098.00 × 10−09
F1110.00958.00 × 10−098.00 × 10−098.00 × 10−098.00 × 10−0918.00 × 10−098.00 × 10−09
F126.79−086.79 × 10−086.79 × 10−086.79 × 10−086.79 × 10−086.79 × 10−080.02076.79 × 10−080.1895
F136.79 × 10−086.79 × 10−086.79 × 10−086.79 × 10−086.79 × 10−086.79 × 10−080.00270.02560.0001
F140.58166.10× 10−086.01 × 10−086.19 × 10−086.97 × 10−094.40 × 10−076.71 × 10−066.10× 10−089.86 × 10−09
F150.00056.79−086.79 × 10−086.79 × 10−086.7 × 10−086.79 × 10−086.79 × 10−086.79 × 10−086.79 × 10−08
F160.13283.06 × 10−060.00166.79 × 10−088.0 × 10−090.00972.92 × 10−056.79−083.95 × 10−08
F170.00302.21 × 10−071.20 × 10−066.79 × 10−082.99 × 10−081.91 × 10−075.25 × 10−056.79 × 10−082.99 × 10−08
F186.43 × 10−086.79 × 10−086.79 × 10−086.79 × 10−082.93 × 10−086.79 × 10−086.79 × 10−086.79 × 10−083.93 × 10−08
F194.45 × 10−086.79−086.79 × 10−086.79 × 10−081.94 × 10−086.79 × 10−086.79 × 10−086.79−088.00 × 10−09
F200.22843.41 × 10−072.2 × 10−076.79 × 10−080.00071.20 × 10−069.17 × 10−086.79 × 10−080.1257
F210.45696.79 × 10−086.7 × 10−086.79 × 10−080.0016.79 × 10−083.70 × 10−056.79 × 10−080.5963
F220.79716.79−087.8 × 10−086.79 × 10−080.00096.79 × 10−086.79 × 10−086.79−086.03 × 10−06
F230.35076.79 × 10−081.06 × 10−076.79 × 10−083.53 × 10−076.79 × 10−081.06−076.79 × 10−080.0011
Table 6. Results of CEC2021.
fResultsSFOALSFOAGWOGOOSEGAPKODOACOABWOESC
f1min007.59 × 10−32912.912,604,4150.00543679.3082.624.78
std001.00 × 10−281612.1124,214,368601.31765.304562.0516.6
avg003.40 × 10−292474.080,788,533408.14956.402851.3303.7
f2min001.8 × 10−120.01663264.9326.91.8900.00331.93
std0010.221707.1488.2513.037.7000.014313.42
avg009.012103.44110.11087.118.6800.022116.33
f3min002.40 × 10−2918.91105.26.602.0600.002123.57
std0053.26645.751.128.359.1700.01104.95
avg0088.19351.0180.237.5413.8200.013929.05
f4min0001.264.330.06530.753803.10 × 10−092.26
std000.439831.631.481.860.365902.13 × 10−081.12
avg000.253546.707.263.851.3402.54 × 10−083.52
f5min007.38 × 10−171549.18115.223.750.4179015.262.23
std0013.14170,910172,13170.243.730428.2197.95
avg0011.09145,504115,80177.572.641.00 × 10−279327.390.73
f6min0000.08320.958929.934.310.4496−1.11 × 10−160.05201.01
std0026.49666.9159.058.492.903.51 × 10−170.05803.80
avg0012.46671.5194.845.743.10−1.11 × 10−170.10376.82
f7min000.0631452.51550.91.571.15−1.11 × 10−1624.001.06
std003.8722,54470,05039.710.75214.68 × 10−17105.940.04
avg002.2112,52164,96521.611.92−2.22 × 10−17150.817.31
f8min0000.0291424.62.92 × 10−060.043800.00329.88
std0001934.11317.516.324.600.050623.65
avg0002233.92861.614.242.2800.082429.55
f9min003.55 × 10−140.075771.310.00010.17258776700.05350.003
std007.48 × 10−1564.3542.260.00680.07403600900.09430.0037
avg004.97 × 10−1431.39122.500.00530.25950661500.16720.0078
f10min0052.780.126470.8249.9149.0533418900.135748.85
std0012.5354.5122.932.028.68086625500.101816.79
avg0077.83107.1101.8751.5653.0652258500.250062.39
