Optimal Performance and Application for Seagull Optimization Algorithm Using a Hybrid Strategy

This paper presents a novel hybrid algorithm named SPSOA to address the low search capability of the seagull optimization algorithm and its tendency to fall into local optima. Firstly, the Sobol sequence, one of the low-discrepancy sequences, is used to initialize the seagull population to enhance the population's diversity and ergodicity. Then, inspired by the sigmoid function, a new parameter is designed to strengthen the algorithm's ability to coordinate early exploration and late development. Finally, the particle swarm optimization learning strategy is introduced into the seagull position updating method to improve the algorithm's ability to jump out of local optima. Through simulation comparisons with other algorithms on 12 benchmark test functions examined from different angles, the experimental results show that SPSOA is superior to the other algorithms in stability, convergence accuracy, and speed. In engineering applications, SPSOA is applied to the blind source separation of mixed images. The experimental results show that SPSOA can successfully realize the blind source separation of noisy mixed images and achieves higher separation performance than the compared algorithms.


Introduction
Optimization has great importance and applications and is used to address complex issues to reduce computational cost, increase accuracy, and enhance performance, particularly in the field of engineering. Optimization aims to maximize efficiency, performance, and productivity through calculation under certain constraints [1]. Traditional optimization methods such as Newton and conjugate gradient methods can only deal with simple, continuously differentiable, or high-order differentiable objective functions [2,3]. With the increasing diversity and complexity of problems, the traditional optimization algorithms cannot meet the requirements of high computing speed and low error rate. Therefore, it is of great practical significance to find new optimization methods with fast calculation speed and strong convergence ability [4].
Metaheuristic algorithms have the characteristics of self-organization, mutual compatibility, parallelism, integrity, and coordination. Such an algorithm only needs to know the objective function and the search range, and it can reach the target solution regardless of whether the search range is continuously differentiable, which provides a new way to solve optimization problems [5]. Metaheuristic algorithms belong to the stochastic optimization methods and are mainly driven by the random streams (single or multiple) utilized in the stochastic search mechanism. A recent study shows that if the randomness of the random streams of interest is deliberately controlled without disturbing its expectation, the desired effectiveness in the optimization search can eventually be gained [6]. Metaheuristic optimization algorithms are mainly divided into biological evolution, natural phenomena, and species living habits. Biological evolution methods, such as the genetic algorithm (GA) [7] and the differential evolution (DE) algorithm [8], are inspired by biological genetics, mutation, and evolution strategies. A natural phenomenon algorithm is based on the physical laws of nature, such as the sine cosine algorithm (SCA) [9] and biogeography-based optimization (BBO) [10]. The inspiration for population life habit algorithms comes from the relationships between individuals of a population, including particle swarm optimization (PSO) [11], the artificial bee colony (ABC) algorithm [12], the cuckoo search (CS) algorithm [13], the bat algorithm (BA) [14], and ant colony optimization (ACO) [15].
The seagull optimization algorithm (SOA) is a new metaheuristic algorithm inspired by species' living habits, proposed in 2019 [16]. SOA realizes global and local search by simulating the long-distance migration behavior and foraging attack behavior of seagulls. The principle of SOA is relatively simple and easy to implement, and it has been used to address several engineering problems [17][18][19]. However, due to the low search capability of the basic SOA, the algorithm easily falls into local optima. Therefore, improving SOA is an essential step toward expanding its application scope and practical value.
In recent years, scholars have proposed many improved algorithms. Che et al. [20] introduced the reciprocity mechanism and coexistence mechanism of the symbiotic organisms search (SOS) algorithm, which improved the development ability of the algorithm. Zhu et al. [21] applied the Hénon chaotic map to initialize the seagull population and combined it with differential evolution based on an adaptive formula, which improved the diversity of the seagull population. Wu et al. [22] used a chaotic tent map to initialize the population and designed a nonlinear inertia weight and a random double helix formula; experiments show that the algorithm improves the optimization accuracy and efficiency of SOA. Muthubalaji et al. [23] designed a hybrid algorithm, SOSA, which uses the advantages of the owl search algorithm (OSA) to improve the global search ability of SOA. Hu et al. [24] proposed an ISOA with higher optimization accuracy, introducing non-uniform mutation and an opposition-based learning strategy. Wang et al. [25] analyzed the parameter A of SOA in detail, presented the best advantage set theory and the idea of the Yin-Yang Pair, and proposed an improved seagull fusion algorithm, YYPSOA. Ewees et al. [26] introduced a Lévy flight strategy and a mutation operator to prevent the algorithm from falling into local optima. Wang et al. [27] introduced the opposition-based learning strategy to initialize the population and used a quantum optimization method to update the seagull population; the algorithm is effective for multi-objective optimization problems. The above references are some of the improvements scholars have made to SOA. Although these methods can improve the search performance of the algorithm and reduce premature convergence to a certain extent, most references focus only on improving a single aspect of search performance and ignore the balance between global search ability and local development ability.
This paper proposes a new SOA algorithm based on hybrid strategies named SPSOA. Firstly, the seagull population is initialized by the Sobol sequence so that the seagulls are more evenly distributed in the initial solution space. Then, through the expansion and translation of the sigmoid function, a new parameter is proposed to further enhance the algorithm's ability to coordinate the early exploration and late development. Finally, the PSO learning strategy is introduced into the updating method of seagull attack position to enhance the ability of the algorithm to jump out of local optimization. In this paper, 12 benchmark test functions are selected to test the algorithm's performance from different aspects. The experimental results show that the stability, convergence accuracy, and speed of SPSOA are better than other algorithms. In applying blind source separation (BSS), SPSOA can successfully separate noisy mixed images and has better separation performance than the compared algorithms.
The remainder of this paper is organized as follows: Section 2 discusses the details of the SOA. Section 3 addresses the SPSOA implementation. Section 4 verifies the effectiveness of SPSOA through experiments. Section 5 applies SPSOA to the problem of BSS of mixed images, and Section 6 concludes the paper and proposes future work.

The Basic Seagull Optimization Algorithm (SOA)
Seagulls have a natural ability to migrate and attack. Migration is a seasonal long-distance movement from one place to another. The initial positions of the seagulls lie in different spatial areas to avoid collisions during movement. In group migration, the fittest seagull leads the group, and the rest of the seagulls follow this leader and update their current positions during the migration. The attack manifests in the foraging process as a spiral-like movement toward the prey. SOA establishes a mathematical model of these two behaviors and iteratively seeks the optimal solution by constantly updating the seagull positions.

Migration Behavior
During migration, SOA simulates how seagulls move from one location to another. At this stage, seagulls should meet three conditions.
(1) Avoid collisions: To avoid colliding with the surrounding seagulls, SOA uses the variable A to adjust the position of each seagull:

C_S(t) = A × P_S(t) (1)
where C_S(t) represents the position of the seagull where no collision with other seagulls occurs, P_S(t) describes the current position of the seagull, and t represents the current number of iterations.
The variable A is calculated as follows:

A = f_c − t × (f_c / T) (2)

where the value of f_c is 2 and T is the maximum number of iterations. The value of A decreases linearly from 2 to 0 as the number of iterations t increases.
(2) Determine the best seagull direction: After ensuring that no collision occurs between seagulls, the direction of the best seagull is calculated:

M_S(t) = B × (P_bS(t) − P_S(t)) (3)
where M_S(t) represents the direction in which the individual seagull moves toward the best position, and P_bS(t) represents the position of the best seagull. The variable B is calculated as follows:

B = 2 × A² × rand (4)

where rand is a random number between 0 and 1.
(3) Move in the direction of the best seagull: After calculating the best seagull position, the seagull begins to move toward this position:

D_S(t) = |C_S(t) + M_S(t)| (5)
where D_S(t) represents the distance between each seagull and the best position.

Attack Behavior
When seagulls attack prey, they spiral in the air, constantly changing the attack angle and speed. This behavior in the x′, y′, and z′ planes is described as follows:

x′ = r × cos(k) (6)
y′ = r × sin(k) (7)
z′ = r × k (8)
r = u × e^(kv) (9)
where r represents the radius of each turn of the helix as the seagull attacks, k is a random number in [0, 2π], u and v are constants defining the shape of the helix, and e is the base of the natural logarithm. The attack position is then updated as:

P_S1(t) = D_S(t) × x′ × y′ × z′ + P_bS(t) (10)
where P_S1(t) represents the attack position of the seagull. The pseudocode of SOA is provided in Algorithm 1.

Algorithm 1: SOA
Input: Objective function f(x), seagull population size N, dimensional space D, maximum number of iterations T.
1. Initialize population;
2. Set f_c to 2;
3. Set u and v to 1;
4. While t < T
5.   Compute A and B using Equations (2) and (4);
6.   Compute C_S(t), M_S(t), and D_S(t) using Equations (1), (3), and (5);
7.   Compute x′, y′, z′, and r using Equations (6)–(9);
8.   Update the seagull positions using Equation (10);
9.   Evaluate fitness and update the best seagull P_bS(t);
10.  t = t + 1;
11. End while
Output: The best seagull position.
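For concreteness, the migration and attack equations above can be sketched in NumPy. This is an illustrative sketch, not the authors' reference implementation: the random seed, the clipping of positions to the bounds, and the use of a single spiral angle k per individual are assumptions.

```python
import numpy as np

def soa(f, lb, ub, dim, n=30, t_max=500, fc=2.0, u=1.0, v=1.0):
    """Minimal sketch of the basic seagull optimization algorithm (SOA)."""
    rng = np.random.default_rng(0)
    pop = lb + rng.random((n, dim)) * (ub - lb)   # random initial positions
    best = min(pop, key=f).copy()                 # best seagull so far
    for t in range(t_max):
        A = fc - t * (fc / t_max)                 # Eq. (2): linear decrease 2 -> 0
        for i in range(n):
            Cs = A * pop[i]                       # Eq. (1): collision avoidance
            B = 2.0 * A**2 * rng.random()         # Eq. (4)
            Ms = B * (best - pop[i])              # Eq. (3): toward the best seagull
            Ds = np.abs(Cs + Ms)                  # Eq. (5)
            k = rng.uniform(0.0, 2.0 * np.pi)     # spiral angle (scalar assumption)
            r = u * np.exp(k * v)                 # Eq. (9)
            xyz = (r * np.cos(k)) * (r * np.sin(k)) * (r * k)  # Eqs. (6)-(8)
            pop[i] = np.clip(Ds * xyz + best, lb, ub)          # Eq. (10)
        cand = min(pop, key=f)                    # keep the best position found
        if f(cand) < f(best):
            best = cand.copy()
    return best, f(best)

# Example run on the 10-dimensional Sphere function
best, val = soa(lambda x: float(np.sum(x**2)), -100.0, 100.0, dim=10)
```

Since the best position is only replaced when a better candidate appears, the returned fitness is monotonically non-increasing over the iterations.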

Sobol Sequence Initialization
In a metaheuristic algorithm, the distribution of the initial population greatly affects the algorithm's convergence speed and accuracy [28]. When dealing with problems of unknown distribution, the initial values of the population should be distributed as evenly as possible in the search space to ensure high ergodicity and diversity and to improve search efficiency [29]. In SOA, the initial population is generated from random numbers in the search space. This method has low ergodicity, uneven individual distribution, and unpredictability, which affects the algorithm's performance to a certain extent.
To solve the above problem, some scholars use chaos search to optimize the initialization sequence [21,22,[30][31][32]. Although the diversity and ergodicity of the population are improved to a certain extent, the chaotic map is greatly affected by the initial solution, and the inappropriate initial solution will lead to negative optimization of the algorithm [33].
The Sobol sequence is a low-discrepancy sequence with the advantages of short calculation cycles, fast sampling speeds, and high efficiency in processing high-dimensional sequences [34,35]. Unlike pseudo-random numbers, low-discrepancy methods replace the pseudo-random sequence with a deterministic one: by selecting a reasonable sampling direction, points are filled into the multi-dimensional hypercube unit as uniformly as possible, giving higher efficiency and uniformity in dealing with probability problems. Therefore, this paper uses the Sobol sequence to generate the initial population. Let the upper and lower bounds of the optimal solution be ub and lb, respectively, and the random number generated by the Sobol sequence be S_i ∈ [0, 1]. The mathematical model of the Sobol-sequence population initialization is:

X_i = lb + S_i × (ub − lb) (11)

Let the search space dimension D be 2, the upper and lower bounds be 1 and 0, respectively, and the population size N be 100. Figure 1 compares the initial population distribution of the Sobol sequence with a random initial population distribution.
It can be seen from Figure 1 that the spatial distribution of the Sobol-initialized population is more uniform than that of the randomly initialized population, and there is no overlapping of individuals, resulting in better initial population diversity, which lays a foundation for the global search of the algorithm.
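This initialization can be reproduced with SciPy's quasi-Monte Carlo module; using `scipy.stats.qmc.Sobol` (with scrambling and a fixed seed) is an implementation choice here, since the paper does not name a library.

```python
import numpy as np
from scipy.stats import qmc

def sobol_init(n, dim, lb, ub, seed=0):
    """Initialize a population from a scrambled Sobol low-discrepancy sequence."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    s = sampler.random(n)            # n points S_i in [0, 1)^dim
    return lb + s * (ub - lb)        # map to the search bounds

# D = 2, N = 100, bounds [0, 1], as in the Figure 1 comparison
pop = sobol_init(100, 2, 0.0, 1.0)
```

Note that SciPy warns when n is not a power of two, because the balance properties of the Sobol sequence are only guaranteed for power-of-two sample sizes; the points are still valid for initialization.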


Improvement of Parameter A
SOA introduces f_c to control the variation of parameter A, so that the value of parameter A decreases linearly from 2 to 0 over the iterations, avoiding collisions between individual seagulls during flight and repeated optimization values. Parameter A plays a vital role in solving optimization problems and balancing the algorithm. However, practical optimization processes present a nonlinear downward trend and are highly complex, so a linearly converging parameter A is not fully suitable for the search process of SOA.
This paper proposes an adaptive parameter A* based on the sigmoid function. The value of A* presents a nonlinear decreasing trend; in each iteration, it avoids position conflicts between seagulls and better balances early exploration and late development. The sigmoid function maps variables into the interval [0, 1], and its mathematical expression is:

f(x) = 1 / (1 + e^(−x)) (12)

Entropy 2022, 24, 973

As seen from Figure 2a, the sigmoid function is a strictly monotonically increasing, continuous, and smooth threshold function. Scaling and translating Equation (12) by introducing amplitude, expansion, and translation factors gives:

f(t) = L / (1 + e^(a t + b)) (13)

where L represents the amplitude gain, and a and b represent the expansion and translation factors. Figure 2b–d shows the iterative comparison between SOA2 with different parameters and the basic SOA on the Sphere test function [33].
As shown in Figure 2b–d, when the maximum number of iterations T is 500 and L = 2, a = 1/50, b = −5, the search accuracy and speed of SOA2 are highest. Some other parameter settings produce negative optimization relative to the basic SOA.
When L = 2, a = 1/50, b = −5, the improved parameter A* expression is:

A* = 2 / (1 + e^(t/50 − 5)) (14)

Figure 3 shows the iterative comparison curve of parameters A and A*. It can be seen from Figure 3 that parameter A* gives the population a large individual degree of freedom in the early stage and enhances the global optimization ability. In the later stage, the individual degree of freedom decreases rapidly, and the local optimization ability is strengthened. Compared with parameter A, parameter A* smooths the transition between the migration and attack processes, which better balances early exploration and late development and makes the optimization process nonlinear. Therefore, this improvement can theoretically improve the accuracy of population optimization and accelerate the optimization speed.
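The two parameter schedules can be compared numerically; the snippet below uses the constants L = 2, a = 1/50, b = −5 reported in the text, while the exact algebraic form of the nonlinear curve is an assumption consistent with a 2-to-0 sigmoid decrease.

```python
import numpy as np

T, fc = 500, 2.0
t = np.arange(T + 1)

A_lin = fc - t * (fc / T)                # basic SOA, Eq. (2): linear 2 -> 0
L, a, b = 2.0, 1 / 50, -5.0              # amplitude, expansion, translation factors
A_star = L / (1.0 + np.exp(a * t + b))   # sigmoid-based parameter: nonlinear 2 -> 0
```

Both curves start near 2 and end near 0, but A* stays high longer (preserving exploration) and then drops quickly around t = 250 (strengthening exploitation), while A decays at a constant rate.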


Improvement of Update Function
The optimal global individual primarily guides the location update of SOA. Therefore, if the optimal global individual falls into a local optimum, the optimization is likely to stagnate. To solve this problem, this paper introduces the learning strategy of PSO: learning factors are added on the basis of Equation (10), the seagull individuals learn from the optimal global position and their individual historical optimal positions, and a dynamic inertia weight balances the global and local search abilities. The attack position update formula with the learning strategy is:

P_l(t) = ω(t) × P_S1(t) + c_1 r_1 (P_pb(t) − P_S1(t)) + c_2 r_2 (P_bS(t) − P_S1(t)) (15)

where P_l(t) is the learning position, ω(t) is the dynamic inertia weight, P_pb(t) is the individual historical optimal position, the learning factors c_1 and c_2 are set to 1.5, and r_1 and r_2 are random numbers between 0 and 1. The corresponding steps in the SPSOA pseudocode are:

6. Compute x′, y′, z′, and r using Equations (6)–(9);
7. Calculate the learning location P_l(t) by Equation (15).

Time Complexity Calculation
In the basic SOA, the dimension of the position variable is n, and the population size is N. In the initialization stage, the times required to generate uniformly distributed random numbers, to set the initial value of each parameter, to calculate the objective function value, and to sort the fitness values of all individuals to obtain the contemporary optimal individual are t_1, t_2, f(n), and t_3, respectively. The overall time complexity of this stage is therefore O(t_2 + N(n t_1 + f(n)) + t_3). In the collision avoidance stage of the migration behavior, parameter A is generated from Equation (2); it changes with the number of iterations but takes the same value for all individuals of the same generation, so its generation time is t_4. According to Equation (1), the time for updating the position of an individual seagull in each dimension is t_5, and the calculation time of the new fitness value is f(n), so the time complexity of this stage is O(t_4 + N(n t_5 + f(n))). In calculating the best seagull direction of the migration behavior, parameter B is generated from Equation (4); its value is the same within a generation, and its generation time is t_6.
According to Equation (3), the time for updating the position of a seagull individual in each dimension is t_7, and the calculation time of the new fitness value is f(n), so the time complexity of this stage is O(t_6 + N(n t_7 + f(n))). In the stage of moving toward the best seagull direction, the time for generating each one-dimensional element of a new individual according to Equation (5) is t_8, and the calculation time of the new fitness value is f(n), so the time complexity of this stage is O(N(n t_8 + f(n))). In the attack stage, the time required to calculate x′, y′, z′, and r according to Equations (6)–(9) is t_9, the update time according to Equation (10) is t_10, and the time for calculating the new fitness is f(n), so the time complexity of this stage is O(N(t_9 + n t_10 + f(n))). In the phase of updating the optimal solution, assuming that the time to compare and replace each fitness value against the current optimal solution is t_11, the time complexity of this phase is O(N t_11). To sum up, since the constants t_i are independent of n and N, the total time complexity of SOA is O(T N(n + f(n))), where T is the maximum number of iterations.
In SPSOA, the population size and the dimension of the position variables are entirely consistent with the basic SOA. In the initialization stage, the times for parameter setting, for solving and sorting the fitness values of the objective function, and for obtaining the contemporary optimal individual are also the same as those of SOA. The time for generating a Sobol sequence random number is t_12, so the time complexity of this stage is O(t_2 + N(n t_12 + f(n)) + t_3). An adaptive parameter based on the sigmoid function is introduced in the collision avoidance stage of SPSOA; its generation time is t_13, and the generation time of new seagull individuals is t_14. This parameter also changes with the number of iterations and takes the same value within a generation. The remaining time of this stage, as well as the times for calculating the best seagull direction, moving toward it, and updating the optimal solution, are the same as those of SOA, so the time complexity of the SPSOA migration stage is O(t_13 + N(n t_14 + f(n))). In the attack stage of SPSOA, the learning strategy of PSO is introduced; the location update time is t_15, and the remaining time is the same as in SOA, so the time complexity of the SPSOA attack phase is O(N(t_9 + n t_15 + f(n))). To sum up, the total time complexity of SPSOA is also O(T N(n + f(n))). Compared with the basic SOA, SPSOA therefore adds no additional time complexity; the two are of exactly the same order, and the execution efficiency does not decrease.

Simulation and Result Analysis
In this section, to verify the performance of SPSOA more comprehensively, 12 benchmark test functions are used for experiments. The experiments are divided into two parts: the first compares the three improvement strategies proposed in this paper with SPSOA and the basic SOA, respectively, proving that these improvement strategies are effective. The second compares SPSOA with other metaheuristic algorithms to verify that the search performance of SPSOA is better than that of the compared algorithms. To ensure the fairness of the experimental results, each algorithm was run independently 30 times to minimize error, and all tests were conducted on a laptop equipped with an Intel(R) Core(TM) i7-6500 CPU at 2.50 GHz and 8 GB of RAM. The population size N of all experiments is 30, and the maximum number of iterations T is 500.
The detailed characteristics of each test function are listed in Table 1. In Table 1, Dim denotes the function dimension, Scope represents the value range of x, and f_min indicates the ideal value of each function.
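For reference, each table row in the following experiments summarizes 30 independent runs through four statistics. The run values below are synthetic placeholders, used only to show how the four indexes are computed; whether the paper uses the sample or population standard deviation is not stated, so ddof=1 is an assumption.

```python
import numpy as np

# Final fitness values of 30 independent runs of one algorithm on one
# test function (synthetic placeholder data).
runs = np.abs(np.random.default_rng(2).normal(0.0, 1e-8, size=30))

BEST, WORST = runs.min(), runs.max()        # optimal and worst fitness values
MEAN = runs.mean()                          # average fitness value
STD = runs.std(ddof=1)                      # sample standard deviation
```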

Effectiveness Analysis of Improvement Strategy
The SPSOA proposed in this paper is a hybrid algorithm based on SOA using three strategies. However, it is not known in advance whether each individual strategy works, so this needs to be verified. In this part, SOA1 (Sobol sequence initialization), SOA2 (the new parameter A*), and SOA3 (the PSO learning strategy) are compared with SPSOA and the basic SOA. Table 2 shows the optimal fitness value (BEST), the worst fitness value (WORST), the average fitness value (MEAN), and the standard deviation (STD) of 30 experiments of each algorithm under the 12 test functions in Table 1; the best test results of all algorithms are in bold. According to Table 2, the indexes of SOA1, SOA2, and SOA3 improved to varying degrees compared with the basic SOA. In the three test functions F_1, F_7, and F_8, all the algorithms can find the theoretical optimal value. However, from the MEAN and WORST of F_1, it can be seen that SOA1, SOA2, and SOA3 have better stability than the basic SOA. In F_4, all algorithms are prone to falling into local optima, but the BEST, WORST, and MEAN of SPSOA are better than those of the other algorithms; BEST in particular is significantly improved. However, the STD of SPSOA is higher than that of the other algorithms. This is because the characteristics of F_4 lead to low search accuracy in most cases, so the experimental results are within a reasonable range. Based on the data shown in Table 2, the three improvement strategies proposed in this paper are effective and stably improve the convergence speed, the convergence accuracy, and the ability of the algorithm to jump out of local optima.
In other test functions, the improved SPSOA with a mixed strategy is better than the enhanced algorithm with a single strategy in solving the four evaluation indexes, which shows that the optimization ability and stability of the algorithm are improved to a greater extent under the joint influence of different strategies.
Since each algorithm in some test functions has a strong optimization ability and cannot reflect the role of each strategy, further explanation and analysis are required. As shown in Figure 4 with the two test functions, F 7 and F 8 , although SOA can also converge to the theoretical optimum, it is not as good as other algorithms in terms of search speed. SOA1, SOA2, and SOA3 improved by three single strategies are better than the basic SOA in convergence speed and optimization accuracy but inferior to the SPSOA improved by mixed strategies. It shows that each strategy fully plays its role and is effective. SOA1 introduces the Sobol sequence to ensure the initial population's diversity and evenly distribute the search space. SOA2 designs a new parameter based on the sigmoid function, which is more suitable for the nonlinear iterative process of the algorithm and coordinates the global search and local search of the algorithm. After SOA3 introduces PSO learning strategy, the ability to jump out of local optimization and convergence speed of the algorithm is enhanced, which further verifies the effectiveness of the three hybrid strategies proposed in this paper.

Comparative Analysis of Algorithm Performance
To verify the superiority and feasibility of SPSOA, this part adopts six optimization algorithms, MSOA [36], BSOA [37], PSO [11], GWO [38], WSO [39], and BOA [40], and makes a comprehensive comparison with SPSOA under the 12 test functions in Table 1. The parameters of the other algorithms are consistent with those in the corresponding references. The experiment in Table 3 adds the algorithm running time (TIME), measured in seconds, to the indexes of Table 2. The best test results of all algorithms are highlighted in Table 3. Section 4.1 has already proven that SPSOA performs better than the basic SOA, so SOA is not included in the following comparative experiment. It can be seen from the test results in Table 3 that SPSOA is optimal in the three indexes BEST, WORST, and MEAN for all test functions, which shows that both the global search ability and the local development ability of SPSOA are better than those of the compared algorithms. In F_1, BSOA can find the theoretical optimal value, but it is inferior to SPSOA in WORST and MEAN. In F_7 and F_8, the performance indexes of MSOA and BSOA are as excellent as those of SPSOA. Although SPSOA is worse than GWO and WSO in the STD of F_4, it performs better in the other indexes, especially BEST. This is because the function tends to trap algorithms in local optima, and the excellent global search ability of SPSOA improves the probability of jumping out of them during the iterative process. As for the calculation time in Table 3, SPSOA has the smallest execution time for all test functions, which shows that the convergence speed of SPSOA is better than that of the compared algorithms and that it can be applied in a variety of real-time environments.
To display the convergence speed and optimization accuracy more intuitively and to show the algorithm's ability to jump out of local optima, Figure 5 gives the convergence curves on the 12 test functions, plotting fitness value against the number of iterations. On F7 and F8, MSOA and BSOA can also find the optimal solution, but they need more iterations than SPSOA. The search speed of PSO is slow in the early iterations. The overall convergence performance of GWO is mediocre. The search performance of WSO and BOA increases as the function complexity increases.
To further evaluate algorithm performance, the Wilcoxon signed rank test was performed at the significance level α = 5% on the best results of SPSOA and the six other algorithms over 30 independent runs [41]. The p value of the test is used to judge whether the two algorithms differ: p < 0.05 indicates a significant difference between the two algorithms, while p ≥ 0.05 indicates that their optimization performance is statistically indistinguishable. The results are analyzed in Table 4. The symbols "+", "=", and "−" indicate that the performance of SPSOA is better than, equivalent to, and worse than that of the compared algorithm, respectively, and NaN indicates that the results are too close to judge significance.
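The significance test described above can be reproduced with `scipy.stats.wilcoxon`; the two arrays below are made-up stand-ins for the 30 best values per algorithm, purely to show the mechanics of the paired test and the "+/=/−" verdict.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# Hypothetical best-fitness values from 30 independent runs of two algorithms
spsoa = rng.normal(loc=0.01, scale=0.005, size=30)
other = rng.normal(loc=0.05, scale=0.02, size=30)

# Paired, two-sided Wilcoxon signed rank test on the 30 run-wise differences
stat, p = wilcoxon(spsoa, other)
verdict = "+" if p < 0.05 else "="   # "+": significant difference at alpha = 5%
print(f"p = {p:.4g}, verdict: {verdict}")
```

Because the two synthetic samples differ strongly in location, the test rejects the null hypothesis here; on real data the verdict per function is read off exactly this way.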

Basic Theory of Blind Source Separation
Blind source separation (BSS), sometimes referred to as blind signal processing, recovers the source signals from the observed signals in the absence of critical information such as the sources and the channel [42]. Within it, blind image separation is the process of estimating or separating the original source images from blurred, mixed image features. It mainly eliminates or minimizes the degradation of the image caused by interference and noise through prior knowledge of the image degradation [43].
The linear mixed BSS model is described below:
X(t) = AS(t) + N(t), (1)
where t is the sampling moment, A is a mixed matrix of order m × n (m ≥ n), X(t) = [X1(t), X2(t), . . . , Xm(t)] is the vector of the m-dimensional observed signals, S(t) = [S1(t), S2(t), . . . , Sn(t)] is the vector of the n-dimensional source signals, and N(t) is the vector of the m-dimensional noise signals. BSS addresses the case in which only the observed signals X(t) are known and an optimization algorithm determines the separation matrix W. The separated signals Y(t) are obtained using Equation (2):
Y(t) = WX(t). (2)
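As a minimal numerical sketch of Equations (1) and (2), assuming two source signals, a 2 × 2 mixing matrix, and small additive noise (all values illustrative): when W is the inverse of A, the separated signals recover the sources up to the residual noise term.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)

# Two source signals S(t) (n = 2): a sine and a square wave
S = np.vstack([np.sin(2 * np.pi * 5 * t),
               np.sign(np.sin(2 * np.pi * 3 * t))])
A = np.array([[0.8, 0.3],      # mixing matrix A (m = n = 2)
              [0.4, 0.7]])
N = 0.01 * rng.standard_normal(S.shape)   # small noise N(t)

X = A @ S + N          # Equation (1): observed signals
W = np.linalg.inv(A)   # ideal separation matrix (unknown in practice)
Y = W @ X              # Equation (2): separated signals

print(np.max(np.abs(Y - S)))   # small: Y matches S up to amplified noise
```

In the actual BSS setting A is unknown, so W cannot be obtained by inversion; it is exactly the quantity that the optimization algorithm must estimate from X(t) alone.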

Figure 6 shows the linear mixed BSS model.

Figure 6. Linear mixed blind source separation model: the source signals S(t) pass through the hybrid system Am×n, noise signals are added to form the observed signals, and the separation system Wn×m produces the separated signals.

Independent component analysis (ICA) is an important BSS method [44]. ICA means that, under the condition that the source signals are independent of each other, an appropriate signal independence criterion is used to establish the objective function. The optimal separation matrix is then obtained through iterative optimization to maximize the independence of the separated signals.
The commonly used independence criteria for signals include mutual information, kurtosis, and negative entropy. Kurtosis is calculated using Equation (30) as follows:
kurt(yi) = E[yi^4] − 3(E[yi^2])^2, (30)
where yi is a zero-mean random variable; for a Gaussian variable, the kurtosis is zero.
The sum of the absolute values of kurtosis is used as the signal independence criterion in this paper, and the objective function is specified as follows:
f(W) = 1/(∑i |kurt(yi)| + ε), (31)
where ε is an extremely small amount that prevents division by zero. According to information theory, for a random vector y satisfying E[yyT] = I, the larger the absolute kurtosis of the signals, the greater their independence. SPSOA, as described above, optimizes the separation matrix W to minimize this objective, i.e., to maximize the kurtosis, and finally completes the separation of the observed signals.
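A small sketch of the kurtosis-based criterion, assuming Equation (30) is the standard fourth-cumulant form and that the objective takes the reciprocal of the summed absolute kurtosis (the exact objective form and the names `kurt`/`objective` are assumptions):

```python
import numpy as np

def kurt(y):
    """Fourth cumulant of a zero-mean signal, E[y^4] - 3 (E[y^2])^2."""
    return np.mean(y ** 4) - 3.0 * np.mean(y ** 2) ** 2

def objective(Y, eps=1e-12):
    """Reciprocal of summed |kurtosis|; eps guards against division by zero."""
    return 1.0 / (sum(abs(kurt(y)) for y in Y) + eps)

rng = np.random.default_rng(0)
gauss = rng.standard_normal(100_000)    # Gaussian: kurtosis near 0
uniform = rng.uniform(-1, 1, 100_000)   # uniform: kurtosis near -2/15

# Non-Gaussian signals have larger |kurtosis|, hence a smaller objective,
# which is the independence signal the optimizer exploits
print(abs(kurt(gauss)) < abs(kurt(uniform)))
```

Minimizing this objective is equivalent to maximizing the summed absolute kurtosis of the rows of Y = WX, which is how the seagull positions (candidate W matrices) would be scored.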
Before the iterative optimization of the objective function, the observed signals must also be preprocessed, for example by centering and whitening, which reduces the algorithm's complexity and decorrelates the observed signals. Figure 7 shows the flow chart of SPSOA-ICA.
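The preprocessing step can be sketched as follows: centering removes the mean of each channel, and PCA whitening makes the observed channels uncorrelated with unit variance. This is a common choice for ICA preprocessing; the paper's exact procedure may differ.

```python
import numpy as np

def whiten(X):
    """Center and PCA-whiten observed signals X of shape (m, samples)."""
    Xc = X - X.mean(axis=1, keepdims=True)       # centering
    cov = Xc @ Xc.T / Xc.shape[1]                # m x m sample covariance
    d, E = np.linalg.eigh(cov)                   # eigendecomposition of cov
    V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T      # whitening matrix V
    return V @ Xc                                # whitened signals Z

rng = np.random.default_rng(0)
# Correlated synthetic observations: mix two channels with a fixed matrix
X = np.array([[1.0, 0.5], [0.2, 0.9]]) @ rng.standard_normal((2, 50_000))
Z = whiten(X)
print(np.allclose(Z @ Z.T / Z.shape[1], np.eye(2), atol=1e-2))
```

After whitening, the remaining unknown in the separation matrix is essentially a rotation, which shrinks the search space the optimizer has to explore.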

Image Signal Separation
Three gray-scale images and one random noise image were used as source signals and combined to produce the observed signals. To acquire the separated signals, SPSOA, SOA, BSOA, and MSOA were applied to blindly separate the observed signals. The simulation results are depicted in Figure 8.
It can be seen from Figure 8 that the separated signals obtained by the proposed SPSOA restore the source signals well; their image features closely resemble the source images, reducing the degradation of the images caused by noise. In contrast, the separated signals obtained by the other algorithms show varying degrees of distortion. In addition, the order of the separated signals is inconsistent with that of the source signals, which is caused by the inherent ambiguity of BSS. In most scientific research and production practice, however, this ambiguity does not significantly affect the results.
To quantitatively analyze and compare the separation performance of the four algorithms, Table 5 reports the similarity coefficient and performance index (PI) of the separated signals and the SSIM of the output images; the values in Table 5 are averages over multiple experiments. In Equation (32), ρij is a similarity index used to compare a source signal with a separated signal: the larger ρij, the more effective the separation. In this section, ρij is a 4 × 4 matrix; the maximum value of each channel is taken as the experimental data, and N is set to 4. In Equation (33), G = WA, and the closer the PI is to 0, the more similar the separated signal is to the source signal. In Equation (34), C1 and C2 are constants, σxy represents the covariance of the two images, µx and µy represent their mean values, and σx² and σy² represent their variances. SSIM lies in [0, 1], with values closer to one indicating better structure preservation.
As shown in Table 5, SPSOA produces not only the highest similarity coefficient and SSIM but also the smallest PI of the separated signals, allowing for a more accurate restoration of the source signals.
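The evaluation metrics can be sketched numerically. Below, the similarity coefficient of Equation (32) is assumed to be the normalized correlation between a separated and a source signal, and the PI of Equation (33) is assumed to be the standard cross-talk index on G = WA; both exact formulas and normalizations are assumptions.

```python
import numpy as np

def similarity(y, s):
    """Normalized correlation |<y, s>| / (||y|| ||s||) of two signals."""
    return abs(np.dot(y, s)) / (np.linalg.norm(y) * np.linalg.norm(s))

def performance_index(G):
    """Cross-talk performance index on G = W A; 0 means perfect separation."""
    G = np.abs(G)
    n = G.shape[0]
    rows = (G / G.max(axis=1, keepdims=True)).sum(axis=1) - 1.0
    cols = (G / G.max(axis=0, keepdims=True)).sum(axis=0) - 1.0
    return (rows.sum() + cols.sum()) / (n * (n - 1))

rng = np.random.default_rng(0)
s = rng.standard_normal(1000)
# Scaling a signal leaves the similarity coefficient unchanged (ambiguity of BSS)
print(similarity(2.0 * s, s))
# A permutation-free, scaled identity G gives PI = 0 (ideal separation)
print(performance_index(np.eye(4)))
```

The scaling invariance of the similarity coefficient mirrors the amplitude ambiguity of BSS noted above: a perfectly separated channel scores 1 regardless of its scale or sign.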

Conclusions and Future Work
This paper proposes SPSOA, a hybrid-strategy algorithm that greatly improves the basic SOA. The algorithm uses the Sobol sequence to initialize the population, which improves the diversity of the initial population and lays a foundation for the global search. The parameter redesigned with the sigmoid function better fits the nonlinear optimization process and enhances the algorithm's ability to coordinate early exploration and later development. The PSO learning strategy adds a process in which seagulls learn from the global optimal position and their individual historical optimal positions, improving the algorithm's ability to jump out of local optima. Moreover, compared with the basic SOA, SPSOA does not increase the time complexity of the algorithm. From the simulation results, we draw the following conclusions: (1) When optimizing the 12 benchmark functions, SPSOA outperforms the other six algorithms. In the ablation experiment, the three improvement methods proposed in this study each increased the performance of SOA to varying degrees. All of this demonstrates that SPSOA has high search performance and strong robustness. (2) In BSS, SPSOA can successfully separate noisy mixed images. In addition, the algorithm is superior to the compared algorithms in the SSIM of output images, the similarity coefficient, and the PI of the separated signals. SPSOA has broad application prospects in modern signal processing.
In the future, the proposed algorithm can be used to solve more engineering problems, such as path planning, data compression, and resource allocation. In addition, the capability of SPSOA in solving multi-objective optimization problems needs further research.