Article

Multi-Objective White Shark Optimizer for Global Optimization and Rural Sports-Facilities Location Problem

1 Department of Science and Technology Teaching, China University of Political Science and Law, Beijing 100088, China
2 College of Artificial Intelligence, Guangxi Minzu University, Nanning 530006, China
3 Guangxi Key Laboratories of Hybrid Computation and IC Design Analysis, Nanning 530006, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(8), 537; https://doi.org/10.3390/biomimetics10080537
Submission received: 27 June 2025 / Revised: 9 August 2025 / Accepted: 11 August 2025 / Published: 15 August 2025

Abstract

The white shark optimizer (WSO) is a swarm intelligence optimization algorithm that has been successfully applied in many fields. In this paper, the location of sports facilities is treated as a multi-objective problem, with the number of residents covered by the facilities and the Weber problem introduced as objective functions. A multi-objective white shark optimizer (MOWSO) is proposed. MOWSO introduces an archive mechanism to store the non-dominated solutions obtained by the algorithm; when the archive overflows, solutions are removed according to the true distance of the Pareto optimal solutions. The performance of MOWSO is verified on the CEC 2020 benchmark functions, and the results show that the proposed MOWSO outperforms other algorithms in the diversity and distribution of solutions. MOWSO is then applied to the rural sports-facilities location problem, yielding a variety of different location schemes. It can provide a range of options for the location of rural sports facilities and promote the intelligent design of sports facilities.

1. Introduction

The World Health Organization (WHO) advocates that people can promote physical health and develop a healthy lifestyle through appropriate physical exercise, but progress is slow at present [1]. With rapid urbanization, China faces problems of obesity and a lack of physical exercise, so how to promote sport in China is attracting more and more attention [2]. One of the core aspects of China's sports policy is to build a viable environment that encourages participation in sports activities, in line with the health-promotion strategy advocated by the WHO [3]. Compared with developed countries, the average level of sports facilities in China is relatively backward. Compared with cities, rural sports facilities lag noticeably and are few in number, unable to meet most residents' needs for physical exercise; this has become one of the factors restricting the physical fitness of villagers. The situation can be improved by advancing rural sports development, and the construction of rural sports facilities is the key to this process. The state promotes the implementation of the rural revitalization strategy by increasing capital investment in the construction of rural sports facilities, and choosing suitable locations for these facilities is one of the key problems in their construction. The construction of sports facilities can promote the physical exercise of rural residents, improve their physical fitness, contribute to the sustainable development of rural areas, and support the rural revitalization strategy. Based on classical continuous location models, this paper constructs a multi-objective location model of rural sports facilities from the perspectives of fairness and efficiency.
Over the past few decades, different metaheuristic algorithms have been developed to handle complex optimization problems, such as differential evolution (DE) [4,5,6,7], particle swarm optimization (PSO) [8,9,10,11], the flower pollination algorithm (FPA) [12], ant colony optimization (ACO) [13,14], the sine cosine algorithm (SCA) [15,16], grey wolf optimization (GWO) [17,18,19], Tianji's horse racing optimization (THRO) [20], the Divine Religions Algorithm (DRA) [21], and the stellar oscillation optimizer (SOO) [22]. Metaheuristic algorithms have proven to be of considerable practical value, and in academia and industry they are widely used to solve various types of optimization problems. Because of their simple principles and ease of implementation, these algorithms have attracted more and more attention. Unlike traditional optimization methods, metaheuristic algorithms do not need to analyze objective functions and constraints analytically, which is their most significant advantage. In general, a metaheuristic algorithm starts with a random set of solutions and then searches globally for the best solution according to a set of rules. These rules often mimic the laws of natural evolution, human behavior [23], the social behavior of birds, insects, and other organisms [24,25], the laws of plant growth [26], mathematical programming [27], and the laws of physics [28,29], etc. The literature review is summarized in Table 1.
Multi-objective optimization is a common problem in all fields. In a multi-objective problem, the objectives cannot all reach their optimal values at the same time. It is important to find multiple sets of solutions for multi-objective problems because different solutions suit decision-makers with different needs. In recent years, many metaheuristic algorithms have been successfully applied to multi-objective optimization problems. One classic and popular multi-objective algorithm is multi-objective particle swarm optimization (MOPSO) [30], in which the optimal Pareto solutions are selected by external archiving and an adaptive grid mechanism. K. Deb improved NSGA by introducing fast elitist non-dominated sorting to obtain NSGA-II [31,32]. The algorithm uses fast non-dominated sorting to rank the population, and it also introduces crowding-distance ranking and an elite selection strategy. There are other classical multi-objective algorithms, such as SPEA2 [33] and Omni-optimizer [34], etc. In addition, there are some excellent multi-objective algorithms, such as DN-NSGAII, MO_Ring_PSO_SCD, MOHHO, MOFOA, MaODA, DPDCA, and so on [35,36,37,38,39]. These multi-objective algorithms can solve multi-objective optimization problems well. A new swarm intelligence optimization algorithm called the white shark optimizer (WSO) was proposed by Braik et al. in 2022 [40]. Inspired by the great white shark's search for food and tracking of prey in the deep sea, it combines the great white shark's keen hearing and smell with fish-school behavior. Based on a mathematical model of great-white-shark hunting behavior, the population searches the space through random search and by locating prey and schooling fish, updating positions until the best solution is found. At present, the white shark optimizer has been applied to a number of optimization problems and has achieved good results [41,42,43,44,45,46]. In this paper, the white shark optimizer is extended to a multi-objective white shark optimizer (MOWSO) to solve multi-objective problems.
The following is a summary of the principal achievements:
(1) A novel multi-objective white shark optimizer (MOWSO) is proposed;
(2) An archive mechanism is introduced in MOWSO to store the current Pareto optimal solutions, which are screened by calculating the true distance of each Pareto optimal solution;
(3) The performance of MOWSO is tested on the CEC 2020 benchmark functions and compared with other algorithms;
(4) A novel rural sports-facilities location problem model is proposed;
(5) MOWSO is used to solve the rural sports-facilities location problem and obtains the best results.
The rest of this article is organized as follows. Section 2 introduces the white shark optimizer and the location problem. In Section 3, the multi-objective problem is introduced and the multi-objective white shark optimizer is designed. Section 4 evaluates the performance of the multi-objective white shark optimizer on the test set. Section 5 introduces the rural sports-facility location model and solves it with the proposed algorithm. Section 6 summarizes this paper and discusses future work.

2. Related Work

2.1. White Shark Optimizer

The white shark is one of the most ferocious and dangerous predators in the ocean. It has powerful muscles, keen eyesight, and an acute sense of smell. When searching for prey, great white sharks first use their hearing to scout large spaces; they can then locate prey using their sense of smell. These features help them explore nearby areas. Great white sharks can use the lateral lines on their sides to detect changes in water pressure and thus sense the direction of prey movement, moving toward the prey accordingly. They can also sense the weak electromagnetic field generated by the movement of prey. When locating prey, the great white shark moves toward it in an undulating motion. The algorithm is divided into three parts: the speed of movement towards prey, movement towards the optimal prey, and the schooling behavior of fish.

2.1.1. Movement Speed Towards Prey

White sharks spend most of their time hunting and tracking prey. They track their prey in a variety of ways, using superior sensory abilities such as hearing, sight, and smell. When a white shark senses the location of its prey from the undulation of the waves produced as the prey moves, it moves toward the prey in an undulating manner. This mode of motion is defined in Equation (1).
$v_{k+1}^{i} = \mu \left[ v_{k}^{i} + p_{1} \left( w_{gbest_{k}} - w_{k}^{i} \right) \times c_{1} + p_{2} \left( w_{best}^{v_{k}^{i}} - w_{k}^{i} \right) \times c_{2} \right]$ (1)
where $v_{k+1}^{i}$ represents the speed of the $i$-th white shark in the population at step $k+1$. $\mu$ represents the contraction factor used to analyze the convergence behavior of the white sharks, defined in Equation (5). $v^{i}$ is the index vector of the $i$-th white shark used to reach the best prey, defined in Equation (2). $p_{1}$ and $p_{2}$ control the influence of $w_{gbest_{k}}$ and $w_{best}^{v_{k}^{i}}$ on $w_{k}^{i}$, and are calculated by Equations (3) and (4), respectively. $w_{gbest_{k}}$ is the global best position found by the population in the first $k$ iterations of the search, $w_{best}^{v_{k}^{i}}$ is the best position associated with the index vector of the $i$-th white shark after $k$ iterations, and $w_{k}^{i}$ is the position of the $i$-th white shark at iteration $k$. $c_{1}$ and $c_{2}$ are random numbers uniformly generated between 0 and 1, and $i$ indexes the white sharks in the population.
$v = \lfloor n \times rand(1, n) \rfloor + 1$ (2)
where $rand(1, n)$ is a vector of uniformly distributed random numbers generated in the range of 0 to 1, and $n$ is the population size.
$p_{1} = p_{max} + (p_{max} - p_{min}) \times e^{-(4k/K)^{2}}$ (3)
$p_{2} = p_{min} + (p_{max} - p_{min}) \times e^{-(4k/K)^{2}}$ (4)
where $k$ and $K$ represent the current iteration number and the maximum number of iterations, respectively. $p_{min}$ and $p_{max}$ are two empirically set constants, 0.5 and 1.5, respectively, used together with the iteration count to control the values of $p_{1}$ and $p_{2}$.
$\mu = \dfrac{2}{\left| 2 - \tau - \sqrt{\tau^{2} - 4\tau} \right|}$ (5)
where $\tau$ is the acceleration coefficient; based on a thorough analysis, its value is set to 4.125.
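To make the update concrete, the following Python sketch applies Equations (1)-(5) to the whole population at once. It is only an illustrative sketch under assumed array shapes (one row per shark), not the authors' MATLAB implementation; the argument w_best_v stands for the best positions selected through the index vector of Equation (2), assumed to be computed beforehand.

```python
import numpy as np

def wso_velocity_update(v, w, w_gbest, w_best_v, k, K,
                        p_min=0.5, p_max=1.5, tau=4.125):
    """One velocity update following Equations (1)-(5); a minimal sketch."""
    n = w.shape[0]                                              # number of sharks
    p1 = p_max + (p_max - p_min) * np.exp(-(4 * k / K) ** 2)    # Eq. (3)
    p2 = p_min + (p_max - p_min) * np.exp(-(4 * k / K) ** 2)    # Eq. (4)
    mu = 2.0 / abs(2.0 - tau - np.sqrt(tau ** 2 - 4 * tau))     # Eq. (5), contraction factor
    c1 = np.random.rand(n, 1)
    c2 = np.random.rand(n, 1)
    # Eq. (1): pull towards the global best and the randomly indexed personal best
    return mu * (v + p1 * (w_gbest - w) * c1 + p2 * (w_best_v - w) * c2)
```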

2.1.2. Movement Towards Optimal Prey

White sharks are constantly shifting their position, and when they hear waves from prey movement or smell prey, they move closer to their prey. The location of the prey changes as it searches for food or the white shark moves towards it. However, the prey will leave some scent in the previous location. In this case, white sharks can search for prey based on the scent they left behind. This behavior of the white shark can be described in Equation (6).
$w_{k+1}^{i} = \begin{cases} w_{k}^{i} \cdot \neg \oplus w_{0} + u \cdot a + l \cdot b, & rand < mv \\ w_{k}^{i} + v_{k}^{i} / f, & rand \geq mv \end{cases}$ (6)
where $w_{k+1}^{i}$ is the new position of the $i$-th white shark after the $(k+1)$-th iteration, $\neg$ denotes logical negation, $a$ and $b$ are binary vectors given by Equations (7) and (8), and $u$ and $l$ are the upper and lower bounds of the search space, respectively. $w_{0}$ is a logical vector calculated by Equation (9), and $f$ is the frequency of the wave motion of the white shark, given by Equation (10). $rand$ is a random number between 0 and 1, and $mv$ is the motion force of a white shark approaching prey, which increases with the number of iterations and is calculated by Equation (11).
$a = sgn\left( w_{k}^{i} - u \right) > 0$ (7)
$b = sgn\left( w_{k}^{i} - l \right) < 0$ (8)
$w_{0} = \oplus (a, b)$ (9)
where $\oplus$ stands for the XOR operation.
$f = f_{min} + \dfrac{f_{max} - f_{min}}{f_{max} + f_{min}}$ (10)
where $f_{max}$ and $f_{min}$ represent the maximum and minimum frequency of the white shark's undulating movement, respectively. Based on extensive experimental tests, $f_{max}$ and $f_{min}$ were set to 0.75 and 0.07, respectively.
$mv = \dfrac{1}{a_{0} + e^{(K/2 - k)/a_{1}}}$ (11)
where $a_{0}$ and $a_{1}$ are constants used to balance exploration and exploitation; their typical values are 6.25 and 100, respectively.
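The following Python sketch combines Equations (6)-(11) into one vectorized position update. The array shapes (one row per shark) and the bound vectors lb and ub are assumptions made for illustration; it is not the authors' reference code.

```python
import numpy as np

def wso_move_towards_prey(w, v, k, K, lb, ub,
                          f_min=0.07, f_max=0.75, a0=6.25, a1=100.0):
    """Position update following Equations (6)-(11); a minimal sketch."""
    n = w.shape[0]
    f = f_min + (f_max - f_min) / (f_max + f_min)               # Eq. (10), wave frequency
    mv = 1.0 / (a0 + np.exp((K / 2.0 - k) / a1))                # Eq. (11), motion force
    a = (np.sign(w - ub) > 0).astype(int)                       # Eq. (7): components above the upper bound
    b = (np.sign(w - lb) < 0).astype(int)                       # Eq. (8): components below the lower bound
    wo = np.logical_xor(a, b).astype(int)                       # Eq. (9)
    rand = np.random.rand(n, 1)
    explore = w * np.logical_not(wo) + ub * a + lb * b          # first branch of Eq. (6): boundary handling
    exploit = w + v / f                                         # second branch of Eq. (6): wave motion
    return np.where(rand < mv, explore, exploit)                # Eq. (6)
```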

2.1.3. The Swarming Behavior of White Sharks

When great white sharks hunt, they cooperate and move toward the white shark closest to their prey. This behavior is represented by Equation (12).
$w_{k+1}^{i} = w_{gbest_{k}} + r_{1} \vec{D}_{w} \, sgn(r_{2} - 0.5), \quad r_{3} < ss$ (12)
where $w_{k+1}^{i}$ is the updated position of the $i$-th white shark relative to the prey. $sgn(r_{2} - 0.5)$ takes the value 1 or −1 and determines the search direction. $r_{1}$, $r_{2}$, and $r_{3}$ are random numbers between 0 and 1. $\vec{D}_{w}$ is the distance between the white shark and its prey, calculated by Equation (13). $ss$ is a parameter describing the olfactory and visual intensity of a white shark following other white sharks approaching prey, calculated by Equation (14).
$\vec{D}_{w} = \left| rand \times \left( w_{gbest_{k}} - w_{k}^{i} \right) \right|$ (13)
where r a n d is a random number between 0 and 1.
$ss = \left| 1 - e^{-a_{2} \times k / K} \right|$ (14)
where $a_{2}$ is a constant; through extensive experimental analysis, its value has been set to 0.0005.
White sharks are highly social animals that prefer to hunt in groups. The schooling behavior of white sharks is represented by Equation (15).
$w_{k+1}^{i} = \dfrac{w_{k}^{i} + w_{k+1}^{i}}{2 \times rand}$ (15)
where r a n d is a random number between 0 and 1. White sharks can update their location based on the nearest white shark to their prey.
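A sketch of how the schooling behavior of Equations (12)-(15) can be combined is given below. The exact control flow of the original WSO implementation may differ; the array shapes, the helper name wso_schooling, and the decision to apply Equation (12) only through the r3 < ss mask are illustrative assumptions.

```python
import numpy as np

def wso_schooling(w, w_next, w_gbest, k, K, a2=0.0005):
    """Fish-school behaviour following Equations (12)-(15); a minimal sketch."""
    n, dim = w.shape
    ss = abs(1.0 - np.exp(-a2 * k / K))                         # Eq. (14), sensory intensity
    r1, r2, r3 = (np.random.rand(n, 1) for _ in range(3))
    Dw = np.abs(np.random.rand(n, dim) * (w_gbest - w))         # Eq. (13), distance to prey
    candidate = w_gbest + r1 * Dw * np.sign(r2 - 0.5)           # Eq. (12), move near the leading shark
    moved = np.where(r3 < ss, candidate, w_next)                # apply Eq. (12) only when r3 < ss
    return (w + moved) / (2.0 * np.random.rand(n, 1))           # Eq. (15), applied literally as written
```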

2.2. Continuous Facility Location

There are many kinds of location problems, which arise in different disciplines, and different disciplines describe them in different ways. In essence, however, a location problem can be defined as follows: given a metric space and the locations of demand points (generally called customers) in that space, determine the locations of new facilities in the space so that a certain objective between the facility points and the demand points is optimized. This problem has an important position in real life. For more than half a century, with the development of mathematical tools, such as the introduction of optimization techniques to facility location, the field has flourished. Researchers have conducted in-depth studies of facility-location problems in different fields, established optimization models and studied their theoretical properties, proposed effective numerical algorithms to solve the models, and used the results to solve practical problems.
The study of location problems spans numerous research fields, such as management, engineering, geography, economics, computer science, and mathematics. Typical applications include the location of factories, warehouses, hospitals, and retail stores; it is also used for fire centers, fire stations, gas stations, exploration oil wells, missile depots, and so on. From the perspective of decision makers, site selection is of great significance. Facility location has accompanied the development of human history and has an important and profound impact on human life and production activities.
The location problem can be classified in different ways. According to the topology of the metric space, it can be divided into discrete location, continuous location, and network location. Discrete location selects sites from a given series of discrete points in the metric space. Continuous siting has a long history; new facilities can be located anywhere in a continuous region of the metric space, and continuous optimization techniques, such as linear and nonlinear programming, can be used to solve it. Network location places facilities on the vertices or edges of a given graph or network and is solved by combinatorial optimization and integer programming. Discrete, continuous, and network location all have important applications in real life and can be used to solve practical problems in different situations. The following are several key elements in continuous facility siting: the number of new facilities, the distance metric function, and the objective function.

2.2.1. Number of Facilities

In the facility-location problem, either one or several new facilities may be located. According to the number of facility points, the problem can be divided into single-facility location and multi-facility location. In general, locating multiple facilities is more difficult than locating a single facility, because in the multi-facility problem it is necessary to consider which facility each customer visits to obtain service, and in some cases the amount of service a customer requires may exceed what a single facility can provide.

2.2.2. Distance Metric Function

There are various distance measurement functions in the field of location selection. One important class of distance measures is the norm, which satisfies positive definiteness, homogeneity, and subadditivity, and whose measured distance reflects actual distance more faithfully. The Lp-norm is a distance metric commonly used in continuous siting. When p = 1, it gives the L1 (city-block) distance, often used to represent the distance between two points in a city. When p = 2, it gives the Euclidean distance, the straight-line distance between two points. When p = ∞, the measured distance is known as the Chebyshev distance. Some generalized Lp-norms are also commonly used to measure the distance between a facility and a customer. When customers in a continuous location problem occupy a large range in the metric space, they should not simply be regarded as points but as regions, and the distance between the facility and the customer becomes the distance between a point and an area.
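As a small illustration of these metrics, the snippet below evaluates the L1, Euclidean, and Chebyshev distances between a facility and a customer at made-up coordinates.

```python
import numpy as np

# Distances between a facility x and a customer a under common Lp-norms
# (illustrative coordinates only).
x = np.array([2.0, 3.0])
a = np.array([5.0, 7.0])
l1        = np.linalg.norm(x - a, ord=1)       # city-block distance: 7.0
l2        = np.linalg.norm(x - a, ord=2)       # Euclidean distance: 5.0
chebyshev = np.linalg.norm(x - a, ord=np.inf)  # Chebyshev distance: 4.0
```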

2.2.3. Correlation Function of Location Problem

According to different objectives, the functions of facility location can be roughly divided into pull objectives, push objectives, and push-and-pull objectives [42,43]. When the facility is of the type desired by the demand point (such as a supermarket, bank, fire station, etc.), the demand point always wants the facility as close to it as possible. Therefore, pull goals are often used, such as minimizing the distance between the demand point and the facility and maximizing market share. When the facility belongs to the type that is excluded by the demand point (such as polluting factories, garbage disposal stations, etc.), the demand point wants the facility to be as far away from itself as possible. Push goals are often used, such as maximizing the sum of distances between customers and facility points, minimizing the number of customers covered by the facility, and so on. In some cases, the type of facilities is between expectation and exclusion, and there will be two contradictory or opposite factors in the problem of attraction and exclusion. This kind of location problem often adopts push–pull goals.
The minisum function is a commonly used pull objective that minimizes the total distance between facilities and customers; the corresponding location problem is called the minisum problem. The Weber problem is the minisum problem with a single facility. The problem with more than one facility is called the multi-facility minisum problem, which includes two important variants: the multi-facility Weber problem and the multi-source Weber problem. The minimax function is another commonly used pull objective that minimizes the maximum distance between a customer and a facility, and the corresponding model is called the minimax problem; because this model focuses on the farthest customers, it reflects fairness to some extent. The coverage problem is also an important pull-objective problem and includes two types: maximum coverage and minimum coverage. The former places a given number of facilities so that they cover as many customers as possible, while the latter covers all customers with the fewest facilities when the number of facilities is variable. Problems such as locating a point on a network to maximize its weighted distance to the nodes are called maxisum problems [47]. Like the pull objectives, the maxisum and maximin functions are two important types of push objectives: the maxisum problem seeks to maximize the total distance between customers and facilities, while maximin seeks to maximize the minimum distance between customers and facilities. When the number of facilities is more than one, multi-facility models with push objectives can be divided into max–min–min, max–min–sum, max–sum–min, and max–sum–sum, etc., in which the min/sum of the third level represents the minimum distance or distance sum between a given customer and all facilities. Because push–pull objectives combine attraction and repulsion, such location problems can be modeled as bi-objective optimization, and bi-objective optimization methods and techniques can therefore be used to solve them effectively.

3. Multi-Objective White Shark Optimizer

3.1. Multi-Objective Optimization Problems

Multi-objective optimization is common in real life and refers to optimizing multiple objectives simultaneously under certain conditions. In a multi-objective problem, however, the objectives conflict with one another: as one objective approaches its optimum, the values of the other objectives may deteriorate. In other words, all objectives of a multi-objective problem cannot reach their optimal values at the same time. Solving such a problem usually involves finding a balance among the objectives, making compromises so that each objective is as close to optimal as possible. The multi-objective optimization problem can be expressed by the following formula [48].
$\begin{aligned} \text{Minimize:} \quad & F(x) = \{ f_{1}(x), f_{2}(x), \ldots, f_{o}(x) \} \\ \text{Subject to:} \quad & g_{i}(x) \geq 0, \quad i = 1, 2, \ldots, m \\ & h_{i}(x) = 0, \quad i = 1, 2, \ldots, p \\ & L_{i} \leq x_{i} \leq U_{i}, \quad i = 1, 2, \ldots, n \end{aligned}$ (16)
where $o$ is the number of objective functions, $m$ and $p$ are the numbers of inequality and equality constraints, respectively, and $n$ is the number of decision variables. $[L_{i}, U_{i}]$ are the lower and upper bounds of the $i$-th variable.
The concept of Pareto optimality is widely used in the search process of multi-objective optimization algorithms. Pareto optimality is an ideal state of resource allocation. Given a fixed set of units and allocatable resources, a Pareto improvement is a change from one allocation to another that increases the resources allocated to at least one unit without reducing the resources allocated to any other unit. Pareto optimality means that no further Pareto improvements are possible. Some concepts and terms related to Pareto optimality are given below.
Pareto domination: Let $x_{A}$ and $x_{B}$ be two solutions of a multi-objective optimization problem. If $x_{A}$ performs no worse than $x_{B}$ on every objective function and strictly better than $x_{B}$ on at least one objective function, then $x_{A}$ is said to Pareto-dominate $x_{B}$:
$\forall i \in \{1, 2, \ldots, o\}: f_{i}(x_{A}) \leq f_{i}(x_{B}) \ \wedge \ \exists i \in \{1, 2, \ldots, o\}: f_{i}(x_{A}) < f_{i}(x_{B})$ (17)
Pareto optimal solution: A solution $x$ is a Pareto optimal solution when no other solution dominates it:
$\nexists \, x^{o} \in X_{f} : f_{i}(x^{o}) < f_{i}(x)$ (18)
Pareto optimal solution set: The set of all Pareto solutions is called the Pareto solution set, to wit,
$PS = \left\{ x \mid \nexists \, x^{o} \in X_{f} : f_{i}(x^{o}) < f_{i}(x) \right\}$ (19)
Pareto optimal frontier: The Pareto frontier is formed when all the Pareto optimal solutions are mapped by the objective function.
$PF = \left\{ F(x) = \left( f_{1}(x), f_{2}(x), \ldots, f_{o}(x) \right) \mid x \in PS \right\}$ (20)
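These definitions translate directly into code. The Python sketch below checks Pareto dominance and extracts the non-dominated subset of a set of objective vectors (minimization assumed); it is a generic O(N²) illustration rather than the exact procedure used inside MOWSO.

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    no worse in every objective and strictly better in at least one."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def pareto_front(F):
    """Return the non-dominated subset of a set of objective vectors F."""
    F = np.asarray(F, dtype=float)
    keep = [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]
    return F[keep]
```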

3.2. The Proposed Method

The white shark optimizer is extended to the multi-objective white shark optimizer by introducing two components. The first component is the Pareto archive, which holds the best Pareto solutions found so far [30]. The archive has rules for storing and deleting Pareto solutions: each newly generated solution is compared with the solutions already in the archive to determine whether it is dominated. If the new solution is not dominated by any solution in the archive, it is stored; otherwise, it is discarded.
When the number of solutions in the archive reaches its maximum capacity, some solutions must be deleted so that the solution set is more evenly distributed. The second component is introduced at this point: it deletes solutions according to the true distance between them. Since the solutions in the archive are all Pareto optimal, some densely distributed solutions need to be removed to make the distribution more uniform. First, the distance from each solution to its nearest solution is calculated, and the solution with the smaller distance is preferentially deleted. However, the minimum distances of several solutions may be almost equal; in that case, we calculate the distances from these solutions to their next-closest solutions and accumulate these distances until the solution to be deleted can be identified. We call this accumulated value the true distance (TD). As shown in Figure 1, when the minimum distances of solution 1, solution 2, and solution 3 are almost equal, the true distance of each solution can be calculated so that the appropriate solution is deleted. The calculation formulas are shown in Equations (21) and (22). As the algorithm runs, the solutions with smaller true distances are removed in turn by the second component. In addition, MOWSO guides the other individuals by selecting the individual with the greatest true distance in the current archive as the leader. This mechanism steers the population towards the optimal Pareto solutions and helps ensure the convergence and diversity of the solutions. The flow chart of MOWSO is shown in Figure 2.
$D(i) = \sqrt{\sum_{F=1}^{n} \left( x_{F}^{i} - x_{F}^{j} \right)^{2}}, \quad i, j = 1, 2, \ldots, N; \ i \neq j$ (21)
$TD = \sum_{p=1}^{q} D(p)$ (22)
where $n$ is the number of objective functions and $N$ is the number of Pareto solutions in the archive. $x_{F}^{i}$ and $x_{F}^{j}$ are the values of the $i$-th and $j$-th solutions on the $F$-th objective, respectively. $p$ indexes the smaller values in $D$, and $q$ is the number of accumulated distances.
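The archive-pruning step can be sketched as follows. The code sums a fixed number q of nearest-neighbour distances as the true distance, which is a simplification: as described above, further distances are accumulated only until the tie between crowded solutions is broken. The function names and q = 2 are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def true_distance(F, i, q=2):
    """True distance (TD) of archive member i in objective space:
    the sum of its q smallest distances to other members (Eqs. (21)-(22))."""
    d = np.sqrt(((F - F[i]) ** 2).sum(axis=1))   # Eq. (21): distances to all members
    d = np.sort(np.delete(d, i))                 # drop the member itself, sort ascending
    return d[:q].sum()                           # Eq. (22), with a fixed q for simplicity

def prune_archive(F, max_size):
    """Remove the most crowded members (smallest TD) until the archive fits."""
    F = np.asarray(F, dtype=float)
    while len(F) > max_size:
        td = np.array([true_distance(F, i) for i in range(len(F))])
        F = np.delete(F, np.argmin(td), axis=0)
    return F
```

The same TD values can also be used to select the archive leader: the member with the largest TD lies in the least crowded region and is the one used to guide the population.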

4. Experimental Results and Analysis

4.1. Experimental Setup

All algorithms in this experiment were programmed in MATLAB R2021b, and the numerical experiments were performed on an AMD Ryzen 7 6800H processor with 32 GB of memory. The proposed MOWSO is compared with other classical multi-objective algorithms to evaluate its effectiveness, including the popular multi-objective optimization algorithms NSGA-II [31] and SPEA2 [33], multi-objective particle swarm optimization (MOPSO) [30], the generic evolutionary algorithm Omni-optimizer [34], and the decision-space-based niching NSGA-II (DN-NSGAII) [32]. MOPSO, proposed by Coello C. et al., mainly uses an archive controller and an adaptive grid mechanism. DN-NSGAII is modified from NSGA-II and embeds niching techniques in the mating and environmental selection schemes. Omni-optimizer is designed to solve different types of optimization problems, including single-objective, multi-objective, unimodal, and multimodal problems. Unlike these algorithms, MOWSO selects the Pareto solution set based on the true distance of the solutions: when the archive overflows, the solution with the smallest true distance is deleted. These strategies improve the performance of MOWSO and allow it to solve multi-objective problems well.

4.2. Evaluation Index

In this paper, the performance of MOWSO and the comparison algorithms is evaluated using four indicators defined in the decision space and the objective space. In the decision space, Pareto Sets Proximity (PSP) [49] and the Inverted Generational Distance in decision space (IGDX) [50] are used. In the objective space, Hypervolume (HV) [51] and the Inverted Generational Distance in objective space (IGDF) [52] are used. The larger the PSP and HV, the better the algorithm's performance; the smaller the IGDX and IGDF, the better the performance. For consistency, 1/PSP and 1/HV are used, so that a smaller value of every indicator indicates better performance.
Pareto Sets Proximity (PSP) reflects the similarity between the true Pareto solution set (PSs) and the PSs obtained by the algorithm.
$PSP = \dfrac{CR}{IGDX}$ (23)
where CR represents the coverage rate of the solution set obtained by the algorithm with respect to the reference solution set. It is calculated as follows.
$CR = \left( \prod_{l=1}^{n} \delta_{l} \right)^{1/(2n)}$ (24)
$\delta_{l} = \begin{cases} 1, & V_{l}^{\max} = V_{l}^{\min} \\ 0, & v_{l}^{\min} \geq V_{l}^{\max} \ \text{or} \ v_{l}^{\max} \leq V_{l}^{\min} \\ \left( \dfrac{\min \left( v_{l}^{\max}, V_{l}^{\max} \right) - \max \left( v_{l}^{\min}, V_{l}^{\min} \right)}{V_{l}^{\max} - V_{l}^{\min}} \right)^{2}, & \text{otherwise} \end{cases}$ (25)
where $n$ is the dimension of the decision space, $v_{l}^{\max}$ and $v_{l}^{\min}$ are the maximum and minimum values of the PSs obtained by the algorithm on the $l$-th variable, and $V_{l}^{\max}$ and $V_{l}^{\min}$ are the maximum and minimum values of the true PSs on the $l$-th variable.
The IGDX index is used to judge the diversity and convergence of the solutions obtained by the algorithm [53]. IGDX is calculated from the Euclidean distances between the solutions obtained by the algorithm and the reference solutions, as formulated below:
$IGDX(O, P^{*}) = \dfrac{\sum_{v \in P^{*}} d(v, O)}{\left| P^{*} \right|}$ (26)
where $P^{*}$ is the true PSs and $O$ is the solution set obtained by the algorithm. $d(v, O)$ is the minimum Euclidean distance between the point $v$ and the set $O$. The IGD index can also be computed with respect to the Pareto front (PF), which characterizes the convergence and distribution of the solutions in the objective space. The calculation of IGDF is the same as that of IGDX, except that it is performed in the objective space.
$IGDF(O, P^{*}) = \dfrac{\sum_{v \in P^{*}} d(v, O)}{\left| P^{*} \right|}$ (27)
In the objective space, the reciprocal of the hypervolume is also an important index, and its calculation formula is as follows:
$HV(S) = \bigcup_{v \in S} vol(v)$ (28)
HV can measure and evaluate the convergence and diversity of the resulting solutions.
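As an illustration of how the IGD-type indicators are computed, the sketch below evaluates Equations (26)/(27): the average distance from each reference point to its nearest obtained point, applied in decision space (IGDX) or objective space (IGDF). It is a generic implementation, not the exact evaluation code used in the experiments.

```python
import numpy as np

def igd(obtained, reference):
    """Inverted generational distance: mean distance from each reference
    point to its nearest obtained point; used as IGDX in decision space
    and IGDF in objective space."""
    obtained, reference = np.asarray(obtained, float), np.asarray(reference, float)
    # pairwise Euclidean distances, shape (|reference|, |obtained|)
    d = np.sqrt(((reference[:, None, :] - obtained[None, :, :]) ** 2).sum(-1))
    return float(d.min(axis=1).mean())
```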

4.3. Experimental Result

In this paper, the performance of MOWSO was tested using the CEC 2020 multi-modal multi-objective optimization benchmark functions. CEC 2020 contains 24 test functions with different types of Pareto optimal fronts, such as convex, concave, linear, non-linear, and disconnected. For each function, every algorithm was run independently 20 times, and the mean and standard deviation of the results were used to compare the algorithms. Table 2, Table 3, Table 4 and Table 5 show the comparison between MOWSO and the other multi-objective algorithms in terms of the mean and standard deviation of the four indicators. Table 2 shows the 1/PSP index of the different algorithms: MOWSO finds the best value of both mean and standard deviation on 14 functions, outperforming the other algorithms, and finds the best mean on a further five functions. This suggests that the PSs obtained by MOWSO are closer to the true PSs. For the IGDX indicator, Table 3 shows that MOWSO finds the best mean value on 19 functions and performs better than the other algorithms. For the IGDF index, Table 4 shows that MOWSO finds the best value of both mean and standard deviation on 10 functions, performing better than the other algorithms, and finds the best mean on a further eight functions. For the 1/HV index, MOWSO performs relatively poorly: as can be seen from Table 5, MOWSO finds the best mean or standard deviation on only seven functions and performs worse than several of the comparison algorithms. In summary, the MOWSO algorithm performs better than the other five algorithms overall.
The best PSs and PF obtained over 20 runs of each algorithm on functions F3, F14, F16, and F24 are shown in Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7. Functions F3 and F16 have two decision variables and two objectives, while F14 and F24 have three [54]. As can be seen from Figure 3, the solution set obtained by MOWSO on function F3 is distributed more uniformly; among the six algorithms, the solution set of NSGA-II is the most disordered and contains the fewest solutions. As can be seen from Figure 4, MOWSO performs well on function F14, whereas SPEA2 obtains a poorly distributed solution set. Figure 5 shows that on function F16 the solution set obtained by MOWSO is significantly better and more uniformly distributed than those of the other five algorithms. As can be seen from Figure 6, MOWSO also performs well on function F24. Figure 7 shows that the Pareto front obtained by MOWSO is widely and uniformly distributed, indicating that MOWSO is not easily trapped in local optima during the global search. Boxplots of the IGDF values of the different algorithms on each function are also presented; boxplots display the distribution of the data obtained by the different algorithms intuitively, and the boxplots for the IGDF index are shown in Figure 8. It can be seen that, compared with the other five algorithms, MOWSO has the smallest upper and lower whiskers, upper and lower quartiles, and median on most functions. These results show that MOWSO performs stably, with minimal fluctuation, on CEC 2020 and reaches the best values compared with the other algorithms.
The Friedman rank test can be used to compare the overall performance of different algorithms on a set of problems. In this paper, the performance of the six algorithms on CEC 2020 was evaluated through a Friedman rank test, which yields the average score and average ranking of each algorithm and thereby ranks their overall performance. Table 6 shows the Friedman rank test results for each algorithm: MOWSO ranks first overall, which confirms its performance advantage.

5. Rural Sports-Facility Location Problem

5.1. The Objective Function of Sports-Facilities Location

In traditional sports-facility siting, designers analyze various constraints and repeatedly optimize and adjust the location scheme. Using an algorithm to solve this problem not only improves efficiency and saves human resources, but also provides a new approach to the siting and layout of sports facilities. When solving the real-world location problem, since sports facilities and villages occupy a relatively small area in the whole space, each sports facility (hereinafter referred to as a facility point) and each village (hereinafter referred to as a customer) is regarded as a point in the plane. According to the concept of a 15 min living circle, customers should be able to walk to a facility point within 15 min. Taking a normal walking speed of 1.5 m/s as the standard, the effective service radius of a facility point is set to 1350 m. For the customer, the facility point lies somewhere between expectation and exclusion: customers want to reach the facility point in a short time, but because the facility brings a certain amount of congestion or other unfavorable factors, they do not want it to be too close. Considering the maximum utilization of the facility point, it is also hoped that a large number of customers fall within its effective coverage area. This section introduces three objective functions for the continuous location of sports facilities. They are based on classical continuous facility-location models, adapted here to the practical problem.
The purpose of the classic multi-source Weber problem (MSWP) is to find the locations of the new facilities that minimize the total transportation cost between these facilities and all customers. Since customers hope to reach a service point in a short time, the MSWP is selected as the basis of the first objective function. The mathematical model of this problem is as follows:
$\begin{aligned} \text{MSWP:} \quad & \min \sum_{i=1}^{m} \sum_{j=1}^{n} w_{ij} \left\| x_{i} - a_{j} \right\| \\ \text{s.t.} \quad & \sum_{i=1}^{m} w_{ij} = s_{j}, \quad j = 1, 2, \ldots, n, \\ & x_{i} \in \mathbb{R}^{2}, \quad i = 1, 2, \ldots, m, \end{aligned}$ (29)
where $m$ is the number of new facilities to be located, $x_{i}$ is the position of the $i$-th facility point, $n$ is the number of customers in the space, $a_{j}$ is the position of the $j$-th customer, and $w_{ij}$ is the weight between $x_{i}$ and $a_{j}$. $\| \cdot \|$ denotes the Euclidean distance between two points.
In this model, each facility can provide enough service to the customers in the target space, so that each customer can choose the nearest facility point, thus minimizing the total cost of the system. To avoid the waste of resources caused by facility points being too close together, a constraint is added to the model: the distance between any two facility points must be greater than a minimum distance $D$. This model is adopted as the first objective function F1 of the sports-facility location problem. Finally, MSWP is equivalent to:
$F_{1} = \min \sum_{j=1}^{n} s_{j} \min_{i=1,\ldots,m} \left\| x_{i} - a_{j} \right\|, \quad \text{s.t.} \ x_{i} \in \mathbb{R}^{2}, \ i = 1, \ldots, m; \ \left\| x_{p} - x_{q} \right\| > D, \ p, q = 1, \ldots, m, \ p \neq q$ (30)
In order to avoid the waste of resources, as many customers as possible should lie within the effective coverage area of a facility so that the facility serves more customers. When there is a facility point within 1350 m of a customer, that is, the customer can reach the facility point within 15 min, the customer is said to be covered. To cover as many customers as possible, the second objective function F2 uses the following model:
$F_{2} = \max \sum_{j=1}^{n} z_{j}, \quad \text{s.t.} \ z_{j} = \begin{cases} 1, & \left\| a_{j} - x \right\| \leq R \\ 0, & \left\| a_{j} - x \right\| > R \end{cases}$ (31)
Here, $z_{j} = 1$ indicates that the customer has a facility point within a range of $R$, which ensures that the customer can walk to the facility point within 15 min, and $n$ is the number of customers.
Because the facilities bring a certain amount of congestion or other adverse factors, customers want to keep some distance from them. Based on this, the distance from each customer to its nearest facility point is calculated, and the customer closest to any facility point should be as far from that facility as possible. Therefore, the third objective function F3 uses the maximin model, as shown below:
$F_{3} = \max_{x_{i} \in \mathbb{R}^{2}} \min_{j=1,\ldots,n} \min_{i=1,\ldots,m} \left\| x_{i} - a_{j} \right\|, \quad \text{s.t.} \ x_{i} \in \mathbb{R}^{2}, \ i = 1, \ldots, m; \ \left\| x_{p} - x_{q} \right\| > D, \ p, q = 1, \ldots, m, \ p \neq q$ (32)
where $m$ is the number of facilities, $x_{i}$ is the location of the $i$-th facility point, $n$ is the number of customers, $a_{j}$ is the location of the $j$-th customer, $\| \cdot \|$ is the Euclidean distance, and $D$ is the minimum allowed distance between two facility points.
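To show how the three objectives interact, the sketch below evaluates F1, F2, and F3 for a candidate set of facility coordinates. The function name, the default unit weights, and the omission of the minimum-separation constraint D (treated as handled elsewhere, e.g. as a constraint in the optimizer) are assumptions for illustration; R = 1350 m follows the text.

```python
import numpy as np

def facility_objectives(X, A, s=None, R=1350.0):
    """Evaluate the three location objectives for facility coordinates X (m x 2)
    and customer coordinates A (n x 2); a sketch, not the authors' code."""
    X, A = np.asarray(X, float), np.asarray(A, float)
    s = np.ones(len(A)) if s is None else np.asarray(s, float)
    # distance from every customer to every facility, shape (n, m)
    d = np.sqrt(((A[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    nearest = d.min(axis=1)                  # each customer's closest facility
    F1 = float((s * nearest).sum())          # weighted total walking distance (minimize)
    F2 = int((nearest <= R).sum())           # customers covered within radius R (maximize)
    F3 = float(nearest.min())                # closest customer-facility distance (maximize)
    return F1, F2, F3
```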

5.2. Simulation Experiment Results

This experiment uses a real case study located in a township in Guangxi Province, China. ArcGIS was used to extract the coordinates of the village points of the township, and these points were regarded as customers, as shown in Figure 9. The case involves 76 customers distributed over an area of about 120 square kilometers. A single sports facility has a service radius of 1350 m and covers an area of about 5.7 square kilometers. Considering the scattered distribution of customers, nine facilities are used in this experiment; the number of facilities can be changed according to the actual situation. The problem is a multi-objective problem in which F1 is the sum of the distances between all customers and the facilities (SDCF), as shown in Equation (30); F2 is the number of customers covered by the sports facilities (NCCF), as shown in Equation (31); and F3 is the distance between the nearest customer and a sports facility (DCCF), as shown in Equation (32).
The initial population of MOWSO was set to 50 and the number of iterations to 100; the other MOWSO parameters are shown in Table 7. Using the three objective function models, the problem was solved with MOWSO and compared against MOPSO and SPEA2, and the location information of the facility points was obtained. The results of the three algorithms on each objective function are shown in Table 8, and the Pareto fronts obtained are shown in Figure 10. From the Pareto fronts it can be seen that the function values obtained by MOWSO are evenly distributed, which indicates that MOWSO can provide more comprehensive and extensive reference information for the sports-facility location problem and also demonstrates its effectiveness.
The detailed results when MOWSO obtains the best value for each objective function are shown in Table 9, and the corresponding facility locations are shown in Figure 11. In the first case, the total distance that all customers need to walk to reach a facility point is minimized; the number of customers within the effective coverage of the facilities is 42, and the customer closest to a sports facility is 84 m from it. In the second case, the number of customers within the effective coverage of the facilities is maximized; all customers together need to walk 112,556 m to reach the facility points, and the customer closest to a sports facility is 43 m from it. In the third case, the distance between the customer closest to a sports facility and that facility is maximized; all customers together need to walk 184,416 m, and the number of customers within the effective coverage of the facilities is 19.
To verify the effectiveness of MOWSO on a larger-scale sports-facility location problem, 800 customer points were randomly generated in an area of 1225 square kilometers. The service radius of a single sports facility is again 1350 m, covering about 5.7 square kilometers, and 100 sports facilities are located. Using the three objective function models, the problem was solved with MOWSO and compared against MOPSO and SPEA2, with each algorithm run ten times. Table 10 shows, for each algorithm, the average of the best values obtained over the ten runs on the three objective functions: F1 is the total distance that all customers need to travel to reach a facility point, in kilometers; F2 is the number of customers within the effective coverage area of the facilities; and F3 is the distance between the nearest customer and a facility, in meters.
The distribution of the solutions was evaluated using the SP and UD indices, diversity metrics proposed by Schott and by Tan et al. to measure the distribution of solutions [53,55]. The smaller the SP and UD values, the better the distribution. The minimum and average values over the ten runs of each algorithm were calculated and are shown in Table 11. The table shows that MOWSO obtains the smallest average SP and UD values over the ten runs, indicating that its solution set has the best distribution. Figure 12 shows the Pareto front corresponding to each algorithm's minimum UD value over the ten runs; it can also be seen that the Pareto front of MOWSO is more evenly distributed.
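For reference, Schott's spacing metric used above can be sketched as follows; this follows the commonly used definition (standard deviation of nearest-neighbour L1 distances in objective space) and is not necessarily the exact implementation used in the paper.

```python
import numpy as np

def spacing(F):
    """Schott's spacing (SP): spread of each solution's nearest-neighbour
    L1 distance in objective space; smaller means a more uniform front."""
    F = np.asarray(F, float)
    d = np.abs(F[:, None, :] - F[None, :, :]).sum(-1)   # pairwise L1 distances
    np.fill_diagonal(d, np.inf)                          # ignore self-distances
    di = d.min(axis=1)                                   # nearest-neighbour distance per solution
    return float(np.sqrt(((di - di.mean()) ** 2).sum() / (len(F) - 1)))
```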

6. Conclusions and Future Work

In this paper, we improve the white shark optimizer by adding two mechanisms. The first is an archive that stores the non-dominated Pareto solutions obtained during the search. The second calculates the true distance between Pareto solutions to filter the best Pareto solutions in the archive. The algorithm was tested on the CEC 2020 benchmark functions and compared with other classical multi-objective algorithms to verify its performance. The experimental results show that the Pareto solutions obtained by MOWSO are closer to the true Pareto solutions and that the algorithm is reliable. MOWSO was then applied to the location problem of rural sports facilities. The experiments show that MOWSO can obtain multiple sets of solutions corresponding to different layouts of sports facilities and can provide references for the rural sports-facility location problem. In real life there are many similar problems, such as the location of health stations and markets; although these problems have different constraints, similar ideas can be used to solve them. In this paper, the locations of sports facilities are taken as variables and studied under different objectives, but terrain limitations are not considered. If a selected location falls on non-construction land, such as paddy fields or agricultural land, it must be adjusted manually. In practice, facing these complex conditions, the constraints on the location of sports facilities increase, and because of the high complexity of the specific problem, the present experiments may not cover every factor. Therefore, an important direction for future work is to propose new models and further improve MOWSO so that such problems can be solved while considering many factors comprehensively.

Author Contributions

Y.Z. (Yan Zheng): experimental results analysis. B.G.: methodology, writing—original draft. Y.Z. (Yongquan Zhou): writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant No. U21A20464.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. World Health Organization. WHO Highlights High Cost of Physical Inactivity in First-Ever Global Report; WHO: Geneva, Switzerland, 2022. [Google Scholar]
  2. Day, K. Built environmental correlates of physical activity in China: A review. Prev. Med. Rep. 2016, 3, 303–316. [Google Scholar] [CrossRef]
  3. World Health Organization, Division of Health Promotion, Education & Communication. Ottawa Charter for Health Promotion; World Health Organization: Copenhagen, Denmark, 1986; pp. 3–5. [Google Scholar]
  4. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  5. Ilonen, J.; Kamarainen, J.K.; Lampinen, J. Differential evolution training algorithm for feed-forward neural networks. Neural Process. Lett. 2003, 17, 93–105. [Google Scholar] [CrossRef]
  6. Cao, J.; Lin, Z.; Huang, G.B. Composite function wavelet neural networks with differential evolution and extreme learning machine. Neural Process. Lett. 2011, 33, 251–265. [Google Scholar] [CrossRef]
  7. Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479. [Google Scholar] [CrossRef]
  8. Yu, J.; Xi, L.; Wang, S. An improved particle swarm optimization for evolving feedforward artificial neural networks. Neural Process. Lett. 2007, 26, 217–231. [Google Scholar] [CrossRef]
  9. Fan, S.K.S.; Zahara, E. A hybrid simplex search and particle swarm optimization for unconstrained optimization. Eur. J. Oper. Res. 2007, 181, 527–548. [Google Scholar] [CrossRef]
  10. Ali, Y.M.B. Unsupervised clustering based an adaptive particle swarm optimization algorithm. Neural Process. Lett. 2016, 44, 221–244. [Google Scholar] [CrossRef]
  11. Chen, Y.; Li, L.; Xiao, J.; Yang, Y.; Liang, J.; Li, T. Particle swarm optimizer with crossover operation. Eng. Appl. Artif. Intell. 2018, 70, 159–169. [Google Scholar] [CrossRef]
  12. Wang, Z.; Luo, Q.; Zhou, Y. Hybrid metaheuristic algorithm using butterfly and flower pollination base on mutualism mechanism for global optimization problem. Eng. Comput. 2022, 37, 3665–3698. [Google Scholar] [CrossRef]
  13. Dorigo, M.; Di Caro, G. Ant colony optimization: A new meta-heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99, Washington, WA, USA, 6–9 July 1999; IEEE: New York, NY, USA, 1999; Volume 2, pp. 1470–1477. [Google Scholar]
  14. Wang, P.; Zhou, Y.; Luo, Q.; Han, C.; Niu, Y.; Lei, M. Complex-valued encoding metaheuristic optimization algorithm: A comprehensive survey. Neurocomputing 2020, 407, 313–342. [Google Scholar] [CrossRef]
  15. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  16. Gupta, S.; Deep, K.; Engelbrecht, A.P. A memory guided sine cosine algorithm for global optimization. Eng. Appl. Artif. Intell. 2020, 93, 103718. [Google Scholar] [CrossRef]
  17. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  18. Luo, J.; Liu, Z. Novel grey wolf optimization based on modified differential evolution for numerical function optimization. Appl. Intell. 2020, 50, 468–486. [Google Scholar] [CrossRef]
  19. Ma, C.; Huang, H.; Fan, Q.; Wei, J.; Du, Y.; Gao, W. Grey wolf optimizer based on Aquila exploration method. Expert Syst. Appl. 2022, 205, 117629. [Google Scholar] [CrossRef]
  20. Wang, L.; Du, H.; Zhang, Z.; Hu, G.; Mirjalili, S.; Khodadadi, N.; Hussien, A.G.; Liao, Y.; Zhao, W. Tianji’s horse racing optimization (THRO): A new metaheuristic inspired by ancient wisdom and its engineering optimization applications. Artif. Intell. Rev. 2025, 58, 282. [Google Scholar] [CrossRef]
  21. Mozhdehi, A.T.; Khodadadi, N.; Aboutalebi, M.; El-kenawy, E.-S.M.; Hussien, A.G.; Zhao, W.; Nadimi-Shahraki, M.H.; Mirjalil, S. Divine Religions Algorithm: A novel social-inspired metaheuristic algorithm for engineering and continuous optimization problems. Clust. Comput. 2025, 28, 253. [Google Scholar] [CrossRef]
  22. Rodan, A.; Al-Tamimi, A.-K.; Al-Alnemer, L.; Mirjalili, S. Stellar oscillation optimizer: A nature-inspired metaheuristic optimization algorithm. Clust. Comput. 2025, 28, 362. [Google Scholar] [CrossRef]
  23. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15. [Google Scholar] [CrossRef]
  24. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  25. Zhong, K.; Zhou, G.; Deng, W.; Zhou, Y.; Luo, Q. MOMPA: Multi-objective marine predator algorithm. Comput. Methods Appl. Mech. Eng. 2021, 385, 114029. [Google Scholar] [CrossRef]
  26. Zhou, Y.; Miao, F.; Luo, Q. Symbiotic organisms search algorithm for optimal evolutionary controller tuning of fractional fuzzy controllers. Appl. Soft Comput. 2019, 77, 497–508. [Google Scholar] [CrossRef]
  27. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  28. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; AI-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  29. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  30. Coello, C.A.C.; Pulido, G.T.; Lechuga, M.S. Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 256–279. [Google Scholar] [CrossRef]
  31. Deb, K.; Agrawal, S.; Pratap, A.; Meyarivan, T. A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In Proceedings of the Parallel Problem Solving from Nature PPSN VI: 6th International Conference, Paris, France, 18–20 September 2000; Springer: Berlin/Heidelberg, Germany, 2000; pp. 849–858. [Google Scholar]
  32. Srinivas, N.; Deb, K. Muilti-objective optimization using non-dominated sorting in genetic algorithms. Evol. Comput. 1994, 2, 221–248. [Google Scholar] [CrossRef]
  33. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm for Multi-Objective Optimization. In Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems, Proceedings of the EUROGEN’2001, Athens, Greece, 19–21 September 2001; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  34. Deb, K.; Tiwari, S. Omni-optimizer: A generic evolutionary algorithm for single and multi-objective optimization. Eur. J. Oper. Res. 2008, 185, 1062–1087. [Google Scholar] [CrossRef]
  35. Liang, J.J.; Yue, C.T.; Qu, B.Y. Multimodal multi-objective optimization: A preliminary study. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; IEEE: New York, NY, USA, 2016; pp. 2454–2461. [Google Scholar]
  36. Yue, C.; Qu, B.; Liang, J. A multi-objective particle swarm optimizer using ring topology for solving multimodal multi-objective problems. IEEE Trans. Evol. Comput. 2017, 22, 805–817. [Google Scholar] [CrossRef]
  37. Abdollahzadeh, B.; Gharehchopogh, F.S. A multi-objective optimization algorithm for feature selection problems. Eng. Comput. 2022, 38, 1845–1863. [Google Scholar] [CrossRef]
  38. Kalita, K.; Jangir, P.; Pandya, S.B.; Shanmugasundar, G.; Abualigah, L. Unveiling the Many-Objective Dragonfly Algorithm’s (MaODA) efficacy in complex optimization. Evol. Intell. 2024, 17, 3505–3533. [Google Scholar] [CrossRef]
  39. Yang, Y.; Yang, Y.; Liao, B. Dual population multi-objective evolutionary algorithm for dynamic co-transformations. Evol. Intell. 2024, 17, 3269–3289. [Google Scholar] [CrossRef]
  40. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White Shark Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl.-Based Syst. 2022, 243, 108457. [Google Scholar] [CrossRef]
  41. Lakshmanan, M.; Kumar, C.; Jasper, J.S. Optimal parameter characterization of an enhanced mathematical model of solar photovoltaic cell/module using an improved white shark optimization algorithm. Optim. Control Appl. Methods 2023, 44, 2374–2425. [Google Scholar] [CrossRef]
  42. Supreeth, S.; Bhargavi, S.; Margam, R.; Annaiah, H.; Nandalike, R. Virtual Machine Placement Using Adam White Shark Optimization Algorithm in Cloud Computing. SN Comput. Sci. 2023, 5, 21. [Google Scholar] [CrossRef]
  43. Farhat, M.; Kamel, S.; Elseify, M.A.; Abdelaziz, A.Y. A modified white shark optimizer for optimal power flow considering uncertainty of renewable energy sources. Sci. Rep. 2024, 14, 3051. [Google Scholar] [CrossRef]
  44. Hassan, M.H.; Kamel, S.; Selim, A.; Shaheen, A.; Yu, J.; El-Sehiemy, R. Efficient economic operation based on load dispatch of power systems using a leader white shark optimization algorithm. Neural Comput. Appl. 2024, 36, 10613–10635. [Google Scholar] [CrossRef]
45. Drezner, Z. Facility location: A survey of applications and methods. J. Oper. Res. Soc. 1996, 47, 1421. [Google Scholar]
  46. Farahani, R.Z.; Hekmatfar, M. Facility Location: Concepts, Models, Algorithms and Case Studies; Farahani, R.Z., Hekmatfar, M., Eds.; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  47. Church, R.L.; Garfinkel, R.S. Locating an obnoxious facility on a network. Transp. Sci. 1978, 12, 107–118. [Google Scholar] [CrossRef]
  48. Justesen, P.D. Multi-Objective Optimization Using Evolutionary Algorithms; University of Aarhus: Aarhus, Denmark, 2009. [Google Scholar]
  49. Deng, W.; Zhang, L.; Zhou, X.; Zhou, Y.; Sun, Y.; Zhu, W.; Chen, H.; Deng, W.; Chen, H.; Zhao, H. Multi-strategy particle swarm and ant colony hybrid optimization for airport taxiway planning problem. Inf. Sci. 2022, 612, 576–593. [Google Scholar] [CrossRef]
  50. Zhang, Q.; Zhou, A.; Jin, Y. RM-MEDA: A regularity model-based multi-objective estimation of distribution algorithm. IEEE Trans. Evol. Comput. 2008, 12, 41–63. [Google Scholar] [CrossRef]
  51. Zitzler, E.; Thiele, L.; Laumanns, M.; Fonseca, C.M.; Da Fonseca, V.G. Performance assessment of multi-objective optimizers: An analysis and review. IEEE Trans. Evol. Comput. 2003, 7, 117–132. [Google Scholar] [CrossRef]
  52. Zhou, A.; Zhang, Q.; Jin, Y. Approximating the set of Pareto-optimal solutions in both the decision and objective spaces by an estimation of distribution algorithm. IEEE Trans. Evol. Comput. 2009, 13, 1167–1189. [Google Scholar] [CrossRef]
  53. Tan, K.C.; Lee, T.H.; Khor, E.F. Evolutionary algorithms for multi-objective optimization: Performance assessments and comparisons. Artif. Intell. Rev. 2002, 17, 251–290. [Google Scholar] [CrossRef]
  54. Schott, J.R. Fault Tolerant Design Using Single and Multi-Criteria Genetic Algorithm Optimization. Master’s Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1995. [Google Scholar]
  55. Liang, J.; Suganthan, P.N.; Qu, B.Y.; Gong, D.W. Problem definitions and evaluation criteria for the CEC 2020 special session on multimodal multiobjective optimization. Comput. Intell. Lab. Zhengzhou Univ. 2019, 10, 353–370. [Google Scholar]
Figure 1. The true distance of the solution.
Figure 2. Flow chart of the MOWSO.
Figure 3. The Pareto set obtained by all algorithms on F3 (MMF4).
Figure 4. The Pareto set obtained by all algorithms on F14 (MMF14_a).
Figure 5. The Pareto set obtained by all algorithms on F16 (MMF10_l).
Figure 6. The Pareto set obtained by all algorithms on F24 (MMF16_l3).
Figure 7. Pareto front of all algorithms on four functions.
Figure 8. Box plots of the IGDF comparison results on the CEC 2020 test functions.
Figure 9. Case location and village distribution map.
Figure 10. Location of sports facilities on the Pareto front.
Figure 11. Distribution of sports facilities obtained by MOWSO.
Figure 12. Pareto front of MOWSO, MOPSO, and SPEA2 algorithms.
Table 1. Literature review of metaheuristic algorithms (single-objective and multi-objective).
Algorithm | Advantages | Directions for Improvement
DE [4,5,6,7] | Simple principle, few control parameters, strong robustness | Improve optimization ability and convergence speed, and overcome premature convergence and search stagnation, which are common defects of heuristic algorithms
PSO [8,9,10,11] | Fast convergence, few parameters, simple and easy to implement | Prone to becoming trapped in local optima; relies on good initialization
FA [12] | Effective for multimodal optimization and performs well when control parameters are adaptive | Highly sensitive to parameter settings and requires extensive tuning for complex problems
ACO [13,14] | Robust for combinatorial problems, adaptable via pheromone information, and inspired by real ant behavior | Slow convergence and high computational complexity in large search spaces
SCA [15,16] | Uses sine and cosine functions to explore and exploit search spaces effectively | May struggle with complex, highly constrained problems
GWO [17,18,19] | Inspired by the grey-wolf hierarchy and hunting mechanisms; performs well on engineering design problems and benchmark functions | Struggles with high-dimensional spaces compared with newer algorithms
THRO [20] | Proposed to address the limitations of existing algorithms, which often struggle with convergence speed and solution accuracy on complex problems | Employs a dynamic individual matching strategy to further enhance the convergence rate and solution precision
DRA [21] | Significantly outperforms other methods, delivering superior outcomes across multiple aspects and proving its efficacy on complex optimization problems | Explore the application of DRA to optimization challenges across diverse scientific domains and practical scenarios
SOO [22] | Consistently outperforms its competitors across a wide variety of optimization tasks | Extend its capabilities to a broader range of optimization challenges, particularly in machine learning and data analysis
Table 2. The mean and standard deviation of 1/PSP obtained by running the algorithm 20 times.
Function | Metric | MOWSO | MOPSO | DN-NSGA-II | NSGA-II | Omni-Opt | SPEA2
F1 | Mean | 0.0524 | 0.1216 | 0.0859 | 0.1133 | 0.0830 | 0.0722
F1 | Std | 0.0059 | 0.0224 | 0.0141 | 0.0163 | 0.0141 | 0.0275
F2 | Mean | 0.0205 | 0.2218 | 0.0932 | 0.0804 | 0.1471 | 0.0726
F2 | Std | 0.0050 | 0.1546 | 0.0450 | 0.0228 | 0.1733 | 0.0588
F3 | Mean | 0.0364 | 0.0611 | 0.0961 | 0.1617 | 0.0846 | 0.0770
F3 | Std | 0.0106 | 0.0178 | 0.0151 | 0.0356 | 0.0211 | 0.0332
F4 | Mean | 0.1012 | 0.1879 | 0.1635 | 0.2060 | 0.1607 | 0.1253
F4 | Std | 0.0205 | 0.0269 | 0.0143 | 0.0313 | 0.0145 | 0.0372
F5 | Mean | 0.0368 | 0.0693 | 0.0478 | 0.0765 | 0.0408 | 0.0474
F5 | Std | 0.0120 | 0.0215 | 0.0080 | 0.0100 | 0.0092 | 0.0156
F6 | Mean | 0.3736 | 0.9456 | 0.2994 | 2.6998 | 0.3471 | 8.9296
F6 | Std | 0.7396 | 0.6040 | 0.1377 | 2.3663 | 0.1659 | 6.1643
F7 | Mean | 0.0144 | 0.1282 | 0.1152 | 0.0460 | 0.1356 | 0.0024
F7 | Std | 0.0034 | 0.1551 | 0.1336 | 0.0858 | 0.1640 | 0.0002
F8 | Mean | 0.0056 | 0.0049 | 0.0046 | 0.0033 | 0.0045 | 0.0033
F8 | Std | 0.0004 | 0.0003 | 0.0002 | 0.0004 | 0.0003 | 0.0004
F9 | Mean | 0.0029 | 0.0054 | 0.0040 | 0.0016 | 0.0038 | 0.0014
F9 | Std | 0.0007 | 0.0013 | 0.0078 | 0.0004 | 0.0079 | 0.0005
F10 | Mean | 0.0326 | 0.0737 | 0.0668 | 0.1139 | 0.0659 | 0.4046
F10 | Std | 0.0029 | 0.0560 | 0.0092 | 0.0421 | 0.0088 | 0.6052
F11 | Mean | 0.0673 | 0.0910 | 0.1282 | 0.1283 | 0.1123 | 0.2698
F11 | Std | 0.0021 | 0.0280 | 0.0108 | 0.0111 | 0.0081 | 0.0354
F12 | Mean | 0.0479 | 0.0479 | 0.0809 | 0.0749 | 0.0710 | 0.1076
F12 | Std | 0.0017 | 0.0020 | 0.0105 | 0.0064 | 0.0063 | 0.0138
F13 | Mean | 2.3580 | 5.4957 | 1.8312 | 3.1692 | 3.6821 | 11.6075
F13 | Std | 1.5159 | 4.9764 | 1.2674 | 1.7690 | 5.6489 | 7.3521
F14 | Mean | 0.0794 | 0.1037 | 0.1517 | 0.1632 | 0.1384 | 0.3530
F14 | Std | 0.0036 | 0.0114 | 0.0144 | 0.0150 | 0.0109 | 0.1802
F15 | Mean | 0.0544 | 0.0631 | 0.1090 | 0.1115 | 0.0939 | 0.1862
F15 | Std | 0.0013 | 0.0039 | 0.0144 | 0.0153 | 0.0150 | 0.0413
F16 | Mean | 0.0617 | 3.0757 | 3.2047 | 5.1204 | 3.772 | 3.2035
F16 | Std | 0.0356 | 2.5933 | 3.9127 | 6.6801 | 4.3618 | 1.5423
F17 | Mean | 0.5472 | 1.6414 | 1.6469 | 2.645 | 1.8104 | 3.3101
F17 | Std | 0.5158 | 0.2574 | 0.0799 | 0.6542 | 0.1086 | 2.0882
F18 | Mean | 0.5872 | 1.3375 | 2.0786 | 3.8000 | 2.2736 | 3.5535
F18 | Std | 0.6663 | 0.2587 | 0.5344 | 1.4332 | 0.1494 | 1.4231
F19 | Mean | 0.3440 | 0.5854 | 0.6099 | 0.7948 | 0.6164 | 1.3279
F19 | Std | 0.0866 | 0.0678 | 0.0206 | 0.1359 | 0.0474 | 0.7369
F20 | Mean | 0.1516 | 0.5913 | 0.3205 | 0.4557 | 0.3189 | 0.455
F20 | Std | 0.0157 | 0.0304 | 0.1426 | 0.2108 | 0.1402 | 0.1533
F21 | Mean | 0.1666 | 0.2738 | 0.2069 | 0.2829 | 0.2350 | 0.2991
F21 | Std | 0.0172 | 0.0198 | 0.0207 | 0.0296 | 0.0322 | 0.0485
F22 | Mean | 0.1226 | 0.2280 | 0.2304 | 0.2495 | 0.2277 | 0.2795
F22 | Std | 0.0064 | 0.0193 | 0.0447 | 0.0445 | 0.0380 | 0.0513
F23 | Mean | 0.1884 | 0.7838 | 0.3777 | 0.4095 | 0.4571 | 0.5305
F23 | Std | 0.0196 | 0.0739 | 0.1253 | 0.1956 | 0.1740 | 0.1430
F24 | Mean | 0.1548 | 0.3136 | 0.2753 | 0.3187 | 0.2970 | 0.3553
F24 | Std | 0.0093 | 0.0494 | 0.0296 | 0.0560 | 0.0423 | 0.0794
Table 3. The mean and standard deviation of IGDX obtained by running the algorithm 20 times.
Function | Metric | MOWSO | MOPSO | DN-NSGA-II | NSGA-II | Omni-Opt | SPEA2
F1 | Mean | 0.0519 | 0.1192 | 0.0848 | 0.1108 | 0.0812 | 0.0704
F1 | Std | 0.0055 | 0.0215 | 0.0136 | 0.0155 | 0.0132 | 0.0246
F2 | Mean | 0.0198 | 0.1895 | 0.0865 | 0.0754 | 0.1162 | 0.0683
F2 | Std | 0.0044 | 0.0921 | 0.0402 | 0.0229 | 0.0980 | 0.0524
F3 | Mean | 0.0359 | 0.0608 | 0.0958 | 0.1560 | 0.0842 | 0.0742
F3 | Std | 0.0099 | 0.0174 | 0.0150 | 0.0332 | 0.0211 | 0.0295
F4 | Mean | 0.1001 | 0.1829 | 0.1604 | 0.2014 | 0.1584 | 0.1218
F4 | Std | 0.0198 | 0.0248 | 0.0137 | 0.0267 | 0.0139 | 0.0319
F5 | Mean | 0.0361 | 0.0631 | 0.0474 | 0.0739 | 0.0400 | 0.0448
F5 | Std | 0.0114 | 0.0164 | 0.0076 | 0.0087 | 0.0085 | 0.0134
F6 | Mean | 0.2561 | 0.7731 | 0.2894 | 1.3362 | 0.3382 | 2.3822
F6 | Std | 0.3594 | 0.3972 | 0.1301 | 0.5809 | 0.1616 | 0.6387
F7 | Mean | 0.0140 | 0.1261 | 0.1125 | 0.0441 | 0.1339 | 0.0024
F7 | Std | 0.0032 | 0.1526 | 0.1327 | 0.0851 | 0.1642 | 0.0002
F8 | Mean | 0.0056 | 0.0049 | 0.0046 | 0.0033 | 0.0045 | 0.0033
F8 | Std | 0.0004 | 0.0003 | 0.0002 | 0.0004 | 0.0003 | 0.0004
F9 | Mean | 0.0029 | 0.0054 | 0.0040 | 0.0016 | 0.0038 | 0.0014
F9 | Std | 0.0007 | 0.0013 | 0.0078 | 0.0004 | 0.0079 | 0.0005
F10 | Mean | 0.0323 | 0.0936 | 0.0667 | 0.0957 | 0.0657 | 0.1342
F10 | Std | 0.0027 | 0.0593 | 0.0091 | 0.0182 | 0.0087 | 0.0425
F11 | Mean | 0.0673 | 0.0910 | 0.1282 | 0.1283 | 0.1123 | 0.2609
F11 | Std | 0.0021 | 0.0280 | 0.0108 | 0.0111 | 0.0081 | 0.0315
F12 | Mean | 0.0479 | 0.0476 | 0.0809 | 0.0749 | 0.0710 | 0.1033
F12 | Std | 0.0017 | 0.0020 | 0.0105 | 0.0064 | 0.0063 | 0.0107
F13 | Mean | 1.3629 | 2.1221 | 1.1746 | 1.7155 | 1.4404 | 2.9145
F13 | Std | 0.6273 | 0.8221 | 0.5368 | 0.6044 | 0.8486 | 0.5991
F14 | Mean | 0.0793 | 0.1037 | 0.1517 | 0.1630 | 0.1383 | 0.2651
F14 | Std | 0.0036 | 0.0114 | 0.0144 | 0.0151 | 0.0109 | 0.0708
F15 | Mean | 0.0543 | 0.0631 | 0.1090 | 0.1115 | 0.0939 | 0.1627
F15 | Std | 0.0012 | 0.0039 | 0.0144 | 0.0153 | 0.0150 | 0.0166
F16 | Mean | 0.0611 | 0.1772 | 0.1819 | 0.1777 | 0.1770 | 0.1967
F16 | Std | 0.0352 | 0.0446 | 0.0377 | 0.0388 | 0.0403 | 0.0114
F17 | Mean | 0.2146 | 0.2492 | 0.2504 | 0.2513 | 0.2503 | 0.2509
F17 | Std | 0.0266 | 0.0008 | 0.0002 | 0.0004 | 0.0004 | 0.0019
F18 | Mean | 0.1826 | 0.2446 | 0.2440 | 0.2472 | 0.2464 | 0.2455
F18 | Std | 0.0460 | 0.0015 | 0.0116 | 0.0004 | 0.0002 | 0.0045
F19 | Mean | 0.2398 | 0.2806 | 0.2910 | 0.3125 | 0.2885 | 0.3511
F19 | Std | 0.0177 | 0.0196 | 0.0097 | 0.0116 | 0.0115 | 0.0273
F20 | Mean | 0.1516 | 0.2668 | 0.2458 | 0.2677 | 0.2440 | 0.2955
F20 | Std | 0.0157 | 0.0018 | 0.0266 | 0.0252 | 0.0252 | 0.0222
F21 | Mean | 0.1640 | 0.2266 | 0.2043 | 0.2441 | 0.2191 | 0.2519
F21 | Std | 0.0136 | 0.0063 | 0.0159 | 0.0080 | 0.0142 | 0.0211
F22 | Mean | 0.1226 | 0.1756 | 0.2038 | 0.2143 | 0.1990 | 0.2388
F22 | Std | 0.0064 | 0.0118 | 0.0141 | 0.0130 | 0.0094 | 0.0160
F23 | Mean | 0.1883 | 0.3381 | 0.3023 | 0.3081 | 0.3074 | 0.3453
F23 | Std | 0.0193 | 0.0026 | 0.0252 | 0.0364 | 0.0342 | 0.0226
F24 | Mean | 0.1548 | 0.2309 | 0.2442 | 0.2619 | 0.2463 | 0.2779
F24 | Std | 0.0093 | 0.0280 | 0.0115 | 0.0165 | 0.0124 | 0.0284
Table 4. The mean and standard deviation of IGDF obtained by running the algorithm 20 times.
Function | Metric | MOWSO | MOPSO | DN-NSGA-II | NSGA-II | Omni-Opt | SPEA2
F1 | Mean | 0.0029 | 0.0113 | 0.0039 | 0.0029 | 0.0032 | 0.0033
F1 | Std | 0.0003 | 0.0034 | 0.0007 | 0.0002 | 0.0003 | 0.0002
F2 | Mean | 0.0104 | 0.1008 | 0.0104 | 0.0073 | 0.0116 | 0.0121
F2 | Std | 0.0010 | 0.0315 | 0.0105 | 0.0082 | 0.0238 | 0.0038
F3 | Mean | 0.0026 | 0.0057 | 0.0032 | 0.0026 | 0.0028 | 0.0034
F3 | Std | 0.0003 | 0.0024 | 0.0002 | 0.0002 | 0.0002 | 0.0002
F4 | Mean | 0.0027 | 0.0093 | 0.0034 | 0.0027 | 0.0029 | 0.0032
F4 | Std | 0.0003 | 0.0017 | 0.0002 | 0.0001 | 0.0002 | 0.0001
F5 | Mean | 0.0026 | 0.0088 | 0.0039 | 0.0026 | 0.0031 | 0.0033
F5 | Std | 0.0003 | 0.0034 | 0.0003 | 0.0001 | 0.0001 | 0.0002
F6 | Mean | 0.0050 | 0.1093 | 0.0037 | 0.0026 | 0.0030 | 0.0033
F6 | Std | 0.0032 | 0.1054 | 0.0003 | 0.0001 | 0.0001 | 0.0002
F7 | Mean | 0.0529 | 0.1325 | 0.1253 | 0.0526 | 0.1274 | 0.0101
F7 | Std | 0.0116 | 0.1351 | 0.118 | 0.0692 | 0.1333 | 0.0009
F8 | Mean | 0.0128 | 0.0160 | 0.0132 | 0.0113 | 0.0120 | 0.0139
F8 | Std | 0.0009 | 0.0013 | 0.0015 | 0.0010 | 0.0010 | 0.0012
F9 | Mean | 0.0043 | 0.0108 | 0.0047 | 0.0023 | 0.0042 | 0.0028
F9 | Std | 0.0016 | 0.0028 | 0.0079 | 0.0005 | 0.0080 | 0.0003
F10 | Mean | 0.0184 | 0.0226 | 0.0219 | 0.0138 | 0.0157 | 0.0176
F10 | Std | 0.0021 | 0.0049 | 0.0027 | 0.0007 | 0.0012 | 0.0012
F11 | Mean | 0.0847 | 0.0918 | 0.1383 | 0.1378 | 0.1245 | 0.2457
F11 | Std | 0.0023 | 0.0025 | 0.0088 | 0.0105 | 0.0068 | 0.0388
F12 | Mean | 0.0886 | 0.0914 | 0.1656 | 0.1426 | 0.1381 | 0.2355
F12 | Std | 0.0021 | 0.0022 | 0.0189 | 0.0115 | 0.0107 | 0.0344
F13 | Mean | 0.0350 | 0.0730 | 0.0159 | 0.0103 | 0.0229 | 0.0665
F13 | Std | 0.0297 | 0.0305 | 0.0135 | 0.0067 | 0.0256 | 0.0478
F14 | Mean | 0.0843 | 0.0965 | 0.1489 | 0.1415 | 0.1281 | 0.2797
F14 | Std | 0.0024 | 0.0029 | 0.0125 | 0.0118 | 0.0060 | 0.0642
F15 | Mean | 0.0902 | 0.1003 | 0.1821 | 0.1640 | 0.1492 | 0.2819
F15 | Std | 0.0027 | 0.0033 | 0.0191 | 0.0160 | 0.0137 | 0.0353
F16 | Mean | 0.0967 | 0.1764 | 0.1965 | 0.1917 | 0.1813 | 0.1951
F16 | Std | 0.0278 | 0.0244 | 0.0420 | 0.0400 | 0.0363 | 0.0161
F17 | Mean | 0.0829 | 0.0977 | 0.0972 | 0.0949 | 0.0961 | 0.0983
F17 | Std | 0.0072 | 0.0014 | 0.0014 | 0.0005 | 0.0011 | 0.0025
F18 | Mean | 0.0606 | 0.0924 | 0.0854 | 0.0828 | 0.0829 | 0.0833
F18 | Std | 0.0121 | 0.0042 | 0.0068 | 0.0001 | 0.0001 | 0.0039
F19 | Mean | 0.0948 | 0.1543 | 0.1579 | 0.1487 | 0.1507 | 0.1570
F19 | Std | 0.0185 | 0.0076 | 0.0058 | 0.0022 | 0.0023 | 0.0047
F20 | Mean | 0.1735 | 0.1982 | 0.2439 | 0.2373 | 0.2198 | 0.3295
F20 | Std | 0.0038 | 0.0028 | 0.0144 | 0.0094 | 0.0095 | 0.0449
F21 | Mean | 0.1707 | 0.1951 | 0.2460 | 0.2384 | 0.2272 | 0.3261
F21 | Std | 0.0029 | 0.0032 | 0.0085 | 0.0120 | 0.0079 | 0.0284
F22 | Mean | 0.1489 | 0.1648 | 0.2115 | 0.2114 | 0.1951 | 0.3297
F22 | Std | 0.0026 | 0.0032 | 0.0124 | 0.0107 | 0.0064 | 0.0363
F23 | Mean | 0.2110 | 0.2480 | 0.2755 | 0.2736 | 0.2610 | 0.3465
F23 | Std | 0.0051 | 0.0051 | 0.0140 | 0.0128 | 0.0104 | 0.0407
F24 | Mean | 0.1801 | 0.2037 | 0.2478 | 0.2489 | 0.2328 | 0.3497
F24 | Std | 0.0038 | 0.0032 | 0.0109 | 0.0125 | 0.0088 | 0.0459
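Tables 3 and 4 report IGDX and IGDF, i.e., the inverted generational distance computed in the decision space and in the objective space, respectively. The sketch below shows the underlying calculation under its usual definition (mean Euclidean distance from each reference point to its nearest obtained point); the array names are illustrative and not taken from the paper's code.

```python
import numpy as np

def igd(reference: np.ndarray, obtained: np.ndarray) -> float:
    """Mean Euclidean distance from every reference point to its nearest
    obtained point (lower is better)."""
    diff = reference[:, None, :] - obtained[None, :, :]   # shape (R, A, dim)
    dists = np.linalg.norm(diff, axis=2)                  # pairwise distances, shape (R, A)
    return float(dists.min(axis=1).mean())

# IGDX: pass decision-space samples of the true and obtained Pareto sets.
# IGDF: pass the corresponding objective-space points (Pareto fronts).
```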
Table 5. The mean and standard deviation of 1/HV obtained by running the algorithm 20 times.
Function | Metric | MOWSO | MOPSO | DN-NSGA-II | NSGA-II | Omni-Opt | SPEA2
F1 | Mean | 1.1465 | 1.3347 | 1.1485 | 1.1466 | 1.1472 | 1.1475
F1 | Std | 0.0008 | 0.3178 | 0.0013 | 0.0009 | 0.0008 | 0.0006
F2 | Mean | 1.1651 | 1.3838 | 1.1583 | 1.1535 | 1.1575 | 1.1829
F2 | Std | 0.0026 | 0.0798 | 0.0140 | 0.0102 | 0.0264 | 0.0536
F3 | Mean | 1.8561 | 1.0105 | 1.8581 | 1.8530 | 1.8552 | 1.8593
F3 | Std | 0.0030 | 3.0166 | 0.0009 | 0.0006 | 0.0009 | 0.0023
F4 | Mean | 1.1462 | 1.1742 | 1.1481 | 1.1457 | 1.1466 | 1.1475
F4 | Std | 0.0007 | 0.0578 | 0.0012 | 0.0004 | 0.0005 | 0.0005
F5 | Mean | 1.1458 | 1.4250 | 1.1495 | 1.1457 | 1.1470 | 1.1473
F5 | Std | 0.0007 | 0.2880 | 0.0011 | 0.0003 | 0.0003 | 0.0005
F6 | Mean | 2.4015 | 13.3576 | 2.3822 | 2.3715 | 2.3743 | 2.4049
F6 | Std | 0.0286 | 55.8386 | 0.0036 | 0.0006 | 0.0006 | 0.0175
F7 | Mean | 0.0810 | 0.0812 | 0.0807 | 0.0795 | 0.0808 | 0.0777
F7 | Std | 0.0011 | 0.0039 | 0.0032 | 0.0026 | 0.0038 | 0.0000
F8 | Mean | 0.0689 | 0.0689 | 0.0689 | 0.0689 | 0.0689 | 0.0689
F8 | Std | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
F9 | Mean | 0.6372 | 0.6417 | 0.6447 | 0.6356 | 0.6445 | 0.6365
F9 | Std | 0.0015 | 0.0033 | 0.0400 | 0.0005 | 0.0401 | 0.0010
F10 | Mean | 0.0543 | 0.0543 | 0.0543 | 0.0543 | 0.0543 | 0.0543
F10 | Std | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
F11 | Mean | 0.3856 | 0.3538 | 0.3275 | 0.3555 | 0.3392 | 0.4043
F11 | Std | 0.0352 | 0.0118 | 0.0172 | 0.0085 | 0.0148 | 0.0611
F12 | Mean | 0.2543 | 0.2399 | 0.2378 | 0.2396 | 0.2381 | 0.2697
F12 | Std | 0.0085 | 0.0067 | 0.0168 | 0.0074 | 0.0115 | 0.0254
F13 | Mean | 1.2210 | 0.8559 | 6.5633 | 1.1607 | 1.1723 | 1.1734
F13 | Std | 0.0828 | 2.8956 | 8.5302 | 0.0102 | 0.0269 | 0.4056
F14 | Mean | 0.3835 | 0.3361 | 0.3172 | 0.3486 | 0.3321 | 0.4115
F14 | Std | 0.0202 | 0.0140 | 0.0183 | 0.0091 | 0.0131 | 0.0851
F15 | Mean | 0.2544 | 0.2331 | 0.2352 | 0.2323 | 0.2362 | 0.2746
F15 | Std | 0.0154 | 0.0060 | 0.0102 | 0.0056 | 0.0118 | 0.0283
F16 | Mean | 0.0815 | 0.0809 | 0.0809 | 0.0800 | 0.0808 | 0.0779
F16 | Std | 0.0010 | 0.0040 | 0.0029 | 0.0025 | 0.0031 | 0.0007
F17 | Mean | 0.0689 | 0.0689 | 0.0689 | 0.0689 | 0.0689 | 0.0689
F17 | Std | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
F18 | Mean | 0.6377 | 0.6426 | 0.6453 | 0.6355 | 0.6356 | 0.6365
F18 | Std | 0.0014 | 0.0045 | 0.0400 | 0.0001 | 0.0000 | 0.0012
F19 | Mean | 0.0543 | 0.0543 | 0.0543 | 0.0543 | 0.0543 | 0.0543
F19 | Std | 0.0000 | 0.0001 | 0.0000 | 0.0000 | 0.0000 | 0.0000
F20 | Mean | 0.2558 | 0.2367 | 0.2407 | 0.2386 | 0.2356 | 0.2674
F20 | Std | 0.0099 | 0.0066 | 0.0147 | 0.0068 | 0.0118 | 0.0314
F21 | Mean | 0.2562 | 0.2360 | 0.2430 | 0.2349 | 0.2327 | 0.2707
F21 | Std | 0.0115 | 0.0054 | 0.0195 | 0.0063 | 0.0085 | 0.0338
F22 | Mean | 0.2517 | 0.2323 | 0.2306 | 0.2339 | 0.2312 | 0.2692
F22 | Std | 0.0116 | 0.0053 | 0.0091 | 0.0056 | 0.0063 | 0.0325
F23 | Mean | 0.2538 | 0.2326 | 0.2314 | 0.2337 | 0.2350 | 0.2763
F23 | Std | 0.0108 | 0.0068 | 0.0102 | 0.0058 | 0.0088 | 0.038
F24 | Mean | 0.2544 | 0.2341 | 0.2329 | 0.2350 | 0.2312 | 0.2653
F24 | Std | 0.0166 | 0.0061 | 0.0128 | 0.0058 | 0.0090 | 0.0311
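Table 5 uses the reciprocal of the hypervolume indicator, so smaller values again favour an algorithm. For a bi-objective minimization front, the hypervolume can be computed by slicing the dominated region into rectangles, as in the following sketch; it assumes the points are mutually non-dominated and that the reference point is dominated by all of them, and the names are illustrative rather than taken from the paper's code.

```python
import numpy as np

def hypervolume_2d(front: np.ndarray, ref: tuple) -> float:
    """Area dominated by a bi-objective (minimization) non-dominated front,
    bounded by the reference point ref = (r1, r2)."""
    pts = front[np.argsort(front[:, 0])]      # ascending f1 => descending f2 on a non-dominated front
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # rectangular slice contributed by this point
        prev_f2 = f2
    return hv
```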
Table 6. Friedman rank test.
Algorithm | 1/PSP Score (Rank) | IGDX Score (Rank) | IGDF Score (Rank) | 1/HV Score (Rank) | Result Score (Rank)
MOWSO | 1.5000 (1) | 1.4167 (1) | 1.9167 (1) | 4.2083 (5) | 2.2604 (1)
MOPSO | 3.8750 (4) | 3.6250 (4) | 4.2083 (4) | 3.8542 (4) | 3.8906 (5)
NSGA-II | 3.3750 (3) | 3.5833 (3) | 4.5833 (5) | 3.3125 (3) | 3.7135 (4)
DN-NSGA-II | 4.4583 (5) | 4.6042 (6) | 2.5000 (2) | 2.2500 (1) | 3.4531 (3)
Omni-Opt | 3.3333 (2) | 3.2500 (2) | 3.0000 (3) | 2.8333 (2) | 3.1042 (2)
SPEA2 | 4.4583 (6) | 4.5208 (5) | 4.7017 (6) | 4.5417 (6) | 4.5781 (6)
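The Friedman test ranks the algorithms on each test function and then averages the ranks per indicator, which is what the Score columns in Table 6 summarize. A minimal sketch of that calculation is given below; the `scores` matrix is an illustrative placeholder with one row per test function and one column per algorithm, and all four indicators are treated as smaller-is-better.

```python
import numpy as np
from scipy.stats import rankdata, friedmanchisquare

def friedman_average_ranks(scores: np.ndarray) -> np.ndarray:
    """Average rank of each algorithm (column) over all problems (rows),
    ranking smaller values as better within every row."""
    ranks = np.vstack([rankdata(row) for row in scores])
    return ranks.mean(axis=0)

# Significance of the rank differences (one sample passed per algorithm):
# stat, p_value = friedmanchisquare(*scores.T)
```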
Table 7. Setting of algorithm parameters.
Algorithm | Population | τ | f_max | f_min | p_max | p_min
MOWSO | 50 | 4.125 | 0.75 | 0.07 | 1.5 | 0.5
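For reference, the Table 7 settings can be bundled into a single configuration object when implementing MOWSO; the field names below follow the usual WSO notation [40], but the structure itself is an illustrative choice rather than the authors' code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MOWSOConfig:
    pop_size: int = 50      # number of white sharks
    tau: float = 4.125      # acceleration coefficient τ
    f_max: float = 0.75     # maximum frequency of the wavy motion
    f_min: float = 0.07     # minimum frequency of the wavy motion
    p_max: float = 1.5      # upper bound of the force control parameters
    p_min: float = 0.5      # lower bound of the force control parameters

cfg = MOWSOConfig()  # defaults reproduce the values listed in Table 7
```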
Table 8. Partial Pareto optimal solutions.
For each algorithm (MOWSO, MOPSO, SPEA2), three solutions are reported: the one with minimum F1, the one with maximum F2, and the one with maximum F3; each row gives the corresponding objective values (as in the source, the row values are listed run together).
F192,801112,556184,41696,88210,202186,53994,98195,258114,161
F2424619434518455238
F3844369313418470270207603
Table 9. Pareto optimal solutions obtained by MOWSO.
Each solution is reported as an x row and a y row giving the coordinates of facility points 1–9, together with the objective function values F1, F2, and F3 (as in the source, the row values are listed run together).
1x73931949230955877639420552066318562993,8054674
y60388433435510,1183380905812,60317506574
2x465921716945582076866563184137634539102,61448195
y89799328557210,94922918014364211,1966629
3x246440013289665170274461383524392793155,10625633
y164320519725678112,46711,076385011,1486162
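Table 9 gives, for each Pareto solution, the coordinates of the nine facility points followed by the objective values. As context for how such a layout can be scored, the sketch below evaluates two ingredients of the location model described earlier: a Weber-type total demand-weighted distance and the number of residents covered within a service radius. The village coordinates, demands, and radius are illustrative inputs rather than the case-study data, and the function names are my own.

```python
import numpy as np

def weber_cost(facilities: np.ndarray, villages: np.ndarray, demand: np.ndarray) -> float:
    """Total demand-weighted Euclidean distance from each village to its nearest facility."""
    d = np.linalg.norm(villages[:, None, :] - facilities[None, :, :], axis=2)
    return float((demand * d.min(axis=1)).sum())

def covered_residents(facilities: np.ndarray, villages: np.ndarray,
                      demand: np.ndarray, radius: float) -> float:
    """Residents living within the service radius of at least one facility."""
    d = np.linalg.norm(villages[:, None, :] - facilities[None, :, :], axis=2)
    return float(demand[d.min(axis=1) <= radius].sum())

# facilities: array of shape (9, 2) with the x/y coordinates of one candidate layout;
# villages:   array of shape (n, 2); demand: array of shape (n,) with residents per village.
```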
Table 10. Mean objective function values obtained by each algorithm.
Columns: F1, F2, and F3 for MOWSO, MOPSO, and SPEA2, respectively (as in the source, the row values are listed run together).
Mean127635120513103392231278361219
Table 11. Minimum and mean SP and UD values over 10 runs of each algorithm.
Metric | MOWSO Min | MOWSO Mean | MOPSO Min | MOPSO Mean | SPEA2 Min | SPEA2 Mean
SP201557549524219.67209516,643
UD | 0.1175 | 0.2113 | 0.1475 | 0.2277 | 0.1119 | 0.2328
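SP in Table 11 is Schott's spacing metric [54], which measures how evenly the obtained front is distributed (lower is better); UD is a uniformity measure reported alongside it. A minimal sketch of the spacing calculation under its standard definition is given below; the input is the set of objective vectors of the obtained non-dominated solutions.

```python
import numpy as np

def spacing(front: np.ndarray) -> float:
    """Schott's spacing: sample standard deviation of each point's Manhattan
    distance to its nearest neighbour on the front (lower = more uniform)."""
    n = front.shape[0]
    d = np.abs(front[:, None, :] - front[None, :, :]).sum(axis=2)  # pairwise L1 distances
    np.fill_diagonal(d, np.inf)                                    # exclude self-distances
    nearest = d.min(axis=1)
    return float(np.sqrt(((nearest - nearest.mean()) ** 2).sum() / (n - 1)))
```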