Article

Solving Multi-Objective Satellite Data Transmission Scheduling Problems via a Minimum Angle Particle Swarm Optimization

1
School of Computer Science, Shaanxi Normal University, Xi’an 710119, China
2
School of Electronic Engineering, Xidian University, Xi’an 710126, China
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(1), 14; https://doi.org/10.3390/sym17010014
Submission received: 27 November 2024 / Revised: 17 December 2024 / Accepted: 23 December 2024 / Published: 25 December 2024
(This article belongs to the Section Computer)

Abstract

With the increasing number of satellites and rising user demands, the volume of satellite data transmissions is growing significantly. Existing scheduling systems suffer from unequal resource allocation and low transmission efficiency. Therefore, effectively addressing the large-scale multi-objective satellite data transmission scheduling problem (SDTSP) within a limited timeframe is crucial. Swarm intelligence algorithms are typically used to address the SDTSP. While these methods perform well in simple task scenarios, they tend to become stuck in local optima when dealing with complex situations, failing to meet mission requirements. In this context, we propose an improved method based on the minimum angle particle swarm optimization (MAPSO) algorithm. The MAPSO algorithm is encoded as a discrete optimizer to solve discrete scheduling problems, and the angle calculation is reformulated with the sine function according to the problem's characteristics to handle complex multi-objective problems. The algorithm employs a minimum angle strategy to select local and global optimal particles, enhancing solution efficiency and avoiding local optima. Additionally, the objective space and solution space exhibit symmetry: the search within the solution space continuously improves the distribution of fitness values in the objective space, and the evaluation of the objective space guides the search within the solution space. The proposed method solves multi-objective SDTSPs and meets the demands of complex scenarios, significantly outperforming seven comparison algorithms. Experimental results demonstrate that the algorithm effectively improves the allocation efficiency of satellite and ground station resources and shortens the transmission time of satellite data transmission tasks.

1. Introduction

The rapid development of the space industry has led to a significant increase in the number of satellites in orbit. However, ground station resources remain relatively fixed, resulting in brief communication windows and substantial data transmission demands, posing a global challenge. By the end of 2022, Europe had more than 50 imaging satellites. All types of satellites, including remote sensing, communication, and navigation satellites, rely on stable data transmission to maintain contact with their ground stations, which is essential for their functionality and value. The efficiency of data transmission between satellites and ground stations is a critical factor in the performance of satellite communication systems. Therefore, efficiently scheduling multi-objective satellite data transmission scheduling problems (SDTSPs) has become increasingly urgent [1].
Data transmission scheduling is an NP-hard problem [2]. Also, adding more ground stations cannot address the rapidly growing number of satellites. Research on the conflict between limited ground station resources and the growing number of satellites has been ongoing for over a decade. In the 1990s, Gooley [3] used a mixed-integer programming model to address the scheduling problem of the satellite control network (AFSCN) in the United States. Later, Barbulescu [4] employed heuristic algorithms for the same issue and summarized the evolution of the AFSCN problem. Since then, many scholars have applied various evolutionary algorithms to address these problems [5,6]. For complex SDTSPs, swarm intelligence algorithms have become a primary solution method [7,8,9]. In recent years, new technologies have emerged [10] to solve satellite data transmission issues.
Although early studies have achieved some success in exploring the SDTSP, they have not fully considered the cyclical nature of resource competition, the unique aspects of large-scale satellite data transmission tasks (SDTTs), and the multi-dimensional optimization objectives. For instance, Earth observation satellites operate in fixed orbits, and ground stations have relatively fixed antenna receiving areas. When satellites enter these areas, they periodically compete with ground stations for data transmission, especially during large-scale transmissions. Traditional heuristic algorithms, such as simulated annealing or genetic algorithms, often fail to handle large-scale data transmission problems because of their complexity and time-consuming global search processes. Furthermore, current scheduling algorithms frequently fall into locally optimal solutions, failing to use the available time window resources fully.
To tackle these challenges effectively, employing suitable algorithms for managing large-scale multi-objective SDTSPs [11] is essential. Multi-objective evolutionary algorithms (MOEAs) have been widely used due to their excellent performance in practical problems [12,13]. The core of these algorithms lies in decision-making methods, typically categorized into Pareto dominance-based [14], index-based [15], and decomposition-based approaches [16]. However, for large-scale multi-objective optimization problems like SDTSP, traditional MOEAs often perform poorly, and research on large-scale multi-objective optimization algorithms remains limited.
Based on the Pareto dominance method, this paper develops a multi-objective SDTSP model to maximize transmission efficiency while minimizing latency. In addition, we enhance the minimum angle selection strategy of the particle swarm optimization (PSO) algorithm. This improvement effectively identifies global and local optimal particles, mitigating the risk of local optima and enhancing the efficiency of solving the SDTSP. To assess the effectiveness of this method, we conducted simulation experiments on tasks of different sizes. The findings show that the proposed method displays robust generalization capabilities and substantially enhances the efficiency of satellite data transmission tasks.
The contributions of this manuscript are as follows:
  • The SDTSP was modeled as an optimization problem with two objectives: minimizing the satellite transmission time and maximizing task gain simultaneously;
  • We proposed a modified MAPSO algorithm to solve the multi-objective SDTS problem. The proposed algorithm was encoded as a discrete optimizer, and its selection strategies were modified to enhance its search capability.
Here is the organization of this paper: Section 2 introduces the satellite data transmission problem and establishes its mathematical model. Section 3 describes our MAPSO based on the problem’s key characteristics. Section 4 details the experimental design and evaluates the findings. Finally, Section 5 offers conclusions and suggestions for further research.

2. Problem Description

The data transmission problem in satellite communication can be considered a combinatorial optimization problem: transmission resources at a ground station must be assigned to each satellite while it is in view. This problem can be broken down into three key decision stages:
  • Ensuring that the allocation order meets all constraints;
  • Determining the order of time window allocation for each satellite;
  • Minimizing the satellite transmission time and maximizing task gain.

2.1. Symbol Definition

The relevant symbols are described below:
$T = \{t_i \mid 1 \le i \le n_t\}$ represents the set of data transmission tasks, where $|T| = n_t$. $S = \{s_j \mid 1 \le j \le n_s\}$ represents the collection of satellites, where $|S| = n_s$. $G = \{g_k \mid 1 \le k \le n_g\}$ represents the set of ground stations, where $|G| = n_g$. $A_{s_j} = \{a_{s_j}^{m} \mid 1 \le m \le m_{s_j}\}$ represents the set of antennas of satellite $s_j$. The antennas at ground station $g_k$ form the set $A_{g_k}$ with $|A_{g_k}| = d_{g_k}$, and each antenna $a_{g_k}^{d}$ is associated with a frequency band $af_{g_k}^{d}$. For each task, indexed by $i, j, k, m$, the task symbols' definitions are given in Table 1.
$W = \{w_{jkz} \mid 1 \le j \le n_s,\ 1 \le k \le n_g,\ 1 \le z \le n_w\}$ represents the set of visibility time windows between ground stations and satellites, where $|W| = n_w$. Definitions of the visibility time window symbols are given in Table 2.
Since PSO is typically used for solving continuous problems and this scheduling problem is discrete, we use a binary encoding to represent each satellite's selection state: a selected satellite is denoted by 1, and an unselected satellite is denoted by 0. Based on this encoding, the decision variables for the data transmission tasks are established; these decision variables ensure that each data transmission is matched between a satellite and a ground station.
$\xi_{ikj} = 1$ indicates that satellite $s_j$ transmits data to ground station $g_k$ during the time window, and $\xi_{ikj} = 0$ indicates that it does not. $\chi_{ikj} = 1$ indicates that ground station $g_k$ receives data from satellite $s_j$, while $\chi_{ikj} = 0$ indicates that it does not.
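As a concrete illustration of this binary encoding, the following Python sketch builds the decision variables for a toy instance; the dimensions, variable names, and random assignment are illustrative assumptions only (the paper's own experiments are implemented in MATLAB).

```python
import numpy as np

# Toy dimensions, assumed only for illustration: 3 satellites, 2 ground stations, 4 windows.
n_sat, n_gs, n_win = 3, 2, 4
rng = np.random.default_rng(0)

# xi[i, k, z] = 1 means satellite i transmits to ground station k in window z (0 otherwise);
# chi mirrors the receiving side, as with the xi/chi decision variables above.
xi = np.zeros((n_sat, n_gs, n_win), dtype=int)
for i in range(n_sat):
    if rng.random() < 0.8:          # satellite i is "selected" (encoded as 1)
        k = rng.integers(n_gs)      # assigned ground station
        z = rng.integers(n_win)     # assigned visibility window
        xi[i, k, z] = 1

chi = xi.copy()                     # receiving side matches the transmitting side here
print(int(xi.sum()), "tasks scheduled")
```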

2.2. Assumptions

To keep the analysis tractable, we make the following assumptions:
(1)
Visibility of Satellites and Ground Stations: Each satellite will maintain visibility with at least one ground station during its orbital period, ensuring that data transmission can be completed within the planned time window.
(2)
Uninterrupted Data Transmission: Communication between satellites and ground stations is assumed to be free from interference or blockages, ensuring the stability and continuity of data transmission.
(3)
Stable Power Supply: All satellites and ground stations are assumed to have a stable power supply. Specifically, satellites are considered to have sufficient power from batteries and solar panels to support long-duration continuous transmission tasks.
(4)
Independence of Tasks: Each data transmission task is independent, with no dependencies between tasks.
(5)
Constant Transmission Rate: The data transmission rate between satellites and ground stations is assumed to be constant, unaffected by environmental changes or equipment performance fluctuations, ensuring the predictability of task planning and execution.

2.3. Constraints

The following constraints are established based on the symbols and assumptions:
Constraint (1) guarantees that each task is scheduled within its designated transmission window:
$\forall i:\ st_{t_i} \le stw_i < etw_i \le et_{t_i}$
Constraint (2) ensures that each task is scheduled within the visibility time windows between ground stations and satellites:
$\forall i, j, k, z:\ \chi_{ikj} = 1 \wedge \xi_{ikj} = 1 \Rightarrow sw_{jkz} \le stw_i < etw_i \le ew_{jkz}$
Constraint (3) indicates that there should be a correspondence in the frequency bands between the satellite and ground station antenna:
$\forall i, j, k, d, z:\ \chi_{ikj} = 1 \wedge \xi_{ikj} = 1 \Rightarrow af_{s_j}^{m} = f_{t_i} \wedge af_{g_k}^{d} = f_{t_i}$
Constraint (4) indicates that a single ground station antenna cannot handle multiple tasks concurrently:
$\forall i, j, k;\ \forall i_1, i_2\ (i_1 \neq i_2):\ \chi_{i_1 kj} \cdot \chi_{i_2 kj} = 0 \vee TW_{i_1} \cap TW_{i_2} = \emptyset$
Constraint (5) guarantees that each available task is allocated to no more than one ground station antenna:
$\forall i, j, k, d, z;\ \forall k_1, k_2, j_1, j_2\ (k_1 \neq k_2 \vee d_1 \neq d_2):\ \chi_{i k_1 j_1} + \chi_{i k_2 j_2} \le 1$
Constraint (6) ensures that each task’s allocated transmission time slot meets the minimum transmission duration criteria:
$\forall i:\ dur_{t_i} \le etw_i - stw_i$
Constraint (7) specifies that each task is scheduled at most once:
$\forall i, j, k:\ \sum_{i \in T} \chi_{ikj} \cdot \xi_{ikj} \le 1$
Constraint (8) guarantees that the interval between two tasks should be sufficient to accommodate the antenna transition time. C is a constant:
$\forall i_1, i_2, j_1, j_2, m_1, m_2, z_1, z_2, k, d:\ \chi_{i_1 kj} \cdot \chi_{i_2 kj} = 1 \Rightarrow \max(stw_{i_1}, stw_{i_2}) - \min(etw_{i_1}, etw_{i_2}) \ge C$
Constraints (9) and (10) define the scope of values for the choice variables:
$\forall i, j, m, k, d, z:\ x_{ijmkdz} \in \{0, 1\}$
$\forall i:\ stw_i, etw_i \ge 0$
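To make the window-related constraints concrete, the minimal sketch below checks Constraints (1), (2), and (6) for a single candidate assignment; the dictionary fields and numerical values are assumptions introduced only for this example.

```python
def window_feasible(task, window, min_duration):
    """Check Constraints (1), (2), and (6) for one task/window assignment.

    task:   dict with st, et (allowed task time window) and stw, etw (scheduled slot)
    window: dict with sw, ew (visibility window between the satellite and the station)
    """
    c1 = task["st"] <= task["stw"] < task["etw"] <= task["et"]      # Constraint (1)
    c2 = window["sw"] <= task["stw"] < task["etw"] <= window["ew"]  # Constraint (2)
    c6 = (task["etw"] - task["stw"]) >= min_duration                # Constraint (6)
    return c1 and c2 and c6

# Example with assumed times (minutes from the start of the planning horizon):
task = {"st": 0, "et": 60, "stw": 10, "etw": 25}
window = {"sw": 5, "ew": 30}
print(window_feasible(task, window, min_duration=7))   # True
```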

2.4. Optimization Objectives

Two objective equations are defined based on the characteristics of the data transmission tasks. The objective function $F$ encompasses two objectives, $f_1$ and $f_2$:
  • Maximize task gain: the task gain to be maximized is the sum, over scheduled tasks, of the product of satellite and ground station priorities, as given in Equation (11):
    $f_1 = \max \sum_{i=1}^{n} p_{t_i} \cdot g_{k_i}$
  • Minimize transmission time: Reduce the total time needed to accomplish the data transmission tasks, as calculated by the following Equation (12):
    $f_2 = \min\,(ET - ST)$
The comprehensive objective function, denoted as F , is formulated as shown in Equation (13):
$\max F = \{f_1,\ f_2\}$
The proposed model effectively simplifies the complexities of large-scale satellite data transmission scheduling constraints to ensure that the scheduling of satellite data transmissions is efficient.
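As a minimal sketch of how the two objectives in Equations (11)-(13) can be evaluated for one candidate schedule (the schedule structure and priority values are assumed purely for illustration):

```python
def evaluate(schedule):
    """Return (f1, f2) for one schedule.

    schedule: list of scheduled tasks, each a dict with
      p        - satellite/task priority
      g        - priority of the receiving ground station
      stw, etw - scheduled transmission start/end times
    """
    # f1 (Eq. 11): total task gain, the sum of priority products (maximized).
    f1 = sum(t["p"] * t["g"] for t in schedule)
    # f2 (Eq. 12): overall span ET - ST of the schedule (minimized).
    f2 = max(t["etw"] for t in schedule) - min(t["stw"] for t in schedule)
    return f1, f2

schedule = [{"p": 8, "g": 10, "stw": 0, "etw": 12},
            {"p": 5, "g": 7, "stw": 15, "etw": 22}]
print(evaluate(schedule))   # (115, 22)
```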

3. Methodology

3.1. Minimum Angle Particle Swarm Optimization

In multi-objective evolutionary algorithms, the MAPSO algorithm was first proposed by Gong [17] in 2005 as one of the pioneering angle-based evolutionary algorithms. In 2021, Yang [18] improved upon this algorithm by using vector-based angles to select elite particles, addressing the rapid convergence issue of PSO algorithms and finding wide applications in various fields [19,20].
To address the high complexity and large scale of data transmission tasks, we employ and improve the MAPSO algorithm [18] to balance convergence and diversity in the external archive set. By utilizing the minimum angle strategy to select global guides, the algorithm effectively avoids local optima, thereby improving solution quality, reducing solution time, and enhancing the efficiency of satellite data transmission tasks.

3.2. General Framework

The flowchart of the MAPSO algorithm is illustrated in Figure 1. Initially, parameters such as the number of ground stations, satellites, and the population size are input. A set of uniformly distributed weight vectors [21] is then generated. The population $P$ is initialized by generating a group of randomly distributed particles in space with initial velocities $v_i$ (where $\{v_i \mid 1 \le i \le n,\ i \in \mathbb{Z}\}$). An initial archive $A$ is established to store the non-dominated solutions of population $P$, which is updated based on the clone strategy.
After completing the population initialization, we employ the archive update strategy to update the external archive. At the start of each iteration, the contents of archive A are copied to prevent the loss of solution sets. The PSO search strategy selects leaders from the archive to update the population. After updating the population, the archive update strategy is employed again to update the sets P and A . The procedure is repeated until the end condition is met. The algorithm comprises three key components: the clone, the PSO search process, and the archive update operation. The upcoming sections will offer a detailed introduction to these three components.

3.3. Clone

As shown in Algorithm 1, the clone steps aim to enhance the diversity and convergence of the external archive A , thereby accelerating the PSO convergence speed, particularly for the parameter-independent MAPSO algorithm. The process involves the following steps: initialization, SDE distance calculation, selection and duplication, and generation of offspring population. The clone process is illustrated in Figure 2. Below are the detailed descriptions.
Initialization: First, copy the external archive A to a temporary archive B and initialize a temporary archive C as empty. Make sure that Archive A remains intact in the following steps.
Shift-based density estimation (SDE) distance calculation: SDE is an improved density estimation strategy designed to address the challenges faced by Pareto-based evolutionary multi-objective optimization algorithms when handling multi-objective optimization problems. Traditional density estimation methods only reflect the distribution of individuals in the population, while SDE considers individuals’ distribution and convergence information. Calculate the SDE distance for each solution b i in B . The SDE distance measures the diversity of the solutions. Refer to the relevant literature for specific calculation steps and equations [22].
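For readers unfamiliar with SDE, the sketch below shows one common nearest-neighbor variant (shift each competitor's objective vector toward the solution being evaluated, then take the distance to the closest shifted competitor); the exact estimator used in MAPSO follows [22], so this should be read as an assumed simplification for minimization objectives.

```python
import numpy as np

def sde_distances(F):
    """Shift-based density estimation, nearest-neighbor variant (minimization).

    F: (n, m) array of objective vectors. Larger returned values indicate
    solutions lying in sparser, better-converged regions.
    """
    n = F.shape[0]
    d = np.full(n, np.inf)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            shifted = np.maximum(F[j], F[i])   # shift j's better objectives up to i's values
            d[i] = min(d[i], np.linalg.norm(shifted - F[i]))
    return d

F = np.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1], [0.45, 0.55]])
print(sde_distances(F))
```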
Algorithm 1 Clone Process
Input: External repository A
Output: New temporary solution set S
  • B ← A
  • C ← ∅
  • AssignSDE(B)
  • sort(B) // sort B based on the SDE metric
  • if Length(B) > NC then
  • B ← B[1:NC] // keep the first NC solutions
  • end if
  • for i = 1 to Length(B) do
  • Calculate n_i
  • Create n_i solutions for b_i and append them to C
  • end for
  • S ← modify(C)
  • Return S
Selection and duplication: Next, select NC solutions from the temporary archive B for duplication. The number of duplications n i for each solution b i is calculated using the following (14):
$n_i = N \times \frac{SDE(b_i)}{\sum_{i=1}^{NC} SDE(b_i)}$
where $SDE(b_i)$ denotes the SDE distance of solution $b_i$, and $N$ is the total number of solutions to be duplicated. This step ensures that solutions with larger SDE distances obtain more duplication opportunities, thereby enhancing the diversity and convergence of the population.
Generation of offspring population: Each solution b i is duplicated n i times and added to the temporary archive C , forming a new offspring population C . This population then undergoes evolutionary search, applying operations such as crossover and mutation, resulting in a new offspring population S . The clone selects superior solutions based on SDE distance, ensuring that the external archive A maintains good diversity and convergence, effectively guiding the evolution of population P in PSO and avoiding a local optimum. The evolutionary search strategy in MAPSO is implemented in the same manner as in NSGA-II [23].
The clone guarantees the population’s diversity and convergence and accelerates MAPSO convergence speed. Through these steps, the external archive A exhibits excellent diversity and convergence, promoting the practical evolution of the MAPSO population P , making the algorithm more efficient in handling multi-objective optimization problems.
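A minimal sketch of the duplication step of Equation (14) follows; the rounding and remainder rule are assumptions added only to make the counts integer and sum to N, and the subsequent NSGA-II-style crossover/mutation step [23] is omitted.

```python
import numpy as np

def clone_counts(sde, N):
    """Number of copies per selected solution, proportional to its SDE distance (Eq. 14)."""
    weights = np.asarray(sde, dtype=float)
    counts = np.floor(N * weights / weights.sum()).astype(int)
    # Give any remainder to the sparsest solutions (assumed tie-breaking rule).
    while counts.sum() < N:
        k = int(np.argmax(weights))
        counts[k] += 1
        weights[k] = 0.0
    return counts

sde = [0.40, 0.25, 0.10, 0.05]
print(clone_counts(sde, N=10))   # [6 3 1 0]
```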

3.4. Archive Update

The archive update step in Algorithm 2 determines which solutions will be added to archive A. The performance of multi-objective algorithms is closely tied to the selection of the external archive set. Therefore, choosing solutions that exhibit good convergence and diversity is crucial. The archive update process is illustrated in Figure 3. Below are the detailed descriptions.
Algorithm 2 Archive Update Process
Input: External repository A, population P, population size N
Output: Updated repository A
  • C ← A ∪ P
  • if A is not empty then
  • C ← Normalize(C)
  • C ← EvaluateFitness(C)
  • Cnd ← SelectNonDominated(C) // keep only the non-dominated solutions Cnd in C
  • if size(Cnd) > N then
  • A ← SelectMaxMinAngle(Cnd, A)
  • else
  • A ← Cnd
  • end if
  • else
  • A ← C
  • end if
  • Return A
The archive update operation updates the external archive A through the following steps:
  • Merge archives: Combine archive A with population P , creating a joint population C that includes both A and P .
  • Normalization: Normalize each solution in the joint population C , standardizing the objective values to a range between 0 and 1.
  • Fitness calculation: calculate the fitness value for each solution in $C$ using Equation (15):
    $\mathrm{fit}(c_j) = \sum_{i=1}^{m} f_i(c_j)^2$
  • Non-dominated sorting: Pareto dominance is used to retain the non-dominated solutions of C, forming the non-dominated solution set Cnd.
    • If the number of solutions in Cnd exceeds the population size N , use the minimum angle selection strategy to select solutions added to A ;
    • If the number of solutions in Cnd is less than or equal to N, directly add all solutions in Cnd to A; then return the updated archive A.
$\mathrm{fit}(c_j)$ is the fitness value of solution $c_j$ in the joint population $C$.
The fitness value serves as a standard for evaluating the convergence of the solutions. Subsequently, the non-dominated solution set Cnd is selected from $C$ based on the Pareto dominance criterion.
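The normalization and fitness steps of the archive update might look like the following sketch; the min-max normalization is an assumption consistent with "standardizing the objective values to a range between 0 and 1", and the fitness follows Equation (15).

```python
import numpy as np

def normalize(F):
    """Min-max normalize every objective column of F to [0, 1]."""
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    return (F - fmin) / np.where(fmax > fmin, fmax - fmin, 1.0)

def fitness(F_norm):
    """Convergence fitness of Eq. (15): sum of squared normalized objectives."""
    return (F_norm ** 2).sum(axis=1)

F = np.array([[2.0, 30.0], [4.0, 10.0], [3.0, 20.0]])
print(fitness(normalize(F)))   # smaller values lie closer to the ideal point
```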
Because in MAPSO, most solutions are non-dominated, the minimum angle selection strategy is adopted when the Pareto dominance criterion is no longer adequate. The steps of the minimum angle selection strategy, as detailed in Algorithm 3, can be divided into three main parts:
  • Eliminate dominance-resistant solutions (DRS):
    • First, remove DRS from the non-dominated solutions, Cnd. These solutions are highly inferior in at least one objective and difficult to dominate;
    • Calculate the minimum vector angle $\delta_{ij}$ for each solution in Cnd, representing the smallest angle between that solution and the other solutions;
    • Add m extreme solutions to archive A . Extreme solutions are those in Cnd that have the smallest angle with the reference vectors (1, 0, …, 0), (0, 1, …, 0), …, (0, 0, …, 1).
  • Remove minimum vector angle of solutions:
    • Remove solutions with the smallest vector angles from Cnd until the number of solutions in C n d reaches N-m;
    • During the removal process, select solutions with lower fitness values, meaning those with fitness values lower than other solutions;
    • Removed solutions are added to the removed solution set R .
The angle calculation method directly affects particle distribution and search space exploration. This study uses Equation (16) to calculate the angle between particles:
$\delta_{ij} = \delta(x_i, x_j) = \arcsin\left(\frac{f(x_i) \cdot f(x_j)}{|f(x_i)| \cdot |f(x_j)|}\right)$
instead of Equation (17):
$\delta_{ij} = \delta(x_i, x_j) = \arccos\left(\frac{f(x_i) \cdot f(x_j)}{|f(x_i)| \cdot |f(x_j)|}\right)$
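In code, the two equations differ only in the inverse trigonometric function applied to the normalized inner product of the two objective vectors; the sketch below computes both for comparison (the clipping to [-1, 1] is an added numerical safeguard, not part of the equations).

```python
import numpy as np

def particle_angle(fi, fj, use_arcsin=True):
    """Angle measure between two objective vectors per Eq. (16) / Eq. (17)."""
    fi, fj = np.asarray(fi, float), np.asarray(fj, float)
    c = np.clip(fi @ fj / (np.linalg.norm(fi) * np.linalg.norm(fj)), -1.0, 1.0)
    return np.arcsin(c) if use_arcsin else np.arccos(c)

fi, fj = [1.0, 0.2], [0.3, 1.0]
print(particle_angle(fi, fj, use_arcsin=True))    # Eq. (16)
print(particle_angle(fi, fj, use_arcsin=False))   # Eq. (17)
```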
The benefits of using Equation (16) (based on arcsin) are as follows:
(1)
Intuitive geometric interpretation and visualization: the range of $\arcsin(x)$ is closer to actual movement angles, making it easier to interpret and visualize particle movement paths. Due to its smaller range, $\arcsin(x)$ can better describe angle changes in physical systems, preventing significant angle jumps and facilitating fine-tuning.
Assume a satellite is orbiting along a circular path around the Earth. The angle θ between the satellite and a point on the Earth’s surface can be represented as θ = arcsin(d/h), where d is the distance along the Earth’s surface from the point directly beneath the satellite to the point of interest, and h is the altitude of the satellite.
Algorithm 3 Minimum Angle Selection Process
  • while size(Cnd) > N do
  • [MaxVal, Cnd] ← FindMax(Cnd) // identify the maximum objective value MaxVal
  • C ← Normalization(C)
  • if MaxVal > 1 then
  • Remove(Cnd) // eliminate the dominance-resistant solution from Cnd
  • else
  • break
  • end if
  • end while
  • if size(Cnd) > N then
  • AddExtremeSolutions(A, m), i = 1, 2, …, m // add the m extreme solutions E_i to A
  • while size(Cnd) > N − m do
  • [c_k, c_j] = FindMinAngle(Cnd) // identify the two solutions c_k and c_j with the smallest angle δ_{c_k} in Cnd
  • if Fitness{c_k} < Fitness{c_j} then
  • R ← R ∪ {c_j}
  • Remove{c_j} // remove c_j from Cnd
  • else
  • R ← R ∪ {c_k}
  • Remove{c_k} // remove c_k from Cnd
  • end if
  • end while
  • A ← A ∪ Cnd
  • while size(A) < N do
  • r_k = FindMaxAngle(A, R) // find the solution r_k with maximum δ_{r_k} in R
  • A ← A ∪ {r_k}
  • Remove{r_k} // eliminate r_k from R
  • end while
  • end if
  • Return A
We compare it to a linear approximation of the actual movement angles to show that $\arcsin(x)$ provides a closer approximation. The linear approximation can be written as $\theta \approx d/h$, which is only accurate for very small angles ($d \ll h$).
When we expand $\theta = \arcsin(d/h)$ with the Taylor series, we can use the general form of the Taylor expansion of $\arcsin(x)$ given in Equation (18):
$\arcsin(x) = x + \frac{x^3}{6} + \frac{3x^5}{40} + O(x^7)$
For $\theta = \arcsin(d/h)$, we treat $d/h$ as $x$. The expansion becomes $\theta = \arcsin\!\left(\frac{d}{h}\right) = \frac{d}{h} + \frac{1}{6}\left(\frac{d}{h}\right)^3 + \frac{3}{40}\left(\frac{d}{h}\right)^5 + O\!\left(\left(\frac{d}{h}\right)^7\right)$.
This expansion shows that for small values of $d/h$, the linear approximation $d/h$ is reasonable. However, as $d/h$ increases, the higher-order terms become significant, making the full $\arcsin(d/h)$ expression more accurate than the linear approximation.
Figure 4 illustrates a comparison between the calculated actual movement angles   ( θ ) using the a r c s i n ( d / h ) function and a straightforward linear approximation   ( d / h ) . The plot demonstrates that the arcsin function (blue curve) closely follows the accurate movement angles, especially for larger values of d, illustrating its superior accuracy. In contrast, the linear approximation (red dashed line) deviates more as d increases, highlighting the effectiveness of the arcsin approach in accurately representing movement angles.
(2)
Numerical stability: $\arcsin(x)$ varies more smoothly within its range, and its derivative does not tend to infinity, resulting in more stable numerical computations. In contrast, $\arccos(x)$ has larger derivative changes near boundary values, which can cause numerical instability.
(3)
Fine-grained search: for multi-objective optimization problems in satellite data transmission, optimizing transmission paths and resource allocation requires high precision and fine adjustments. $\arcsin(x)$ offers finer search granularity, ensuring better solutions even under complex constraints. The smooth and gradual changes of $\arcsin(x)$ lead to more uniform particle distribution and stable computations.
In summary, Equation (16), based on $\arcsin(x)$, is more suitable for PSO algorithms dealing with complex multi-objective optimization problems, such as satellite data transmission systems, due to its intuitive geometric interpretation, numerical stability, and fine-grained search capability.
  • Add solutions with maximum vector angles:
The process of the minimum angle selection operation is illustrated in Figure 5. The triangular points (▲) represent solutions in $R$, while the solid points (•) belong to $A$. $R$ is the set of solutions that were removed in step (2). In Figure 5, a well-distributed solution $r_i$ has been removed, indicating that some well-distributed solutions might be eliminated. MAPSO therefore allows solutions removed into $R$ to be re-added to $A$ if the number of solutions in Cnd is less than $N - m$. The solution in $R$ with the maximum vector angle relative to the solutions in $A$ is found using Equation (19):
$\delta_{r_i} = \min_{j=1,\ldots,|A|} \bigl(\mathrm{angle}(r_i, a_j)\bigr), \quad i = 1, 2, \ldots, |R|$
Add this solution to A , and remove it from R . Repeat this process until the number of solutions in A reaches N . Merge C and R , removing solutions already in A to form the new combined population C . Return the updated archive A .
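The re-insertion rule of Equation (19) (put back the removed solution whose smallest angle to the current archive is largest, until the archive again holds N members) can be sketched as follows; the standard arccos-based vector angle is used here for brevity, and the arcsin variant of Equation (16) can be substituted.

```python
import numpy as np

def min_angle_to_archive(r, A):
    """delta_{r_i} of Eq. (19): smallest vector angle between r and any archive member."""
    return min(
        np.arccos(np.clip(np.dot(r, a) / (np.linalg.norm(r) * np.linalg.norm(a)), -1, 1))
        for a in A
    )

def readd_from_removed(A, R, N):
    """Re-add removed solutions with the largest min-angle until the archive holds N members."""
    A, R = list(A), list(R)
    while len(A) < N and R:
        deltas = [min_angle_to_archive(r, A) for r in R]
        A.append(R.pop(int(np.argmax(deltas))))
    return A

A = [np.array([1.0, 0.1]), np.array([0.1, 1.0])]
R = [np.array([0.7, 0.7]), np.array([0.9, 0.2])]
print(len(readd_from_removed(A, R, N=3)))   # 3
```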

3.5. PSO Search

The procedure for PSO is detailed in Algorithm 4. The PSO method is employed to update the population P . Before updating the solutions’ position information, it is crucial to determine pbest and gbest. In multi-objective optimization problems, solutions cannot be directly compared using a single objective value; however, Pareto dominance can be utilized. In contrast, for many-objective optimization problems, most solutions are non-dominated, which complicates the selection of suitable solutions (pbest and gbest) for updating the population. Therefore, pbest and gbest are chosen based on the particle angle size. Similar to conventional PSO, this process includes selecting pbest and gbest and updating the velocity and position (lines 2–3 of Algorithm 4). The specific steps are as follows:
Algorithm 4 PSO Algorithm
Input: Population P, external repository A, population size N
Output: new population P
  • for i = 1 to N do
  • Select pbest_i and gbest_i based on the vector angle in A
  • Update the velocity v and position x
  • end for
  • Return P
Choosing pbest and gbest: A penalty factor θ i balances population diversity and convergence when choosing pbest and gbest. A higher θ i encourages diversity, while a lower θ i favors convergence. The equation for calculating θ i is given in Equation (20):
$\theta_i = k \times \angle(p_i, \lambda_i), \quad i \in \{1, 2, \ldots, N\}$
$k$ represents a scaling parameter, and $\angle(p_i, \lambda_i)$ denotes the vector angle between $p_i$ and $\lambda_i$, with $N$ as the population size. This vector angle is normalized to range between 0° and 90°.
Figure 6 illustrates the selection of pbest. The red lines represent the reference vectors $\lambda_1$ and $\lambda_2$. A smaller vector angle between $p_1$ and $\lambda_1$ indicates that $p_1$ has good diversity. By using the equation above to calculate $\theta_1$, a particle with a smaller angle can be chosen as $pbest_1$ to guide $p_1$, rather than another particle with better diversity but worse convergence. Similarly, $p_2$ should be selected as $gbest_2$ for its diversity rather than $p_1$, which might have better convergence. For a many-objective optimization problem (MaOP), there is no single optimal solution, so a solution close to $\lambda_i$ from $A$ is selected as $gbest_i$ to improve diversity. In Figure 6, $p_1$ and $p_2$ will be chosen as $gbest_1$ and $gbest_2$, respectively. Sometimes, $pbest_i$ and $gbest_i$ are equal, indicating that the solution can provide both diversity and convergence directions.
Updating particle position and velocity: Once pbest and gbest have been identified, the particle’s velocity v i is determined using the following Equation (21):
$v_i(t+1) = \omega v_i(t) + c\,r_1\bigl(x_{pbest_i}(t) - x_i(t)\bigr) + \mathrm{sgn}\bigl(\mathrm{fit}(gbest_i) - \mathrm{fit}(p_i)\bigr)\,F_i\,r_2\bigl(x_{gbest_i}(t) - x_i(t)\bigr)$
where $x_i$ represents the position of the $i$-th solution in the population, $r_1$ and $r_2$ are uniformly distributed in the range [0, 1], $\omega$ denotes the inertia weight, and $c$ represents the learning factor. $F_i$ balances the search direction between $pbest_i$ and $gbest_i$, calculated as in Equation (22):
$F_i = \frac{\frac{\pi}{2} - \angle(gbest_i, \lambda_i)}{\frac{\pi}{2} + \mathrm{fit}(pbest_i) - \mathrm{fit}(gbest_i)}$
Based on Equation (22), the overall search direction is primarily influenced by $pbest_i$ and $gbest_i$. The branch search direction toward $gbest_i$ is governed by $\mathrm{fit}(gbest_i) - \mathrm{fit}(p_i)$, which emphasizes diversity rather than convergence. If the fitness of $gbest_i$ is higher than that of $p_i$, it indicates that $p_i$ is closer to the Pareto front (PF) than $gbest_i$, and the search direction is from $gbest_i$ to $p_i$; otherwise, it is the opposite. Finally, the position $x_i$ is updated as in Equation (23):
$x_i(t+1) = x_i(t) + v_i(t+1)$
If the position exceeds the boundaries in some dimensions, it is set to the boundary value, and the corresponding velocity is reset to zero.
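A compact sketch of one particle update following Equations (21) and (23), including the boundary handling just described; the parameter values and fitness inputs are illustrative assumptions.

```python
import numpy as np

def pso_update(x, v, x_pbest, x_gbest, fit_gbest, fit_p, F_i,
               w=0.7, c=1.5, lb=0.0, ub=1.0, rng=None):
    """One velocity/position update per Eqs. (21) and (23) with boundary handling."""
    if rng is None:
        rng = np.random.default_rng(0)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    sgn = np.sign(fit_gbest - fit_p)                   # direction term of Eq. (21)
    v_new = w * v + c * r1 * (x_pbest - x) + sgn * F_i * r2 * (x_gbest - x)
    x_new = x + v_new                                  # Eq. (23)
    out = (x_new < lb) | (x_new > ub)
    x_new = np.clip(x_new, lb, ub)                     # clamp to the boundary
    v_new[out] = 0.0                                   # reset velocity on clamped dimensions
    return x_new, v_new

x, v = np.array([0.4, 0.6]), np.zeros(2)
print(pso_update(x, v, x_pbest=np.array([0.5, 0.7]), x_gbest=np.array([0.9, 0.2]),
                 fit_gbest=1.2, fit_p=0.8, F_i=0.6))
```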

4. Experimental Design and Results Analysis

To validate the effectiveness of the MAPSO algorithm, we created a simulation scenario in Systems Tool Kits. We generated test instances based on data transmission flows from Systems Tool Kits.

4.1. Scenario Design

In the Systems Tool Kits-designed simulation scenario, we selected four ground stations (Hainan, Kashgar, Yunnan, and Xi’an) and various numbers of satellites for data transmission experiments. We designed ten sun-synchronous orbits, each with nine satellites evenly distributed. This section shows 24 scenarios. Each scenario varies in the number of ground stations (one to four) and satellites (10 to 90), with a planning period of 24 h, from 1 March 2024 00:00:00 to 2 March 2024 00:00:00. The parameters are detailed in Table 3.
The essential attributes for the transmission tasks are as follows:
  • Each satellite can only transmit data to one ground station;
  • The transmission priority of each satellite is randomly assigned using a 1–10 scale, where 10 denotes the highest priority;
  • The priorities for the ground stations are Hainan (seven), Kashgar (five), Xi’an (ten), and Yunnan (three);
  • The transmission time window must fall within the planning period, and if the window is shorter than 7 min, it must equal the task’s minimum required transmission time;
  • All tasks use a standard communication protocol between satellites and ground stations;
  • The transmission frequency band of all tasks is set to be S-band.
This section evaluates MAPSO by comparing it with NSGA-II [24], MOEA/D [25], MOGA [26], DRSC-MOAGDE [27], DSC-MOPSO [28], and MO-Ring-PSO-SCD [29]. NSGA-II is Pareto-based, MOGA is indicator-based, and MOEA/D is decomposition-based. DRSC-MOAGDE shows competitive performance on practical MOPs; DSC-MOPSO is a multi-objective PSO algorithm with dynamic switched crowding; and MO-Ring-PSO-SCD is a powerful PSO algorithm for solving multimodal multi-objective problems. We generated 24 test instances, as described in Section 4.1. Each test instance was run 30 times, and the average value was taken as the final result. Our algorithm was implemented using MATLAB R2022b and tested on a personal computer equipped with an AMD Ryzen 7 4800H CPU with Radeon Graphics (2.90 GHz).

4.2. Experimental Results and Performance Analysis

We tested 24 experimental scenarios recorded in Table 3 using MAPSO, NSGA-II, MOEA/D, and MOGA algorithms. The NSGA-II parameters were set as population size 100, maximum iterations 100, crossover probability 0.9, mutation probability 0.1, and roulette wheel selection method. The optimization parameters of MAPSO were population size 100, inertia weight 0.7, and learning factors c1 = c2 = 1.5. MOEA/D parameters were population size 100, neighborhood size 20, mutation probability 0.02, crossover probability 0.9, and maximum iterations 100. MOGA parameters were population size 100, crossover probability 0.9, mutation probability 0.02, maximum iterations 100, and roulette wheel selection.
Table 4 records the time consumed by each algorithm under different experimental scenarios, and Table 5 records the time each algorithm takes to schedule data transmission tasks in different scenarios. The MAPSO algorithm improved data transmission efficiency using the minimum angle method to select particles with the best fitness for gbest and pbest. The archive update operation also updated the archive set, making MAPSO the fastest in solving satellite data transmission problems across all scenarios.
We selected the most complex scenarios from Table 3, namely C4 (one station, 40 satellites), C9 (two stations, 50 satellites), C15 (three stations, 60 satellites), and C24 (four stations, 90 satellites), for detailed analysis. Figure 7 displays the Pareto fronts obtained by the four optimization algorithms under different scenarios. Single-station scenarios (C1 to C4) are not included in the Pareto front comparison because, when there is only one ground station, satellites transmit data sequentially within the time window, resulting in fixed values for the shortest transmission time and maximum priority between satellites and ground stations, making it impossible to plot corresponding Pareto fronts. Compared to NSGA-II, MOEA/D, and MOGA, the MAPSO algorithm shows significantly better performance in terms of Pareto front diversity and uniformity.
Figure 8 illustrates the final scheduling results for satellite data transmission tasks under different scenarios. The x-axis represents the scheduling time window, and the y-axis represents the ground stations. Satellites are recorded in the figure by sequence number. If a satellite is within the range of 1 on the y-axis, it indicates that the satellite is transmitting data to the first ground station, and so on. In Figure 8, we present a selected Pareto optimal scheduling result for different configurations of ground stations and satellites. Each subplot illustrates a scheduling outcome under varying conditions. The displayed scheduling result was chosen because it effectively demonstrates the balance between minimizing transmission time and maximizing resource utilization. While multiple Pareto optimal solutions exist, this result provides a representative and clear visualization of our algorithm’s performance. It highlights how our approach handles the complexities of large-scale multi-objective optimization in satellite data transmission scheduling. The results in Figure 8 show that the MAPSO algorithm provides a better scheduling plan for satellite data transmission tasks.
The MAPSO algorithm enhances the efficiency of solving multi-objective SDTSPs, making it more suitable for handling satellite data transmission tasks across various scenarios. MAPSO enhances the system’s service capability, meeting diverse user needs. The scheduling results indicate an improved Pareto front, increasing the efficiency and accuracy of satellite data transmission tasks. The MAPSO algorithm employs a minimum angle particle selection strategy in dynamic scheduling, effectively selecting gbest and pbest particles. Combined with objective functions tailored to satellite data transmission characteristics, this increases the probability of generating better offspring during iterations, enhancing optimization performance.

4.3. Stability Analysis

BIG-O notation, typically employed to characterize the time complexity of algorithms, is not well suited for quantifying the computational complexity of evolutionary algorithms (EAs). This inadequacy stems from EAs not navigating within differentiable spaces but instead seeking global optima in complex, multimodal problem landscapes. Such landscapes often feature many local optima, which can entrap the algorithm and result in early convergence. In these contexts, the algorithm’s internal complexity is of lesser importance. As the search process defies straightforward optimization, the computational complexity, or search time, is potentially infinite. Consequently, assessing the computational complexity of the MAPSO algorithm necessitates stability analysis.
The stability analysis method quantifies evolutionary algorithms’ time taken and success rate in discovering feasible solutions within a controlled simulation environment [30]. For its innovative approach and effectiveness over other metrics, this research utilizes success rate (SR), mean fitness evaluations (MFE), and mean search time (MST) as the criteria for stability analysis. The collective performance of seven algorithms is evaluated by determining the values of SR, MFE, and MST.
The fundamental goal of the MAPSO algorithm is to identify feasible solutions for the SDTSP. Its success rate (SR) is computed using Equation (24), where $aa$ signifies the count of independent trials that successfully yield a feasible solution, and $tt$ denotes the overall number of trials conducted:
$SR = \frac{aa}{tt} \times 100$
A further aspect of stability assessment involves monitoring the frequency of fitness evaluations executed by the algorithm. In detail, the fitness evaluation counter (FE) is incremented with every invocation of the objective function by the algorithm. The search concludes once the algorithm effectively locates a viable solution, at which point the FE counter’s tally is noted. Following this methodology, the mean number of fitness evaluations (MFE) across successful trials is computed using Equation (25):
$MFE = \frac{1}{aa} \sum_{i=1}^{aa} fes[i]$
Another critical aspect of stability analysis is examining the time required by the algorithm to find a feasible solution. Once a feasible solution is located, the search halts, and the search time (ST) is recorded. The average search time (MST) for successful runs is calculated using Equation (26):
$MST = \frac{1}{aa} \sum_{i=1}^{aa} st[i]$
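Given per-run records of whether a feasible solution was found, how many fitness evaluations were used, and how long the search took, Equations (24)-(26) reduce to the short sketch below (the record format is an assumption).

```python
def stability_metrics(runs):
    """Compute SR (%), MFE, and MST from a list of independent run records.

    runs: list of dicts with keys 'success' (bool), 'fes' (int), 'st' (seconds).
    """
    tt = len(runs)
    ok = [r for r in runs if r["success"]]
    aa = len(ok)
    sr = 100.0 * aa / tt                                            # Eq. (24)
    mfe = sum(r["fes"] for r in ok) / aa if aa else float("nan")    # Eq. (25)
    mst = sum(r["st"] for r in ok) / aa if aa else float("nan")     # Eq. (26)
    return sr, mfe, mst

runs = [{"success": True, "fes": 3200, "st": 41.0},
        {"success": True, "fes": 4100, "st": 55.5},
        {"success": False, "fes": 10000, "st": 120.0}]
print(stability_metrics(runs))   # (66.7, 3650.0, 48.25)
```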
The methods for calculating SR, MFE, and MST are based on the stability analysis described in [27], with detailed steps outlined in Algorithm 5. The statistical results for the SR, MFE, and MST of seven algorithms across 24 scenarios are presented in Table 6, Table 7 and Table 8.
Algorithm 5 Stability Analysis
Input: feasible solution set FS defined for the problem
Output: SR, MFE, MST
  • for i = 1 to run do // total count of independent runs
  • while (fes < max FEs) do
  • Start the timer, then initiate the evolutionary search steps
  • if (fes > 0.3 * max FEs and best_solution ∈ FS) then
  • Stop the timer and record the values of ST[i] and FE[i]
  • Increment the variable aa by 1
  • Exit the while loop
  • else
  • Increment the fes count in FE[i]
  • end if
  • end while
  • end for
  • Compute SR, MFE, and MST using Formulas (24)–(26)
Table 6 shows that the MAPSO algorithm achieves a high success rate across all scenarios, indicating its ability to find feasible solutions in every case. Table 6 demonstrates that MAPSO possesses strong robustness and reliability, effectively addressing satellite data transmission problems of varying scales and complexities. The DRSC-MOAGDE and DSC-MOPSO algorithms also achieve high success rates in smaller-scale scenarios, surpassing MAPSO. However, their success rates decline as the number of satellites and ground stations increases. Table 6 suggests these algorithms may struggle with more complex scenarios, potentially falling into local optima or exhibiting slower convergence. In contrast, the success rates of MO-Ring-PSO-SCD, NSGA-II, MOEA/D, and MOGA are generally low, particularly in large-scale scenarios. Table 6 implies that these algorithms are more prone to premature convergence or being trapped in local optima when handling complex problems.
As shown in Table 7, the MAPSO algorithm achieves relatively short average search times across all scenarios, indicating its ability to identify feasible solutions quickly. This efficiency is primarily attributed to the minimum angle particle selection strategy employed in MAPSO, which significantly enhances search performance. The DRSC-MOAGDE and DSC-MOPSO algorithms also exhibit relatively short average search times in smaller-scale scenarios. However, as the number of satellites and ground stations increases, their average search times slightly exceed those of MAPSO. Table 7 indicates that while these algorithms maintain relatively high search efficiency, they are still less effective than MAPSO.
In contrast, the average search times of MO-Ring-PSO-SCD, NSGA-II, MOEA/D, and MOGA are generally longer, particularly in large-scale scenarios. Table 7 suggests that these algorithms have lower search efficiency and require more time to find feasible solutions. As shown in Table 8, the MAPSO algorithm demonstrates relatively low average fitness evaluation counts across all scenarios, indicating its ability to find feasible solutions with fewer iterations. This efficiency is primarily attributed to the minimum angle particle selection strategy and the fitness function employed by MAPSO, which effectively enhance search performance and optimization results. Similarly, the DRSC-MOAGDE and DSC-MOPSO algorithms also exhibit low average fitness evaluation counts, though slightly higher than MAPSO in large-scale scenarios. This suggests that while these algorithms maintain relatively high optimization efficiency, they still fall short compared to MAPSO. In contrast, the MO-Ring-PSO-SCD, NSGA-II, MOEA/D, and MOGA algorithms have significantly higher average fitness evaluation counts, particularly in large-scale scenarios, indicating lower optimization efficiency and requiring more iterations to locate feasible solutions.
Based on the analysis of these three indicators, the following conclusions can be drawn: The MAPSO algorithm demonstrates superior stability in solving satellite data transmission problems. It consistently identifies feasible solutions, achieves shorter search times, and requires fewer fitness evaluations, highlighting its ability to address complex problems effectively and efficiently. While the DRSC-MOAGDE and DSC-MOPSO algorithms also exhibit high stability, they are slightly less effective than MAPSO. On the other hand, the MO-Ring-PSO-SCD, NSGA-II, MOEA/D, and MOGA algorithms show poor stability, often becoming trapped in local optima or suffering from premature convergence, making them unsuitable for solving large-scale or complex satellite data transmission problems.

4.4. Hypervolume and Friedman Evaluation Index

This study utilized the hypervolume (HV) and Friedman fraction metrics to evaluate the algorithm’s performance [31] and comprehensively analyze the diversity and convergence of the Pareto front generated by the algorithm. Each solution within the Pareto front corresponds to a scheduling plan that satisfies the dynamic scheduling demands of relay satellites under different preference scenarios. The HV metric is computed by summing the areas of the rectangles formed between each point in the non-dominated solution set and a reference point, typically the maximum individual in the objective space. A higher HV value signifies superior overall algorithm performance. The HV calculation is expressed by the following Formula (27):
$HV = \sum_{i=1}^{n} \frac{r_1 - f_1(x_i)}{f_1(x_i)^{max}} \times \frac{f_2(x_i) - r_2}{f_2(x_i)^{max}}$
In this context, $f_1(x_i)$ and $f_2(x_i)$ denote the first and second objective values of solution $x_i$, respectively, and $r_1$ and $r_2$ are the corresponding coordinates of the reference point. $f_1(x_i)^{max}$ represents the maximum value of the first objective function $f_1$ over all solutions $x_i$, and $f_2(x_i)^{max}$ represents the maximum value of the second objective function $f_2$ over all solutions $x_i$. Cases $c_1$ through $c_4$ lack a Pareto front and, as a result, are not considered in the hypervolume (HV) assessment. Table 9 presents the HV values and the reference point coordinates for the 20 experimental scenarios. A normalization process has been implemented to avoid excessively high HV values; each objective value is divided by its maximum, which maintains uniform scaling among the objectives.
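For the bi-objective case used here, a standard sweep over the normalized non-dominated front gives the hypervolume; the sketch below assumes both objectives have been converted to minimization and normalized to [0, 1], so the reference point and front values are illustrative rather than the scenario-specific constants of Equation (27).

```python
import numpy as np

def hypervolume_2d(F, ref):
    """Hypervolume of a two-objective minimization front F w.r.t. reference point ref."""
    F = np.asarray(F, float)
    F = F[(F < ref).all(axis=1)]            # keep points that dominate the reference point
    if len(F) == 0:
        return 0.0
    F = F[np.argsort(F[:, 0])]              # sweep in increasing order of the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in F:
        if f2 < prev_f2:                    # add the horizontal strip this point contributes
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Assumed normalized front (gain objective converted to minimization), reference point (1, 1).
front = np.array([[0.2, 0.8], [0.4, 0.5], [0.7, 0.2]])
print(hypervolume_2d(front, ref=(1.0, 1.0)))   # 0.43
```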
Table 9 presents the average HV values obtained by seven algorithms over 30 runs across all experimental scenarios. In solving satellite data transmission problems, the MAPSO algorithm demonstrates a significant advantage, particularly in scenarios 15 to 24, where its HV values are consistently higher than those of other algorithms. Table 9 indicates that MAPSO effectively balances solution diversity and convergence, achieving superior solution sets. In contrast, the HV values of the MOGA algorithm are noticeably lower across all scenarios, reflecting poor solution diversity and limited ability to explore the solution space effectively. The HV values of the DRSC-MOAGDE, DSC-MOPSO, MO-Ring-PSO-SCD, NSGA-II, and MOEA/D algorithms fall between those of MAPSO and MOGA, exhibiting varying performance levels.

5. Conclusions

We have proposed a minimum angle particle swarm optimization (MAPSO) algorithm to solve multi-objective satellite data transmission scheduling problems. The proposed algorithm was encoded as a discrete optimizer, and the calculation equation of the sine function was modified to enhance the PSO algorithm’s ability to deal with complex multi-objective problems. The MAPSO method introduces this new minimum angle selection strategy into the design and effectively selects the global optimal solution by calculating the minimum angle between particles. MAPSO balances convergence and diversity by avoiding local extrema in the global search process.
In the experimental validation section, we tested the MAPSO method on 24-scale task instances, demonstrating its ability to solve multi-objective data transmission scheduling problems and improve data transmission efficiency and accuracy. The experimental results show that the MAPSO method has an advantage in solving data transmission problems and maintains high solution quality across different task scales, indicating its generalization capability and adaptability in practical applications. Future research can further optimize this method, combining it with other intelligent optimization techniques to enhance its application potential in more extensive and complex task environments, promoting the development of satellite communication technology.

Author Contributions

Data curation, Y.S., Z.W. and H.R.; writing—original draft, Z.Z.; writing—review and editing, S.C. and L.X.; supervision, L.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the National Natural Science Foundation of China (Grant No. 61806119), the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2024JC-YBMS-516), and Fundamental Research Funds for the Central Universities (No. GK202201014).

Data Availability Statement

The data supporting the reported results can be found in the manuscript. Due to the nature of the research, no new datasets were created.

Acknowledgments

We would like to acknowledge the technical support provided by the School of Computer Science, Shaanxi Normal University and the School of Electronic Engineering, Xidian University. Thanks to the anonymous reviewers and editor from the journal.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MAPSO   minimum angle particle swarm optimization
AFSCN   Air Force Satellite Control Network
SDTT    satellite data transmission task
PSO     particle swarm optimization
MOEA    multi-objective evolutionary algorithm

References

  1. Zhang, J.; Xing, L.; Peng, G.; Yao, F.; Chen, C. A large-scale multi-objective satellite data transmission scheduling algorithm based on SVM+ NSGA-II. Swarm Evol. Comput. 2019, 50, 100560. [Google Scholar] [CrossRef]
  2. Vazquez, A.J.; Erwin, R.S. On the tractability of satellite range scheduling. Optim. Lett. 2015, 9, 311–327. [Google Scholar] [CrossRef]
  3. Gooley, T.; Borsi, J.; Moore, J. Automating Air Force Satellite Control Network (AFSCN) scheduling. Math. Comput. Model. 1996, 24, 91–101. [Google Scholar] [CrossRef]
  4. Barbulescu, L.; Howe, A.E.; Watson, J.P.; Whitley, L.D. Satellite range scheduling: A comparison of genetic, heuristic and local search. In Proceedings of the Parallel Problem Solving from Nature—PPSN VII: 7th International Conference, Granada, Spain, 7–11 September 2002; Proceedings 7. Springer: Berlin/Heidelberg, Germany, 2002; pp. 611–620. [Google Scholar]
  5. Xia, L.; Yu, N. Redundant and Relay Assistant Scheduling of Small Satellites. In Proceedings of the 2014 IEEE 28th International Conference on Advanced Information Networking and Applications, Victoria, BC, Canada, 13–16 May 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1017–1024. [Google Scholar]
  6. Liang, Q.; Fan, Y.; Yan, X. An algorithm based on differential evolution for satellite data transmission scheduling. Int. J. Comput. Sci. Eng. 2019, 18, 279–285. [Google Scholar] [CrossRef]
  7. Yong, D.; Feng, Y.; Xing, L.; He, L. The inter-satellite data transmission method in satellite networks is based on a hybrid evolutionary algorithm. Syst. Eng. Electron. 2023, 45, 2931–2940. [Google Scholar]
  8. Chen, X.; Gu, W.; Dai, G.; Xing, L.; Tian, T.; Luo, W.; Cheng, S.; Zhou, M. Data-Driven Collaborative Scheduling Method for Multi-Satellite Data-Transmission. Tsinghua Sci. Technol. 2024, 29, 1463–1480. [Google Scholar] [CrossRef]
  9. Zhang, J.; Xing, L. An improved genetic algorithm for the integrated satellite imaging and data transmission scheduling problem. Comput. Oper. Res. 2022, 139, 105626. [Google Scholar] [CrossRef]
  10. Chen, H.; Sun, G.; Peng, S.; Wu, J. Dynamic rescheduling method of measurement and control of data transmission resources based on multi-objective optimization. Syst. Eng. Electron. 2024, 46, 3744–3753. [Google Scholar]
  11. Maher, M.L. A model of co-evolutionary design. Eng. Comput. 2000, 16, 195–208. [Google Scholar] [CrossRef]
  12. Chauhan, S.; Vashishtha, G.; Zimroz, R.; Kumar, R. A crayfish optimized wavelet filter and its application to fault diagnosis of machine components. Int. J. Adv. Manuf. Technol. 2024, 135, 1825–1837. [Google Scholar] [CrossRef]
  13. Chauhan, S.; Vashishtha, G.; Zimroz, R. Analysing Recent Breakthroughs in Fault Diagnosis through Sensor: A Comprehensive Overview. Comput. Model. Eng. Sci. 2024, 141, 1983–2020. [Google Scholar] [CrossRef]
14. Wang, T.; Luo, Q.; Zhou, L.; Wu, G. Space division and adaptive selection strategy based differential evolution algorithm for multi-objective satellite range scheduling problem. Swarm Evol. Comput. 2023, 83, 101396.
15. Song, Y.; Ou, J.; Pedrycz, W.; Suganthan, P.N.; Wang, X.; Xing, L. Generalized Model and Deep Reinforcement Learning-Based Evolutionary Method for Multitype Satellite Observation Scheduling. IEEE Trans. Syst. Man Cybern. Syst. 2024, 15, 271–276.
16. Mai, Y.; Shi, H.; Liao, Q.; Sheng, Z.; Zhao, S.; Ni, Q.; Zhang, W. Using the decomposition-based multi-objective evolutionary algorithm with adaptive neighborhood sizes and dynamic constraint strategies to retrieve atmospheric ducts. Sensors 2020, 20, 2230.
17. Gong, D.W.; Zhang, Y.; Zhang, J.H. Multi-objective particle swarm optimization based on minimal particle angle. In Proceedings of the International Conference on Intelligent Computing, Hefei, China, 23–26 August 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 571–580.
18. Yang, L.; Hu, X.; Li, K. A vector angles-based many-objective particle swarm optimization algorithm using archive. Appl. Soft Comput. 2021, 106, 107299.
19. Cui, Y.; Meng, X.; Qiao, J. A multi-objective particle swarm optimization algorithm based on a two-archive mechanism. Appl. Soft Comput. 2022, 119, 108532.
20. Kang, L.; Liu, N.; Cao, W.; Chen, Y. Many-objective particle swarm optimization algorithm based on multi-elite opposition mutation mechanism in the Internet of things environment. Int. J. Grid Util. Comput. 2023, 14, 107–121.
21. Das, I.; Dennis, J.E. Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM J. Optim. 1998, 8, 631–657.
22. Li, M.; Yang, S.; Liu, X. Shift-based density estimation for Pareto-based algorithms in many-objective optimization. IEEE Trans. Evol. Comput. 2013, 18, 348–365.
23. Zhu, Q.; Lin, Q.; Chen, W.; Wong, K.-C.; Coello, C.A.C.; Li, J. An external archive-guided multi-objective particle swarm optimization algorithm. IEEE Trans. Cybern. 2017, 47, 2794–2808.
24. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
25. Wang, Q.; Gu, Q.; Chen, L.; Meyarivan, T. A MOEA/D with global and local cooperative optimization for complicated bi-objective optimization problems. Appl. Soft Comput. 2023, 137, 110162.
26. Saha, S.; Zaman, P.B.; Tusar, I.H.; Dhar, N.R. Multi-objective genetic algorithm (MOGA) based optimization of high-pressure coolant assisted hard turning of 42CrMo4 steel. Int. J. Interact. Des. Manuf. (IJIDeM) 2022, 16, 1253–1272.
27. Akbel, M.; Kahraman, H.T.; Duman, S.; Temel, S. A clustering-based archive handling method and multi-objective optimization of the optimal power flow problem. Appl. Intell. 2024, 54, 11603–11648.
28. Bakır, H.; Kahraman, H.T.; Yılmaz, S.; Duman, S.; Guvenc, U. Dynamic switched crowding-based multi-objective particle swarm optimization algorithm for solving multi-objective AC-DC optimal power flow problem. Appl. Soft Comput. 2024, 166, 112155.
29. Yue, C.; Qu, B.; Liang, J. A multi-objective particle swarm optimizer using ring topology for solving multimodal multi-objective problems. IEEE Trans. Evol. Comput. 2017, 22, 805–817.
30. Kahraman, H.T.; Akbel, M.; Duman, S.; Kati, M.; Sayan, H.H. Unified space approach-based Dynamic Switched Crowding (DSC): A new method for designing Pareto-based multi/many-objective algorithms. Swarm Evol. Comput. 2022, 75, 101196.
31. Shang, K.; Ishibuchi, H. A new hypervolume-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2020, 24, 839–852.
Figure 1. Illustration of the MAPSO process for solving data transmission tasks.
Figure 2. Illustration of a clone.
Figure 3. Illustration of archive update.
Figure 4. Actual and approximate angles of movement in a simple linear approximation.
Figure 5. The minimum angle selection diagram.
Figure 6. The PSO search illustration.
Figure 7. Pareto fronts of three scenarios: (a) Two stations, 50 satellites; (b) Three stations, 60 satellites; (c) Four stations, 90 satellites.
Figure 8. (a) One station, 40 satellites; (b) Two stations, 50 satellites; (c) Three stations, 60 satellites; (d) Four stations, 90 satellites.
Table 1. Symbols of tasks.
Symbol | Explanation
p_{t_i} | The priority of satellite task t_i
g_{t_i} | The priority of ground station task t_i
s | Satellite
g | Ground station
t_i | The number of tasks
a_{s_j m} | The number of antennas
af_{g_k d} | The ground station antennas
af_{s_j m} | The satellite antennas
Table 2. Symbols of visibility time windows.
Symbol | Explanation
w | The visibility time windows
sw_{jkz} | The visibility start-time windows
ew_{jkz} | The visibility end-time windows
ST | The scheduled start time
ET | The scheduled end time
d_{t_i} | The minimal transmission duration
s_{t_i} | The task start-time window
e_{t_i} | The task end-time window
stw_i | The transmit start-time window
etw_i | The transmit end-time window
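To make the notation of Tables 1 and 2 concrete, the minimal sketch below models a transmission task and a visibility window as plain records. The field names mirror the symbols above, but the class names, the helper fits, and the overlap-based feasibility test are illustrative assumptions of ours, not the paper's exact formulation.

```python
from dataclasses import dataclass


@dataclass
class VisibilityWindow:
    """Visibility window z between satellite s_j and ground station g_k."""
    satellite: int   # index j of the satellite
    station: int     # index k of the ground station
    start: float     # sw_{jkz}, visibility start time (s)
    end: float       # ew_{jkz}, visibility end time (s)


@dataclass
class TransmissionTask:
    """Data transmission task t_i with its priorities and timing bounds."""
    task_id: int
    sat_priority: float   # p_{t_i}, satellite-side priority
    gs_priority: float    # g_{t_i}, ground-station-side priority
    duration: float       # d_{t_i}, minimal transmission duration (s)
    start_window: float   # s_{t_i}, earliest allowed start
    end_window: float     # e_{t_i}, latest allowed end


def fits(task: TransmissionTask, window: VisibilityWindow) -> bool:
    """A task can only be placed in a window if the overlap between the task's
    time window and the visibility window covers the minimal duration."""
    start = max(task.start_window, window.start)
    end = min(task.end_window, window.end)
    return end - start >= task.duration


if __name__ == "__main__":
    w = VisibilityWindow(satellite=0, station=0, start=100.0, end=700.0)
    t = TransmissionTask(task_id=1, sat_priority=0.8, gs_priority=0.6,
                         duration=300.0, start_window=150.0, end_window=650.0)
    print(fits(t, w))  # True: the overlap [150, 650] lasts 500 s >= 300 s
```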
Table 3. Experimental scenarios and parameters.
Experiment ID | Number of Satellites | Number of Ground Stations | Experiment ID | Number of Satellites | Number of Ground Stations
c_1 | 10 | 1 | c_13 | 40 | 3
c_2 | 20 | 1 | c_14 | 50 | 3
c_3 | 30 | 1 | c_15 | 60 | 3
c_4 | 40 | 1 | c_16 | 10 | 4
c_5 | 10 | 2 | c_17 | 20 | 4
c_6 | 20 | 2 | c_18 | 30 | 4
c_7 | 30 | 2 | c_19 | 40 | 4
c_8 | 40 | 2 | c_20 | 50 | 4
c_9 | 50 | 2 | c_21 | 60 | 4
c_10 | 10 | 3 | c_22 | 70 | 4
c_11 | 20 | 3 | c_23 | 80 | 4
c_12 | 30 | 3 | c_24 | 90 | 4
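The scenario grid of Table 3 can be regenerated programmatically; the snippet below is a small helper of ours (names are illustrative, not from the paper) that enumerates the 24 combinations of constellation size and ground-station count listed above.

```python
# Reproduce the 24 scenarios of Table 3: for each ground-station count the
# satellite count grows in steps of 10 up to a scenario-specific maximum.
MAX_SATELLITES = {1: 40, 2: 50, 3: 60, 4: 90}  # stations -> largest constellation

scenarios = []
cid = 1
for stations in sorted(MAX_SATELLITES):
    for satellites in range(10, MAX_SATELLITES[stations] + 10, 10):
        scenarios.append((f"c_{cid}", satellites, stations))
        cid += 1

assert len(scenarios) == 24
print(scenarios[0], scenarios[-1])  # ('c_1', 10, 1) ('c_24', 90, 4)
```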
Table 4. Algorithm time consumption in different scenarios (s).
Experiment ID | MAPSO | DRSC-MOAGDE | DSC-MOPSO | MO-Ring-PSO-SCD | NSGA-II | MOEA/D | MOGA
c_1 | 68.721 | 60.018 | 63.879 | 101.592 | 166.0427 | 207.0557 | 120.4097
c_2 | 104.373 | 95.499 | 99.312 | 142.554 | 201.463 | 243.6916 | 163.3364
c_3 | 140.589 | 133.62 | 132.381 | 179.064 | 241.8197 | 276.7497 | 196.0603
c_4 | 176.731 | 171.462 | 168.063 | 217.887 | 275.8042 | 309.9191 | 231.333
c_5 | 75.306 | 71.46 | 69.561 | 116.76 | 168.3898 | 206.9387 | 134.2394
c_6 | 101.019 | 97.326 | 93.876 | 144.621 | 196.8151 | 238.5583 | 160.6354
c_7 | 132.663 | 125.976 | 128.64 | 176.328 | 225.8332 | 268.8849 | 186.2558
c_8 | 168.387 | 159.555 | 160.029 | 203.223 | 266.7544 | 303.7772 | 225.4224
c_9 | 201.054 | 193.992 | 198.036 | 236.07 | 297.1121 | 339.3375 | 257.7917
c_10 | 75.714 | 70.197 | 68.253 | 117.27 | 176.9157 | 217.1687 | 135.2166
c_11 | 97.029 | 89.871 | 93.825 | 141.975 | 194.9647 | 237.4704 | 152.9181
c_12 | 120.222 | 114.033 | 116.661 | 172.062 | 219.9124 | 258.5598 | 179.7287
c_13 | 159.045 | 152.28 | 153.903 | 209.04 | 252.822 | 295.9644 | 217.6764
c_14 | 190.179 | 184.935 | 181.431 | 235.839 | 288.494 | 326.0212 | 246.127
c_15 | 232.329 | 223.425 | 224.913 | 282.906 | 333.6264 | 369.0082 | 284.6459
c_16 | 67.447 | 72.582 | 74.13 | 111.975 | 167.7088 | 200.2816 | 124.347
c_17 | 80.087 | 85.95 | 87.579 | 128.79 | 179.2077 | 218.7366 | 139.6489
c_18 | 96.206 | 101.061 | 102.966 | 140.469 | 192.9514 | 234.0556 | 148.4967
c_19 | 119.507 | 126.933 | 127.977 | 167.544 | 219.457 | 256.1416 | 171.2443
c_20 | 153.452 | 158.637 | 159.156 | 211.776 | 248.7481 | 286.194 | 204.6377
c_21 | 186.713 | 195.546 | 194.073 | 238.659 | 284.4103 | 324.8533 | 241.7649
c_22 | 210.807 | 216.108 | 218.565 | 251.862 | 305.4156 | 349.3471 | 263.0285
c_23 | 247.022 | 253.851 | 255.438 | 291.633 | 343.9177 | 385.8968 | 300.5855
c_24 | 279.164 | 285.867 | 287.88 | 324.519 | 374.9307 | 419.8057 | 334.68
Table 5. Algorithm planning time for data transmission tasks in different scenarios (s).
Experiment ID | MAPSO | DRSC-MOAGDE | DSC-MOPSO | MO-Ring-PSO-SCD | NSGA-II | MOEA/D | MOGA
c_1 | 33,266 | 33,266 | 33,266 | 33,266 | 33,266 | 33,266 | 33,266
c_2 | 55,126 | 55,126 | 55,126 | 55,126 | 55,126 | 55,126 | 55,126
c_3 | 60,425 | 60,425 | 60,425 | 60,425 | 60,425 | 60,425 | 60,425
c_4 | 65,052 | 65,052 | 65,052 | 65,052 | 65,052 | 65,052 | 65,052
c_5 | 19,420 | 19,385 | 19,433 | 19,473 | 19,468 | 19,344 | 19,268
c_6 | 29,722 | 29,706 | 29,750 | 29,757 | 29,795 | 34,810 | 35,090
c_7 | 44,272 | 44,267 | 44,306 | 44,349 | 44,402 | 46,476 | 52,374
c_8 | 55,345 | 55,354 | 55,393 | 55,482 | 55,619 | 57,923 | 63,666
c_9 | 61,746 | 61,763 | 61,808 | 61,995 | 62,671 | 62,897 | 62,647
c_10 | 17,299 | 17,287 | 17,325 | 17,372 | 17,457 | 17,532 | 17,740
c_11 | 21,268 | 21,264 | 21,297 | 21,404 | 21,422 | 26,604 | 26,763
c_12 | 28,012 | 28,034 | 28,069 | 28,250 | 28,562 | 29,382 | 34,825
c_13 | 44,455 | 44,619 | 44,770 | 45,216 | 46,153 | 48,141 | 51,127
c_14 | 46,449 | 46,460 | 46,518 | 46,571 | 46,720 | 47,884 | 55,554
c_15 | 48,428 | 48,491 | 48,604 | 48,742 | 49,619 | 53,497 | 53,570
c_16 | 12,597 | 12,469 | 12,505 | 12,519 | 12,536 | 12,561 | 12,630
c_17 | 21,262 | 21,183 | 21,208 | 21,234 | 21,267 | 21,794 | 22,374
c_18 | 23,899 | 23,925 | 23,958 | 24,016 | 24,087 | 25,035 | 25,224
c_19 | 30,815 | 30,953 | 31,056 | 31,257 | 31,368 | 33,324 | 35,193
c_20 | 40,033 | 40,189 | 40,246 | 40,448 | 40,504 | 43,510 | 45,514
c_21 | 43,309 | 43,852 | 44,127 | 45,036 | 46,362 | 47,955 | 50,385
c_22 | 44,965 | 45,431 | 45,751 | 46,460 | 47,707 | 50,446 | 51,806
c_23 | 45,630 | 46,062 | 46,359 | 47,198 | 48,247 | 50,707 | 52,055
c_24 | 47,409 | 47,736 | 48,094 | 48,731 | 49,974 | 51,080 | 52,337
Table 6. SR performance of algorithms.
Experiment ID | MAPSO | DRSC-MOAGDE | DSC-MOPSO | MO-Ring-PSO-SCD | NSGA-II | MOEA/D | MOGA
c_1 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
c_2 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
c_3 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
c_4 | 88.54 | 90.42 | 88.51 | 85.62 | 79.73 | 66.93 | 67.94
c_5 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
c_6 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
c_7 | 100.00 | 100.00 | 100.00 | 95.07 | 88.40 | 87.64 | 77.79
c_8 | 93.91 | 94.33 | 95.45 | 88.50 | 82.20 | 78.12 | 75.14
c_9 | 81.70 | 81.95 | 82.10 | 78.86 | 76.84 | 70.30 | 71.44
c_10 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
c_11 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
c_12 | 100.00 | 100.00 | 100.00 | 93.73 | 90.37 | 84.51 | 80.48
c_13 | 82.37 | 81.66 | 79.66 | 73.16 | 70.19 | 66.96 | 60.49
c_14 | 76.63 | 74.38 | 74.34 | 71.79 | 65.05 | 61.00 | 56.73
c_15 | 72.44 | 69.36 | 69.54 | 64.48 | 57.98 | 56.64 | 53.14
c_16 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
c_17 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
c_18 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
c_19 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
c_20 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
c_21 | 100.00 | 100.00 | 97.63 | 89.01 | 80.47 | 68.97 | 59.31
c_22 | 79.15 | 77.46 | 75.71 | 70.32 | 60.85 | 59.02 | 55.56
c_23 | 78.55 | 73.65 | 68.89 | 65.36 | 52.13 | 47.43 | 30.94
c_24 | 75.62 | 67.31 | 66.41 | 54.77 | 38.35 | 28.01 | 2.59
Table 7. MST performance of algorithms.
Experiment ID | MAPSO | DRSC-MOAGDE | DSC-MOPSO | MO-Ring-PSO-SCD | NSGA-II | MOEA/D | MOGA
c_1 | 2.20 | 2.12 | 2.23 | 2.51 | 3.05 | 3.27 | 3.43
c_2 | 2.31 | 2.07 | 2.15 | 2.64 | 3.26 | 3.52 | 3.95
c_3 | 2.44 | 2.33 | 2.30 | 2.96 | 3.41 | 3.80 | 4.56
c_4 | 3.57 | 3.43 | 3.47 | 3.72 | 4.11 | 4.61 | 5.71
c_5 | 2.21 | 2.17 | 2.25 | 2.57 | 3.17 | 3.43 | 3.68
c_6 | 2.85 | 2.77 | 2.82 | 3.03 | 3.74 | 4.15 | 4.53
c_7 | 3.16 | 3.38 | 3.48 | 3.61 | 4.82 | 5.34 | 5.94
c_8 | 3.82 | 3.66 | 3.73 | 3.95 | 5.27 | 5.75 | 6.67
c_9 | 4.47 | 4.27 | 4.36 | 4.62 | 6.10 | 6.46 | 7.49
c_10 | 2.84 | 2.75 | 2.82 | 2.99 | 3.33 | 3.57 | 3.82
c_11 | 3.44 | 3.36 | 3.39 | 3.72 | 4.72 | 4.62 | 5.79
c_12 | 4.31 | 4.12 | 4.22 | 4.86 | 5.95 | 5.80 | 6.51
c_13 | 4.75 | 4.77 | 4.81 | 5.25 | 6.31 | 6.33 | 7.63
c_14 | 5.19 | 5.26 | 5.46 | 5.95 | 7.32 | 7.41 | 8.77
c_15 | 5.43 | 5.69 | 5.75 | 6.54 | 8.05 | 8.18 | 9.43
c_16 | 2.37 | 2.43 | 2.50 | 2.71 | 3.41 | 3.57 | 4.20
c_17 | 2.89 | 2.92 | 2.93 | 3.28 | 4.05 | 4.12 | 5.32
c_18 | 3.36 | 3.48 | 3.54 | 3.83 | 4.77 | 4.85 | 6.34
c_19 | 4.55 | 4.61 | 4.78 | 5.06 | 5.92 | 5.89 | 7.87
c_20 | 5.22 | 5.37 | 5.46 | 6.22 | 7.18 | 7.34 | 9.24
c_21 | 6.38 | 6.59 | 6.62 | 7.47 | 8.37 | 8.41 | 10.32
c_22 | 7.49 | 7.60 | 7.57 | 8.51 | 9.54 | 9.66 | 12.58
c_23 | 8.52 | 8.73 | 8.89 | 9.49 | 10.79 | 10.94 | 14.95
c_24 | 9.86 | 10.11 | 10.20 | 12.77 | 13.22 | 13.53 | 16.62
Table 8. MFE performance of algorithms.
Experiment ID | MAPSO | DRSC-MOAGDE | DSC-MOPSO | MO-Ring-PSO-SCD | NSGA-II | MOEA/D | MOGA
c_1 | 2.2907 | 2.0006 | 2.1293 | 3.3864 | 5.534757 | 6.901857 | 4.013657
c_2 | 3.4791 | 3.1833 | 3.3104 | 4.7518 | 6.715433 | 8.123053 | 5.444547
c_3 | 4.6863 | 4.454 | 4.4127 | 5.9688 | 8.060657 | 9.22499 | 6.535343
c_4 | 5.8910 | 5.7154 | 5.6021 | 7.2629 | 9.193473 | 10.33064 | 7.7111
c_5 | 2.5102 | 2.382 | 2.3187 | 3.892 | 5.612993 | 6.897957 | 4.474647
c_6 | 3.3673 | 3.2442 | 3.1292 | 4.8207 | 6.560503 | 7.951943 | 5.354513
c_7 | 4.4221 | 4.1992 | 4.288 | 5.8776 | 7.527773 | 8.96283 | 6.208527
c_8 | 5.6129 | 5.3185 | 5.3343 | 6.7741 | 8.891813 | 10.12591 | 7.51408
c_9 | 6.7018 | 6.4664 | 6.6012 | 7.869 | 9.903737 | 11.31125 | 8.593057
c_10 | 2.5238 | 2.3399 | 2.2751 | 3.909 | 5.89719 | 7.238957 | 4.50722
c_11 | 3.2343 | 2.9957 | 3.1275 | 4.7325 | 6.498823 | 7.91568 | 5.09727
c_12 | 4.0074 | 3.8011 | 3.8887 | 5.7354 | 7.330413 | 8.61866 | 5.990957
c_13 | 5.3015 | 5.076 | 5.1301 | 6.968 | 8.4274 | 9.86548 | 7.25588
c_14 | 6.3393 | 6.1645 | 6.0477 | 7.8613 | 9.616467 | 10.86737 | 8.204233
c_15 | 7.7443 | 7.4475 | 7.4971 | 9.4302 | 11.12088 | 12.30027 | 9.488197
c_16 | 2.2482 | 2.4194 | 2.4710 | 3.7325 | 5.590293 | 6.676053 | 4.1449
c_17 | 2.6695 | 2.8650 | 2.9193 | 4.293 | 5.97359 | 7.29122 | 4.654963
c_18 | 3.2068 | 3.3687 | 3.4322 | 4.6823 | 6.431713 | 7.801853 | 4.94989
c_19 | 3.9835 | 4.2311 | 4.2659 | 5.5848 | 7.315233 | 8.538053 | 5.708143
c_20 | 5.1150 | 5.2879 | 5.3052 | 7.0592 | 8.291603 | 9.5398 | 6.821257
c_21 | 6.2237 | 6.5182 | 6.4691 | 7.9553 | 9.480343 | 10.82844 | 8.05883
c_22 | 7.0269 | 7.2036 | 7.2855 | 8.3954 | 10.18052 | 11.6449 | 8.767617
c_23 | 8.2340 | 8.4617 | 8.5146 | 9.7211 | 11.46392 | 12.86323 | 10.01952
c_24 | 9.3054 | 9.5289 | 9.5960 | 10.8173 | 12.49769 | 13.99352 | 11.156
Table 9. Hypervolume values.
Experiment ID | MAPSO | DRSC-MOAGDE | DSC-MOPSO | MO-Ring-PSO-SCD | NSGA-II | MOEA/D | MOGA | (r_1, r_2)
c_5 | 1.8643 | 1.933 | 1.8781 | 1.759 | 1.4546 | 1.3109 | 0.4688 | (22,000, 350)
c_6 | 2.1385 | 2.2142 | 2.164 | 2.0117 | 1.5687 | 1.4493 | 0.6678 | (40,000, 800)
c_7 | 2.4821 | 2.6056 | 2.5382 | 2.3015 | 1.8887 | 1.6887 | 0.9002 | (62,000, 1100)
c_8 | 2.5811 | 2.6837 | 2.6687 | 2.384 | 1.9333 | 1.7056 | 0.8436 | (68,000, 1500)
c_9 | 2.8036 | 2.9306 | 2.9146 | 2.5939 | 2.0256 | 1.7963 | 1.0452 | (69,000, 1700)
c_10 | 1.3491 | 1.4798 | 1.4504 | 1.1346 | 1.1349 | 0.8035 | 0.5479 | (23,000, 400)
c_11 | 1.3913 | 1.4619 | 1.4554 | 1.1566 | 1.1587 | 0.8288 | 0.5787 | (40,000, 800)
c_12 | 1.5034 | 1.4123 | 1.3879 | 1.2595 | 1.2201 | 0.9312 | 0.6761 | (54,000, 1200)
c_13 | 1.6304 | 1.5168 | 1.4953 | 1.3692 | 1.3274 | 1.0559 | 0.7779 | (56,000, 1500)
c_14 | 1.7058 | 1.5845 | 1.5612 | 1.4668 | 1.3804 | 1.1115 | 0.808 | (57,000, 2000)
c_15 | 1.8316 | 1.7343 | 1.7267 | 1.5777 | 1.412 | 1.2063 | 0.9465 | (58,000, 2500)
c_16 | 2.0992 | 1.9974 | 1.9621 | 1.85 | 1.5967 | 0.7781 | 0.412 | (14,000, 400)
c_17 | 2.3119 | 2.1932 | 2.1501 | 2.0406 | 1.7374 | 0.8672 | 0.4618 | (30,000, 850)
c_18 | 2.6322 | 2.5113 | 2.4778 | 2.3618 | 2.0116 | 1.1221 | 0.6498 | (40,000, 1200)
c_19 | 2.9624 | 2.8154 | 2.829 | 2.6767 | 2.2743 | 1.3069 | 0.7946 | (50,000, 1500)
c_20 | 3.4458 | 3.312 | 3.2611 | 3.1492 | 2.7236 | 1.6766 | 1.1481 | (53,000, 1600)
c_21 | 3.6943 | 3.5292 | 3.4659 | 3.366 | 2.9313 | 1.7517 | 1.3172 | (55,000, 2100)
c_22 | 3.9328 | 3.7508 | 3.7452 | 3.55 | 3.1034 | 1.7862 | 1.3576 | (60,000, 2500)
c_23 | 4.0047 | 3.8169 | 3.8017 | 3.5916 | 3.1393 | 1.6375 | 1.2498 | (55,000, 2800)
c_24 | 4.1859 | 4.0126 | 3.9654 | 3.7728 | 3.2642 | 1.8505 | 1.306 | (56,000, 3100)
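Table 9 reports hypervolume (HV) values together with a per-scenario pair (r_1, r_2). Assuming, as is common for bi-objective minimization, that this pair acts as the reference point, the HV of a non-dominated set can be computed by sorting the front along the first objective and summing the rectangles it dominates. The sketch below is a generic illustration of that computation, not the authors' implementation; any normalization or scaling applied to the values in Table 9 is not reproduced here, and the sample front is purely illustrative.

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a bi-objective (minimization) front w.r.t. reference point `ref`.

    `front` is a list of (f1, f2) objective vectors; points that do not dominate
    the reference point contribute nothing.
    """
    r1, r2 = ref
    # Keep only points inside the region bounded by the reference point, sorted by f1.
    pts = sorted(p for p in front if p[0] < r1 and p[1] < r2)
    hv, prev_f2 = 0.0, r2
    for f1, f2 in pts:
        if f2 < prev_f2:                       # skip points dominated along f2
            hv += (r1 - f1) * (prev_f2 - f2)   # rectangle newly dominated by this point
            prev_f2 = f2
    return hv


# Toy example with a reference point in the same format as Table 9.
front = [(30_000.0, 900.0), (35_000.0, 700.0), (45_000.0, 500.0)]
print(hypervolume_2d(front, ref=(62_000.0, 1_100.0)))  # 15200000.0
```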