An On-Orbit Task-Offloading Strategy Based on Satellite Edge Computing

Satellite edge computing has attracted the attention of many scholars due to its extensive coverage and low delay. Current research on satellite edge computing remains focused on on-orbit task scheduling; however, existing studies have not considered the situation in which heavily loaded satellites cannot participate in offloading. To solve this problem, this study first models the task scheduling of dynamic satellite networks as a minimization problem over the weighted sum of delay and energy consumption. In addition, a hybrid genetic binary particle swarm optimization (GABPSO) algorithm is proposed to solve this optimization problem. The simulation results demonstrate that the proposed method outperforms three baseline algorithms.


Introduction
In recent years, the terrestrial Internet has rapidly developed, with applications such as smart cities and environmental monitoring [1,2] attracting widespread attention. However, terrestrial Internet services are concentrated mainly in urban areas, and providing quality services in remote areas, such as islands, oceans, and deserts, is challenging. The instabilities of terrestrial networks become apparent when faced with natural disasters, such as floods and earthquakes [3]. Satellite networks have significant advantages, such as wide global coverage and high resistance to destruction. They have been used in emergency communications, navigation, positioning, and smart city applications [4,5], effectively compensating for the lack of terrestrial Internet services and providing critical support for 6G global interconnection [6,7]. However, previous research has primarily considered satellite networks as relay networks in a satellite-ground fusion architecture. Ground access terminal services and raw satellite remote sensing data, for example, are transmitted back to the ground cloud center for unified processing, despite the possibility of performing these tasks directly on the satellite [3,8,9]. Both cases result in significant delays and waste valuable satellite communication resources [10,11].
Mobile edge computing (MEC) [12] is an emerging architecture that brings the traditional cloud-centric computing model down to the edge of the user node. It provides the services and computing power needed at the user's periphery, creating service edge nodes with low latency and high processing rates for a better quality of service (QoS). MEC principles and low-latency, high-capacity low-Earth orbit (LEO) satellite networks are combined to create edge computing satellites (ECS). In this manner, satellite acquisition data can be processed in real time at satellite edge nodes [4,8,9,13], conserving bandwidth and allowing for quick mission reaction. For example, the TIANSUAN Constellation test satellite [14], co-chaired by Beijing University of Posts and Telecommunications and other institutions, has validated remote sensing image inference and computing services on orbit with the equipped KubeEdge and Sedna edge intelligence inference platform. The researchers also compared it with traditional ground-based backhaul analysis strategies and verified that on-orbit edge processing can effectively reduce transmission traffic and task-processing latency. The main contributions of this paper are summarized as follows:

• This research proposes a computation offloading strategy based on satellite edge computing, in which large dependent tasks can be divided into multiple subtasks and computed on other satellite nodes.

• Addressing the shortcomings of computation offloading strategies for satellite edge computing in the existing literature, this paper considers scenarios in which the satellite network topology changes and high-load satellites are not involved in offloading. The optimization problem of the weighted minimization of task completion delay and energy consumption is then investigated.

• To solve the optimization problem, the modified GABPSO algorithm is utilized. The proposed algorithm is extensively simulated, demonstrating its superior performance compared with other benchmark algorithms.
The remainder of this paper is organized as follows: Section 2 summarizes the related work. Section 3 describes the system model and problem formulation. Section 4 describes the proposed hybrid genetic binary particle swarm algorithm and analyzes the simulation results. Section 5 concludes the paper.

Related Works
Task scheduling in edge computing can be divided into two categories: static task scheduling and dynamic task scheduling [17]. When task information and network information are known, static task scheduling can be used directly for task offload scheduling.
Dynamic task scheduling reassigns the scheduling policy at each scheduling moment as the number of tasks, network information, and other factors change over time. In this paper, we focus on static task scheduling.
Static tasks that are offloaded to the edge can be classified into two types: independent tasks and dependent tasks [18]. Independent tasks can be split into multiple tasks processed in parallel, and each node returns the result after completing its task. In contrast, dependent tasks comprise several subtasks with logical dependencies; a subtask can be processed only after all of its preceding subtasks have been completed.
Due to the constraint relationship among subtasks, scheduling dependent tasks is more challenging than scheduling independent ones. With the rise of big data, dependent tasks are becoming more common, including target tracking and identification, which require combining and processing multiple tasks [18]. As a result, developing effective scheduling strategies for dependent tasks is crucial. Although static task scheduling in terrestrial MEC scenarios has been extensively studied, research on task scheduling for LEO satellite constellations is still in its early stages, particularly for dependent tasks.
For independent-type task scheduling, Ren et al. [19] proposed an inter-satellite collaborative computation method for formation-flying satellites. The authors characterized the formation-flying satellite network using a weighted undirected graph, dividing the computational tasks into multiple parallel computational subtasks assigned to each satellite node and solving the delay optimization problem under the energy consumption constraint using the modified particle swarm algorithm (MPSO). However, because the work was limited to formation-flying satellites with a constant topology, its applicability for task scheduling of LEO satellite constellations with dynamic topologies is limited.
Chen Wang [11] presented a strategy for LEO satellite collaborative computing. The authors used time-expanded graphs to generate a steady-state matrix for dynamic LEO satellite networks. Furthermore, a generalized discrete algorithm based on transmission capacity and computational power addressed the time-delay optimization problem for multi-satellite collaborative computing applications. The scheduling of independent tasks was simpler because there were no logical dependencies between tasks.
For dependent task scheduling, Wu et al. [20] proposed a task collaborative scheduling algorithm for small satellite cluster networks. The authors assigned dozens of jobs with logical relationships to separate satellites for collaborative computation. Considering that satellite nodes may fail, the authors proposed an improved task scheduling strategy based on three heuristic algorithms, so that the system can guarantee, as far as possible, that all tasks are completed by their deadlines while also providing some robustness in the scheduling algorithm. The effectiveness of the proposed algorithms was verified by comparison with genetic algorithms, among others. However, the authors again considered a network of small satellite clusters akin to formation-flying satellites, with a fixed inter-satellite topology, which limits its reference value for dynamic LEO satellite networks.
Guo et al. [21] first characterized the LEO satellite network using the weighted time-expanded graph (WTEG) model, in which a uniform delay weight parameter was added to each edge in the steady-state graph to analyze the delay of on-orbit computation and transmission. A directed acyclic graph (DAG) was used to characterize the task model, and the nodes and edges of the task model were mapped to the steady-state graph to find the minimum task completion delay. The authors employed a binary particle swarm algorithm to solve the optimal mapping problem and verified the algorithm's feasibility in comparison with ground cloud processing and other basic scheduling algorithms.
Han et al. [4] constructed a satellite edge cluster computing architecture using LEO and geostationary earth orbit (GEO) satellites as edge nodes for collaborative task computing and characterized the logical relationships and constraints among subtasks using the DAG model. Furthermore, the author designed a scheduling algorithm that considered the dynamic changes in the priority and link bandwidth of subtasks in different time slots. At each scheduling moment, the unresolved subtasks were assigned to the appropriate satellite nodes for processing to ensure that the corresponding metrics of interest were optimized.
In the works mentioned above, most authors included all edge computing satellites in the set of nodes assignable for task scheduling. In fact, due to factors such as uneven population distribution and varying business demands, the load of each satellite node varies significantly, and high-load satellites can hardly spare enough computing power for new task processing. However, this problem has not been considered in any of the above research.
At the same time, satellites are powered by solar energy, and energy consumption is an optimization target of great interest in satellite-terrestrial networks [22,23]; however, it was barely mentioned in the above work. As a result, the remainder of this paper investigates a strategy in which satellite nodes with high loads are excluded from task scheduling in the dynamic LEO satellite network task scheduler. Task latency and energy consumption are considered together. In addition, the research concentrates on dependent tasks. The differences between our work and the existing literature are summarized in Table 1.

Satellite Edge Computing Architecture
In this paper, we introduce a classic satellite edge computing architecture, as shown in Figure 1. Task requests uploaded by users on the ground can be continually received by the LEO satellite during its motion. These tasks include the analysis of monitoring data from sensors, communication assistance for ocean-going ships, emergency communication requests from the ground, and the analysis of remote sensing images from the satellite itself. Traditionally, the satellite acts as a relay node that transmits the tasks back to the ground central station, i.e., the ground cloud center, for batch processing. However, as satellites gain access to more powerful computing resources, tasks may instead be processed on the satellites in orbit rather than sent back to the ground cloud center. The satellites carry edge computing servers, whose functional components are shown in Figure 1. The satellite edge computing server can autonomously perform work, such as resource allocation and task scheduling, in orbit. This edge computing satellite model, which is close to the users on the ground, can efficiently decrease backhaul network traffic while also reducing task-processing latency. When a satellite receives many tasks, it considers the resource usage of each satellite in the constellation and develops a suitable strategy for offloading in orbit. The inter-satellite offloading problem for a single dependent task in this edge computing satellite scenario is the focus of this article. Each satellite can establish contact with the four satellites around it using inter-satellite links (ISLs), as depicted in Figure 2. Due to their various operating conditions, each satellite has a unique load state, which can be broadcast to other satellites via the ISL.
For example, when SAT1 receives a complete task request, it will select the appropriate low-load satellite in the constellation to offload parts of subtasks, while the high-load satellite cannot be selected for offloading.

SatTEG Construction
Unlike traditional terrestrial MEC networks, which have a fixed topology, satellite MEC networks undergo periodic topological changes due to the high-speed movement of satellites. This makes the transmission delay between satellites uncertain. The fundamental difficulty to be addressed in the subsequent study is how to reasonably model the satellite MEC network mathematically. Like previous works [11,24,25], our paper first characterizes the dynamic satellite network using time-expanded graphs. The LEO satellite network experiences periodic topological changes as it travels around the globe; we separate each operational period T into N time slots, with the length of each time slot ∆t = T/N. Within each time slot, the topological state of the satellite network can be considered relatively stable and constant.
The set of all satellites in the LEO satellite network can be stated as V = {v 1 , · · · , v i , · · · , v d }, in which d is the number of LEO satellites. In each time slot, every satellite establishes a connection with the two satellites preceding and following it in the same orbit, as well as the two satellites closest to it in adjacent orbits. The resulting network of connections among satellites in time slot t can be represented by the connection status C(t).
where C_ij(t) = 1 (i, j ∈ {1, . . . , d}) means that satellites v_i and v_j are connected in time slot t; otherwise, C_ij(t) = 0. Further, the connectivity of LEO satellite nodes during their operation cycle can be expressed as a route table SatTEG by combining the connection status C(t) of all time slots. By using SatTEG, we can obtain the connectivity path of any satellite in any time slot during the satellite network operation. In addition, we can obtain the number of hops transmitted between satellites, denoted as hop.
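To make the route-table construction concrete, the per-slot connection status C(t) and the minimum hop counts can be sketched as follows. This is a minimal Python illustration; the function names and the list-of-lists encoding of C(t) are our own assumptions, not the paper's implementation.

```python
from collections import deque

def hops(conn, src, dst):
    """Minimum hop count between two satellites within one time slot,
    given that slot's connection-status matrix conn (conn[i][j] = 1 iff connected)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        i = q.popleft()
        if i == dst:
            return dist[i]
        for j in range(len(conn)):
            if conn[i][j] == 1 and j not in dist:
                dist[j] = dist[i] + 1
                q.append(j)
    return None  # unreachable in this slot

def build_satteg(conn_per_slot):
    """Route table: hop counts between every satellite pair, one dict per slot."""
    table = []
    for conn in conn_per_slot:
        d = len(conn)
        table.append({(i, j): hops(conn, i, j) for i in range(d) for j in range(d)})
    return table
```

Each entry of the resulting table answers the query used later in the delay model: how many ISL hops a subtask must traverse between two satellites in a given slot.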

Task Model
This section discusses a dependent task model. We assume that in the satellite working stage, a satellite node can perform a remote sensing image online reasoning task [14]. The image reasoning task can be divided into several dependent subtasks. Each subtask can be assigned to different satellites for AI online analysis according to the satellite channel conditions and computing power. The division method for dependent subtasks [26] and AI on-orbit analysis [27,28] have been investigated to some extent, but they are not the topic of this paper; hence, they are not discussed in detail. As shown in Figure 3, we use a typical DAG to represent the subsequent research's subtask dependence model. The DAG can be expressed as ϕ = (O, E), where O = {o_1, · · · , o_p} represents the p subtask nodes and E = {e_ij | (i, j) ∈ p} denotes the set of directed edges in the DAG. Furthermore, we define any subtask o_i (i ∈ p) as o_i = {D_i, ζ_i}, where D_i (Mb) and ζ_i (CPU cycle/Mb) indicate the quantity of computation and the computational complexity of the subtask, respectively. When the computation of subtask o_i is finished, the calculated result data quantity RDo_i must be transferred to the subtask o_j, as indicated by the weighted directed edge e_ij = RDo_i.
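The subtask DAG ϕ = (O, E) can be represented concretely as follows. This is a toy Python sketch; the task sizes and field names (data_mb, complexity) are illustrative assumptions, not values from the paper.

```python
# Subtask set O: each o_i carries its data quantity D_i (Mb)
# and computational complexity zeta_i (CPU cycle/Mb).
subtasks = {
    "o1": {"data_mb": 8.0, "complexity": 200.0},
    "o2": {"data_mb": 4.0, "complexity": 150.0},
    "o3": {"data_mb": 6.0, "complexity": 300.0},
}

# Directed edge e_ij carries the result data quantity RDo_i passed from o_i to o_j.
edges = {("o1", "o2"): 1.5, ("o1", "o3"): 2.0}

def predecessors(task):
    """PRE_j: the subtasks that must finish before `task` may start computing."""
    return {i for (i, j) in edges if j == task}
```

Here, for instance, "o2" cannot start until "o1" has finished and its 1.5 Mb result has arrived at the satellite hosting "o2".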

At the same time, we represent the precursor subtask set of the subtask o_j as PRE_j. The subtask o_j is authorized to begin computation when all the subtasks in PRE_j have been completed and the corresponding calculation results have been successfully transferred to the satellite node where the subtask o_j is situated. According to the above analysis, the task scheduling strategy in the LEO satellite network is the mapping scheme from the task graph DAG to the SatTEG.

Node Mapping
We define m(o_i, v_j^n) = 1 to mean that the subtask o_i is assigned to the satellite node v_j in the nth time slot for computational processing. When m(o_i, v_j^n) = 0, the subtask is not allocated to that node. For all subtasks and satellite nodes over the complete time horizon, the mapping connection can then be written as a decision matrix M.
Any node mapping m(o_i, v_j^n) ∈ M needs to satisfy the single-assignment constraint that each subtask is computed on exactly one satellite node, i.e., ∑_{j,n} m(o_i, v_j^n) = 1 for every subtask o_i.
Sensors 2023, 23, 4271
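As a concrete illustration, the decision matrix M and a single-assignment check can be sketched as follows. This is a minimal Python sketch; the dictionary encoding and function names are our own assumptions.

```python
def make_decision_matrix(assignment, n_slots, n_sats):
    """Build M from an assignment {subtask_index: (slot n, satellite k)}:
    M[i][n][k] = 1 iff subtask o_i runs on satellite v_k in slot n."""
    M = {}
    for i, (n, k) in assignment.items():
        row = [[0] * n_sats for _ in range(n_slots)]
        row[n][k] = 1
        M[i] = row
    return M

def valid(M):
    """Single-assignment constraint: each subtask is mapped to exactly
    one (slot, satellite) pair."""
    return all(sum(map(sum, row)) == 1 for row in M.values())
```

A candidate scheduling solution is feasible only if `valid(M)` holds (high-load nodes impose a further constraint, introduced in the objective function below).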

Path Mapping
After each subtask node is completely mapped to the corresponding satellite node, the weighted directed edge e_ij between subtasks can be converted to the shortest path, Path, between the mapping nodes. m(o_i, v_j^n) = 1 guarantees the node assignment of subtasks, and each subtask may span multiple time slots from the start of transmission to the completion of the computation. start(v(o_j)) signifies the satellite node assigned to subtask o_j in its beginning time slot, and end(v(o_i)) denotes the satellite node in the time slot when the computation of subtask o_i is completed. The shortest path Path is obtained by Dijkstra's shortest path algorithm with the route table SatTEG as input. Similarly, we can also obtain the minimum hop count, hop.
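The shortest-path computation between mapping nodes can be sketched with a standard Dijkstra implementation. This is illustrative only; the adjacency-list encoding built from the SatTEG route table is our own assumption.

```python
import heapq

def dijkstra(adj, src, dst):
    """Shortest path between two satellite nodes.
    adj: {node: [(neighbor, weight), ...]}, derived from the SatTEG route table."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None, float("inf")
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]
```

With unit edge weights, the returned distance is exactly the minimum hop count used in the delay model.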

Objective Function
As analyzed in the task model, the subtask o_i is authorized to begin computation when all the subtasks in PRE_i have been completed. Therefore, the subtasks are not all offloaded in the first time slot. Considering that a subtask assignment may select a certain satellite node only after several time slots, we use T_i^wait to signify the inter-slot waiting duration before the transmission of this subtask.
Assume that the source node initiating the scheduling is v_1^1. When subtask o_i is assigned from source node v_1^1 to node v_k^n, it needs to wait for n − 1 time slots at the source node. T_i^wait can be expressed as

T_i^wait = (n − 1) · ∆t.

In addition, the original data of subtask o_i are transferred from source node v_1^1 to v_k^n in the following time:

T_i^trans = (D_i / B) · hop_i,

where B (Mb/s) denotes the transmission rate of the ISL, and hop_i denotes the minimum number of hops required for subtask o_i to be transmitted to the destination node v_k^n. Additionally, the energy consumption resulting from the transmission of subtask o_i through the ISL is defined as

E_i^trans = P_trans · T_i^trans,

where P_trans is the transmission power of the ISL. When the data of subtask o_i have all been transmitted to the destination node v_k^n, the computation time for this subtask is

T_i^com = (D_i · ζ_i) / C_k,

where C_k (CPU cycle/s) is the on-orbit processing performance of satellite node v_k, and the computational energy consumption of o_i on satellite node v_k is defined as

E_i^com = P_k^com · T_i^com,

where P_k^com is the computational power of satellite node v_k. The processing result RDo_i must be transferred to the node allocated to the succeeding subtask o_j once subtask o_i is computed, and the transmission time is stated as

T_i^re = (RDo_i / B) · hop_i^re,

where hop_i^re denotes the minimum number of hops required for RDo_i to be transmitted to the node allocated to o_j.
The transmission energy consumption of RDo_i is defined as

E_i^re = P_trans · T_i^re.

Then, the final completion time of subtask o_i is

T_i = T_i^wait + T_i^trans + T_i^com + T_i^re,

and the final energy consumption of subtask o_i is

E_i = E_i^trans + E_i^com + E_i^re.

We have specified that the start of subtask o_j's computation must occur after the completion of its preceding subtask set PRE_j. That is, the original data transfer of subtask o_j cannot be completed earlier than the completion time of every subtask in PRE_j. The depth-first algorithm can be used to determine the order in which subtasks are executed; for dependent subtasks, the calculations must be performed in the logical order of the tasks. The task's total completion time is then equal to the completion time of the last exit subtask o_l, i.e.,

T_total = T_l.

To simplify the model, the satellite node assigned to the exit subtask o_l in this study can be thought of as directly sending the calculation results to the ground cloud center after completing the task computation; the accompanying feedback delay is ignored, so T_l^re = 0. The total energy consumption can be defined as

E_total = ∑_i E_i.

Furthermore, we define Ω as the set of satellite nodes with a high service load. Any satellite node v_g ∈ Ω can only be utilized as an auxiliary node for subtask transmission, and no subtasks can be scheduled on it for computational processing. The scheduling procedure should then satisfy m(o_i, v_g^n) = 0 for all subtasks o_i and all v_g ∈ Ω.
Both the task completion delay and the system energy consumption must be considered during the satellite task scheduling process. The system cost obtained by weighting them together is defined as

COST = α · T_total + β · E_total,

where α and β are weights indicating the importance given to latency and energy consumption, respectively. In summary, the optimization problem for dependent task scheduling based on SatTEG can be represented as follows:

min COST s.t. (2)–(4), (14), (17). (19)
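As a rough illustration, the per-subtask delay and energy terms and the weighted system cost can be computed as follows. This is a minimal sketch; all parameter values and function names are illustrative assumptions, not the paper's simulation settings.

```python
B = 20.0          # assumed ISL transmission rate (Mb/s)
P_TRANS = 1.0     # assumed ISL transmission power (W)
ALPHA, BETA = 1.0, 0.1  # delay/energy weights

def subtask_cost(D, zeta, C_k, P_com, hop, hop_re, RD, wait):
    """Delay and energy of one subtask o_i on satellite v_k:
    D (Mb), zeta (cycle/Mb), C_k (cycle/s), P_com (W),
    hop / hop_re: ISL hops for data / result, RD (Mb), wait (s)."""
    t_trans = D / B * hop          # original-data transfer time
    t_com = D * zeta / C_k         # on-orbit computation time
    t_re = RD / B * hop_re         # result transfer time
    T = wait + t_trans + t_com + t_re
    E = P_TRANS * (t_trans + t_re) + P_com * t_com
    return T, E

def system_cost(total_delay, total_energy):
    """Weighted objective COST = alpha * T_total + beta * E_total."""
    return ALPHA * total_delay + BETA * total_energy
```

Summing `subtask_cost` over all subtasks in dependency order (respecting PRE_j) yields T_total and E_total, which `system_cost` combines into the scheduling objective.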

Algorithm Introduction
The binary particle swarm optimization (BPSO) algorithm was utilized to solve a similar model in [21]. The BPSO algorithm has a memory function and can converge to a stable solution in a short time, but it easily falls into a local optimum. The genetic algorithm (GA) has a wide spatial search capability and variational capability, with strong global search capability, and can effectively overcome the problem of falling into a local optimum during the search process, making it suitable for massively parallel computing. To compensate for the limitations of the BPSO algorithm, we combine the BPSO algorithm and the GA in this work to obtain the GABPSO algorithm. It not only ensures a better information exchange mechanism but also avoids the deficiency of falling into a local optimum, enhances the search velocity, and improves the success rate of finding optimal solutions. First, we design the position X_u^i and velocity V_u^i of the uth particle (u < U) in the ith iteration (i < I) of the BPSO algorithm. In (23), X_u^i represents a 1 × J-dimensional binary array, with 2^J > d × N. Each particle position X_u^i is represented by a binary combination of p groups, and the corresponding task allocation node can be obtained by combining the decision matrix M after decimal decoding. That is, ∀X_u^i (u ∈ U, i ∈ I) is a possible solution to the objective function. In (24), the initial value of v_u^i(k) ∈ V_u^i is defined as a random array within [0, 1], which is matched with X_u^i. The velocity update formula is

v_u^{i+1}(k) = W · v_u^i(k) + C1 · r1 · (P_xbest(k) − x_u^i(k)) + C2 · r2 · (G_xbest(k) − x_u^i(k)),

where W is the inertia weight, r1 and r2 are random numbers between 0 and 1, and C1 and C2 are the learning factors. The position update formula is

x_u^{i+1}(k) = 1 if r < S(v_u^{i+1}(k)), and 0 otherwise, with S(v) = 1 / (1 + e^{−v}),

where r is a random number. The GA's crossover and mutation operations are introduced after updating the particle positions. The updated particle positions in the ith iteration are polled in turn. A particle position X_f^i is randomly chosen from the particle population, and its partial encoding is crossed with the polled particle position.
The crossed particle position is inverted with a particular probability to obtain the mutation. Finally, the particle positions are utilized as input of the decision matrix M to find the fitness function, which is the value of the optimization objective function COST in (19). When we find the minimum fitness function, the minimum system cost can be obtained.
The specific steps of the improved GABPSO hybrid algorithm are shown in Algorithm 1.

Algorithm 1 GABPSO Task Scheduling Algorithm
Input: task ϕ; route table SatTEG; high-load satellite set Ω; weights α, β; ISL bandwidth B; power P_trans, P_com
1: Initialize a particle swarm
2: while u < U do
3:   Initialize X_u^0, V_u^0
4: end while
5: Calculate the fitness function fit of the particle swarm by substituting the particles into the decision matrix M, i.e., fit = COST
6: Set the current position of each particle as its best position P_xbest
7: Set the position of the particle with the smallest fit among all particles as the global best position G_xbest
8: for i < I do
9:   for u < U do
10:    Update the particle velocity based on (22)
11:    Update the particle position based on (23)
12:    Perform crossover and mutation on the particle position
13:    Calculate the fitness function fit for the new particle position
14:    Update P_xbest and G_xbest if the new fitness is smaller
15:  end for
16: end for
Output: the global best position G_xbest and the corresponding minimum COST

Convergence Analysis

• Assumption (H1)
The solution sequence generated by the algorithm, x_{t+1} = D(x_t, ξ_t), satisfies f(D(x, ξ)) ≤ f(x), and if ξ ∈ S, then f(D(x, ξ)) ≤ min{f(x), f(ξ)}, where D is the function that generates the solution to the problem, ξ is the random vector generated from the probability space (R^n, B, µ_k), f is the objective function, S, a subset of R^n, denotes the constraint space of the problem, µ_k is the probability measure on B, and B is the σ-field of subsets of R^n.

• Assumption (H2)
For any (Borel) subset A of S with measure v(A) > 0, we have that

∏_{t=0}^{∞} (1 − µ_t(A)) = 0,

where µ_t(A) is the probability that the algorithm generates a point in A at step t.

• Convergence Theorem (Global Search)
Suppose that f is a measurable function, S is a measurable subset of R^n, and (H1) and (H2) are satisfied. Let {x_t}_{t=0}^{∞} be a sequence generated by the algorithm. Then,

lim_{t→∞} P(x_t ∈ R_ε) = 1,

where P(x_t ∈ R_ε) is the probability at step t that x_t lies in R_ε, the set of global best points. The theorem shows that any random search algorithm converges to the global optimum with probability 1 as long as it satisfies Assumptions H1 and H2.
Next, we will analyze whether the GABPSO algorithm satisfies the above assumptions.
In the GABPSO algorithm, the solution sequence is p_{g,t}, where t is the number of evolutionary generations and p_{g,t} is the best particle position at the tth generation. The function D is defined as

D(p_{g,t}, x_t) = p_{g,t} if f(p_{g,t}) ≤ f(x_t), and x_t otherwise.

Then, it is easy to prove that it satisfies Assumption H1.
To satisfy Assumption H2, the union of the sample spaces of a particle population of size N must contain S, i.e., S ⊆ ∪_{i=1}^{N} M_{i,t}, where M_{i,t} is the support set of the sample space of the ith particle at the tth generation. It has been shown that the basic PSO algorithm does not satisfy Assumption H2 [32,33]. In the PSO algorithm, the particle swarm contracts as the number of iterations grows, which means that there exists an integer t1 such that when t > t1, there exists a set A ⊂ S such that ∑_{i=1}^{N} µ_{i,t}(A) = 0. This is not consistent with Assumption H2. However, the GABPSO algorithm adds the crossover and mutation operations of the genetic algorithm. For a normally evolved particle, we denote the union of its support sets by α; for a particle recreated using crossover and mutation, we denote the union of its support sets by β. Due to the randomness and variability of the crossover and mutation operations, there must exist an integer t2 such that β ⊇ S when t > t2. Therefore, for the GABPSO algorithm, there must exist an integer t2 such that α ∪ β ⊇ S when t > t2, so that any Borel subset A of S with v(A) > 0 is sampled with positive probability. Therefore, the GABPSO algorithm satisfies Assumption H2.
According to the Convergence Theorem, it is known that the GABPSO algorithm can converge to the global optima with probability 1.
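For concreteness, the GABPSO procedure of Algorithm 1 can be sketched as a self-contained toy implementation. The parameter defaults, the bit-string particle encoding, and the `fitness` placeholder standing in for evaluating COST through the decision matrix M are illustrative assumptions, not the paper's exact implementation.

```python
import math
import random

def gabpso(fitness, n_bits, U=20, I=50, W=0.8, C1=2.0, C2=2.0, pm=0.05):
    """BPSO velocity/position updates followed by GA crossover and mutation.
    fitness: objective to minimize (stands in for COST); n_bits >= 2."""
    swarm = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(U)]
    vel = [[random.random() for _ in range(n_bits)] for _ in range(U)]
    p_best = [s[:] for s in swarm]                 # per-particle best positions
    g_best = min(p_best, key=fitness)[:]           # global best position
    for _ in range(I):
        for u in range(U):
            x, v = swarm[u], vel[u]
            for k in range(n_bits):
                # velocity update with inertia and learning factors
                v[k] = (W * v[k]
                        + C1 * random.random() * (p_best[u][k] - x[k])
                        + C2 * random.random() * (g_best[k] - x[k]))
                # sigmoid position rule: bit set with probability S(v)
                x[k] = 1 if random.random() < 1 / (1 + math.exp(-v[k])) else 0
            # GA step: single-point crossover with a random particle...
            mate = random.choice(swarm)
            cut = random.randrange(1, n_bits)
            x[cut:] = mate[cut:]
            # ...then bit-flip mutation with probability pm per bit
            for k in range(n_bits):
                if random.random() < pm:
                    x[k] = 1 - x[k]
            if fitness(x) < fitness(p_best[u]):
                p_best[u] = x[:]
        g_best = min(p_best + [g_best], key=fitness)[:]
    return g_best, fitness(g_best)
```

The crossover and mutation steps are what keep the sampling support broad, which is exactly the property the convergence argument above relies on.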

Algorithm Complexity Analysis
In the GABPSO algorithm, the population size is U, the number of iterations is I, and the problem size is N. For a single particle, the complexity of each operation is as follows:
• Velocity update: each particle obtains a new velocity based on (25); the time complexity is O(1).
• Position update: each particle obtains a new position based on (26); the time complexity is O(N).
• Fitness calculation: each particle is evaluated through the decision matrix M to obtain its fitness; the time complexity is O(N).
• Fitness evaluation: each particle is compared with its historical best; the time complexity is O(1).
• Crossover and mutation: each particle performs crossover and mutation with a certain probability; the time complexity is O(1).
Thus, for a single particle, the time complexity of one iteration is proportional to the problem size, i.e., O(N). Therefore, the overall time complexity of the algorithm is O(IUN).

Simulation Analysis
This research first created a 6 × 5 = 30 LEO satellite network based on the Iridium NEXT architecture. STK was used to obtain the shortest distance between satellites in each time slot in order to build the route table SatTEG. The specific scene parameters [19,21] and the related parameters of the GABPSO algorithm are listed in Table 2. To confirm the effectiveness of the proposed approach, we provide three reference algorithms for comparison: the GA, the BPSO algorithm [21], and the MPSO algorithm [19]. In the MPSO algorithm, the flight velocity of some particles is inverted with a certain probability before the position update; these "variant particles" are created to obtain better search performance. The same parameters were set equally across the algorithms to avoid losing generality, and the scene parameters were identical. Meanwhile, the relevant simulation results were averaged over 30 runs to reduce the impact of stochasticity.
Since the optimization objective of this study is the system cost obtained by weighting the delay and energy consumption, different combinations of weights will be studied first. The energy consumed during offloading is far greater than the time delay. When α is 1 and β is 0.1, the two are nearly equal. Therefore, we will keep α at 1 and explore the impact on the system cost as β grows. As illustrated in Figure 4, the energy consumption had an increasing impact on the system cost as β rose, and the system cost grew gradually. However, the GABPSO algorithm was able to solve for the lowest system cost whether the delay and energy consumption were essentially equal or the energy consumption was given much more importance than the delay. In subsequent simulations, α was set to 1 and β was set to 0.1 to simulate the scenarios where delay and energy consumption were equally important.
We first explored the convergence performance of the four algorithms. It is noteworthy that we simulated two representative scenarios in which the high-load satellite ratios were 20% (30 × 20% = 6) and 60% (30 × 60% = 18), respectively. When the percentage of high-load satellites was high, most of the solutions in the search domain became infeasible, whereas a small number of high-load satellites had little impact; corresponding simulations were therefore necessary to explore the possible consequences. The simulation figures demonstrate that the four algorithms exhibited similar trends under the two conditions. During particle evolution in the BPSO and MPSO algorithms, the individual historical best position P_xbest and the global best position G_xbest of the particle population steered the search toward the optimal solution. Therefore, compared with the simple GA, the BPSO and MPSO algorithms could move more quickly toward the optimal solution. These two algorithms, however, fell into local optima as the particle swarm diversity vanished. The GABPSO algorithm combined the benefits of the PSO algorithm's quick convergence and the GA's robust search capacity, enabling speedy convergence to a better solution, as illustrated in Figure 5a. As the number of feasible solutions in the search domain decreased sharply, the convergence speed and the optimal solution deteriorated. However, the GABPSO algorithm still outperformed the other three algorithms, as shown in Figure 5b.
Firstly, we explored the relationship between the system cost and the number of high-load satellites. As indicated in Figure 6, the system cost of each algorithm gradually increased as the proportion of high-load satellites rose. This is because, when there were fewer high-load satellites, more satellite nodes closer to the source satellite node were available. This reduced the number of hops required for offloading through the ISL, which in turn decreased the transmission delay and energy consumption. In contrast, when the percentage of high-load satellites was large, fewer available satellite nodes were close to the source satellite node, making inter-satellite offloading slower and more energy-intensive. The GA algorithm was able to obtain a lower system cost than the MPSO and BPSO algorithms, consistent with the analysis of the convergence curves. The GABPSO algorithm, on the other hand, maintained the best performance throughout because it combined the advantages of both.
Figure 6. System cost vs. high-load satellite rate.
Next, we set the number of high-load satellites to 3 (30 × 10% = 3). On the basis of this, we studied the effect of other system parameters on the system cost.
At first, we investigated how the quantity of original data in the subtasks would affect the system cost. As seen in Figure 7, the system cost grew gradually, with an increasingly pronounced trend. The increased quantity of original data for the subtasks led directly not only to greater computational delay and energy consumption but also to greater inter-satellite transmission delay and energy consumption, which explains the relatively fast change in the curve. Similarly, the GABPSO algorithm consistently outperformed the other baseline algorithms. The MPSO algorithm used the inversion of particle velocity to achieve particle variation; this made it difficult to bring essential changes to the particle population, so the entire search domain could not be fully explored. As a result, the MPSO method did not significantly outperform the BPSO algorithm.
Further, we explored the changes brought about by computational complexity. An increase in computational complexity means that a task takes longer to compute on the satellite nodes, which in turn increases the overall system cost. As depicted in Figure 8, the cost of all four algorithms increased as the computational complexity grew. The MPSO and BPSO algorithms lacked a powerful global search capability for finding a better solution, leading to the worst performance, while the GABPSO algorithm performed the best.
Finally, we incrementally increased the ISL bandwidth while holding the other variables constant to investigate how the transmission capacity of the ISL affects the system cost. Figure 9 shows how strongly the ISL bandwidth affected the system cost. The increased bandwidth enabled faster offloading of subtasks among satellite nodes, allowing the system cost to be gradually reduced, and the GABPSO algorithm still performed the best. The effect of the same bandwidth increase on the system cost was more noticeable when the ISL bandwidth was small. When the ISL bandwidth is large enough, the inter-satellite transmission delay and energy consumption become negligible; in that case, the system cost is almost equal to the delay and energy consumption required for the computation. It is anticipated that inter-satellite task scheduling will be completed in a relatively short time in the future if satellite transmission performance is markedly enhanced.
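The diminishing effect of extra bandwidth can be seen in a simple per-hop cost model. This is an illustrative sketch only: the delay = data / bandwidth and energy = power × delay model, the weights, and all names are assumptions, not the paper's exact formulas.

```python
def isl_transmission_cost(data_bits, bandwidth_bps, tx_power_w,
                          alpha=1.0, beta=0.1):
    """Weighted per-hop ISL transmission cost (illustrative model).

    delay = data / bandwidth and energy = transmit power * delay, so the
    weighted cost falls as 1 / bandwidth; extra bandwidth therefore helps
    most when the link is slow, matching the trend in Figure 9.
    """
    delay = data_bits / bandwidth_bps          # seconds
    energy = tx_power_w * delay                # joules
    return alpha * delay + beta * energy

# An 8 Mbit subtask over a 5 W link: a 10x faster ISL cuts the
# transmission cost by 10x, but the absolute saving keeps shrinking.
slow = isl_transmission_cost(8e6, 10e6, 5.0)
fast = isl_transmission_cost(8e6, 100e6, 5.0)
assert slow > fast
```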
In the above simulation scenarios, the GABPSO algorithm was always able to achieve the best performance because it combined the fast convergence capability of the BPSO algorithm with the variational properties of the GA. Owing to its crossover and mutation operations, the GA algorithm was also better able to search for good solutions. The MPSO and BPSO algorithms had the worst overall performance because they tended to fall into local optima. The simulation results are consistent with the analysis of the convergence curves.

Statistical Analysis
The simulation findings provided some support for the GABPSO algorithm's superiority. Referring to research [36], a two-way analysis of variance (ANOVA) was used to explore the system cost in relation to each parameter for a more in-depth analysis. This procedure tests the effect of two factors on the dependent variable, which is consistent with the type of simulation in this study. Firstly, the objective was defined as testing whether there was any difference among the scheduling algorithms or among the high-load satellite proportions at the standard 0.05 significance level. The calculation parameters are shown in Table 3.

• Step 1: Hypotheses
Null hypotheses:
H0(1): There is no significant difference among the scheduling algorithms.
H0(2): There is no significant difference among the high-load satellite proportions.
Alternative hypotheses:
H1(1): There is a significant difference among the scheduling algorithms.
H1(2): There is a significant difference among the high-load satellite proportions.
• Step 2: In this scenario, a = 4 and b = 7, at the 0.05 level of significance.
• Step 3: Calculation. Total sum of squares: SST. Variation between rows: SSR. Variation between columns: SSC = 461,881.36. Variation due to error: SSE = SST − SSR − SSC = 10,256.93. The specific results of the two-way ANOVA analysis are shown in Table 3.
• Step 4: The test decisions are summarized in Table 4.
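The sum-of-squares decomposition in Step 3 can be reproduced with a short script. This is a generic two-way ANOVA sketch for an a × b table with one observation per cell (rows = algorithms, columns = high-load proportions); the function name is an assumption, and the example data are not the paper's.

```python
def two_way_anova_ss(table):
    """Two-way ANOVA sums of squares for an a x b table with one
    observation per cell. Returns (SST, SSR, SSC, SSE), where
    SSE = SST - SSR - SSC. The F ratios then follow as, e.g.,
    F_rows = (SSR / (a - 1)) / (SSE / ((a - 1) * (b - 1))).
    """
    a, b = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (a * b)
    # Total variation around the grand mean.
    sst = sum((x - grand) ** 2 for row in table for x in row)
    # Variation between rows (row means vs. grand mean).
    ssr = b * sum((sum(row) / b - grand) ** 2 for row in table)
    # Variation between columns (column means vs. grand mean).
    ssc = a * sum((sum(table[i][j] for i in range(a)) / a - grand) ** 2
                  for j in range(b))
    sse = sst - ssr - ssc
    return sst, ssr, ssc, sse
```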

Conclusions
This research explores the problem of offloading a single dependent task to multiple satellites for collaborative processing in a satellite edge computing scenario. Firstly, a model is proposed in which tasks are offloaded to multiple satellites for collaborative computing without the participation of high-load satellites. Secondly, the crossover and mutation operations of the GA are introduced to address the drawbacks of the traditional BPSO algorithm. With the optimized GABPSO algorithm, a lower system cost can be obtained in this scenario. The experiments verified that the optimized algorithm performs better than the other baseline algorithms.
In practical satellite applications, remote sensing images are very large, and adopting the strategy proposed in this paper can effectively accelerate their on-orbit analysis. In addition, in future satellite-IoT architectures, the analysis of ground monitoring data acquired by satellites is also applicable to the scenario studied in this work. The proposed strategy therefore has reference value for all of these applications.

Future Works
The scenarios studied in this work address only the single-user, single-service case. Future research will concentrate on the task-scheduling problem in multi-user, multi-service satellite scenarios. Furthermore, in practical engineering, tasks arrive continuously, so dynamic scheduling of continuous tasks also requires further research.
In terms of the algorithm, the optimization objectives and heuristic algorithms are combined through node mapping and algorithmic coding approaches in this study. Relying on the algorithm's own search to find the optimal solution consumes considerable time, and combining it with a priori methods could effectively improve the search speed. In addition, the decision matrix and the criteria defined in this paper occupy a large amount of space. If these problems are sufficiently improved in future work, the approach will have high engineering value.