Article

A Distributed Deadlock-Free Task Offloading Algorithm for Integrated Communication–Sensing–Computing Satellites with Data-Dependent Constraints

1 School of Electronic and Information Engineering, Xi’an Jiaotong University (XJTU), Xi’an 710049, China
2 State Key Laboratory of Astronautic Dynamics, China Xi’an Satellite Control Center, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(18), 3459; https://doi.org/10.3390/rs16183459
Submission received: 20 July 2024 / Revised: 29 August 2024 / Accepted: 16 September 2024 / Published: 18 September 2024

Abstract:
Integrated communication–sensing–computing (ICSC) satellites, which integrate edge computing servers on Earth observation satellites to process collected data directly in orbit, are attracting growing attention. Nevertheless, some monitoring tasks involve sequential sub-tasks like target observation and movement prediction, leading to data dependencies. Moreover, the limited energy supply on satellites requires the sequential execution of sub-tasks. Therefore, inappropriate assignments can cause circular waiting among satellites, resulting in deadlocks. This paper formulates task offloading in ICSC satellites with data-dependent constraints as a mixed-integer linear programming (MILP) problem, aiming to minimize service latency and energy consumption simultaneously. Given the non-centrality of ICSC satellites, we propose a distributed deadlock-free task offloading (DDFTO) algorithm. DDFTO operates in parallel on each satellite, alternating between sub-task inclusion and consensus and sub-task removal until a common offloading assignment is reached. To avoid deadlocks arising from sub-task inclusion, we introduce the deadlock-free insertion mechanism (DFIM), which strategically restricts the insertion positions of sub-tasks based on interval relationships, ensuring deadlock-free assignments. Extensive experiments demonstrate the effectiveness of DFIM in avoiding deadlocks and show that the DDFTO algorithm outperforms benchmark algorithms in achieving deadlock-free offloading assignments.

1. Introduction

Earth observation satellites play an essential role in environment monitoring, geographical surveys, and situational awareness [1]. With the rapid growth of Earth observation satellites, massive amounts of observation data are generated worldwide, requiring further processing [2]. A common practice is to transmit observation data to ground stations, but this strains terrestrial–satellite network resources and exceeds the communication capabilities of satellites, causing congestion and packet loss [3,4,5]. Traditional compression methods [6,7] reduce data volume but retain task-irrelevant information. To minimize communication costs and latency, it is more efficient to transmit only task-relevant information. Inspired by mobile edge computing (MEC), some works attempt to deploy edge computing servers on Earth observation satellites to process data onboard [5,8]. These satellites, known as integrated communication–sensing–computing (ICSC) satellites, have drawn increasing attention in recent years [9,10,11,12].
In ICSC satellite networks, a group of ICSC satellites connected by inter-satellite links (ISLs) perform coordinated observations and onboard data processing for a set of monitoring tasks. Some tasks, such as military monitoring [13] and target tracking [14], require continuous target observation and target movement prediction, involving a series of observation and computation sub-tasks. Therefore, task offloading, determining where and when sub-tasks are performed, is crucial in ICSC satellite networks. However, due to data dependencies among sub-tasks and limited energy supply on satellites, which requires the sequential execution of sub-tasks, not all offloading assignments are feasible.
Specifically, on the one hand, there are data dependency constraints between the observation and computation sub-tasks. A computation sub-task must wait until the preceding observation sub-task completes and provides the necessary data. Similarly, an observation sub-task must wait for the preceding computation sub-task to finish in order to obtain the predicted target location [14]. These dependencies introduce additional constraints on task execution. On the other hand, ICSC satellites have limited electricity generated by their solar panels [15], resulting in power restrictions. Consequently, an ICSC satellite cannot perform observation and computation sub-tasks simultaneously, and sub-tasks are executed sequentially on each satellite. Even worse, if sub-tasks are offloaded inappropriately, several satellites may need information from sub-tasks that can only be executed later by those same satellites, leading to an undesirable phenomenon called deadlock [16]. In such a deadlock, ICSC satellites are trapped in circular waiting, leading to the suspension of the entire system. Therefore, task offloading in ICSC satellites should be carefully designed to resolve these conflicts.
The task offloading problem in ICSC satellites has received increasing attention recently. Recent works [5,17,18,19,20] employed heuristic and convex optimization methods to achieve offloading assignments for ICSC satellites. Additionally, considering the decentralized nature of ICSC satellites, study [21] proposed a distributed scheduling algorithm for ICSC satellite clusters. Unfortunately, some practical challenges are still neglected, such as data dependency constraints among sub-tasks, the coupling between observation and computation, and potential deadlocks. By resolving task offloading problems under these challenges, we enable ICSC satellites to achieve self-organized task assignments, autonomous task execution, and in-orbit data processing.
In this paper, we address the task offloading problem in ICSC satellites with data-dependent constraints and resolve potential deadlocks. The main contributions are summarized as follows:
  • We establish a mixed-integer linear programming (MILP) model for the task offloading problem in ICSC satellites, considering data dependence constraints among sub-tasks.
  • To address this problem, we introduce a decentralized performance impact (PI) framework. Our method, the distributed deadlock-free task offloading (DDFTO) algorithm, operates on each satellite in parallel, utilizing local communication via ISLs. It alternates between stages of sub-task inclusion and consensus and sub-task removal on each satellite, continuing until all ICSC satellites converge on a common offloading assignment.
  • To resolve undesired deadlocks in offloading assignments, a deadlock-free insertion mechanism (DFIM) is integrated into DDFTO. We demonstrate its effectiveness and computational complexity in resolving deadlocks.
The rest of this article is organized as follows: Section 2 reviews related works. Section 3 presents the problem description and formulation, including the MILP model. Section 4 presents the proposed DDFTO algorithm, including the deadlock-free insertion mechanism. Section 5 describes and analyzes extensive numerical experiments. Section 6 discusses the parameter effects on DDFTO. Finally, conclusions are drawn in Section 7.

2. Related Works

ICSC satellites have gained attention for their fast data processing and decision-making capabilities, which are crucial for urgent tasks like disaster rescue and military surveillance. Firstly, ICSC satellites reduce data transmission, conserving valuable terrestrial–satellite communication resources. For example, Zhu et al. [22] developed a two-tier processing framework for hyperspectral images, while Mateo-Garcia et al. [23] used machine learning on the Φ-Sat-1 satellite to filter out cloudy images. Secondly, these satellites enhance efficiency in emergencies such as earthquakes and wildfires by prioritizing critical areas for data transmission. Furano et al. [24] used edge computing to focus on key areas for image transmission, and Bui et al. [25] employed onboard computing on the IRIS-B satellite to detect and predict landslides, sending only essential information to ground stations. Finally, ICSC satellites support real-time object tracking, allowing ground stations to receive precise object locations instead of large video frames. Shi et al. [26] developed a method for tracking moving aircraft using satellite video frames.
Although efficient task offloading algorithms play a significant role in ICSC satellites, little attention has been paid to this field [17,18,19,20,21]. Existing methods can be roughly classified into the following two types: centralized and distributed. Specifically, He et al. [17] developed a heuristic algorithm that schedules tasks based on the density of residual tasks. A central node manages task filtering and scheduling periods, while edge satellites use heuristic rules to schedule their own tasks. To minimize energy consumption for ICSC satellites, Leyva-Mayorga et al. [5] formulated a satellite mobile edge computing framework for Earth observation, optimizing image distribution and compression parameters. Considering ISL communication resources, Valente et al. [18] proposed two heuristic approaches with varying computational complexities to address issues related to bandwidth and computing resource allocation among ICSC satellites. Zhu et al. [19] decomposed the task offloading problem into two subproblems, devising a strategy for joint subframe allocation and task partitioning to optimize overall performance, achieving Pareto-optimal solutions. Finally, Gost et al. [20] divided the problem into two subproblems and applied convex optimization methods to obtain a trade-off between energy consumption and latency.
The aforementioned methods are centralized and straightforward to implement and operate swiftly. However, they necessitate constant communication between the decision node and all ICSC satellites, which can lead to a single point of failure [27]. Considering the non-centrality of ICSC satellites, Biswas et al. [21] proposed a distributed scheduling algorithm for ICSC satellite clusters, and the overall latency in task execution was significantly improved. Inspired by previous works [28,29], we introduce a distributed heuristic algorithm called DDFTO in this paper. DDFTO operates on each ICSC satellite in parallel, using local communication via ISLs. It alternates between two newly designed stages: sub-task inclusion and consensus and sub-task removal, until ICSC satellites converge on a common offloading assignment.
However, neglecting data dependency constraints among sub-tasks and potential deadlocks in monitoring scenarios can lead to task failures. Therefore, this paper focuses on designing a fully decentralized deadlock-free task offloading algorithm for ICSC satellites that accounts for the above practical factors.

3. Problem Description and Formulation

As depicted in Figure 1, this paper explores the task offloading problem for ICSC satellites, in which a cluster of ICSC satellites collaboratively monitor terrestrial targets and process collected data directly in orbit. This section provides an overview of ICSC satellites, monitoring tasks, and relevant constraints. Additionally, we propose an MILP model for the considered problem. Key notations are listed in Table 1.

3.1. ICSC Satellites

Let S = {1, 2, 3, …, n} be the set of n ICSC satellites. Each ICSC satellite s ∈ S not only serves as an Earth observation satellite but also deploys an MEC server with computing capacity Cs for in-orbit image processing. Every satellite s ∈ S is connected to four neighboring satellites via ISLs, and the network topology is represented by an n × n matrix G, where G[s, k] = ls,k if satellites s and k can exchange information via an ISL of distance ls,k; otherwise, G[s, k] = ∞.
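Since later latency formulas rely on the shortest ISL route length between any pair of satellites, the topology matrix G can be converted into an all-pairs distance table. The sketch below is illustrative only: the document connects each satellite to four neighbors, while the toy ring here uses two per satellite (1000 km links) to keep the example small; Floyd–Warshall is a standard choice, not necessarily the paper's.

```python
import math

def shortest_paths(G):
    """All-pairs shortest ISL route lengths (Floyd-Warshall) from the
    topology matrix G, where G[s][k] is the ISL length l_{s,k} or
    infinity when s and k are not directly connected."""
    n = len(G)
    d = [row[:] for row in G]
    for s in range(n):
        d[s][s] = 0.0
    for m in range(n):
        for s in range(n):
            for k in range(n):
                if d[s][m] + d[m][k] < d[s][k]:
                    d[s][k] = d[s][m] + d[m][k]
    return d

# Hypothetical 4-satellite ring (two neighbors each, 1000 km links).
INF = math.inf
G = [[INF, 1000.0, INF, 1000.0],
     [1000.0, INF, 1000.0, INF],
     [INF, 1000.0, INF, 1000.0],
     [1000.0, INF, 1000.0, INF]]
L = shortest_paths(G)
```

The resulting L[s][k] plays the role of ls,k in the communication latency model.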

3.2. Monitoring Tasks and Sub-Tasks

Let T = {1, 2, 3, …, m} denote the monitoring tasks for m targets. According to the collaborative observation method for tracking moving targets [30], each monitoring process involves a sequence of target observation and movement prediction subprocesses. Consequently, each monitoring task t ∈ T can be divided into multiple observation and computation sub-tasks. Figure 2 illustrates an example of this monitoring process.
Therefore, each monitoring task t ∈ T can be formally represented by a directed acyclic graph (DAG) 𝒢t = (𝒱t, ℰt), where vertex set 𝒱t collects all |𝒱t| sub-tasks in t, and edge set ℰt denotes the dependencies among them. Specifically, an edge (u, v) ∈ ℰt indicates that sub-task v depends on the results of its predecessor sub-task u (u, v ∈ 𝒱t); thus, sub-task v can be performed only when predecessor u has completed and the necessary results have been received. We let 𝒫v (resp. 𝒮v) collect all predecessors (resp. successors) of v, and V = {v ∈ 𝒱t | t ∈ T} denotes the set of all sub-tasks across all monitoring tasks.
Moreover, let the parameter tuple <ξv, ρv, dvI, dvO> characterize each sub-task v ∈ V, where ξv refers to the workload of v, ρv represents the imaging time of v, dvI refers to the input data of v, and dvO represents the output. Since sub-task v requires the results of all its predecessors in 𝒫v, we have dvI = Σu∈𝒫v duO. Additionally, ξv = ∅ for observation sub-tasks and ρv = ∅ for computation sub-tasks.
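The sub-task tuple and predecessor/successor sets map naturally onto a small data structure. The sketch below is a minimal illustration (field names and layout are our own, not the paper's); dvI is derived from the predecessors via dvI = Σu∈𝒫v duO instead of being stored.

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class SubTask:
    """One sub-task v with parameter tuple <xi_v, rho_v, d_v^I, d_v^O>.
    xi (workload) is None for observation sub-tasks; rho (imaging time)
    is None for computation sub-tasks. d_v^I is derived, not stored."""
    vid: int
    xi: Optional[float]
    rho: Optional[float]
    d_out: float                                    # output data volume d_v^O
    preds: Set[int] = field(default_factory=set)    # predecessors P_v
    succs: Set[int] = field(default_factory=set)    # successors S_v

def input_data(v: SubTask, tasks: dict) -> float:
    """d_v^I = sum of d_u^O over all predecessors u in P_v."""
    return sum(tasks[u].d_out for u in v.preds)

# Toy chain: observation sub-task 1 feeds computation sub-task 2.
tasks = {
    1: SubTask(1, xi=None, rho=5.0, d_out=200.0, succs={2}),
    2: SubTask(2, xi=3.0, rho=None, d_out=8.0, preds={1}),
}
```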

3.3. Basic Constraints

We first introduce the observation constraints of ICSC satellites. Based on satellite movement and camera parameters [17], observation sub-tasks can only be performed by satellites within specific time windows. Let TWvs = {twvs1, twvs2, …, twvs|TWvs|} represent the set of visible time windows for observation sub-task v ∈ V and satellite s ∈ S. Each entry twvsg = [twbvsg, twevsg] denotes the visible time window for v on s at the g-th opportunity, where twbvsg and twevsg are the begin and end times of twvsg, respectively.
For an observation sub-task v with imaging time ρv that is to be observed by satellite s at its g-th opportunity, the start observation time 𝒯vE must satisfy the following conditions:
$\mathcal{T}_v^E \geq twb_{vsg}$ (1)
$\mathcal{T}_v^E + \rho_v \leq twe_{vsg}$ (2)
According to the characteristics of Earth observation satellites [1], an ICSC satellite needs to adjust its attitude to align with the target before performing observation sub-task v. Thus, if there exists a preceding observation sub-task u ∈ V assigned to the same satellite s, an angle transition time is required. Assuming that sub-task u is performed by s at its h-th opportunity, according to the time-dependent angle transition model [31], the required transition time is given as follows:
$T_{uv,s}^{angle} = trans(\Psi_{us}^{h}, \Psi_{vs}^{g})$ (3)
where Ψush and Ψvsg are the look angles of satellite s for observing sub-tasks u and v, and trans(•) denotes the transition time function in [31]. Herein, letting 𝒯uE be the executing time of sub-task u, the following satellite maneuvering constraint needs to be satisfied:
$\mathcal{T}_u^E + \rho_u + T_{uv,s}^{angle} \leq \mathcal{T}_v^E$ (4)
After observing the target, the collected data need to be transmitted via ISLs to the satellites assigned the successor computation sub-tasks. The communication latency consists of propagation latency and transmission latency [32]. Let RISL be the rate of the ISLs, treated as a constant in this paper. Then, the communication latency for transmitting the data dvO of sub-task v from satellite s to k can be expressed as follows:
$T_{v,sk}^{comm} = \frac{l_{s,k}}{c} + \frac{d_v^O}{R_{ISL}}$ (5)
where ls,k is the length of the shortest route between satellites s and k, which can be obtained from the network topology matrix G, and c is the speed of light.
After receiving the required data, the computation sub-task v ∈ V with input data dvI and workload ξv can be performed. The computational latency of v on the assigned satellite s is then as follows:
$T_{v,s}^{comp} = \frac{d_v^I \xi_v}{C_s}$ (6)
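Equations (5) and (6) translate directly into two helper functions; the numeric values in the usage below are illustrative only and carry no physical calibration.

```python
C_LIGHT = 3.0e8  # speed of light c, in m/s

def comm_latency(l_sk, d_out, r_isl):
    """Eq. (5): propagation latency l_{s,k}/c plus transmission latency
    d_v^O / R_ISL for sending sub-task v's output from satellite s to k."""
    return l_sk / C_LIGHT + d_out / r_isl

def comp_latency(d_in, xi, c_s):
    """Eq. (6): computational latency d_v^I * xi_v / C_s of a computation
    sub-task with input d_v^I and workload xi_v on capacity C_s."""
    return d_in * xi / c_s
```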

3.4. Latency Model

Considering the restricted power supply of satellites [15], the energy-intensive observation and computation processes cannot be performed simultaneously on a single satellite. Consequently, when multiple sub-tasks are assigned to one satellite, we must account for the queueing delay before each sub-task can be executed.
We then let θ = {θ1, θ2, …, θn} be an offloading assignment for the ICSC satellites, where θs represents the sub-task sequence on satellite s ∈ S. Thus, for a specific sub-task v ∈ V, several time parameters are defined as follows:
(1) 𝒯vR: The time when all results of the predecessors 𝒫v are received by the assigned satellite of v.
(2) 𝒯vC: The time when the assigned satellite has completed the sub-task scheduled immediately before v.
(3) 𝒯vA: The time when the assigned satellite has turned to the required angle, if v is an observation sub-task.
(4) 𝒯vE: The time when sub-task v starts execution.
(5) 𝒯vF: The finish time of sub-task v.
First, let u ∈ 𝒫v be a predecessor of v according to the DAG. After u is completed, the result of u is transmitted to the assigned satellite of v via ISLs. Sub-task v can be performed only after receiving all results of 𝒫v, so we have the following:
$\mathcal{T}_v^R = \max_{u \in \mathcal{P}_v}\{\mathcal{T}_u^F + T_{u,ks}^{comm}\}$ (7)
Then, we let ω(v) ∈ V be the sub-task preceding v in the sub-task sequence θs. If v is the first sub-task in θs, we set ω(v) = ∅; otherwise, v cannot be performed until sub-task ω(v) is completed.
$\mathcal{T}_v^C = \begin{cases} 0, & \text{if } \omega(v) = \varnothing \\ \mathcal{T}_{\omega(v)}^F, & \text{otherwise} \end{cases}$ (8)
Furthermore, if v is an observation sub-task, v can be performed only when the required look angle is reached. According to (4), if there is a previous observation sub-task δ(v) in θs, then the look-angle ready time 𝒯vA is as follows:
$\mathcal{T}_v^A = \begin{cases} 0, & \text{if } \delta(v) = \varnothing \\ \mathcal{T}_{\delta(v)}^E + \rho_{\delta(v)} + T_{\delta(v)v,s}^{angle}, & \text{otherwise} \end{cases}$ (9)
Obviously, 𝒯vE ≥ 𝒯vR, 𝒯vE ≥ 𝒯vC, and 𝒯vE ≥ 𝒯vA must hold, and thus 𝒯vE can be calculated as follows:
$\mathcal{T}_v^E = \max\{\mathcal{T}_v^R, \mathcal{T}_v^C, \mathcal{T}_v^A\}$ (10)
Especially for an observation sub-task v ∈ V, constraints (1) and (2) must also be satisfied, so 𝒯vE = max{𝒯vR, 𝒯vC, 𝒯vA, twbvsg}, where 𝒯vE + ρv ≤ twevsg holds for [twbvsg, twevsg] ∈ TWvs.
Finally, the completion time of sub-task v is as follows:
$\mathcal{T}_v^F = \begin{cases} \mathcal{T}_v^E + \rho_v, & \text{if } v \text{ is an observation sub-task} \\ \mathcal{T}_v^E + T_{v,s}^{comp}, & \text{otherwise} \end{cases}$ (11)
Based on the above analysis, given an assignment solution θ, we can calculate 𝒯vR, 𝒯vC, 𝒯vA, 𝒯vE, and 𝒯vF for each v ∈ V sequentially using (7), (8), (9), (10), and (11), respectively.
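Under a fixed assignment, Equations (7)–(11) reduce to a single forward pass over the sub-tasks in an executable order. The sketch below deliberately simplifies the model: visible time windows are omitted and the angle-ready offset is assumed precomputed per sub-task, so it illustrates the recursion rather than reproducing the full constraint set; all input names are our own.

```python
def finish_times(order, assign, preds, rho, t_comp, t_comm, t_angle):
    """One forward pass of Eqs. (7)-(11) under a fixed assignment.
    order  -- sub-tasks in an executable (topological) order
    assign -- sub-task -> satellite; preds -- sub-task -> set P_v
    rho    -- imaging time (None for computation sub-tasks)
    t_comp -- computational latency per computation sub-task (Eq. 6)
    t_comm -- (u, v) -> ISL latency for delivering u's result (Eq. 5)
    t_angle -- precomputed angle-ready time per observation sub-task"""
    TF = {}           # finish times T_v^F
    last_on = {}      # last sub-task queued on each satellite
    for v in order:
        s = assign[v]
        # Eq. (7): ready time, all predecessor results received
        t_r = max((TF[u] + t_comm.get((u, v), 0.0) for u in preds[v]),
                  default=0.0)
        # Eq. (8): previous sub-task on satellite s completed
        t_c = TF[last_on[s]] if s in last_on else 0.0
        # Eq. (9): angle-ready time (precomputed in this sketch)
        t_a = t_angle.get(v, 0.0)
        # Eq. (10): execution start
        t_e = max(t_r, t_c, t_a)
        # Eq. (11): finish time
        TF[v] = t_e + (rho[v] if rho.get(v) is not None else t_comp[v])
        last_on[s] = v
    return TF
```

The service latency F1 of the assignment is then simply the maximum value in the returned dictionary.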

3.5. Problem Formulation

Two decision variables are employed to represent the sub-task assignment solution θ. Let xsv be the decision variable with xsv = 1 if sub-task v is assigned to satellite s, and xsv = 0 otherwise. For the other variable, ysvu = 1 if satellite s needs to perform sub-task v before sub-task u, and ysvu = 0 otherwise.
In this paper, service latency and energy consumption are considered performance indices for evaluating offloading assignment θ. In particular, the service latency is the maximum completion time over all sub-tasks v ∈ V, as follows:
$F_1 = \max_{v \in V} \mathcal{T}_v^F$ (12)
The energy consumption of ICSC satellites consists of the following four components: transmission, computation, angle transition, and observation. Let ηx, ηa, and ηo be the power of transmission, angle transition, and observation, respectively. For computation, we use a widely adopted model in which the energy consumption per computing cycle is κCs2 [33]. Therefore, the total energy consumption is given as follows:
$F_2 = \sum_{v \in V}\sum_{u \in \mathcal{P}_v}\sum_{s \in S}\sum_{k \in S} \eta_x T_{u,sk}^{comm} x_{su} x_{kv} + \sum_{v \in V}\sum_{s \in S} \kappa C_s^3 T_{v,s}^{comp} x_{sv} + \sum_{v \in V}\sum_{s \in S} \eta_a T_{\delta(v)v,s}^{angle} x_{sv} + \sum_{v \in V}\sum_{s \in S} \eta_o \rho_v x_{sv}$ (13)
Then, we formulate the task offloading problem in ICSC satellites to minimize service latency and energy consumption:
$F(\theta) = \min_{x_{sv}, y_{svu}} \{F_1 \cdot F_2\}$ (14)
s.t.
$\sum_{s \in S} x_{sv} = 1, \quad \forall v \in V$ (15)
$\sum_{u \in V} y_{svu} \leq x_{sv} \leq 1, \quad \forall s \in S, v \in V$ (16)
$\sum_{v \in V} y_{svu} \leq x_{su} \leq 1, \quad \forall s \in S, u \in V$ (17)
$y_{svu}(\mathcal{T}_u^E - \mathcal{T}_v^F) \geq 0, \quad \forall s \in S, u, v \in V$ (18)
$\mathcal{T}_v^E - \max_{u \in \mathcal{P}_v}\{\mathcal{T}_u^F\} \geq 0, \quad \forall v \in V$ (19)
where xsv and ysvu are binary decision variables. The objective function in (14) aims to minimize both service latency and energy consumption. The fitness of an offloading assignment could alternatively be represented as the sum of F1 and F2, i.e., F(θ) = minxsv,ysvu{F1 + F2}. However, this formulation is sensitive to the differing scales of F1 and F2. If one cost is significantly larger than the other, minimizing the total cost becomes difficult. For example, if F2 takes much larger values than F1, even a slight increase in F2 can overshadow any improvement made in optimizing F1. In particular, when unexecutable sub-tasks exist in θ, we set F(θ) = ∞ due to system stagnation. Equation (15) ensures that each sub-task is offloaded to exactly one ICSC satellite. Equations (16) and (17) state that each sub-task v ∈ θs can have at most one preceding and one following sub-task in its sequence θs. Equation (18) describes the temporal relationships between sub-tasks assigned to the same satellite. Equation (19) outlines the temporal relations among sub-tasks with data dependencies.

4. The Distributed Deadlock-Free Task Offloading Algorithm

Inspired by PITO [29], this article develops the DDFTO algorithm to address the task offloading problem in ICSC satellites with data-dependent constraints. DDFTO operates on each satellite in parallel, relying on local communication via ISLs. Two newly designed stages, sub-task inclusion and consensus and sub-task removal, perform alternately, ensuring that DDFTO generates deadlock-free offloading assignments.

4.1. Basic Concept

In DDFTO, each satellite s ∈ S iteratively modifies its sub-task sequence θs by adding or removing sub-tasks, aiming to minimize the global objective F(θ) using information from neighboring satellites via ISLs.
To achieve this, we first introduce two indicators, the removal impact and the inclusion impact, defined for each sub-task v ∈ V with respect to satellite s ∈ S.
(1) Removal impact: for sub-task v ∈ θs, the removal impact ℛ(θs ⊖ v) indicates the variation of F(θ) after removing v from θs:
$\mathcal{R}(\theta_s \ominus v) = F(\theta) - F(\theta_s \ominus v)$ (20)
where θs ⊖ v represents the removal of sub-task v from θs ∈ θ. For any v ∉ θs, we let ℛ(θs ⊖ v) = ∞.
(2) Inclusion impact: for sub-task v ∉ θs, the inclusion impact ℐ(θs ⊕ v) represents the minimum variation of F(θ) after inserting v into θs:
$\mathcal{I}(\theta_s \oplus v) = \min_{p \in \{1, 2, \ldots, |\theta_s|+1\}} \{F(\theta_{s,p} \oplus v) - F(\theta)\}$ (21)
where θs,p ⊕ v indicates the inclusion of sub-task v at the p-th position of θs. Similarly, we set ℐ(θs ⊕ v) = ∞ when sub-task v already exists in θs.
Notably, these indicators differ from those in [29] because of the data dependencies among sub-tasks. Inserting or removing any sub-task v ∈ V from θs impacts the execution of all subsequent sub-tasks across all satellites, thereby altering F(θ).
Using these two indicators, given an offloading assignment θ = {θ1, θ2, …, θn} and two satellites s, k ∈ S connected by an ISL (i.e., G[s, k] ≠ ∞), transferring a sub-task v ∈ V from satellite s to k will decrease the global fitness F(θ) as long as Condition (22) is satisfied:
$\mathcal{R}(\theta_s \ominus v) > \mathcal{I}(\theta_k \oplus v)$ (22)
Let θ, θ′, and θ″ represent the task assignment when sub-task v is in θs, removed, and in θk, respectively. According to Equations (20) and (21), we have ℛ(θs ⊖ v) = F(θ) − F(θ′) and ℐ(θk ⊕ v) = F(θ″) − F(θ′). Thus, the new assignment θ″ attains a lower fitness, i.e., F(θ″) < F(θ), only if ℐ(θk ⊕ v) < ℛ(θs ⊖ v).
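Equations (20)–(22) can be prototyped against any fitness evaluator F over assignments. The sketch below uses an arbitrary toy fitness (sum of squared queue lengths, our own choice, not the paper's F) purely to exercise the transfer condition; an assignment is a dictionary mapping each satellite to its sub-task sequence.

```python
import math

def removal_impact(F, theta, s, v):
    """Eq. (20): R(theta_s - v) = F(theta) - F(theta with v removed from
    satellite s's sequence); infinity if v is not on s."""
    if v not in theta[s]:
        return math.inf
    reduced = {k: [u for u in seq if not (k == s and u == v)]
               for k, seq in theta.items()}
    return F(theta) - F(reduced)

def inclusion_impact(F, theta, s, v):
    """Eq. (21): minimum fitness increase over all |theta_s|+1 insertion
    positions; infinity if v already resides on s."""
    if v in theta[s]:
        return math.inf
    best = math.inf
    for p in range(len(theta[s]) + 1):
        cand = {k: seq[:] for k, seq in theta.items()}
        cand[s].insert(p, v)
        best = min(best, F(cand) - F(theta))
    return best
```

Condition (22) then reads `removal_impact(F, theta, s, v) > inclusion_impact(F, theta, k, v)`: when it holds, moving v from s to k lowers the global fitness.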
Since distributed algorithms often become stuck in local optima when making greedy choices based on local communications [29], DDFTO employs four vectors, Rs, As, Es, and Ps, which each satellite s ∈ S exchanges with its neighbors via ISLs.
(1) Rs = [Rs1, Rs2, …, Rs|V|]T records the latest removal impacts for sub-tasks in V. Initially, Rsv = ℛ(θs ⊖ v) for ∀ v ∈ θs, and Rsv = ∞ otherwise.
(2) As = [As1, As2, …, As|V|]T records the assigned satellite of each sub-task in V as believed by s. Initially, Asv = s for ∀ v ∈ θs, and Asv = ∅ otherwise.
(3) Es = [Es1, Es2, …, Es|V|]T tracks the times when sub-tasks are executed by their assigned satellites as believed by s. Initially, Esv = 𝒯vE for ∀ v ∈ θs, and Esv = ∅ otherwise.
(4) Ps = [Ps1, Ps2, …, Psn]T is a vector whose entry Psk is the timestamp at which satellite s believes it received the latest information from satellite k. Initially, Psk = 0 for ∀ k ∈ S. During communication, Psk ∈ Ps is updated by the following rule:
$P_{sk} = \begin{cases} \mathcal{T}_{sk}, & \text{if } G[s,k] \neq \infty \\ \max_{l \in S, G[s,l] \neq \infty} P_{lk}, & \text{otherwise} \end{cases}$ (23)
where 𝒯sk is the time when s received information from k via ISL.
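Rule (23) can be sketched as a per-round update of Ps: direct neighbors receive the actual reception time, while every other satellite's timestamp is taken as the freshest value relayed by the neighbors. The three-satellite line topology and the times below are illustrative only.

```python
import math

def update_timestamps(P_s, neighbor_P, recv_time, G, s):
    """Rule (23) sketch: refresh satellite s's timestamp vector P_s.
    neighbor_P maps each direct neighbor l to its own vector P_l;
    recv_time maps each direct neighbor k to the reception time T_sk."""
    n = len(P_s)
    neighbors = [l for l in range(n) if l != s and G[s][l] != math.inf]
    for k in range(n):
        if k == s:
            continue
        if G[s][k] != math.inf:
            # direct ISL: use the actual reception time T_sk
            P_s[k] = recv_time[k]
        else:
            # no direct ISL: take the freshest relayed timestamp
            P_s[k] = max(neighbor_P[l][k] for l in neighbors)
    return P_s
```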
Based on these vectors, we begin introducing DDFTO with its first stage, sub-task inclusion.

4.2. Sub-Task Inclusion

During the sub-task inclusion stage, each satellite sS independently adds sub-tasks to its sequence θs based on its local information.
Due to data dependencies among sub-tasks, any change in θs will impact the execution of sub-tasks on other satellites, thereby affecting F(θ). Additionally, since only local communication is adopted, satellites cannot access the latest global assignment θ. Therefore, we first employ Algorithm 1 to construct a local assignment θ̄, leveraging the local vectors As and Es of satellite s ∈ S.
Algorithm 1: Local Assignment Construction
Input: Sub-task sequence θs, vectors As = [As1, As2, …, As|V|]T and Es = [Es1, Es2, …, Es|V|]T.
Output: Local assignment θ̄.
1: Initialize θ̄ = {θ̄1, θ̄2, …, θ̄n} where θ̄k = ∅ for ∀ k ∈ S;
2: Let θ̄s = θs;
3: Let Πs be a sequence that sorts the sub-tasks v ∈ V∖θs in ascending order of Esv;
4: for each sub-task v in Πs
5:   Let θ̄k = θ̄k ⊕ v when Asv = k;
6: end
7: Output local assignment θ̄.
In Algorithm 1, after initializing θ̄ in Line 1, we begin by setting θ̄s = θs in Line 2, since satellite s retains its own latest sequence θs. Subsequently, all sub-tasks v ∈ V∖θs are sorted in ascending order of Esv in Line 3. Sub-tasks v with Esv = ∅ are excluded from this sorting, as they either have not been assigned or cannot be executed. Lines 4–6 handle the placement of each sub-task v ∈ Πs according to Asv. Once all sub-tasks are placed, the local assignment θ̄ is finalized.
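A direct transcription of Algorithm 1 might look as follows; the dictionary layout, with None standing in for ∅, is our own choice, and sub-tasks the satellite believes it holds itself are already covered by its own sequence.

```python
def local_assignment(theta_s, s, A, E, n):
    """Algorithm 1 sketch: build satellite s's local view of the global
    assignment from its own sequence theta_s, belief vector A (sub-task ->
    believed satellite, None if unassigned), and belief vector E (sub-task
    -> believed execution time, absent/None if unknown)."""
    theta = {k: [] for k in range(n)}          # Line 1
    theta[s] = list(theta_s)                   # Line 2: s trusts itself
    # Line 3: remaining sub-tasks, ascending by believed execution time;
    # sub-tasks without an execution time are excluded from the sort.
    others = sorted((v for v in A
                     if A[v] is not None and A[v] != s
                     and E.get(v) is not None),
                    key=lambda v: E[v])
    for v in others:                           # Lines 4-6
        theta[A[v]].append(v)
    return theta
```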
Using the local assignment θ̄, the inclusion impact vector Is = [Is1, Is2, …, Is|V|]T is established using Equation (21), where Isv = ℐ(θs ⊕ v) for ∀ v ∈ V. Subsequently, each element of Is is compared with the vector Rs = [Rs1, Rs2, …, Rs|V|]T. A sub-task v ∈ V will be inserted into θs if Condition (24) is satisfied:
$\max_{v \in V} \{R_{sv} - I_{sv}\} > 0$ (24)
According to (22), when Condition (24) holds, there exists a sub-task v′ = argmaxv∈V{Rsv − Isv} whose insertion into θs can decrease the objective value F(θ). Sub-task v′ is then inserted into the p′-th position of θs, where p′ = argminp∈{1,2,…,|θs|+1}{F(θs,p ⊕ v′) − F(θ)}. Next, a new θ̄ is obtained, and the vectors Rs, As, and Es are updated as follows: Rsv′ = Isv′, Asv′ = s, and the elements of Es are refreshed using θ̄. Subsequently, the vector Is is recalculated, and Condition (24) is checked again. This inclusion process repeats until no sub-tasks remain in V∖θs or Condition (24) is no longer satisfied. Finally, a new Rs is derived based on the updated θs and θ̄.
However, due to data dependencies among sub-tasks, inappropriately adding sub-tasks to satellites can result in an undesired phenomenon called deadlock. In a deadlock, multiple satellites engaged in sub-tasks become stuck in a circular waiting pattern, resulting in the entire system being suspended. Example 1 illustrates an instance of deadlock.
Example 1. 
Consider the monitoring tasks T = {t1, t2} in Figure 3, which are processed by two ICSC satellites S = {s1, s2} with assignment θ = {θ1, θ2}. Here, θ1 = {1, 3, 6, 8} and θ2 = {5, 7, 2, 4}. For clarity, the sub-task sequences of satellites s1 and s2 are indicated by blue and red arcs, respectively, as shown in Figure 3. According to Figure 3, Sub-task 7 is performed before Sub-task 2, Sub-task 6 is the predecessor of Sub-task 7, and Sub-task 3 is performed before Sub-task 6. Consequently, Sub-task 3 must be performed prior to Sub-task 2, which contradicts the data dependence constraint that Sub-task 2 is the predecessor of Sub-task 3. Thus, a deadlock occurs in θ, with vertices {2, 3, 6, 7} forming a cycle.
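The circular wait in Example 1 can be checked mechanically: merge the DAG's data-dependency edges with the execution-order edges implied by each sequence θs, and test the merged graph for a cycle. The sketch below uses a DFS cycle test; the two four-sub-task chains encode our reading of Figure 3's task structure.

```python
def has_deadlock(dag_edges, theta):
    """Deadlock test: combine data-dependency edges (u, v) with the
    execution-order edges implied by each satellite's sub-task sequence,
    then detect a cycle (circular waiting) in the merged graph via DFS."""
    adj = {}
    def add(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set())
    for u, v in dag_edges:
        add(u, v)
    for seq in theta.values():
        for u, v in zip(seq, seq[1:]):   # consecutive sub-tasks on one satellite
            add(u, v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in adj}
    def dfs(u):
        color[u] = GRAY
        for w in adj[u]:
            if color[w] == GRAY or (color[w] == WHITE and dfs(w)):
                return True              # back edge: cycle found
        color[u] = BLACK
        return False
    return any(color[u] == WHITE and dfs(u) for u in adj)
```

With the Example 1 assignment the merged graph contains the cycle 2 → 3 → 6 → 7 → 2, so the check reports a deadlock; swapping Sub-tasks 7 and 2 on s2 removes it.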
To resolve these deadlocks, a deadlock-free insertion mechanism is proposed in Section 4.3.

4.3. Deadlock-Free Insertion Mechanism

In this section, we introduce a deadlock-free insertion mechanism (DFIM) that ensures data dependencies between any consecutive sub-tasks by restricting their candidate insertion positions. The DFIM procedure is as follows:
First, using the vector Es = [Es1, Es2, …, Es|V|]T, a local executing sequence Γs is created by sorting all assigned sub-tasks v ∈ V in ascending order of Esv. As established in Section 3.2, a sub-task v can be processed only after all its predecessors 𝒫v have been completed, and its successors 𝒮v can begin execution only once v is finished. Thus, the following conditions, (25) and (26), must be satisfied for each sub-task v ∈ V:
$\max_{u \in \mathcal{P}_v} \mathcal{T}_u^E \leq \mathcal{T}_v^E$ (25)
$\mathcal{T}_v^E \leq \min_{u \in \mathcal{S}_v} \mathcal{T}_u^E$ (26)
In other words, within sequence Γs, the index of the inserted sub-task v must be greater than all predecessors 𝒫v and less than all successors 𝒮v simultaneously. Let (αvL, αvR) be the constraint index interval for v based on these data dependencies, where αvL and αvR are the maximum and minimum indices of sub-tasks in 𝒫v and 𝒮v, respectively. If 𝒫v ∩ Γs = ∅ (or 𝒮v ∩ Γs = ∅), we have αvL = −∞ (or αvR = ∞).
To ensure deadlock-free assignments, the index of each sub-task v ∈ θs within Γs is determined. For a potential inclusion position p ∈ {1, 2, …, |θs|+1}, let (βpL, βpR) denote the position index interval after inserting a sub-task into the p-th position of θs. Here, βpL and βpR represent the indices of the preceding sub-task θs[p − 1] and the following sub-task θs[p], respectively. Specifically, when p = 1 (or p = |θs| + 1), we have βpL = −∞ (or βpR = ∞).
To ensure assignment θ is deadlock-free, Proposition 1 must be satisfied.
Proposition 1. 
An assignment θ with corresponding executing sequence Γ is deadlock-free if every sub-task v ∈ V is executed between sub-tasks γL(v) and γR(v), which have the maximum and minimum indices in Γ among the predecessors 𝒫v and successors 𝒮v, respectively.
Proof. 
Since Γ is determined by the executing times of the sub-tasks, γL(v) and γR(v) are the latest executed sub-task in 𝒫v and the earliest in 𝒮v, respectively. Thus, 𝒯γL(v)E = maxu∈𝒫v 𝒯uE and 𝒯γR(v)E = minu∈𝒮v 𝒯uE. If v is executed between γL(v) and γR(v), then 𝒯vE ≥ 𝒯γL(v)E = maxu∈𝒫v 𝒯uE and 𝒯vE ≤ 𝒯γR(v)E = minu∈𝒮v 𝒯uE, satisfying Conditions (25) and (26) and fulfilling the data dependence constraints involving v. Therefore, if every sub-task v ∈ V is executed between sub-tasks γL(v) and γR(v), all data dependence constraints are satisfied, ensuring that assignment θ is deadlock-free. □
Given a sub-task v ∈ V∖θs and an inclusion position p ∈ {1, 2, …, |θs| + 1}, the constraint index interval (αvL, αvR) and the position index interval (βpL, βpR) are recalled. If (αvL, αvR) ∩ (βpL, βpR) ≠ ∅, there is at least one valid index for inserting v into the p-th position of θs, ensuring that v is executed between γL(v) and γR(v). According to Proposition 1, this guarantees that the data dependence constraints involving v are satisfied. By verifying each inserted sub-task, all constraints are met, ensuring a deadlock-free assignment θ. Hence, avoiding deadlock in θ equates to determining the intersection between (αvL, αvR) and (βpL, βpR) for each sub-task v and position p.
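The intersection test for two open intervals reduces to the standard predicate αvL < βpR and βpL < αvR. A minimal sketch, with illustrative interval values:

```python
import math

def valid_positions(alpha, betas):
    """Keep insertion position p iff the open intervals (alpha_L, alpha_R)
    and (beta_L^p, beta_R^p) intersect, i.e. alpha_L < beta_R^p and
    beta_L^p < alpha_R. `betas` lists one interval per position
    p = 1, ..., |theta_s| + 1."""
    a_L, a_R = alpha
    return [p for p, (b_L, b_R) in enumerate(betas, start=1)
            if a_L < b_R and b_L < a_R]
```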
Based on the above analysis, we developed the DFIM (Algorithm 2) to resolve the deadlock problem in assignment θ.
Algorithm 2: Deadlock-Free Insertion Mechanism (DFIM)
Input: Sub-task sequence θs, vector Es, candidate sub-task v.
Output: Candidate insertion positions Φs,v.
1: Initialize Φs,v = ∅;
2: Obtain local executing sequence Γs based on Es;
3: Obtain predecessors 𝒫v and successors 𝒮v of v;
4: Obtain constraint index interval (αvL, αvR) using 𝒫v, 𝒮v, and Γs;
5: for each position p ∈ {1, 2, …, |θs| + 1}
6:   Determine position index interval (βpL, βpR) using θs and Γs;
7:   if αvL < βpR and βpL < αvR // i.e., (αvL, αvR) ∩ (βpL, βpR) ≠ ∅
8:     Φs,v = Φs,v ∪ {p};
9:   end
10: end
11: Output candidate insertion positions Φs,v.
In DFIM, we first generate local executing sequence Γs using vector Es and then identify the predecessors 𝒫v and successors 𝒮v of sub-task v (Lines 1–3). Next, we calculate the constraint index interval (αvL, αvR) using 𝒫v, 𝒮v, and Γs in Line 4. After that, each candidate insertion position p ∈ {1, 2, …, |θs| + 1} is checked (Lines 5–10). For position p, we determine the position index interval (βpL, βpR) in Line 6. If the condition αvL < βpR and βpL < αvR (Line 7) is met, indicating an intersection between (αvL, αvR) and (βpL, βpR), we add position p into Φs,v. The loop continues until all positions are checked. Finally, we obtain the candidate insertion positions Φs,v. Example 2 illustrates the process of applying DFIM.
Example 2. 
Figure 4 illustrates the application of DFIM to find candidate insertion positions for Sub-task 7 in satellite s2, based on the scenario in Figure 3. Here, the task offloading assignment is θ = {θ1, θ2}, where θ1 = {1, 3, 6, 8} and θ2 = {5, 2, 4}. First, Figure 4a shows the local executing sequence Γ2 = {5, 1, 2, 3, 6, 4, 8} for s2. The data-dependent constraints for Sub-task 7 are 𝒫7 = {6} and 𝒮7 = {8}, resulting in the constraint index interval (α7L, α7R) = (5, 7), shown in Figure 4b. Next, for |θ2| = 3, we evaluate each possible insertion position p ∈ {1, 2, …, 4}. When position p = 1 (i.e., inserting Sub-task 7 before Sub-task 5), the position index interval (β1L, β1R) = (−∞, 1). Since (5, 7) ∩ (−∞, 1) = ∅, position p = 1 is rejected (Figure 4c). When position p = 2 (i.e., inserting Sub-task 7 between Sub-tasks 5 and 2), the interval (β2L, β2R) = (1, 3). Since (5, 7) ∩ (1, 3) = ∅, position p = 2 is also rejected (Figure 4d). When position p = 3 (i.e., inserting Sub-task 7 between Sub-tasks 2 and 4), the interval (β3L, β3R) = (3, 6). Since (5, 7) ∩ (3, 6) = (5, 6), this position is valid, meaning Sub-task 7 can be placed between Sub-tasks 6 and 4. Thus, position p = 3 is added to Φ2,7 (Figure 4e). When position p = 4 (i.e., inserting Sub-task 7 after Sub-task 4), the interval (β4L, β4R) = (6, +∞). Since (5, 7) ∩ (6, +∞) = (6, 7), this position is also valid, allowing Sub-task 7 to be placed between Sub-tasks 4 and 8. Thus, position p = 4 is added to Φ2,7 (Figure 4f). Thus, the candidate insertion positions for inserting Sub-task 7 in satellite s2 are Φ2,7 = {3, 4}.
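The walkthrough above can be reproduced mechanically. The sketch below is our plain-Python reading of Algorithm 2, assuming 1-based indices in Γs and infinite sentinels at the sequence boundaries; function and variable names are illustrative, not from the paper:

```python
import math

def dfim(theta_s, gamma_s, preds, succs):
    """Deadlock-Free Insertion Mechanism (Algorithm 2), a sketch.

    theta_s      : sub-tasks currently assigned to satellite s, in order
    gamma_s      : local executing sequence (all sub-tasks, execution order)
    preds, succs : predecessor / successor sets of the candidate sub-task
    Returns the set of candidate insertion positions (1-based)."""
    idx = {v: i + 1 for i, v in enumerate(gamma_s)}  # 1-based index in Gamma_s

    # Constraint index interval (alpha_L, alpha_R): index of the latest
    # predecessor and the earliest successor in the executing sequence.
    a_l = max((idx[u] for u in preds), default=-math.inf)
    a_r = min((idx[u] for u in succs), default=math.inf)

    candidates = set()
    for p in range(1, len(theta_s) + 2):             # positions 1 .. |theta_s|+1
        b_l = idx[theta_s[p - 2]] if p > 1 else -math.inf
        b_r = idx[theta_s[p - 1]] if p <= len(theta_s) else math.inf
        # Open intervals intersect iff alpha_L < beta_R and beta_L < alpha_R.
        if a_l < b_r and b_l < a_r:
            candidates.add(p)
    return candidates

# Example 2: satellite s2 with theta_2 = [5, 2, 4], Gamma_2 = [5,1,2,3,6,4,8],
# and candidate Sub-task 7 with predecessors {6} and successors {8}.
print(dfim([5, 2, 4], [5, 1, 2, 3, 6, 4, 8], {6}, {8}))
```

Running it on the Example 2 data returns {3, 4}, matching Φ2,7.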
Proposition 2. 
DFIM has polynomial time complexity and ensures deadlock-free assignments.
Proof. 
For sequence θs and a sub-task v ∉ θs, a total of (|θs| + 1) candidate positions are evaluated, meaning the check in Line 7 repeats (|θs| + 1) times. Since θs contains at most (|V| − 1) sub-tasks before inserting v, we have |θs| ≤ |V| − 1. Therefore, the complexity of DFIM is O(|V|), which is polynomial.
According to Proposition 1, if all sub-tasks v ∈ V are executed between γL(v) and γR(v), assignment θ is deadlock-free. For a sequence θs and a sub-task v ∉ θs, DFIM identifies the candidate insertion positions Φs,v. Inserting sub-task v into any position p ∈ Φs,v guarantees that v is executed between γL(v) and γR(v), satisfying the data dependence constraints. Therefore, inserting each sub-task only into positions identified by DFIM ensures deadlock-free assignments. □
By using DFIM to obtain deadlock-free insertion positions for each sub-task, the entire sub-task inclusion stage is summarized as follows:
In Algorithm 3, we start by determining local assignment θ and initializing the inclusion impact vector Is for satellite s (Lines 1–2). Then, in Lines 3–6, we use DFIM to identify deadlock-free insertion positions for each sub-task v ∈ V \ θs within task sequence θs and record the minimum insertion impact value of each sub-task v in Isv, thus obtaining the inclusion impact vector Is. Next, in Lines 7–9, we use Condition (24) to identify the sub-task v′ and its insertion position p′ that provide the maximum reduction in F(θ) and perform the insertion. After updating vectors Rs, As, and Es and solution θ, we recalculate vector Is (Lines 2–6) and recheck Condition (24) for the remaining sub-tasks. The sub-task inclusion stage ends when there are no remaining sub-tasks or when Condition (24) is no longer met. This means that no further adjustment to the current task sequence can reduce F(θ), so the algorithm outputs sequence θs′.
Algorithm 3: Sub-task inclusion
Input: Sub-task set V, sequence θs, vectors Rs, As, and Es.
Output: New sequence θs′, new vectors Rs′, As′, and Es′.
1: Obtain local assignment θ by Algorithm 1;
2: Initialize inclusion impact vector Is = [Is1, Is2, …, Is|V|]T;
3: for each sub-task v ∈ V \ θs
4:   Obtain candidate insertion positions Φs,v using DFIM in Algorithm 2;
5:   Let Isv = min p∈Φs,v {F(θs,pv) − F(θ)}, where θs,pv denotes θs with v inserted at the p-th position;
6: end
7: while {V \ θs} ≠ ∅ and max v∈V\θs {Rsv − Isv} > 0
8:   Obtain sub-task v′ = argmax v∈V\θs {Rsv − Isv} and its corresponding position p′ in θs;
9:   Insert sub-task v′ into the p′-th position of θs;
10:  Update θ, Rs, and As by setting Rsv′ = Isv′ and Asv′ = s;
11:  Recalculate Es according to θ;
12:  Recalculate Is using Lines 2–6;
13: end
14: Recalculate Rs according to θ;
15: Let θs′ = θs, Rs′ = Rs, As′ = As, and Es′ = Es;
16: Output θs′, Rs′, As′, and Es′.
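The greedy loop of Algorithm 3 (Lines 7–13) can be sketched compactly. The helper `impact(v, theta)`, which returns the minimum insertion impact of a sub-task over its DFIM candidate positions together with the arg-min position (Lines 3–6), is a caller-supplied stand-in; both the helper and all names are our assumptions:

```python
def subtask_inclusion(theta_s, remaining, impact, consensus_impact):
    """Greedy sub-task inclusion (Algorithm 3, Lines 7-13, sketch).

    remaining        : sub-tasks not yet in theta_s (V \\ theta_s)
    impact           : callable (v, theta) -> (I_sv, p): minimum insertion
                       impact over DFIM candidate positions, and its
                       arg-min position (stand-in for Lines 3-6)
    consensus_impact : dict v -> consensus impact value R_sv
    At each step, inserts the sub-task giving the largest reduction,
    until Condition (24) (max_v {R_sv - I_sv} > 0) fails."""
    remaining = set(remaining)
    while remaining:
        best = {v: impact(v, theta_s) for v in remaining}
        v_best = max(best, key=lambda v: consensus_impact[v] - best[v][0])
        if consensus_impact[v_best] - best[v_best][0] <= 0:
            break                                   # Condition (24) fails
        theta_s.insert(best[v_best][1] - 1, v_best)  # positions are 1-based
        remaining.discard(v_best)
    return theta_s

# Toy usage with fixed impacts: sub-task 1 is worth inserting, 2 is not.
seq = subtask_inclusion([], {1, 2},
                        lambda v, th: (3, 1) if v == 1 else (20, 1),
                        {1: 10, 2: 10})
print(seq)  # [1]
```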

4.4. Consensus and Sub-Task Removal

The consensus and sub-task removal stage involves two key processes: consensus and sub-task removal. During the consensus process, each satellite s ∈ S uses local communications via ISLs to agree on common Rs, As, and Es. In the sub-task removal process, any conflicting sub-tasks (those assigned to multiple satellites) are removed from the satellite sequences.

4.4.1. Consensus

The consensus process involves the following steps. First, each satellite k ∈ S broadcasts its vectors Rk, Ak, Ek, and Pk to neighboring satellites s (those with G[k, s] ≠ ∞) via ISLs. Upon receiving this information, each satellite s ∈ S updates its vectors Rs, As, and Es using a consensus rule adapted from [34], ensuring uniform values across all satellites.
At time 𝒯ks, vectors Rk = [Rk1, Rk2, …, Rk|V|]T, Ak = [Ak1, Ak2, …, Ak|V|]T, Ek = [Ek1, Ek2, …, Ek|V|]T, and Pk = [Pk1, Pk2, …, Pkn]T are transmitted from sending satellite k to satellite s via ISL. Satellite s then updates its timestamp vector Ps based on time 𝒯ks and vector Pk using Equation (23). For each sub-task v ∈ V, the elements Rsv, Asv, and Esv of vectors Rs, As, and Es are updated according to Table 2, which specifies three possible actions for satellite s, with Maintain being the default, as follows:
(1)
Update: Set Rsv = Rkv, Asv = Akv, and Esv = Ekv.
(2)
Maintain: Set Rsv = Rsv, Asv = Asv, and Esv = Esv.
(3)
Reset: Set Rsv = ∞, Asv = ∅, and Esv = ∅.
The consensus process continues until all satellites s ∈ S agree on common values for Rs, As, and Es. This process is essential to prevent Rs and Es from converging to non-existent values, which could occur if these values were broadcast directly to all satellites [29].
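The three per-sub-task actions can be sketched as a small state update. The decision rule that selects the action (Table 2, based on the received vectors and timestamps) is left to the caller; the use of `None` for the empty value ∅ and all names are our assumptions:

```python
import math

def apply_consensus_action(action, v, local, received):
    """Apply one of the three Table 2 actions for sub-task v (sketch).

    local, received : dicts with keys "R", "A", "E", each mapping
                      sub-task -> value, for satellite s and sender k.
    The choice of action is made by the caller from the received vectors
    and timestamps (Table 2); None stands in for the empty value."""
    if action == "update":        # adopt the sender's information
        for key in ("R", "A", "E"):
            local[key][v] = received[key][v]
    elif action == "maintain":    # keep local information (the default)
        pass
    elif action == "reset":       # discard both: mark v as unassigned
        local["R"][v], local["A"][v], local["E"][v] = math.inf, None, None
    else:
        raise ValueError(f"unknown action: {action!r}")

# Satellite s adopts sender k's belief about sub-task 7.
s_state = {"R": {7: 4.0}, "A": {7: "s1"}, "E": {7: 12.0}}
k_state = {"R": {7: 2.5}, "A": {7: "s2"}, "E": {7: 9.0}}
apply_consensus_action("update", 7, s_state, k_state)
print(s_state["A"][7])  # s2
```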

4.4.2. Sub-Task Removal

After reaching a consensus on the common Rs, As, and Es values, each satellite performs a modified sub-task removal process independently. This process involves removing conflicting tasks from its own assignment θs.
For each satellite s ∈ S, we first create local assignment θ using the common vectors Rs, As, and Es with Algorithm 1. Based on θ, we identify the pending removal sub-tasks Γs = {v ∈ θs | Asv ≠ s}. These sub-tasks are present in θs but, according to the consensus As, should not be offloaded to s. We then compute the local removal impact vector R̂s = [R̂s1, R̂s2, …, R̂s|V|]T based on θ, where R̂sv = R(θs ⊖ v) for each v ∈ θs and R̂sv = 0 otherwise; here, R(θs ⊖ v) denotes the removal impact of deleting v from θs. A sub-task v ∈ Γs is removed from θs if it meets the criterion defined in Equation (27).
max v∈Γs {R̂sv − Rsv} > 0
When Criterion (27) is satisfied, there is a sub-task v ∈ Γs whose local removal impact value R̂sv is higher than the consensus value Rsv, indicating that keeping v in θs is worse than the consensus assignment. Therefore, we remove the sub-task v′ = argmax v∈Γs {R̂sv − Rsv} from both θs and Γs. After removing the sub-task, we update θ and R̂s and re-evaluate the criterion. This process continues until Γs = ∅ or no sub-task meets Criterion (27). The remaining sub-tasks in Γs are retained in θs; for each such v, we update Rsv = R(θs ⊖ v) and Asv = s and recalculate the vector Es based on θ. Algorithm 4 outlines the entire sub-task removal process.
Algorithm 4: Sub-task removal
Input: Sequence θs, vectors Rs, As, and Es.
Output: New sequence θs′, new vectors Rs′, As′, and Es′.
1: Obtain local assignment θ using Algorithm 1;
2: Identify pending removal sub-tasks Γs = {v ∈ θs | Asv ≠ s} by θ and As;
3: Calculate local removal impact vector R̂s = [R̂s1, R̂s2, …, R̂s|V|]T by θ;
4: while Γs ≠ ∅ and Criterion (27) is satisfied
5:   Obtain sub-task v′ = argmax v∈Γs {R̂sv − Rsv};
6:   Remove v′ from both θs and Γs;
7:   Update θ and R̂s;
8: end
9: for each remaining sub-task v ∈ Γs
10:   Set Rsv = R(θs ⊖ v) and Asv = s;
11: end
12: Update Es based on θ;
13: Let θs′ = θs, Rs′ = Rs, As′ = As, and Es′ = Es;
14: Output θs′, Rs′, As′, and Es′.
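The removal loop of Algorithm 4 (Lines 4–8) can be sketched as follows; `local_impact(v, theta)` stands in for recomputing the local removal impact of v, and all names are ours:

```python
def subtask_removal(theta_s, pending, local_impact, consensus_impact):
    """Remove conflicting sub-tasks from theta_s (Algorithm 4, sketch).

    pending          : sub-tasks assigned elsewhere by consensus (Gamma_s)
    local_impact     : callable (v, theta) -> local removal impact of v
    consensus_impact : dict v -> consensus impact value
    Repeatedly drops the pending sub-task whose local impact exceeds the
    consensus value by the largest margin (Criterion (27)); sub-tasks
    that never meet the criterion are retained."""
    pending = set(pending)
    while pending:
        margins = {v: local_impact(v, theta_s) - consensus_impact[v]
                   for v in pending}
        v_best = max(margins, key=margins.get)
        if margins[v_best] <= 0:         # Criterion (27) no longer met
            break
        theta_s.remove(v_best)
        pending.discard(v_best)
    return theta_s

# Toy usage: sub-task 2 violates Criterion (27) and is removed; 3 is kept.
impacts = {2: 8, 3: 4}
print(subtask_removal([1, 2, 3], [2, 3],
                      lambda v, th: impacts[v], {2: 5, 3: 10}))  # [1, 3]
```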

4.5. Framework of DDFTO

The proposed DDFTO algorithm combines the two abovementioned stages and operates as follows:
DDFTO runs in parallel on each satellite, alternating between the sub-task inclusion stage and the consensus and sub-task removal stage. During the sub-task inclusion stage, each satellite s ∈ S independently includes sub-tasks, with DFIM limiting the positions where sub-tasks can be inserted, thereby avoiding deadlocks. After that, the consensus and sub-task removal stage is performed. In the consensus process, common vectors Rs, As, and Es are established among all satellites. In the sub-task removal process, satellites eliminate conflicting sub-tasks from their sequences. The algorithm terminates when there are no further changes in θ across all satellites. A summary of the entire DDFTO process is provided in Algorithm 5 and Figure 5.
Algorithm 5: Distributed Deadlock-Free Task Offloading (DDFTO)
Input: Satellites S, monitoring tasks T, network topology matrix G.
Output: Task assignment θ.
1: Obtain sub-task set V according to T;
2: Initialize θ = {θs | s ∈ S} with θs = ∅ for all satellites s ∈ S;
3: Initialize Rs, As, Es, and Ps for each satellite s ∈ S;
4: Set converged to false;
5: while converged is false do
   // Sub-task inclusion stage
6:   Use Algorithm 3 to include sub-tasks for each satellite s ∈ S;
   // Consensus process
7:   Send vectors Rk, Ak, Ek, and Pk from each satellite k ∈ S to its neighbors s where G[k, s] ≠ ∞;
8:   Update Ps for each s ∈ S based on received information;
9:   Perform the consensus process (refer to Table 2) for each s ∈ S until common Rs, As, and Es are reached;
   // Sub-task removal process
10:  Use Algorithm 4 to remove sub-tasks for each satellite s ∈ S;
11:  Check convergence and update converged;
12: end
13: Output assignment θ.
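The outer structure of Algorithm 5 can be sketched as a fixed-point loop; the three stage functions are caller-supplied stand-ins for Algorithms 3 and 4 and the Table 2 consensus rule, and all names are our assumptions:

```python
def ddfto_loop(states, include, consensus, remove, max_iters=1000):
    """Alternate inclusion, consensus, and removal until theta stops changing.

    states    : {satellite: list of sub-tasks}, the local sequences theta_s
    include   : callable (s, seq) -> new seq  (stand-in for Algorithm 3)
    consensus : callable (states) -> None, updates states in place
    remove    : callable (s, seq) -> new seq  (stand-in for Algorithm 4)"""
    for _ in range(max_iters):
        before = {s: list(seq) for s, seq in states.items()}
        for s in states:                    # sub-task inclusion stage
            states[s] = include(s, states[s])
        consensus(states)                   # consensus process
        for s in states:                    # sub-task removal process
            states[s] = remove(s, states[s])
        if states == before:                # converged: theta unchanged
            return states
    raise RuntimeError("DDFTO did not converge within max_iters")
```

With trivial stage functions (include sub-task 1 once, no-op consensus and removal), the loop converges in two iterations.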

4.6. Convergence and Complexity Analysis

We begin by analyzing the convergence of DDFTO. Similar to the methods in [28,29], each satellite s ∈ S iteratively adjusts its task sequence θs by adding or removing sub-tasks to optimize F(θ). As stated in Equation (24), modifications to θs are made only if they reduce F(θ). To avoid deadlocks, DFIM is integrated into DDFTO to exclude inappropriate positions for sub-task insertion; inserting a sub-task into such a position would cause a deadlock, blocking its execution and resulting in F(θ) = ∞. Therefore, using DFIM does not hinder DDFTO's ability to achieve a better F(θ) value and allows it to optimize F(θ) effectively. Based on this analysis, DDFTO converges when no further changes to θ occur during iterations, indicating that no sub-task adjustment can further improve F(θ).
Next, we discuss the complexity of DDFTO. There are n ICSC satellites and m monitoring tasks comprising |V| sub-tasks. DDFTO alternates between the sub-task inclusion stage and the consensus and sub-task removal stage. During the sub-task inclusion stage, each of the n ICSC satellites independently includes up to |V| sub-tasks in its task sequence; to avoid deadlocks, DFIM restricts the candidate insertion positions for each sub-task on each satellite. This results in a sub-task inclusion stage complexity of O(n|V|2). In the consensus process, each of the n ICSC satellites updates its vectors for |V| sub-tasks based on information from up to four neighboring satellites, giving a complexity of O(4n|V|). In the sub-task removal process, each of the n ICSC satellites independently removes up to |V| conflicting sub-tasks from its task sequence, resulting in a complexity of O(n|V|). Therefore, the computational complexity of each iteration of DDFTO is O(n|V|2) + O(4n|V|) + O(n|V|) = O(n(|V|2 + 5|V|)). Assuming K is the number of iterations required for DDFTO to converge, the total computational complexity of DDFTO is O(nK(|V|2 + 5|V|)). Thus, DDFTO has polynomial computational complexity.

5. Computational Experiments

In this section, extensive computational experiments are conducted to evaluate the performance of the proposed DDFTO.

5.1. Experimental Setup

We set up an ICSC simulation environment in Python, similar to previous works [33,35,36]. This simulation uses six Walker Delta constellations A–F, as described in Table 3, each with different orbital altitudes and numbers of satellites, but all in sun-synchronous orbits to meet the demands of Earth observation. The simulation period spans from [1 July 2024 00:00:00.000 UTCG] to [7 July 2024 00:00:00.000 UTCG]. Satellite coordinates and time windows TWvs for each sub-task v ∈ V and satellite s ∈ S are obtained from the Satellite Tool Kit (STK). Figure 6 shows the process for determining these time windows.
The experiments are organized into three groups (small, medium, and large), creating a total of 3 × 12 = 36 combinations, with parameter sizes detailed in Table 4. Each combination includes ten test instances. In each instance, monitoring target positions are randomly generated, with average distances of about 100 km for high-density scenarios and 1000 km for low-density scenarios. We use the same parameters as in [33,37]; additional parameters used in the experiments are detailed in Table 5.
Then, we implement our DDFTO algorithm and compare it with three benchmark algorithms as follows:
(1)
DALEOS [21]: A distributed algorithm that uses heuristics to select a controller satellite for each monitoring task and a flooding mechanism to assign satellites for each sub-task. This competitor retains its original design but adopts DFIM to handle deadlocks.
(2)
Local Execution: Each monitoring task is performed by the nearest visible ICSC satellite without any inter-satellite coordination.
(3)
Random Offloading: Sub-tasks are sorted based on their data dependencies, and ICSC satellites are randomly selected for them.
In our experiments, we use the following three performance indicators:
  • Relative Percent Value (RPV): This measures the relative percentage value of F(θ) compared to other algorithms.
    RPV = (F(θ) − Fbest) / (Fworst − Fbest)
    Here, F(θ) is the fitness of an algorithm for a test instance, while Fworst and Fbest are the worst and best fitness values obtained using all algorithms for the same instance. A lower RPV indicates better performance, eliminating the influence of different test instances.
  • Service Latency (SL): This refers to the performance index F1, which is the maximum completion time for all tasks and is calculated using Formula (12).
  • Energy Consumption (EC): This refers to the performance index F2, which is the energy consumption for performing all sub-tasks and is calculated using Equation (13).
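The RPV normalization above can be computed directly; a minimal helper (names ours):

```python
def rpv(f_theta, f_best, f_worst):
    """Relative Percent Value of a fitness on one test instance:
    0 for the best algorithm on that instance, 1 for the worst;
    lower is better."""
    return (f_theta - f_best) / (f_worst - f_best)

print(rpv(15.0, 10.0, 20.0))  # 0.5: halfway between best and worst fitness
```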
To reduce randomness, each algorithm is run independently 10 times for every test instance. The average values of the performance indicators (aRPV, aSL, and aEC) are then used to evaluate each algorithm. All algorithms are coded in Python and run on a PC with an Intel Core i7-14700K CPU @ 3.40 GHz and 64 GB of RAM under the 64-bit Windows 11 operating system.

5.2. Deadlock Statistics

To evaluate the necessity and effectiveness of the proposed DFIM algorithm in avoiding deadlocks, we conducted two rounds of experiments.
First, we statistically evaluated the deadlock rate of randomly generated assignments to emphasize the importance of avoiding deadlocks. Using the instance types from Section 5.1, we generated 10 test instances for each combination. In each test instance, 1000 offloading assignments were randomly generated, resulting in a total of 10,000 assignments per instance type. We recorded the number of deadlock-free assignments and the deadlock rate for each combination in Table 6, Table 7 and Table 8, and the results indicate that deadlocks occur in all combinations. Notably, instances with a high m/n ratio, in which fewer ICSC satellites must perform more monitoring tasks, were more likely to violate dependence constraints, increasing the deadlock rate. Figure 7 further illustrates the variation in deadlock rates across different instance types. For instances with the same monitoring tasks, more satellites provided sub-tasks with more options, leading to fewer constraint violations and a lower deadlock rate. Conversely, instances with the same constellation but more monitoring tasks resulted in each satellite executing more sub-tasks, increasing constraint violations and deadlock rates. Specifically, for instances of types {A, 8, high}, {A, 8, low}, {E, 30, high}, and {E, 30, low}, the deadlock rate exceeds 90%, underscoring the necessity of the proposed DFIM.
Next, we reused these test instances and generated assignments using DFIM. The number of deadlock-free assignments and the deadlock rates are again recorded in Table 6. The results show that all obtained assignments are deadlock-free, demonstrating the effectiveness of the proposed DFIM.
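The deadlock counting above can be automated. Under the paper's model, sub-tasks on one satellite execute sequentially, so an assignment deadlocks when the union of the data-dependence edges and the per-satellite sequence edges contains a cycle (a circular wait); a sketch using Python's standard graphlib, with illustrative names:

```python
from graphlib import CycleError, TopologicalSorter

def is_deadlock_free(assignment, dependencies):
    """Check an offloading assignment for circular waits (sketch).

    assignment   : {satellite: [sub-tasks in execution order]}
    dependencies : {v: set of predecessors of v}
    The assignment deadlocks iff the directed graph combining the
    dependence edges (u -> v for each predecessor u of v) and the
    per-satellite sequence edges (consecutive sub-tasks in each theta_s)
    contains a cycle."""
    ts = TopologicalSorter()
    for v, preds in dependencies.items():
        ts.add(v, *preds)
    for seq in assignment.values():
        for u, v in zip(seq, seq[1:]):   # u must finish before v starts
            ts.add(v, u)
    try:
        list(ts.static_order())          # raises CycleError on a cycle
        return True
    except CycleError:
        return False

# Two satellites and the dependence chain 1 -> 2 -> 3: placing 3 before 1
# on the same satellite creates a circular wait.
deps = {2: {1}, 3: {2}}
print(is_deadlock_free({"s1": [1], "s2": [2, 3]}, deps))  # True
print(is_deadlock_free({"s1": [3, 1], "s2": [2]}, deps))  # False
```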

5.3. Comparison with Existing Algorithms

In this section, we conduct a comparative analysis of the proposed DDFTO algorithm against the three benchmark algorithms described in Section 5.1: local execution, random offloading, and DALEOS [21]. The comparison results for small-scale, medium-scale, and large-scale instances are presented in Table 9, Table 10 and Table 11, respectively. In these tables, the optimal values for each metric within the same instance type are highlighted in bold, and optimal value achievement rates (OVARs) are calculated.
From Table 9, Table 10 and Table 11, it is evident that the performance of these algorithms varies significantly in terms of aRPV, aSL, and aEC. The DDFTO algorithm achieves the optimal aRPV and aSL values in most instances. Although DALEOS obtains more optimal aEC values, its disadvantage in service latency results in a higher (worse) overall aRPV compared to the proposed DDFTO. To further illustrate the variation of the three metrics for each algorithm across different instances, we plot line charts, as shown in Figure 8, Figure 9 and Figure 10.
Figure 8 illustrates the variation of aRPV values for different algorithms. Smaller aRPV values indicate assignments with better objective F(θ) values. The DDFTO algorithm achieves the optimal aRPV values in nearly all instances. For example, in instances of type {A, 3}, the aRPV values achieved by DDFTO are reduced by 99.58% (≈(0.3146 − 0.0013)/0.3146), 99.83% (≈(0.7666 − 0.0013)/0.7666), and 97.10% (≈(0.0449 − 0.0013)/0.0449) compared to local execution, random offloading, and DALEOS, respectively. This demonstrates the effectiveness of DDFTO in obtaining optimal offloading assignments. Additionally, DDFTO consistently obtains smaller aRPV values than DALEOS. This is because DALEOS uses a flooding mechanism [21] that relies on direct neighbor information and is more easily trapped in local optima [28], affecting the quality of the offloading assignments. In contrast, DDFTO employs a consensus process [34], in which satellites share information with their neighbors using vectors, allowing it to escape local optima and achieve better results [29]. Furthermore, for instances of types {F, 25} and {F, 30}, local execution shows a slight advantage in aRPV values. This is because, for larger-scale instances, the DDFTO offloading assignments require satellite cooperation, leading to data transmission delays and increased service latency. In contrast, local execution keeps all sub-tasks of a task on a single satellite, eliminating data transmission and resulting in lower service latency.
Figure 9 illustrates the trends in aSL values for various algorithms across different instances. A lower aSL value indicates reduced service latency. DDFTO consistently achieves the lowest aSL values in nearly all instances. For example, in instances of type {E, 30}, the aSL of DDFTO is decreased by 48.04% (≈(425.17 − 220.89)/425.17), 97.45% (≈(8669.95 − 220.89)/8669.95), and 87.29% (≈(1738.67 − 220.89)/1738.67) compared with local execution, random offloading, and DALEOS, respectively. This highlights the effectiveness of DDFTO's consensus process in minimizing service latency. In contrast, random offloading consistently shows higher aSL values than the other algorithms, with a significant increase as instance size grows, underscoring the importance of effective offloading strategies.
Figure 10 shows the aEC trends of different algorithms across various instances. Smaller aEC values indicate lower energy consumption for performing tasks. As instance size increases, the aEC values for all algorithms also increase. Despite this, the performance ranking of the four algorithms remains consistent. DALEOS achieves the lowest aEC values in most cases, demonstrating the effectiveness of its heuristic-based control agent selection strategy. Although the proposed DDFTO achieves slightly higher aEC values, its outstanding performance in reducing service latency results in the best overall F(θ) values in most instances.

6. Discussion

Finally, we explore how instance parameters (satellite constellations and the number of monitoring tasks m) affect DDFTO and other benchmark algorithms. Our experiments are categorized into two groups.
In the first group, we investigate the impact of different satellite constellations while keeping the task number m = 20. We utilize constellations A–F from Section 5.1, with low target density, resulting in six instance combinations. For each combination, we generate 10 test instances and execute each algorithm independently, 10 times per instance. The statistical results for aRPV, aSL, and aEC are shown in Table 12, with optimal values highlighted in bold, and OVARs are calculated.
From Table 12, DDFTO consistently achieves optimal values for the aRPV, aSL, and aEC metrics. To analyze performance trends, we plot line graphs displaying these metrics for each algorithm across different constellations, as shown in Figure 11. In Figure 11a, DDFTO consistently exhibits the best aRPV values across various constellation instances, demonstrating its statistical effectiveness. Figure 11b illustrates a decreasing trend in aSL for all algorithms as constellation size increases, attributed to increased satellite availability for task execution, which reduces latency. Nevertheless, DDFTO maintains superior aSL values across various instances. For example, in instances like {F, 15, low}, DDFTO reduces aSL by 29.71% (≈(132.61 − 93.20)/132.61) compared to local execution and by 86.48% (≈(689.76 − 93.20)/689.76) compared to DALEOS. Figure 11c presents aEC values for all algorithms, where DDFTO slightly trails DALEOS in this metric.
In the second group, we investigate how the number of tasks affects algorithm performance while maintaining satellite constellation C. The task number ranges from 2 to 30, with consistently low target density, creating a total of 15 different instance combinations. For each combination, we again generate 10 test instances and execute each algorithm independently, 10 times per instance. The statistical results for aRPV, aSL, and aEC are recorded in Table 13, with optimal values in bold, and OVARs are included.
Table 13 shows that DDFTO achieves the best aRPV and aSL values in most instances. However, when the number of tasks exceeds 24, local execution provides lower service latency and outperforms DDFTO. Similarly, DALEOS achieves the best aEC value but has higher service latency. Figure 12 further illustrates the performance trends of different algorithms with varying task numbers.
In Figure 12a, the aRPV value for random offloading increases, while the other algorithms generally show a decreasing trend. DDFTO achieves the best aRPV when the number of tasks m ≤ 18, but local execution, owing to its minimal service latency, becomes optimal when m ≥ 24. In Figure 12b, all algorithms show increasing aSL values as the number of tasks grows. DDFTO achieves the best aSL in most instances, but local execution excels when m is large. Dividing the tasks into several smaller groups [38] can effectively mitigate this performance gap. Figure 12c shows that all algorithms exhibit increasing aEC values with more tasks, and the performance ranking of the algorithms remains consistent regardless of the number of tasks. Although DDFTO has a slightly worse aEC value than DALEOS, its advantage in service latency enables it to achieve the optimal overall F(θ) values.
Although DDFTO shows performance degradation in instances with more than 20 tasks, this can be mitigated by grouping or clustering tasks [38]. Future work will focus on enhancing DDFTO to manage larger and more complex task scenarios.

7. Conclusions

This paper addresses the task offloading problem in ICSC satellites with data-dependent constraints, in which a monitoring task can be divided into multiple observation and computation sub-tasks. Considering the data-dependent constraints among sub-tasks, we introduce an MILP model that aims to minimize both service latency and energy consumption simultaneously. Based on this model, we present a distributed algorithm called DDFTO for resolving the task offloading problem. DDFTO operates on each satellite in parallel, alternating between the stages of sub-task inclusion and consensus and sub-task removal, until a common offloading assignment is reached. To handle undesired deadlocks arising from sub-task inclusion, we propose the DFIM method. DFIM strategically restricts the insertion positions of sub-tasks based on interval relationships, thereby ensuring deadlock-free assignments. Finally, extensive experiments underscore the necessity of deadlock avoidance, with DFIM effectively preventing deadlocks in offloading assignments. DDFTO demonstrates superior performance compared to benchmark algorithms such as DALEOS [21], local execution, and random offloading. An analysis of parameter impacts further validates DDFTO's effectiveness in optimizing offloading assignments. In future work, we plan to extend DDFTO to address other scenarios in ICSC satellites, including multi-source satellite cooperative observation.

Author Contributions

R.Z. conceived the study, designed and implemented the algorithm, wrote the paper, and contributed to the review and editing. Y.Y. and H.L. provided theoretical guidance and suggestions for revising the paper. Y.Y. provided funding support and necessary assistance for the writing of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Innovation 2030 Key Project of “New Generation Artificial Intelligence” under grant 2020AAA0108203 and the National Natural Science Foundation of P.R. China under grants 62003258 and 62103062.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhao, Q.; Yu, L.; Du, Z.; Peng, D.; Hao, P.; Zhang, Y.; Gong, P. An overview of the applications of earth observation satellite data: Impacts and future trends. Remote Sens. 2022, 14, 1863. [Google Scholar] [CrossRef]
  2. Ramapriyan, H.K. The role and evolution of NASA’s Earth science data systems. In Proceedings of the Institute of Electrical and Electronic Engineers (IEEE) EDS/CAS Chapter Meeting, Camarillo, CA, USA, 19 August 2015. No. GSFC-E-DAA-TN24713. [Google Scholar]
  3. Çelikbilek, K.; Saleem, Z.; Ferre, R.M.; Praks, J.; Lohan, E.S. Survey on optimization methods for LEO-satellite-based networks with applications in future autonomous transportation. Sensors 2022, 22, 1421. [Google Scholar] [CrossRef] [PubMed]
  4. Ji, S.; Zhou, D.; Sheng, M.; Li, J. Mega satellite constellation system optimization: From a network control structure perspective. IEEE Trans. Wirel. Commun. 2021, 21, 913–927. [Google Scholar] [CrossRef]
  5. Leyva-Mayorga, I.; Martinez-Gost, M.; Moretti, M.; Pérez-Neira, A.; Vázquez, M.; Popovski, P.; Soret, B. Satellite edge computing for real-time and very-high resolution earth observation. IEEE Trans. Commun. 2023, 71, 6180–6194. [Google Scholar] [CrossRef]
  6. Xiang, S.; Liang, Q.; Tang, P. Task-Oriented Compression Framework for Remote Sensing Satellite Data Transmission. IEEE Trans. Ind. Inform. 2024, 20, 3487–3496. [Google Scholar] [CrossRef]
Figure 1. Architecture of ICSC satellite networks.
Figure 2. An example of a monitoring task.
Figure 3. Example of deadlock.
Figure 4. Processing of DFIM. (a) Local executing sequence Γ2. (b) Constraint index interval (α7L, α7R). (c) Insertion position p = 1. (d) Insertion position p = 2. (e) Insertion position p = 3. (f) Insertion position p = 4.
Figure 5. Framework of DDFTO.
Figure 6. Time windows for each satellite and sub-task, obtained using STK.
Figure 7. Deadlock rate among different test instances.
Figure 8. Comparison of aRPV across different instances for each algorithm.
Figure 9. Comparison of aSL across different instances for each algorithm.
Figure 10. Comparison of aEC across different instances for each algorithm.
Figure 11. Trends of performance indicators across instances with different constellations.
Figure 12. Trends of performance indicators across instances with different task numbers.
Table 1. Notations.
Notation | Description
n | Number of ICSC satellites.
m | Number of monitoring tasks.
S | Set of ICSC satellites.
T | Set of monitoring tasks.
V | Set of sub-tasks.
Cs | Computing capacity of ICSC satellite s.
G | Network topology matrix.
(𝒱t, ℰt) | Directed acyclic graph of monitoring task t, where 𝒱t collects the sub-tasks in t and ℰt denotes the dependencies among them.
𝒫v | Predecessors of sub-task v.
𝒮v | Successors of sub-task v.
<ξv, ρv, dvI, dvO> | Parameter tuple of sub-task v, where ξv is the workload, ρv the imaging time, dvI the input data size, and dvO the output data size.
TWvs | Visible time windows between observation sub-task v and satellite s.
θ = {θ1, θ2, …, θn} | Offloading assignment for the ICSC satellites.
(θs ⊖ v) | Removal impact: the variation of F(θ) after removing v from θs.
(θs ⊕ v) | Inclusion impact: the minimum variation of F(θ) after inserting v into θs.
Rs = [Rs1, Rs2, …, Rs|V|]T | Vector of removal impacts for the sub-tasks on satellite s.
As = [As1, As2, …, As|V|]T | Vector of the satellites that satellite s believes each sub-task is assigned to.
Es = [Es1, Es2, …, Es|V|]T | Vector of the execution times of the sub-tasks, as believed by satellite s.
Ps = [Ps1, Ps2, …, Psn]T | Vector of the latest message timestamps that satellite s holds for each satellite.
Γs | Pending removal tasks on satellite s.
Φs,v | Candidate insertion positions for sub-task v on satellite s.
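As a rough illustration of the notation above, the per-sub-task tuple <ξv, ρv, dvI, dvO> and the task DAG (𝒱t, ℰt) could be modeled as below. The class and field names are our own shorthand, not the paper's implementation; the two-stage task at the end (observation followed by prediction) is a hypothetical example.

```python
from dataclasses import dataclass, field

@dataclass
class SubTask:
    """One sub-task v, characterized by the tuple <xi, rho, d_in, d_out>."""
    xi: float      # workload xi_v (cycles per bit)
    rho: float     # imaging time rho_v (s); 0 for pure computing sub-tasks
    d_in: float    # input data size d_v^I (bits)
    d_out: float   # output data size d_v^O (bits)
    predecessors: list = field(default_factory=list)  # P_v
    successors: list = field(default_factory=list)    # S_v

@dataclass
class MonitoringTask:
    """Directed acyclic graph (V_t, E_t) of a monitoring task t."""
    sub_tasks: dict = field(default_factory=dict)  # sub-task id -> SubTask

    def add_dependency(self, u: int, v: int) -> None:
        # edge u -> v: v cannot start before u's output data is available
        self.sub_tasks[u].successors.append(v)
        self.sub_tasks[v].predecessors.append(u)

# hypothetical two-stage task: target observation, then movement prediction
t = MonitoringTask()
t.sub_tasks[0] = SubTask(xi=1.2e3, rho=15.0, d_in=8e7, d_out=6e7)
t.sub_tasks[1] = SubTask(xi=1.0e3, rho=0.0, d_in=6e7, d_out=1e7)
t.add_dependency(0, 1)
```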
Table 2. Action rules for satellite k after receiving information from satellite s.
Akv (Belief of Sender k) | Asv (Belief of Receiver s) | Actions Adopted by s
k | s | if Rkv < Rsv → Update
k | k | Update
k | a ∉ {k, s} | if Pka > Psa or Rkv < Rsv → Update
k | none | Update
s | s | Maintain
s | k | Reset
s | a ∉ {k, s} | if Pka > Psa → Reset
s | none | Maintain
a ∉ {k, s} | s | if Pka > Psa and Rkv < Rsv → Update
a ∉ {k, s} | k | if Pka > Psa → Update; else → Reset
a ∉ {k, s} | a | if Pka > Psa → Update
a ∉ {k, s} | b ∉ {k, s, a} | if Pka > Psa and Pkb > Psb → Update; if Pka > Psa and Rkv < Rsv → Update; if Pkb > Psb and Pka < Psa → Reset
a ∉ {k, s} | none | if Pka > Psa → Update
none | s | Maintain
none | k | Update
none | a ∉ {k, s} | if Pka > Psa → Update
none | none | Maintain
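The first block of Table 2 (sender k believes v is assigned to k itself) can be sketched as follows. The function name and data layout are ours, not the paper's; the remaining blocks of the table follow the same pattern and are omitted here.

```python
def resolve(k, s, v, A_k, A_s, R_k, R_s, P_k, P_s):
    """Decide receiver s's action on sub-task v after hearing from satellite k.

    Implements only the first block of Table 2 (sender believes Akv = k).
    A_*: believed assigned satellite per sub-task (None = unassigned);
    R_*: removal impacts; P_*: latest timestamps per satellite.
    Returns 'update', 'reset', or 'maintain'.
    """
    a_k, a_s = A_k[v], A_s[v]  # believed winners of v on each side
    if a_k == k:                       # sender claims v for itself
        if a_s == s:                   # receiver also claims v
            return 'update' if R_k[v] < R_s[v] else 'maintain'
        if a_s == k:                   # receiver already agrees with sender
            return 'update'
        if a_s is None:                # receiver thinks v is unassigned
            return 'update'
        a = a_s                        # receiver believes a third satellite a holds v
        return 'update' if (P_k[a] > P_s[a] or R_k[v] < R_s[v]) else 'maintain'
    raise NotImplementedError('remaining blocks of Table 2 omitted in this sketch')
```

For example, with k = 0, s = 1 both claiming sub-task 7, the satellite with the smaller removal impact wins: `resolve(0, 1, 7, {7: 0}, {7: 1}, {7: 0.5}, {7: 0.9}, {}, {})` returns `'update'`.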
Table 3. Satellite constellations.
Constellation | Altitude (km) | Inclination (deg) | Planes | Satellites (n)
A | 5000 | 138.58 | 2 | 6
B | 5000 | 138.58 | 3 | 9
C | 3000 | 112.42 | 4 | 16
D | 480 | 97.33 | 3 | 24
E | 550 | 97.59 | 6 | 36
F | 780 | 98.52 | 6 | 66
Table 4. Parameter size for each instance type.
Instance Type | Constellations | Target Number | Target Density | Combination Number
Small | A, B | 3, 5, 8 | high, low | 2 × 3 × 2 = 12
Medium | C, D | 10, 12, 15 | high, low | 2 × 3 × 2 = 12
Large | E, F | 20, 25, 30 | high, low | 2 × 3 × 2 = 12
Table 5. Simulation parameters.
Parameter | Default Value
Computing capacity of satellites Cs | 3~5 GHz
Number of sub-tasks |Vt| | 2~4
Workload of sub-tasks ξv | 1~1.5 Kcycle/bit
Imaging time of sub-tasks ρv | 10~20 s
Input data size of sub-tasks dvI | 50~100 Mbit
Output data size of sub-tasks dvO | 50~100 Mbit
Rate of ISL RISL | 100 Mbps
Transition power ηx | 1 W
Angle transition power ηa | 0.2 W
Observation power ηo | 1 W
Effective capacitance coefficient κ | 10^−28
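With the default values above, per-sub-task computing latency, dynamic CPU energy, and inter-satellite transmission latency can be estimated with the standard edge-computing expressions (energy = κf² per cycle, times total cycles). The helper names are ours, and the paper's exact cost model may differ in detail; this is only a back-of-the-envelope sketch using mid-range Table 5 values.

```python
def compute_latency(d_in_bits: float, xi_cycles_per_bit: float, f_hz: float) -> float:
    """Time (s) to process d_in bits at workload xi on a CPU of frequency f."""
    return d_in_bits * xi_cycles_per_bit / f_hz

def compute_energy(d_in_bits: float, xi_cycles_per_bit: float, f_hz: float,
                   kappa: float = 1e-28) -> float:
    """Dynamic CPU energy (J): kappa * f^2 per cycle, times total cycles."""
    return kappa * f_hz ** 2 * d_in_bits * xi_cycles_per_bit

def transmit_latency(d_bits: float, rate_bps: float = 100e6) -> float:
    """Time (s) to send d bits over an inter-satellite link at R_ISL."""
    return d_bits / rate_bps

# mid-range defaults from Table 5: 75 Mbit input, 1.25 Kcycle/bit, 4 GHz CPU
lat = compute_latency(75e6, 1.25e3, 4e9)   # ≈ 23.44 s
en = compute_energy(75e6, 1.25e3, 4e9)     # ≈ 150 J
tx = transmit_latency(75e6)                # 0.75 s
```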
Table 6. Statistical results of deadlock in small-scale instances.
Instance Type | Total Assignments | Deadlock-Free Assignments | Deadlock Rate | Deadlock-Free Assignments (Using DFIM) | Deadlock Rate (Using DFIM)
{A, 3, high} | 10,000 | 4363 | 56.37% | 10,000 | 0%
{A, 3, low} | 10,000 | 5464 | 45.36% | 10,000 | 0%
{A, 5, high} | 10,000 | 2544 | 74.56% | 10,000 | 0%
{A, 5, low} | 10,000 | 2108 | 78.92% | 10,000 | 0%
{A, 8, high} | 10,000 | 510 | 94.90% | 10,000 | 0%
{A, 8, low} | 10,000 | 786 | 92.14% | 10,000 | 0%
{B, 3, high} | 10,000 | 6488 | 35.12% | 10,000 | 0%
{B, 3, low} | 10,000 | 6431 | 35.69% | 10,000 | 0%
{B, 5, high} | 10,000 | 4656 | 53.44% | 10,000 | 0%
{B, 5, low} | 10,000 | 3660 | 63.40% | 10,000 | 0%
{B, 8, high} | 10,000 | 1995 | 80.05% | 10,000 | 0%
{B, 8, low} | 10,000 | 1760 | 82.40% | 10,000 | 0%
Table 7. Statistical results of deadlock in medium-scale instances.
Instance Type | Total Assignments | Deadlock-Free Assignments | Deadlock Rate | Deadlock-Free Assignments (Using DFIM) | Deadlock Rate (Using DFIM)
{C, 10, high} | 10,000 | 2248 | 77.52% | 10,000 | 0%
{C, 10, low} | 10,000 | 2782 | 72.18% | 10,000 | 0%
{C, 12, high} | 10,000 | 1780 | 82.20% | 10,000 | 0%
{C, 12, low} | 10,000 | 1887 | 81.13% | 10,000 | 0%
{C, 15, high} | 10,000 | 1117 | 88.83% | 10,000 | 0%
{C, 15, low} | 10,000 | 1244 | 87.56% | 10,000 | 0%
{D, 10, high} | 10,000 | 4703 | 52.97% | 10,000 | 0%
{D, 10, low} | 10,000 | 4489 | 55.11% | 10,000 | 0%
{D, 12, high} | 10,000 | 3738 | 62.62% | 10,000 | 0%
{D, 12, low} | 10,000 | 3768 | 62.32% | 10,000 | 0%
{D, 15, high} | 10,000 | 2680 | 73.20% | 10,000 | 0%
{D, 15, low} | 10,000 | 3000 | 70.00% | 10,000 | 0%
Table 8. Statistical results of deadlock in large-scale instances.
Instance Type | Total Assignments | Deadlock-Free Assignments | Deadlock Rate | Deadlock-Free Assignments (Using DFIM) | Deadlock Rate (Using DFIM)
{E, 20, high} | 10,000 | 3009 | 69.91% | 10,000 | 0%
{E, 20, low} | 10,000 | 2340 | 76.60% | 10,000 | 0%
{E, 25, high} | 10,000 | 1866 | 81.34% | 10,000 | 0%
{E, 25, low} | 10,000 | 1753 | 82.47% | 10,000 | 0%
{E, 30, high} | 10,000 | 816 | 91.84% | 10,000 | 0%
{E, 30, low} | 10,000 | 659 | 93.41% | 10,000 | 0%
{F, 20, high} | 10,000 | 5616 | 43.84% | 10,000 | 0%
{F, 20, low} | 10,000 | 5385 | 46.15% | 10,000 | 0%
{F, 25, high} | 10,000 | 4902 | 50.98% | 10,000 | 0%
{F, 25, low} | 10,000 | 4940 | 50.60% | 10,000 | 0%
{F, 30, high} | 10,000 | 3803 | 61.97% | 10,000 | 0%
{F, 30, low} | 10,000 | 3853 | 61.47% | 10,000 | 0%
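Each deadlock rate in Tables 6–8 is simply 1 minus the ratio of deadlock-free assignments to total sampled assignments. A quick sanity check against a few rows (the helper name is ours):

```python
def deadlock_rate(total: int, deadlock_free: int) -> float:
    """Percentage of sampled assignments that ended in deadlock."""
    return 100.0 * (1 - deadlock_free / total)

# sample rows taken from Tables 6-8: (total, deadlock-free, reported rate %)
rows = {
    "{A, 3, high}": (10_000, 4363, 56.37),
    "{C, 15, high}": (10_000, 1117, 88.83),
    "{F, 30, low}": (10_000, 3853, 61.47),
}
for name, (total, free, reported) in rows.items():
    assert abs(deadlock_rate(total, free) - reported) < 0.005, name
```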
Table 9. Comparison results for small-scale instances.
Instance Type | Local Execution (aRPV, aSL, aEC) | Random Offloading (aRPV, aSL, aEC) | DALEOS (aRPV, aSL, aEC) | DDFTO (aRPV, aSL, aEC)
{A, 3, high} | 0.4282, 435.53, 888.45 | 0.6311, 8924.26, 938.27 | 0.0662, 101.23, 767.55 | 0.0025, 44.75, 932.11
{A, 3, low} | 0.2010, 1146.07, 819.03 | 0.9022, 9147.02, 788.91 | 0.0235, 517.16, 632.49 | 0.0000, 45.16, 756.99
{A, 5, high} | 0.1660, 1071.96, 1461.31 | 0.9083, 16,785.53, 1482.37 | 0.1040, 145.40, 1194.32 | 0.0000, 66.24, 1414.82
{A, 5, low} | 0.0870, 433.65, 1400.80 | 1.0000, 25,363.96, 1452.15 | 0.0158, 159.80, 1220.32 | 0.0000, 64.13, 1427.90
{A, 8, high} | 0.0486, 2061.53, 2219.58 | 1.0000, 34,354.58, 2361.44 | 0.1075, 4550.04, 1903.00 | 8.1 × 10−4, 232.85, 2283.35
{A, 8, low} | 0.0360, 892.74, 2201.61 | 1.0000, 21,941.94, 2453.22 | 0.0185, 579.46, 1836.03 | 0.0000, 117.04, 2348.60
{B, 3, high} | 0.6890, 198.44, 680.42 | 0.5524, 3956.77, 866.58 | 0.1950, 93.53, 625.64 | 0.0000, 48.76, 704.52
{B, 3, low} | 0.7997, 176.54, 787.17 | 0.3899, 565.41, 897.56 | 0.2479, 81.41, 692.39 | 0.0000, 46.87, 753.15
{B, 5, high} | 0.2380, 392.66, 1332.97 | 0.8181, 14,630.02, 1524.08 | 0.0905, 155.81, 1120.63 | 0.0000, 64.01, 1349.24
{B, 5, low} | 0.1228, 505.25, 1377.86 | 0.9132, 19,065.12, 1507.31 | 0.0082, 147.77, 1121.28 | 0.0000, 62.97, 1313.57
{B, 8, high} | 0.0192, 435.63, 2090.45 | 1.0000, 27,635.11, 2491.37 | 0.0035, 212.63, 1762.70 | 0.0000, 117.98, 2143.24
{B, 8, low} | 0.1189, 349.83, 2234.25 | 0.8306, 18,097.47, 2436.95 | 0.2047, 225.83, 1795.80 | 0.0448, 93.68, 2175.48
OVAR | 0%, 0%, 0% | 0%, 0%, 0% | 0%, 0%, 100% | 100%, 100%, 0%
Table 10. Comparison results for medium-scale instances.
Instance Type | Local Execution (aRPV, aSL, aEC) | Random Offloading (aRPV, aSL, aEC) | DALEOS (aRPV, aSL, aEC) | DDFTO (aRPV, aSL, aEC)
{C, 10, high} | 0.0076, 160.70, 2866.59 | 1.0000, 13,253.09, 3145.13 | 0.0100, 231.07, 2311.41 | 0.0000, 97.37, 2658.69
{C, 10, low} | 0.0037, 138.73, 2707.94 | 1.0000, 15,278.85, 2993.54 | 0.0065, 190.14, 2242.34 | 0.0000, 104.50, 2496.39
{C, 12, high} | 0.0054, 180.89, 3352.48 | 1.0000, 17,310.71, 3599.32 | 0.0631, 1968.20, 2685.99 | 0.0000, 119.96, 3039.99
{C, 12, low} | 0.0057, 171.26, 3068.45 | 1.0000, 15,403.01, 3393.81 | 0.0129, 293.63, 2456.52 | 1.3 × 10−4, 116.07, 2785.90
{C, 15, high} | 0.0025, 168.08, 4125.30 | 1.0000, 21,064.08, 4473.35 | 0.0085, 356.05, 3211.75 | 7.5 × 10−4, 151.28, 4059.67
{C, 15, low} | 0.0027, 202.87, 4017.56 | 1.0000, 18,305.66, 4436.20 | 0.0057, 295.03, 3242.79 | 1.4 × 10−5, 147.95, 3793.53
{D, 10, high} | 0.0894, 2167.18, 2386.86 | 1.0000, 24,468.14, 2921.72 | 0.0021, 149.65, 2185.80 | 0.0000, 88.99, 2240.79
{D, 10, low} | 0.0687, 1804.67, 2728.45 | 1.0000, 26,929.55, 3141.48 | 0.0030, 171.68, 2285.22 | 0.0000, 72.20, 2625.68
{D, 12, high} | 0.1266, 2395.76, 3061.00 | 1.0000, 33,719.68, 3408.30 | 0.0035, 178.00, 2647.14 | 0.0000, 95.45, 2919.24
{D, 12, low} | 0.2057, 2999.81, 2886.58 | 0.9000, 30,512.77, 3469.72 | 0.0164, 651.31, 2629.05 | 0.0010, 111.54, 2887.50
{D, 15, high} | 0.0782, 3096.26, 3642.95 | 1.0000, 38,568.15, 4123.19 | 0.0115, 659.70, 3188.66 | 0.0000, 111.95, 3375.34
{D, 15, low} | 0.1809, 2845.50, 3469.26 | 0.9353, 27,362.24, 4148.23 | 0.0042, 218.99, 2989.42 | 0.0000, 116.30, 3461.84
OVAR | 0%, 0%, 0% | 0%, 0%, 0% | 0%, 0%, 100% | 100%, 100%, 0%
Table 11. Comparison results for large-scale instances.
Instance Type | Local Execution (aRPV, aSL, aEC) | Random Offloading (aRPV, aSL, aEC) | DALEOS (aRPV, aSL, aEC) | DDFTO (aRPV, aSL, aEC)
{E, 20, high} | 0.0050, 318.10, 5006.04 | 1.0000, 36,845.37, 5539.21 | 0.0013, 213.55, 4341.25 | 0.0000, 147.74, 4351.55
{E, 20, low} | 0.0060, 318.60, 5128.57 | 1.0000, 33,922.07, 6008.57 | 0.0018, 230.05, 4355.02 | 0.0000, 142.95, 4642.19
{E, 25, high} | 0.0055, 379.18, 6477.22 | 1.0000, 38,514.60, 7572.58 | 0.0134, 743.56, 5324.09 | 0.0000, 146.19, 6060.81
{E, 25, low} | 0.0047, 342.66, 6439.91 | 1.0000, 36,048.58, 7386.28 | 0.0357, 1953.49, 5495.79 | 2.7 × 10−4, 177.61, 6345.95
{E, 30, high} | 0.0050, 414.72, 7626.81 | 1.0000, 40,356.60, 8644.88 | 0.0411, 2181.07, 6340.72 | 0.0000, 214.89, 7198.28
{E, 30, low} | 0.0045, 435.62, 7726.61 | 1.0000, 47,409.99, 8695.02 | 0.0132, 1296.27, 6416.82 | 0.0000, 226.90, 7361.69
{F, 20, high} | 0.0098, 154.74, 4957.87 | 1.0000, 25,687.84, 5922.47 | 0.0052, 186.22, 4507.42 | 9.5 × 10−6, 111.55, 4479.23
{F, 20, low} | 0.0014, 131.13, 4802.05 | 1.0000, 19,089.41, 5877.35 | 0.0112, 221.64, 4301.81 | 0.0042, 140.55, 4510.62
{F, 25, high} | 5.3 × 10−4, 141.83, 6319.60 | 1.0000, 31,898.81, 7375.91 | 0.0159, 742.02, 5516.96 | 2.6 × 10−4, 153.17, 5668.86
{F, 25, low} | 7.8 × 10−4, 143.23, 5953.34 | 1.0000, 21,702.09, 7201.70 | 0.0036, 236.78, 5331.03 | 0.0014, 162.01, 5549.13
{F, 30, high} | 1.5 × 10−4, 130.86, 7986.29 | 1.0000, 29,388.36, 8764.14 | 0.1245, 3857.84, 6733.56 | 0.0031, 232.95, 7153.58
{F, 30, low} | 4.5 × 10−4, 158.90, 7555.69 | 1.0000, 33,801.74, 8636.41 | 0.0994, 6797.18, 6515.08 | 3.8 × 10−4, 176.34, 6733.99
OVAR | 25%, 41.67%, 0% | 0%, 0%, 0% | 0%, 0%, 91.66% | 75%, 58.33%, 8.33%
Table 12. Statistical results for algorithms under different constellations.
Instance Type | Local Execution (aRPV, aSL, aEC) | Random Offloading (aRPV, aSL, aEC) | DALEOS (aRPV, aSL, aEC) | DDFTO (aRPV, aSL, aEC)
{A, 15, low} | 0.0138, 1318.22, 4436.11 | 1.0000, 49,211.48, 4803.42 | 0.0557, 4164.82, 3415.79 | 5.1 × 10−4, 344.87, 469.89
{B, 15, low} | 0.0211, 954.01, 3969.16 | 1.0000, 36,736.82, 4343.96 | 0.0475, 2728.17, 3165.18 | 0.0000, 254.59, 3981.18
{C, 15, low} | 0.0040, 190.32, 3804.91 | 1.0000, 22,411.34, 4074.59 | 0.0044, 251.90, 3113.46 | 0.0000, 121.91, 3489.86
{D, 15, low} | 0.1199, 2850.99, 3713.90 | 1.0000, 30,349.57, 4168.34 | 0.0615, 3387.67, 3228.45 | 0.0000, 123.16, 3771.09
{E, 15, low} | 0.0085, 310.55, 3785.44 | 1.0000, 32,505.32, 4288.40 | 0.0020, 139.21, 3275.27 | 0.0000, 99.92, 3262.42
{F, 15, low} | 0.0030, 132.61, 3598.91 | 1.0000, 19,458.29, 4307.39 | 0.0186, 689.76, 3287.60 | 0.0000, 93.20, 3367.35
OVAR | 0%, 41.67%, 0% | 0%, 0%, 0% | 0%, 0%, 83.33% | 100%, 100%, 16.67%
Table 13. Statistical results for algorithms under different numbers of tasks.
Instance Type | Local Execution (aRPV, aSL, aEC) | Random Offloading (aRPV, aSL, aEC) | DALEOS (aRPV, aSL, aEC) | DDFTO (aRPV, aSL, aEC)
{C, 2, low} | 0.8632, 118.71, 537.51 | 0.3027, 567.60, 609.03 | 0.2512, 66.26, 486.84 | 0.0000, 38.04, 549.19
{C, 4, low} | 0.4479, 106.64, 1001.55 | 0.6366, 4477.50, 1041.43 | 0.2032, 83.02, 854.51 | 0.0000, 44.26, 860.44
{C, 6, low} | 0.2080, 148.62, 1404.25 | 0.8714, 9274.55, 1679.38 | 0.0948, 127.34, 1185.29 | 0.0000, 53.44, 1421.18
{C, 8, low} | 0.1095, 127.26, 1953.01 | 0.9378, 10,267.03, 2185.59 | 0.0652, 162.51, 1570.33 | 4.1 × 10−5, 85.29, 1878.07
{C, 10, low} | 0.0059, 148.71, 2777.45 | 1.0000, 13,353.12, 3054.30 | 0.0420, 972.52, 2339.67 | 0.0000, 94.85, 2692.88
{C, 12, low} | 0.0037, 158.46, 3221.83 | 1.0000, 13,061.91, 3522.47 | 0.0101, 268.96, 2631.71 | 7.6 × 10−4, 110.89, 3188.37
{C, 14, low} | 0.0046, 164.22, 3864.22 | 1.0000, 21,333.83, 4270.94 | 0.0108, 326.01, 3157.89 | 0.0000, 125.00, 3434.52
{C, 16, low} | 0.0021, 178.32, 4296.15 | 1.0000, 29,990.68, 4524.43 | 0.0048, 329.91, 3282.43 | 1.3 × 10−4, 150.01, 3967.27
{C, 18, low} | 0.0025, 196.32, 4875.38 | 1.0000, 31,174.33, 5301.99 | 0.0912, 5380.43, 3809.38 | 3.2 × 10−4, 152.35, 4610.90
{C, 20, low} | 0.0012, 204.49, 5506.41 | 1.0000, 30,025.85, 5831.97 | 0.0207, 1031.62, 4359.45 | 0.0019, 208.68, 5161.47
{C, 22, low} | 0.0012, 241.70, 5882.84 | 1.0000, 36,929.39, 6457.77 | 0.0304, 1904.91, 4672.92 | 8.3 × 10−4, 220.60, 5828.34
{C, 24, low} | 8.2 × 10−4, 240.26, 6915.95 | 1.0000, 37,058.53, 7195.91 | 0.0982, 4744.11, 5508.89 | 0.0022, 261.36, 6813.02
{C, 26, low} | 2.6 × 10−4, 254.90, 7399.23 | 1.0000, 33,985.64, 7937.18 | 0.0589, 2845.19, 5413.30 | 0.0197, 1192.96, 6471.63
{C, 28, low} | 8.7 × 10−4, 262.29, 7620.08 | 1.0000, 40,233.24, 8204.79 | 0.0576, 2930.33, 6217.39 | 0.0040, 415.83, 6882.54
{C, 30, low} | 4.4 × 10−4, 251.48, 8348.84 | 1.0000, 42,611.64, 8630.32 | 0.0420, 2169.42, 6162.36 | 4.9 × 10−4, 279.29, 7747.13
OVAR | 33.33%, 33.33%, 0% | 0%, 0%, 0% | 0%, 0%, 100% | 66.67%, 66.67%, 0%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Zhang, R.; Yang, Y.; Li, H. A Distributed Deadlock-Free Task Offloading Algorithm for Integrated Communication–Sensing–Computing Satellites with Data-Dependent Constraints. Remote Sens. 2024, 16, 3459. https://doi.org/10.3390/rs16183459
