Article

A Distributed Approach for Time-Dependent Observation Scheduling Problem in the Agile Earth Observation Satellite Constellation

Systems Engineering Institute, School of Automation Science and Engineering, Faculty of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(7), 1761; https://doi.org/10.3390/rs15071761
Submission received: 26 January 2023 / Revised: 14 March 2023 / Accepted: 20 March 2023 / Published: 24 March 2023

Abstract

Agile earth observation satellites (AEOSs), whose number in orbit is continuously increasing, have advanced maneuvering capabilities, enabling an AEOS constellation to provide richer observation services. Therefore, observation scheduling in the AEOS constellation is crucial for improving the performance of satellite remote sensing systems. This paper focuses on the problem of distributed observation scheduling in the AEOS constellation, where a period of transition time is required between two consecutive observations, and this constraint depends on the start times of the observations. We define a new fitness function that not only maximizes the profit sum but also considers system load balancing. Based on the fundamental idea of the distributed performance impact (PI) algorithm, we develop a PI-based distributed scheduling method (PIDSM) that runs concurrently on all AEOSs via local inter-satellite link (ISL)-based communications. The PIDSM iterates between two phases: target inclusion, and consensus and target removal. The first phase selects the optimal task for each AEOS, while the second phase reaches a consensus over all AEOSs and removes targets that may decrease the overall fitness. Experimental results demonstrate that the PIDSM can schedule more targets, reduce communication overhead, and achieve higher fitness values than existing algorithms. Sensitivity analyses further validate the effectiveness of the PIDSM.

1. Introduction

Earth observation satellites (EOSs) aim to collect images of ground targets using remote sensing cameras. Since EOSs offer high resolution and large coverage areas, they have been applied in resource exploration, disaster monitoring, and climate analysis over the past 20 years [1,2]. The number of orbiting EOSs is continuously increasing, and groups of EOSs sharing the same orbital planes constitute constellations. Therefore, observation scheduling within a constellation is important for satellite remote sensing systems.
The conventional EOS (CEOS) can rotate only about the roll axis, while the agile EOS (AEOS) can adjust its attitude about three axes (roll, pitch, and yaw). Thus, AEOSs have better maneuverability than CEOSs. The differences between CEOS and AEOS observations are shown in Figure 1. A CEOS observes a target within an observation time window (OTW) that is identical to its corresponding visible time window (VTW); in this way, the observation scheduling of CEOSs is equivalent to a VTW selection problem [3]. By contrast, since an AEOS can look forward and backward along the pitch axis, the VTW of an AEOS is usually longer than the corresponding OTW. Consequently, AEOSs can observe more targets than CEOSs during a given time interval, but the complexity of the AEOS scheduling problem is much greater than that of the CEOS problem. In addition, the increasing number of satellites in the AEOS constellation brings significant challenges for scheduling.
Each observation task is associated with a profit that indicates the importance and value obtained by completing it. The objective of scheduling AEOSs is often to maximize the profit sum of scheduled targets while satisfying all operational constraints. Among them, the transition time constraint requires a sufficient period of time for an AEOS to rotate its camera between two consecutive observations. Generally, such a transition time is time-dependent, since it is determined by the angular variation, which changes over time. The time-dependent transition constraint increases the complexity of scheduling. To simplify the problem, some researchers [4] relate the transition time only to the task sequence instead of the angular variation. Onboard resource constraints, such as energy and data storage capabilities, are considered in [5,6].
There are mainly two scheduling approaches for the AEOS observation scheduling problem: centralized and distributed methods. In the former, a centralized server generates a schedule plan for each AEOS. A few studies [7,8,9] used exact algorithms to obtain an optimal solution. For example, by introducing a deterministic mixed integer linear programming (MILP) model, Valicka et al. [8] provided optimality guarantees. Cho et al. [9] obtained the optimal schedule using a MILP solver for a constellation of AEOSs. However, due to the complexity of the considered models, these exact algorithms are only suitable for solving small instances. To tackle this issue, a number of heuristic and metaheuristic approaches have been developed for the AEOS scheduling problem [10,11,12,13,14,15,16,17,18,19,20]. Specifically, Nag et al. [10] presented a dynamic programming approach for AEOSs in CubeSat constellations based on MILP modeling. He et al. [11] used an edge computing framework that includes a central node and several edge nodes. Owing to their simplicity and efficiency, various evolutionary algorithms have been applied extensively for scheduling AEOSs [3,13,14,15,16,17,18,19,20]. Du et al. [13] modeled the AEOS scheduling problem as a travelling salesman problem and proposed a kind of ant colony optimization. Cui et al. [14] aimed to maximize the profit sum while minimizing the total task transition time in emergency AEOS scheduling. To obtain high-quality scheduling solutions quickly, some scholars have employed reinforcement learning methods [21,22,23,24,25]. He et al. [21] proposed a general reinforcement learning-based approach that can efficiently handle unknown data and generate good solutions in a short time. Wei et al. [22] addressed the multi-objective AEOS scheduling problem. Wang et al. [23] proposed neural networks with an ‘encoder–decoder’ structure that significantly reduce the inference time. Zhao et al. [24] presented a two-phase neural combinatorial optimization method with reinforcement learning for AEOS scheduling. Chen et al. [25] proposed a non-iterative heuristic construction neural network method, which demonstrated excellent performance. On the other hand, a series of articles addressed the time-dependent transition time constraint of AEOS scheduling. Liu et al. [26] presented an adaptive large neighborhood search algorithm for scheduling observation tasks for a single AEOS. The method in [26] was later extended to cases with multiple AEOSs [27]. Peng et al. [28,29] presented the concept of minimal transition time and, based on this idea, developed an effective local search algorithm.
Most centralized schedule plans are generated on the ground and uploaded to the satellites, thus neglecting dynamic and unpredictable events such as the arrival of new tasks or changes in user requirements. To overcome these problems, onboard distributed AEOS scheduling approaches have received increasing attention [30]. In a distributed framework, an AEOS creates a local schedule after exchanging information with neighboring satellites via inter-satellite links (ISLs). Then, the global schedule can be obtained by integrating all these local schedules using coordination and communication mechanisms. Specifically, Philliops and Parra [31] proposed a market-based method to allocate observation tasks to a set of satellites, even though the satellites were managed by different mission centers. Si-wei et al. [32] proposed an extended contract net protocol (ECNP) to solve dynamic task assignment for multiple satellites. Li et al. [30] developed an online schedule for distributed AEOS systems based on the idea of the well-known consensus-based bundle algorithm (CBBA). Herein, the CBBA selects tasks using an auction approach, and conflicts are resolved by a consensus procedure [33]. Conflict-free assignment is achieved in the CBBA by iterating between a bundle construction phase and a conflict resolution phase. The performance impact (PI) algorithm is another important distributed task allocation approach [34,35]. It aims to directly optimize the overall objective using the same two-phase architecture as the CBBA by introducing a key concept, the significance of a task, to measure its contribution to the global objective value. In other words, the PI algorithm considers not only the cost of a task assignment but also the impact of that assignment on the overall objective. Compared with the CBBA, the PI algorithm can solve more time-critical task allocation problems and find a task assignment with a lower time cost.
Although the time-dependent transition time constraint is important for realistic AEOS observation scheduling, no distributed scheduling method has taken it into consideration. On the other hand, to the best of our knowledge, no implementation of the PI algorithm for AEOS observation scheduling has been reported. Motivated by these observations, this paper proposes a PI-based distributed scheduling method (PIDSM) for solving the time-dependent observation scheduling problem in the AEOS constellation. To facilitate understanding, we first propose a mathematical model of the considered problem. The global fitness (or objective) function is designed not only to maximize the profit sum but also to account for system load balancing. Moreover, we compute the non-conflict VTWs of each non-scheduled target to guarantee that each AEOS observes at most one target at a time. The main contributions of this paper are twofold:
(1) We are the first to apply the decentralized PI framework to the AEOS observation scheduling problem with time-dependent transition constraints. The proposed PIDSM runs concurrently on all AEOSs via ISL-based local communications. It iterates between two phases, namely, target inclusion, and consensus and target removal. The first phase selects the optimal task for each AEOS. The second reaches a consensus on the significance values across all AEOSs and then removes the targets that may decrease the overall fitness.
(2) Compared with other distributed algorithms, i.e., the CBBA [30], CNP [36], and ECNP [32], our PIDSM can successfully schedule more targets with less communication overhead and higher global fitness values. Moreover, the sensitivity analysis demonstrates that variations in the number, distribution density, and observation time of targets have minimal impact on the performance of the PIDSM.
The rest of this paper is organized as follows. Section 2 describes the studied problem and presents the corresponding mathematical model. Section 3 proposes heuristic methods to compute the global fitness value of a solution. Section 4 proposes the PIDSM, which consists of two phases. Experiments are conducted in Section 5, and the whole algorithm is discussed in Section 6. Section 7 concludes the paper.

2. Problem Description and Modelling

In this section, we introduce the observation scheduling problem in the AEOS constellation and propose the corresponding mathematical model. As shown in Figure 2, the AEOS constellation consists of Zo adjacent orbital planes with Zs satellites in each orbital plane, i.e., it contains Ns = Zo × Zs satellites. Each AEOS in the constellation is connected to four adjacent satellites through ISLs: the two closest satellites in the same orbital plane and the two closest satellites in adjacent orbital planes. Although all satellites move at high speed, their ISL-based connections always exist. For example, as illustrated in Figure 2, AEOS s5 connects not only to s2 and s8 (in the same plane) but also to the closest hetero-orbiting satellites s4 and s6. In this paper, AEOS s is called a neighbor of c if they are connected by an ISL.
A group of AEOSs in the constellation, denoted by S = {s1, s2, …, sm}, m ≤ Ns, is equipped with cameras to observe a set of ground spot targets denoted by G = {g1, g2, …, gn}. The geographic position, observation time dg, and profit ϕg of each target g ∈ G are given by users in advance. We use an m × m matrix H to describe the ISL connections among AEOSs, where H[s, c] = 1 represents that satellites s and c can exchange messages via an ISL, and otherwise H[s, c] = 0. Some useful notations are summarized in Table 1.
During the scheduling horizon, an AEOS s may circle the Earth several times, and a target can be observed by s in different orbits. Herein, an orbit refers to the time period during which the satellite flies through the sunlit area while circling the Earth once. Let R be the set of all orbits of the AEOSs in S. According to the satellite movement parameters and camera parameters, in an orbit k ∈ R, a target g ∈ G is visible to a satellite s ∈ S during a visible time window (VTW) denoted by Wsgk = [stsgk, etsgk], where stsgk (respectively, etsgk) is the start (respectively, end) time. Moreover, for each time t ∈ Wsgk, the look angle of s on the three axes (roll, pitch, and yaw) is given as Ψsg(t) = <γsg[t], πsg[t], ψsg[t]>. All VTWs of AEOS s with regard to target g are denoted by Wsg = {Wsgk | k ∈ R}. Note that each target g can be observed at most once by one satellite during one VTW of {Wsg | s ∈ S}.
Owing to the finite camera resources and visibility constraints, only a subset of targets can be observed within the given horizon. The AEOS observation schedule aims to assign targets to different AEOSs in S such that the objective function is optimized while the VTW limitations and transition time constraints are satisfied. Firstly, based on the work in [29], we make certain assumptions to simplify the scheduling problem.
Assumption: 
(1) Each satellite has enough onboard power and memory in every orbit, and hence we do not consider power and memory constraints; (2) each target is a spot target that can be observed by an AEOS in one pass; (3) we focus on the semi-agile satellite AS-01, whose look angles remain unchanged while observing a target.
We intend to schedule a target sequence as = <gs1, gs2, …, gs|as|> for each AEOS s ∈ S such that the targets in as are observed sequentially by s. All these target allocations constitute a candidate solution a = [a1, a2, …, am] for the AEOS constellation observation schedule.
For a target g ∈ as, let tg be the time at which s begins to observe g within the VTW Wsgk = [stsgk, etsgk] in an orbit k ∈ R; the end time of the observation is tg + dg. The following VTW conditions must be satisfied:
tg ≥ stsgk    (1)
tg + dg ≤ etsgk    (2)
Note that there may be cases where an AEOS s cannot observe a task g in as due to conflicting VTWs (see Section 3.1). For completeness, we assume the start-observing time (or start time) of such an assigned but unobserved target is 0, i.e., tg = 0. Then, let Γs ⊆ as be the sequence of targets observable by s (Γs can be obtained by Algorithm CGF in Section 3.3).
Based on the characteristics of AS-01 satellite [29], the look angle of AEOS s should be adjusted as Ψsg(tg) before observing target g at the time tg, and this angle remains unchanged during the subsequent observation. Therefore, if AEOS s intends to observe two consecutive targets g and h in the same orbit, a transition time denoted as trans(tg, th) is required between the observation of g and h. In other words, the following transition time constraint must be satisfied:
tg + dg + trans(tg, th) ≤ th    (3)
The transition time trans(tg, th) is calculated based on Δ = |γsg[tg] − γsh[th]| + |πsg[tg] − πsh[th]| + |ψsg[tg] − ψsh[th]|, which is the total look angle change of transition movement associated with g and h.
trans(tg, th) =
  transA(Δ), if g and h are in the same orbit
  0, otherwise    (4)
where transA(Δ) is the following piecewise linear function [29]:
transA(Δ) =
  11.66, if Δ ≤ 10
  5 + Δ/1.5, if 10 < Δ ≤ 30
  10 + Δ/2, if 30 < Δ ≤ 60
  16 + Δ/2.5, if 60 < Δ ≤ 90
  22 + Δ/3, if Δ > 90    (5)
The look angle of an AEOS during a VTW varies over time, as it is influenced by the satellite parameters and the relative positions of the satellite and the target. Thus, according to (4) and (5), the transition time trans(tg, th) is time-dependent, and hence the AEOS observation scheduling problem considered in this paper is time-dependent.
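To make the time-dependent transition constraint concrete, the following Python sketch evaluates the piecewise function transA(Δ) of Equation (5) and the transition time of Equation (4). It is only an illustration of the formulas above: the function names are ours, and the look angles at the chosen start times are assumed to be available (e.g., precomputed from the VTW data).

```python
def trans_A(delta):
    """Piecewise-linear transition time of Equation (5); delta is the total
    look-angle change in degrees."""
    if delta <= 10:
        return 11.66
    elif delta <= 30:
        return 5 + delta / 1.5
    elif delta <= 60:
        return 10 + delta / 2
    elif delta <= 90:
        return 16 + delta / 2.5
    else:
        return 22 + delta / 3


def transition_time(angles_g, angles_h, same_orbit=True):
    """Transition time trans(t_g, t_h) of Equation (4).

    angles_g, angles_h: (roll, pitch, yaw) look angles of the satellite at the
    start times t_g and t_h of the two consecutive observations.
    """
    if not same_orbit:
        return 0.0
    # Total look-angle change Delta between the two observations.
    delta = sum(abs(a - b) for a, b in zip(angles_g, angles_h))
    return trans_A(delta)


# Example: 35 degrees of total angular change within the same orbit.
print(transition_time((10.0, 5.0, 0.0), (30.0, 20.0, 0.0)))  # 10 + 35/2 = 27.5
```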
Given a target allocation as for AEOS s, we define the (partial) fitness value F(as) as follows:
F(as) = Σg∈Γs ϕg − 1/e^τ − (Σg∈Γs dg)/(Ave(s) × |Γs|)    (6)
where Γs collects all observable targets in as, τ = 1/(|Γs| + 1), |Γs| is the number of targets in Γs, and Ave(s) = Σk∈R Σg∈G (etsgk − stsgk)/n is the average VTW length of satellite s per target. The proposed fitness function (6) considers three factors: task execution benefits, satellite task load, and the satellite's observation capability towards the targets.
Clearly, the higher the profit sum of the observable targets, the larger the fitness value F(as). To enforce system load balance, when two satellites observe the same set of targets, the one with the longer average VTW is assigned a higher fitness value according to Equation (6). The global fitness value of the solution a = [a1, a2, …, am] is defined as ϒ(a) = Σs∈S F(as).
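As a minimal sketch (not the authors' code), the partial and global fitness of Equation (6) can be computed as follows; the observable target set Γs is represented simply by the profit and duration lists of its targets, and Ave(s) is assumed to be precomputed.

```python
import math


def partial_fitness(profits, durations, ave_s):
    """Partial fitness F(a_s) of Equation (6).

    profits, durations: profit and observation duration of each observable
    target in Gamma_s (targets with a positive computed start time).
    ave_s: Ave(s), the average VTW length of satellite s per target.
    """
    n_obs = len(profits)
    tau = 1.0 / (n_obs + 1)
    # Load-balancing term: total observation time normalized by the
    # satellite's average VTW length and the number of observed targets.
    load = sum(durations) / (ave_s * n_obs) if n_obs > 0 else 0.0
    return sum(profits) - 1.0 / math.exp(tau) - load


def global_fitness(partial_values):
    """Global fitness of a solution: the sum of all partial fitness values."""
    return sum(partial_values)


# Example: three observed targets with profits 5, 8, 3 and 60 s observations,
# on a satellite whose average VTW length per target is 400 s.
print(partial_fitness([5, 8, 3], [60, 60, 60], ave_s=400.0))
```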
Let xskg be the decision variable such that xskg = 1 if target g is observed by AEOS s in orbit k ∈ R and xskg = 0 otherwise. Another variable yskgh = 1 if satellite s selects target h after observing g in orbit k, and yskgh = 0 otherwise. In addition, let lsk and esk be the dummy source and sink targets in orbit k associated with s, respectively. It is assumed that the observations of s in orbit k start from lsk and end with esk.
The mathematical model of the observation scheduling of the AEOS constellation can be expressed as follows:
J = max Σs∈S {Σk∈R Σg∈G (xskg ϕg − xskg dg/Ave(s)) − 1/e^(1/(Σk∈R Σg∈G xskg + 1))}    (7)
s.t.  Σs∈S Σk∈R xskg ≤ 1, ∀g ∈ G    (8)
Σh∈G∪{esk}\{g} yskgh = Σh∈G∪{lsk}\{g} yskhg = xskg, ∀s ∈ S, k ∈ R, g ∈ G    (9)
Σg∈G∪{lsk} ysk,g,esk = Σg∈G∪{esk} ysk,lsk,g = 1, ∀s ∈ S, k ∈ R    (10)
stsgk · xskg ≤ tg, ∀s ∈ S, k ∈ R, g ∈ G    (11)
tg + dg ≤ etsgk + K(1 − xskg), ∀s ∈ S, k ∈ R, g ∈ G    (12)
tg + dg + trans(tg, th) ≤ th + K(1 − yskgh), ∀s ∈ S, k ∈ R, g, h ∈ G    (13)
xskg ∈ {0, 1}, yskgh ∈ {0, 1}, ∀s ∈ S, k ∈ R, g, h ∈ G    (14)
tg ≥ 0, ∀g ∈ G    (15)
where K is a sufficiently large integer. Equation (7) states that the objective is to maximize the global fitness value. Equation (8) implies that each target is observed at most once. Equations (9) and (10) together imply that each scheduled task (including the dummy source and sink tasks) has a predecessor and a successor in the assigned orbit. Equations (11) and (12) are the VTW constraints that each scheduled observation must satisfy. Equation (13) is the time-dependent transition time constraint. Equations (14) and (15) state the domains of the involved variables.

3. Heuristic Methods to Calculate the Global Fitness of a Solution

This section computes the global fitness value ϒ(a) of a schedule solution a = [a1, a2, …, am]. First, given a set of targets already observed by an AEOS s ∈ S, we obtain the non-conflict VTWs that remain visible for a new target. Then, using these non-conflict VTWs, we further compute the start time of each observed target in the sequence as.

3.1. Computation of Non-Conflict VTWs

We write as = <gs1, gs2, …, gs|as|> for convenience and assume that, at some time point, AEOS s has tried to observe a set of targets bs = <gs1, gs2, …, gu>, where u < |as|, and the start-observing time of each target in bs is known (owing to VTW conflicts, the start time of some targets in bs may be zero). For a new target h ∈ as\bs, its VTW set Wsh may overlap with the scheduled durations {[tg′, tg′ + dg′] | g′ ∈ bs and tg′ ≠ 0}, resulting in a VTW conflict. For example, consider bs = {g1} and target g2. As illustrated in Figure 3, there are four possible situations regarding the overlap between the scheduled duration [tg1, tg1 + dg1] and a VTW Wsg2k = [ϖ, φ] ∈ Wsg2. In particular, note that in Figure 3c, the VTW [ϖ, φ] becomes infeasible for g2 because s observes g1 during that entire time interval.
Since AEOS s can observe at most one target at a time, at least one non-conflict VTW is required before observing a new target. Motivated by the analysis in Figure 3, we develop Algorithm 1, namely compute non-conflict VTWs (CNV), to compute the non-conflict VTWs of a new target h ∉ bs.
Algorithm 1: Compute non-conflict VTWs (CNV).
Input: AEOS s, observed target sequence bs, tg (∀g ∈ bs), h ∉ bs, and Wsh;
Output: The non-conflict VTW set Vsh(bs);
1. Let Ω = Wsh;
2. for each g ∈ bs
3.   for each VTW [ϖ, φ] ∈ Ω
4.     if tg ≤ ϖ, tg + dg < φ and tg + dg > ϖ   //see Figure 3a
5.       Set ϖ = tg + dg;
6.     else if tg > ϖ, tg + dg ≥ φ and tg < φ   //see Figure 3b
7.       Set φ = tg;
8.     else if tg ≤ ϖ and tg + dg > φ   //see Figure 3c
9.       Delete [ϖ, φ] from Ω;
10.     else if tg > ϖ and tg + dg < φ   //see Figure 3d
11.       Divide [ϖ, φ] into two segments [ϖ, tg] and [tg + dg, φ];
12.     end
13.   end
14. end
15. for each [ϖ, φ] ∈ Ω
16.   if φ − ϖ < dh
17.     Delete [ϖ, φ] from Ω;
18.   end
19. end
20. Output Vsh(bs) = Ω;
In Algorithm CNV, we begin by setting Ω = Wsh. For each visited target g ∈ bs, we “prune” the overlapping part between every VTW [ϖ, φ] ∈ Ω and the time interval [tg, tg + dg], based on the four situations illustrated in Figure 3 (Lines 2~14). Next, in Lines 15~19, we delete from Ω the VTWs whose length is shorter than dh and finally output the non-conflict VTWs Vsh(bs) = Ω.
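The interval-pruning logic of Algorithm CNV can be sketched in Python as follows; the (start, end) tuple representation of VTWs and the function name are illustrative assumptions, not the authors' implementation.

```python
def compute_nonconflict_vtws(vtws, scheduled, d_h):
    """Sketch of Algorithm CNV.

    vtws: list of (start, end) VTWs of the new target h on satellite s.
    scheduled: list of (t_g, d_g) pairs of already-observed targets in b_s
               whose start time t_g is positive.
    d_h: required observation duration of the new target h.
    Returns the non-conflict VTWs that are still long enough to hold h.
    """
    windows = [(lo, hi) for lo, hi in vtws]          # Omega
    for t_g, d_g in scheduled:
        pruned = []
        for lo, hi in windows:
            if t_g <= lo and lo < t_g + d_g < hi:    # Figure 3a: clip the front
                pruned.append((t_g + d_g, hi))
            elif lo < t_g < hi and t_g + d_g >= hi:  # Figure 3b: clip the back
                pruned.append((lo, t_g))
            elif t_g <= lo and t_g + d_g >= hi:      # Figure 3c: fully covered, drop
                continue
            elif t_g > lo and t_g + d_g < hi:        # Figure 3d: split into two parts
                pruned.append((lo, t_g))
                pruned.append((t_g + d_g, hi))
            else:                                    # no overlap, keep unchanged
                pruned.append((lo, hi))
        windows = pruned
    # Discard windows shorter than the required observation duration d_h.
    return [(lo, hi) for lo, hi in windows if hi - lo >= d_h]


# Example: a scheduled observation over [100, 160] splits the VTW (50, 300).
print(compute_nonconflict_vtws([(50, 300)], [(100, 60)], d_h=40))
# -> [(50, 100), (160, 300)]
```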

3.2. Computation of the Start Time of Observation

According to Algorithm CNV, given the start time tgu of the last target gu in bs and a target h ∈ as\bs, the feasible non-conflict VTWs Vsh(bs) can be obtained. Based on the idea of bisection, we develop Algorithm 2, namely compute start-observing time (CST), to compute the start time th of the observation of h such that the VTW constraints (11) and (12), as well as the transition time constraint (13), are satisfied.
Algorithm 2: Compute start-observing time (CST).
Input: AEOS s, tgu, h ∈ as\bs, and Vsh(bs);
Output: th;
1. Let Flag = False and th = 0;
2. for each VTW [ϖ, φ] ∈ Vsh(bs)
3.   if trans(tgu, φ − dh) + tgu + dgu ≤ φ − dh
4.     Set x = φ − dh, y = ϖ;
5.     Set x′ = (x + y)/2;
6.     Set Flag = True;
7.     break;
8.   end
9. end
10. while (Flag)
11.   if trans(tgu, x′) + tgu + dgu ≤ x′
12.     x = x′;
13.     x′ = (x′ + y)/2;
14.   else
15.     y = x′;
16.     x′ = (x + y)/2;
17.   end
18.   if x − x′ ≤ 0.1
19.     break;
20.   end
21. end
22. if Flag = True
23.   Set th = x;
24. end
25. Output th;
In Algorithm CST, the Boolean variable Flag = True indicates that we can find a start time th within one non-conflict VTW in Vsh(bs) while satisfying Constraints (11)–(13). Initially, we set Flag = False and th = 0. The algorithm first tries to find a VTW [ϖ, φ] ∈ Vsh(bs) such that
trans(tgu, φ − dh) + tgu + dgu ≤ φ − dh
Satisfying this inequality implies that the VTW and transition-time constraints are met if we set th = φ − dh. In this case, we let Flag = True and execute a loop (Lines 10~21) to further find a sufficiently small value x ∈ [ϖ, φ − dh] that satisfies the constraint trans(tgu, x) + tgu + dgu ≤ x, based on bisection. Finally, if Flag = True, we let th = x; otherwise, th = 0.
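The bisection search of Algorithm CST can be sketched as follows, assuming a transition-time function trans(t_prev, t) is available (for instance, the one built from Equations (4) and (5)); all names are illustrative.

```python
def compute_start_time(t_prev, d_prev, d_h, nonconflict_vtws, trans, tol=0.1):
    """Sketch of Algorithm CST: earliest feasible start time of target h.

    t_prev, d_prev: start time and duration of the previously observed target.
    d_h: observation duration of the new target h.
    nonconflict_vtws: list of (start, end) windows from the CNV step.
    trans: callable trans(t_prev, t) giving the required transition time.
    Returns 0 if no feasible start time exists (the paper's convention).
    """
    for lo, hi in nonconflict_vtws:
        latest = hi - d_h                    # latest start that still fits the VTW
        if trans(t_prev, latest) + t_prev + d_prev <= latest:
            x, y = latest, lo                # x: known feasible, y: earliest candidate
            x_mid = (x + y) / 2.0
            # Bisection: push the start time as early as the constraints allow.
            while True:
                if trans(t_prev, x_mid) + t_prev + d_prev <= x_mid:
                    x = x_mid
                    x_mid = (x_mid + y) / 2.0
                else:
                    y = x_mid
                    x_mid = (x + y) / 2.0
                if x - x_mid <= tol:
                    break
            return x
    return 0


# Example with a constant 30 s transition time: the start time converges to
# the beginning of the window (200, 800), since 100 + 50 + 30 < 200.
t_h = compute_start_time(100, 50, 60, [(200, 800)], trans=lambda tp, t: 30.0)
print(round(t_h, 1))
```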

3.3. Computation of Global Fitness Value

Based on Algorithms CNV and CST, we can calculate the partial fitness F(as) as well as the global fitness ϒ(a) using Algorithm 3, which is called compute global fitness (CGF).
Algorithm 3: Compute global fitness (CGF).
Input: A candidate solution a = [a1, a2,…, am];
Output: the global fitness value ϒ(a);
1. for each s ∈ S
2.   Denote as = <g1, g2, …, g|as|>;
3.   tg1 = min{sts,g1,k | k ∈ R};
4.   Set Γs = {g1};
5.   Set gl = g1;
6.   Set i = 2;
7.   while (i ≤ |as|)
8.     g′ = gi;
9.     According to Algorithm CNV, obtain the non-conflict VTW set Vsg′(Γs);
10.     Let V1 = {[ϖ, φ] ∈ Vsg′(Γs) | ϖ ≥ tgl + dgl};
11.     Let s, gl, g′, and V1 be the inputs of Algorithm CST, and obtain tg′;
12.     if tg′ > 0
13.       Γs := Γs ∪ {g′};
14.       gl = g′;
15.     end
16.     i = i + 1;
17.   end
18.   F(as) = Σg∈Γs ϕg − 1/e^τ − (Σg∈Γs dg)/Ave(s), where τ = 1/(|Γs| + 1);
19. end
20. Output ϒ(a) = Σs∈S F(as);
In Algorithm CGF, for each AEOS s, we use Γs to collect all observable targets in as and initially set Γs = {g1} and tg1 = mink∈R{sts,g1,k}. Then, a loop (Lines 7~17) is executed to compute the start times of the observations of the remaining targets in as, where gl ∈ Γs is the current target and g′ is the target to be analyzed. In particular, each iteration can be divided into the following steps:
Step 1: We use Algorithm CNV to obtain the non-conflict VTW set Vsg′(Γs) of g′ under the known observations of the targets in Γs.
Step 2: Let V1 = {[ϖ, φ] ∈ Vsg′(Γs) | ϖ ≥ tgl + dgl} be the feasible VTWs in Vsg′(Γs) that begin after the observation of gl, and we further use Algorithm CST to obtain the start-observing time tg′ of g′.
Step 3: If tg′ > 0, we add g′ into Γs and update gl = g′; otherwise, tg′ = 0 indicates that g′ cannot be observed by s, and we continue analyzing the next target.
After all targets in as have been analyzed, we compute the partial fitness F(as) according to (6), and the global fitness value ϒ(a) = Σs∈S F(as) is finally output.

4. Distributed Method for Observation Scheduling in the AEOS Constellation

Based on the decentralized PI algorithm, this section proposes the PIDSM for solving the time-dependent observation scheduling problem in the AEOS constellation. The algorithm can be deployed and executed concurrently on all AEOSs. By exchanging information with neighboring satellites through ISLs, each AEOS s ∈ S iteratively adds targets to or removes targets from its current ordered sequence such that the global fitness is optimized while the constraints defined in Section 2 are met.

4.1. The Decentralized Framework of PI

We start by introducing the “significance value” (respectively, the “marginal significance value”) of a target with regard to the assigned (resp. non-assigned) satellite in the PI framework.
(1) Significance: Assume g ∈ as. The significance value qs(as ⊖ g) represents the variation of F(as) after removing g from as; it can be formulated as follows:
qs(as ⊖ g) = F(as) − F(as ⊖ g)    (16)
where as ⊖ g is the sequence after removing g from as. For completeness, we set qs(as ⊖ g) = 0 for g ∉ as.
(2) Marginal significance: Assume g ∉ as. The marginal significance qs*(as ⊕ g) represents the maximal variation of F(as) after adding g into as; it can be formulated as follows:
qs*(as ⊕ g) = maxk∈{1, 2, …, |as|+1} {F(as ⊕k g) − F(as)}    (17)
where as ⊕k g is the target sequence after inserting g into the k-th position of as. If g ∈ as, we let qs*(as ⊕ g) tend toward minus infinity, i.e., qs*(as ⊕ g) → −∞.
The basic idea of the PI framework is as follows [34]: each AEOS s transmits the significance value qs(as ⊖ g) of a target g ∈ as to its neighbor satellite c through ISLs. Then, satellite c compares the received qs(as ⊖ g) with the marginal significance value qc*(ac ⊕ g) computed based on its own sequence ac. If the criterion in (18) is satisfied (see Proposition 1 below), target g is deleted from as and added into a proper position of ac such that the global fitness value ϒ(a) is increased. Similarly, the significance of target g is further transmitted to other satellites, and this value is continuously updated. The above process is repeated until (18) is no longer satisfied or ϒ(a) cannot be increased by exchanging targets between satellites. Note that the convergence of the PI algorithm is naturally guaranteed since the global fitness ϒ(a) increases whenever the target assignment is changed.
To facilitate the understanding of the PI procedure, the following conclusion reveals how global fitness is increased by exchanging targets between satellites.
Proposition 1.
Given the global fitness value ϒ(a) of a solution a, two AEOSs s and c with H(s, c) = 1, and a target g ∈ as\ac, if (18) is satisfied, the global fitness value is increased after removing g from as and inserting it into the proper position of ac.
qc*(ac ⊕ g) > qs(as ⊖ g)    (18)
Proof: 
Let pc = argmaxk∈{1, 2, …, |ac|+1}{F(ac ⊕k g) − F(ac)}. Suppose we remove g from as and add it into the pc-th position of ac, resulting in a new solution denoted as a′. By (16) and (17), the fitness of a′ is ϒ(a′) = ϒ(a) + qc*(ac ⊕ g) − qs(as ⊖ g). Therefore, the inequality ϒ(a′) > ϒ(a) holds and the global fitness value increases only when qc*(ac ⊕ g) > qs(as ⊖ g). □
Specifically, we use Fs and Fc to denote the partial fitness values of s and c, respectively. According to whether qc*(ac ⊕ g) and qs(as ⊖ g) are greater or less than zero, there are four cases:
Case 1. qs(as ⊖ g) < 0 and qc*(ac ⊕ g) > 0. Then (18) naturally holds, and Fs and Fc are both increased. In this way, the global fitness value ϒ(a′) is increased.
Case 2. qs(as ⊖ g) ≥ 0 and qc*(ac ⊕ g) > 0. Then, the satisfaction of (18) implies that the reduction of Fs is less than the increase of Fc. Hence, ϒ(a′) is increased.
Case 3. qs(as ⊖ g) ≥ 0 and qc*(ac ⊕ g) ≤ 0. Then, (18) never holds.
Case 4. qs(as ⊖ g) < 0 and qc*(ac ⊕ g) ≤ 0. Then, (18) implies |qc*(ac ⊕ g)| < |qs(as ⊖ g)|. That is, the increase of Fs is greater than the decrease of Fc, and hence ϒ(a′) is increased correspondingly.
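The significance bookkeeping behind Proposition 1 can be sketched as follows; it treats the per-satellite fitness functions as black boxes (in the paper they are evaluated by Algorithm CGF), and the toy fitness at the end is purely illustrative.

```python
def significance(F, a_s, g):
    """q_s(a_s ⊖ g) of Equation (16): fitness change from removing g from a_s."""
    if g not in a_s:
        return 0.0
    return F(a_s) - F([t for t in a_s if t != g])


def marginal_significance(F, a_s, g):
    """q_s*(a_s ⊕ g) of Equation (17): best fitness gain over all insertion positions."""
    if g in a_s:
        return float("-inf")
    base = F(a_s)
    return max(F(a_s[:k] + [g] + a_s[k:]) - base for k in range(len(a_s) + 1))


def should_transfer(F_s, F_c, a_s, a_c, g):
    """Exchange criterion (18): moving g from s to c strictly raises the global
    fitness when c's marginal gain exceeds g's significance on s."""
    return marginal_significance(F_c, a_c, g) > significance(F_s, a_s, g)


# Toy per-satellite fitness: profit sum minus a satellite-specific load penalty.
profits = {"g1": 5.0, "g2": 8.0, "g3": 3.0}
F_s = lambda seq: sum(profits[t] for t in seq) - 0.5 * len(seq)  # heavily loaded s
F_c = lambda seq: sum(profits[t] for t in seq) - 0.1 * len(seq)  # lightly loaded c
print(should_transfer(F_s, F_c, ["g1", "g2"], ["g3"], "g1"))     # True
```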
The significance value of a target can be broadcast among AEOSs by using ISLs. According to Proposition 1, based on the received significance value, an AEOS can determine whether to remove a target from, or insert a target into, its current target sequence so as to increase the global fitness value. However, the sparse ISL communication topology makes it difficult to deliver a target's significance value to all satellites, which may trap the system in a local optimum [34]. For example, consider a set of AEOSs S = {s1, s2, s3} and four targets G = {g1, g2, g3, g4}. There are ISLs between s1 and s2 as well as between s2 and s3. As illustrated in Figure 4, the current schedule solution is a1 = {g1}, a2 = {g2, g3}, and a3 = {g4}. We now assume that (i) qs2*(as2 ⊕ g1) ≤ qs1(as1 ⊖ g1), and (ii) the global fitness would be increased if target g1 were moved from a1 to a3. However, since qs2*(as2 ⊕ g1) ≤ qs1(as1 ⊖ g1), g1 cannot be added into as2 by Proposition 1, and consequently g1 cannot reach as3, leading to a local optimum.
To deliver the target significance values to all satellites, the PI algorithm framework introduces three message lists Zs, Qs, and us stored on each satellite s, which are defined as follows:
Zs = [Zs1, Zs2, …, Zsn]T is a vector that keeps track of which target is assigned to which satellite. The entry Zsg = c implies that s thinks target g is scheduled to satellite c; if s deems g to be unassigned, Zsg tends toward infinity, i.e., Zsg → ∞.
Qs = [Qs1, Qs2, …, Qsn]T is a vector recording the significance values of all targets. If Zsg → ∞, we set Qsg = 0.
us = [us1, us2, …, usm]T is a vector where the entry usc records the time at which (s thinks) satellite c received the latest message from other satellites. Once a message is passed, the timestamp usc is updated as follows:
usc =
  τr, if H(s, c) = 1
  maxh∈S: H(s,h)=1 uhc, otherwise    (19)
where τr is the time at which s receives the message from c. Apparently, if c is a neighbor of s, i.e., H[s, c] = 1, we have usc = τr; otherwise, usc equals the maximal timestamp uhc among the neighbors h of s with H[s, h] = 1.
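A minimal sketch of the timestamp-update rule of Equation (19); the dictionary-based message layout and variable names are illustrative assumptions.

```python
def update_timestamp(u_s, s, c, H, tau_r, neighbor_timestamps):
    """Update u_sc according to Equation (19).

    u_s: timestamp vector of satellite s (u_s[c] is the latest time s believes
         satellite c received a message).
    H: ISL adjacency matrix, H[s][c] == 1 if s and c are neighbors.
    tau_r: the time at which s received the latest message from c.
    neighbor_timestamps: {h: u_h} timestamp vectors received from s's neighbors.
    """
    if H[s][c] == 1:
        u_s[c] = tau_r                     # direct neighbor: first-hand information
    else:
        # Otherwise, take the freshest second-hand information about c.
        u_s[c] = max(u_h[c] for h, u_h in neighbor_timestamps.items() if H[s][h] == 1)
    return u_s


# Example: satellites 0-1-2 connected in a line; s = 1 hears from its neighbor c = 2.
H = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(update_timestamp({0: 5.0, 2: 3.0}, 1, 2, H, tau_r=9.0, neighbor_timestamps={}))
```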
The timestamp list us records the latest message exchange times, and the lists Qs and Zs together represent a schedule solution. These three lists are continuously updated through local communication with other satellites. In particular, suppose Zsg = c ∈ S. Then, AEOS s believes that, at least at time usc, target g is scheduled to satellite c and the significance value of g with regard to c is Qsg. Initially, we set as = ∅, Zsg → ∞, and Qsg = 0 for each AEOS s and every target g.
Generally, as shown in Figure 5, the decentralized PI framework contains two phases: target inclusion, and consensus and target removal; both run concurrently on each satellite. Specifically, the target inclusion phase determines whether to add a target to the current target sequence, and different significance values may result for a specific target at the end of this phase. Thus, the second phase is required, which has two parts: (i) consensus, where a consensus significance value list is reached among all satellites, and (ii) target removal, where each satellite removes the problematic targets from its current sequence. Note that only the consensus part exchanges messages among satellites using ISLs; the other procedures are performed independently. The above two phases are executed iteratively. When no change can be made to the obtained sequences for a period of time, the PI algorithm running on each satellite terminates.

4.2. Target Inclusion Phase

In this phase, each AEOS s adds some target g ∈ G\as into its sequence as by comparing Qsg with g's marginal significance qs*(as ⊕ g) computed based on its sequence as. Specifically, we first determine whether the following Inequality (20) is satisfied.
maxg∈G {qs*(as ⊕ g) − Qsg} > 0    (20)
If (20) holds, we select the target g′ = argmaxg∈G{qs*(as ⊕ g) − Qsg}, whose insertion leads to the maximum increase in ϒ(a), and let pf = argmaxp∈{1, 2, …, |as|+1}{F(as ⊕p g′) − F(as)} be the position of as at which to insert it. Then, we add g′ into the pf-th position of as and update Zsg′ such that Zsg′ = s. Meanwhile, we update the marginal significance qs*(as ⊕ g) of each target g ∈ G\as. We repeat the above process until (20) is no longer met.
At the end of this phase, we update the significance list Qs by setting Qsg = qs(as ⊖ g) for each target g. The whole target inclusion phase is expressed in Algorithm 4.
Algorithm 4: Task inclusion phase running on each satellite.
Input: AEOS s, target sequence as, Qs and Zs;
Output: New sequence as, updated lists Qs and Zs;
1. while (1)
2.   for each g ∈ G
3.     Set qs*(as ⊕ g) = 0;
4.     if g ∈ as
5.       qs*(as ⊕ g) = −∞;
6.     else
7.       for each p ∈ {1, 2, …, |as|+1}
8.         Compute F(as ⊕p g);
9.         if F(as ⊕p g) − F(as) > qs*(as ⊕ g)
10.           Let qs*(as ⊕ g) = F(as ⊕p g) − F(as);
11.         end
12.       end
13.     end
14.   end   //compute the marginal significance values of all targets
15.   if maxg∈G{qs*(as ⊕ g) − Qsg} > 0
16.     Let g′ = argmaxg∈G{qs*(as ⊕ g) − Qsg};
17.     Let pf = argmaxp∈{1, 2, …, |as|+1}{F(as ⊕p g′) − F(as)};
18.     Insert g′ into the pf-th position of as;
19.     Let Zsg′ = s;
20.   else
21.     break;
22.   end
23. end
24. Update Qs such that Qsg = qs(as ⊖ g) for each g ∈ G;
25. Output as, Qs, Zs;

4.3. Consensus and Target Removal Phase

Algorithm 4 is executed independently on each satellite, which can lead to a target being allocated to more than one satellite or the significance value of a target differing across satellites, resulting in conflicts. Thus, the second phase is required to ensure that each target g is scheduled at most once and that a consensus significance value for each target is reached among all satellites. This phase includes two steps: consensus and target removal.
Consensus: By (16), the significance value of a target g depends on the order of the sequence to which g is assigned; thus, strong synergies exist between targets. According to the basic idea of the PI algorithm, the significance list is broadcast among satellites through a series of local ISL-based communications, and the significance value of every target is gradually increased. If we simply aim to find the largest target significance value during communication, the significance of some targets may converge to a nonexistent value (in the sense of not being assigned to any satellite). To avoid this problem, we use the heuristic rules, called winning bids, proposed in [33] to reach a consensus on the significance list. This method is discussed in detail as follows.
By using ISLs, AEOS s sends the message lists Qs, Zs and us to its neighbor c with H[s, c] = 1 and receives the corresponding lists Qc, Zc, and uc. Then, s updates the stored messages Zsg and Qsg for each target g according to the rules given in Table 2. There are three actions in Table 2 and the default is the Leave action.
• Update: Zsg = Zcg, Qsg = Qcg;
  • Leave: no change made on Zsg and Qsg;
  • Reset: Zsg → ∞, Qsg → 0.
The first two columns of Table 2 record the observer of target g as believed by the sender c and the receiver s, respectively. The third column gives the action taken on Qsg and Zsg by the receiver s. Note that once a message is passed, the timestamp list us is updated according to (19) to obtain the latest time information.
Table 2. Consensus rules.
Sender c thinks Zcg is | Receiver s thinks Zsg is | Receiver's action
c | s | if Qcg > Qsg: Update
c | c | Update
c | k ∉ {c, s} | if uck > usk or Qcg > Qsg: Update
c | none | Update
s | s | Leave
s | c | Reset
s | k ∉ {c, s} | if uck > usk: Reset
s | none | Leave
k ∉ {c, s} | s | if uck > usk and Qcg > Qsg: Update
k ∉ {c, s} | c | if uck > usk: Update; otherwise: Reset
k ∉ {c, s} | k | if uck > usk: Update
k ∉ {c, s} | β ∉ {c, s, k} | if uck > usk and ucβ > usβ: Update; if uck > usk and Qcg > Qsg: Update; if ucβ > usβ and usk > uck: Reset
k ∉ {c, s} | none | if uck > usk: Update
none | s | Leave
none | c | Update
none | k ∉ {c, s} | if uck > usk: Update
none | none | Leave
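The decision logic of Table 2 can be sketched as a single function that returns the receiver's action for one target; encoding "unassigned" as None and the helper names are illustrative assumptions.

```python
def consensus_action(g, s, c, Z_s, Z_c, Q_s, Q_c, u_s, u_c):
    """Return 'update', 'leave', or 'reset' for target g according to Table 2.

    Z_x[g]: the satellite that x believes observes g (None if unassigned).
    Q_x[g]: the significance value of g recorded by x.
    u_x[k]: the timestamp of the latest information x holds about satellite k.
    """
    zc, zs = Z_c[g], Z_s[g]          # believed observers: sender c, receiver s

    def newer(k):                    # sender has fresher information about k
        return u_c[k] > u_s[k]

    if zc == c:                                        # sender claims g for itself
        if zs == s:
            return "update" if Q_c[g] > Q_s[g] else "leave"
        if zs == c or zs is None:
            return "update"
        return "update" if (newer(zs) or Q_c[g] > Q_s[g]) else "leave"
    if zc == s:                                        # sender thinks receiver holds g
        if zs == s or zs is None:
            return "leave"
        if zs == c:
            return "reset"
        return "reset" if newer(zs) else "leave"
    if zc is None:                                     # sender thinks g is unassigned
        if zs == c:
            return "update"
        if zs == s or zs is None:
            return "leave"
        return "update" if newer(zs) else "leave"
    k = zc                                             # sender names a third satellite k
    if zs == s:
        return "update" if (newer(k) and Q_c[g] > Q_s[g]) else "leave"
    if zs == c:
        return "update" if newer(k) else "reset"
    if zs == k or zs is None:
        return "update" if newer(k) else "leave"
    beta = zs                                          # receiver names yet another satellite
    if newer(k) and newer(beta):
        return "update"
    if newer(k) and Q_c[g] > Q_s[g]:
        return "update"
    if newer(beta) and u_s[k] > u_c[k]:
        return "reset"
    return "leave"
```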
Target removal: Only the local message lists Qs and Zs are updated during the above consensus procedure; the target sequence as is not affected. Thus, it is possible that Zsg ≠ s for some s ∈ S and g ∈ as when the consensus is completed. This means that s believes that it does not observe a target that is nevertheless included in its sequence as. Then, the second step, namely target removal, is needed to remove these problematic targets De = {g ∈ as | Zsg ≠ s} from as. The removal criterion is given as follows:
maxg∈De {Qsg − wsg} > 0    (21)
where wsg = F(as) − F(as ⊖ g) is the significance value of target g computed based on the current sequence as.
If (21) is satisfied, we remove from as and De the target gk = argmaxg∈De(Qsg − wsg), which is associated with the largest objective increase, and set Qsgk = 0. This process is repeated until De becomes empty or (21) is no longer met. Finally, if De ≠ ∅ but (21) is not satisfied, i.e., some targets are retained in De, we reset the observer of each remaining target g in De as s, i.e., Zsg = s, and update the significance list as Qsg = wsg. The consensus and target removal phase is expressed in Algorithm 5.
Algorithm 5: Consensus and target removal phase running on each satellite.
Input: AEOS s, target sequence as, Qs and Zs;
Output: New sequence as, and updated lists Qs and Zs;
1. Send Qs, Zs and us to each satellite c with H(s, c) = 1;
2. Receive Qc, Zc and uc from each c with H(s, c) = 1;
3. Update Qs, Zs and us according to the rules in Table 2;
4. Let De = {g ∈ as | Zsg ≠ s};
5. while maxg∈De(Qsg − wsg) > 0
6.   gk = argmaxg∈De(Qsg − wsg);
7.   Remove gk from as and De;
8.   Reset Qsgk = 0;
9. end
10. Let Qsg = wsg, ∀g ∈ as;
11. if De ≠ ∅
12.   Let Zsg = s, ∀g ∈ De;
13. end
14. Output as, Qs, Zs;

4.4. Convergence Analysis

The proposed PIDSM essentially works on an iterative optimization principle, with each satellite aiming to increase the global fitness at each iteration, and thereby convergence is guaranteed. In particular, the significance values of all targets are exchanged among satellites via local ISL communications, and the global fitness value is increased by recursively adding/removing a target into/from the target sequence of some satellite according to Proposition 1. Meanwhile, the first phase adds as many targets as possible into the target sequence of a satellite, while the second phase ensures that a consensus significance value list is reached across all satellites at each iteration. When no changes can be made to the obtained solution after executing the first and second phases for a period of time, the whole algorithm has converged.
Formally, the entire procedure of the PIDSM running on AEOS s is expressed in detail as follows.
Step 1: Run Algorithm 4 to include targets. First, add targets to as according to Criterion (20) and update the related information Zs. When no further target can be scheduled to as, recalculate Qsg for each target g in as.
Step 2: Send the information lists Qs, Zs, and us stored on satellite s to each neighbor c with H[s, c] = 1.
Step 3: Receive the information lists Qc, Zc, and uc from each satellite c with H[s, c] = 1.
Step 4: Conduct the consensus procedure. Based on the received messages Qc, Zc, and uc from each neighbor c, update the messages Qs and Zs stored on satellite s.
Step 5: Remove the problematic targets. According to Criterion (21), remove the tasks in De = {g ∈ as | Zsg ≠ s} from as and De until De = ∅ or the criterion is no longer satisfied. Then, update Qs and place each remaining target g in De back into as by setting Zsg = s.
Step 6: Repeat Steps 1–5 until no further changes are made to the scheduled solution for a specified period of time.
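Putting Steps 1–6 together, one round of the PIDSM on a single satellite can be organized as in the skeleton below. The phase functions are stand-ins for Algorithms 4 and 5, and the messaging primitives are assumed to be provided by the onboard platform; this is a structural sketch, not the authors' implementation.

```python
from dataclasses import dataclass, field


@dataclass
class LocalState:
    """Local data of one AEOS s: its target sequence a_s and the message lists."""
    a_s: list = field(default_factory=list)
    Q_s: dict = field(default_factory=dict)
    Z_s: dict = field(default_factory=dict)
    u_s: dict = field(default_factory=dict)


def pidsm_round(state, neighbors, send, receive, inclusion, consensus, removal):
    """One PIDSM iteration on satellite s (Steps 1-5); returns True if a_s changed."""
    before = list(state.a_s)
    inclusion(state)                                  # Step 1: Algorithm 4
    for c in neighbors:                               # Step 2: send Q_s, Z_s, u_s
        send(c, (state.Q_s, state.Z_s, state.u_s))
    messages = {c: receive(c) for c in neighbors}     # Step 3: receive neighbor lists
    consensus(state, messages)                        # Step 4: Table 2 rules and Eq. (19)
    removal(state)                                    # Step 5: drop conflicting targets
    return state.a_s != before


def run_pidsm(state, neighbors, send, receive, inclusion, consensus, removal,
              quiet_rounds=3):
    """Step 6: iterate until the schedule stays unchanged for quiet_rounds rounds."""
    unchanged = 0
    while unchanged < quiet_rounds:
        changed = pidsm_round(state, neighbors, send, receive,
                              inclusion, consensus, removal)
        unchanged = 0 if changed else unchanged + 1
    return state.a_s
```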

5. Simulation Results

A series of experiments are conducted in this section to evaluate the performance of the proposed PIDSM scheduling approach.

5.1. Experimental Setup

As shown in Table 3, we develop six Walker Delta constellations A–F with different orbital elements. A number of targets are generated randomly around the world according to two distributions: low density and high density. The average distances between targets are around 10 km and 100 km in the high-density and low-density cases, respectively. Moreover, the observation time of each target is generated randomly within either the long-term range [200 s, 300 s] or the short-term range [40 s, 60 s], while the profit of each target takes a random integer value from 1 to 10.
We assume that the scenario begins at 1 October 2022 00:00:00 UTCG and the horizon is set to 6 h. The VTWs between satellites and targets, as well as the look angles on the three axes, are obtained directly from the STK software. The experiments are conducted on three types of testing instances: small, medium, and large, as shown in Table 4. There are 3 × 16 = 48 combinations in total, and we generate 10 testing cases for each combination.
We compare the proposed PI-based distributed scheduling approach, namely the PIDSM, with existing algorithms; the performance metrics are defined as follows:
  • δ: The average proportion of successfully assigned targets;
• φ: The average communication times, which reflect the communication burden between satellites;
  • The relative percentage value (RPV):
RPV = ϒa/ϒbest
where ϒa is the fitness of an algorithm for a test instance, and ϒbest is the best fitness value obtained by all algorithms for the same instance.
To reduce randomness, each algorithm is independently run 10 times for each testing instance, then δ, φ, best RPV (bRPV), and average RPV (aRPV) are used to evaluate each algorithm. All algorithms are coded in MATLAB 2021a and run on a PC with an Intel Core i9-9900K CPU @3.60 GHz and 32 GB of RAM in the 64-bit Windows 10 operating system.

5.2. Comparison with State-of-the-Art Distributed Algorithms

We compare the proposed PIDSM with the existing distributed algorithms CBBA [30], CNP [36], and ECNP [32] based on the performance metrics in Section 5.1. The statistical results of the comparison for the small-type, medium-type, and large-type instances are shown in Table 5, Table 6 and Table 7, respectively.
The performance of all algorithms varies markedly in terms of δ, φ, bRPV, and aRPV; the optimal values are given in bold. From Table 5, Table 6 and Table 7, we can conclude the following:
(1) The PIDSM enables the AEOS constellation to successfully observe almost all targets, with δ ≥ 95% for each instance. This demonstrates that the proposed PIDSM has outstanding performance on the time-dependent observation scheduling problem. Moreover, except in some isolated cases, the ECNP achieves a δ similar to that of the PIDSM. The CNP observes fewer targets than the ECNP and the PIDSM, and the CBBA obtains the worst δ.
(2) The PIDSM terminates with the fewest communication times in the small-sized and medium-sized instances. In particular, although the ECNP requires marginally more communication times than the CNP for the same instance, the ECNP always obtains a better solution because it uses adaptive bidding with swarm intelligence to improve the solution quality, whereas the CNP does not. In a small-type (respectively, medium-type) instance, the communication times of the CNP are three (respectively, six) times those of the PIDSM. Moreover, in the large-type instances, the value of φ for the PIDSM increases with the number of satellites. For example, in the F-constellation, the PIDSM's φ is greater than that of the CNP.
(3) The PIDSM obtains the best aRPV and bRPV in almost all kinds of instances. With significantly more communication times, the ECNP has a slightly inferior performance. The CNP is worse than both the PIDSM and the ECNP and exhibits the largest fluctuations. The CBBA performs the worst among all methods.
In summary, compared with other competitors, the PIDSM can observe more targets with the least communication burden in most instances. In addition, the fitness value of the PIDSM is always better than that of the CBBA, CNP, and ECNP.
The variations for each algorithm on different instances are further illustrated in Figure 6. Compared with the CBBA, CNP, and ECNP, the proposed PIDSM exhibits more stability and better performance in solving the time-dependent AEOS constellation observing schedule problem with different scales.

6. Discussion

6.1. Instance Parameters

Instance parameters such as the target number, target distribution density, and target observation time may affect the performance of the algorithms. This section discusses the sensitivity to these parameters. Since the CBBA obtains the worst allocations according to Table 5, Table 6 and Table 7, we only compare the CNP, ECNP, and PIDSM for the sake of clarity, and bRPV is used as the merit. Table 8, Table 9 and Table 10 show the sensitivity analysis results for the target number, distribution density, and observation time, respectively. The optimal values for each instance parameter are given in bold.
The impact of target number: We consider six instance combinations whose target numbers belong to {10, 20, 50, 100, 200, 500}, all with a C-constellation, low-density distribution, and long-term observation duration. For each combination, test instances are generated randomly, and the CNP, ECNP, and PIDSM are run 10 times independently for each instance to eliminate data randomness. As shown in Table 8, the PIDSM outperforms the CNP and ECNP in almost all instances; only for the instance with 10 targets does the ECNP achieve a marginal lead. As shown in Figure 7, when the target number does not exceed 20, all three algorithms achieve the best RPV. As the target number grows, the RPV of the CNP drops more sharply than those of the ECNP and PIDSM. Compared with the CNP and ECNP, the variation of the target number has the least impact on the performance of the PIDSM. This demonstrates that the PIDSM is more suitable for solving satellite observation scheduling problems with a large number of targets.
The impact of target distribution density: We consider eight combinations with 100 targets, a C-constellation, and a long-term observation duration. The target distribution density is represented by the average shortest distances between targets, and the values for these eight combinations are 1 km, 5 km, 10 km, 50 km, 100 km, 200 km, 500 km, and 1000 km, respectively. Similarly, test instances are randomly generated and each algorithm is repeated 10 times. Table 9 shows that the ECNP obtains the best results in the instance with a target density of 1000 km, while the PIDSM achieves the optimal results in all other cases. The variance plot of the comparison results is shown in Figure 8. We can see that the CNP, ECNP, and PIDSM obtain the worst RPVs in the instances with a target density of 50 km, 5 km, and 10 km, respectively. However, after these worst cases, their RPVs improve as the average distances among targets increase. Especially in the case with 500 km density, the CNP, ECNP, and PIDSM all achieve the peak of their RPV. Meanwhile, the fluctuations of the PIDSM are significantly smaller than those of the CNP and ECNP, which verifies the stable performance of the PIDSM.
The impact of target observation duration: We consider six combinations with 50 targets, a C-constellation, and a low-density target distribution, whose observation durations range over [8 s, 12 s], [40 s, 60 s], [80 s, 120 s], [180 s, 220 s], and [480 s, 520 s], respectively. The experimental instances are again randomly generated, and each algorithm is repeated 10 times for each instance. As shown in Table 10, the PIDSM outperforms the CNP and ECNP in all instances. Moreover, the variance plot in Figure 9 illustrates that the PIDSM achieves its best RPV when the target duration ranges over [8 s, 12 s], [40 s, 60 s], and [80 s, 120 s]. After that, the RPV of the PIDSM decreases slightly as the target observation duration becomes large. By contrast, the RPV values of the ECNP and CNP are more sensitive to an increase in observation time. Thus, the variation of the target observation duration has the least impact on the performance of the PIDSM.

6.2. Validation of the Reassignment

In this section, we aim to validate the ability of the proposed PIDSM algorithm to handle unexpected new targets by reassigning them. We utilize the 12 combinations obtained from Table 4, each with a low target distribution density and short-term observation time, and then generate 10 testing cases for each combination. After obtaining the initial assignment results of each testing instance using the PIDSM, we add five random targets to each testing instance and execute the PIDSM again to obtain new assignments. Subsequently, we conduct an overall assignment with both original and random targets to verify the quality of the reassignment solution.
Figure 10a presents the analysis of the running time. It can be observed that, compared to the overall assignment, the PIDSM's reassignment time increases less with the instance size. This is because the proposed PIDSM can make full use of the consensus results of the original assignment, significantly reducing the number of iterations required for consensus after adding new targets. Furthermore, it is worth noting that the sum of the original assignment time and the reassignment time of our algorithm is equal to the time taken for the overall assignment. This demonstrates the high efficiency of our algorithm in the reassignment, without introducing additional computational burden when handling dynamic new targets.
Figure 10b presents the fitness value analysis, which shows that the solutions obtained through reassignment have almost the same fitness values as those obtained through the overall assignment for all testing instances. This demonstrates that our proposed PIDSM algorithm can handle dynamic new targets without sacrificing the quality of the solutions.

6.3. Runtime Analysis

We conducted a further runtime analysis of our proposed PIDSM. We obtained the required 12 combinations from Table 4 and randomly generated 10 testing instances for each combination, with the target distribution density and observation time randomly selected. The proposed PIDSM was then run on each testing instance, and the running time was recorded.
Figure 11 displays the statistical analysis results, indicating a significant increase in the running time of the PIDSM as the size of the testing instance increases, similar to the trend of communication times shown in Figure 6b. This demonstrates the advantage of our algorithm in dealing with small and medium-sized problems, particularly for small-sized problems where we can obtain a good task assignment solution in only 0.04 s. There are also some differences between Figure 6b and Figure 11. For example, in the {E × 200} combination, there is a significant difference in running time compared to the {E × 100} combination, while the difference in communication times is not significant. This is because our proposed algorithm enables satellites to communicate with each other about all tasks at once, so the increase in the number of tasks has a limited impact on communication times. However, the calculation of fitness value is more closely related to the number of tasks, and more tasks require a longer time to calculate the fitness value, ultimately resulting in an increase in the running time.

7. Conclusions

This paper addresses the problem of time-dependent observation scheduling for the AEOS constellation, where the transition time required between two consecutive observations is determined by angular change over time. This time-dependent characteristic increases the complexity of the scheduling problem.
To facilitate understanding, we propose a mathematical model of the studied problem and a new fitness function that maximizes the profit sum while also considering system load balancing. We also compute non-conflict VTWs for each non-scheduled target to ensure that each AEOS observes at most one target at a time. Using the basic idea of the distributed PI algorithm, we develop the PIDSM to solve the scheduling problem. The PIDSM iterates between two phases: the first selects the optimal task for each AEOS, while the second reaches a consensus over all AEOSs and removes targets that may decrease the overall fitness. Each AEOS runs the PIDSM concurrently, and useful information is exchanged between AEOSs via ISLs during the PIDSM procedure. The convergence of the PIDSM is naturally guaranteed since the global fitness increases whenever the schedule solution changes at the end of an iteration. Compared with the existing distributed algorithms CBBA [30], CNP [36], and ECNP [32], the PIDSM performs better on almost all experimental instances. Sensitivity analyses further validate the effectiveness of the PIDSM. In future work, we intend to reduce the communication overhead of the PI-based distributed scheduling approach.

Author Contributions

Y.F. and R.Z. conceived the study and algorithm. Y.F. completed the implementation of the algorithm and the writing of the paper. Y.F. and R.Z. supported the writing— review and editing. S.R. and S.Z. completed certain preliminary simulations and undertook the preliminary research and summary. Y.Y. provided theoretical guidance and suggestions for the revision of the paper. Y.F. provided funding support and necessary assistance for writing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Innovation 2030-Key Project of “New Generation Artificial Intelligence” under Grant 2020AAA0108203 and the National Natural Science Foundation of P.R. China under Grants 62003258, 62003265, and 62103062.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wang, X.; Wu, G.; Xing, L.; Pedrycz, W. Agile Earth Observation Satellite Scheduling over 20 Years: Formulations, Methods, and Future Directions. IEEE Syst. J. 2020, 15, 3881–3892.
2. Chatterjee, A.; Tharmarasa, R. Reward Factor-Based Multiple Agile Satellites Scheduling with Energy and Memory Constraints. IEEE Trans. Aerosp. Electron. Syst. 2022, 58, 3090–3103.
3. Lemaître, M.; Verfaillie, G.; Jouhaud, F.; Lachiver, J.-M.; Bataille, N. Selecting and Scheduling Observations of Agile Satellites. Aerosp. Sci. Technol. 2002, 6, 367–381.
4. Hosseinabadi, S.; Ranjbar, M.; Ramyar, S.; Amel-Monirian, M. Scheduling a Constellation of Agile Earth Observation Satellites with Preemption. J. Qual. Eng. Prod. Optim. 2017, 2, 47–64.
5. Wang, X.; Song, G.; Leus, R.; Han, C. Robust Earth Observation Satellite Scheduling with Uncertainty of Cloud Coverage. IEEE Trans. Aerosp. Electron. Syst. 2019, 56, 2450–2461.
6. Wang, J.; Demeulemeester, E.; Qiu, D. A Pure Proactive Scheduling Algorithm for Multiple Earth Observation Satellites under Uncertainties of Clouds. Comput. Oper. Res. 2016, 74, 1–13.
7. Gabrel, V.; Moulet, A.; Murat, C.; Paschos, V.T. A New Single Model and Derived Algorithms for the Satellite Shot Planning Problem Using Graph Theory Concepts. Ann. Oper. Res. 1997, 69, 115–134.
8. Valicka, C.G.; Garcia, D.; Staid, A.; Watson, J.-P.; Hackebeil, G.; Rathinam, S.; Ntaimo, L. Mixed-Integer Programming Models for Optimal Constellation Scheduling given Cloud Cover Uncertainty. Eur. J. Oper. Res. 2019, 275, 431–445.
9. Cho, D.-H.; Kim, J.-H.; Choi, H.-L.; Ahn, J. Optimization-Based Scheduling Method for Agile Earth-Observing Satellite Constellation. J. Aerosp. Inf. Syst. 2018, 15, 611–626.
10. Nag, S.; Li, A.S.; Merrick, J.H. Scheduling Algorithms for Rapid Imaging Using Agile Cubesat Constellations. Adv. Space Res. 2018, 61, 891–913.
11. He, Y.; Chen, Y.; Lu, J.; Chen, C.; Wu, G. Scheduling Multiple Agile Earth Observation Satellites with an Edge Computing Framework and a Constructive Heuristic Algorithm. J. Syst. Archit. 2019, 95, 55–66.
12. Mok, S.-H.; Jo, S.; Bang, H.; Leeghim, H. Heuristic-Based Mission Planning for an Agile Earth Observation Satellite. Int. J. Aeronaut. Space Sci. 2019, 20, 781–791.
13. Du, B.; Li, S.; She, Y.; Li, W.; Liao, H.; Wang, H. Area Targets Observation Mission Planning of Agile Satellite Considering the Drift Angle Constraint. J. Astron. Telesc. Instrum. Syst. 2018, 4, 047002.
14. Cui, K.; Xiang, J.; Zhang, Y. Mission Planning Optimization of Video Satellite for Ground Multi-Object Staring Imaging. Adv. Space Res. 2018, 61, 1476–1489.
15. Li, L.; Wang, Y.; Trautmann, H.; Jing, N.; Emmerich, M. Multiobjective Evolutionary Algorithms Based on Target Region Preferences. Swarm Evol. Comput. 2018, 40, 196–215.
16. Wang, X.; Han, C.; Zhang, R.; Gu, Y. Scheduling Multiple Agile Earth Observation Satellites for Oversubscribed Targets Using Complex Networks Theory. IEEE Access 2019, 7, 110605–110615.
17. He, L.; Liang, B.; Li, J.; Sheng, M. Joint Observation and Transmission Scheduling in Agile Satellite Networks. IEEE Trans. Mob. Comput. 2021, 21, 4381–4396.
18. Du, Y.; Wang, T.; Xin, B.; Wang, L.; Chen, Y.; Xing, L. A Data-Driven Parallel Scheduling Approach for Multiple Agile Earth Observation Satellites. IEEE Trans. Evol. Comput. 2019, 24, 679–693.
19. Stollenwerk, T.; Michaud, V.; Lobe, E.; Picard, M.; Basermann, A.; Botter, T. Agile Earth Observation Satellite Scheduling with a Quantum Annealer. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 3520–3528.
20. Gu, Y.; Han, C.; Chen, Y.; Xing, W.W. Mission Replanning for Multiple Agile Earth Observation Satellites Based on Cloud Coverage Forecasting. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 15, 594–608.
21. He, Y.; Xing, L.; Chen, Y.; Pedrycz, W.; Wang, L.; Wu, G. A Generic Markov Decision Process Model and Reinforcement Learning Method for Scheduling Agile Earth Observation Satellites. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 1463–1474.
22. Wei, L.; Chen, Y.; Chen, M.; Chen, Y. Deep Reinforcement Learning and Parameter Transfer Based Approach for the Multi-Objective Agile Earth Observation Satellite Scheduling Problem. Appl. Soft Comput. 2021, 110, 107607.
23. Wang, X.; Wu, J.; Shi, Z.; Zhao, F.; Jin, Z. Deep Reinforcement Learning-Based Autonomous Mission Planning Method for High and Low Orbit Multiple Agile Earth Observing Satellites. Adv. Space Res. 2022, 70, 3478–3493.
24. Zhao, X.; Wang, Z.; Zheng, G. Two-Phase Neural Combinatorial Optimization with Reinforcement Learning for Agile Satellite Scheduling. J. Aerosp. Inf. Syst. 2020, 17, 346–357.
25. Chen, J.; Chen, M.; Wen, J.; He, L.; Liu, X. A Heuristic Construction Neural Network Method for the Time-Dependent Agile Earth Observation Satellite Scheduling Problem. Mathematics 2022, 10, 3498.
26. Liu, X.; Laporte, G.; Chen, Y.; He, R. An Adaptive Large Neighborhood Search Metaheuristic for Agile Satellite Scheduling with Time-Dependent Transition Time. Comput. Oper. Res. 2017, 86, 41–53.
27. He, L.; Liu, X.; Laporte, G.; Chen, Y.; Chen, Y. An Improved Adaptive Large Neighborhood Search Algorithm for Multiple Agile Satellites Scheduling. Comput. Oper. Res. 2018, 100, 12–25.
28. Peng, G.; Dewil, R.; Verbeeck, C.; Gunawan, A.; Xing, L.; Vansteenwegen, P. Agile Earth Observation Satellite Scheduling: An Orienteering Problem with Time-Dependent Profits and Travel Times. Comput. Oper. Res. 2019, 111, 84–98.
29. Peng, G.; Song, G.; He, Y.; Yu, J.; Xiang, S.; Xing, L.; Vansteenwegen, P. Solving the Agile Earth Observation Satellite Scheduling Problem with Time-Dependent Transition Times. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 1614–1625.
30. Li, G. Online Scheduling of Distributed Earth Observation Satellite System under Rigid Communication Constraints. Adv. Space Res. 2020, 65, 2475–2496.
31. Phillips, S.; Parra, F. A Case Study on Auction-Based Task Allocation Algorithms in Multi-Satellite Systems. In Proceedings of the AIAA Scitech 2021 Forum, Virtual Event, 11–21 January 2021.
32. Si-wei, C.; Jing, C.; Lin-Cheng, S.; Yi, T. ECNP-Based Method of Distributed Dynamic Task Allocation for Multiple Observation Satellite Planning. In Proceedings of the 2010 2nd International Conference on Advanced Computer Control, Shenyang, China, 27–29 May 2010.
33. Choi, H.-L.; Brunet, L.; How, J.P. Consensus-Based Decentralized Auctions for Robust Task Allocation. IEEE Trans. Rob. 2009, 25, 912–926.
34. Zhao, W.; Meng, Q.; Chung, P.W. A Heuristic Distributed Task Allocation Method for Multivehicle Multitask Problems and Its Application to Search and Rescue Scenario. IEEE Trans. Cybern. 2015, 46, 902–915.
35. Turner, J.; Meng, Q.; Schaefer, G.; Whitbrook, A.; Soltoggio, A. Distributed Task Rescheduling with Time Constraints for the Optimization of Total Task Allocations in a Multirobot System. IEEE Trans. Cybern. 2017, 48, 2583–2597.
36. Wang, L.; Liu, M.; Meng, M.Q.-H. A Hierarchical Auction-Based Mechanism for Real-Time Resource Allocation in Cloud Robotic Systems. IEEE Trans. Cybern. 2016, 47, 473–484.
Figure 1. (a) CEOSs, where the VTW is equivalent to the OTW; (b) AEOS, where the VTW is usually longer than the OTW.
Figure 2. (a) An AEOS constellation with 36 AEOSs; (b) ISL connections, where s5 connects to its closest AEOSs: s2 and s8 in the same orbital plane and s4 and s6 in adjacent orbital planes.
Figure 3. Analysis of the non-conflict VTWs, where (a) the non-conflict VTW is located after the scheduled duration; (b) the non-conflict VTW is located before the scheduled duration; (c) there is no non-conflict VTW; and (d) the non-conflict VTW is split by the scheduled duration into two parts.
Figure 4. An example of a local optimum, where as1 = {g1}, as2 = {g2, g3}, and as3 = {g4}.
Figure 5. The framework of the PI algorithm.
Figure 6. Variation trend of δ, RPV, and φ for each algorithm on different instance types.
Figure 7. The impact of the variation of the target number on the performance of the algorithms.
Figure 8. The impact of the variation of the target distribution density on the performance of the algorithms.
Figure 9. The impact of the variation of the target observation time on the performance of the algorithms.
Figure 10. The reassignment results of the proposed PIDSM.
Figure 11. The running time results.
Table 1. Notations.
Notation | Description
S | Set of involved AEOSs
G | Set of ground targets
H | Matrix describing the ISL connections among AEOSs
R | Set of orbits during the scheduling horizon
s | Satellite index
g | Target index
dg | Observation duration of target g
ϕg | Profit of target g
Wsgk = [stsgk, etsgk] | Visible time window of satellite s and target g during orbit k ∈ R
Ψsg(t) = <γsg[t], πsg[t], ψsg[t]> | Look angle of s on the three axes at time t
Wsg = {Wsgk : k ∈ R} | Set of VTWs for AEOS s and target g
as = <gs1, gs2, …, gs|as|> | Target sequence assigned to satellite s
Γs | Observable targets of as for satellite s
a | Candidate schedule solution
tg | Start-observing time of target g
transA(tg, th) | Transition time between the observations starting at tg and th
F(as) | Partial fitness value for satellite s
ϒ(a) | Global fitness value for solution a
qsi(as, g) | Significance: the variation of F(as) after removing g from as
qc*(ac, g) | Marginal significance: the maximal variation of F(as) after adding g into as
Zs | Information stored in satellite s that keeps the target assignment
Qs | Information stored in satellite s that records the significance values of all targets
Us | Timestamp list for satellite s
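For readers who prefer code to notation, the dataclasses below show one possible way to hold the quantities in Table 1 in memory; the class and field names (VTW, Target, SatelliteState) are our own illustrative choices, not an interface defined in the paper.

```python
# Illustrative mapping of the notation in Table 1 onto simple data structures
# (class and field names are ours, not an interface defined in the paper).
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class VTW:
    satellite: str   # s
    target: str      # g
    orbit: int       # k in R
    start: float     # st_sgk
    end: float       # et_sgk


@dataclass
class Target:
    name: str        # g
    duration: float  # d_g, required observation duration
    profit: float    # phi_g


@dataclass
class SatelliteState:
    name: str                                                       # s
    sequence: List[str] = field(default_factory=list)               # a_s
    start_times: Dict[str, float] = field(default_factory=dict)     # t_g for g in a_s
    significances: Dict[str, float] = field(default_factory=dict)   # Q_s
    timestamps: Dict[str, float] = field(default_factory=dict)      # U_s
```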
Table 3. Satellite constellations used in the simulation.
Constellation | Altitude (km) | Inclination (deg) | Planes | Satellites
A | 6000 | 60 | 2 | 6
B | 5000 | 53.8 | 3 | 9
C | 3000 | 60 | 4 | 16
D | 2800 | 70 | 4 | 20
E | 2500 | 65 | 4 | 32
F | 2000 | 60 | 5 | 40
Table 4. Parameter size for each instance type.
Instance Type | Constellations | Task Number | Distribution Density | Observation Time | Combination Number
small | A, B | 3, 5 | low, high | long-term, short-term | 2 × 2 × 2 × 2 = 16
medium | C, D | 30, 50 | low, high | long-term, short-term | 2 × 2 × 2 × 2 = 16
large | E, F | 100, 200 | low, high | long-term, short-term | 2 × 2 × 2 × 2 = 16
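The combination counts in Table 4 follow directly from the Cartesian product of the four parameter ranges; a minimal sketch is given below (the dictionary mirrors the "small" row, and the key names are our own).

```python
# Enumerating the 2 x 2 x 2 x 2 = 16 parameter combinations of one instance
# type in Table 4 (the dictionary mirrors the "small" row; naming is ours).
from itertools import product

small = {
    "constellation": ["A", "B"],
    "task_number": [3, 5],
    "distribution_density": ["low", "high"],
    "observation_time": ["long-term", "short-term"],
}

combinations = [dict(zip(small, values)) for values in product(*small.values())]
assert len(combinations) == 16
print(combinations[0])   # e.g. {'constellation': 'A', 'task_number': 3, ...}
```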
Table 5. The comparison results for small-type instances.
Instance | CBBA (δ, φ, bRPV, aRPV) | CNP (δ, φ, bRPV, aRPV) | ECNP (δ, φ, bRPV, aRPV) | PIDSM (δ, φ, bRPV, aRPV)
{A, 3, high, long} | 66.70%, 216, 0.847, 0.847 | 100%, 21, 100%, 0.983 | 100%, 45, 1, 0.997 | 100%, 8.7, 1, 1
{A, 3, high, short} | 33.30%, 216, 0.486, 0.486 | 100%, 21, 100%, 0.971 | 100%, 45, 1, 0.975 | 100%, 8.9, 0.997, 0.997
{A, 3, low, long} | 66.70%, 216, 0.847, 0.847 | 100%, 21, 100%, 0.983 | 100%, 45, 1, 0.994 | 100%, 8, 1, 1
{A, 3, low, short} | 66.70%, 216, 0.771, 0.771 | 100%, 21, 100%, 0.972 | 100%, 45, 1, 0.990 | 100%, 8, 1, 1
{A, 5, high, long} | 40%, 216, 0.492, 0.492 | 88%, 35, 0.983, 0.929 | 100%, 75, 1, 0.995 | 100%, 7, 1, 1
{A, 5, high, short} | 80%, 216, 0.854, 0.854 | 86%, 35, 100%, 0.852 | 98%, 75, 1, 0.982 | 98%, 11.1, 1, 1
{A, 5, low, long} | 60%, 216, 0.745, 0.745 | 100%, 35, 0.981, 0.971 | 98%, 75, 0.981, 0.967 | 98%, 7, 1, 1
{A, 5, low, short} | 20%, 216, 0.429, 0.429 | 100%, 35, 0.963, 0.926 | 98%, 75, 1, 0.952 | 98%, 7, 1, 1
{B, 3, high, long} | 66.70%, 432, 0.731, 0.731 | 100%, 30, 100%, 0.917 | 100%, 63.5, 1, 0.932 | 100%, 8, 0.998, 0.998
{B, 3, high, short} | 66.70%, 432, 0.783, 0.783 | 100%, 30, 100%, 0.971 | 100%, 63.3, 1, 0.990 | 100%, 8, 0.996, 0.996
{B, 3, low, long} | 100%, 432, 1, 1 | 97%, 30, 100%, 0.941 | 100%, 63.4, 1, 0.982 | 100%, 10.2, 1, 1
{B, 3, low, short} | 33.30%, 432, 0.478, 0.478 | 90%, 30, 0.946, 0.860 | 100%, 63.3, 0.947, 0.940 | 100%, 10.2, 1, 1
{B, 5, high, long} | 40%, 432, 0.492, 0.492 | 100%, 50, 0.983, 0.971 | 100%, 105.5, 1, 0.992 | 100%, 12, 1, 1
{B, 5, high, short} | 20%, 432, 0.277, 0.277 | 100%, 50, 0.979, 0.961 | 100%, 105.6, 1, 0.989 | 100%, 8.9, 1, 1
{B, 5, low, long} | 20%, 432, 0.239, 0.239 | 94%, 50, 0.983, 0.896 | 100%, 105.6, 0.983, 0.982 | 100%, 6, 1, 1
{B, 5, low, short} | 20%, 432, 0.236, 0.236 | 90%, 50, 0.980, 0.851 | 100%, 105.9, 0.980, 0.975 | 100%, 6, 1, 1
Table 6. The comparison results for medium-type instances.
Instance | CBBA (δ, φ, bRPV, aRPV) | CNP (δ, φ, bRPV, aRPV) | ECNP (δ, φ, bRPV, aRPV) | PIDSM (δ, φ, bRPV, aRPV)
{C, 30, high, long} | 0%, 512, 0, 0 | 85%, 300, 0.966, 0.918 | 89.7%, 645.6, 0.996, 0.991 | 90%, 39, 1, 1
{C, 30, high, short} | 3.3%, 1024, 0.053, 0.053 | 83%, 300, 0.938, 0.870 | 87.3%, 645.4, 1, 0.982 | 90%, 37.2, 0.999, 0.999
{C, 30, low, long} | 0%, 512, 0, 0 | 100%, 300, 0.982, 0.974 | 100%, 645.7, 0.996, 0.994 | 100%, 52.4, 1, 1
{C, 30, low, short} | 6.7%, 1152, 0.101, 0.101 | 99%, 300, 0.977, 0.958 | 100%, 645.7, 0.996, 0.992 | 100%, 39.8, 1, 1
{C, 50, high, long} | 6%, 1024, 0.086, 0.086 | 96%, 500, 0.961, 0.934 | 98.6%, 1074.7, 0.995, 0.988 | 100%, 56.2, 1, 1
{C, 50, high, short} | 12%, 1152, 0.217, 0.217 | 92%, 500, 0.949, 0.903 | 97.6%, 1077.1, 0.992, 0.983 | 99.2%, 57, 1, 0.997
{C, 50, low, long} | 16%, 1024, 0.240, 0.240 | 93%, 500, 0.971, 0.938 | 95.6%, 1075.1, 0.987, 0.973 | 98%, 66.6, 1, 0.999
{C, 50, low, short} | 6%, 1024, 0.114, 0.114 | 92%, 500, 0.960, 0.926 | 95.2%, 1075.7, 0.981, 0.973 | 96.8%, 83.5, 1, 0.993
{D, 30, high, long} | 3.3%, 1440, 0.043, 0.043 | 100%, 300, 0.978, 0.975 | 100%, 647.4, 0.997, 0.994 | 100%, 54.8, 1, 0.999
{D, 30, high, short} | 13.3%, 1600, 0.231, 0.231 | 100%, 300, 0.981, 0.965 | 100%, 649, 0.993, 0.987 | 100%, 45.4, 1, 0.997
{D, 30, low, long} | 6.7%, 1440, 0.087, 0.087 | 99%, 300, 0.982, 0.957 | 100%, 648.1, 0.996, 0.993 | 100%, 38.1, 1, 0.997
{D, 30, low, short} | 13.3%, 1280, 0.226, 0.226 | 99%, 300, 0.971, 0.947 | 100%, 648.7, 0.995, 0.991 | 100%, 42.9, 1, 1
{D, 50, high, long} | 34%, 1600, 0.503, 0.503 | 98%, 500, 0.986, 0.951 | 100%, 1080.5, 0.994, 0.991 | 100%, 38.7, 1, 1
{D, 50, high, short} | 12%, 1600, 0.183, 0.183 | 96%, 500, 0.964, 0.947 | 100%, 1078.3, 0.992, 0.989 | 100%, 81.8, 1, 0.996
{D, 50, low, long} | 6%, 1440, 0.087, 0.087 | 98%, 500, 0.976, 0.953 | 100%, 1080.6, 0.991, 0.989 | 100%, 43.5, 1, 0.999
{D, 50, low, short} | 0%, 720, 0, 0 | 97%, 500, 0.981, 0.935 | 100%, 1079.8, 0.993, 0.991 | 100%, 78.2, 1, 0.999
Table 7. The comparison results for large-type instances.
Instance | CBBA (δ, φ, bRPV, aRPV) | CNP (δ, φ, bRPV, aRPV) | ECNP (δ, φ, bRPV, aRPV) | PIDSM (δ, φ, bRPV, aRPV)
{E, 100, high, long} | 3.0%, 3328, 0.046, 0.046 | 96%, 1000, 0.975, 0.952 | 98.8%, 2175, 0.997, 0.994 | 98.8%, 179.6, 1, 0.998
{E, 100, high, short} | 2.0%, 3072, 0.031, 0.031 | 95%, 1000, 0.944, 0.922 | 98.8%, 2174, 0.998, 0.994 | 99.0%, 597.4, 1, 1
{E, 100, low, long} | 2.0%, 3072, 0.030, 0.030 | 95%, 1000, 0.963, 0.944 | 97.7%, 2174, 0.996, 0.993 | 98.0%, 167.2, 1, 0.999
{E, 100, low, short} | 5.0%, 2816, 0.085, 0.085 | 94%, 1000, 0.952, 0.928 | 97.7%, 2176, 0.995, 0.991 | 97.1%, 223.8, 1, 0.997
{E, 200, high, long} | 0.5%, 3328, 0.008, 0.008 | 97%, 2000, 0.967, 0.950 | 99.3%, 4352, 0.995, 0.993 | 99.5%, 323.5, 1, 0.998
{E, 200, high, short} | 7.0%, 3072, 0.118, 0.118 | 95%, 2000, 0.949, 0.929 | 99.3%, 4354, 0.996, 0.993 | 99.5%, 418.3, 1, 0.999
{E, 200, low, long} | 1.0%, 3072, 0.016, 0.016 | 92%, 2000, 0.937, 0.918 | 98.4%, 4350, 0.995, 0.992 | 98.7%, 196.2, 1, 0.998
{E, 200, low, short} | 2.5%, 3328, 0.041, 0.041 | 89%, 2000, 0.909, 0.866 | 97.2%, 4353, 0.989, 0.984 | 98.5%, 338.7, 1, 0.999
{F, 100, high, long} | 3.0%, 7200, 0.047, 0.047 | 94%, 1000, 0.950, 0.934 | 98.0%, 2181, 0.993, 0.992 | 98.0%, 2477.2, 1, 0.999
{F, 100, high, short} | 1.0%, 4480, 0.014, 0.014 | 94%, 1000, 0.943, 0.924 | 98.0%, 2180, 0.981, 0.980 | 98.2%, 6902.3, 1, 0.988
{F, 100, low, long} | 6.0%, 4800, 0.092, 0.092 | 93%, 1000, 0.940, 0.910 | 98.9%, 2180, 0.995, 0.991 | 99.0%, 2007.6, 1, 0.999
{F, 100, low, short} | 5.0%, 6080, 0.074, 0.074 | 91%, 1000, 0.928, 0.885 | 99.0%, 2179, 0.990, 0.988 | 99.0%, 5549.8, 1, 0.998
{F, 200, high, long} | 1.0%, 4480, 0.016, 0.016 | 92%, 2000, 0.952, 0.929 | 95.0%, 4361, 0.990, 0.989 | 95.5%, 11096.6, 1, 0.999
{F, 200, high, short} | 1.5%, 4480, 0.026, 0.026 | 90%, 2000, 0.946, 0.925 | 94.7%, 4359, 0.996, 0.993 | 94.6%, 8609.3, 1, 0.996
{F, 200, low, long} | 1.5%, 4800, 0.023, 0.023 | 89%, 2000, 0.910, 0.891 | 97.7%, 4359, 0.995, 0.990 | 98.0%, 4067.6, 1, 0.999
{F, 200, low, short} | 4.5%, 4480, 0.072, 0.072 | 88%, 2000, 0.860, 0.838 | 96.5%, 4358, 0.955, 0.953 | 97.2%, 12125.2, 1, 0.967
Table 8. Comparison results of different target numbers.
Target Number | CNP (bRPV, aRPV) | ECNP (bRPV, aRPV) | PIDSM (bRPV, aRPV)
10 | 0.9881, 0.9740 | 1, 0.9961 | 0.9995, 0.9953
20 | 0.9774, 0.9651 | 0.9948, 0.9932 | 1, 1
50 | 0.9374, 0.9173 | 0.9885, 0.9770 | 1, 0.9832
100 | 0.9525, 0.9280 | 0.9876, 0.9762 | 1, 0.9868
200 | 0.8601, 0.8394 | 0.9541, 0.9369 | 1, 0.9923
500 | 0.8101, 0.8224 | 0.9541, 0.9269 | 1, 0.9866
Table 9. Comparison results of different target distribution densities.
Target Distribution Density | CNP (bRPV, aRPV) | ECNP (bRPV, aRPV) | PIDSM (bRPV, aRPV)
1 | 0.9367, 0.8884 | 0.9966, 0.9866 | 1, 0.9881
5 | 0.9328, 0.8907 | 0.9868, 0.9750 | 1, 0.9947
10 | 0.9655, 0.9244 | 0.9979, 0.9766 | 1, 0.9914
50 | 0.9024, 0.8600 | 0.9903, 0.9711 | 1, 0.9961
100 | 0.9030, 0.8615 | 0.9822, 0.9552 | 1, 0.9816
200 | 0.9422, 0.9186 | 0.9962, 0.9874 | 1, 0.9945
500 | 0.9492, 0.9109 | 0.9957, 0.9862 | 1, 0.9973
1000 | 0.9412, 0.9238 | 1, 0.9936 | 0.9988, 0.9928
Table 10. Comparison results of different target observation durations.
Target Observation Duration | CNP (bRPV, aRPV) | ECNP (bRPV, aRPV) | PIDSM (bRPV, aRPV)
[8, 12] | 0.9596, 0.9388 | 0.9959, 0.9869 | 1, 0.9988
[40, 60] | 0.9707, 0.9381 | 0.9874, 0.9732 | 1, 0.9986
[80, 120] | 0.9705, 0.9374 | 0.9940, 0.9802 | 1, 0.9985
[180, 220] | 0.9550, 0.9177 | 0.9853, 0.9788 | 1, 0.9905
[480, 520] | 0.9422, 0.9094 | 0.9875, 0.9796 | 1, 0.9883

