Article

Flight Schedule Problem Optimization Based on Discrete Memory-Enhanced Restructured Particle Swarm Optimization Algorithm

1
School of Artificial Intelligence, Fuzhou Technology and Business University, Fuzhou 350715, China
2
School of Computer Science and Mathematics, Fujian University of Technology, Fuzhou 350118, China
*
Author to whom correspondence should be addressed.
Algorithms 2026, 19(3), 233; https://doi.org/10.3390/a19030233
Submission received: 25 December 2025 / Revised: 17 March 2026 / Accepted: 18 March 2026 / Published: 19 March 2026

Abstract

Flight Schedule Problem optimization is a typical NP-hard combinatorial optimization problem that is challenging to solve with traditional algorithms, so metaheuristic algorithms are commonly adopted for such problems. This paper proposes a Discrete Memory-Enhanced Restructured Particle Swarm Optimization algorithm (DMERPSO) to address Flight Schedule Problem optimization. Firstly, this paper designs a hybrid particle encoding scheme capable of simultaneously handling flight time adjustments (integer variables) and route selections (categorical variables) for the Flight Schedule Problem. Secondly, a new particle position update equation is provided based on probabilistic selection among the three terms of the Memory-Enhanced Restructured Particle Swarm Optimization (MERPSO) algorithm, and the calculation of the selection probabilities is designed. Thirdly, the two strategies and the perturbation term of MERPSO are improved so that they are adapted to the discrete Flight Schedule Problem. Finally, simulation experiments are conducted with DMERPSO on real flight data from multiple Chinese airports with the objective of minimizing total flight delays, obtaining better solutions faster than various benchmark algorithms. The DMERPSO algorithm exhibits significant advantages in reducing total delays, improving solution stability, and enhancing robustness, which validates DMERPSO as an effective new approach for solving Flight Schedule Problem optimization problems.

1. Introduction

Flight Schedule Problem optimization is aimed at efficiently allocating resources, such as aircraft and crews, in order to enhance overall efficiency; it represents a complex combinatorial optimization problem constrained by aircraft routes, runway slot capacities, and turnaround requirements [1]. Traditional methods for solving the Flight Schedule Problem rely primarily on mathematical programming and exact algorithms [2]. The classical paradigm of traditional Flight Schedule Problem optimization is centered on mathematical programming, particularly relying on Mixed-Integer Linear Programming (MILP) to construct precise deterministic models [3]. This paradigm rigorously formulates objective functions—such as maximizing the total operational profit or minimizing overall costs—alongside a set of hard constraints [4]. Because the Flight Schedule Problem is NP-hard, solving it exactly primarily depends on algorithms such as branch-and-bound [5] and branch-and-price [6]. The branch-and-price algorithm decomposes the problem into a master problem (selecting optimal flight strings/crew pairings) and pricing subproblems (generating new feasible task columns within resource-constrained networks), thereby effectively handling large-scale combinatorial spaces. The seminal work of Barnhart et al. [7] systematically established this framework, laying the algorithmic foundation for theoretical research on core issues such as fleet assignment and crew scheduling. However, pure methods of this kind often face computational intractability when addressing ultra-large-scale problems, multi-stage integration, or dynamic stochastic disruptions. In response, current research frontiers focus on enhancing the scalability and practicality of the traditional paradigm through several key avenues [8,9,10].
These innovations allow the traditional cornerstone of mathematical programming to sustain its vitality, offering interpretable and high-performance solutions for complex scheduling problems through integration with modern computational intelligence.
However, with the increase in the scale of the Flight Schedule Problem, these traditional algorithms have become computationally impractical. To overcome the computational bottlenecks of traditional methods, both academic and industrial communities commonly employ metaheuristic algorithms to obtain high-quality approximate optimal solutions, which demonstrate strong compatibility with the large-scale and high-dimensional characteristics of Flight Schedule Problem optimization problems [11,12]. Metaheuristic algorithms have emerged as a significant computational paradigm in modern combinatorial optimization and have been widely and successfully applied to the Flight Schedule Problem [13]. Their core advantages lie in their strong generality, low dependency on specific problem structures, and their ability to effectively explore vast solution spaces. For example, a Genetic Algorithm approach was used to build feasible daily schedules and assign optimal aircraft for hub-and-spoke airlines [14]; Ant Colony Optimization mimics the pheromone communication of ants to find optimal paths through positive feedback, making it particularly suitable for flight string or aircraft routing optimization [15]; and Tabu Search utilizes memory structures to avoid cyclical searching, which has been successfully applied to flight schedule development, tail assignment, and crew scheduling [16].
The Particle Swarm Optimization (PSO) algorithm is a metaheuristic approach characterized by its conceptual simplicity, fast convergence speed, and strong global search capability, and it has demonstrated significant potential in solving complex optimization problems [17]. However, when addressing Flight Schedule Problem optimization involving numerous discrete variables and complex multimodal search spaces, the standard PSO algorithm’s inherent “global best” guidance mechanism often leads to premature loss of population diversity, thereby trapping the algorithm in local optima—a phenomenon known as “premature convergence”. To address these limitations, researchers have explored algorithmic improvements, for example by integrating PSO with neighborhood search [18].
In recent years, the Restructured Particle Swarm Optimization (RPSO) algorithm was developed based on the linear system theory of PSO, preserving optimization performance while simplifying the algorithm [19]. RPSO simplified the standard PSO framework by designing a new particle position update formulation with a convergent term and a perturbation term, thereby introducing novel search mechanisms [20]. RPSO has relatively good theoretical convergence support and is simple and easy to use. However, RPSO discards the population’s memory, including the historical positions and fitness information of the particles. A variant of RPSO, the Memory-Enhanced Restructured Particle Swarm Optimization (MERPSO) algorithm, was therefore designed; it uses an experience selection strategy and a block search strategy to store this memory, constructing two new learning samples to replace the original ones so as to balance local exploitation and global exploration [21].
However, the original RPSO algorithm was primarily designed for continuous optimization problems, whereas Flight Schedule Problem optimization involves discrete decision variables. Its reliance on a single global best still risks premature convergence, particularly in highly multimodal problems such as Flight Schedule Problem optimization. This paper presents a discretized RPSO by transforming the variant of RPSO, the MERPSO algorithm, into a Discrete Memory-Enhanced Restructured Particle Swarm Optimization algorithm (DMERPSO), which is then applied to solve single-objective Flight Schedule Problem optimization. Experimental results demonstrate significant improvements in both computational efficiency and solution reliability. The main contributions of this work are as follows:
(1)
The deep analysis of the model of Flight Schedule Problem optimization is conducted so that a complete flight route can be mathematically described. For the Flight Schedule Problem optimization model to be solved by the RPSO algorithm, the decision variables affecting flights are obtained and the equation of particle expression is designed. A hybrid particle encoding scheme capable of simultaneously handling integer-type (adjustment time) and categorical-type (route selection) decision variables is designed.
(2)
A probability-based discrete position update formula is developed, enabling particles to probabilistically select between “exploitation” samples (sg), “exploration” samples (mp), and random perturbation terms (dt) to achieve balanced trade-offs between exploitation and exploration in discrete solution spaces.
(3)
The core learning strategies are restructured through discretization: the “mean” operation in continuous EDS was transformed into a “voting/majority rule” mechanism for discrete EDS; the “hyper-rectangular block” structure in continuous BSS was reformulated into adaptive “integer interval blocks” and “discrete set blocks” suitable for different variable types, while preserving its core concept of dynamic splitting and merging.

2. Memory-Enhanced Restructured Particle Swarm Optimization Algorithm

This section first introduces the standard Particle Swarm Optimization (PSO) algorithm and its significant variant—Restructured Particle Swarm Optimization (RPSO). Then, it elaborates on how the Memory-Enhanced Restructured Particle Swarm Optimization (MERPSO) algorithm significantly improves performance by incorporating the memory-enhancement mechanisms of three terms: “experience selection”, “block search” and “random perturbation”.

2.1. Restructured Particle Swarm Optimization (RPSO)

PSO, proposed in 1995, was inspired by the foraging behavior of bird flocks [22]; the classical velocity and position update equations are given in Equations (1) and (2):
v_{id}(t+1) = w \cdot v_{id}(t) + c_1 r_1 \left( pbest_{id}(t) - x_{id}(t) \right) + c_2 r_2 \left( gbest_d(t) - x_{id}(t) \right)    (1)
x_{id}(t+1) = x_{id}(t) + v_{id}(t+1)    (2)
where pbest is the particle’s own historically optimal position and gbest is the globally best-known position of the entire swarm; v_{id}(t) and x_{id}(t) represent the velocity and position of particle i in the d-th dimension during the t-th iteration, respectively; w denotes the inertia weight; c1 and c2 are learning factors; and r1 and r2 are random numbers uniformly distributed within [0, 1]. PSO is highly susceptible to premature convergence toward a non-global optimal gbest, leading to diversity loss and entrapment in local optima [23].
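As a minimal illustration of Equations (1) and (2), the following Python sketch applies one velocity-and-position update to a single particle. The function name and default parameter values are ours, chosen for illustration only:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One classical PSO update (Equations (1) and (2)) for one particle:
    the velocity is pulled toward pbest and gbest with random weights,
    then added to the current position."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

Note that when a particle already coincides with both pbest and gbest and has zero velocity, the update leaves it in place, which is one intuition behind the premature-convergence issue discussed above.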
Based on the results of linear system theoretical analyses of PSO, the Restructured Particle Swarm Optimization (RPSO) algorithm was proposed [19,20], which employs only a single position update equation, as shown in Equation (3):
x_{i+1} = c_r \times pbest + (1 - c_r) \times gbest + G(t)    (3)
where cr represents the acceleration coefficient, a random number within the interval [0, 1], and G(t) denotes the perturbation term that can be designed by the user. For instance, the calculation method for G(t) was formulated as in Equation (4) [19]:
G(t) = (s \times r_3) \times (0.8 + \omega) \times \left( 1 - \frac{t}{T} \right)    (4)
Here, s is set to one-quarter of the problem search range, r3 is a random number within the interval [−1, 1], ω is similar to the inertia weight of the PSO algorithm, with its value linearly decreasing from 0.8 to 0.2, t denotes the current generation, and T is the maximum number of iterations. The RPSO algorithm simplifies the parametric structure and directly “restructures” new particle positions: each particle learns from both the personal best particle (pbest) and the global best particle (gbest) with random weights to perform exploitation, while the designer-specified perturbation term G(t) controls exploration.
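The RPSO update of Equations (3) and (4) can be sketched as follows. This is a hedged illustration, not code from the paper; with s = 0 the update reduces to a random convex combination of pbest and gbest:

```python
import random

def rpso_update(pbest, gbest, s, omega, t, T):
    """One RPSO position update (Equations (3) and (4)): each dimension
    is a random convex combination of pbest and gbest plus a
    perturbation G(t) whose amplitude shrinks over the iterations."""
    new_pos = []
    for d in range(len(pbest)):
        cr = random.random()                         # acceleration coefficient in [0, 1]
        r3 = random.uniform(-1.0, 1.0)               # random number in [-1, 1]
        g = (s * r3) * (0.8 + omega) * (1 - t / T)   # perturbation term G(t)
        new_pos.append(cr * pbest[d] + (1 - cr) * gbest[d] + g)
    return new_pos
```

With the perturbation disabled (s = 0), every new coordinate necessarily lies between the corresponding pbest and gbest coordinates, which makes the exploitation role of the first two terms explicit.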

2.2. Memory-Enhanced Restructured Particle Swarm Optimization (MERPSO)

RPSO remains dependent on the single gbest. Therefore, if gbest is trapped in a local optimum of the search area, the evolutionary trajectory of the entire swarm will still be severely misguided. To address the issue of RPSO’s over-reliance on gbest, the Memory-Enhanced Restructured Particle Swarm Optimization (MERPSO) algorithm was proposed [21]. Effective guiding information should not be derived solely from the current single “champion” gbest, but rather from the collective wisdom crystallized by all “elites” of the entire search history.

2.2.1. Core Idea and Update Mechanism

In MERPSO, the concepts of pbest and gbest are completely abandoned, and the position update mechanism is transformed into a probabilistic selection-based learning approach. The position updates of particles are determined through probabilistic selection from three “learning samples”, representing distinct search strategies, instead of by velocity and displacement [21]. The position update formula is as shown in Equation (5):
x_{i,d}^{t+1} = \begin{cases} sg_d^t, & \text{if } rand() \le P_{sg} \\ mp_{i,d}^t, & \text{if } P_{sg} < rand() \le P_{sg} + P_{mp} \\ dt_{i,d}^t, & \text{otherwise} \end{cases}    (5)
where x_{i,d}^{t+1} denotes the new position of particle i in the d-th dimension at the (t + 1)-th iteration. The three core learning samples are defined as follows: (1) The experience-guided sample (sg), which represents exploitation of promising regions. This sample is generated from an “experience set” storing numerous historical optimal solutions. (2) The block exploration sample (mp), representing exploration of uncharted regions. This sample is generated through a “block search strategy” designed to guide particles toward exploring less visited novel regions of the solution space. (3) The random perturbation term (dt), which introduces minor stochastic variations to the current particle position for maintaining population diversity and preventing premature stagnation.
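The per-dimension selection of Equation (5) can be sketched in a few lines of Python (an illustrative sketch; function and argument names are ours):

```python
import random

def merpso_select(sg, mp_i, dt_i, p_sg, p_mp):
    """Probabilistic per-dimension selection of Equation (5): each
    dimension of the new position is copied from sg, mp, or dt
    according to the cumulative probabilities p_sg and p_sg + p_mp."""
    new_pos = []
    for d in range(len(sg)):
        r = random.random()
        if r <= p_sg:
            new_pos.append(sg[d])        # exploit the experience-guided sample
        elif r <= p_sg + p_mp:
            new_pos.append(mp_i[d])      # explore via the block sample
        else:
            new_pos.append(dt_i[d])      # random perturbation
    return new_pos
```

Setting p_sg = 1 makes the particle copy sg in every dimension, illustrating how the probabilities steer the exploitation/exploration balance.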

2.2.2. Core Learning Strategies

The quality of the generated sg and mp samples is critical, so the MERPSO algorithm incorporates two sophisticated memory-enhanced strategies.
(1) 
Experience Selection Strategy (EDS)
The core of the EDS strategy lies in maintaining an external archive called the Experience Set (ES), which maintains the top Np solutions with the best fitness performance throughout the entire search history. The ES is updated after each iteration following fitness evaluation, preserving the most promising positions. The EDS strategy generates learning samples from the ES, denoted as SG. Each dimension of SG is independently determined according to Equation (6), meaning that every dimension of SG is computed as the average value of that dimension across K randomly selected solutions from the ES. In this algorithm, the value of K is set to 10.
sg_d = \frac{1}{K} \sum_{k=1}^{K} ES_d(r_k)    (6)
where ESd(rk) denotes the d-th dimension of the rk-th solution in the ES, and rk is a random integer within the interval [1, Np]. To enhance the algorithm’s exploitation capability in the later stages, Np is gradually reduced to narrow the scope of solutions preserved in the experience set. Np is calculated according to Equation (7):
N_p = \left\lceil ps \times \left( 1 - \frac{t-1}{T} \right) \right\rceil    (7)
where ps represents the population size, ⌈·⌉ denotes the ceiling operation, t is the current generation number, and T is the maximum iteration limit.
The ES stores the most historically fit solutions discovered by the algorithm since its initiation. When generating sg samples to guide the entire population, the algorithm randomly selects multiple elite historical solutions from the ES and forms a “consensus” direction by calculating their mean.
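The EDS sampling of Equation (6) and the shrinking archive size of Equation (7) can be sketched as follows (a hedged illustration; the experience set is modeled as a plain list of solution vectors):

```python
import math
import random

def eds_sample(es, K=10):
    """Equation (6): each dimension of sg is the mean of that dimension
    over K solutions drawn at random (with replacement, independently
    per dimension) from the experience set es."""
    D = len(es[0])
    return [sum(random.choice(es)[d] for _ in range(K)) / K for d in range(D)]

def np_size(ps, t, T):
    """Equation (7): the archive size shrinks linearly with the
    generation count, ceil(ps * (1 - (t-1)/T))."""
    return math.ceil(ps * (1 - (t - 1) / T))
```

When every archived solution is identical, the consensus sample reproduces it exactly; as t approaches T, np_size narrows the archive toward the single best solution, sharpening exploitation.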
(2) 
Block Search Strategy (BSS)
The block search strategy (BSS) dynamically partitions the entire multi-dimensional search space into a series of hyper-rectangular “blocks” and continuously monitors the frequency of particle visits to each block. BSS generates the learning samples denoted as MP: each particle Xi has a corresponding learning sample mpi, which is generated by randomly combining its personal best pbi with a randomly selected position pr within a block, as determined by Equation (8):
mp_{i,d} = \begin{cases} pb_{i,d}, & rand > q_i \\ pr_d\{b\}, & \text{otherwise} \end{cases}    (8)
where mpi,d denotes the d-th dimension of the i-th learning sample mpi. Each dimension has a probability qi of being set to prd{b} and a probability (1 − qi) of being set to pbi,d. Here, prd{b} represents a random position within the b-th block of the d-th dimension; b is the result of a roulette-wheel selection among all blocks in that dimension. The selection weight for each block is given by Wb = Mb − nb, where nb is the number of particles within that block and Mb is the upper limit on the number of particles in a block. For exploration, BSS tends to generate exploratory samples mp from “less visited” (low-visit-frequency) blocks, thereby guiding particles to explore new search territories.
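The roulette-wheel block choice with weight Wb = Mb − nb can be sketched as follows (an illustrative sketch; the tie-breaking rule when all blocks are saturated is our assumption):

```python
import random

def pick_block(visit_counts, m_b):
    """Roulette-wheel block selection in BSS: each block's weight is
    W_b = M_b - n_b, so sparsely visited blocks are chosen more often.
    visit_counts[b] is the number of particles currently in block b."""
    weights = [max(m_b - n, 0) for n in visit_counts]
    if sum(weights) == 0:
        # Assumed fallback: every block is saturated, pick uniformly.
        return random.randrange(len(visit_counts))
    return random.choices(range(len(visit_counts)), weights=weights, k=1)[0]
```

A block already holding Mb particles receives zero weight, so the selection deterministically avoids saturated blocks whenever an unsaturated one exists.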
(3) 
The random perturbation term (DT)
The learning sample SG generated by EDS enhances the algorithm’s exploitation capability, while the learning sample MP produced by BSS strengthens its exploration capability. The position update equation of particles in the MERPSO algorithm is given by Equation (9):
X_i^{t+1} = c_s \times sg^t + (1 - c_s) \times mp_i^t + D(t)    (9)
D(t) = (space \times r) \times (0.8 + \omega) \times \left( 1 - \frac{t}{T} \right) - C_m \times \frac{t}{T}    (10)
where X_i^{t+1} represents the position of the i-th particle at generation t + 1, sg^t denotes the sg selected at generation t, and mp_i^t is the i-th mp at generation t. D(t) is a time-varying perturbation term, calculated according to Equation (10). The parameter space is set to 20% of the problem’s search range, r is a random number uniformly distributed in the interval [−1, 1], ω is the inertia weight, which linearly decreases from 0.8 to 0.2, and Cm is the perturbation coefficient.

3. Discrete MERPSO Algorithm for Solving Single-Objective Flight Schedule Problem Optimization

The MERPSO algorithm demonstrates competitive performance on continuous optimization problems. However, the Flight Schedule Problem constitutes a discrete optimization problem characterized by two functionally distinct decision variables: adjusted times and selected routes. To enable MERPSO to solve Flight Schedule Problem optimization, this section designs the Discrete Memory-Enhanced Restructured Particle Swarm Optimization (DMERPSO) algorithm, which involves constructing particle representations, reformulating the position update equation, redesigning the learning strategies, adapting the perturbation term, and discretizing the parameters.

3.1. The Representation of Flight Schedule Problem Optimization for RPSO Algorithm

3.1.1. A 4D Trajectory Model for the Flight Schedule Problem Optimization

The primary flight schedule information includes the number of flights, departure and arrival locations, scheduled departure times, estimated arrival times, and flight routes. Table 1 presents a basic flight schedule.
The routes in flight scheduling are planned with one or more candidate routes, and different routes may exist for the same origin and destination. During actual flight operations, the schedule often has to account for factors such as adverse weather conditions and airspace congestion. While the departure location (orig) and landing location (des) remain unchanged, the actual departure time τorig is dynamically adjusted according to the real-time situation. The arrival time of a flight τdes is influenced by multiple factors, including the route distance, the flight speed vf, and the congestion level at the destination airport. In practical flight execution, routes are selected based on operational requirements.
Evaluating the flight schedule requires real-time position information for each flight within the flight plan. The 4D trajectory model is a commonly used framework which encompasses three spatial dimensions (flight position coordinates) and one temporal dimension (time information). An aircraft’s flight path along route r can be conceptualized as straight-line segments between waypoints (wp) as navigation checkpoints. The connecting paths between waypoints are termed flight segments L. A complete flight route r(i,j) can be mathematically described using either waypoints or flight segments using Equation (11):
r_{i,j} = \left\{ orig_i, wp_1^{i,j}, wp_2^{i,j}, \ldots, des_i \right\} = \left\{ L_1^{i,j}, L_2^{i,j}, \ldots \right\}, \quad r_{i,j} \in \Gamma_{i,j}    (11)
where origi and desi denote the origin and destination of flight Fi, and Γi,j represents the set of candidate flight routes; Figure 1 illustrates a basic example of a flight route. Once the flight schedule parameter τiorig, the decision variable Si for route selection, and the flight speed vfi are determined, the flight trajectory can be calculated accordingly.

3.1.2. Particle Representation and Flight Position Calculation in the RPSO Algorithm

For the Flight Schedule Problem optimization model, the decision variables affecting flights include the adjustment time of departure ta and the selected route Sr for each flight. A particle X in the RPSO algorithm represents an adjustment scheme for the flight schedule, denoted as Xts. The dimension D of particle X equals 2Nf, so the i-th particle Xi can be expressed by Equation (12):
X_i = \left( t_{i,1}^a, t_{i,2}^a, \ldots, t_{i,N_f}^a, S_{i,1}^r, S_{i,2}^r, \ldots, S_{i,N_f}^r \right)    (12)
where ta(i,1) denotes the adjusted departure time (in integer minutes) for the first flight of the i-th particle. Flight departure times are typically set at 5 min intervals, where ta = 3 indicates a 15 min delay and ta = −4 corresponds to a 20 min advance. Sr(i,1) represents the selected route for the first flight of particle i.
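The hybrid encoding of Equation (12) can be made concrete with a small decoder (an illustrative sketch; the paper does not provide code, and the function name is ours):

```python
def decode_particle(x, n_f):
    """Split a hybrid particle of dimension 2*N_f (Equation (12)) into
    its integer time-adjustment part and its categorical route part;
    each adjustment slot corresponds to 5 minutes."""
    assert len(x) == 2 * n_f
    t_adj = x[:n_f]                      # integer slots: t_a = 3 means +15 min
    routes = x[n_f:]                     # categorical route indices
    delay_minutes = [5 * t for t in t_adj]
    return t_adj, routes, delay_minutes
```

For example, a two-flight particle [3, −4, 1, 2] decodes to a 15 min delay on route 1 for the first flight and a 20 min advance on route 2 for the second.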
Since the altitude of flights is neglected and only their horizontal positions are considered, once the adjustment scheme X is determined, the position of flight j can be calculated using Equations (13)–(16):
T_j(wp_j^0) = \tau^{orig} + 5 \times t_{i,j}^a = T_j(orig_j)    (13)
T_j(wp_j^k) = T_j(wp_j^{k-1}) + \frac{len(wp_j^{k-1}, wp_j^k)}{v_j^f}, \quad j = 1, 2, \ldots, N_f    (14)
dis = 2R \times \arcsin \left( \sqrt{ \sin^2 \left( \frac{lat_B - lat_A}{2} \right) + \cos(lat_A) \times \cos(lat_B) \times \sin^2 \left( \frac{lon_B - lon_A}{2} \right) } \right)    (15)
loc_j(ts) = wp_j^{z-1} + \frac{ts - T_j(wp_j^{z-1})}{T_j(wp_j^z) - T_j(wp_j^{z-1})} \times \left( wp_j^z - wp_j^{z-1} \right)    (16)
where Tj(wpjk) denotes the time when flight j passes the k-th waypoint, τorig represents the scheduled departure time in the flight plan, Tj(origj) indicates the adjusted departure time of flight j, and the arrival time is denoted as Tj(desj); vfj signifies the average flight speed of flight j provided by the dataset; len(A, B) is a function calculating the distance between two waypoints A and B using the Haversine formula in Equation (15), where R represents the Earth’s radius (6371 km), and latA and lonA respectively indicate the latitude and longitude of point A measured in radians; locj(ts) specifies the position of flight j at timestamp ts, with ts being discrete time points separated by 5 min intervals.
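The Haversine distance of Equation (15) translates directly into Python (inputs in radians, radius in kilometres as in the text):

```python
import math

def haversine_km(lat_a, lon_a, lat_b, lon_b, R=6371.0):
    """Great-circle distance of Equation (15) between two waypoints
    given in radians; R is the Earth's radius in kilometres."""
    h = (math.sin((lat_b - lat_a) / 2) ** 2
         + math.cos(lat_a) * math.cos(lat_b)
         * math.sin((lon_b - lon_a) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(h))
```

Two antipodal points on the equator are π·R ≈ 20,015 km apart, which provides a quick sanity check on the implementation.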

3.1.3. The Objective and Constraint Conditions of Flight Schedule Problem Optimization

Flight Schedule Problem optimization takes minimizing the total flight delay time (Delay) as its objective, which consists of the total ground delay (Tdep) and the total arrival delay (Tarr). The calculation is shown in Equations (17)–(19):
\min Delay = T_{dep} + T_{arr}    (17)
T_{dep} = \sum_{j=1}^{N_f} 5 \times t_j^a    (18)
T_{arr} = \sum_{j=1}^{N_f} \left( T_j(des_j) - \tau_j^{des} \right)    (19)
The constraints of Flight Schedule Problem optimization primarily consider the utilization of airport runways, with the constraint conditions shown as Equations (20)–(23):
N_{ap}^{dep}(ts) \le Q_{ap}^{dep}, \quad ap \in \Omega, \ ts \in [0, T_{max}]    (20)
N_{ap}^{arr}(ts) \le Q_{ap}^{arr}, \quad ap \in \Omega, \ ts \in [0, T_{max}]    (21)
\left| t_{j,ap} - t_{j+1,ap} \right| \ge \tau^{sep}, \quad j = 1, \ldots, N_f - 1, \ ap \in \Omega    (22)
t^a \in \left[ T_{min}^a, T_{max}^a \right]    (23)
where all airports ap constitute the set Ω. For each airport, the number of flights simultaneously taking off and landing within a given period is subject to upper limits, known as the airport’s departure capacity Qapdep and arrival capacity Qaparr; therefore, Equations (20) and (21) ensure that the numbers of departing and arriving flights at a given airport do not exceed the respective capacities. Furthermore, the interval between two consecutive departures at the same airport must exceed a specified threshold, so Equation (22) guarantees that the separation time between consecutive departures at the same airport adheres to the safety threshold τsep, which is set to 5 min. Equation (23) constrains the adjustment time for an individual flight to a reasonable operational range; Tamin and Tamax respectively represent the minimum and maximum values of the flight schedule adjustment.
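The delay objective of Equations (17)–(19) is straightforward to compute once a schedule has been decoded (an illustrative sketch; times are in minutes and argument names are ours):

```python
def total_delay(t_adj, actual_arrivals, scheduled_arrivals):
    """Objective of Equations (17)-(19): total ground delay (5 minutes
    per adjustment slot, Equation (18)) plus total arrival delay
    (actual minus scheduled arrival times, Equation (19))."""
    t_dep = sum(5 * t for t in t_adj)
    t_arr = sum(a - s for a, s in zip(actual_arrivals, scheduled_arrivals))
    return t_dep + t_arr
```

For instance, a flight delayed by 3 slots (15 min on the ground) that arrives 5 min late contributes 20 min to the objective.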

3.2. Discrete MERPSO Algorithm

The MERPSO algorithm framework was originally proposed to address continuous optimization problems, whereas discrete optimization problems possess a different feasible solution space. This section extends the MERPSO framework to the Discrete MERPSO (DMERPSO) algorithm so as to solve discrete optimization problems. Firstly, the methodology of the DMERPSO algorithm is designed; secondly, the discretization of each strategy of the algorithm is introduced; finally, the particle initialization method is presented.

3.2.1. Position Update Formula of DMERPSO

A common discretization method for the PSO algorithm employs a transfer function to map velocity values onto the interval [0, 1]; these mapped values are then converted into binary values (0 and 1) by stochastic probability.
Since the MERPSO algorithm does not require particle velocity for position updates, the position update of particles in the DMERPSO algorithm is influenced by three components: the learning samples sg and mp, as well as the perturbation term dt. Specifically, sample sg enhances both the convergence speed and precision of the particle, mp strengthens the particle’s exploration capability, while dt maintains particle diversity. These three components directly determine particle position updates while serving distinct functional roles. Therefore, DMERPSO directly updates particle positions through probabilistic selection. The corresponding particle update equation is presented in Equation (24):
x_{i,d}^{t+1} = \begin{cases} sg_d^t, & rand \le P_{i,d}^{sg} \\ mp_{i,d}^t, & P_{i,d}^{sg} < rand \le P_{i,d}^{sg} + P_{i,d}^{mp} \\ dt_{i,d}^t, & \text{otherwise} \end{cases}    (24)
where the update value of each dimension of a particle is selected from the corresponding dimensions of sg, mp, or dt with selection probabilities Psg, Pmp, and Pdt respectively, where the sum of these three probabilities equals 1. The probability of selecting each term is determined based on factors including particle characteristics and the population’s search status.

3.2.2. Calculation of Selection Probability

The DMERPSO algorithm determines the probabilities for updating particle positions by evaluating and scoring multiple scenarios, then normalizing the evaluated scores so that the three probabilities sum to 1. Each evaluation score can be expressed as SC = W (weight value) × B (base score). The parameter values or ranges used in this section are presented in Table 2. The three selection probabilities Psg, Pmp, and Pdt in Equation (24) are obtained by normalization as in Equation (25), with the scores calculated using Equations (26)–(28).
P_l = \frac{SC_l}{SC_{sg} + SC_{mp} + SC_{dt}}, \quad l \in \{sg, mp, dt\}    (25)
SC_{sg} = w_1 \times b_{sg}    (26)
SC_{mp} = w_1 \times b_{mp}    (27)
SC_{dt} = \frac{w_2}{w_1} \times B_3    (28)
where the weight w1 is a parameter that varies linearly with the number of generations within the interval [0.5, 2]; this makes particles undergo random mutation with a higher probability during the early stages, thereby expanding the population’s search scope, while reducing the mutation probability in later stages so as to enhance the convergence precision. The weight w2 governs the maintenance of convergence in a single dimension: when a particle is already identical to both learning samples sg and mp in a given dimension during the later evolutionary stages, it is kept there in pursuit of convergence accuracy. It is calculated using Equation (29).
w_2(i,d) = \begin{cases} 0, & x_{i,d} = sg_{i,d} = mp_{i,d} \ \& \ t > T/4 \\ 1, & \text{otherwise} \end{cases}    (29)
The scores bsg and bmp function similarly to offset parameters in the continuous MERPSO algorithm and are computed through Equations (30) and (31); they encourage particles to exhibit a greater displacement probability during the initial exploration phase while maintaining positional stability, or enabling minor adjustments with increased probability, in the subsequent optimization stages. In the two equations, ps denotes the number of particles in each generation, and rank(f(Xi)) is the fitness rank of the objective function value of the i-th particle within the population.
b_{sg}(i,d) = \begin{cases} b_2 \times \frac{rank(f(X_i))}{ps}, & t < T/2 \ \& \ dis_s \ge dis_m \ \& \ d \le N_f \\ b_2 \times \left( 1 - \frac{rank(f(X_i))}{ps} \right), & t \ge T/2 \ \& \ dis_s < dis_m \ \& \ d \le N_f \\ B_1, & \text{otherwise} \end{cases}    (30)
b_{mp}(i,d) = \begin{cases} b_2 \times \frac{rank(f(X_i))}{ps}, & t < T/2 \ \& \ dis_s < dis_m \ \& \ d \le N_f \\ b_2 \times \left( 1 - \frac{rank(f(X_i))}{ps} \right), & t \ge T/2 \ \& \ dis_s \ge dis_m \ \& \ d \le N_f \\ B_1, & \text{otherwise} \end{cases}    (31)
Since a particle consists of two components, the adjustment times and the routes, the scoring differs between them. For the former, dism(i,d) = |xi,d − mpi,d| and diss(i,d) = |xi,d − sgd|; larger values indicate greater temporal discrepancies between the particle’s corresponding dimension and the learning samples, which carries practical physical significance. For the latter component, differences between these two values merely reflect distinct route selections, so comparing their magnitudes is meaningless. Consequently, the scoring mechanisms for the two particle components differ. The parameter b2 is calculated using Equation (32).
b_2 = B_2 \times \left( \frac{1}{1 + e^{-0.01 \times (t - T/2)}} - 0.5 \right) + B_1    (32)
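The score-to-probability normalization of Equation (25) is a simple proportional rule; the following sketch (names ours) makes it explicit:

```python
def selection_probs(sc_sg, sc_mp, sc_dt):
    """Normalization of Equation (25): convert the three evaluation
    scores SC = W x B into selection probabilities that sum to 1."""
    total = sc_sg + sc_mp + sc_dt
    return sc_sg / total, sc_mp / total, sc_dt / total
```

Because only the ratios of the scores matter, the weights w1 and w2 shift probability mass between the exploitation, exploration, and mutation terms without requiring any explicit probability bookkeeping.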

3.2.3. Improvement of Strategies and Perturbation Terms

Although the previous subsection adapted the position update formula of the MERPSO algorithm into Equation (24) and introduced a probability-based selection method to better solve discrete optimization problems, it still cannot guarantee that the resulting positions are integers, as required for solutions of the Flight Schedule Problem optimization problem. Therefore, this section improves the experience selection strategy (EDS), the block search strategy (BSS), and the algorithm’s perturbation term, thereby transforming the MERPSO algorithm into its discrete variant, the DMERPSO algorithm, tailored to the Flight Schedule Problem optimization problem.
(1) 
Experience Selection Strategy
In the discrete experience selection strategy, the storage principle and method of the Experience Set (ES) remain unchanged. The ES maintains an archive of the Np best-performing solutions in terms of fitness throughout the entire search history, updated after each iteration’s fitness evaluation. Each dimension of the learning sample sg is still selected independently; however, instead of averaging the selected solutions, K solutions are randomly selected with replacement from the ES, and the most frequently occurring value among them is chosen as the corresponding dimensional value of sg. When multiple values are tied in frequency, one of them is selected at random. Here, K = 10, and Np is calculated using Equation (33):
N_p = \left\lceil \frac{ps}{5} \times \left( 1 - \frac{t-1}{T} \right) \right\rceil + 5    (33)
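The majority-vote replacement for the mean in the discrete EDS can be sketched for a single dimension as follows (an illustrative sketch; the random tie-breaking mirrors the rule described above):

```python
import random
from collections import Counter

def discrete_eds_dim(es_column, K=10):
    """Discrete EDS for one dimension: draw K values with replacement
    from the experience set's column and return the most frequent one;
    ties are broken uniformly at random."""
    draws = [random.choice(es_column) for _ in range(K)]
    counts = Counter(draws)
    best = max(counts.values())
    return random.choice([v for v, c in counts.items() if c == best])
```

Unlike the continuous mean of Equation (6), the vote always returns a value that actually occurs in the archive, so integer and categorical dimensions both remain feasible.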
(2) 
Block Search Strategy
In the discrete block search strategy, the configuration of blocks varies according to the particle dimensions. For the first to Nf-th dimensions (i.e., flight adjustment times), blocks still maintain boundaries, and all integers within these boundaries can serve as the selection range for mp. Taking Figure 2 as an example, for a block with a lower boundary of 1 and an upper boundary of 3, when selecting prd{b} according to Equation (8), mp can take any integer value within {1, 2, 3}. The splitting and merging operations continue to be applied to blocks in the first to Nf-th dimensions. An additional operation is introduced: when the difference between block boundaries is even and splitting becomes necessary, a new block is formed by combining the original lower boundary with the integer value of (upper boundary + lower boundary)/2, as illustrated in Figure 2. However, for the (Nf + 1)-th to 2Nf-th dimensions (i.e., the route selection components), there is no continuous relationship between adjacent integers. Consequently, blocks in this dimension range are defined without boundaries—for instance, block {5, 8} indicates that eight particles have selected route number 5 in this dimension. Therefore, neither merging nor splitting operations are performed on blocks within this dimensional range.
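One plausible reading of the splitting rule for integer-interval blocks is sketched below; the text specifies that the new block pairs the original lower boundary with the integer midpoint, while the complementary block covering the rest of the interval is our assumption:

```python
def split_integer_block(lo, hi):
    """Split an integer-interval block [lo, hi] (discrete BSS sketch):
    the first block keeps the original lower boundary and extends to
    the integer midpoint (lo + hi) // 2; the remainder forms the
    second block (our assumed complement)."""
    mid = (lo + hi) // 2
    return (lo, mid), (mid, hi)
```

For the Figure 2 example with boundaries 1 and 3 (an even difference), the split yields the new block (1, 2) described in the text.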
As particle dimensionality grows, memory demands increase. To conserve memory and reduce computation time, the blocks of dimensions 1 to Nf and of dimensions Nf + 1 to 2Nf are stored separately in two matrices, BL1 and BL2. Two additional matrices, the Edge Matrices EM1 and EM2, record the starting position of each dimension's blocks. The correspondence between BL and EM is illustrated in Figure 3.
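The BL/EM layout behaves like offset-based (CSR-style) indexing; a sketch assuming EM carries one trailing end marker (an assumption, since Figure 3 is not reproduced here):

```python
def blocks_of_dimension(BL, EM, d):
    """Return the blocks stored for dimension d (0-based).

    BL holds all blocks of a dimension range back to back; EM[d] is the
    index in BL where dimension d's blocks start, and EM[d + 1] marks
    where they end (EM has one trailing end entry).
    """
    return BL[EM[d]:EM[d + 1]]
```

Storing blocks contiguously with an offset array avoids a ragged per-dimension structure and keeps lookups O(1) per dimension.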
(3) 
Perturbation Term
The perturbation operation performs a single-point mutation on the original particle, with mutation probability Pdt for each dimension; Pdt decreases linearly from 0.6 to 0.2 as the generations proceed. The variation range for the first Nf dimensions is [R_min^dt, R_max^dt], while for the last Nf dimensions the variation range is any selectable flight route, as formulated in Equations (34)–(36):
space_t^dt = (T_max^a − T_min^a)/2 × (1 − 0.8 × t/T)        (34)
R_min^dt = max(T_min^a, x_i,d − space_t^dt)        (35)
R_max^dt = min(T_max^a, x_i,d + space_t^dt)        (36)
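The linearly decaying Pdt and the shrinking mutation radius space_t^dt can be combined into one perturbation routine. The following Python sketch mirrors that logic; the function name and the rounding of the bounds to integers are our assumptions:

```python
import random

def perturb(x, Nf, t, T, Tmin_a, Tmax_a, n_routes, rng=random):
    """Perturb a 2*Nf-dimensional particle (hypothetical sketch).

    Each dimension mutates with probability Pdt, decreasing linearly
    from 0.6 to 0.2 over the run.  Time dimensions (0..Nf-1) move inside
    [Rmin, Rmax] built from the shrinking radius space_t; route
    dimensions (Nf..2Nf-1) jump to any selectable route.
    """
    Pdt = 0.6 - 0.4 * t / T
    space_t = (Tmax_a - Tmin_a) / 2 * (1 - 0.8 * t / T)
    y = list(x)
    for d in range(2 * Nf):
        if rng.random() >= Pdt:
            continue
        if d < Nf:                       # flight adjustment time
            lo = max(Tmin_a, x[d] - space_t)
            hi = min(Tmax_a, x[d] + space_t)
            y[d] = rng.randint(int(round(lo)), int(round(hi)))
        else:                            # route selection
            y[d] = rng.randint(1, n_routes[d - Nf])
    return y
```

Early in the run the radius covers nearly the whole time window, encouraging exploration; late in the run it shrinks to local adjustments around the current position.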

3.2.4. The Methods for Initializing Particles and Satisfying Constraints

The adjustment time is categorized into five levels: slightly early, almost no adjustment, slightly delayed, moderately delayed, and severely delayed, defined by the intervals ta ∈ {(−6, 0], (0, 3], (3, 8], (8, 18], (18, 36]}. During population initialization, dimensions 1 to Nf of each particle randomly select one of these five intervals to generate adjustment times, while the route selection dimensions are chosen at random. This process forms the initial population p0. After p0 is evaluated, the top 1/3 of particles by fitness are retained. Each preserved particle then undergoes single-point mutation with probability Pp0, performing three mutation operations. All mutated particles constitute the new population p1, which serves as the algorithm's initialized population. This initialization ensures a sufficiently uniform particle distribution across the search space, enhances population diversity, and facilitates global exploration. At the same time, eliminating the worst-performing particles avoids wasting computational resources on inferior solutions, thereby improving convergence precision.
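One plausible Python reading of this initialization; the value of Pp0 and the interpretation of "three mutation operations" as three mutated copies per elite particle are our assumptions:

```python
import random

INTERVALS = [(-6, 0), (0, 3), (3, 8), (8, 18), (18, 36)]  # (lo, hi] levels

def init_particle(Nf, n_routes, rng=random):
    """One p0 particle: each time dimension draws from a random
    adjustment-level interval, each route dimension picks a random route."""
    times = []
    for _ in range(Nf):
        lo, hi = rng.choice(INTERVALS)
        times.append(rng.randint(lo + 1, hi))      # integers in (lo, hi]
    routes = [rng.randint(1, n_routes[i]) for i in range(Nf)]
    return times + routes

def init_population(ps, Nf, n_routes, fitness, Pp0=0.3, rng=random):
    """Build p1: evaluate p0, keep the best third, then create three
    mutated copies of each survivor (Pp0 = 0.3 is our assumption)."""
    p0 = [init_particle(Nf, n_routes, rng) for _ in range(ps)]
    p0.sort(key=fitness)                           # minimization: best first
    elite = p0[: max(1, ps // 3)]
    p1 = []
    for x in elite:
        for _ in range(3):                         # three mutation operations
            y = list(x)
            if rng.random() < Pp0:                 # single-point mutation
                d = rng.randrange(2 * Nf)
                if d < Nf:
                    lo, hi = rng.choice(INTERVALS)
                    y[d] = rng.randint(lo + 1, hi)
                else:
                    y[d] = rng.randint(1, n_routes[d - Nf])
            p1.append(y)
    return p1
```

With a population size divisible by three, the ps/3 elites each contribute three copies, restoring the original population size.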
During optimization, to ensure that every particle remains within the feasible region defined by the constraints (such as runway takeoff and landing capacity limits and the minimum takeoff interval), each particle is checked against the constraints after every position update. If a particle violates any constraint, a new position is generated randomly, repairing the solution so that the search always satisfies the operational constraints.
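The repair step reduces to a check-and-regenerate loop; here feasible and regenerate are placeholders for the problem-specific constraint check and random position generator, and the retry cap is our safeguard:

```python
import random

def repair(x, feasible, regenerate, max_tries=100, rng=random):
    """Keep a particle inside the feasible region (sketch).

    After a position update the particle is checked against the
    operational constraints (runway capacity, minimum takeoff interval,
    ...); while it violates them, a new random position is generated.
    """
    tries = 0
    while not feasible(x) and tries < max_tries:
        x = regenerate(rng)
        tries += 1
    return x
```

Random regeneration keeps the repair unbiased but can be slow for tightly constrained instances; a cap on retries prevents an infinite loop.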

4. Simulation Experiment and Verification

4.1. Flight Data Collection and Processing

The dataset used in the experiments consists of real flight plans provided by an aviation data communication company. The flights originate from eight flight schedule instances from different operational periods across nearly 200 airports in China, ensuring authenticity and generalizability [24]. Each dataset contains the following information: flight number, coordinates of departure airport, coordinates of arrival airport, scheduled departure time (τorig), scheduled arrival time (τdes), route (including waypoints), actual trajectory, and average speed during actual flight. To simulate temporary adjustments to flight plans before execution and introduce additional scheduling difficulty, the departure and arrival times of certain flights were modified. The characteristics of these flight datasets are summarized in Table 3.
Here, the initial delay is the sum of the delay times of all flights when the flight schedule is executed as recorded, while the adjusted initial delay is the corresponding sum after the modifications described below, which increase the optimization difficulty. Datasets M1 and M2 are original flight schedules. M3 and M4 simulate mild delay scenarios: 10% of flights are randomly selected and both their departure and arrival times are shifted by the same offset, ranging from 30 min early to 30 min late. M5 and M6 simulate moderate delay scenarios: 3% of flights are randomly selected and their time windows are either advanced or delayed by 30–60 min. M7 and M8 first apply the mild delay simulation to 10% of randomly selected flights, as in M3/M4, and then apply the moderate delay adjustment (advanced or delayed by 30–60 min) to 3% of the remaining unadjusted flights (representing 3% of total flights).
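The mild-delay construction used for M3/M4 can be sketched as follows; representing each flight as a (departure, arrival) pair of minutes is our simplification:

```python
import random

def apply_mild_delays(schedule, frac=0.10, rng=random):
    """Simulate the mild-delay scenario (M3/M4 style).

    Randomly selects frac of the flights and shifts both departure and
    arrival times by the same offset, drawn uniformly from -30..+30 min,
    so flight durations are preserved.
    """
    schedule = [list(f) for f in schedule]
    n_pick = round(frac * len(schedule))
    for i in rng.sample(range(len(schedule)), n_pick):
        offset = rng.randint(-30, 30)
        schedule[i][0] += offset
        schedule[i][1] += offset
    return [tuple(f) for f in schedule]
```

Because the same offset is applied to departure and arrival, only the schedule's position in time changes, not the flight durations.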

4.2. Experimental Performance

To validate the effectiveness of the DMERPSO algorithm, comparative experiments were conducted with the RPSO [19], ADFPSO [25], SpadePSO [26], L-SHADE [27], and JSO [28] algorithms. Since several of these algorithms operate on continuous variables, their real-valued solutions were rounded to the nearest integer, and any rounded value falling outside a dimension's integer range was clipped to the corresponding maximum or minimum. This ensured that the solutions of all comparison algorithms fell within the integer solution space of the Flight Schedule Problem.
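The round-then-clip discretization amounts to one operation per dimension; a Python sketch (the paper's code is MATLAB):

```python
def discretize(x, lo, hi):
    """Map a real-valued solution into the integer solution space:
    round each component to the nearest integer, then clip it to the
    per-dimension bounds [lo[d], hi[d]] of the scheduling problem."""
    return [min(hi[d], max(lo[d], round(v))) for d, v in enumerate(x)]
```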
For each comparison algorithm, parameter settings followed the recommended values specified in their respective original papers. Each algorithm was independently executed 30 times on every dataset. To ensure fair comparison, all algorithms shared identical initial populations and employed the same stopping criterion based on maximum fitness evaluations (MaxFEs). The population size was set to 30 and MaxFEs was configured as 30,000 for all compared algorithms. All experiments were performed on a computer running the 64-bit version of Windows 11 Pro, with hardware specifications including an Intel Core i5 @3 GHz CPU and 32 GB RAM. All algorithms were implemented in MATLAB R2023a.
The optimization results of the DMERPSO algorithm and these comparative algorithms are presented in Table 4. The data in the table represent the mean and standard deviation of total delays (in minutes) across 30 independent experiments. The symbols (+), (−), and (≈) indicate that DMERPSO is significantly better than, significantly worse than, or statistically equivalent to the corresponding algorithm, based on t-test results at p ≤ 0.05. The minimum mean total delay for each dataset is highlighted in bold. Experimental results demonstrate that the DMERPSO algorithm can significantly reduce total flight delays and optimize flight schedules. It achieves superior performance over the comparative algorithms across all eight datasets, with the strongest margins on the more challenging M7 and M8 datasets.
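The (+)/(−)/(≈) marking can be approximated with Welch's t statistic. To stay self-contained, this sketch compares |t| with roughly 2.0, the approximate two-sided 5% critical value for two 30-run samples, instead of computing an exact p-value; the function names are ours:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (no p-value; a rough sketch)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def mark(dmerpso_runs, other_runs, t_crit=2.0):
    """Return '+', '-', or '≈' for a minimization objective:
    '+' means the compared algorithm's mean delay is significantly
    higher, i.e., DMERPSO is significantly better."""
    t = welch_t(other_runs, dmerpso_runs)
    if abs(t) < t_crit:
        return "≈"
    return "+" if t > 0 else "-"
```

An exact test would use the t-distribution CDF (e.g., scipy.stats.ttest_ind with equal_var=False); the fixed critical value is adequate only for samples of about this size.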
To further demonstrate the advantages of the proposed algorithm, the Friedman test was first applied to the mean Total Delay values from Table 4, followed by the Wilcoxon signed-rank test to compare the DMERPSO algorithm with the other algorithms. Table 5 presents the rankings of the algorithms based on mean Total Delay. The last row of the table shows the Friedman test result; the p-value is very small, indicating a significant difference in performance among the algorithms.
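The Friedman statistic reported in Table 5 follows the standard formula; a self-contained sketch:

```python
def friedman_chi2(rank_table):
    """Friedman statistic from a table of per-dataset ranks.

    rank_table[i][j] is the rank of algorithm j on dataset i; with N
    datasets and k algorithms,
        chi^2 = 12N/(k(k+1)) * sum_j Rbar_j^2 - 3N(k+1),
    where Rbar_j is algorithm j's mean rank across the datasets.
    """
    N = len(rank_table)
    k = len(rank_table[0])
    mean_ranks = [sum(row[j] for row in rank_table) / N for j in range(k)]
    chi2 = 12 * N / (k * (k + 1)) * sum(r * r for r in mean_ranks) - 3 * N * (k + 1)
    return mean_ranks, chi2
```

For eight datasets and six algorithms with perfectly consistent ranks, the statistic reaches its maximum N(k − 1) = 40; large values reject the hypothesis that all algorithms perform equally.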
The Wilcoxon signed-rank test was then used for pairwise comparisons: the DMERPSO algorithm was compared with each of the other five algorithms at a significance level of α = 0.05. The summarized results are presented in Table 6, where '+' means that DMERPSO performs significantly better than the compared algorithm, '−' that it performs significantly worse, and '≈' that there is no significant difference. The results show that DMERPSO performs overwhelmingly better than the other algorithms.
To verify the stability of the algorithm’s operation, box plots were generated to analyze the total delay obtained from 30 independent runs on eight datasets, with the results shown in Figure 4. The experimental results demonstrate that the DMERPSO algorithm achieves superior optimization performance and exhibits higher stability.

4.3. Effectiveness Analysis of the Initialization Method

The DMERPSO algorithm introduces a population initialization method that incurs the computational cost of one generation. To analyze the effectiveness of this strategy, convergence curves were plotted by comparing the average best fitness per generation across 30 independent experiments between the DMERPSO algorithm, the DMERPSO-I variant (without the initialization strategy), and the comparison algorithms. The convergence curves on the eight datasets are presented in Figure 5. As observed from these results, the DMERPSO algorithm demonstrates strong sustained convergence capability. In the middle to late iteration stages, when other algorithms become trapped in local optima, DMERPSO exhibits a superior ability to escape local optima through continuous exploration and exploitation of the solution space. Although the DMERPSO-I variant achieves better performance than DMERPSO in early iterations, the population diversity introduced by the initialization strategy enables DMERPSO to reach better convergence accuracy in later stages. This validates the effectiveness of the proposed initialization strategy.
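The plotted curves average the best-so-far fitness per generation over the 30 independent runs; a sketch of that bookkeeping (the function name is ours):

```python
def mean_convergence_curves(histories):
    """Average best-so-far fitness per generation over independent runs.

    histories[r][g] is the best fitness found in generation g of run r;
    each run is first turned into a monotone best-so-far curve, then the
    curves are averaged into one convergence curve for the algorithm.
    """
    curves = []
    for hist in histories:
        best = float("inf")
        curve = []
        for f in hist:
            best = min(best, f)   # running minimum: best solution so far
            curve.append(best)
        curves.append(curve)
    n_runs = len(curves)
    return [sum(c[g] for c in curves) / n_runs for g in range(len(curves[0]))]
```

Taking the running minimum before averaging keeps each curve monotone, so the averaged curve reflects accumulated progress rather than per-generation noise.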

5. Conclusions

To effectively address the discrete optimization problem of single-objective Flight Schedule Problem optimization, a Discrete Memory-Enhanced Restructured Particle Swarm Optimization algorithm (DMERPSO) is proposed. Experimental results demonstrate that the DMERPSO algorithm can effectively reduce flight delays, optimize air traffic capacity allocation, and achieve superior performance compared to benchmark algorithms.
The core innovation of the DMERPSO algorithm lies in the probability-based position update, which selects among three components, and in the redesign of those components: the experience selection strategy, the block search strategy, and the perturbation term are each transformed so that they operate directly on the integer encoding of the Flight Schedule Problem. The proposed integer coding scheme for Flight Schedule Problem optimization is significant and can be extended to related problems. A limitation of the experimental comparison is that the continuous benchmark algorithms obtain integer solutions by rounding their real-valued solutions, which weakens the favorable search characteristics they exhibit in continuous solution spaces and thereby introduces some discrepancy into the comparison.

Author Contributions

Conceptualization, J.L., W.G. and B.W.; methodology, B.W., W.G. and J.L.; software, B.W. and D.T.; validation, W.G., D.T., J.L. and B.W.; formal analysis, W.G., B.W., D.T. and J.L.; investigation, W.G., B.W., D.T. and J.L.; resources, J.L. and W.G.; data curation, B.W., D.T., J.L. and W.G.; writing—original draft preparation, B.W. and W.G.; writing—review and editing, W.G., J.L. and B.W.; visualization, B.W. and D.T.; supervision, J.L.; project administration, J.L.; funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Provincial Natural Science Foundation of Fujian, China (2023J01349).

Data Availability Statement

The data are available at https://github.com/buaaguotong/ATFM-Benchmark (accessed on 10 March 2026).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bertsimas, D.; Patterson, S.S. The Air Traffic Flow Management Problem with Enroute Capacities. Oper. Res. 1998, 46, 406–422. [Google Scholar] [CrossRef]
  2. Hu, H.; Sun, J.; Du, B. Air Traffic Management in Dense Airspace via Network Flow Optimization. J. Aerosp. Inf. Syst. 2025, 22, 433–446. [Google Scholar] [CrossRef]
  3. Eltoukhy, A.E.E.; Chan, F.T.S.; Chung, S.H. Airline schedule planning: A review and future directions. Ind. Manag. Data Syst. 2017, 117, 1201–1243. [Google Scholar] [CrossRef]
  4. Wang, D.; Wu, Y.; Hu, J.Q.; Liu, M.; Yu, P.; Zhang, C.; Wu, Y. Flight schedule recovery: A simulation-based approach. Asia-Pac. J. Oper. Res. 2019, 36, 1940010. [Google Scholar] [CrossRef]
  5. Yan, S.; Tu, Y.P. Multifleet routing and multistop flight scheduling for schedule perturbation. Eur. J. Oper. Res. 1997, 103, 155–169. [Google Scholar] [CrossRef]
  6. Soykan, B.; Erol, S. A branch-and-price algorithm for the robust airline crew pairing problem. Savun. Bilim. Derg. 2014, 13, 37–74. [Google Scholar]
  7. Barnhart, C.; Belobaba, P.; Odoni, A.R. Applications of operations research in the air transport industry. Transp. Sci. 2003, 37, 368–391. [Google Scholar] [CrossRef]
  8. Mercier, A.; Cordeau, J.F.; Soumis, F. A computational study of Benders decomposition for the integrated aircraft routing and crew scheduling problem. Comput. Oper. Res. 2005, 32, 1451–1476. [Google Scholar] [CrossRef]
  9. Reiners, T.; Pahl, J.; Maroszek, M.; Rettig, C. Integrated aircraft scheduling problem: An auto-adapting algorithm to find robust aircraft assignments for large flight plans. In Proceedings of the 2012 45th Hawaii International Conference on System Sciences, Maui, HI, USA, 4–7 January 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1267–1276. [Google Scholar]
  10. Cadarso, L.; Vaze, V. Passenger-centric integrated airline schedule and aircraft recovery. Transp. Sci. 2023, 57, 813–837. [Google Scholar] [CrossRef]
  11. Delahaye, D.; Odoni, A.R. Airspace Congestion Smoothing by Stochastic Optimization. In Evolutionary Programming VI, Proceedings of the 6th International Conference; Springer: Berlin/Heidelberg, Germany, 1997; pp. 163–176. [Google Scholar]
  12. Xiao, M.; Hong, C.; Cai, K. Optimizing the safety-efficiency trade-off on nationwide air traffic network flow using cooperative co-evolutionary paradigm. Sci. Rep. 2025, 15, 20377. [Google Scholar] [CrossRef] [PubMed]
  13. Cecen, R.K.; Durmazkeser, Y. Meta-Heuristic Algorithms for Aircraft Sequencing and Scheduling Problem. In Progress in Sustainable Aviation; Springer International Publishing: Cham, Switzerland, 2022; pp. 107–118. [Google Scholar]
  14. Kablan, A.A.; Elberkawi, H.R.K.; Eldharif, E.A. Airline scheduling model using genetic algorithm. In Proceedings of the 2024 IEEE 4th International Maghreb Meeting of the Conference on Sciences and Techniques of Automatic Control and Computer Engineering (MI-STA), Tripoli, Libya, 6–8 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 718–722. [Google Scholar]
  15. Deng, G.F.; Lin, W.T. Ant colony optimization-based algorithm for airline crew scheduling problem. Expert Syst. Appl. 2011, 38, 5787–5793. [Google Scholar] [CrossRef]
  16. Li, M.; Hao, J.K.; Wu, Q. Learning-driven feasible and infeasible tabu search for airport gate assignment. Eur. J. Oper. Res. 2022, 302, 172–186. [Google Scholar] [CrossRef]
  17. Abualigah, L.; Sheikhan, A.; Ikotun, A.M.; Zitar, R.A.; Alsoud, A.R.; Alshourbaji, I.; Hussien, G.; Jia, H. Particle Swarm Optimization Algorithm: Review and Applications. Metaheuristic Optim. Algorithms 2024, 1–14. [Google Scholar]
  18. Ma, J.; Zhang, X.; Wang, C. Dynamic Optimization of Flight Schedules Based on Deep Reinforcement Learning. Comput. Integr. Manuf. Syst. 2021, 27, 2697–2708. [Google Scholar]
  19. Zhu, J.; Liu, J.; Wang, Z.; Chen, Y. Restructuring Particle Swarm Optimization Algorithm Based on Linear System Theory. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–7. [Google Scholar]
  20. Zhu, J.; Liu, J.; Chen, Y.; Xue, X.; Sun, S. Binary Restructuring Particle Swarm Optimization and Its Application. Biomimetics 2023, 8, 266. [Google Scholar] [CrossRef] [PubMed]
  21. Wu, B.; Liu, J.; Li, S.; Li, M. Memory-Enhanced Reconstructive Particle Swarm Optimization Algorithm. Comput. Eng. Appl. 2025, 61, 116–127. [Google Scholar]
  22. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  23. Van den Bergh, F.; Engelbrecht, A.B. A Study of Particle Swarm Optimization Particle Trajectories. Inf. Sci. 2006, 176, 937–971. [Google Scholar] [CrossRef]
  24. Guo, T.; Mei, Y.; Tang, K.; Du, W. A Knee-guided Evolutionary Algorithm for Multi-objective Air Traffic Flow Management. IEEE Trans. Evol. Comput. 2023, 28, 994–1008. [Google Scholar] [CrossRef]
  25. Yu, F.; Tong, L.; Xia, X. Adjustable Driving Force Based Particle Swarm Optimization Algorithm. Inf. Sci. 2022, 609, 60–78. [Google Scholar] [CrossRef]
  26. Wu, X.; Han, J.; Wang, D.; Gao, P.; Cui, Q.; Chen, L.; Liang, Y.; Huang, H.; Lee, H.P.; Miao, C.; et al. Incorporating Surprisingly Popular Algorithm and Euclidean Distance-based Adaptive Topology into PSO. Swarm Evol. Comput. 2023, 76, 101222. [Google Scholar] [CrossRef]
  27. Tanabe, R.; Fukunaga, A. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1658–1665. [Google Scholar]
  28. Brest, J.; Maučec, M.S.; Bošković, B. Single objective real-parameter optimization: Algorithm jSO. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1311–1318. [Google Scholar]
Figure 1. Flight route map.
Figure 2. Discrete block splitting operation.
Figure 3. Correspondence between BL and EM.
Figure 4. Box plots of the results on the eight datasets. (a) M1; (b) M2; (c) M3; (d) M4; (e) M5; (f) M6; (g) M7; (h) M8.
Figure 5. Convergence curves of the DMERPSO algorithm and the comparison algorithms on the eight datasets. (a) M1; (b) M2; (c) M3; (d) M4; (e) M5; (f) M6; (g) M7; (h) M8.
Table 1. An example of flight schedule.
| Flight Number | Departure Airport | Arrival Airport | Estimated Time of Departure | Estimated Time of Landing | Flight Routes |
| F1 | Xi'an | Beijing | 9:00 | 11:20 | r(1,1), r(1,2), r(1,3) |
| F2 | Shanghai | Shenzhen | 10:15 | 12:30 | r(2,1), r(2,2) |
| F3 | Xi'an | Shanghai | 18:20 | 21:15 | r(3,1), r(3,2), r(3,3), r(3,4) |
| F4 | Beijing | Shenzhen | 7:45 | 9:10 | r(4,1) |
| F5 | Shenzhen | Xi'an | 11:35 | 15:00 | r(5,1), r(5,2), r(5,3) |
Note: The set of flight numbers is denoted as F = {F1, F2, F3, …, FNf}, where Nf is the total number of flights; r(i,j) denotes the j-th route of the i-th flight Fi.
Table 2. Relevant parameters of selection probability.
| Parameter | Size or Range |
| w1 | [0.5, 2] |
| w2 | 0 or 1 |
| B1 | 20 |
| B2 | 100 |
| B3 | 20 |
Table 3. Initial features of the experimental dataset.
| Dataset | Number of Flights | Number of Airports | Number of Waypoints | Number of Routes | Initial Delay (min) | Adjusted Initial Delay (min) |
| M1 | 937 | 181 | 826 | 2053 | 10,812 | - |
| M2 | 925 | 171 | 794 | 2113 | 10,752 | - |
| M3 | 952 | 181 | 845 | 2331 | 13,734 | 15,297 |
| M4 | 932 | 164 | 790 | 2374 | 18,392 | 20,752 |
| M5 | 953 | 182 | 817 | 2076 | 26,596 | 28,361 |
| M6 | 920 | 171 | 792 | 2036 | 48,905 | 51,802 |
| M7 | 810 | 172 | 863 | 1983 | 20,744 | 23,850 |
| M8 | 809 | 164 | 854 | 2130 | 26,744 | 29,501 |
Table 4. Total delay optimization results of the DMERPSO algorithm and the comparative algorithms (The bold data represents the best result).
| Dataset | Total Delay | DMERPSO | RPSO | ADFPSO | SpadePSO | L-SHADE | JSO |
| M1 | mean | 1.07 × 10^3 | 7.52 × 10^4 (+) | 2.31 × 10^4 (+) | 8.88 × 10^4 (+) | 1.59 × 10^4 (+) | 9.43 × 10^3 (+) |
| | std. | 1029.6 | 2761 | 1847.3 | 1331.8 | 10360.0 | 3859.4 |
| M2 | mean | 2.15 × 10^3 | 7.71 × 10^4 (+) | 2.27 × 10^4 (+) | 8.63 × 10^4 (+) | 1.44 × 10^4 (+) | 1.05 × 10^4 (+) |
| | std. | 1104.8 | 2721.2 | 2586 | 1387.1 | 4938.7 | 1977.0 |
| M3 | mean | 3.16 × 10^3 | 7.83 × 10^4 (+) | 2.20 × 10^4 (+) | 9.22 × 10^4 (+) | 2.03 × 10^4 (+) | 9.80 × 10^3 (+) |
| | std. | 745.27 | 3820.6 | 2271.8 | 888.88 | 13375.1 | 3325.8 |
| M4 | mean | 4.23 × 10^3 | 8.29 × 10^4 (+) | 2.56 × 10^4 (+) | 9.02 × 10^4 (+) | 1.36 × 10^4 (+) | 1.45 × 10^4 (+) |
| | std. | 1097.3 | 3818.9 | 2632.3 | 1560.7 | 8303.3 | 3917.3 |
| M5 | mean | 6.13 × 10^3 | 8.05 × 10^4 (+) | 2.91 × 10^4 (+) | 9.73 × 10^4 (+) | 2.53 × 10^4 (+) | 1.34 × 10^4 (+) |
| | std. | 1061.1 | 2250.6 | 2107.7 | 1611.6 | 12115.3 | 3013.3 |
| M6 | mean | 2.16 × 10^3 | 7.36 × 10^4 (+) | 2.50 × 10^4 (+) | 9.14 × 10^4 (+) | 1.88 × 10^4 (+) | 1.10 × 10^4 (+) |
| | std. | 1499 | 3405.5 | 1627 | 1522.4 | 12011.1 | 3525.0 |
| M7 | mean | 3.27 × 10^3 | 5.64 × 10^4 (+) | 2.29 × 10^4 (+) | 7.83 × 10^4 (+) | 1.85 × 10^4 (+) | 1.31 × 10^4 (+) |
| | std. | 1119.5 | 2496.8 | 2788.4 | 1562.5 | 9224.8 | 3852.6 |
| M8 | mean | 1.89 × 10^3 | 6.02 × 10^4 (+) | 1.65 × 10^4 (+) | 7.83 × 10^4 (+) | 8.21 × 10^3 (+) | 8.25 × 10^3 (+) |
| | std. | 667.87 | 3513 | 2584.3 | 1257.8 | 4020.4 | 2882.0 |
Table 5. Ranks of the algorithms based on the mean Total Delay on each dataset, and the Friedman test result.
| Dataset | DMERPSO | RPSO | ADFPSO | SpadePSO | L-SHADE | JSO |
| M1 | 1 | 6 | 5 | 7 | 4 | 3 |
| M2 | 1 | 6 | 5 | 7 | 4 | 3 |
| M3 | 1 | 6 | 5 | 7 | 4 | 3 |
| M4 | 1 | 6 | 5 | 7 | 3 | 4 |
| M5 | 1 | 6 | 5 | 7 | 4 | 3 |
| M6 | 1 | 6 | 5 | 7 | 4 | 3 |
| M7 | 1 | 6 | 5 | 7 | 4 | 3 |
| M8 | 1 | 6 | 5 | 7 | 3 | 4 |
| Mean rank | 1 | 6 | 5 | 7 | 3.75 | 3.25 |
| Friedman test | χ2 = 24, p = 1.5880 × 10^−8 | | | | | |
Table 6. The Wilcoxon signed-rank results of DMERPSO versus other algorithms (α = 0.05).
| Algorithm | +/≈/− |
| RPSO | 8/0/0 |
| ADFPSO | 8/0/0 |
| SpadePSO | 8/0/0 |
| L-SHADE | 8/0/0 |
| JSO | 8/0/0 |

Share and Cite

MDPI and ACS Style

Gao, W.; Wu, B.; Liu, J.; Tang, D. Flight Schedule Problem Optimization Based on Discrete Memory-Enhanced Restructured Particle Swarm Optimization Algorithm. Algorithms 2026, 19, 233. https://doi.org/10.3390/a19030233

