Article

A Model-Based Optimization Method of ARINC 653 Multicore Partition Scheduling

1 College of Software Engineering, Zhengzhou University of Light Industry, Zhengzhou 450002, China
2 School of Computer Science and Engineering, Northwestern Polytechnical University, Xi’an 710072, China
* Authors to whom correspondence should be addressed.
Aerospace 2024, 11(11), 915; https://doi.org/10.3390/aerospace11110915
Submission received: 12 August 2024 / Revised: 4 November 2024 / Accepted: 5 November 2024 / Published: 7 November 2024
(This article belongs to the Special Issue Aircraft Design and System Optimization)

Abstract

ARINC 653 Part 1 Supplement 5 (ARINC 653P1-5) provides temporal partitioning capabilities for real-time applications running on the multicore processors in Integrated Modular Avionics (IMA) systems. However, it is difficult to schedule a set of ARINC 653 multicore partitions to achieve a minimum processor occupancy. This paper proposes a model-based optimization method for ARINC 653 multicore partition scheduling. The IMA multicore processing system is modeled as a network of timed automata in UPPAAL. A parallel genetic algorithm is employed to explore the solution space of the IMA system. Owing to the lack of a priori information about the system model, the configuration of the genetic operators is self-adaptively controlled by a Q-learning algorithm. During the evolution, each individual in a population is evaluated independently by compositional model checking, which verifies each partition in the IMA system and combines all the schedulability results to form a global fitness evaluation. The experiments show that our model-based method outperforms the traditional analytical methods when handling the same task loads in the ARINC 653 multicore partitions, while alleviating the state space explosion of model checking via parallelization acceleration.

1. Introduction

The ARINC 653 [1] series standards have been widely applied to the design and manufacturing of modern avionics systems, which mainly adopt the Integrated Modular Avionics (IMA) architecture [2]. The IMA system enables the integration of multiple real-time avionic functions on shared processors but raises concerns about fault propagation and system failures due to the loss of physical isolation between applications [3]. To address these issues, ARINC 653 defines a mechanism of partition scheduling where applications run exclusively within their designated time slots, ensuring temporal isolation [4]. As microprocessor technology rapidly advances, multicore processors are being increasingly utilized to enhance the computational capacity and performance of avionic systems [5]. In 2019, ARINC 653 Part 1 Supplement 5 (ARINC 653P1-5) [6] introduced temporal partitioning regulations for multicore processors in IMA systems. However, efficiently scheduling ARINC 653 multicore partitions to minimize processor usage remains a significant challenge.
The objective of ARINC 653 multicore partition scheduling is to develop an optimal partition scheduling table that minimizes processor occupancy while ensuring the schedulability of all the partitions on the multicore processor. Some studies address the partition scheduling problem by simplifying the system model and linearizing the schedulability constraints, using traditional mathematical optimization techniques such as linear programming [7], mixed-integer optimization [8,9,10,11], and geometric programming [12]. However, these methods often rely on conservative, worst-case assumptions, leading to inefficient processor utilization.
Unlike these traditional analytical methods, model checking offers a promising alternative due to its ability to precisely model avionic systems and perform an automated schedulability analysis. The formal model of Timed Automata (TA) is particularly suitable for this purpose because of its rigorous syntax and temporal semantics, which can accurately describe the temporal behaviors of complex IMA systems [13]. The widely used model checker UPPAAL [14] can verify the schedulability properties of a system by exploring the state space of its TA model, providing highly precise schedulability bounds, as demonstrated in related work [15,16,17,18,19,20,21,22].
Nevertheless, applying model checking to partition scheduling in IMA systems presents two significant challenges. First, classic model checking faces the state space explosion problem, where the processing time and memory requirements increase exponentially as more concurrent components are integrated into the system model. Second, multicore and multi-processor scheduling is inherently an NP-hard problem. While model checking excels at verification, it struggles to effectively describe and solve this type of optimization problem, often resulting in an inefficient exhaustive search for the optimal scheduling table.
In this paper, we propose a model-based optimization method for ARINC 653 multicore partition scheduling. The main contributions include the following:
(1)
The formal modeling of ARINC 653 multicore partition scheduling: The IMA multicore processing system is modeled as a network of timed automata in UPPAAL.
(2)
A model-based optimizer: A parallel genetic algorithm is employed to explore the solution space of the IMA system. The configuration of genetic operators is self-adaptively controlled by a Q-learning algorithm.
(3)
Compositional and parallel evaluation: During the evolution, each individual in a population is evaluated independently by compositional model checking. The parallelization acceleration mitigates the state space explosion problem.
The rest of this paper is organized as follows. Section 2 defines the ARINC 653 multicore processing system and its optimization problem. Section 3 provides the framework of the model-based optimization method. Section 4 presents the parallel genetic algorithm and its self-adaptive Q-learning algorithm. In Section 5, we provide the TA models of the system. Section 6 gives the experiments to show the applicability and performance of our method and Section 7 concludes this paper.

2. System and Problem Definition

This section presents the background and the formal definition of ARINC 653 multicore processing systems and provides the optimization problem of partition scheduling.

2.1. Scheduling Policies in ARINC 653

ARINC 653 partition scheduling can be viewed as a two-level hierarchical scheduling system that uses a cyclic Time Division Multiplexing (TDM) global scheduler for the whole system and a Fixed-Priority local scheduler for each partition [6]. In an ARINC 653 partition scheduling system, the static global scheduler is employed to allocate time slots to the partitions. Each partition is activated in a predefined order, ensuring deterministic scheduling.
By contrast, a general hierarchical scheduling system typically involves multiple levels of schedulers, where the higher-level schedulers manage the lower-level components. Each level may have its own scheduling policies, allowing for a combination of static and dynamic scheduling algorithms. This flexibility enables the system to adapt to different application requirements.
In the aircraft industry, there is a sophisticated framework of avionic standards (such as DO-297/ED-124 [23] and ARINC 653) that mandates the above TDM partition scheduling of an IMA system. This static scheduling policy ensures robust temporal isolation between the partitions, thereby preventing fault propagation, which is an essential requirement for safety-critical systems, particularly in the field of avionics. Additionally, in a real IMA project, each hosted application within a partition should be developed and verified independently. Static partition scheduling can guarantee that an application may be designed independently of the other applications and obtain incremental acceptance on an IMA platform independently of the other applications. Conversely, a dynamic partition scheduler could be more flexible and efficient than this standard TDM scheduler, but the temporal behavior of the partitions for each application is then no longer independent of the other applications.
In summary, while ARINC 653 partition scheduling is a specific implementation of a hierarchical scheduling system with a focus on partitioning and temporal isolation, the concept of hierarchical scheduling systems is more flexible and can adapt to a wider variety of scheduling requirements. Nevertheless, the deterministic nature and robust isolation provided by ARINC 653 partition scheduling make it particularly suited for safety-critical applications in avionic systems.

2.2. Definition of ARINC 653 Multicore Processing Systems

ARINC 653P1-5 [6] defines the required services of scheduling and partitioning in multicore environments, providing a Symmetrical Multi-Processing (SMP) configuration that allows multiple processing cores to execute partitions concurrently. This SMP configuration is particularly well suited for a variety of emerging airborne software applications, such as radar, sonar, and image processing, owing to its inherent parallelism.
We focus on an ARINC 653 operating system running on a multicore processor with a core set $C$. As illustrated in Figure 1, the system employs a two-layer partitioned scheduling framework comprising a global and a local layer. In the global layer, the partitions are activated synchronously across all the processor cores by a TDM global scheduler [24]. Within each partition in the local layer, a local scheduler uses a preemptive Fixed-Priority (FP) policy to manage the execution of internal real-time tasks. In addition, the operating system adopts the forced preemption mode to make all the kernel codes preemptible.
The system is defined as $\Omega = \{P_i \mid i = 1, 2, \ldots, n\}$, where the processor time is divided into $n$ temporal partitions and $P_i$ denotes the $i$th one. The partitions are scheduled by the TDM global scheduler cyclically for every major time frame $M$, according to a partition scheduling table $S = \{W_i \mid i = 1, 2, \ldots, n\}$, where $W_i = \{w_k^i \mid k = 1, 2, \ldots, \omega_i\}$ represents the set of $P_i$'s partition time windows (i.e., the time slices allocated to $P_i$). For the partition $P_i$, the $k$th partition time window is defined as $w_k^i = (o_k^i, d_k^i)$, where $o_k^i$ denotes the offset from the start of the major time frame and $d_k^i > 0$ represents the duration of the window. The time windows of the partitions are non-overlapping, thus ensuring absolute temporal isolation between the partitions. For any two partition time windows $w_k^i = (o_k^i, d_k^i)$ and $w_l^j = (o_l^j, d_l^j)$, if $i \neq j$ or $k \neq l$, then we have $o_k^i + d_k^i \leq o_l^j$ or $o_l^j + d_l^j \leq o_k^i$.
Each partition $P_i$ accommodates a real-time task set $\Gamma_i = \{\tau_j^i \mid j = 1, 2, \ldots, m_i\}$, where the tasks within $P_i$ can only be scheduled or executed within the partition time windows of $W_i$. The task scheduling policy within each partition is independent of the system's partition scheduling strategy. A task $\tau_j^i$ is defined as a tuple $(T_j^i, I_j^i, E_j^i, G_j^i, D_j^i, R_j^i, C_j^i)$: $T_j^i$ is the period of the task, $I_j^i$ is the initial offset, $E_j^i$ is the minimum execution time in a period, $G_j^i$ is the maximum execution time in a period, $D_j^i$ denotes the relative deadline, $R_j^i$ denotes the priority, and $C_j^i \subseteq C$ represents the processor core affinity of $\tau_j^i$.
The scheduling table $S$ determines the time allocation to all the partitions. The ARINC 653 standard [6] defines a tuple of scheduling parameters $(p_i, b_i)$ as the periodic time demand of a partition $P_i$, where $p_i$ is the partition period and $b_i$ is the budget within every period $p_i$. Hence, the major time frame $M$ is typically the least common multiple of all the partition periods, i.e., the hyper-period of all the partitions. The schedulability of the system ensures that the time allocation always satisfies the time demand of each partition, where all the tasks meet their deadlines.
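Since the major time frame is the hyper-period of the partition periods, it can be computed directly from the configured periods. The following Python sketch (Python being the implementation language of the optimizer in Section 6.1) uses hypothetical period values chosen only for illustration:
```python
from math import lcm

# Hypothetical partition periods p_i in milliseconds (illustrative values only).
partition_periods = [25, 50, 100]

# The major time frame M is the hyper-period, i.e., the LCM of all partition periods.
M = lcm(*partition_periods)

# k_i = M / p_i is the number of activations of partition P_i per major time frame.
activations = [M // p for p in partition_periods]
print(M, activations)  # 100 [4, 2, 1]
```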
In the scheduling table $S$, there is a set of temporal partitions that conform to their periodic time demand [25]. As illustrated in Figure 2, a partition $P_i$ is activated across all the processor cores synchronously, providing a periodic time service for the internal application. Although $P_i$ achieves its periodic time demand $(p_i, b_i)$, it allows for a certain amount of jitter from the windows. Additionally, each period may contain multiple time windows. For example, $P_i$ has one window in the first period but two in the second, where $d_1^i = d_2^i + d_3^i = b_i$ and $o_1^i \neq o_2^i$.

2.3. Problem Definition

In accordance with the DO-297/ED-124 [23] certification standard, the IMA system integrator gathers the details of the hosted applications (binary, code, and documents) from the application supplier and the models of the platform components (processors, memory, IO) from the platform supplier, calculating a partition scheduling table for the final blueprint configuration of the ARINC 653 multicore processing system [26].
The problem of scheduling partitions within the ARINC 653 multicore system can be formally defined as follows: Given the above system $\Omega = \{P_i \mid i = 1, 2, \ldots, n\}$, the objective is to determine a partition scheduling table $S = \{W_i \mid i = 1, 2, \ldots, n\}$ such that all the tasks $\tau_j^i$ within any partition $P_i$ meet their deadline constraints while simultaneously minimizing the overall processor usage. Specifically, the goal is to minimize the processor occupancy $U$ of the system:
$$U_{min} = \min \frac{\sum_{i=1}^{n} \sum_{k=1}^{\omega_i} \left( d_k^i + v \right)}{M} \quad (1)$$
where $v$ represents the overhead for each partition context switch. This formulation ensures an efficient use of the processor resources while maintaining the real-time constraints required by the system.
Achieving a lower processor occupancy is advantageous because it allows for the accommodation of additional workloads or enables the same workload to be managed by a system with a lower processing speed [12]. It can also be useful to optimize other metrics, such as throughput and latency, in the system. The IMA system integrators can define a corresponding objective function instead of the processor occupancy used here.
However, the definition of processor occupancy in Equation (1) is not used directly, because the unknown number $\omega_i$ of partition time windows makes the objective function $U_{min}$ difficult to solve. Considering the relationship between the temporal supply of the partition time windows $W_i$ in $S$ and the demand of the periodic scheduling parameters $(p_i, b_i)$, we use the scheduling parameters $(p_i, b_i)$ as the unknown variables rather than working with the scheduling table $S$ directly.
Since the major time frame $M$ is the least common multiple of all the partition periods $p_i$, we assume that $M = k_i p_i$, $k_i \in \mathbb{N}^+$. Thus, it satisfies
$$\sum_{k=1}^{\omega_i} d_k^i = k_i b_i \quad (2)$$
Let $c_j^i$ be the number of context switches for the partition $P_i$ during its $j$th period within $M$. The processor occupancy is defined as follows:
$$U = \frac{\sum_{i=1}^{n} \sum_{k=1}^{\omega_i} \left( d_k^i + v \right)}{M} = \sum_{i=1}^{n} \frac{1}{k_i p_i} \left( k_i b_i + \sum_{j=1}^{k_i} c_j^i v \right) = \sum_{i=1}^{n} \frac{b_i + c_i v}{p_i} \quad (3)$$
where $c_i$ is the average number of context switches for the partition $P_i$ per period.
Finally, the optimization problem of ARINC 653 multicore partition scheduling involves identifying a $2n$-dimensional vector $\mathbf{x} = (x_1, x_2, \ldots, x_{2n})^T \in \mathbb{R}_{+}^{2n}$, where $x_{2i-1} = p_i$ and $x_{2i} = b_i$ constitute the periodic scheduling parameters $(p_i, b_i)$ of the partition $P_i$, such that all the tasks $\tau_j^i$ in $P_i$ meet their deadline constraints, while minimizing the processor occupancy:
$$U_{min} = \min \sum_{i=1}^{n} \frac{b_i + c_i v}{p_i} \quad (4)$$
Here, the vector $\mathbf{x}$ of the unknown variables captures the temporal demand of all the partitions in the system; hence, $\mathbf{x}$ is referred to as the demand vector.
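To make the objective function concrete, the following sketch evaluates Equation (4) for a candidate demand vector; the numeric values and the fixed average context-switch counts are illustrative assumptions, not data from the experiments:
```python
def processor_occupancy(x, c_avg, v):
    """Evaluate Equation (4): U = sum_i (b_i + c_i * v) / p_i.

    x     -- demand vector [p_1, b_1, p_2, b_2, ...] of period/budget pairs
    c_avg -- average number of context switches per period for each partition
    v     -- overhead of a single partition context switch
    """
    n = len(x) // 2
    return sum((x[2 * i + 1] + c_avg[i] * v) / x[2 * i] for i in range(n))

# Two illustrative partitions with (p, b) = (50, 10) and (100, 30), v = 2 ms.
print(processor_occupancy([50, 10, 100, 30], c_avg=[1, 1], v=2))  # 0.56
```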
The challenge of ARINC 653 partition scheduling is fundamentally a complex combinatorial optimization problem. The solution space is typically too vast to allow for effective brute-force searching in a practical IMA project [12]. Therefore, this paper uses a genetic algorithm to explore the scheduling parameter space to find optimal solutions for all the partitions, thereby generating an efficient partition scheduling table.

3. Framework of the Model-Based Optimization Method

In this section, we provide the framework of our model-based method for the above optimization problem and show the design of each component in the optimizer.

3.1. Model-Based Optimizer

As shown in Figure 3, the model-based optimizer primarily involves an iterative evolutionary process. The constituent components are described as follows.
(1)
Genetic operators
In the parallel genetic algorithm of the optimizer, the first generation of a population $\beta^1 = \{I_k^1 \mid k = 1, 2, \ldots, K\}$ is randomly initialized with $K$ individual vectors $I_k^1 = (\alpha_1^{1,k}, \alpha_2^{1,k}, \ldots, \alpha_{2n}^{1,k})^T$. For each generation $g \in \{1, 2, \ldots, G\}$, genetic operators are invoked to recombine and mutate the individuals in a population, generating the next generation of the population $\beta^{g+1}$. The individuals within a population are evaluated concurrently through model checking. Upon the completion of $G$ generations of evolution, the optimizer identifies the optimal individual vector and its corresponding partition scheduling table $S_{opt}$, which achieves the minimum processor occupancy $U_{min}$.
(2)
Self-adaptive configuration based on Q-learning
Our earlier work has shown that the configuration of the genetic operators greatly influences the optimization results, with different settings producing significantly varied processor utilization levels. Owing to the lack of a priori information about the system model, our parallel genetic algorithm uses a Q-learning mechanism to adaptively adjust the configuration of the genetic operators. This approach views the population during evolution as the external “environment”. By repeatedly executing different “actions” and receiving corresponding “rewards” from this environment, the optimizer learns an effective control strategy. The design of the genetic operators and the Q-learning algorithm are presented in Section 4.
(3)
Temporal demand decoder
The temporal demand decoder performs the transformation of the individual vector $I = (\alpha_1, \alpha_2, \ldots, \alpha_{2n})^T$ into the demand vector $\mathbf{x} = (x_1, x_2, \ldots, x_{2n})^T$. Our model-based optimizer manipulates the encoded individual vectors rather than the demand vectors. A demand vector is encoded into an individual vector in polar coordinates by using the following rules:
$$\alpha_{2i-1} = \arctan \left( x_{2i} / x_{2i-1} \right), \quad \alpha_{2i} = \sqrt{x_{2i-1}^2 + x_{2i}^2} \quad (5)$$
which extract the similarities from selected individuals in the recombination operation of the parallel genetic algorithm. The elements $\alpha_{2i-1}$ and $\alpha_{2i}$ denote the slope angle of the partition's processor utilization and the order of magnitude of its temporal demand, respectively. The temporal demand decoder enhances search efficiency by increasing the survival rate of the offspring during evolution.
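A minimal sketch of the encoding in Equation (5) and the inverse transformation performed by the decoder is given below; it assumes plain Python lists for the vectors:
```python
import math

def encode(x):
    """Encode a demand vector [p_1, b_1, p_2, b_2, ...] into polar coordinates (Equation (5))."""
    alpha = []
    for i in range(0, len(x), 2):
        p, b = x[i], x[i + 1]
        alpha.append(math.atan2(b, p))   # slope angle, reflecting the utilization ratio b/p
        alpha.append(math.hypot(p, b))   # magnitude of the temporal demand
    return alpha

def decode(alpha):
    """Invert Equation (5), turning an individual vector back into a demand vector."""
    x = []
    for i in range(0, len(alpha), 2):
        theta, length = alpha[i], alpha[i + 1]
        x.append(length * math.cos(theta))   # period p_i
        x.append(length * math.sin(theta))   # budget b_i
    return x

print(decode(encode([100.0, 25.0])))  # approximately [100.0, 25.0]
```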
(4)
Scheduling table generator
The scheduling table generator produces a partition scheduling table $S = \{W_i \mid i = 1, 2, \ldots, n\}$ based on the demand vector $\mathbf{x}$, ensuring the strict periodicity of the time slots for each partition. This module employs the generation algorithm detailed in our previous work [22], where the priorities of the partitions are determined according to their jitter constraints, with smaller jitter values assigned a higher priority.
(5)
Schedulability verification
In the optimizer, a compositional approach is utilized to verify the schedulability of the system model with an input partition scheduling table $S$. The strict temporal isolation of the $n$ partitions in the system allows for the simultaneous and independent verification of their temporal behaviors via model checking. Consequently, the global schedulability is determined from the positive local results of all the partitions, significantly reducing the state space explosion associated with a monolithic system model. The modeling framework and compositional approach are detailed in Section 5.
(6)
Fitness evaluation
According to the results of the schedulability verification, the optimizer calculates the fitness value of an individual vector $I$, which is associated with an original partition scheduling table $S$. These fitness values guide the genetic algorithm in exploring the parameter space of the partition scheduling table. A higher fitness value means a superior individual, increasing the likelihood of its features being retained in the next generation. We define the fitness function of an individual $I$ as follows:
$$f(I) = rN + 1 - \frac{\sum_{i=1}^{n} \sum_{k=1}^{\omega_i} \left( d_k^i + v \right)}{M} \quad (6)$$
where $N$ is the number of partitions that satisfy schedulability, $\left( \sum_{i=1}^{n} \sum_{k=1}^{\omega_i} (d_k^i + v) \right) / M$ denotes the processor occupancy of the system, and $r$ is a user-defined constant such as the integer 100.
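A short sketch of this fitness evaluation is shown below, assuming the reading of Equation (6) given above (a reward of r per schedulable partition plus one minus the processor occupancy):
```python
def fitness(schedulable_flags, window_durations, M, v, r=100):
    """Fitness of an individual per Equation (6).

    schedulable_flags -- one bool per partition, from compositional model checking
    window_durations  -- all window durations d_k^i in the scheduling table
    M                 -- major time frame
    v                 -- context-switch overhead per partition window
    r                 -- user-defined constant (e.g., 100)
    """
    N = sum(schedulable_flags)                              # schedulable partitions
    occupancy = sum(d + v for d in window_durations) / M    # processor occupancy
    return r * N + 1 - occupancy
```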

3.2. Algorithm Description of Model-Based Optimization

The pseudo-code presented in Algorithm 1 outlines the workflow of the optimization method. Users input the set of TA templates $M$ with the configuration of task sets, finally acquiring the optimal partition scheduling table $S_{opt}$ from the system.
Algorithm 1: Model-based Optimization
Input: $M$, the timed automaton templates of the system
Output: $S_{opt}$, the optimal partition scheduling table

[The pseudo-code of Algorithm 1 is provided as a figure in the original article; its workflow is described below.]
The optimizer first initializes a population and strategy vector randomly. Lines 2 to 21 outline the iterative process of the genetic algorithm over $G$ generations. For each generation, the algorithm evaluates all the $K$ individuals in the population in parallel (using “par-do”) between Lines 3 and 11. Each partition is further processed in parallel (Lines 6~9) by model checking in UPPAAL, which is the costliest operation during an evolution. This high degree of parallelization significantly reduces the model size during model checking, thereby mitigating the state space explosion associated with schedulability verification. Line 10 collects the schedulability results of the $n$ partitions in the $k$th individual $I_k^g$ and evaluates the fitness value of $I_k^g$.
Lines 12 to 20 define the reproduction operations in the genetic algorithm (for generations $1 \sim G-1$). In Lines 13~15, the population $\beta^g$ undergoes selection, recombination, and mutation operations to form the next population $\beta^{g+1}$. The mutation operator is controlled by the strategy vector $\sigma^g$. In Lines 16~19, the Q-learning engine chooses actions to adjust the strategy vector $\sigma^g$ according to the Q-values and updates the Q-table for the next generation. By collecting the accumulated rewards of the mutations, the optimizer acquires an effective control strategy, thereby accelerating the optimization process. Finally, the optimizer returns the optimal partition scheduling table $S_{opt}$, which represents the highest fitness value identified throughout the evolutionary process.

4. The Parallel Genetic Algorithm

This section presents the parallel genetic algorithm including its genetic operators and self-adaptive configuration based on the Q-learning algorithm.

4.1. The Genetic Operators

During an evolution, a population $\beta^g = \{I_k^g \mid k = 1, 2, \ldots, K\}$ is handled by the three genetic operators (selection, recombination, and mutation), finally outputting the next generation of the population $\beta^{g+1} = \{I_k^{g+1} \mid k = 1, 2, \ldots, K\}$.

4.1.1. Selection Operator

The selection operator is designed to select $K$ pairs of individuals from the population $\beta^g$ based on their fitness evaluations. In the genetic algorithm, there is still a high likelihood of producing low-fitness individuals, especially those with invalid scheduling parameters. Thus, we employ an exponential ranking selection operator that applies a higher selective pressure, favoring better individuals while maintaining a balanced fitness distribution.
The exponential ranking selection operates as follows: The $K$ individuals in $\beta^g$ are first ranked according to their fitness values, from the worst (rank 1) to the best (rank $K$). The selection probability $\rho_i$ for the $i$th individual is calculated using the following exponentially weighted formula [27]:
$$\rho_i = \frac{c^{K-i}}{\sum_{k=1}^{K} c^{K-k}} \quad (7)$$
where $c \in (0, 1)$ defines the selective pressure. A smaller $c$ increases the pressure, making it more likely for the best-fit individuals to be selected. The selected $K$ pairs of individuals then serve as parents to generate the offspring population $\beta^{g+1}$.
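The following sketch implements this selection rule for a single parent per call; the base c = 0.8 mirrors the configuration used later in the experiments:
```python
import random

def exponential_ranking_select(population, fitnesses, c=0.8):
    """Pick one parent by exponential ranking selection (Equation (7)).

    Individuals are ranked from worst (rank 1) to best (rank K); the i-th ranked
    individual is drawn with probability proportional to c**(K - i).
    """
    K = len(population)
    order = sorted(range(K), key=lambda idx: fitnesses[idx])   # indices, worst ... best
    weights = [c ** (K - i) for i in range(1, K + 1)]          # weight of rank i
    chosen_rank = random.choices(range(K), weights=weights, k=1)[0]
    return population[order[chosen_rank]]
```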

4.1.2. Recombination Operator

The recombination operator aims to extract similarities from the selected individuals during evolution [27]. We employ individual vectors in the polar coordinates, rather than demand vectors, to mix information from the parent individuals about the utilization ratio $u_i = b_i / p_i$ for each partition $P_i$ ($i = 1, 2, \ldots, n$).
Let $I_1^g = (\alpha_1^{g,1}, \alpha_2^{g,1}, \ldots, \alpha_{2n}^{g,1})^T$ and $I_2^g = (\alpha_1^{g,2}, \alpha_2^{g,2}, \ldots, \alpha_{2n}^{g,2})^T$ be a pair of selected parents at the $g$th generation. We adopt an intermediate recombination operator that generates an offspring individual $\bar{I}_1^{g+1} = (\bar{\alpha}_1^{g+1,1}, \bar{\alpha}_2^{g+1,1}, \ldots, \bar{\alpha}_{2n}^{g+1,1})^T$ using
$$\bar{\alpha}_k^{g+1,1} = \alpha_k^{g,1} + \lambda_k \left( \alpha_k^{g,2} - \alpha_k^{g,1} \right), \quad k = 1, 2, \ldots, 2n \quad (8)$$
where $\lambda_k \sim U(-d, 1+d)$ is a scaling factor chosen uniformly at random over the interval $[-d, 1+d]$ for each variable.
As shown in Figure 4, a recombination operation extracts the similarities of two parent individuals $I_1^g$ and $I_2^g$ with respect to the utilization ratio $u_i = b_i / p_i$. Let $\mathbf{x} = (x_1, x_2, \ldots, x_{2n})^T$ and $\mathbf{y} = (y_1, y_2, \ldots, y_{2n})^T$ be the demand vectors of $I_1^g$ and $I_2^g$, respectively. For any partition $P_i$, its period $p_i$ is presented on the X-axis and its budget $b_i$ on the Y-axis. According to the encoding rules in Equation (5), it holds that $\alpha_{2i-1}^{g,1} = \theta_x = \arctan(x_{2i}/x_{2i-1})$ and $\alpha_{2i-1}^{g,2} = \theta_y = \arctan(y_{2i}/y_{2i-1})$ are slope angles, and $\alpha_{2i}^{g,1} = l_x = \sqrt{x_{2i-1}^2 + x_{2i}^2}$ and $\alpha_{2i}^{g,2} = l_y = \sqrt{y_{2i-1}^2 + y_{2i}^2}$ are Euclidean distances. In Figure 4, the gray sector shows the range of the offspring variables combined from the variables of the parents. They mix similar features and produce a child $\bar{I}_1^{g+1}$ whose demand vector is decoded into $\mathbf{z} = (z_1, z_2, \ldots, z_{2n})^T$.
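A sketch of this operator on the encoded (polar) vectors is shown below; the interval [-d, 1+d] for the scaling factor follows the usual extended intermediate recombination and is an assumption about the sign lost in the extracted formula:
```python
import random

def intermediate_recombination(parent1, parent2, d=0.5):
    """Intermediate recombination (Equation (8)) applied to encoded individual vectors.

    Each offspring gene is alpha1 + lambda * (alpha2 - alpha1), with lambda drawn
    uniformly from [-d, 1 + d] so the child may lie slightly outside its parents.
    """
    return [a1 + random.uniform(-d, 1 + d) * (a2 - a1)
            for a1, a2 in zip(parent1, parent2)]
```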

4.1.3. Mutation Operator

Figure 4 also demonstrates the time demand of a partition $P_i$ in the final offspring demand vector obtained after mutating the individual of $\mathbf{z}$. In the genetic algorithm, the mutation operator provides the main source of genetic variation to keep a certain degree of population diversity. A recombination operation yields an offspring individual $\bar{I}_1^{g+1} = (\bar{\alpha}_1^{g+1,1}, \bar{\alpha}_2^{g+1,1}, \ldots, \bar{\alpha}_{2n}^{g+1,1})^T$. Given $\bar{I}_1^{g+1}$, the mutation operator produces a new individual $I_1^{g+1} = (\alpha_1^{g+1,1}, \alpha_2^{g+1,1}, \ldots, \alpha_{2n}^{g+1,1})^T$ using
$$\alpha_k^{g+1,1} = \bar{\alpha}_k^{g+1,1} + \Delta_k, \quad k = 1, 2, \ldots, 2n \quad (9)$$
where $\Delta_k \sim N(0, \sigma_k)$ is a random sample from a normal distribution $N(0, \sigma_k)$.
The two variables of a partition $P_{\lceil k/2 \rceil}$ are selected for the mutation operation. Each of the variables is mutated independently with the offset $\Delta_k$. Hence, all the standard deviations constitute a mutation strategy vector $\boldsymbol{\sigma} = (\sigma_1, \sigma_2, \ldots, \sigma_{2n})^T$ that controls the mutation strength of the current population. In Figure 4, the strategy parameters $\sigma_{2i-1}$ and $\sigma_{2i}$ affect the partition $P_i$, adjusting the slope angle of its processor utilization and the order of magnitude of its temporal demand, respectively.
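The corresponding mutation step is a simple element-wise Gaussian perturbation, sketched here with the strategy vector passed in explicitly:
```python
import random

def mutate(individual, sigma):
    """Gaussian mutation (Equation (9)): each encoded variable receives an
    independent offset drawn from N(0, sigma_k), where sigma is the current
    mutation strategy vector."""
    return [a + random.gauss(0.0, s) for a, s in zip(individual, sigma)]
```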

4.2. Self-Adaptive Configuration Based on Q-Learning Algorithm

A Q-learning algorithm is defined to adaptively adjust the configuration of the genetic operators. Figure 5 shows the process of reinforcement learning in the optimizer. The population during the evolution is viewed as the external “environment”. We perform a set of “actions”, receiving corresponding “rewards” from the environment, thereby making the optimizer learn an effective mutation strategy vector $\boldsymbol{\sigma} = (\sigma_1, \sigma_2, \ldots, \sigma_{2n})^T$.
The Q-learning algorithm consists of a set of states $S$, a set of actions $A$, and a state–action evaluation function $Q(s, a)$, where $s \in S$ and $a \in A$. The components are defined as follows:
(1)
State: Based on the changes in the best fitness during the evolution, we define a state set $S = \{INC, STABLE_1, STABLE_2, \ldots, STABLE_\vartheta\}$. The state $INC$ indicates an increase in the best fitness of the current population compared to the previous generation. The states $STABLE_1 \sim STABLE_\vartheta$ represent the fact that the best fitness has remained unchanged for $1 \sim \vartheta$ generations, respectively. $\vartheta$ is typically an integer between 10 and 50.
(2)
Action: The actions adjust the strategy parameters during evolution, with three options: increase, decrease, and remain unchanged, represented by the symbols ↑, ↓, and →, respectively, forming the action set $A = \{\uparrow, \downarrow, \rightarrow\}$.
(3)
Q-function: The Q-function [28] $Q(s, a)$, returning a Q-value, evaluates the expected cumulative reward for any state–action pair $(s, a)$, determining the long-term benefit of taking action $a$ in state $s$. The Q-function is recursively defined as follows:
$$Q(s^g, a^g) = (1 - \tau_r) Q(s^g, a^g) + \tau_r \left[ r^{g+1} + \gamma Q(s^{g+1}, a^{g+1}) \right] \quad (10)$$
where the symbols include the following:
  • $g$: the generation number of the evolution.
  • $s^g$: the population state at the $g$th generation.
  • $a^g$: the action taken for the population at the $g$th generation.
  • $s^{g+1}$: the population state at the $(g+1)$th generation.
  • $a^{g+1}$: the action taken for the population at the $(g+1)$th generation.
  • $\tau_r$: the learning rate $\tau_r \in (0, 1)$ for strategy parameter adjustment.
  • $\gamma$: the discount factor $\gamma = 0.5$, balancing the delayed and immediate rewards.
  • $r^{g+1}$: the immediate reward obtained by performing action $a^g$ in state $s^g$.
This Q-function allows the genetic algorithm to approximate the exact cumulative reward iteratively throughout the evolutionary process.
The immediate reward function $r(s^g, a^g)$ returns the immediate feedback from the environment to the Q-learning agent at the $g$th generation. According to the next state $s^{g+1}$ of the population, the immediate reward function is defined as
$$r(s^g, a^g) = \begin{cases} \tilde{r}, & \delta(s^g, a^g) = INC \\ 0, & \delta(s^g, a^g) = STABLE_i, \; i \in \{1, 2, \ldots, \vartheta\} \end{cases} \quad (11)$$
where the symbols include the following:
  • $\delta(s^g, a^g)$: an implicit state transition function that indicates the population state $s^{g+1}$ of the $(g+1)$th generation.
  • $\tilde{r}$: a positive integer constant that represents the positive reward value provided to the Q-function.
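A minimal sketch of the Q-value update of Equation (10) and the reward of Equation (11), using a plain dictionary as the Q-table; the learning rate and the reward constant here are assumed values:
```python
def q_update(Q, s, a, s_next, a_next, reward, tau_r=0.1, gamma=0.5):
    """One recursive Q-value update per Equation (10); tau_r is an assumed learning rate."""
    Q[(s, a)] = ((1 - tau_r) * Q.get((s, a), 0.0)
                 + tau_r * (reward + gamma * Q.get((s_next, a_next), 0.0)))

def immediate_reward(next_state, r_pos=10):
    """Reward per Equation (11): a positive constant if the best fitness increased, else 0."""
    return r_pos if next_state == "INC" else 0
```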
The goal of Q-learning is to produce a policy function $\pi: S \to A$, which can automatically select the next action $a^g \in A$ based on the current population state $s^g \in S$, expressed as $\pi(s^g) = a^g$. The policy function $\pi(s^g)$ always takes the action $a^g$ with the maximum Q-value
$$\pi(s^g) = \arg\max_{a \in A} Q(s^g, a) \quad (12)$$
and is generated iteratively by updating the Q-values.
Considering the diversity of the genetic configuration, we adopt an $\epsilon$-greedy strategy $\pi_\epsilon(s^g)$ to explore the potential action $a^g$:
$$\pi_\epsilon(s^g) = \begin{cases} \pi(s^g), & \text{with probability } 1 - \epsilon \\ \mathrm{Sample}(A), & \text{with probability } \epsilon \end{cases} \quad (13)$$
where the symbols include the following:
  • $\pi(s^g)$: the policy function value at the $g$th generation.
  • $\mathrm{Sample}(A)$: a sampling function $\mathrm{Sample}: A \to A$ that maps an element in $A$ to itself, representing the process of uniformly selecting an element from the set $A$ and returning the selected element.
  • $\epsilon$: the probability of exploring the action space, normally defined as $\epsilon = 1/(g+1)$.
The Q-learning mechanism described in this section is used to control the mutation strategy parameters. Each invocation of the mutation operator executes the policy function $\pi_\epsilon(s^g)$ to explore the action space $A$, updating the stored Q-values according to Equation (10). The chosen action $a^g$ affects the value of the strategy parameters in the next generation, thereby adjusting the mutation strength of the population.
Let $\sigma^g = (\sigma_1^g, \sigma_2^g, \ldots, \sigma_{2n}^g)$ be the strategy vector at generation $g$. The strategy vector $\sigma^{g+1} = (\sigma_1^{g+1}, \sigma_2^{g+1}, \ldots, \sigma_{2n}^{g+1})$ at the next generation can be obtained by
$$\sigma_j^{g+1} = q^g \sigma_j^g e^{\zeta N_j(0,1)}, \quad j \in \{1, 2, \ldots, 2n\} \quad (14)$$
where $q^g \in \{1/2, 1, 2\}$ is the factor corresponding to the actions ↓, →, and ↑, respectively. In the exponent of the natural constant $e$, the factor $\zeta = 1/(2n)$ represents the learning rate, and $N_j(0,1)$ denotes a randomly sampled value from the standard normal distribution. The updated strategy parameters in $\sigma^{g+1}$ are subsequently applied to the mutation of all the individuals in the population at generation $g+1$.
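The sketch below ties the pieces of this subsection together: an epsilon-greedy action choice (Equations (12) and (13)) followed by the strategy-vector update of Equation (14). The action names are stand-ins for the increase/keep/decrease symbols of the action set:
```python
import math
import random

ACTIONS = {"decrease": 0.5, "keep": 1.0, "increase": 2.0}   # factors q^g

def choose_action(Q, state, g):
    """Epsilon-greedy policy (Equations (12)-(13)) with epsilon = 1 / (g + 1)."""
    eps = 1.0 / (g + 1)
    if random.random() < eps:
        return random.choice(list(ACTIONS))                       # explore
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))     # exploit

def update_strategy(sigma, action):
    """Update the mutation strategy vector per Equation (14)."""
    zeta = 1.0 / len(sigma)          # learning rate 1/(2n), since len(sigma) == 2n
    q = ACTIONS[action]
    return [q * s * math.exp(zeta * random.gauss(0.0, 1.0)) for s in sigma]
```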

5. Formal Model of ARINC 653 Multicore Scheduling Systems

An ARINC 653 multicore scheduling system is modeled as a network of timed automata in UPPAAL. In this section, we define three types of UPPAAL templates for the partition scheduler, task scheduler, and task in the system. By instantiating the templates within the partition $P_i$, we derive the instance model $M_i$ that encapsulates the behavior of the partition to support compositional model checking.

5.1. Partition Scheduler

The partition scheduler model delivers scheduling services to a partition $P_i$. This model activates the partition exclusively during its designated time windows $W_i$ within each major time frame.
The ARINC 653 partition schedule $S$ is organized within a structure array PST. Each element of the array corresponds to a partition time window comprising three integer fields: part, off, and dura, where part serves as the partition identifier, off denotes the offset of the window, and dura indicates the duration of the window. The model incorporates two functions, winBegin and winEnd, which accept a parameter w and return the start and end time of the wth window, respectively, as determined by the partition schedule table PST. The major time frame is defined as an integer constant M. Given that the partition scheduler operates as a time-driven model, a clock variable t is established to measure the time within each interval of M, thereby regulating the partition scheduling behavior.
As illustrated in Figure 6, the partition scheduler template consists of three locations: Start, Outside, and Inside. The initial location Start implements a conditional control structure to ascertain the partition state at the commencement of each major time frame: if a time window of the partition begins at the onset of a major time frame, the model transitions to the Inside location; otherwise, it proceeds to the Outside location. The Inside and Outside locations represent the states of being within and outside the partition, respectively. Subsequently, the model evaluates its current state based on the value of the clock t. Upon entering a new partition time window, the model immediately transitions to the Inside location and notifies the task schedulers of the partition through the broadcast channel enter. Conversely, when exiting the current partition time window, the model shifts to the Outside location and utilizes the broadcast channel exit to inform the task schedulers, thereby halting all the tasks running within the partition.

5.2. Task Scheduler

In an ARINC 653 multicore system, each processor core activates the partition $P_i$ synchronously and meanwhile enables the task scheduler of $P_i$. As depicted in Figure 7, the task scheduler template outlines the scheduling of the tasks of $P_i$ according to a preemptive Fixed-Priority (FP) policy. The task scheduler model acquires the current state of the partition $P_i$ from its corresponding partition scheduler model through two channels: enter and exit. In addition, the two channels ready and finish serve as scheduling commands to manage the tasks of $P_i$.
The task scheduler template employs the integer variable running to denote the task currently being executed on the processor core and declares a ready queue rq to record all the ready tasks in descending order of priority. This ready queue is equipped with four functions: enque inserts task identifiers into the queue according to the priority order; deque removes the front element from the queue; and front and rqLen return the front task and the length of the queue, respectively.
The template features two primary locations: WaitPartition and InPartition. At WaitPartition, the model is outside the partition and does not respond to any new ready commands for task scheduling. Conversely, the InPartition state indicates that the current time is within the partition, allowing tasks to be scheduled and executed on the processor.
A task is associated with its task scheduler on the processor core rid based on the processor affinity property of the task. When a task becomes ready, it sends a ready command to its corresponding task scheduler model. The task scheduler consistently allocates the processor to the highest-priority task in rq. If a new task with a higher priority than the front element becomes ready, the task scheduler will insert this task into rq, interrupt the currently running task using the resetRunning function, and schedule the newly selected task for execution by updating the running variable. Upon completion of its execution on the processor, a task sends a finish command to the task scheduler model. If the queue is not empty, the front element in rq will be scheduled for execution.

5.3. Task Template

The temporal behaviors of a real-time task $\tau_j^i$ are modeled as a task template in Figure 8. There are two clock variables t and x in the task template. The clock t measures the time within each task period and monitors whether the response time exceeds the designated deadline. The clock x tracks the execution time of the task utilizing the processor core during the current period. Note that the clock x remains active only while the task is in a running state. It can be paused by assigning a value of zero to its derivative x’ when the task is not running on the processor.
There are three normal locations WaitInitialOffset, Ready, and WaitNextPeriod in the template. The model begins at WaitInitialOffset, where it remains for the initial offset $I_j^i$ of the task $\tau_j^i$. Following this, the task transitions to the Ready location, emitting a ready notification. The model resides at Ready for a variable duration, which ranges between the best-case execution time $E_j^i$ and the worst-case execution time $G_j^i$, as determined by the functions BCET and WCET, respectively. Upon completion of its execution within this period, the task model sends a message to the task scheduler via the finish channel, meanwhile transitioning to WaitNextPeriod. The model is then expected to return to the initial location WaitInitialOffset and repeat the above actions in the subsequent period.
The task template additionally designates a special location Error, which signifies that the task $\tau_j^i$ has missed its deadline $D_j^i$. A Boolean variable error is set to true whenever any task model enters the Error location. Thus, we assess the schedulability of the system by verifying the following temporal property:
A[] not error
which indicates that the Error state is unreachable in any task model, thereby ensuring the system’s reliability in meeting deadlines.
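In practice, such a property can be checked non-interactively with UPPAAL's command-line verifier; the sketch below shows one way to drive it from Python, noting that the query file name and the exact output text being matched are assumptions about a typical setup:
```python
import subprocess

def verify_partition(model_xml, query_file="deadline.q"):
    """Run UPPAAL's command-line verifier (verifyta) on one partition model.

    The query file is assumed to contain the safety property 'A[] not error'.
    Returns True if verifyta reports the property as satisfied.
    """
    result = subprocess.run(["verifyta", model_xml, query_file],
                            capture_output=True, text=True)
    # verifyta prints a line such as "Formula is satisfied" for each passing query
    # (the exact wording may differ between UPPAAL versions).
    return "Formula is satisfied" in result.stdout
```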
In accordance with the ARINC 653 [6], malfunctioning or misbehaving tasks are addressed in two specific scenarios:
(1)
If a task triggers a “Health-Monitoring (HM) error” detected by the OS, the kernel will not permit the faulted task to continue to run (i.e., fatal error). A faulted task must be stopped immediately before it can be restarted. This behavior is modeled as the transition to the Error location.
(2)
Tasks that report an application error will continue to be eligible to run. This case relies on implementation dependent behavior instead of being managed by HM, commonly causing the variance in execution times of related tasks. This feature can be described as the non-deterministic execution time in the task model.

6. Experiments

In this section, the model-based approach is applied to partition scheduling for two IMA multicore processing systems using a cluster computing platform. We first optimize the partition scheduling of a simple avionics system, demonstrating our method's ability to find globally optimal solutions. We then address a more complex avionics workload, generating a feasible scheduling table within acceptable time and memory constraints. Additionally, we compare the performance of our approach with the traditional analytical methods and classical genetic algorithms.

6.1. Implementation of Optimization Method

The optimizer is developed in Python and deployed in a cluster computing environment, enabling a highly parallelized execution to accelerate both the model-based analysis and the search for optimal solutions. Table 1 presents the configuration of the cluster computing environment. The cluster comprises four interconnected nodes, each linked via 40 Gbps InfiniBand connections. To facilitate parallel model checking and genetic evolution, the cluster is equipped with a total of 128 processor cores (32 cores per node). Each node is allocated 512 GB of dedicated memory and shares access to 16 TB of disk storage.
The software configuration is based on the Ubuntu Linux operating system, employing the Message Passing Interface (MPI) protocol to partition and aggregate the data across the computing nodes within the cluster. The MPI services are implemented using the mpi4py and Open MPI libraries. The optimizer utilizes the 64-bit Linux version of the UPPAAL model checker. We implemented the basic Q-learning algorithm as defined in Section 4.2. The workloads of the original training dataset are extracted from the technical report [29].
As illustrated in Figure 9, the model-based optimizer is implemented with the following four modules:
  • Input manager: This module receives the configuration of the optimizer from the users’ Initialization (INI) file. A dedicated Configuration class manages various parameters including the ranges of periods, budgets, and iterations, the settings for genetic operators, fitness function, model checker, and Q-learning. The input manager invokes the Python library ConfigParser to access the INI files.
  • Model generator: This module generates a TA model of the system for each valid individual in the population during the genetic evolution process. Based on the temporal demand of the individual, a partition scheduling table is produced by the generation algorithm and imported into the TA model. The system models, formatted as XML (eXtensible Markup Language) files, are utilized as input for UPPAAL. This module leverages the Python library of the XML Document Object Model (DOM) to access and construct the XML model files.
  • Genetic algorithm: This module incorporates a suite of genetic operators—namely selection, crossover, and mutation—alongside the fitness function, all of which are designed to facilitate parallel processing using the Message Passing Interface (MPI). The MPI services are provided by the Open MPI library and accessed through the Python package mpi4py. The genetic algorithm manipulates the objects of the two Python classes GAPopulation and GAIndividual, which define the populations and individuals, respectively. For each individual, the fitness function invokes the UPPAAL verifier to evaluate the schedulability and processor occupancy of the corresponding system model. By collecting the reward from the fitness values during evolution, the Q-learning algorithm invokes a Python class GAStrategy to update the strategy vector and Q table.
  • Output manager: This module conducts a statistical analysis of the evolutionary process, providing the statistical results of each generation and finally delivering the optimal partition scheduling table to users. The Python library Openpyxl is employed to output statistical information to spreadsheets.
Note that the configuration of the time slices (such as those in the following experiments) can be defined as required in the ARINC 653 configuration of an IMA system, largely depending on the hosted applications' requirements rather than on any specific embedded hardware.
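As a rough illustration of how the per-individual fitness evaluations can be distributed with mpi4py (the actual optimizer also parallelizes the per-partition model checking within each evaluation), consider the following sketch:
```python
from mpi4py import MPI

def parallel_fitness(individuals, evaluate):
    """Scatter candidate individuals across MPI ranks, evaluate them locally,
    and gather the per-rank fitness lists on rank 0 (a simplified sketch)."""
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Rank 0 splits the population into one chunk per rank (round-robin).
    chunks = [individuals[i::size] for i in range(size)] if rank == 0 else None
    local = comm.scatter(chunks, root=0)

    local_fitness = [evaluate(ind) for ind in local]

    # Rank 0 receives a list of per-rank fitness lists; other ranks receive None.
    return comm.gather(local_fitness, root=0)
```
Such a function would typically be launched with mpiexec across the cluster nodes; re-assembling the gathered results in the original population order is omitted here for brevity.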

6.2. Experiment on a Simple Avionics System

The first experiment is to perform the ARINC 653 dual-core partition scheduling for the avionic task sets in [30]. As shown in Table 2, there are two partitions in this simple avionic system, where the time unit is 1 ms and the overhead for each partition context switch is 2 ms. We generate the optimal partition scheduling table by using the following four methods.
(1)
Exhaustive search (ES): In [30], the schedulability of the system was determined by a response time analysis. This method verified all the possible integer combinations of the partition periods within the given range [40, 2000]. For each combination, a binary search was employed to find the minimum partition budget. In this experiment, we check the schedulability by model checking. Although an exhaustive search can find the global optimal solution, it is only practical for simple systems. Its feasibility decreases significantly as the solution space and system complexity increase.
(2)
Geometric programming: In [12], Geometric Programming (GP) was utilized to model and solve the optimization problem of partitioned scheduling systems. This method is suitable for addressing the nonlinear and non-convex optimization problems but is limited to handling simple sets of periodic tasks with fixed execution times. In this experiment, a GP optimizer for an avionic system is implemented using the Python GP library GPkit 1.1.
(3)
Classic genetic algorithm: Two well-known genetic algorithms are utilized: the Basic Genetic Algorithm [31] and Breeder Genetic Algorithm [32], marked as “GA1” and “GA2”, respectively. Both of the algorithms use binary encoding for individuals. In GA1, the crossover probability for each bit is set at 0.5 and the mutation probability is 0.2. In GA2, the proportion of truncation selection T% is set to 50%, the weight factor of the extended intermediate recombination operator is 0.5, the standard deviation for the Gaussian mutation operator is 1000, and the mutation probability is 0.2.
(4)
Model-based method for ARINC 653 in this paper (MBM653): The individuals in a population are also binary encoded. The base of the exponential ranking selection is assigned at c = 0.8, and the factor for recombination is d = 0.5. The initial standard deviation of the mutation is defined as $\sigma_{2i-1} = \pi/32$ and $\sigma_{2i} = 1000$. The number of stable states $\vartheta$ is set to 50 in the Q-learning algorithm.
Table 3 presents the optimization results in experiment 1. The MBM653 method gives the same global optimal solution as the exhaustive search: $\mathbf{x}_{opt} = (1600, 341, 1600, 341)^T$ with the minimum processor occupancy $U_{min} = 45.1\%$. In contrast, the two classic genetic algorithms produce two local optimal solutions with corresponding processor occupancies of 59.5% and 47.5%. Compared to the optimization methods based on the formal models, the geometric programming method using the traditional analytical model results in a schedulable solution with a significantly higher processor occupancy of 67.8%.
Figure 10 illustrates the evolution of the minimum processor occupancy and cumulative processing time of the two classic genetic algorithms and the MBM653 method in experiment 1. GA1 prematurely converges to a local optimum after 43 generations. GA2 undergoes a tortuous evolution in which the processor occupancy first reaches 56.9% at the 30th generation and remains stable until the 160th generation. Subsequently, the mutation operation produces an occasional outstanding individual and reduces the processor occupancy sharply over the following 40 generations. However, GA2 finally converges to a local optimum at 47.5% after the 200th generation. These outstanding individuals between the 160th and 200th generations also change most individuals in the population and thus bring about a sharp increase in the processing time.
In contrast, the MBM653 method dynamically adjusts the configuration of its genetic operators, thereby acquiring a steadier decline in the processor occupancy and processing time. During the evolution, the MBM653 optimizer gradually focuses the search on smaller regions to find better solutions, finally converging to the globally optimal individual with a minimum processor occupancy of 45.1% in the 284th generation.

6.3. Experiment on a Concrete Avionic System

Experiment 2 focuses on the ARINC 653 quad-core partition scheduling of the concrete IMA task set outlined in Table 4. Given the non-deterministic execution time of the tasks and the much larger solution space, it is difficult for the traditional analytical models to provide precise representations, and also impractical for model checking to make an exhaustive search for the globally optimal solution. Thus, this experiment compares the optimization of the MBM653 method with the empirical scheduling detailed in [33]. The context-switching overhead between the partitions is set at 0.2 ms.
The detailed experimental configuration is as follows:
(1)
Empirical scheduling
All the partition periods are set to 25 ms, which corresponds to the shortest task period within the partitions and the greatest common divisor of most of the task periods. Each partition is subsequently allocated a 5 ms time slot within every partition period. A partition scheduling table is created based on the major time frame 25 ms, which is also the least common multiple of all the partition periods. Within a major time frame, five partition windows are arranged sequentially. In addition, a context-switching overhead of 0.2 ms is inserted into each partition time window, thus reducing the duration of each window to 4.8 ms.
(2)
MBM653 method
Given the large solution space and long processing times for each generation, the MBM653 method is configured with a population size of 256 and a maximum iteration count of 200 generations. We apply the following genetic operator configuration in experiment 2: the base of the exponential ranking selection operator is c = 0.8, the factor of the recombination operator is d = 0.5, and the standard deviation of the mutation is $\sigma_{2i-1} = \pi/32$ and $\sigma_{2i} = 100$.
Table 5 presents the optimization results of experiment 2. Due to the large solution space and high processor load, this IMA system posed more challenges in finding solutions that satisfy the system's schedulability constraints compared to experiment 1. The traditional empirical scheduling method fails to produce a schedulable solution. The counterexample generated by UPPAAL indicates that task $\tau_3^1$ missed its deadline at 50.2 ms. Although the 5 ms time window allocated to the partition $P_1$ is enough for running $\tau_3^1$ according to [33], the additional context-switching overhead of 0.2 ms causes $\tau_3^1$ to exceed its available partition budget. Moreover, this scheduling method occupies all the processor time, increasing the integration cost of the avionics workload.

6.4. Discussion

Based on the experimental results, it is evident that formal methods, which directly describe system behavior, offer greater expressiveness and accuracy than the traditional analytical models. Consequently, the optimization results of the formal methods (GA1, GA2, and MBM653) surpass those derived from the analytical methods (GP and Empirical Scheduling).
The GP method relies on a conservative approximation of the resource demand–supply relationship to formulate the schedulability constraints of the geometric programming model. This approach leads to more pessimistic optimization results.
The model-based method leverages the precise description of system models to obtain optimal solutions with a lower processor occupancy. However, the classic genetic operators used in GA1 and GA2 exhibit a low search efficiency, resulting in significant processor time wastage during optimization. Ultimately, GA1 succumbed to a premature convergence, while GA2 only reached local optima, requiring an exceptionally long processing time.
In contrast, MBM653 enhances genetic operators through Q-learning, allowing for the dynamic configuration of genetic operators during evolution. In the first experiment, MBM653 achieved the same globally optimal solutions as an exhaustive search but in a much shorter time. When applied to an IMA system with a larger solution space (5 partitions and 10 dimensions) and a higher task load (57.07%), empirical scheduling became infeasible; yet, the proposed MBM653 method successfully identified feasible partition scheduling tables with a lower processor occupancy. This demonstrates its superior applicability and optimization performance.

6.5. Scalability and Limitations

The proposed approach enhances scalability primarily via parallelization. In the context of ARINC 653 partition scheduling optimization, three key metrics are utilized to assess the computational scale: population size, the number of partitions within the system, and the number of tasks in each partition. These metrics correspond to the following aspects of scalability:
(1)
Parallelization and Scalability of the Genetic Algorithm: In the genetic algorithm, fitness evaluation is essential for assessing the quality of each individual in the population and represents the most computationally intensive step. For each generation, the algorithm evaluates each candidate individual (a partition scheduling table) independently. Given the independence of the fitness evaluations, they can be executed in parallel across different processors or computing nodes, significantly accelerating the processing for each generation. Furthermore, the genetic operators designed for this approach can also be parallelized for recombination, mutation, and selection. Thus, provided there are sufficient parallel computing resources, the processing time of our model-based optimization does not increase significantly with larger population sizes.
(2)
Parallelization and Scalability of Compositional Model Checking: The fitness evaluation for each individual requires a schedulability verification of the corresponding system model through model checking, which can be computationally intensive. Our approach employs a compositional method to verify the schedulability of each partition in parallel, merging all the local results to derive global conclusions regarding the schedulability and fitness evaluations. Consequently, the processing time remains manageable even as the number of partitions increases.
(3)
Scalability and Limitations of Symbolic Model Checking: The primary performance bottleneck of our approach arises in the symbolic model checking within a single partition, where the processing time grows exponentially with the number of concurrent components (i.e., tasks) in the partition. To illustrate this limitation, we conducted a scalability experiment using the same configuration as in experiment 2, handling a single partition comprising the first $j$ tasks from the open-source task dataset rand0000.stg in rnc50.tgz of STG [34], with $j$ incrementing by five. The results, presented in Figure 11, demonstrate that the processing time exhibits exponential growth as the number of tasks within the partition increases. Notably, when the number of tasks exceeds 30, ordinary personal computers struggle to manage the computational demands.

7. Conclusions

This paper presents a model-based method for ARINC 653 multicore partition scheduling, leveraging the timed automata in UPPAAL. We demonstrate that our method significantly improves the precision, efficiency, and scalability of partition scheduling in IMA multicore processing systems when compared with the established approaches. This method delivers more precise schedulability bounds than the traditional analytical methods by applying a model-based system description and verification. Our approach also outperforms the model-based methods based on classical random search algorithms. We integrate a Q-learning mechanism into the optimizer, thereby improving the adaptability and search efficiency of the parallel genetic algorithm and finally generating feasible partition scheduling tables with lower processor occupancies. Additionally, the method incorporates compositional verification to efficiently explore the scheduling parameter space, thereby enhancing the scalability of the model checking approaches. As future work, we plan to optimize the ARINC 653 partition scheduling system in terms of employing more metrics such as throughput and latency (in addition to the processor occupancy) in a multi-objective optimization framework.

Author Contributions

Conceptualization, P.H. and Z.Z.; methodology, P.H.; software, W.H.; validation, W.H.; formal analysis, P.H.; investigation, Z.Z.; resources, M.H.; data curation, W.H.; writing—original draft preparation, P.H.; writing—review and editing, Z.Z.; visualization, W.H.; supervision, P.H.; project administration, P.H.; funding acquisition, P.H. and M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Industrial Science and Technology Research Project of Henan Province under grant number 222102210024 and the Doctoral Fund Project of Zhengzhou University of Light Industry under grant number 2021BSJJ028.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank Brian Nielsen and Ulrik Nyman of Aalborg University for their generous technical support with the model checker UPPAAL.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Airlines Electronic Committee (AEEC). ARINC Specification 653P0-2 Avionics Application Software Standard Interface Part 0—Overview of ARINC 653; SAE Industry Technologies Consortia (SAE ITC): Bowie, MD, USA, 2019. [Google Scholar]
  2. Lukić, B.; Ahlbrecht, A.; Friedrich, S.; Durak, U. State-of-the-Art Technologies for Integrated Modular Avionics and the Way Ahead. In Proceedings of the 2023 IEEE/AIAA 42nd Digital Avionics Systems Conference (DASC), Barcelona, Spain, 1–10 October 2023; pp. 1–10. [Google Scholar] [CrossRef]
  3. Bieber, P.; Boniol, F.; Boyer, M.; Noulard, E.; Pagetti, C. New Challenges for Future Avionic Architectures. Aerosp. Lab 2012, 4, 1–10. [Google Scholar] [CrossRef]
  4. Wang, H.; Niu, W. A Review on Key Technologies of the Distributed Integrated Modular Avionics System. Int. J. Wirel. Inf. Netw. 2018, 25, 358–369. [Google Scholar] [CrossRef]
  5. Rockschies, M.; Thielecke, F. Avionics Platform Design Optimization Considering Multi-/Many-core Processors. In Proceedings of the 2023 IEEE/AIAA 42nd Digital Avionics Systems Conference (DASC), Barcelona, Spain, 1–10 October 2023; pp. 1–10. [Google Scholar] [CrossRef]
  6. Airlines Electronic Committee (AEEC). ARINC Specification 653P1-5 Avionics Application Software Standard Interface Part 1—Required Services; SAE Industry Technologies Consortia (SAE ITC): Bowie, MD, USA, 2019. [Google Scholar]
  7. Kim, J.E.; Abdelzaher, T.; Sha, L. Schedulability Bound for Integrated Modular Avionics Partitions. In Proceedings of the 2015 Design, Automation Test in Europe Conference Exhibition (DATE), Grenoble, France, 9–13 March 2015; pp. 37–42. [Google Scholar] [CrossRef]
  8. Annighöfer, B.; Kleemann, E. Large-Scale Model-Based Avionics Architecture Optimization Methods and Case Study. IEEE Trans. Aerosp. Electron. Syst. 2019, 55, 3424–3441. [Google Scholar] [CrossRef]
  9. Blikstad, M.; Karlsson, E.; Lööw, T.; Rönnberg, E. An Optimisation Approach for Pre-Runtime Scheduling of Tasks and Communication in an Integrated Modular Avionic System. Optim. Eng. 2018, 19, 977–1004. [Google Scholar] [CrossRef]
  10. Craciunas, S.S.; Oliver, R.S. Combined Task- and Network-level Scheduling for Distributed Time-triggered Systems. Real-Time Syst. 2016, 52, 161–200. [Google Scholar] [CrossRef]
  11. Chen, J.; Du, C.; Xie, F.; Yang, Z. Schedulability Analysis of Non-Preemptive Strictly Periodic Tasks in Multi-Core Real-Time Systems. Real-Time Syst. 2016, 52, 239–271. [Google Scholar] [CrossRef]
  12. Yoon, M.K.; Kim, J.E.; Bradford, R.; Sha, L. Holistic Design Parameter Optimization of Multiple Periodic Resources in Hierarchical Scheduling. In Proceedings of the 2013 Design, Automation Test in Europe Conference Exhibition (DATE), Grenoble, France, 18–22 March 2013; pp. 1313–1318. [Google Scholar] [CrossRef]
  13. Alur, R.; Dill, D.L. A Theory of Timed Automata. Theor. Comput. Sci. 1994, 126, 183–235. [Google Scholar] [CrossRef]
  14. UPPAAL Home. Available online: http://www.uppaal.org/ (accessed on 15 June 2024).
  15. Boudjadar, J.; Kim, J.H.; Larsen, K.; Nyman, U. Compositional Schedulability Analysis of An Avionics System Using UPPAAL. In Proceedings of the 1st International Conference on Advanced Aspects of Software Engineering, ICAASE 2014, Constantine, Algeria, 2–4 November 2014; Volume 1294. [Google Scholar]
  16. Kim, J.H.; Boudjadar, A.; Nyman, U.; Mikučionis, M.; Larsen, K.G.; Skou, A.; Lee, I.; Thi Xuan Phan, L. Quantitative Schedulability Analysis of Continuous Probability Tasks in a Hierarchical Context. In Proceedings of the 18th International ACM SIGSOFT Symposium on Component-Based Software Engineering, CBSE ’15, Montreal, QC, Canada, 4–8 May 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 91–100. [Google Scholar] [CrossRef]
  17. Boudjadar, A.; David, A.; Kim, J.H.; Larsen, K.G.; Mikučionis, M.; Nyman, U.; Skou, A. Statistical and Exact Schedulability Analysis of Hierarchical Scheduling Systems. Sci. Comput. Program. 2016, 127, 103–130. [Google Scholar] [CrossRef]
  18. Kim, J.H.; Legay, A.; Traonouez, L.-M.; Boudjadar, A.; Nyman, U.; Larsen, K.G.; Lee, I.; Choi, J.-Y. Optimizing the Resource Requirements of Hierarchical Scheduling Systems. SIGBED Rev. 2016, 13, 41–48. [Google Scholar] [CrossRef]
  19. Ahn, S.J.; Hwang, D.Y.; Kang, M.; Choi, J.-Y. Hierarchical System Schedulability Analysis Framework Using UPPAAL. IEICE Trans. Inf. Syst. 2016, E99.D, 2172–2176. [Google Scholar] [CrossRef]
  20. Han, P.; Zhai, Z.; Nielsen, B.; Nyman, U. A Modeling Framework for Schedulability Analysis of Distributed Avionics Systems. In Proceedings of the Electronic Proceedings in Theoretical Computer Science (EPTCS), Thessaloniki, Greece, 20 April 2018; Volume 268, pp. 150–168. [Google Scholar] [CrossRef]
  21. Singh, A.; D’Souza, M.; Ebrahim, A. Conformance Testing of ARINC 653 Compliance for a Safety Critical RTOS Using UPPAAL Model Checker. In Proceedings of the 36th Annual ACM Symposium on Applied Computing; SAC ’21, Virtual Event, 22–26 March 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 1807–1814. [Google Scholar] [CrossRef]
  22. Han, P.; Zhai, Z.; Nielsen, B.; Nyman, U. Model-based optimization of ARINC-653 partition scheduling. Int. J. Softw. Tools Technol. Transfer. 2021, 23, 721–740. [Google Scholar] [CrossRef]
  23. Radio Technical Commission for Aeronautics (RTCA). RTCA DO-297: Integrated Modular Avionics (IMA) Development Guidance and Certification Considerations; RTCA: Washington, DC, USA, 2005. [Google Scholar]
  24. VanderLeest, S.H.; Matthews, D.C. Incremental Assurance of Multicore Integrated Modular Avionics (IMA). In Proceedings of the 2021 IEEE/AIAA 40th Digital Avionics Systems Conference (DASC), San Antonio, TX, USA, 3–7 October 2021; pp. 1–9. [Google Scholar] [CrossRef]
  25. Shin, I.; Lee, I. Compositional real-time scheduling framework with periodic model. ACM Trans. Embed. Comput. Syst. 2008, 7, 1–39. [Google Scholar] [CrossRef]
  26. Hughes, W.J. Assurance of Multicore Processors in Airborne Systems; DOT/FAA/TC-16/51; Federal Aviation Administration (FAA). 2017; p. 121. Available online: https://www.faa.gov/sites/faa.gov/files/aircraft/air_cert/design_approvals/air_software/TC-16-51.pdf (accessed on 10 August 2024).
  27. Beyer, H.-G. An Alternative Explanation for the Manner in Which Genetic Algorithms Operate. Biosystems 1997, 41, 1–15. [Google Scholar] [CrossRef] [PubMed]
  28. Sakurai, Y.; Takada, K.; Kawabe, T.; Tsuruta, S. A Method to Control Parameters of Evolutionary Algorithms by Using Reinforcement Learning. In Proceedings of the 2010 Sixth International Conference on Signal-Image Technology and Internet Based Systems, Kuala Lumpur, Malaysia, 15–18 December 2010; pp. 74–79. [Google Scholar] [CrossRef]
  29. Easwaran, A.; Lee, I.; Sokolsky, O.; Vestal, S. A Compositional Framework for Avionics (ARINC-653) Systems (2009). Technical Reports (CIS). Paper 898. Available online: http://repository.upenn.edu/cis_reports/898 (accessed on 10 August 2024).
  30. Davis, R.; Burns, A. An Investigation into Server Parameter Selection for Hierarchical Fixed Priority Pre-Emptive Systems. In Proceedings of the 16th International Conference on Real-Time and Network Systems (RTNS 2008), Rennes, France, 16–17 October 2008. [Google Scholar]
  31. Bäck, T.; Fogel, D.B.; Michalewicz, Z. Evolutionary Computation 1—Basic Algorithms and Operators; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar] [CrossRef]
  32. Mühlenbein, H.; Schlierkamp-Voosen, D. Predictive Models for the Breeder Genetic Algorithm I. Continuous Parameter Optimization. Evol. Comput. 1993, 1, 25–49. [Google Scholar] [CrossRef]
  33. Carnevali, L.; Pinzuti, A.; Vicario, E. Compositional Verification for Hierarchical Scheduling of Real-Time Systems. IEEE Trans. Softw. Eng. 2013, 39, 638–657. [Google Scholar] [CrossRef]
  34. Standard Task Graph Set. Available online: https://www.kasahara.cs.waseda.ac.jp/schedule/ (accessed on 10 August 2024).
Figure 1. The partition scheduling framework of an ARINC 653 multicore processing system.
Figure 2. An example of ARINC 653 multicore partition scheduling.
Figure 3. The framework of the model-based optimizer.
Figure 4. An example of genetic reproduction (x, y: selected parents; z: recombined offspring of x and y; z′: mutated offspring of z).
Figure 5. Data flow of self-adaptive configuration based on Q-learning.
Figure 6. Template of partition scheduler describing the entering and exiting actions of a partition.
Figure 7. Template of task scheduler that manages the tasks within a partition.
Figure 8. Task template that models the temporal behavior of a task within a partition.
Figure 9. Hierarchical structure of the optimizer.
Figure 10. Evolution of the populations in experiment 1.
Figure 11. Processing times in the scalability experiment.
Table 1. The configuration of the cluster in the experiments.
Configuration | Hardware/Software Information
Nodes | 4 workstations
Processors | AMD Ryzen Threadripper 7970X 4.0 GHz × 4
Number of cores | 32 cores/processor
Memory | 512 GB/node
Disk size | 16 TB
Interconnect | 40 Gbps QDR InfiniBand
OS | Ubuntu Server 22.04.4
UPPAAL | 64-bit Linux version 4.1.25
Python | Python 3.12.3
Libraries | OpenMPI 4.1.6, mpi4py 3.1.6
Table 2. Task sets in experiment 1.
Partition | Task | Period | WCET | Deadline | Priority | Core
P1 | τ_1^1 | 1600 | 80 | 1000 | 4 | 0
P1 | τ_2^1 | 2400 | 120 | 2000 | 3 | 0
P1 | τ_3^1 | 3200 | 160 | 3000 | 2 | 0
P1 | τ_4^1 | 4800 | 240 | 4000 | 1 | 0
P2 | τ_1^2 | 1600 | 80 | 1000 | 4 | 1
P2 | τ_2^2 | 2400 | 120 | 2000 | 3 | 1
P2 | τ_3^2 | 3200 | 160 | 3000 | 2 | 1
P2 | τ_4^2 | 4800 | 240 | 4000 | 1 | 1
Table 3. Scheduling results in experiment 1.
Method | Demand Vector | Processor Occupancy | Optimal
ES | (1600, 341, 1600, 341) | 45.1% | Yes
GP | (483, 137, 488, 152) | 67.8% | No
GA 1 | (630, 170, 1260, 350) | 59.5% | No
GA 2 | (1200, 280, 800, 160) | 47.5% | No
MBM653 | (1600, 341, 1600, 341) | 45.1% | Yes
Table 4. Task set in experiment 2.
Partition | Task | Period | Initial Offset | BCET | WCET | Deadline | Priority | Core
P1 | τ_1^1 | 25 | 2 | 0.9 | 1.5 | 25 | 5 | 0
P1 | τ_2^1 | 50 | 3 | 0.2 | 0.4 | 50 | 4 | 0
P1 | τ_3^1 | 50 | 3 | 2.7 | 4.2 | 50 | 3 | 0
P1 | τ_4^1 | 50 | 0 | 0.1 | 0.2 | 50 | 2 | 0
P1 | τ_5^1 | 120 | 0 | 0.7 | 1.1 | 120 | 1 | 0
P2 | τ_1^2 | 50 | 0 | 1.9 | 3.0 | 50 | 5 | 1
P2 | τ_2^2 | 50 | 2 | 0.7 | 1.1 | 50 | 4 | 1
P2 | τ_3^2 | 100 | 0 | 0.1 | 0.2 | 100 | 3 | 1
P2 | τ_4^2 | 100 | 10 | 1.0 | 1.6 | 100 | 2 | 1
P3 | τ_1^3 | 25 | 0 | 0.5 | 0.8 | 25 | 5 | 2
P3 | τ_2^3 | 50 | 0 | 0.7 | 1.1 | 50 | 4 | 2
P3 | τ_3^3 | 50 | 0 | 1.0 | 1.6 | 50 | 3 | 2
P3 | τ_4^3 | 100 | 11 | 0.8 | 1.3 | 100 | 2 | 2
P4 | τ_1^4 | 25 | 3 | 0.7 | 1.2 | 25 | 5 | 3
P4 | τ_2^4 | 50 | 5 | 1.2 | 1.9 | 50 | 4 | 3
P4 | τ_3^4 | 50 | 25 | 0.1 | 0.2 | 50 | 3 | 3
P4 | τ_4^4 | 100 | 11 | 0.7 | 1.1 | 100 | 2 | 3
P4 | τ_5^4 | 200 | 13 | 3.7 | 5.8 | 200 | 1 | 3
P5 | τ_1^5 | 50 | 0 | 0.7 | 1.1 | 50 | 6 | 0
P5 | τ_2^5 | 50 | 2 | 1.2 | 1.9 | 50 | 5 | 0
P5 | τ_3^5 | 200 | 0 | 0.6 | 0.9 | 200 | 4 | 0
P5 | τ_4^5 | 200 | 14 | 1.5 | 2.4 | 200 | 3 | 0
Table 5. Scheduling results in experiment 2.
Method | Demand Vector | Processor Occupancy
Empirical | (25, 4.8, 25, 4.8, 25, 4.8, 25, 4.8, 25, 4.8) | 100%
MBM653 | (25, 4.9, 25, 4.7, 25, 3.4, 25, 4.5, 50, 4.5) | 83%
