Multi-Core Time-Triggered OCBP-Based Scheduling for Mixed Criticality Periodic Task Systems

Mixed criticality systems are one of the relatively new directions of development for classical real-time systems. As real-time embedded systems become more complex, incorporating tasks with different criticality levels, the continuous development of mixed criticality systems is only natural. These systems have entered practically every field where embedded systems are present: avionics, automotive, medical systems, wearable devices, home automation, industry and even the Internet of Things. While scheduling techniques have already been proposed in the literature for different types of mixed criticality systems, the number of papers addressing multiprocessor platforms running in a time-triggered mixed criticality environment is relatively low. Time-triggered algorithms are easier to certify due to their complete determinism and the isolation between components of different criticalities. Our research centers on the problem of real-time scheduling on multiprocessor platforms for periodic tasks in a time-triggered mixed criticality environment. We propose a partitioned, non-preemptive, table-driven scheduling algorithm, called Partitioned Time-Triggered Own Criticality Based Priority, based on a uniprocessor mixed criticality method. Furthermore, we analyze the scheduling algorithm in terms of success ratio by comparing it against an event-driven and a time-triggered method.


Introduction
Embedded real-time systems are becoming more present in our everyday life, from fields such as automotive, avionics, military and industrial control systems to medical equipment and even domestic applications and Internet of Things. A new trend in the design of real-time and embedded systems is the integration of components with different criticality levels into the same hardware platform. Mixed criticality systems (MCSs) are "embedded computing platforms in which application functions of different criticality share computation and/or communication resources" [1]. Additionally, these platforms are migrating from single cores to multi-cores due to an increase in application complexity and strict requirements such as cost, space, weight, power consumption and so on.
While multiple scheduling techniques have already been proposed in the literature for different types of mixed criticality systems, the number of papers addressing multiprocessor platforms running in a time-triggered mixed criticality environment is relatively low compared to event-driven approaches [2,3].
In a time-triggered environment, activities in the system are triggered by the progression of time [4]. The scheduling decisions made at each time instant follow a precomputed schedule stored in a scheduling table. Such scheduling tables offer simplicity, isolation between components of different criticalities and determinism, and are easy to verify. The main contributions of this paper are the following:

• The extension of a mixed criticality uniprocessor table-driven scheduling algorithm to a mixed criticality algorithm for periodic tasks on a multiprocessor platform (Sections 4.1, 4.2 and 4.6). The original method has been modified to employ a periodic mixed criticality job model (Sections 4.4 and 4.5).
• The proposal of a task partitioning heuristic for the multiprocessor mixed criticality system (Section 4.3).
• The comparison of the newly developed algorithm in terms of success ratio with two state-of-the-art methods (Section 5).

This current paper aims to demonstrate the efficiency of the algorithm in [6] through:
• More experiments and comparisons.
• Additional details about the algorithm implementation.
The remainder of this paper is structured as follows: Section 2 covers related work regarding event-driven and time-triggered scheduling algorithms in mixed criticality systems. Section 3 addresses the scheduling problems of time-triggered real-time mixed criticality systems. In Section 4, the Partitioned Time-Triggered Own Criticality Based Priority (P-TT-OCBP) multiprocessor scheduling algorithm is introduced and explained, while in Section 5 we analyze the performance of our method by comparing it against an event-driven scheduling method (P-EDF-VD) and a time-triggered algorithm (P_FENP_MC) in terms of success ratio. Finally, Section 6 summarizes the conclusions.

Related Work
Since Vestal's initial work [7], a number of studies on mixed criticality scheduling have been introduced. Algorithms in mixed criticality systems (MCSs) can be classified based on their scheduling points (i.e., the moments in time when scheduling decisions occur) into two main categories: event-driven and time-triggered.

Event-Driven Scheduling Algorithms
Research in real-time scheduling for MCSs has been centered around event-driven approaches. In event-driven scheduling, the scheduling points are defined by task completion and task arrival events [8]. Some examples of event-driven schedulers are [9][10][11][12][13].
A well-known event-driven scheduling algorithm in MCSs is Earliest Deadline First with Virtual Deadlines (EDF-VD) [9] for two criticality levels (Hi (high) criticality and Lo (low) criticality). Under EDF-VD, if the system is in Lo mode, each high criticality task is assigned a virtual deadline, which is earlier than its actual deadline. If the system is in Hi mode, high criticality tasks are scheduled according to their real deadlines.
By extending EDF-VD to support adaptive task dropping under task-level mode switch, two uniprocessor algorithms were introduced in [10], namely EDF with Adaptive task Dropping (EDF-AD) and EDF-AD-E (Enhanced). For multiprocessor platforms, a method is described in [11] based on setting virtual deadlines from any feasible fluid rates, while in [12], a fluid-based algorithm was implemented, which allows tasks to execute on the same processor simultaneously. In [13], a semi-partitioned mixed criticality method is presented, which allows low criticality tasks to migrate from one processor to another once a mode switch occurs, in order to improve the service of low criticality tasks in the high criticality mode.

Time-Triggered Scheduling Algorithms
Despite the popularity of event-driven algorithms, current practice in many safety critical application domains favors time-triggered (TT) methods due to their complete determinism, which facilitates certification. In the TT paradigm, scheduling decisions are made at predetermined points in time [4]. Thus, a schedule is computed prior to runtime for the entire execution of the system and is represented in a scheduling table. Each scheduling decision made during run-time is determined by examining this scheduling table. To the best of our knowledge, few papers have addressed time-triggered scheduling in MCSs [4,[14][15][16][17].
A real-time, non-preemptive, table-driven scheduling algorithm for MCSs is proposed in [14]. It was implemented in two variants, Fixed Execution Non-Preemptive Mixed Criticality (FENP_MC) and its partitioned counterpart P_FENP_MC, to meet the demands of uniprocessor and homogeneous multiprocessor settings, respectively. Its main advantage is that it guarantees a perfectly periodical (i.e., jitter-less) task execution in time-triggered mixed criticality environments, but its main disadvantage is a relatively low success ratio when the processor utilization of the task set to be scheduled is high.
The time-triggered algorithm presented in [15] is specifically designed for uniprocessor platforms and applies dynamic voltage and frequency scaling to reduce the energy consumption. This paper proposes the first energy-efficient time-triggered algorithm for MCSs. The schedule constructed by energy-efficient TT-Merge outperforms energy-efficient EDF-VD [18]. However, the algorithm uses continuous frequency levels; therefore, it might not be optimal with respect to energy consumption for discrete frequency levels, which are more common in practice. Another noteworthy algorithm, but this time explicitly developed for identical multiprocessor platforms executing mixed criticality tasks, is reported in [16]. This algorithm performs better than previous time-triggered, multiprocessor methods [17] in terms of scheduling overhead.
Baruah et al. [5] offer a method for building scheduling tables that allocates job priorities according to the Own Criticality Based Priority (OCBP) algorithm [19], yielding a correct, priority-driven scheduling strategy. Our algorithm extends this table-building approach to the specific case of periodic tasks by treating the periodic task set as a collection of independent jobs, obtained by explicitly enumerating all the jobs in the system. This paper also provides a partitioning heuristic for multiprocessor platforms. To our knowledge, very few time-triggered algorithms for multiprocessor platforms exist in the literature.
Compared to the event-driven multiprocessor algorithms described previously [11][12][13], our time-triggered algorithm offers complete determinism and isolation between components of different criticalities. This facilitates the certification of high criticality functionalities under very conservative assumptions. In general, isolation between components of different criticalities can cause very low resource utilization, because platform resources are reserved for the exclusive use of high criticality functionalities in order to meet certification requirements under pessimistic assumptions and, due to isolation, cannot be reclaimed by less critical applications. However, the uniprocessor time-triggered algorithm which our paper extends allows high utilization of platform resources under less pessimistic assumptions. We adapted the algorithm for periodic tasks because they are independent, run cyclically and their characteristics are known in advance.
In the following sections, the proposed scheduling algorithm is described, analyzed and compared in terms of success ratio to two multiprocessor methods from the literature, an event-driven and a time-triggered technique.

Model and Problem Statement
The problem which we address in this paper is to implement a multiprocessor mixed criticality scheduling algorithm by adapting a classical algorithm [5] to a periodic task execution model and also to extend it from a uniprocessor system to a multiprocessor one [20].
In this section, we formally define the mixed criticality job model used. For a dual criticality system, we use a task model with the following properties, based on the standard MCSs model [7,21] and an extension for periodic tasks [14]:
• An MCS executes in either of two modes: Hi-criticality mode or Lo-criticality mode.
• Each mixed criticality task τ_i is characterized by a set of parameters [7,14]:

τ_i = (T_i, D_i, L_i, C_i,Lj, S_i,Lj), (1)

where T_i, D_i and L_i denote, respectively, the period, the deadline and the criticality level (i.e., Lo or Hi) of task i; C_i,Lj is a vector containing the worst-case execution times (WCETs) for each criticality level; and S_i,Lj is a vector whose elements represent the execution start time, relative to the release time, for each criticality level lower than or equal to the task criticality level L_i, with S_i,Lj < D_i.
• A task consists of a series of jobs that inherit some of the parameters of the task (T_i, D_i, L_i). Furthermore, each job adds its own parameters, so the k-th job of task i is characterized by the following:

J_i,k = (a_i,k, d_i,k, c_i,k, s_i,k), (2)

where:
• a_i,k represents the arrival time of job k, with a_i,k+1 − a_i,k ≥ T_i.
• d_i,k is the absolute deadline of job k, obtained as d_i,k = a_i,k + D_i.
• c_i,k expresses the execution time and depends on the criticality mode of the system (e.g., for L = Lo, c_i,k = C_i,Lo).
• s_i,k is the absolute execution start time of job k and, like c_i,k, depends on the criticality mode of the system.
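The model above can be captured in a small data structure; the following Python sketch is our illustration (the field names are ours, not the paper's notation):

```python
from dataclasses import dataclass

LO, HI = 0, 1  # the two criticality levels of the dual criticality model

@dataclass
class Task:
    period: int        # T_i
    deadline: int      # D_i (relative deadline)
    crit: int          # L_i: LO or HI
    wcet: tuple        # C_i,Lj: WCET per level, with wcet[LO] <= wcet[HI]

@dataclass
class Job:
    task: Task
    k: int             # index of the job within its task
    arrival: int       # a_i,k
    deadline_abs: int  # d_i,k = a_i,k + D_i

def make_job(task, k, arrival):
    # the absolute deadline follows directly from the relative one
    return Job(task=task, k=k, arrival=arrival, deadline_abs=arrival + task.deadline)
```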

Algorithm P-TT-OCBP
This section describes the mapping heuristic used for partitioning tasks to processors and the non-preemptive scheduling algorithm implemented at the processor level. As mentioned before, the algorithm is an extension of the method described by Baruah et al. in [5].

Original Algorithm
The original algorithm [5] uses a sufficient MC-schedulability test, namely Own Criticality Based Priority (OCBP) [19], to find a complete ordering of the jobs. The priority assignment list is constructed offline (Algorithm 1).

The job with the lowest priority is determined first: the lowest priority may be assigned to a job J_k if at least c_k,Lj units of time are available between its arrival time and its absolute deadline when every other job J_x is executed before J_k for c_x,Lj units of time. OCBP assumes that every job other than J_k has priority over J_k and ignores whether these jobs meet their deadlines. The procedure is applied repeatedly to the remaining set of jobs (excluding the lowest priority job found so far), until all the jobs are ordered or, at some iteration, no lowest priority job exists [22,23].
In [22], the OCBP method was compared, in terms of processor speedup factor, to two techniques used for resource allocation and scheduling in MCSs and it was concluded that the OCBP-schedulability test has better performance.

Algorithm 1: Own Criticality Based Priority.
Input: Δ_p (the job list for processor p)
Output: Y_p (the priority list for processor p)
sort Δ_p in non-decreasing order by deadline

Working Hypothesis
For the proposed scheduling algorithm, we consider a homogeneous multi-core platform (where each core is treated as a processor), running a non-preemptive, dual criticality system (i.e., a mixed criticality system with two criticality levels: low and high) of periodic tasks.
• A dual criticality system executes in one of two modes: Lo-criticality mode and Hi-criticality mode.
• Each job is characterized by the set of parameters described in (2), with C(Lo) ≤ C(Hi).
• The system starts in Lo-criticality mode and stays there as long as jobs execute within their Lo-criticality WCETs.
• If any job overruns its Lo-criticality WCET, a criticality mode change occurs. As the system instantly moves to Hi-criticality mode, all Lo-criticality jobs are dropped (they are no longer executed), while Hi-criticality jobs are allowed to run according to their Hi-criticality WCETs.
• The system then remains in Hi-criticality mode; in this paper, we only consider the mode change from Lo-criticality to Hi-criticality.

Partitioning Tasks to Processors
As the demand for increased performance and general-purpose programmability grows, general-purpose multi-core processors are being adopted in all segments of the industry. By adding more cores while preserving reasonable power characteristics, parallel processing improves performance [24]. Thus, our algorithm was developed for mixed criticality multiprocessor platforms.
The task mapping algorithm that we are using is based on a well-known task partitioning heuristic from the literature, namely first fit decreasing (FFD) [25].
Tasks are sorted in non-decreasing order of their periods and selected one by one from the task set; for each candidate processor, the current processor utilization (in both Lo-criticality mode and Hi-criticality mode), i.e., the sum of the utilizations of all tasks already assigned to that processor, must not exceed 1 [26].
The task partitioning method is described below:
• The utilization of each task is computed based on the criticality level (3): for Hi-criticality tasks there will be two utilizations (one for each criticality level).
• Tasks are selected one by one from the task set and added to each processor, where a test is performed. Two conditions must be verified (4):
(1) The current total processor utilization in Lo-criticality mode, U_Pq(Lo), must not exceed 1.
(2) The current total processor utilization in Hi-criticality mode, U_Pq(Hi), must not exceed 1.
• If the above two conditions are met, the task is assigned to P_q and the total processor utilizations are updated.
• If one of the two conditions fails, the task is removed from P_q and added to the next processor, where the same test is performed.
These steps are repeated until all the tasks are partitioned among the processors.
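The partitioning steps above can be sketched as follows (our illustration; tasks are reduced to their per-mode utilization pairs, and the input is assumed to be pre-sorted by period):

```python
def partition_ffd(tasks, num_procs):
    """First-fit task-to-processor mapping with the dual utilization test.

    tasks: list of (u_lo, u_hi) utilization pairs (a LO task has u_hi == u_lo).
    Returns a list of task-index lists per processor, or None on failure.
    """
    u_lo = [0.0] * num_procs
    u_hi = [0.0] * num_procs
    mapping = [[] for _ in range(num_procs)]
    for idx, (ulo, uhi) in enumerate(tasks):
        for q in range(num_procs):
            # both conditions: total LO and HI utilization must not exceed 1
            if u_lo[q] + ulo <= 1.0 and u_hi[q] + uhi <= 1.0:
                u_lo[q] += ulo
                u_hi[q] += uhi
                mapping[q].append(idx)
                break
        else:
            return None  # the task fits on no processor -> partitioning fails
    return mapping
```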

Constructing the List of Jobs at the Processor Level
The periodic tasks on each processor are represented as a collection of independent jobs, obtained by explicitly enumerating all the jobs over the hyperperiod interval.
Each job inherits a set of parameters from its task (T_i, D_i, L_i, C_i), to which we add the arrival time and the absolute deadline of the job according to (5) and (6):

a_i,k = (k − 1) · T_i, (5)
d_i,k = a_i,k + D_i. (6)
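The job enumeration over the hyperperiod can be sketched as follows (our illustration; we assume synchronous release, so the k-th job of task i arrives at (k − 1)·T_i):

```python
from math import lcm

def enumerate_jobs(tasks):
    """tasks: list of (T, D) pairs. Returns one tuple
    (task_idx, k, arrival, abs_deadline) for every job released in [0, H),
    where H is the hyperperiod (least common multiple of the periods)."""
    H = lcm(*(T for T, _ in tasks))
    jobs = []
    for i, (T, D) in enumerate(tasks):
        for k in range(1, H // T + 1):
            a = (k - 1) * T                 # arrival of the k-th job
            jobs.append((i, k, a, a + D))   # absolute deadline = a + D
    return jobs
```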

Scheduling at the Processor Level
The priority list is constructed using the Own Criticality Based Priority (OCBP) algorithm [5], where priorities are assigned to jobs based on the following criteria:
• The job list to be prioritized is parsed in non-decreasing order of deadlines d_i,k.
• The criticality level of the first job k in the list is verified:
• If the criticality level is Lo, we compute the sum of the Lo-criticality WCETs (sum(Lo)) of the remaining jobs.
• If the criticality level is Hi, we compute two sums for the remaining jobs: one of the Lo-criticality WCETs (sum(Lo)) and one of the Hi-criticality WCETs (sum(Hi)).
• Next, the algorithm checks whether job k can be added to the priority list, depending on its criticality level. For a Lo-criticality job, sum(Lo) plus the job's own Lo-criticality WCET must fit before its deadline; for a Hi-criticality job, two conditions must be met: the analogous check must hold, with the job's Hi-criticality WCET, for both sum(Lo) and sum(Hi).
• If these conditions are met, job k is moved from the list of jobs to the priority list. Otherwise, the next job k + 1 in the list is examined, until the entire list of jobs is verified.
• If jobs are still left after the list of jobs is parsed once, the same procedure is repeated until no more jobs remain. If at least two jobs remain in the list of jobs which cannot be prioritized, the task set is deemed not schedulable.
• The resulting priority list is sorted in non-decreasing order of deadlines.

The schedule is constructed based on the priority list as follows:
• The first job is extracted from the priority list, with s_i,k = 0.
• We then compute the completion time (ct_i,k) of the job:

ct_i,k = s_i,k + c_i,k. (9)

• For the next k − 1 jobs, we compare the arrival time with the previous job's completion time: if the completion time is greater than the arrival time, the start time takes the value of the previous job's completion time; otherwise, the start time equals the current job's arrival time. The completion time is computed using Equation (9).
Our scheduler creates, in an offline phase, two dispatch tables for each processor (one for the Lo-criticality mode and one for the Hi-criticality mode), called scheduling tables.
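The start-time assignment for the table can be sketched as follows (our illustration; jobs arrive already ordered by the priority list, and the completion time follows Equation (9)):

```python
def build_table(priority_list):
    """priority_list: list of (arrival, wcet) pairs in non-decreasing deadline
    order. Returns (start, completion) per job: a job starts at its arrival or
    at the previous job's completion, whichever is later (non-preemptive)."""
    table = []
    prev_ct = 0
    for a, c in priority_list:
        s = max(a, prev_ct)   # start time of the job
        ct = s + c            # completion time, Equation (9): ct = s + c
        table.append((s, ct))
        prev_ct = ct
    return table
```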
The scheduling table for processor q is represented as an array of structures (TaskID, JobID, StartTime), sorted in non-decreasing order of the job start times on that processor.
Next, we present an example to illustrate the construction of our scheduling tables for a single processor platform. Let us consider the task set presented in Table 1. Tables 2 and 3 illustrate the Lo-criticality mode and Hi-criticality mode scheduling tables for this task set.

Table 2. Lo-criticality mode scheduling table for the task set in Table 1.

System Execution
The system execution flowchart is illustrated in Figure 1. Each task in the task set is partitioned on processors using the first fit decreasing (FFD) algorithm [25]. At the processor level, two job lists are created (a list for all the jobs on the processor and a list containing only the jobs of Hi-criticality tasks) by explicitly enumerating the jobs of the tasks assigned to the processor. Then, the priority list construction is verified by using the Own Criticality Based Priority (OCBP) method for the two modes of the system. If the priority list creation fails, the task set is unschedulable. Otherwise, two scheduling tables are constructed, one for each criticality mode of the system.
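At run-time, the dispatcher only reads the precomputed tables and, at most once, switches from the Lo table to the Hi table; a minimal sketch (the table entry format and the overrun signal are our assumptions, not the paper's implementation):

```python
LO, HI = 0, 1

class Dispatcher:
    """Selects jobs from the per-mode scheduling tables; switches to the
    Hi table (and stays there) on the first Lo-budget overrun."""

    def __init__(self, table_lo, table_hi):
        # each table: list of (start_time, job_id), sorted by start_time
        self.tables = {LO: table_lo, HI: table_hi}
        self.mode = LO

    def next_job(self, now):
        # first entry in the current mode's table with start time >= now
        for start, job_id in self.tables[self.mode]:
            if start >= now:
                return job_id
        return None

    def signal_overrun(self):
        # a job exceeded its Lo WCET: permanent switch to Hi-criticality mode
        self.mode = HI
```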


Evaluation
In this section, we undertake an experimental comparison between P-TT-OCBP and two other multiprocessor scheduling methods: an event-driven, non-preemptive algorithm which uses the FFD [25] heuristic for partitioning tasks to processors, namely Partitioned Earliest Deadline First with Virtual Deadlines (P-EDF-VD) [27], and a table-driven, non-preemptive, perfectly periodical scheduling method, called Partitioned Fixed Execution Non-Preemptive Mixed Criticality (P_FENP_MC) [14]. All task sets were randomly generated in Matlab (R2018b), and the simulation environment for multiprocessor mixed criticality systems was developed in C++.


Evaluation
In this section we will undertake an experimental comparison between P-TT-OCBP and two other multiprocessor scheduling methods: an event-driven, non-preemptive algorithm which uses the FFD [25] heuristic for task partitioning to processors, namely Partitioned Earliest Deadline First with Virtual Deadlines (P-EDF-VD) [27] and a table-driven, nonpreemptive, perfectly periodical scheduling method, called Partitioned Fixed Execution Non-Preemptive Mixed Criticality (P_FENP_MC) [14]. All tasks were randomly generated in Matlab (R2018b) and the simulation environment was developed for multiprocessor mixed criticality systems, using C++.

Task Set Generation
In our experiments, we employed randomly generated task sets for a dual criticality platform (Lo, Hi), generated using a variant [28] of the workload-generation algorithm provided by Guan et al. [29]. The methodology is similar to the one used in [14] and generates the parameters of each new task τ_i as follows:
• Period: T_i is drawn using a uniform distribution on [10, 50].
• Criticality level: L_i = Hi with a given probability P_Hi; otherwise, L_i = Lo.
• Utilization: U_i,Lj (see Equation (3)) is a vector of size l, where l is the number of criticality levels. The utilizations are generated using five input parameters [28]; among them, P_Hi is the probability that a task belongs to Hi(τ), the subset of the entire task set τ containing only the Hi-criticality tasks, and [Z_L, Z_U] bounds the ratio between the Hi-criticality and Lo-criticality utilization of a task, where 0 ≤ Z_L ≤ Z_U.
• WCET: (a) for the Lo-criticality level: C_i,Lo = U_i,Lo · T_i; and (b) for the Hi-criticality level: C_i,Hi = U_i,Hi · T_i.
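The generation procedure can be sketched as follows; this illustration compresses the five input parameters of [28] into three (P_Hi, a Lo-utilization range, and the [Z_L, Z_U] ratio range), so it is a simplification, not the exact generator:

```python
import random

LO, HI = 0, 1

def gen_task(p_hi=0.5, u_lo_range=(0.05, 0.25), z_range=(1.0, 4.0)):
    """Generate one task: period uniform on [10, 50], criticality Hi with
    probability p_hi, per-mode utilizations, and WCETs via C = U * T."""
    T = random.randint(10, 50)                 # period T_i
    u_lo = random.uniform(*u_lo_range)         # Lo-criticality utilization
    if random.random() < p_hi:                 # Hi task with probability P_Hi
        z = random.uniform(*z_range)           # ratio U_hi / U_lo, Z_L <= z <= Z_U
        crit, u_hi = HI, z * u_lo
    else:
        crit, u_hi = LO, u_lo                  # Lo task: both levels coincide
    return {"T": T, "crit": crit, "C": (u_lo * T, u_hi * T)}
```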

Execution Example and Comparison
An example task set is provided in Table 4 in order to illustrate the execution of our scheduling algorithm on a platform with two processors. For comparison, the same task set is scheduled in Figure 2 using P-EDF-VD and P_FENP_MC. The scheduling for both the Lo-criticality and Hi-criticality modes is illustrated in Table 4.

Success Ratio
In this section, we undertake an experimental evaluation, on a dual criticality multiprocessor platform, of our algorithm P-TT-OCBP against two known scheduling methods in a non-preemptive context: P-EDF-VD and P_FENP_MC.
Each data point in the graphs is determined by randomly generating 1000 task sets.
Figure 3 contains four graphs, for 2, 4, 8 and 12 processors. For each graph, the task set utilization bound on the x-axis ranges from 0.2 to 0.8 times the number of processors divided by 2, in steps of 0.1. The results of our experimental evaluation show that our algorithm achieves a higher success ratio than P-EDF-VD and P_FENP_MC.

For Figure 4, the number of processors on the x-axis ranges from 2 to 12, in steps of 2. It must be noted that, between the four graphs, the base utilization bound (U_bound = BU_bound × (number of processors/2)) ranges from 0.2 to 0.8, in steps of 0.2. The number of tasks in a task set varies according to the task set utilization bound; therefore, a lower value on the x-axis decreases the number of tasks in a task set, while a higher value increases it.
An apparently superior performance of P_FENP_MC for the first two graphs can be attributed to the particular implementation of the algorithm, which includes the mapping test executed when partitioning tasks to processors.
In Figure 5, the task set base utilization bound (BU_bound) ranges on the x-axis from 0.2 to 0.8, in steps of 0.1. It can be seen that the performance of the algorithm decreases as the number of processors increases. This is because the FFD heuristic, presented in Section 4.3, allocates tasks to a processor as long as the total utilization of the processor is at most 1. Since the utilization bound is higher when the number of processors increases (U_bound = BU_bound × (number of processors/2)), there is a higher chance of the task mapping being unsuccessful or of the scheduling algorithm at the processor level failing.


Conclusions
As the complexity of safety critical applications increases, it is important to facilitate certification and to ensure efficient resource utilization. In this paper, we have proposed an algorithm for scheduling periodic tasks on multiprocessor mixed criticality systems, namely Partitioned Time-Triggered Own Criticality Based Priority (P-TT-OCBP). Our approach is based on a polynomial-time algorithm for generating time-triggered schedules [5], which we extended to deal with periodic tasks on multiprocessor platforms.
In addition, the algorithm performance was compared with an event-driven method, P-EDF-VD, and a table-driven approach, P_FENP_MC, in a non-preemptive context.
The experimental results show that our algorithm has a high success ratio when the number of processors is low. The higher the number of processors, the lower the success ratio due to the increased total utilization on each processor (the number of tasks scheduled on a processor is determined by the utilization bound). P-TT-OCBP outperforms the other two algorithms in terms of success ratio when the number of processors is low; however, if the number of processors increases, P_FENP_MC performs better due to the additional mapping test executed while partitioning tasks to processors.
As future work, practical implementations of the algorithm can be proposed for different real-time operating systems or real-time extensions of general-purpose operating systems, such as Litmus-RT (a multiprocessor real-time extension for Linux), which already provides support for a time-triggered execution environment. The algorithm can also be adapted to heterogeneous multiprocessor mixed criticality systems.

Funding: This research received no external funding.

Conflicts of Interest:
The authors declare no conflict of interest.