Scheduling Randomization Protocol to Improve Schedule Entropy for Multiprocessor Real-Time Systems

Abstract: Because most tasks on real-time systems are executed periodically, their execution patterns are highly predictable. While this property of real-time systems allows the development of strong schedulability analysis tools with high analytical capability, it also means that attackers can analyze the predictable execution patterns of real-time systems and use them as attack surfaces. Among the few approaches proposed to foil such timing-inference attacks, TaskShuffler, a schedule randomization protocol, has received considerable attention owing to its simplicity and applicability. However, the existing TaskShuffler is only applicable to uniprocessor platforms, where task execution patterns are much simpler to analyze than on multiprocessor platforms. In this study, we propose a new schedule randomization protocol for real-time systems on symmetric multiprocessor platforms, where all processors share the same architecture, which extends the existing TaskShuffler initially designed for uniprocessor platforms.


Introduction
The primary concerns in designing safety-critical real-time systems are to develop a methodology that effectively allocates limited computing resources (e.g., memory and the CPU (central processing unit)) to multiple real-time tasks (e.g., motor control and sensing), and to derive a mathematical analysis mechanism ensuring that every operation conducted by real-time tasks is completed within a predefined number of time units (called the deadline), thereby satisfying the real-time requirements [1]. The former and the latter, referred to as real-time scheduling algorithms and schedulability analysis, respectively, have been extensively studied over the past several decades in the field of real-time systems [2][3][4].
Unlike conventional research, which mostly focuses on timing guarantees under various computing environments with different operational constraints, recent studies have focused on the security aspect because modern real-time systems are exposed to unknown security attacks. For instance, modern real-time systems are increasingly connected to unsecured networks such as the Internet, which allows sophisticated adversaries to launch security attacks on UAVs (unmanned aerial vehicles) [5], industrial control systems [6], and automobiles [7,8].
Predictability, which is a key property of real-time systems, facilitates the development of several effective schedulability analysis mechanisms, but it also increases the success ratio of security attacks because timing is easy to infer [9][10][11]. Because most real-time tasks are executed periodically, their execution patterns are highly predictable, and most of the strong schedulability analysis mechanisms, such as deadline analysis (DA) [12] and response-time analysis (RTA) [13], exploit this property to judge whether each real-time operation can be completed within its deadline on the target environment. This seemingly advantageous property is a double-edged sword, because attackers can analyze the predictable execution patterns of real-time systems and use them as attack surfaces. For example, recent studies have shown that an adversary can launch cache-based side-channel attacks by collecting information about important tasks, or can set up new covert channels [9,14]. The success ratio of such attacks is quite high because the periodic execution of real-time tasks results in scheduling patterns that repeat every hyper-period of all real-time tasks on the system. After observing the real-time task schedules on the target system, the adversary can predict future schedules of the system.
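As a concrete illustration of this repetition, the window after which a periodic schedule recurs is the hyper-period, i.e., the least common multiple of the task periods. A minimal sketch with hypothetical periods:

```python
# Minimal sketch (assumed task periods, not from the paper) showing why
# fixed-priority schedules repeat: the pattern recurs every hyper-period,
# i.e., the least common multiple (LCM) of all task periods.
from math import lcm

periods = [8, 10, 20]          # hypothetical task periods T_i
hyper_period = lcm(*periods)   # the schedule repeats every hyper_period units
print(hyper_period)            # -> 40
```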
The main challenge in preventing such timing-inference security attacks on real-time systems is making the initially predictable schedule unpredictable while simultaneously satisfying the real-time constraints. Among the few mechanisms proposed to address this problem on uniprocessor systems, TaskShuffler, a schedule randomization protocol, has received considerable attention owing to its simplicity and applicability [15]. TaskShuffler exploits the notion of the priority inversion budget of each task, defined as the time interval between the task's worst-case finishing time and its deadline. The priority inversion budget of a task implies that the task can still complete its execution even if other tasks execute in its place for up to that amount of time. Utilizing the calculated priority inversion budget of each task, TaskShuffler effectively selects random tasks at each scheduling point and dynamically manages the budget to induce uncertainty while satisfying the real-time constraint.
As multiprocessor platforms have been increasingly adopted in modern real-time systems to conduct highly resource-consuming tasks, the state-of-the-art techniques need to be tailored to multiprocessor platforms. For example, the latest embedded platforms developed for autonomous driving cars consist of multiple CPUs to execute heavy-load tasks such as multi-sensor processing and image processing (e.g., NVIDIA Drive AGX Pegasus with 8-core "Carmel" CPUs based on the ARM architecture) [16]. However, the existing TaskShuffler is only applicable to uniprocessor platforms, where task execution patterns are much simpler to analyze than on multiprocessor platforms.
In this study, we propose a new schedule randomization protocol for real-time systems on symmetric multiprocessor platforms, where all processors share the same architecture, which extends the existing TaskShuffler initially designed for uniprocessor platforms. To develop a schedule randomization protocol for multiprocessor platforms, we need to address the following issues: (i) How to define the problem of improving the security (i.e., the uncertainty of schedules) of real-time systems while simultaneously satisfying schedulability on multiprocessor platforms, differentiating it from the uniprocessor case (Section 2), (ii) How to calculate the priority inversion budget of each task on multiprocessors (Section 4.1), (iii) How to effectively utilize the calculated priority inversion budgets in randomized schedules to improve uncertainty (Section 4.2), and (iv) How to satisfy the real-time requirements after applying the proposed schedule randomization protocol (Section 4.2).
To address point (i), we first recapitulate the underlying idea and purpose of the existing TaskShuffler designed for the uniprocessor case. Then, we define the problem for the multiprocessor case, which is addressed in this study. To address point (ii), we investigate each task's surplus computing capacity that can be yielded to lower-priority tasks without missing its deadline. To address point (iii), this lower-bounded surplus is effectively utilized by the new schedule randomization protocol proposed in this study. In addition, we demonstrate that if a task set is schedulable under fixed-priority (FP) preemptive scheduling, then it remains schedulable under the proposed schedule randomization protocol; this addresses point (iv). Using experimental simulations, we then discuss various factors affecting the uncertainty of the new schedules.

Problem Definition
TaskShuffler assumes that the attacker knows the task set's parameters as well as the scheduling policy of the target system [15]. The attacker aims at gleaning sensitive data, such as a victim task's private key, through shared resources such as DRAM (dynamic random access memory). Figure 1 briefly presents a scenario in which the attacker launches a cache side-channel attack exploiting the scheduling pattern of a real-time system. The attacker first hijacks a task, task A, that consecutively executes with a victim task, task V; we assume that task A is relatively easy to hijack compared to task V because task A is not related to security operations. Then, the attacker fills all cache sets with task A's data before task V executes. Thereafter, task V performs a cryptographic operation with a private key. Because all cache sets were already filled with task A's data, some cache sets are replaced by task V's data. Later, task A reads the cache sets and measures the access latencies. The cache sets used by task V result in slow latencies because of cache misses for task A. The attacker collects such timing information and infers the location of the private key in the shared memory. To make such an attack scenario feasible, the attacker must monitor the execution pattern of the target system for a long time to catch the proper timing to launch the attack. Because the same execution pattern of a real-time system is (mostly) repeated, the success ratio of the attack increases. Figure 2 presents the scheduling pattern of a task set τ = {τ_0, τ_1, τ_2 = (20, 3, 20)} scheduled by fixed-priority scheduling (τ_0 and τ_2 have the highest and lowest priorities, respectively) without (Figure 2a) and with (Figure 2b) TaskShuffler on a uniprocessor platform. It shows which task executes in each of 200 time slots, and the number in each rectangle indicates the index of the task executed in that time slot.
As shown in Figure 2a, the scheduling pattern is repeated every 40 time units because the least common multiple of periods of tasks in τ is 40. On the other hand, the scheduling pattern is obfuscated by TaskShuffler (most importantly) without schedulability loss (Figure 2b). That is, every task both in Figure 2a,b completes its execution without any deadline miss. It implies that TaskShuffler could improve potential durability against timing-inference attacks by obfuscating the scheduling pattern of real-time systems without schedulability loss.
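The repetition exploited by such attacks can be reproduced in a few lines. The sketch below, with a hypothetical task set (tuples of period, WCET, deadline) and an idealized unit-slot fixed-priority scheduler rather than the paper's simulator, shows that the schedule in one hyper-period is identical to the next:

```python
# Small simulation (hypothetical task set, not the one in Figure 2)
# illustrating that a fixed-priority schedule on a uniprocessor repeats
# every hyper-period when all tasks are released synchronously.
from math import lcm

tasks = [(10, 2, 10), (20, 3, 20), (40, 6, 40)]  # (T, C, D); index = priority
H = lcm(*(T for T, _, _ in tasks))               # hyper-period

def fp_schedule(tasks, horizon):
    """Return the task index executing in each unit time slot (-1 = idle)."""
    remaining = [0] * len(tasks)                 # remaining execution per task
    slots = []
    for t in range(horizon):
        for i, (T, C, _) in enumerate(tasks):
            if t % T == 0:                       # periodic job release
                remaining[i] = C
        ready = [i for i, r in enumerate(remaining) if r > 0]
        run = min(ready) if ready else -1        # lowest index = highest priority
        if run >= 0:
            remaining[run] -= 1
        slots.append(run)
    return slots

slots = fp_schedule(tasks, 2 * H)
assert slots[:H] == slots[H:]                    # same pattern every hyper-period
```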
In this study, we aim to develop a new schedule randomization protocol for multiprocessor platforms by extending the existing TaskShuffler initially designed for uniprocessor platforms. To achieve this, we first recapitulate the underlying idea and purpose of the existing TaskShuffler designed for the uniprocessor case. Then, we define the problem for the multiprocessor case, which is addressed in this study.
The key property of real-time systems is that most tasks operate repetitively at periodic intervals. This implies that the scheduling pattern exhibited over a certain period of time (e.g., one hyper-period of all tasks) is exhibited again in the next period. Although this property aids in developing better-performing analysis techniques that can easily judge the schedulability of real-time systems, an adversary can launch timing-inference attacks by collecting information about important tasks or by setting up new covert channels.
TaskShuffler compensates for this shortcoming of real-time systems by reducing the predictability of the schedules of real-time tasks. Therefore, even if an observer can record the exact schedule for a certain time period, the same schedule will not be exhibited in the next period under TaskShuffler. The underlying idea of the existing TaskShuffler is to pick a random task from the ready queue, unlike most real-time systems, where the highest-priority task is selected for scheduling. Such a counterintuitive mechanism can lead to priority inversions and deadline misses, placing system safety at risk. To solve this problem, TaskShuffler only allows limited priority inversion for each task; that is, it restricts the use of priority inversion so that every task meets the original real-time constraint (i.e., meeting deadlines). To this end, the TaskShuffler protocol calculates the lower-bound amount of priority inversion that each task can tolerate. When the priority inversion limit is reached during execution, the lower-priority task stops running, and the highest-priority task is selected to be scheduled.

Figure 2. A task set τ = {τ_0, τ_1, τ_2 = (20, 3, 20)} scheduled by fixed-priority scheduling without and with TaskShuffler on a uniprocessor platform.
TaskShuffler then achieves the following goal G.
G. Suppose that a task set τ is scheduled by a given FP scheduling algorithm S, and its schedulability is guaranteed by a given schedulability analysis A. Then, τ's schedulability is still guaranteed by A when τ is scheduled by the FP algorithm S incorporating the TaskShuffler protocol for uniprocessors.
Let λ(S) be the set of task sets schedulable by FP scheduling, and λ(A) be the set of task sets whose schedulability is guaranteed by a given schedulability analysis supporting FP scheduling. In addition, let λ(S′) denote the set of task sets schedulable by FP scheduling incorporating TaskShuffler, and λ(A′) the set of task sets whose schedulability is guaranteed by a given schedulability analysis supporting FP scheduling incorporating TaskShuffler. Figure 3a shows the regions of task sets covered by λ(S), λ(A), λ(S′), and λ(A′) for uniprocessor platforms. Figure 3a indicates the well-known fact that the schedulability of every schedulable task set τ ∈ λ(S) is guaranteed by the given exact schedulability analysis (e.g., RTA supporting FP scheduling) [13]. TaskShuffler then targets task sets belonging to λ(A) (= λ(S)) and applies the TaskShuffler protocol; note that task sets outside λ(A) have nothing to do with goal G. It utilizes a specialized schedulability analysis A′ to support the FP scheduling S incorporating the TaskShuffler protocol. When A′ is conducted for each task τ_i ∈ τ, the lower-bound amount of priority inversion for each task is calculated, and TaskShuffler exploits it without compromising the schedulability of task sets τ ∈ λ(A). Therefore, it achieves λ(S) = λ(A) = λ(A′) = λ(S′), as shown in Figure 3a.
For multiprocessor platforms, there is no exact schedulability analysis for our system model [17], which yields λ(A) ⊊ λ(S), as shown in Figure 3b. Therefore, we target λ(A) (⊆ λ(S)), in which the schedulability of tasks is guaranteed by the DA schedulability analysis, and aim at achieving goal G on multiprocessors.

System Model
We consider a task set τ following the Liu and Layland task (and system) model (considered the de facto standard in real-time scheduling theory), in which every task τ_k ∈ τ is scheduled by a global, preemptive, and work-conserving FP real-time scheduling algorithm on m identical processors [1]. A scheduling algorithm is called global, preemptive, and work-conserving if a task can migrate from one processor to another, a lower-priority task can be preempted by a higher-priority one, and no processor is ever idle when there are jobs to be executed, respectively. In FP scheduling, a priority is assigned to each task such that all jobs invoked by the same task have the same priority. The Liu and Layland model assumes that tasks are independent, no synchronization is needed, and resources (except processors or cores) are always available. Following the task model, we assume that all processors share a common cache and main memory. A task τ_k periodically invokes a job J_k every T_k time units, and each job executes for at most C_k time units (known as the worst-case execution time (WCET)) before declaring its completion. Every job invoked by τ_k should complete its execution within D_k time units as a real-time constraint. The j-th job invoked by a task τ_k is denoted by J_k^j; it is invoked at the release time r_k^j and should finish its execution by the absolute deadline d_k^j = r_k^j + D_k. We use the notation J_k to indicate an arbitrary job of a task τ_k. A job J_k^j is said to be schedulable when it finishes its execution before its absolute deadline d_k^j. Further, a task τ_k is said to be schedulable when all jobs invoked by τ_k are schedulable, and a task set τ is said to be schedulable when all tasks τ_k ∈ τ are schedulable. lp(τ_k) and hp(τ_k) represent the sets of tasks whose priorities are lower and higher than that of τ_k, respectively.
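The model above can be summarized in code. The following sketch (illustrative names, not from the paper's implementation) captures the task parameters and the lp/hp sets:

```python
# Sketch of the Liu and Layland task model described above.
# Field and function names are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    index: int      # priority: lower index = higher priority (FP)
    T: int          # period
    C: int          # worst-case execution time (WCET)
    D: int          # relative deadline

def hp(tau, k):
    """Tasks with higher priority than tau_k."""
    return [t for t in tau if t.index < k]

def lp(tau, k):
    """Tasks with lower priority than tau_k."""
    return [t for t in tau if t.index > k]

tau = [Task(0, 10, 2, 10), Task(1, 20, 3, 20), Task(2, 40, 6, 40)]
print([t.index for t in hp(tau, 2)])   # -> [0, 1]
print([t.index for t in lp(tau, 0)])   # -> [1, 2]
```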
We consider online scheduling algorithms and offline schedulability analysis (i.e., the priorities of pending or executing jobs are determined by the given scheduling algorithm after the system starts, while the schedulability of the given task set is judged before the system starts).

Schedule Randomization Protocol for Multiprocessors
The key idea of our schedule randomization protocol is to select random jobs from the ready queue rather than the originally prioritized ones that are supposed to be selected by a given algorithm. This operation inevitably causes priority inversions for some tasks, which can induce an additional execution delay when compared to the existing schedule. This can result in a deadline miss in the worst case, even if the task is deemed schedulable by a given schedulability analysis. To avoid such situations, we should employ bounded priority inversions (calculated by the schedule randomization protocol) so that every task completes its execution before its corresponding deadline.
In this section, we first calculate the upper bound of allowed priority inversions for each task (in the first subsection), and then present how to effectively utilize the calculated priority inversion budget values in a schedule randomization protocol for multiprocessors.

Priority Inversion Budget Calculation
As an offline step of our schedule randomization protocol for multiprocessors, we must calculate the maximum number of time units for which jobs of tasks in lp(τ_k) are allowed to execute while a job of the task τ_k waits for its turn (we use a subscript 'k' for notation related to the task whose schedulability is judged, and a subscript 'i' for notation related to tasks with higher priority than τ_k). We refer to such time units as the priority inversion budget, V_k, which is utilized in the online step of the randomization protocol. We define the priority inversion budget as follows.
Definition 1. (Priority inversion budget V_k) The priority inversion budget V_k of a task τ_k is defined as the maximum number of time units in [r_k, d_k) for which jobs of tasks in lp(τ_k) are allowed to execute while a job J_k of the task τ_k waits on multiprocessors, while still ensuring the schedulability of τ_k.
Thus, V_k can be lower-bounded by calculating the allowable delay in the execution of J_k caused by lower-priority jobs in [r_k, d_k) without missing the deadline. To achieve this, we exploit the underlying mechanism of the well-known DA schedulability analysis, which uses two notions of interference, defined as follows.

Definition 2. (Worst-case interference I_k on τ_k) The worst-case interference I_k on a task τ_k in an interval [r_k, d_k) on multiprocessors is defined as the maximum cumulative length of all the intervals in which J_k is ready to execute but cannot be scheduled because of higher-priority jobs.

Definition 3. (Worst-case interference I_{k←i} of τ_i on τ_k) The worst-case interference I_{k←i} of a task τ_i on a task τ_k in an interval [r_k, d_k) on multiprocessors is defined as the maximum cumulative length of all the intervals in which J_k is ready to execute but cannot be executed on any processor while the job J_i of the task τ_i is executing.
Using the definition of I_k, V_k is calculated as follows:

V_k = D_k − C_k − I_k. (1)

For a job J_k to be interfered with at a time unit in [r_k, d_k), there must be m higher-priority jobs executing at that time unit. By the definition of I_{k←i}, I_k is therefore calculated as follows:

I_k = ⌊(1/m) · Σ_{τ_i ∈ hp(τ_k)} I_{k←i}⌋. (2)
To upper-bound I_{k←i}, we use the concept of the workload of a task τ_i in an interval of length ℓ, defined as the maximum amount of time units required by all jobs of τ_i released in the interval. As shown in Figure 4, the left-most job (called the carry-in job) of τ_i starts its execution at t (i.e., the beginning of the interval) and finishes at t + C_i; that is, it executes for C_i without any delay. Thereafter, the following jobs are released and executed without any delay. By counting the jobs executing for a full C_i and the job executing for only a part of C_i, the workload W_i(ℓ) of τ_i in an interval of length ℓ (an upper bound on I_{k←i}) is calculated as follows [18]:

W_i(ℓ) = ⌊ℓ/T_i⌋ · C_i + min(C_i, ℓ mod T_i). (3)
Figure 4. The worst-case workload of a task τ_i in an interval of length ℓ.

Figure 5 illustrates the underlying idea of DA with an example in which the target task τ_k's schedulability is judged considering its four higher-priority tasks τ_i ∈ hp(τ_k) on m = 3 processors. In the interval [r_k^j, d_k^j), there is only one job J_k^j of τ_k, and there is no deadline miss if J_k^j's execution is hindered for less than D_k − C_k + 1 time units. As seen in Figure 5, I_{k←i} can be larger than D_k − C_k + 1, and some portion of it inevitably executes in parallel with J_k^j, since we assume that a job cannot execute in parallel on more than one processor. Thus, DA limits the contribution of each I_{k←i} to D_k − C_k + 1, which refines Equation (2) as follows:

I_k = ⌊(1/m) · Σ_{τ_i ∈ hp(τ_k)} min(I_{k←i}, D_k − C_k + 1)⌋. (4)
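The DA-based offline budget calculation described above can be sketched as follows; the task parameters are hypothetical, and this is an illustration of the derivation rather than the authors' implementation:

```python
# Sketch of the offline priority-inversion-budget calculation:
# workload bound, DA interference with the (D_k - C_k + 1) cap, and
# V_k = D_k - C_k - I_k. Task parameters are hypothetical.

def workload(T_i, C_i, ell):
    """Upper bound on tau_i's execution in any interval of length ell."""
    return (ell // T_i) * C_i + min(C_i, ell % T_i)

def inversion_budget(tau, k, m):
    """V_k for tau[k] ((T, C, D) tuples, index = priority) on m processors."""
    T_k, C_k, D_k = tau[k]
    cap = D_k - C_k + 1                        # DA cap on each I_{k<-i}
    total = sum(min(workload(T_i, C_i, D_k), cap)
                for (T_i, C_i, _) in tau[:k])  # higher-priority tasks only
    I_k = total // m                           # DA interference bound
    return D_k - C_k - I_k                     # budget; >= 0 if deemed schedulable

tau = [(10, 2, 10), (20, 3, 20), (40, 6, 40)]
print(inversion_budget(tau, 2, m=2))           # -> 27
```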
Figure 5. An example of deadline analysis (DA) for four higher-priority tasks τ_i and a target task τ_k.

Theorem 1. Suppose that τ_k ∈ τ, scheduled by a given fixed-priority scheduling algorithm, is deemed schedulable by DA, and that tasks in lp(τ_k) do not delay τ_k by more than V_k. Then, τ_k is still schedulable under the randomization protocol.
Proof. By definition, τ_k is schedulable if every job J_k^j released by τ_k at r_k^j can finish its execution within D_k. From Equation (1), we have

C_k + I_k + V_k = D_k. (5)

The worst-case execution of J_k^j is upper-bounded by C_k, and the time for which J_k^j is hindered by higher-priority jobs is upper-bounded by I_k according to Equation (4). Therefore, if the tasks in lp(τ_k) do not delay τ_k by more than V_k, then every job J_k^j of τ_k finishes its execution within D_k.

Schedule Randomization Protocol
Based on the mechanism for calculating the priority inversion budget of each task explained in the previous section, we now illustrate how the new schedule randomization protocol for multiprocessors operates. Let Q_r = (J_(1), J_(2), ..., J_(|Q_r|)) be the ready queue, in which active jobs are sorted in decreasing order of priority; J_(1) and J_(|Q_r|) are the highest- and lowest-priority jobs in Q_r, respectively. We assume that TaskShuffler operates only when |Q_r| is greater than m because, otherwise, all active jobs are selected for scheduling.
TaskShuffler for multiprocessors conducts Algorithm 1 at every scheduling decision for which |Q_r| > m. It first adds J_(1) to the candidate list L_c (Line 1). If its remaining inversion budget v_(1) is equal to zero, it returns J_(1) and the (m − 1) highest-priority jobs in Q_r (Lines 2-4). Otherwise, letting J_(i) be the job in the current iteration, it iterates over J_(2) through J_(|Q_r|) as follows (Lines 5-11). If the remaining inversion budget v_(i) of J_(i) is larger than zero, it adds J_(i) to the candidate list L_c and considers the next job (Lines 6-7). Otherwise, it adds J_(i) to L_c and stops the iteration (Lines 8-9). Then, if |L_c| is smaller than or equal to m, it returns all jobs in L_c and the (m − |L_c|) highest-priority jobs in Q_r. Otherwise, it returns m randomly selected jobs from L_c. After Algorithm 1 is conducted, the selected m jobs execute until the next scheduling decision.
Algorithm 1. Randomized job selection for multiprocessors.
1: add J_(1) to L_c
2: if v_(1) = 0 then
3:   return J_(1) and the (m − 1) highest-priority jobs in Q_r
4: end if
5: for each J_(i) ∈ Q_r for i from 2 to |Q_r| do
6:   if v_(i) > 0 then
7:     add J_(i) to L_c
8:   else
9:     add J_(i) to L_c and go to Step 12
10:  end if
11: end for
12: if |L_c| ≤ m then
13:   return all jobs in L_c and the (m − |L_c|) highest-priority jobs in Q_r
14: else
15:   return m randomly selected jobs from L_c
16: end if

Let τ_b denote the task of the lowest-priority job among the selected jobs, and τ_u the set of tasks whose jobs were not selected. Letting t be the current time, the next schedule decision is made at

t′ = t + min_{τ_i ∈ hp(τ_b) ∩ τ_u} v_i, (6)

unless a new job arrives or any of the selected jobs finishes its execution before t′. Thus, the remaining inversion budget of every released job belonging to a task τ_i ∈ (hp(τ_b) ∩ τ_u) is deducted by one at each time unit until the next schedule decision is made, a new job arrives, or any of the selected jobs finishes its execution before t′. The remaining priority inversion budget v_i of each job J_i is set to V_i when J_i is released.
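The selection step can be sketched compactly as follows; the job identifiers and budget map are illustrative assumptions, not the paper's data structures:

```python
# Sketch of the per-decision job selection described above.
# Q_r: ready queue sorted by decreasing priority; v: remaining
# priority-inversion budget per job; m: number of processors.
import random

def select_jobs(Q_r, v, m):
    """Pick the m jobs to execute until the next scheduling decision."""
    assert len(Q_r) > m                      # otherwise all ready jobs run
    L_c = [Q_r[0]]                           # always consider the top job
    if v[Q_r[0]] == 0:                       # no budget left: no inversion
        return [Q_r[0]] + Q_r[1:m]
    for j in Q_r[1:]:                        # scan down the priority order
        L_c.append(j)
        if v[j] == 0:                        # budget exhausted: stop scanning
            break
    if len(L_c) <= m:                        # too few candidates: top up
        extra = [j for j in Q_r if j not in L_c][: m - len(L_c)]
        return L_c + extra
    return random.sample(L_c, m)             # randomized choice among candidates

Q_r = ["J1", "J2", "J3", "J4"]               # decreasing priority
v = {"J1": 2, "J2": 0, "J3": 5, "J4": 1}
print(select_jobs(Q_r, v, m=2))              # -> ['J1', 'J2']
```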

Theorem 2. Suppose that τ_k ∈ τ, scheduled by a given fixed-priority scheduling algorithm, is deemed schedulable by the DA schedulability analysis. Then, it is still schedulable under the schedule randomization protocol.
Proof. By Theorem 1, τ_k is schedulable if its execution is not hindered for more than V_k by lower-priority tasks. τ_k's execution is interfered with by lower-priority tasks only when τ_k's priority is higher than that of τ_b, and the amount of interference from lower-priority tasks cannot exceed V_k, since the next scheduling decision is made before v_k becomes zero owing to Equation (6).
To integrate TaskShuffler into an existing scheduler, the system must be capable of tracking tasks' remaining execution times and priority inversion budgets. Such a monitoring capability is already commonly available on many real-time systems, where it is used to guarantee that tasks do not exceed their execution allowances [19,20]. Utilizing this capability, TaskShuffler may impose additional scheduling decisions according to its policy, compared to vanilla scheduling algorithms (e.g., rate-monotonic). This naturally increases scheduling costs such as preemption (or context-switching) and migration costs. System designers should consider how much such scheduling costs will arise on their target systems when TaskShuffler is adopted to improve schedule entropy without schedulability loss.

Schedule Entropy for Multiprocessors
Because our goal is to improve the uncertainty of scheduling to foil the fixed schedule pattern of real-time scheduling, we need to evaluate the improvement in uncertainty achieved by our proposed schedule randomization protocol. To this end, we use the concept of schedule entropy, initially designed for uniprocessors [21][22][23]. The underlying idea of schedule entropy is to measure the randomness (or unpredictability) of the schedule at each time unit, called the slot entropy, and to sum all slot entropies over a hyper-period L (defined as the least common multiple of T_i over all tasks τ_i ∈ τ). The slot entropy H_τ(t) at a time slot t for a task set τ is calculated as follows:

H_τ(t) = − Σ_{τ_i ∈ τ} Pr(τ_i, t) · log_2 Pr(τ_i, t), (7)
where Pr(τ_i, t) is the probability mass function of a task τ_i appearing at time t; Pr(τ_i, t) is obtained empirically by observing multiple hyper-periods [15]. Then, the schedule entropy H_τ is calculated as the summation of all slot entropies over a hyper-period L as follows:

H_τ = Σ_{t=0}^{L−1} H_τ(t). (8)
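The empirical entropy computation can be sketched as follows, with hypothetical observed schedules; the probability mass function is estimated by counting which task occupies each slot across recorded hyper-periods:

```python
# Sketch of the schedule-entropy computation: estimate Pr(tau_i, t)
# from observed hyper-periods, compute each slot entropy, and sum over
# one hyper-period L. The observation data are hypothetical.
from collections import Counter
from math import log2

def schedule_entropy(observations, L):
    """observations: list of schedules, each a length-L list of task ids."""
    H = 0.0
    for t in range(L):
        counts = Counter(obs[t] for obs in observations)
        n = len(observations)
        # slot entropy: -sum over tasks of Pr(tau_i, t) * log2 Pr(tau_i, t)
        H += -sum((c / n) * log2(c / n) for c in counts.values())
    return H

obs = [[0, 1], [1, 0], [0, 1], [1, 0]]   # two slots, two equally likely tasks
print(schedule_entropy(obs, L=2))        # -> 2.0 (one bit of entropy per slot)
```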
Following the considered task model, we assume that all processors share a common cache and main memory. This implies that which processor is assigned to a task τ_i does not affect the level of schedule entropy; that is, the level of schedule entropy is influenced only by whether τ_i is scheduled in a given time slot. This assumption is a limitation of our study and should be addressed in future work.

Evaluation
In this section, we evaluate the performance of our schedule randomization protocol with randomly generated synthetic task sets to better understand the effect of various factors on our approach. We randomly generated task sets evenly across nine system utilization groups, [0.01 + 0.1 · i, 0.09 + 0.1 · i] for i = 0, ..., 8, that is, 100 instances per group. The system utilization of a task set is defined as the sum of the utilizations of all tasks (Σ_{τ_i ∈ τ} C_i/T_i) in the task set. We considered three different numbers of processors, m = 2, 4, and 8. Each system utilization group has five sub-groups, each with a fixed number of tasks; for m = 2, 4, and 8, the numbers of tasks for the five sub-groups are {5, 7, 9, 11, 13}, {6, 9, 12, 15, 18}, and {7, 11, 15, 19, 23}, respectively. Each task period T_i was randomly selected from {20, 40, 80, 160, 320, 640, 1280, 2560}, and each WCET C_i was randomly selected from [1, min(50, T_i)]. A total of 100 task sets were generated for each sub-group, and thus 7 · 3 · 5 · 100 = 10,500 task sets were generated in total. As our goal is to improve the uncertainty of the schedule without compromising schedulability, we only selected task sets whose schedulability is guaranteed by the DA schedulability analysis. We used rate-monotonic (RM) scheduling as the base scheduling algorithm to which our proposed schedule randomization was applied. To obtain the converged schedule entropy of each task set, we simulated 10,000 hyper-periods of the task set, which ensures less than a 0.01% difference (recommended for obtaining converged schedule entropy [15]) between the schedule entropies after 9999 and 10,000 hyper-periods.
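The task-set generation procedure described above can be sketched as follows; the rejection-sampling loop for hitting a utilization group and the implicit deadlines (D = T) are assumptions about the setup, not stated in the text:

```python
# Sketch of the synthetic task-set generation: periods drawn from the
# given set, WCETs from [1, min(50, T_i)], and sets kept only if their
# total utilization falls in the target group. The resampling loop and
# implicit deadlines (D = T) are assumptions.
import random

PERIODS = [20, 40, 80, 160, 320, 640, 1280, 2560]

def generate_task_set(n_tasks, u_low, u_high, rng):
    while True:                                   # resample until in range
        tasks = []
        for _ in range(n_tasks):
            T = rng.choice(PERIODS)
            C = rng.randint(1, min(50, T))
            tasks.append((T, C, T))               # assumed implicit deadline
        util = sum(C / T for (T, C, _) in tasks)
        if u_low <= util <= u_high:
            return tasks

rng = random.Random(42)
tau = generate_task_set(5, 0.41, 0.49, rng)       # a sub-group of 5 tasks
print(round(sum(C / T for (T, C, _) in tau), 3))  # utilization in [0.41, 0.49]
```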
Figure 6 plots the average schedule entropy of each sub-group's task sets (i.e., 100 task sets per sub-group) over the system utilization groups for m = 2, 4, and 8 (owing to limited space on the x-axis, [0.01 + 0.1 · i, 0.09 + 0.1 · i] for i = 0, ..., 8 is represented as [0.0, 0.1], [0.1, 0.2], ..., [0.7, 0.8], [0.8, 0.9]). As shown in Figure 6a, the schedule entropy of a task set is high on average when it consists of a large number of tasks (the system utilization group [0.8, 0.9] exhibits an exceptional result because of the heavy utilization of its task sets, with the result that most of them exhibit zero schedule entropy). This is because a larger number of tasks gives our schedule randomization protocol more options for tasks to be randomly selected in every time slot. For example, the slot entropy H_τ(t) at a time slot t of a task set τ with two tasks τ_i ∈ τ, each with Pr(τ_i, t) = 1/2, is 1, while H_τ(t) is 2 if τ has four tasks τ_i ∈ τ, each with Pr(τ_i, t) = 1/4. This implies that a larger number of tasks per task set can improve schedule entropy. In addition, Figure 6a demonstrates that task sets in groups whose system utilization is either low or high exhibit low average schedule entropy. Because task sets with low system utilization have small WCETs, no task executes (i.e., processors are idle) in most time slots; with high utilization, most tasks have low V_i values, so the chances of tasks being randomly selected are quite low. Therefore, low schedule entropy results in both cases. From Figure 6a-c, we can observe that the average schedule entropy of task sets decreases as the number of processors m increases. This is mainly due to the underlying pessimism of the DA schedulability analysis in calculating I_k in Equation (4).
As Figure 5 implies, the DA schedulability analysis assumes that the execution corresponding to I_{k←i} is performed as much as possible in the interval [r_k^j, r_k^j + D_k − C_k + 1) to upper-bound I_k. However, this rarely happens, since two jobs released consecutively by the same task may execute at an interval from each other, which implies that the amount of execution contributing to I_k is overestimated by the DA schedulability analysis. This pessimism increases with larger values of m. As our schedule randomization protocol relies on the DA schedulability analysis to derive V_i, task sets with lower V_i values (from a larger m) result in lower average schedule entropy.
While Figure 6 presents the average schedule entropy over the system utilization groups, Tables 1-3 show the maximum schedule entropy obtained from each setting. As the minimum schedule entropy of every setting is zero, Tables 1-3 also represent the range of schedule entropy that can be obtained from each setting. "-" in the tables represents a value lower than 0.1, and we exclude a row if all of its values are "-". The trends shown in Figure 6a-c also appear in Tables 1-3, but the maximum schedule entropy of task sets in the high system utilization groups is relatively high compared to the average schedule entropy of those task sets. This indicates that a small number of task sets in the high system utilization groups show exceptionally high schedule entropy under certain task parameter settings arising from the considered task set generation method, while the other task sets have zero schedule entropy.
One may wonder whether applying our schedule randomization protocol to an existing scheduling algorithm incurs much additional scheduling overhead. Note that under a preemptive scheduling algorithm (the basic setting considered in this paper) without the schedule randomization protocol, a schedule decision is made only when a job finishes its execution or a new job is released. With the schedule randomization protocol, an additional schedule decision is made when v_i (i.e., the remaining priority inversion budget, initially V_i) of a job becomes zero. Figure 7 shows the ratio between the number of schedule decisions made by naive RM and that made by RM with the schedule randomization protocol. As shown in Figure 7, task sets with high system utilization or a larger m exhibit a lower schedule decision ratio. With high system utilization or a larger m, each task has a lower priority inversion budget, as mentioned above, so the schedule randomization protocol is rarely triggered in such settings. Overall, a high schedule decision ratio implies larger scheduling overhead; thus, system designers should carefully consider the trade-off between scheduling overhead and the degree of security they want to achieve for their target system.
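The extra decision points can be pictured with a sketch of one randomized selection step: a lower-priority job may run only while every higher-priority ready job it bypasses still has a positive remaining budget. The function below is a hypothetical illustration under that assumption; the names and structure are ours, not the paper's exact algorithm:

```python
import random

def pick_jobs(ready, m, rng):
    """One randomized scheduling decision on m processors (sketch).

    ready: list of (priority, v) pairs, where a lower priority value
    means higher priority and v is the job's remaining priority
    inversion budget. Returns the indices of the chosen jobs."""
    candidates = list(range(len(ready)))
    chosen = []
    while len(chosen) < m and candidates:
        # Job j is eligible if every higher-priority candidate it
        # would bypass can still tolerate the inversion (v > 0).
        # The highest-priority candidate is always eligible.
        eligible = [j for j in candidates
                    if all(ready[k][1] > 0
                           for k in candidates
                           if k != j and ready[k][0] < ready[j][0])]
        j = rng.choice(eligible)
        chosen.append(j)
        candidates.remove(j)
    return chosen

# When the highest-priority job's budget is exhausted, the choice
# degenerates to strict priority order:
print(pick_jobs([(1, 0), (2, 5)], 1, random.Random(0)))  # [0]
```

While all budgets are positive the choice is genuinely random, which is where both the entropy gain and the additional schedule decisions, and hence the overhead, come from.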

Related Work
The problem of information leakage in real-time systems has been addressed in several studies. Mohan et al. considered real-time tasks with different security levels and focused on information leakage through shared computing resources (e.g., RAM and cache). They proposed a mechanism for FP scheduling that conditionally flushes the state of shared resources, reducing the chance of an attacker obtaining sensitive information from those resources [24]. Because they incorporated a security mechanism into existing FP scheduling, additional timing overhead was inevitable; they therefore proposed a new sufficient schedulability analysis to accommodate it. In [25], an exact schedulability analysis was proposed to improve analytical capability, and this work was extended to mixed-criticality systems in [26]. The aforementioned studies addressed these issues only for non-preemptive scheduling; Pellizzoni et al. therefore extended this work to preemptive scheduling [27]. They also generalized the task model and proposed an optimal priority assignment method that determines task preemptibility.
Another approach to improving the security of real-time systems is to randomize schedules without compromising schedulability. TaskShuffler, the protocol addressed in this study, was proposed by Yoon et al. for a preemptive FP scheduling algorithm [15]. The goal of TaskShuffler is to increase schedule uncertainty and thereby reduce the success ratio of timing inference attacks. Krüger et al. proposed an online schedule randomization protocol for time-triggered systems [28]. While these two approaches are applicable only to uniprocessor platforms, we focus on multiprocessor platforms.

Conclusions
In this study, we developed a new schedule randomization protocol for symmetric multiprocessors that improves the security of real-time systems while preserving their schedulability, by extending the existing TaskShuffler initially designed for uniprocessors. To this end, we first defined the problem of simultaneously improving the security of real-time systems and satisfying their schedulability on multiprocessor platforms, differentiating it from the uniprocessor case. We then employed the DA schedulability analysis to derive a priority inversion budget value for each task, and proposed an algorithm that effectively utilizes the calculated budgets to increase the uncertainty of the randomized schedules. Based on simulation results, we investigated how various factors affect our approach. Non-preemptive [29] and partitioned [2] scheduling will be considered in future work.
As our study adopts the fundamental Liu and Layland task model, which does not consider task dependencies, shared resources, or scheduling costs (e.g., migration or preemption costs), it cannot be directly applied to actual real-time systems without relaxing these assumptions. We leave this as promising future work.