Search Results (31)

Search Parameters:
Keywords = real-time multi-processor scheduling

26 pages, 1728 KB  
Article
Optimizing Federated Scheduling for Real-Time DAG Tasks via Node-Level Parallelization
by Jiaqing Qiao, Sirui Chen, Tianwen Chen and Lei Feng
Computers 2025, 14(10), 449; https://doi.org/10.3390/computers14100449 - 21 Oct 2025
Viewed by 657
Abstract
Real-time task scheduling in multi-core systems is a crucial research area, especially for parallel task scheduling, where the Directed Acyclic Graph (DAG) model is commonly used to represent task dependencies. However, existing research shows that resource utilization and schedulability rates for DAG task set scheduling remain relatively low. Meanwhile, some studies have identified that certain parallel task nodes exhibit “parallelization freedom,” allowing them to be decomposed into sub-threads that can execute concurrently. This presents a promising opportunity for improving task schedulability. Building on this, we propose an approach that jointly optimizes node parallelization and processor core allocation under federated scheduling. Simulation experiments demonstrate that parallelizing nodes significantly reduces the number of cores required for each task and increases the percentage of schedulable task sets.
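As background on the federated model the paper optimizes: in the classical federated scheduling analysis, a "heavy" DAG task with total worst-case work W, critical-path length L, and implicit deadline D is granted ceil((W - L) / (D - L)) dedicated cores, so shortening the critical path by parallelizing nodes directly lowers core demand. A minimal sketch with invented numbers (not the paper's optimized allocation):

```python
import math

def federated_cores(work, cpl, deadline):
    """Minimum dedicated cores for a DAG task under classical federated
    scheduling: ceil((W - L) / (D - L)).
    work: total WCET over all nodes (volume W)
    cpl: critical-path length L
    deadline: implicit deadline D."""
    if work <= deadline:          # light task: a single core suffices
        return 1
    if cpl >= deadline:           # infeasible: even unlimited cores miss D
        return None
    return math.ceil((work - cpl) / (deadline - cpl))

# Parallelizing nodes shortens the critical path, which can cut cores:
print(federated_cores(100, 60, 70))  # L = 60 -> 4 cores
print(federated_cores(100, 40, 70))  # same work, L = 40 -> 2 cores
```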

23 pages, 3558 KB  
Article
Research on High-Reliability Energy-Aware Scheduling Strategy for Heterogeneous Distributed Systems
by Ziyu Chen, Jing Wu, Lin Cheng and Tao Tao
Big Data Cogn. Comput. 2025, 9(6), 160; https://doi.org/10.3390/bdcc9060160 - 17 Jun 2025
Viewed by 2300
Abstract
With the demand for workflow processing driven by edge computing in the Internet of Things (IoT) and cloud computing growing exponentially, task scheduling in heterogeneous distributed systems has become a key challenge for meeting real-time constraints in resource-constrained environments. Existing studies attempt to balance time constraints, energy efficiency, and system reliability in Dynamic Voltage and Frequency Scaling (DVFS) environments. This study proposes a two-stage collaborative optimization strategy that systematically addresses this multi-objective optimization problem through a novel algorithm design and theoretical analysis. First, based on a reliability-constrained model, we propose a topology-aware dynamic priority scheduling algorithm (EAWRS). This algorithm constructs a node priority function incorporating in-degree/out-degree weighting factors and critical path analysis to enable multi-objective optimization. Second, to address the time-varying reliability characteristics introduced by DVFS, we propose a Fibonacci search-based dynamic frequency scaling algorithm (SEFFA), which effectively reduces energy consumption while ensuring task reliability, achieving near-optimal processor energy adjustment. Together, EAWRS and SEFFA address the challenge of DAG-based dynamic scheduling on heterogeneous multi-core processor systems in IoT environments. Experimental evaluations at various scales show that, compared with three state-of-the-art scheduling algorithms, the proposed strategy reduces energy consumption by an average of 14.56% (up to 58.44% under high-reliability constraints) and shortens the makespan by 2.58–56.44% while strictly meeting reliability requirements.
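The abstract does not detail SEFFA's internals beyond its use of Fibonacci search over frequency levels. Fibonacci and ternary search both exploit the same property, unimodality of the cost over the search interval; the sketch below uses the simpler ternary interval reduction as a stand-in, with an invented energy curve:

```python
def unimodal_min(f, lo, hi):
    """Locate the minimiser of a strictly unimodal cost f over the
    integers [lo, hi] with O(log) probes. SEFFA is described as
    Fibonacci-search-based; this uses the closely related ternary
    interval reduction for clarity."""
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if f(m1) <= f(m2):
            hi = m2 - 1   # minimiser cannot lie at or right of m2
        else:
            lo = m1 + 1   # minimiser cannot lie at or left of m1
    return min(range(lo, hi + 1), key=f)

# Invented unimodal energy-vs-frequency-index curve, minimum at index 6:
energy = lambda i: (i - 6) ** 2 + 10
print(unimodal_min(energy, 0, 15))  # -> 6
```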

23 pages, 2620 KB  
Article
A Novel Overload Control Algorithm for Distributed Control Systems to Enhance Reliability in Industrial Automation
by Taikyeong Jeong
Appl. Sci. 2025, 15(10), 5766; https://doi.org/10.3390/app15105766 - 21 May 2025
Cited by 1 | Viewed by 1240
Abstract
This paper presents a novel real-time overload detection algorithm for distributed control systems (DCSs), particularly applied to thermoelectric power plant environments. The proposed method is integrated with a modular multi-functional processor (MFP) architecture, designed to enhance system reliability, optimize resource utilization, and improve fault resilience under dynamic operational conditions. As legacy DCS platforms, such as those installed at the Tae-An Thermoelectric Power Plant, face limitations in applying advanced logic mechanisms, a simulation-based test bench was developed to validate the algorithm in anticipation of future DCS upgrades. The algorithm operates by partitioning function code executions into segment groups, enabling fine-grained, real-time CPU and memory utilization monitoring. Simulation studies, including a modeled denitrification process, demonstrated the system’s effectiveness in maintaining load balance, reducing power consumption to 17 mW under a 2 Gbps data throughput, and mitigating overload levels by approximately 31.7%, thereby outperforming conventional control mechanisms. The segmentation strategy, combined with summation logic, further supports scalable deployment across both legacy and next-generation DCS infrastructures. By enabling proactive overload mitigation and intelligent energy utilization, the proposed solution contributes to the advancement of self-regulating power control systems. Its applicability extends to energy management, production scheduling, and digital signal processing—domains where real-time optimization and operational reliability are essential.

21 pages, 630 KB  
Article
Polynomial Exact Schedulability and Infeasibility Test for Fixed-Priority Scheduling on Multiprocessor Platforms
by Natalia Garanina, Igor Anureev and Dmitry Kondratyev
Appl. Syst. Innov. 2025, 8(1), 15; https://doi.org/10.3390/asi8010015 - 20 Jan 2025
Viewed by 1466
Abstract
In this paper, we develop an exact schedulability test and a sufficient infeasibility test for fixed-priority scheduling on multiprocessor platforms. We base our tests on presenting real-time systems as a Kripke model for dynamic real-time systems with sporadic non-preemptible tasks running on a multiprocessor platform and an online scheduler using global fixed priorities. This model, comprising states and transitions between them, allows us to formally justify a polynomial-time algorithm for an exact schedulability test using the idea of backward reachability. Using this algorithm, we perform the exact schedulability test for the above real-time systems in which there is one more task than processors. The main advantage of this algorithm is its polynomial complexity, while, in general, the problem of exact schedulability testing of real-time systems on multiprocessor platforms is NP-hard. The infeasibility test uses the same algorithm for an arbitrary task-to-processor ratio, providing a sufficient infeasibility condition: if the real-time system under test is not schedulable in some cases, the algorithm detects this. We conduct an experimental study of our algorithms on datasets generated with different utilization values and compare them to several state-of-the-art schedulability tests. The experiments show that the performance of our algorithm exceeds that of its analogues while its accuracy is similar.
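The backward-reachability idea underlying the test can be sketched generically: collect every state from which a deadline-miss ("bad") state is reachable, then check whether the initial state lies in that set. The toy transition relation below is illustrative; the paper's contribution is a polynomial-time specialisation of this fixed-point computation, which the sketch does not reproduce:

```python
from collections import deque

def backward_reachable(bad, pre):
    """Backward reachability as a fixed point: starting from the bad
    (deadline-miss) states, repeatedly add predecessors until nothing
    new appears. A system is safe iff its initial state is not in the
    returned set. pre(s) yields the predecessor states of s."""
    seen = set(bad)
    frontier = deque(bad)
    while frontier:
        s = frontier.popleft()
        for p in pre(s):
            if p not in seen:
                seen.add(p)
                frontier.append(p)
    return seen

# Toy successor relation; state 3 is the "deadline miss" state:
succ = {1: [2], 2: [3], 3: [], 4: [2]}
pre = lambda s: [u for u, vs in succ.items() if s in vs]
print(sorted(backward_reachable({3}, pre)))  # -> [1, 2, 3, 4]
```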
(This article belongs to the Section Control and Systems Engineering)

19 pages, 1885 KB  
Article
A Tetris-Based Task Allocation Strategy for Real-Time Operating Systems
by Yumeng Chen, Songlin Liu, Zongmiao He and Xiang Ling
Electronics 2025, 14(1), 98; https://doi.org/10.3390/electronics14010098 - 29 Dec 2024
Cited by 2 | Viewed by 1214
Abstract
Real-time constrained multiprocessor systems have been widely applied across various domains. In this paper, we focus on the scheduling algorithm for directed acyclic graph (DAG) tasks under partitioned scheduling on multiprocessor systems. Effective real-time task scheduling algorithms significantly enhance the performance and stability of multiprocessor systems. Traditional real-time task scheduling algorithms commonly rely on a single heuristic parameter as the reference for task allocation, which typically results in suboptimal performance. Inspired by the Tetris algorithm, we propose a novel heuristic scheduling algorithm, named the Tetris game scoring scheduling algorithm (TGSSA), that integrates multiple heuristic parameters. The process of real-time DAG task scheduling on a multiprocessor system is modeled as a Tetris game. Through worst-case response time (WCRT) simulations and measured average response times on RT-Linux, a widely used real-time operating system, our algorithm demonstrates superior performance, effectively improving the efficiency and stability of real-time operating systems.
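To illustrate the idea of scoring placements with multiple heuristic parameters rather than one, here is a toy partitioner in the spirit of TGSSA; the two heuristics (tightness of fit and current load) and their weights are illustrative inventions, not the paper's scoring function:

```python
def allocate(tasks, n_procs, w_fit=1.0, w_balance=0.5):
    """Toy multi-heuristic partitioner: each candidate (task, processor)
    placement is scored like a Tetris move by combining several
    heuristics instead of a single parameter. Illustrative only."""
    load = [0.0] * n_procs
    placement = {}
    # place heaviest tasks first (first-fit-decreasing style ordering)
    for name, util in sorted(tasks.items(), key=lambda kv: -kv[1]):
        best, best_score = None, None
        for p in range(n_procs):
            if load[p] + util > 1.0:        # respect utilization capacity
                continue
            gap = 1.0 - (load[p] + util)    # leftover "hole", as in Tetris
            score = -w_fit * gap - w_balance * load[p]
            if best_score is None or score > best_score:
                best, best_score = p, score
        if best is None:
            return None                     # no processor fits this task
        load[best] += util
        placement[name] = best
    return placement

print(allocate({'a': 0.6, 'b': 0.6, 'c': 0.3, 'd': 0.3}, 2))
# -> {'a': 0, 'b': 1, 'c': 0, 'd': 1}
```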
(This article belongs to the Section Computer Science & Engineering)

24 pages, 594 KB  
Article
Enhanced Harmonic Partitioned Scheduling of Periodic Real-Time Tasks Based on Slack Analysis
by Jiankang Ren, Jun Zhang, Xu Li, Wei Cao, Shengyu Li, Wenxin Chu and Chengzhang Song
Sensors 2024, 24(17), 5773; https://doi.org/10.3390/s24175773 - 5 Sep 2024
Viewed by 1447
Abstract
The adoption of multiprocessor platforms is becoming commonplace in Internet of Things (IoT) applications to handle large volumes of sensor data while maintaining real-time performance at a reasonable cost and with low power consumption. Partitioned scheduling is a competitive approach to ensuring the temporal constraints of real-time sensor data processing tasks on multiprocessor platforms. However, the problem of partitioning real-time sensor data processing tasks to individual processors is strongly NP-hard, making it crucial to develop efficient partitioning heuristics to achieve high real-time performance. This paper presents an enhanced harmonic partitioned multiprocessor scheduling method for periodic real-time sensor data processing tasks to improve system utilization over the state of the art. Specifically, we introduce a general harmonic index to effectively quantify the harmonicity of a periodic real-time task set. This index is derived by analyzing the variance between the worst-case slack time and the best-case slack time for the lowest-priority task in the task set. Leveraging this harmonic index, we propose two efficient partitioned scheduling methods that optimize system utilization by strategically allocating the workload among processors based on task harmonic relationships. Experiments with randomly synthesized task sets demonstrate that our methods significantly surpass existing approaches in terms of schedulability.
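For reference, the binary notion that the paper's general harmonic index refines: a period set is harmonic when, after sorting, every period divides the next, in which case rate-monotonic scheduling can reach 100% utilization on a single processor. A minimal check:

```python
def is_harmonic(periods):
    """True iff every period divides the next-larger period.
    Harmonic task sets are the classic case where rate-monotonic
    scheduling achieves full utilization on one processor."""
    ps = sorted(periods)
    return all(q % p == 0 for p, q in zip(ps, ps[1:]))

print(is_harmonic([10, 20, 40]))  # True  (10 | 20 | 40)
print(is_harmonic([10, 15, 30]))  # False (15 % 10 != 0)
```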
(This article belongs to the Special Issue Intelligent Wireless Sensor Networks for IoT Applications)

28 pages, 1897 KB  
Article
Bi-Objective, Dynamic, Multiprocessor Open-Shop Scheduling: A Hybrid Scatter Search–Tabu Search Approach
by Tamer F. Abdelmaguid 
Algorithms 2024, 17(8), 371; https://doi.org/10.3390/a17080371 - 21 Aug 2024
Cited by 2 | Viewed by 1494
Abstract
This paper presents a novel, multi-objective scatter search algorithm (MOSS) for a bi-objective, dynamic, multiprocessor open-shop scheduling problem (Bi-DMOSP). The considered objectives are the minimization of the maximum completion time (makespan) and the minimization of the mean weighted flow time. Both are particularly important for improving machines’ utilization and customer satisfaction level in maintenance and healthcare diagnostic systems, in which the studied Bi-DMOSP is mostly encountered. Since the studied problem is NP-hard for both objectives, fast algorithms are needed to fulfill the requirements of real-life circumstances. Previous attempts have included the development of an exact algorithm and two metaheuristic approaches based on the non-dominated sorting genetic algorithm (NSGA-II) and the multi-objective gray wolf optimizer (MOGWO). The exact algorithm is limited to small-sized instances; meanwhile, NSGA-II was found to produce better results compared to MOGWO in both small- and large-sized test instances. The proposed MOSS in this paper attempts to provide more efficient non-dominated solutions for the studied Bi-DMOSP. This is achievable via its hybridization with a novel, bi-objective tabu search approach that utilizes a set of efficient neighborhood search functions. Parameter tuning experiments are conducted first using a subset of small-sized benchmark instances for which the optimal Pareto front solutions are known. Then, detailed computational experiments on small- and large-sized instances are conducted. Comparisons with the previously developed NSGA-II metaheuristic demonstrate the superiority of the proposed MOSS approach for small-sized instances. For large-sized instances, it proves its capability of producing competitive results for instances with low and medium density.
(This article belongs to the Special Issue Scheduling: Algorithms and Real-World Applications)

33 pages, 2760 KB  
Article
Developing a Platform Using Petri Nets and GPenSIM for Simulation of Multiprocessor Scheduling Algorithms
by Daniel Osmundsen Dirdal, Danny Vo, Yuming Feng and Reggie Davidrajuh
Appl. Sci. 2024, 14(13), 5690; https://doi.org/10.3390/app14135690 - 29 Jun 2024
Cited by 1 | Viewed by 1765
Abstract
Efficient multiprocessor scheduling is pivotal in optimizing the performance of parallel computing systems. This paper leverages the power of Petri nets and the tool GPenSIM to model and simulate a variety of multiprocessor scheduling algorithms (the basic algorithms such as first come first serve, shortest job first, and round robin, and more sophisticated schedulers like multi-level feedback queue and Linux’s completely fair scheduler). This paper presents the evaluation of three crucial performance metrics in multiprocessor scheduling (turnaround time, response time, and throughput) under various scheduling algorithms. However, the primary focus of the paper is to develop a robust simulation platform consisting of Petri Modules to facilitate the dynamic representation of concurrent processes, enabling us to explore the real-time interactions and dependencies in a multiprocessor environment; more advanced and newer schedulers can be tested with the simulation platform presented in this paper.

18 pages, 1741 KB  
Article
Real-Time Performance Benchmarking of RISC-V Architecture: Implementation and Verification on an EtherCAT-Based Robotic Control System
by Taeho Yoo and Byoung Wook Choi
Electronics 2024, 13(4), 733; https://doi.org/10.3390/electronics13040733 - 11 Feb 2024
Cited by 14 | Viewed by 6259
Abstract
RISC-V offers a modular technical approach combined with an open, royalty-free instruction set architecture (ISA). However, despite its advantages as a fundamental building block for many embedded systems, the escalating complexity and functional demands of real-time applications have made adhering to response time deadlines challenging. For real-time applications of RISC-V, real-time performance analysis is required across various ISAs. In this paper, we analyze the real-time performance of RISC-V through two real-time approaches based on processor architectures. For real-time operating system (RTOS) applications, we adopted FreeRTOS and evaluated its performance on the HiFive1 Rev B (RISC-V) and STM3240G-EVAL (ARM Cortex-M). For real-time Linux, we used Linux with the PREEMPT_RT patch and tested its performance on the VisionFive 2 (RISC-V), MIO5272 (x86-64), and Raspberry Pi 4 B (ARM Cortex-A). Through these experiments, we examined the response times of each operating system’s real-time mechanisms. Additionally, in the PREEMPT_RT experiments, scheduling latencies were evaluated using cyclictest. These are very important parameters for implementing multi-tasking real-time applications. Finally, to demonstrate the real-time capabilities of RISC-V in practice, we implemented motion control of a six-axis collaborative robot on the VisionFive 2. This implementation provided a comparison of RISC-V’s performance against the x86-64 architecture. Ultimately, the results indicated that RISC-V is feasible for real-time applications. A notable achievement of this research is the first implementation of an EtherCAT master on RISC-V for real-time applications. The successful implementation of the EtherCAT master on RISC-V demonstrates real-time capabilities for a wide range of real-time applications.
(This article belongs to the Section Systems & Control Engineering)

21 pages, 6400 KB  
Article
MASA: Multi-Application Scheduling Algorithm for Heterogeneous Resource Platform
by Quan Peng and Shan Wang
Electronics 2023, 12(19), 4056; https://doi.org/10.3390/electronics12194056 - 27 Sep 2023
Cited by 4 | Viewed by 2679
Abstract
Heterogeneous architecture-based systems-on-chip (SoCs) enable the development of flexible and powerful multifunctional RF systems. In complex and dynamic environments where applications arrive continuously and stochastically, real-time scheduling of multiple applications onto appropriate processor resources is crucial for fully utilizing the heterogeneous SoC’s resource potential. However, heterogeneous resource-scheduling algorithms still face many practical problems, including generalized abstraction of applications and heterogeneous resources, resource allocation, efficient scheduling of multiple applications in complex mission scenarios, and ensuring the effectiveness of scheduling algorithms on real-world applications. Therefore, in this paper, we design the Multi-Application Scheduling Algorithm, named MASA, a two-phase scheduler architecture based on deep reinforcement learning. The algorithm combines a neural-network-scheduler-based task prioritization stage, which dynamically encodes applications, with a heuristic-scheduler-based task mapping stage that solves the processor resource allocation problem. To achieve stable and fast training of the neural scheduler based on the actor–critic strategy, we propose three training optimizations for MASA: reward dynamic alignment (RDA), earlier termination of the initial episodes, and asynchronous multi-agent training. The performance of MASA is tested on classic directed acyclic graph benchmarks and six real-world application datasets. Experimental results show that MASA outperforms other neural scheduling algorithms and heuristics, and ablation experiments illustrate how these training optimizations improve the network’s capacity.
(This article belongs to the Special Issue Progress and Future Development of Real-Time Systems on Chip)

16 pages, 423 KB  
Article
A Comparative Study on the Schedulability of the EDZL Scheduling Algorithm on Multiprocessors
by Sangchul Han, Woojin Paik, Myeong-Cheol Ko and Minkyu Park
Appl. Sci. 2023, 13(18), 10131; https://doi.org/10.3390/app131810131 - 8 Sep 2023
Viewed by 1563
Abstract
As multiprocessor (or multicore) real-time systems become popular, there has been much research on multiprocessor real-time scheduling algorithms. This work evaluates EDZL (Earliest Deadline until Zero Laxity), a scheduling algorithm for real-time multiprocessor systems. First, we compare the performance of EDZL schedulability tests. We measure and compare the ratio of task sets admitted by each test. We also investigate the dominance between EDZL schedulability tests and discover that the union of the demand-based test and the utilization-based test is an effective combination. Second, we compare the schedulability of EDZL and EDF(k). We prove that the union of the EDZL schedulability tests dominates the EDF(k) schedulability test, i.e., the union of the EDZL schedulability tests can admit all task sets admitted by the EDF(k) schedulability test. We also compare the schedulability of EDZL and EDF(k) through scheduling simulation by measuring the ratio of successfully scheduled task sets. EDZL can successfully schedule 7.0% more task sets than EDF(k).
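EDZL's defining rule is small enough to sketch: jobs whose laxity (deadline minus current time minus remaining execution) has dropped to zero are raised to the highest priority, and all other jobs compete by EDF. The dictionary layout below is an assumption for illustration:

```python
def edzl_pick(jobs, t, m):
    """Pick the m jobs to run at time t under EDZL: any job whose
    laxity (deadline - t - remaining) has reached zero gets the
    highest priority; the rest compete by earliest absolute deadline.
    jobs: list of dicts with 'id', 'deadline', 'remaining' (assumed layout)."""
    active = [j for j in jobs if j['remaining'] > 0]
    def key(j):
        laxity = j['deadline'] - t - j['remaining']
        # zero-laxity jobs sort first; ties among the rest break by deadline
        return (0 if laxity <= 0 else 1, j['deadline'])
    return sorted(active, key=key)[:m]

jobs = [
    {'id': 'a', 'deadline': 10, 'remaining': 2},
    {'id': 'b', 'deadline': 8,  'remaining': 3},   # laxity 0 at t = 5
    {'id': 'c', 'deadline': 6,  'remaining': 0},   # already finished
    {'id': 'd', 'deadline': 7,  'remaining': 1},
]
print([j['id'] for j in edzl_pick(jobs, 5, 2)])  # -> ['b', 'd']
```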
(This article belongs to the Special Issue Recent Advances in Hybrid Artificial Intelligence)

20 pages, 3045 KB  
Article
Generating Datasets for Real-Time Scheduling on 5G New Radio
by Xi Jin, Haoxuan Chai, Changqing Xia and Chi Xu
Entropy 2023, 25(9), 1289; https://doi.org/10.3390/e25091289 - 2 Sep 2023
Cited by 2 | Viewed by 2933
Abstract
A 5G system is an advanced solution for industrial wireless motion control. However, because the scheduling model of 5G new radio (NR) is more complicated than those of other wireless networks, existing real-time scheduling algorithms cannot be used to improve the 5G performance. This results in NR resources not being fully available for industrial systems. Supervised learning has been widely used to solve complicated problems, and its advantages have been demonstrated in multiprocessor scheduling. One of the main reasons why supervised learning has not been used for 5G NR scheduling is the lack of training datasets. Therefore, in this paper, we propose two methods based on optimization modulo theories (OMT) and satisfiability modulo theories (SMT) to generate training datasets for 5G NR scheduling. Our OMT-based method contains fewer variables than existing work so that the Z3 solver can find optimal solutions quickly. To further reduce the solution time, we transform the OMT-based method into an SMT-based method and tighten the search space of SMT based on three theorems and an algorithm. Finally, we evaluate the solution time of our proposed methods and use the generated dataset to train a supervised learning model to solve the 5G NR scheduling problem. The evaluation results indicate that our SMT-based method reduces the solution time by 74.7% compared to existing ones, and the supervised learning algorithm achieves better scheduling performance than other polynomial-time algorithms.
(This article belongs to the Special Issue Information Network Mining and Applications)

14 pages, 518 KB  
Article
Contention-Free Scheduling for Single Preemption Multiprocessor Platforms
by Hyeongboo Baek and Jaewoo Lee
Mathematics 2023, 11(16), 3547; https://doi.org/10.3390/math11163547 - 16 Aug 2023
Cited by 2 | Viewed by 1659
Abstract
The Contention-Free (CF) policy has been extensively researched in the realm of real-time multi-processor scheduling due to its wide applicability and the performance enhancement benefits it provides to existing scheduling algorithms. The CF policy improves the feasibility of executing other real-time tasks by assigning the lowest priority to a task at a moment when it is guaranteed not to miss its deadline during the remaining execution time. Despite its effectiveness, existing studies on the CF policy are largely confined to preemptive scheduling, leaving the efficiency and applicability of limited preemption scheduling unexplored. Limited preemption scheduling permits a job to execute to completion with a limited number of preemptions, setting it apart from preemptive scheduling. This type of scheduling is crucial when preemption or migration overheads are either excessively large or unpredictable. In this paper, we introduce SP-CF, a single preemption scheduling approach that incorporates the CF policy. SP-CF allows a preemption only once during each job’s execution, following a priority demotion under the CF policy. We also propose a new schedulability analysis method for SP-CF to determine whether each task is executed in a timely manner and without missing its deadline. Through simulation experiments, we demonstrate that SP-CF can significantly enhance the schedulability of the traditional rate-monotonic algorithm and the earliest deadline first algorithm.
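The CF policy's demotion decision can be approximated by a conservative slack argument: a lowest-priority job on m processors is delayed only while all m cores are busy with other work, so its total delay is bounded by the others' remaining work divided by m. The check below is a sufficient condition in that spirit, not the paper's exact analysis:

```python
def safe_to_demote(job, others, t, m):
    """Conservative check in the spirit of the CF policy: if the job's
    slack (deadline - t - remaining) covers the worst-case delay bound
    sum(others' remaining work) / m, demoting it to the lowest priority
    on m processors cannot cause it to miss its deadline.
    Dict layout ('deadline', 'remaining') is an assumed illustration."""
    slack = job['deadline'] - t - job['remaining']
    worst_delay = sum(o['remaining'] for o in others) / m
    return slack >= worst_delay

# Slack 16 vs. delay bound (12 + 8) / 2 = 10 -> safe to demote:
print(safe_to_demote({'deadline': 20, 'remaining': 4},
                     [{'remaining': 12}, {'remaining': 8}], 0, 2))  # True
```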

14 pages, 2073 KB  
Article
Multi-Core Time-Triggered OCBP-Based Scheduling for Mixed Criticality Periodic Task Systems
by Marian D. Baciu, Eugenia A. Capota, Cristina S. Stângaciu, Daniel-Ioan Curiac and Mihai V. Micea
Sensors 2023, 23(4), 1960; https://doi.org/10.3390/s23041960 - 9 Feb 2023
Cited by 3 | Viewed by 2544
Abstract
Mixed criticality systems are one of the relatively new directions of development for classical real-time systems. As real-time embedded systems become more and more complex, incorporating tasks with different criticality levels, the continuous development of mixed criticality systems is only natural. These systems have entered practically every field where embedded systems are present: avionics, automotive, medical systems, wearable devices, home automation, industry and even the Internet of Things. While scheduling techniques have already been proposed in the literature for different types of mixed criticality systems, the number of papers addressing multiprocessor platforms running in a time-triggered mixed criticality environment is relatively low. Such time-triggered algorithms are easier to certify due to their complete determinism and the isolation between components of different criticalities. Our research centers on the problem of real-time scheduling on multiprocessor platforms for periodic tasks in a time-triggered mixed criticality environment. We propose a partitioned, non-preemptive, table-driven scheduling algorithm, called Partitioned Time-Triggered Own Criticality Based Priority, based on a uniprocessor mixed criticality method. Furthermore, an analysis of the scheduling algorithm is provided in terms of success ratio by comparing it against an event-driven and a time-triggered method.

31 pages, 7260 KB  
Article
An Overview of the nMPRA and nHSE Microarchitectures for Real-Time Applications
by Vasile Gheorghiță Găitan and Ionel Zagan
Sensors 2021, 21(13), 4500; https://doi.org/10.3390/s21134500 - 30 Jun 2021
Cited by 5 | Viewed by 3253
Abstract
In the context of real-time control systems, it has become possible to obtain temporal resolutions of microseconds due to the development of embedded systems and the Internet of Things (IoT), the optimization of the use of processor hardware, and the improvement of architectures and real-time operating systems (RTOSs). All of these factors, together with current technological developments, have led to efficient central processing unit (CPU) time usage, guaranteeing both the predictability of thread execution and the satisfaction of the timing constraints required by real-time systems (RTSs). This is mainly due to time sharing in embedded RTSs and the pseudo-parallel execution of tasks in single-processor and multi-processor systems. The non-deterministic behavior triggered by asynchronous external interrupts and events in general is due to the fact that, for most commercial RTOSs, the execution of the same instruction ends in a variable number of cycles, primarily due to hazards. The software implementation of RTOS-specific mechanisms may lead to significant delays that can affect deadline requirements for some RTSs. The main objective of this paper was the design and deployment of innovative solutions to improve the performance of RTOSs by implementing their functions in hardware. The obtained architectures are intended to provide feasible scheduling, even if the total CPU utilization is close to the maximum limit. The contributions made by the authors will be followed by the validation of a high-performing microarchitecture, which is expected to allow a thread context switching time and event response time of only one clock cycle each. The main purpose of the research presented in this paper is to improve these factors of RTSs, as well as the implementation of the hardware structure used for the static and dynamic scheduling of tasks, for RTOS mechanisms specific to resource sharing and intertask communication.
(This article belongs to the Special Issue Sensors and Real Time Systems for IIoT)
