Article

Optimized Resource Allocation Algorithm for a Deadline-Aware IoT Healthcare Model

1 Computer Science Department, Faculty of Computing and Artificial Intelligence, Sadat City University, Sadat City 32897, Egypt
2 Computer Science & Engineering Department, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
* Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2025, 9(4), 80; https://doi.org/10.3390/bdcc9040080
Submission received: 10 January 2025 / Revised: 20 March 2025 / Accepted: 26 March 2025 / Published: 30 March 2025
(This article belongs to the Special Issue Application of Cloud Computing in Industrial Internet of Things)

Abstract

In recent years, the healthcare market has grown rapidly and must deal with a huge increase in data. Healthcare applications are time-sensitive and need quick responses with minimal delay. Fog Computing (FC) was introduced to achieve this aim, and it can be applied in various areas such as healthcare and smart, intelligent environments. In healthcare applications, some tasks are considered critical and must be processed first; other tasks are time-sensitive and must be processed before their deadlines. In this paper, we propose a Task Classification algorithm based on Deadline and Criticality (TCDC) for serving healthcare applications in a fog environment. It classifies tasks by critical level so that critical tasks are processed first, and it takes into account the task deadline, an essential parameter in real-time applications. The performance of TCDC was compared with several algorithms from the literature. The simulation results showed that the proposed algorithm improves the overall performance in terms of QoS parameters such as makespan (with an improvement ratio of 60% to 70%), resource utilization, and others.

1. Introduction

1.1. Fog Computing

Fog Computing (FC) refers to the strategic placement of fog devices between local sites and cloud/data centers, as depicted in Figure 1 [1]. The topology of FC is essential, as it involves the geographical distribution of nodes that carry out computations, provide network services, and store data. FC operates as a distributed system, with software and data spread across various components. Infrastructure elements such as routers, gateways, and access points are examples of this setup. FC offers several advantages [2,3]:
(a) Privacy: FC reduces data transmission by processing sensitive information at local gateways instead of centralized data centers, thereby protecting user privacy.
(b) Reduced latency: By situating processing devices nearer to user devices, FC decreases latency due to shorter physical distances, leading to much quicker response times than traditional data centers.
(c) Energy efficiency: Instead of keeping sensors constantly active, gateways can act as communication intermediaries, buffering incoming requests while sensors are idle so that they can be processed later; this enhances the energy efficiency of the sensor devices.
(d) Bandwidth: By handling large amounts of raw data at fog nodes, the volume of data sent to the data center is minimized.

1.2. Importance of FC in Healthcare System

Healthcare benefits from the services provided by Cloud Computing (CC) and Fog Computing (FC). However, accessing data stored in the cloud can result in inadequate response times and noticeable delays. In contrast, FC employs distributed architectures to support both cloud and Edge Computing (EC) [4]. It links wireless networks and wearable medical devices, offering an enhanced spatial interface to edge devices while reducing latency [5]. When combined with other technologies, FC optimizes e-health services. It integrates sensors, telecommunications, CC, the Internet of Things (IoT), and big data. Sensors and actuators are the primary components of IoT-based medical devices, which interface with cloud and fog services to generate health data. These real-time data are invaluable for delivering remote care services to patients.
In healthcare systems, Fog Computing (FC) was designed to meet the application requirements for fast response times with low latency [6]. Increased latency can adversely affect urgent care and health tracking services [7]. Healthcare applications generate vast amounts of data, necessitating the use of FC for efficient processing. FC is an ideal choice for healthcare applications because it provides quick response times, addresses latency issues, and manages large data volumes effectively. In e-health, streaming-based transmissions must meet real-time requirements, which can be efficiently managed with FC [8]. Utilizing FC infrastructure enhances elasticity, scalability, and redundancy [9]. However, FC alone cannot resolve all the challenges faced by healthcare applications. The underlying architecture influences issues such as latency, response time, data processing, mobility, scalability, monitoring, and reliability. Figure 2 illustrates the importance of FC in healthcare [10,11].

1.3. IoT in Healthcare

The advent of the Internet of Things (IoT) has transformed the industry by enabling machines and people to communicate and share data across networks without human intervention. Wireless sensor networks exemplify IoT technology. The cloud plays a crucial role in storing the vast amounts of data generated by IoT [12]. There are certain challenges associated with IoT. In critical situations, prompt action is essential; for such cases, Fog Computing (FC) or Edge Computing (EC) is often the preferred resolution [13,14,15,16]. Integrating IoT technology into the healthcare sector presents various difficulties, including data storage, management, security, etc. [17,18]. CC is a potential solution to these challenges.

1.4. Hierarchical Architecture of Fog Computing in Healthcare

A healthcare system architecture relying on IoT/FC has three layers, namely the devices, fog, and cloud layers, as shown in Figure 3.
  • Devices or Sensors Layer: This layer collects healthcare data from devices that send information through Wi-Fi or cellular (4G/LTE) networks. These devices are capable of transmitting data in real time [19,20,21].
  • Fog Layer: This layer is responsible for processing the healthcare data received from a variety of IoT medical devices. It also sends real-time alerts to users regarding the state of their health [21].
  • Cloud Layer: This layer processes tasks that cannot be processed at the fog layer [20,21]. The fog layer can access the patient’s health data from the cloud layer for analysis or forecasting [22].
The main contributions of this paper are listed below:
  • Overcoming latency problems that are related to critical tasks in the healthcare system.
  • Applying a prioritized fashion for distributing tasks to fog resources that are available so that critical tasks have the highest priority.
  • Improving the QoS parameters by taking into consideration the deadline of tasks to meet the user’s requirement.
  • Improving system scalability by handling a growing number of tasks while maintaining scheduling efficiency.
  • Preventing system overload and maintaining stability by ensuring even task distribution across VMs.
The paper is organized as follows: Section 2 presents related work. Section 3 illustrates the problem definition. Section 4 discusses our proposed algorithm. Section 5 displays the simulation results. The performance and simulation results are discussed in Section 6. Finally, the conclusion and future work are presented in Section 7.

2. Related Work

In Fog Computing (FC), resource scheduling and allocation are critical issues that significantly impact system performance. Recently, numerous studies have addressed this topic. A three-layered architecture was proposed in [23], along with an Efficient Resource Allocation (ERA) algorithm designed for resource provisioning in FC. Amal El-Nattat [24] introduced a Deadline-Aware Resource Allocation (DARA) algorithm, which aims to allocate application tasks to available resources in a prioritized manner while adhering to deadline constraints, thereby enhancing system performance in terms of makespan, load balancing, throughput, and resource utilization. A load balancing algorithm known as the Dynamic Resource Allocation Method (DRAM) was proposed in [25] to minimize load-balance variance and optimize average resource utilization. Kumar Behera [26] developed an algorithm called Level Monitoring Task Scheduling (LMTS) specifically for healthcare applications in fog environments. Yan Sun et al. [27] proposed a new FC architecture composed of three layers: terminal, edge, and core. They also introduced a novel resource scheduling scheme that enhances the overall stability of task processing. Simar Singh and Anand Nayyar [4] presented a model for effectively scheduling user tasks on FC resources. Tahani Aladwani [19] proposed a method called Task Classification and Virtual machine categorization (TCTV) to enhance the performance of static task scheduling algorithms based on task importance. Jamil et al. [28] introduced a job scheduling algorithm aimed at optimizing performance in healthcare systems. Khattak and Arshad [29] developed a strategy for load balancing in healthcare applications by categorizing requests into two types: normal and critical. Tang et al. [30] created a smart model for healthcare in fog computing that consists of three layers, along with a novel data-sharing strategy designed to minimize the encryption burden. L. Li and Q. Guan [31] utilized a framework comprising three parallel algorithms to improve task completion ratios. The key features of iFogSim and installation instructions were discussed in [32]. S. Kim [33] designed a resource allocation algorithm for the Social Internet of Things (SFIoT) system. The resource allocation problem in Virtual Fog Computing (VFC) was examined in [34] from a contract-matching integration perspective. In [35], the resource allocation problem in fog environments was addressed to meet hard deadlines. An architecture for deploying IoT-based healthcare systems in cloud environments was proposed in [36]. The research challenges in fog computing were discussed in [37].
In task scheduling, tasks are defined by various parameters such as deadlines, task lengths, critical levels, and execution times. One of the most crucial parameters affecting the system’s Quality of Service (QoS) is the deadline. In healthcare applications, some tasks are sensitive to latency and require real-time processing, making response time a vital factor. The DRAM algorithm [25] aims to maximize resource utilization but fails to consider key parameters like completion time, response time, and deadlines. On the other hand, the MRR algorithm [38] focuses on minimizing task waiting time but overlooks aspects such as makespan, response time, resource utilization, and throughput. The DARA algorithm [24] is designed for latency-sensitive tasks but does not take into account the criticality of the tasks. Previous research has indicated that resource allocation is a major challenge in the task scheduling problem within Fog Computing (FC). Additionally, when combining fog computing with healthcare systems, real-time processing is crucial. Most earlier studies have either prioritized critical tasks for real-time execution or focused on meeting deadline requirements to improve performance.
This research differs from other studies in its integration of task prioritization, dynamic resource allocation, deadline-conscious scheduling, and task migration in a distributed fog computing environment. Collectively, these features enhance real-time system efficiency and ensure that critical tasks, such as healthcare monitoring, are handled in a timely manner, improving operational efficiency and patient outcomes.

3. Problem Definition

3.1. System Model

The proposed model is shown in Figure 4. The architecture is composed of four layers, as follows:
  • User layer: represents the individuals who use the system.
  • IoT Interface layer: consists of a variety of IoT devices.
  • Fog layer: comprises a number of fog servers to process tasks coming from the IoT layer.
  • Cloud layer: features large data centers that can hold patients’ records for long-term analysis.
The TCDC algorithm can be executed at the fog layer. In this layer, a fog device named the fog broker manages resources and makes the scheduling decision for each task.

3.2. Functionalities of Fog Layer in Healthcare

  • Healthcare data require a real-time response with minimal latency; the fog layer can handle this type of data.
  • Huge amounts of medical data are transmitted to the fog layer for compression and formatting.
  • The fog layer also deals with privacy and security matters related to the patients’ private information included in healthcare data.
  • The fog layer decreases network traffic to the Cloud layer.

3.3. Problem Statement

Fog Computing (FC) involves allocating existing resources to consumer requests or activities in a structured order to meet user needs and quality of service (QoS) requirements. The application of fog computing is particularly significant in healthcare systems that depend on time-sensitive applications. Healthcare applications must generally be addressed immediately; however, the many smaller applications within this category differ in priority. While some of these applications can tolerate minor delays, others are highly sensitive to delays. For instance, tasks from health monitoring applications, such as ventilators and electrocardiograms (ECGs), cannot afford to wait, making them critical. In contrast, activities related to hospital management systems can tolerate slight delays, rendering them less critical. Therefore, it is essential to prioritize these activities based on the types of applications they originate from. In our study, we focused on scheduling healthcare tasks in real time by executing critical tasks first within user-defined deadlines.
The fog layer consists of S fog servers (S1, S2, …, Ss). For each fog server, there are M virtual machines (p1, p2, …, pm). Each virtual machine has its own resources and processing speed (si), measured in millions of instructions per second (MIPS). N = {T1, T2, …, Tn} represents the set of tasks executed in the fog layer. The problem can be formulated as follows: How can we assign a set of latency-sensitive tasks, numbered N, to a set of virtual machines, numbered M, without violating user-defined deadline constraints (d) and the critical levels of the tasks? The objective is to minimize task completion time and response time while maximizing resource utilization, all while ensuring that deadlines are met.
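To make this formulation concrete, the following minimal sketch (in Python; the class and field names are illustrative and are not taken from the paper or its simulator) captures the entities involved and the feasibility test EET ≤ d that every assignment must satisfy.

```python
from dataclasses import dataclass

@dataclass
class VM:
    vm_id: int
    speed_mips: float      # processing speed s_i in MIPS
    ram_mb: float          # available memory
    storage_gb: float      # available storage

@dataclass
class Task:
    task_id: int
    length: float          # task length r (instructions)
    deadline: float        # user-defined deadline d (time units)
    critical_level: int    # 1 = highest priority, 3 = lowest
    ram_mb: float          # required memory
    storage_gb: float      # required storage capacity

def expected_execution_time(task: Task, vm: VM) -> float:
    """EET of a task on a VM: task length divided by the VM's speed (Equation (1))."""
    return task.length / vm.speed_mips

def meets_deadline(task: Task, vm: VM) -> bool:
    """A VM is 'valid' for a task only if the EET does not exceed the task's deadline."""
    return expected_execution_time(task, vm) <= task.deadline
```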

4. Proposed Algorithm

The TCDC algorithm aims to enhance the performance of latency-sensitive applications, such as those in healthcare, by considering the criticality (importance level) of each task. The algorithm begins by receiving tasks with deadlines, which are then classified according to their priority (critical level) and resource requirements. Subsequently, virtual machines (VMs) are evaluated based on their capacity to meet these deadlines, and tasks are allocated using a round-robin (RR) approach, as illustrated in Figure 5. RR is used to allocate tasks across VMs, promoting fairness and preventing overload. Finally, tasks are monitored for deadline violations and are either scheduled or migrated to another fog server. Successfully scheduled tasks are removed from the queue, facilitating efficient load balancing and adherence to deadlines. For a clearer understanding, the steps of the TCDC algorithm are discussed below:
1. Start: The process begins.
2. Receive Tasks from Users: The system (Fog Broker) receives tasks from users. Each task comes with a specified deadline.
3. Classify Tasks by Priority Groups: The Fog Broker classifies tasks into three priority levels based on task importance (critical level): Level 1 (Highest Priority), Level 2 (Medium Priority), and Level 3 (Lowest Priority).
4. Classify Groups into Subgroups: Each priority group is further divided into subgroups based on the task’s specific resource requirements (e.g., memory needs, storage, etc.).
5. Estimate Expected Execution Time (EET): The expected processing time (EET) of each task on each VM is calculated using Equation (1):
EET = r / spi
where r is the task length and spi is the VM’s speed. (A worked numeric example is given after this list.)
6. Compare EET with Deadline: The broker compares the estimated processing time (EET) of each task on the various VMs with the task’s deadline.
7. Is the VM Valid? Decision 1: Does the VM meet the deadline constraint (EET ≤ deadline)? If YES, the VM is labeled Valid; if NO, the VM is labeled Invalid.
8. Allocate VM for Task: The broker allocates a valid VM to the task under two conditions: the VM provides the least, yet sufficient, resources for the task’s requirements, and the VM is not the one allocated to the previous task.
9. Check Task Deadline Violation (DIV): Decision 2: Check whether the task’s deadline is violated. If DIV(T) = 1, the deadline is violated and the task is migrated to another fog server; if DIV(T) = 0, the task is scheduled on the allocated VM.
10. Remove Task from Ready Queue: Once the task is scheduled, it is removed from the ready queue.
11. End: The process ends.
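As a worked illustration of steps 5–7 (the numbers are chosen purely for illustration and do not come from the experiments): a task of length r = 40 instructions offered to a VM with speed spi = 20 MIPS has EET = 40/20 = 2 time units; with a deadline d = 5 time units, EET ≤ d holds and the VM is labeled Valid, whereas a VM with spi = 5 MIPS gives EET = 8 > 5 and is labeled Invalid.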

TCDC Algorithm

TCDC is designed to be more scalable, efficiently managing larger task loads by organizing tasks based on priority levels (critical level of each task) and types (the resources required for each task). This organization allows TCDC to handle increasing task volumes more effectively. The round-robin (RR) technique is applied in the TCDC algorithm to prevent any single virtual machine (VM) from becoming overburdened while others remain underutilized. This approach ensures that all VMs are utilized evenly, and that the workload is balanced, thereby improving overall system efficiency.
TCDC algorithm differs from the DARA algorithm [24] in their respective objectives. DARA aims to reduce costs while meeting deadlines, but it may not achieve the highest throughput or performance compared to the TCDC algorithm. In contrast, TCDC seeks to optimize overall system performance—considering makespan, response time, and cost—while adhering to deadline constraints. It focuses on providing scalable and high-performance task scheduling. Additionally, the two algorithms differ in their approaches to task classification and grouping. For instance, DARA classifies tasks solely based on their resource requirements, without considering the criticality of the tasks themselves. This limitation makes DARA less suitable for real-time healthcare applications. In resource-constrained environments—such as cost-sensitive, low-load systems—DARA is more appropriate, as it prioritizes minimizing costs over maximizing performance. Conversely, TCDC is ideal for high-demand environments where efficiency and throughput take precedence over cost minimization. In the healthcare sector, for example, DARA is suitable for cost-sensitive systems like telemedicine in low-budget areas, while TCDC is better suited for high-performance applications such as real-time patient monitoring and diagnostics. In summary, DARA focuses on cost-effectiveness and deadline adherence, whereas TCDC emphasizes performance optimization (including makespan, response time, and cost), along with scalability and efficient task scheduling. The pseudocode for the TCDC algorithm is illustrated as Algorithm 1.
Algorithm 1: TCDC Algorithm
Input: A set of independent real-time tasks, each with a deadline.
Output: A task-to-VM mapping that minimizes the makespan, response time, and total cost under deadline constraints.
 1- Classify tasks according to the task’s critical level.
 2- For each group, categorize tasks into subgroups according to the task type (cat1: RAM, cat2: Storage).
 3- In each subgroup, sort tasks in descending order based on the task’s requirement.
 4- last VM = 0
 5- For j = 1 to n        // n is the number of tasks in each group
 6-   // Check VM state:
      For i = 1 to m
         Calculate EETj
         If EETj ≤ dj then
            VMi state = valid
            count++
         Else
            VMi state = invalid
         End if
      End for
 7-   If count > 0 then
         DIV = 0
         If cat = cat1 and RAMj ≤ min{RAMi} and VMi ≠ last VM, ∀ i ∈ VM, j ∈ T then
            Map task j to VMi; last VM = VMi
         Else if cat = cat2 and Sj ≤ min{Si} and VMi ≠ last VM, ∀ i ∈ VM, j ∈ T then
            Map task j to VMi; last VM = VMi
         End if
      Else
         DIV = 1
         Migrate task to another fog server
      End if
   End for        // all tasks are mapped
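The following Python sketch mirrors the mapping loop of Algorithm 1. It is not the authors’ simulator (which was implemented in Visual C#); it reuses the hypothetical Task/VM classes and the meets_deadline helper sketched in Section 3.3, and both the rule used to decide a task’s category (cat1 vs. cat2) and the migrate stub are assumptions introduced only for illustration.

```python
def tcdc_schedule(tasks, vms, migrate):
    """Sketch of Algorithm 1: classify tasks, then map each one to a valid VM."""
    # Steps 1-2: group tasks by critical level, then by resource category.
    groups = {}
    for t in tasks:
        cat = "ram" if t.ram_mb >= t.storage_gb else "storage"   # assumed categorization rule
        groups.setdefault((t.critical_level, cat), []).append(t)

    last_vm = None
    for (level, cat) in sorted(groups):                          # level 1 (most critical) first
        requirement = (lambda t: t.ram_mb) if cat == "ram" else (lambda t: t.storage_gb)
        # Step 3: sort each subgroup in descending order of its requirement.
        for task in sorted(groups[(level, cat)], key=requirement, reverse=True):
            # Steps 5-7: a VM is valid only if its EET meets the task's deadline.
            valid = [vm for vm in vms if meets_deadline(task, vm)]
            # Step 8: the smallest sufficient resource, never the VM used for the previous task.
            if cat == "ram":
                candidates = sorted((vm for vm in valid
                                     if vm.ram_mb >= task.ram_mb and vm is not last_vm),
                                    key=lambda vm: vm.ram_mb)
            else:
                candidates = sorted((vm for vm in valid
                                     if vm.storage_gb >= task.storage_gb and vm is not last_vm),
                                    key=lambda vm: vm.storage_gb)
            if candidates:                                       # DIV = 0: schedule the task
                last_vm = candidates[0]
                print(f"task {task.task_id} -> VM {last_vm.vm_id}")
            else:                                                # DIV = 1: deadline would be violated
                migrate(task)                                    # hand the task to another fog server
```

A scheduled task would also be removed from the ready queue (step 10 of the flowchart); that bookkeeping is omitted here for brevity.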

5. Simulation and Experimental Results

5.1. Simulation Environment

A series of experiments has been conducted to evaluate the performance of the TCDC algorithm. To simulate the fog environment, a simulator was developed using Visual C# .NET 4.0 on a machine equipped with an Intel(R) Core(TM) i3 CPU M 350 @ 2.27 GHz, 8.00 GB of RAM, and a 64-bit Windows 10 operating system. Each fog node comprises multiple virtual machines (VMs), each with distinct properties and varying processing capabilities, including CPU speed, memory, and capacity. The capabilities of the assumed fog nodes are detailed in Table 1. Simultaneously, each task is generated with different characteristics compared to other tasks, such as task length, arrival time, required resources, etc., as illustrated in Table 2.
In our experiment, six data sets were utilized, varying in size from 500 to 3000 tasks. The tasks in these data sets were randomly generated within the range illustrated in Table 2. Our experiment encompassed two types of tasks: capacity tasks and memory tasks.
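The paper states only that tasks were generated randomly within the ranges of Table 2; the snippet below is one plausible way to reproduce such data sets, assuming uniform draws (the actual distribution and generator used by the authors are not specified).

```python
import random

def generate_tasks(n, seed=0):
    """Generate n synthetic tasks within the ranges of Table 2 (uniform draws assumed)."""
    rng = random.Random(seed)
    tasks = []
    for task_id in range(n):
        tasks.append({
            "id": task_id,
            "critical_level": rng.randint(1, 3),   # 1 = most critical
            "arrival_time": rng.uniform(0, 20),    # time units
            "deadline": rng.uniform(2, 10),        # time units
            "length": rng.randint(5, 50),          # instructions
            "capacity_gb": rng.randint(5, 50),     # required storage capacity
            "memory_mb": rng.randint(5, 50),       # required memory
        })
    return tasks

# One data set per experiment size used in the paper.
datasets = {n: generate_tasks(n) for n in (500, 1000, 1500, 2000, 2500, 3000)}
```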

5.2. Performance Evaluation Parameters

The performance of the TCDC algorithm is evaluated in a fog environment by measuring a set of evaluation parameters.

5.2.1. Makespan

The completion time of the last executed task on VMs is called makespan or schedule length. It should be minimized to improve the performance. Its calculation is illustrated in Equation (2) [39]:
Makespan = Max [CT(Pi)]
where CT(Pi) is the completion time on VM Pi, i ∈ VMs (1 ≤ i ≤ m).

5.2.2. Response Time

RT is the time duration between the task’s submission and its completion [38]. RT is calculated using Equation (3) [40]:
RT = CTj − SBj
where CTj and SBj are the completion time and submission time, respectively, of task “j”.
For one VM, the average response time for all tasks can be calculated by Equation (4).
Avg. RT = (Σ_{j=1}^{n} RTj) / n
For all VMs, the mean of the total average response time is calculated by Equation (5):
Mean of total Avg. RT = (Σ_{i=1}^{m} Avg. RTi) / m
where n is the number of tasks in VM and m is the number of VMs.

5.2.3. Throughput

Throughput indicates the efficiency of the scheduling algorithm; it is the number of tasks completed successfully per unit of time. Its calculation is given in Equation (6) [40]:
Throughput = n / Makespan
where n is the total number of tasks, and makespan is calculated as mentioned before.

5.2.4. Resource Utilization (RU)

RU is the usage rate of the resource units available on the computing nodes. To improve performance, it should be maximized as much as possible. It is calculated using Equation (7) [40]:
RU = (Σ_{i=1}^{m} makespan_i) / (m × max_makespan)
where max_makespan is the maximum completion time among all VMs, as expressed in Equation (8):
max_makespan = max{makespan_i}

5.2.5. Load Balancing

LB is the concept of distributing the load of processing a set of tasks over a set of resources as equally as possible to achieve efficient processing. Lower values indicate better performance. The LB calculation is illustrated in Equation (9) [40]:
Load Balancing = (Σ_{i=1}^{m} makespan_i) / m
where makespani is the makespan on VM “i” and m is the number of VMs.

5.2.6. Total Cost

To calculate the processing cost for task Tj on a VM Pi, the usage cost of VM’s resources must be calculated first. Two types of costs are considered in this work, storage cost and memory cost. Equation (10) [41] is used to calculate the cost of task “j” on a VM “i”:
Cost(Tji) = cr(Tji) + cs(Tji)
where cr(Tji) and cs(Tji) denote the RAM cost and storage cost, respectively, of task “j” on VM “i”.
Equations (11) and (12) define the calculation of RAM and storage cost, respectively:
cr(Tji) = c1 × RAM(Tji)
cs(Tji) = c2 × S(Tji)
where c1 and c2 represent the usage cost of RAM and storage per data unit, respectively, in VM Pi. RAM(Tji) and S(Tji) are the required RAM and storage for task Tj, respectively.
Finally, Equation (13) can be used to calculate the total cost for all tasks processed in the system:
Total cost = Σ_{i=1}^{m} Σ_{j=1}^{n} Cost(Tji)
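As a compact illustration, the sketch below computes the metrics of Equations (2)–(9) and (13) from raw simulation output; the function and argument names are illustrative and do not correspond to the authors’ implementation.

```python
def qos_metrics(ct_per_vm, rt_per_vm, costs):
    """Compute the QoS metrics from per-VM simulation output.

    ct_per_vm : per-VM makespans (completion time of the last task on each VM)
    rt_per_vm : list of per-task response-time lists, one inner list per VM
    costs     : flat list of per-task processing costs (Equation (10))
    """
    m = len(ct_per_vm)                                           # number of VMs
    n = sum(len(rts) for rts in rt_per_vm)                       # number of tasks

    makespan = max(ct_per_vm)                                    # Equation (2)
    avg_rt_per_vm = [sum(rts) / len(rts) for rts in rt_per_vm]   # Equation (4)
    mean_avg_rt = sum(avg_rt_per_vm) / m                         # Equation (5)
    throughput = n / makespan                                    # Equation (6)
    resource_utilization = sum(ct_per_vm) / (m * makespan)       # Equations (7)-(8)
    load_balancing = sum(ct_per_vm) / m                          # Equation (9)
    total_cost = sum(costs)                                      # Equation (13)

    return {
        "makespan": makespan,
        "mean_avg_response_time": mean_avg_rt,
        "throughput": throughput,
        "resource_utilization": resource_utilization,
        "load_balancing": load_balancing,
        "total_cost": total_cost,
    }
```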
The TCDC algorithm efficiently schedules tasks in fog computing by prioritizing tasks, estimating execution times, and allocating VMs based on resource needs. It ensures high-priority tasks are processed first, optimizes resource use, and manages deadlines. If a task cannot meet its deadline, it is migrated to another fog server. This balance between performance and resource efficiency makes it well suited for real-time, resource-constrained environments like IoT, healthcare, and industrial automation.

5.3. Experimental Results

Several comparison parameters are measured to evaluate the TCDC, DARA, DRAM, and MRR algorithms, with the number of VMs varying from 6 to 10 and the number of tasks from 500 to 3000.

5.3.1. Results on 6 VMs

Table 3, Table 4 and Table 5 list the comparison for 6 VMs. Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 illustrate the results.

5.3.2. Results on 8 VMs

Table 6, Table 7 and Table 8 list the comparison in the case of 8 VMs. Figure 12, Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17 illustrate the results.

5.3.3. Results on 10 VMs

Table 9, Table 10 and Table 11 list the comparison in the case of 10 VMs. Figure 18, Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23 illustrate the results.
From the results, we can see that TCDC offers the best throughput and resource utilization, making it ideal for high-performance environments, though it has high costs and scalability issues. DARA is the most cost-effective, particularly for resource-constrained systems, but suffers from lower throughput and poor scalability, making it less suitable for large-scale systems. DRAM strikes a balance between cost and performance but faces load balancing issues at higher task volumes, making it suitable for moderate-scale systems. MRR is the most cost-efficient for small task loads but struggles with scalability, limiting its effectiveness in larger systems.

6. Results and Performance Discussion

6.1. Makespan and Response Time

TCDC outperforms other algorithms in overall efficiency, achieving the lowest makespan and average response time across all task loads. It is highly scalable, with its makespan growing slowly as the task load increases, making it ideal for larger systems. In contrast, DARA and DRAM show deteriorating performance as the task load increases, suggesting scalability issues. MRR performs better than DARA and DRAM for small task sizes, but its efficiency drops significantly as the task load grows, making it the least effective for large loads. TCDC’s stable performance under increasing load makes it the best choice for high-demand environments focused on efficiency.

6.2. Throughput and Resource Utilization

In terms of throughput, TCDC is the most effective at processing tasks, with throughput increasing steadily as the task load grows, indicating efficient resource use and high task processing rate as the system scales. DARA shows modest throughput growth, suggesting it is less efficient at handling increased task loads. DRAM has the lowest throughput across all task loads, performing poorly compared to TCDC and DARA despite some increase with higher task loads. MRR maintains stable throughput, but it remains significantly lower than TCDC, especially as task numbers increase.
In terms of utilization, TCDC outperforms all other algorithms in both resource utilization and throughput across all task sizes, efficiently using system resources without overloading or underutilizing them. DARA and DRAM have lower resource utilization, which improves slightly with increasing task size but remains less efficient than TCDC. MRR has the highest resource utilization at smaller task sizes, but it becomes more variable and slightly decreases as task load increases. Finally, TCDC is the best choice for scalable, high-efficiency applications. Both DARA and DRAM are less efficient, with DRAM performing poorly in both throughput and resource utilization, while DARA shows modest improvement with higher task loads but still lags behind TCDC.

6.3. Load Balancing and Total Cost

For load balancing, TCDC performs better than other algorithms in load balancing, though its values increase with task load, indicating slightly reduced load balancing as the task size grows. DARA has the lowest load balancing values, particularly for smaller task sizes, and its performance worsens as task size increases. This suggests that DARA struggles more than others in distributing the workload evenly, particularly as the task volume increases. DRAM exhibits poor load balancing, with values similar to or worse than TCDC, especially for larger tasks. This indicates that DRAM is less efficient in balancing the load compared to TCDC and DARA. MRR has the highest load balancing values across all task sizes, which indicates that MRR faces substantial challenges in balancing the load, especially as the task size increases.
For total cost, TCDC consistently has the highest total cost across all task sizes, reflecting its higher resource demand to achieve superior throughput performance. Its cost increases steadily with the number of tasks. DARA is more cost-effective, with significantly lower total costs, especially in resource-constrained environments. For example, at 500 tasks with 6 VMs, DARA’s cost is much lower than TCDC’s. This trend is observed across all task sizes, indicating that DARA is a more economical choice, albeit with trade-offs in performance (throughput, resource utilization). MRR has the lowest total cost for small task sizes, but its cost rises sharply as task volume increases, making it less efficient at larger scales. Therefore, while MRR is cost-efficient for small tasks, it struggles to scale effectively.
Although TCDC has the highest total cost, its strong performance in throughput and resource utilization may justify the expense in high-performance environments where efficiency is prioritized over cost. On the other hand, DARA being more cost-effective is ideal for resource-constrained environments but sacrifices performance in throughput and load balancing. DRAM, while consuming more resources than DARA, remains cheaper than TCDC, likely due to lower resource utilization efficiency. It may be suitable for environments where moderate cost savings are preferred, even with some performance trade-offs. MRR is ideal for small-scale systems focused on cost minimization, but unsuitable for larger systems due to poor scalability and inefficiency in throughput and load balancing.

6.4. Overall QoS Analysis

Based on the simulation results, Table 12 summarizes how each algorithm performs in terms of QoS.
TCDC outperforms other algorithms due to its tailored approach to optimizing task scheduling by effectively balancing key factors such as resource use, task prioritization, and time efficiency. Here are the reasons for TCDC’s superior performance:
  • Effective Resource Use: TCDC consistently achieves high resource utilization (over 90%), ensuring that resources are used efficiently with minimal idle time, leading to quicker task completion and increased throughput.
  • Reduced Makespan: TCDC is adept at minimizing the total time needed to finish all tasks (makespan) by strategically scheduling tasks to cut down on idle time and overlapping computations, which enhances system efficiency.
  • Quick Response Time: TCDC processes tasks rapidly after they are submitted, resulting in the lowest average response time across various task loads. This efficiency is due to its effective task prioritization and avoidance of bottlenecks.
  • High Throughput: By managing task execution and resource allocation effectively, TCDC achieves the highest throughput, completing more tasks per time unit than its competitors. This scalability makes it ideal for systems facing increasing workloads.
  • Task Prioritization and Scheduling Techniques: TCDC utilizes deadline-aware, priority-based scheduling to ensure tasks are executed in a manner that minimizes delays and maximizes resource efficiency. This gives it an advantage over algorithms like DARA and DRAM, which struggle with uneven task distribution and inefficient resource use.
  • Scalability: TCDC performs well as task counts increase, maintaining low makespan and response times even with growing workloads. This indicates a robust design capable of managing larger and more complex tasks without a drop in performance.
Although TCDC may have a higher total cost, suggesting it uses more computational or operational resources, its exceptional performance in key quality of service metrics—such as makespan, response time, resource utilization, and throughput—demonstrates its optimization for fast, efficient, and scalable task execution. These benefits position TCDC as the top-performing algorithm, particularly in situations where speed, efficiency, and high resource utilization are essential.

7. Conclusions and Future Work

To bring cloud services closer to customers, FC acts as a mediator between the cloud and end users, which makes it well suited for healthcare applications. Resource allocation is a crucial topic in FC that needs to be researched, since ineffective task distribution lowers system performance. In this study, we put forward the TCDC algorithm to serve healthcare applications that require real-time responses while giving priority to urgent tasks so that they are processed first. The foundation of TCDC is the effective distribution of application tasks, constrained by deadline and critical level, over the system resources. It can be executed at the fog layer. TCDC was compared with several existing resource allocation methods, including DRAM, MRR, and DARA. According to the results, TCDC offers the best performance in terms of makespan, response time, resource utilization, and throughput. It is suitable for environments requiring high efficiency, scalability, and quick task processing across varying task sizes, although it may require more computational resources to handle its scheduling mechanisms. DARA and DRAM offer moderate efficiency but are not as efficient as TCDC, especially as the task load increases. DARA is useful for resource conservation but has lower scalability, making it best for smaller or less demanding systems where minimizing cost is critical. DRAM is suitable for tasks whose completion is not time-critical, but its inefficiency limits its application in larger systems. Both DARA and DRAM may be more suited to scenarios where resource constraints or simpler scheduling mechanisms are prioritized over optimal performance. MRR is cost-effective for small task sizes but becomes less efficient as the system scales; it may be suitable for small-scale systems or situations where minimizing cost is paramount and minimal resource usage matters more than achieving the highest throughput. The results demonstrated that, as the number of tasks and virtual machines (VMs) increased, TCDC’s makespan improvement ratio over DRAM and MRR reached roughly 67.68% and 60.34%, respectively. Ultimately, the best algorithm depends on the specific needs of the system, whether the priority is cost, performance, or a balance of both. If resource efficiency and throughput are critical, TCDC is the most appropriate choice, while DARA and MRR may be considered based on specific resource or task constraints. DRAM might be less suitable due to its overall inefficiency.
Future research can examine additional performance metrics that impact system performance, such as energy consumption. To improve the outcomes of load balancing and the overall cost of using resources to increase the system’s overall performance, we can additionally take into account additional QoS limitations.

Author Contributions

The manuscript was written by A.E.-N.; N.A.E.-B., A.E.-S. and S.E. reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data will be available on request.

Conflicts of Interest

The authors have no competing interests to declare that are relevant to the content of this article.

Nomenclature

Symbol | Description
S | Total number of fog servers.
S1, S2, ..., Ss | Set of fog servers.
M | Total number of virtual machines per server.
p1, p2, ..., pm | Set of virtual machines per server.
si | Processing speed of virtual machine i (MIPS).
N | Total number of tasks to be executed.
T1, T2, ..., Tn | Set of tasks to be assigned for execution.
d | Task deadline (maximum allowed completion time).

References

  1. Gia, T.N.; Jiang, M.; Rahmani, A.-M.; Westerlund, T.; Liljeberg, P.; Tenhunen, H. Fog computing in healthcare Internet of Things: A case study on ECG feature extraction. In Proceedings of the 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, Washington, DC, USA, 17–21 March 2015. [Google Scholar]
  2. Tanwar, S.; Tyagi, S.; Kumar, N. (Eds.) Security and Privacy of Electronics Healthcare Records (IET Book Series on E-Health Technologies); The Institution of Engineering and Technology: London, UK, 2019; pp. 1–450. [Google Scholar]
  3. Subbaraj, S.; Thiyagarajan, R. Performance oriented task-resource mapping and scheduling in fog computing environment. Cogn. Syst. 2021, 70, 40–50. [Google Scholar]
  4. Singh, S.P.; Nayyar, A.; Kaur, H.; Singla, A. Dynamic task scheduling using balanced VM allocation policy for fog computing platforms. Scalable Comput. Pract. Exp. 2019, 20, 433–456. [Google Scholar] [CrossRef]
  5. Wadhwa, H.; Aron, R. Optimized task scheduling and preemption for distributed resource management in fog-assisted IoT environment. J. Supercomput. 2023, 79, 2212–2250. [Google Scholar]
  6. Jamil, B.; Ijaz, H.; Shojafar, M.; Buyya, R. Resource Allocation and Task Scheduling in Fog Computing and Internet of Everything Environments: A Taxonomy, Review, and Future Directions. ACM Comput. Surv. 2022, 54, 1–38. [Google Scholar] [CrossRef]
  7. Khan, S.; Shah, I.A.; Nadeem, M.F.; Jan, S.; Whangbo, T.; Ahmad, S. Optimal Resource Allocation and Task Scheduling in Fog Computing for Internet of Medical Things Application. Hum.-Centric Comput. Inf. Sci. 2023, 13, 56. [Google Scholar]
  8. Thilakarathne, N.N.; Muneeswari, G.; Parthasarathy, V.; Alassery, F.; Hamam, H.; Mahendran, R.K.; Shafiq, M. Federated learning for privacy preserved medical Internet of Things. Intell. Autom. Soft Comput. 2022, 33, 157–172. [Google Scholar]
  9. Almaiah, M.A.; Hajjej, F.; Ali, A.; Pasha, M.F.; Almomani, O. A novel hybrid trustworthy decentralized authentication and data preservation model for digital healthcare IoT based CPS. Sensors 2022, 22, 1448. [Google Scholar] [CrossRef]
  10. Hu, P.; Dhelim, S.; Ning, H.; Qiu, T. Survey on fog computing: Architecture, key technologies, applications and open issues. J. Netw. Comput. Appl. 2017, 98, 27–42. [Google Scholar]
  11. Atlam, H.F.; Walters, R.J.; Wills, G.B. Fog computing and the internet of things: A review. Big Data Cogn. Comput. 2018, 2, 10. [Google Scholar] [CrossRef]
  12. Gupta, M.; Singla, N. Learner to Advanced: Big Data Journey. In Handbook of IoT and Big Data; CRC Press: Boca Raton, FL, USA, 2019; p. 187. [Google Scholar]
  13. Farahani, B.; Firouzi, F.; Chang, V.; Badaroglu, M.; Constant, N.; Mankodiya, K. Towards fog-driven IoT eHealth: Promises and challenges of IoT in medicine and healthcare. Future Gener. Comput. Syst. 2018, 78, 659–676. [Google Scholar]
  14. Khan, A.; Abbas, A.; Khattak, H.A.; Rehman, F.; Din, I.U.; Ali, S. Effective Task Scheduling in Critical Fog Applications. Sci. Program. 2022, 2022, 9208066. [Google Scholar] [CrossRef]
  15. Rghioui, A.; Lloret, J.; Harane, M.; Oumnad, A. A smart glucose monitoring system for diabetic patient. Electronics 2020, 9, 678. [Google Scholar] [CrossRef]
  16. Guevara, J.C.; da Fonseca, N.L.S. Task scheduling in cloud-fog computing systems. Peer-to-Peer Netw. Appl. 2021, 14, 962–977. [Google Scholar] [CrossRef]
  17. Tanwar, S.; Obaidat, M.S.; Tyagi, S.; Kumar, N. Online Signature-Based Biometric Recognition. In Biometric-Based Physical and Cybersecurity Systems; Springer: Cham, Switzerland, 2019; pp. 255–285. [Google Scholar]
  18. Tanwar, S.; Tyagi, S.; Kumar, N.; Obaidat, M.S. Ethical, Legal, and Social Implications of Biometric Technologies. In Biometric-Based Physical and Cybersecurity Systems; Springer: Cham, Switzerland, 2019; pp. 535–569. [Google Scholar]
  19. Aladwani, T. Scheduling IoT Healthcare Tasks in Fog Computing Based on their Importance. Procedia Comput. Sci. 2019, 163, 560–569. [Google Scholar]
  20. Stavrinides, G.L.; Karatza, H.D. A Hybrid Approach to Scheduling Real-Time Iot Workflows in Fog and Cloud Environments. Multimed. Tools Appl. 2019, 78, 24639–24655. [Google Scholar]
  21. Kopras, B.; Bossy, B.; Idzikowski, F.; Kryszkiewicz, P.; Bogucka, H. Task allocation for energy optimization in fog computing networks with latency constraints. IEEE Trans. Commun. 2022, 70, 8229–8243. [Google Scholar]
  22. Bansal, S.; Aggarwal, H.; Aggarwal, M. A systematic review of task scheduling approaches in fog computing. Trans. Emerg. Telecommun. Technol. 2022, 33, e4523. [Google Scholar]
  23. Agarwal, S.; Yadav, S.; Yadav, A. An Efficient Architecture and Algorithm for Resource Provisioning in Fog Computing. Int. J. Inf. Eng. Electron. Bus. 2016, 8, 48. [Google Scholar] [CrossRef]
  24. EL-Nattat, A.; El-Bahnasawy, N.A.; El-Sayed, A.; Elkazzaz, S. Performance Enhancement of Fog Environment with Deadline Aware Resource Allocation Algorithm. Menoufia J. Electron. Eng. Res. 2022, 31, 107–119. [Google Scholar]
  25. Xu, X.; Fu, S.; Cai, Q.; Tian, W.; Liu, W.; Sun, X.; Liu, A.X. Dynamic Resource Allocation for Load Balancing in Fog Environment. Wirel. Commun. Mob. Comput. 2018, 2018, 6421607. [Google Scholar] [CrossRef]
  26. Behera, R.K. An Efficient Fog Layer Task Scheduling Algorithm for Multi-Tiered IoT Healthcare Systems. Int. J. Reliab. Qual. E-Healthc. 2022, 11, 1–11. [Google Scholar]
  27. Sun, Y.; Lin, F.; Xu, H. Multi-objective Optimization of Resource Scheduling in Fog Computing Using an Improved NSGA-II. Wirel. Pers. Commun. 2018, 102, 1369–1385. [Google Scholar] [CrossRef]
  28. Jamil, B.; Shojafar, M.; Ahmed, I.; Ullah, A.; Munir, K.; Ijaz, H. A job scheduling algorithm for delay and performance optimization in fog computing. Concurr. Comput. Pract. Exp. 2020, 32, e5581. [Google Scholar] [CrossRef]
  29. Khattak, H.A.; Arshad, H.; Ahmed, G.; Jabbar, S.; Sharif, A.M.; Khalid, S. Utilization and load balancing in fog servers for health applications. EURASIP J. Wirel. Commun. Netw. 2019, 2019, 91. [Google Scholar] [CrossRef]
  30. Tang, W.; Zhang, K.; Zhang, D.; Ren, J.; Zhang, Y.; Shen, X. Fog-enabled smart health: Toward cooperative and secure healthcare service provision. IEEE Commun. Mag. 2019, 57, 42–48. [Google Scholar] [CrossRef]
  31. Li, L.; Guan, Q.; Jin, L.; Guo, M. Resource allocation and task offloading for heterogeneous real-time tasks with uncertain duration time in a fog queueing system. IEEE Access 2019, 7, 9912–9925. [Google Scholar] [CrossRef]
  32. Mahmud, R.; Buyya, R. Modeling and simulation of fog and edge computing environments using ifogsim toolkit. In Fog and Edge Computing: Principles and Paradigms; Wiley Telecom: Hoboken, NJ, USA, 2019; pp. 433–465. [Google Scholar]
  33. Kim, S. Novel resource allocation algorithms for the social internet of things based fog computing paradigm. Wirel. Commun. Mob. Comput. 2019, 2019, 3065438. [Google Scholar] [CrossRef]
  34. Zhou, Z.; Liu, P.; Feng, J.; Zhang, Y.; Mumtaz, S.; Rodriguez, J. Computation resource allocation and task assignment optimization in vehicular fog computing: A contract-matching approach. IEEE Trans. Veh. Technol. 2019, 68, 3113–3125. [Google Scholar] [CrossRef]
  35. Wu, C.-G.; Wang, L. A Deadline-Aware Estimation of Distribution Algorithm for Resource Scheduling in Fog Computing Systems. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019. [Google Scholar]
  36. Thota, C.; Manogaran, G.; Sundarasekar, R.; Varatharajan, R.; Priyan, M.K. Centralized Fog Computing Security Platform for IoT and Cloud in Healthcare System; IGI Global: Hershey, PA, USA, 2018. [Google Scholar]
  37. Balamurugan, S.; Jeevitha, L.; Anupriya, A.; Shanker, G.K. Fog Computing: Synergizing Cloud, Big Data and IoT-Strengths, Weaknesses, Opportunities and Threats (SWOT) Analysis. Int. Res. J. Eng. Technol. 2016, 3, 726–731. [Google Scholar]
  38. Khurma, R.A.; Al Harahsheh, H.; Sharieh, A. Task Scheduling Algorithm in Cloud Computing Based on Modified Round Robin Algorithm. J. Theor. Appl. Inf. Technol. 2018, 96, 5869–5888. [Google Scholar]
  39. Madni, S.H.H.; Muhammad, S.A.L.; Coulibaly, Y.; Abdulhamid, S.M. An appraisal of meta-heuristic resource allocation techniques for IaaS cloud. Indian J. Sci. Technol. 2016, 9, 1–14. [Google Scholar] [CrossRef]
  40. Panda, S.K.; Gupta, I.; Jana, P.K. Task scheduling algorithms for multi-cloud systems: Allocation-aware approach. Inf. Syst. Front. 2019, 21, 241–259. [Google Scholar]
  41. Alworafi, M.A.; Dhari, A.; Al-Hashmi, A.A.; Suresha; Darem, A.B. Cost-Aware Task Scheduling in Cloud Computing Environment. Int. J. Comput. Netw. Inf. Secur. 2017, 9, 52–59. [Google Scholar] [CrossRef]
Figure 1. Fog nodes lie between cloud and ground [1].
Figure 2. Importance of FC in healthcare [10,11].
Figure 3. The architecture of healthcare system based on IoT/FC.
Figure 4. Overview of the proposed model.
Figure 5. RR Task Scheduling [39].
Figure 6. Results of Total Makespan with 6 VMs.
Figure 7. Results of Average Response Time with 6 VMs.
Figure 8. Results of Resource Utilization with 6 VMs.
Figure 9. Results of Throughput with 6 VMs.
Figure 10. Results of Load Balancing with 6 VMs.
Figure 11. Results of Total Cost with 6 VMs.
Figure 12. Results of Total Makespan with 8 VMs.
Figure 13. Results of Average Response Time with 8 VMs.
Figure 14. Results of Resource Utilization with 8 VMs.
Figure 15. Results of Throughput with 8 VMs.
Figure 16. Results of Load Balancing with 8 VMs.
Figure 17. Results of Total Cost with 8 VMs.
Figure 18. Results of Total Makespan with 10 VMs.
Figure 19. Results of Average Response Time with 10 VMs.
Figure 20. Results of Resource Utilization with 10 VMs.
Figure 21. Results of Throughput with 10 VMs.
Figure 22. Results of Load Balancing with 10 VMs.
Figure 23. Results of Total Cost with 10 VMs.
Table 1. Characteristics of fog nodes.
Parameter | Value
Number of fog nodes | 3, 4, 5
Number of VMs in each node | 2
Computation power of VM (MIPS) | [10, 200]
Storage capacity of VM (GB) | 5000–15,000
Memory of VM (MB) | 5000–15,000
Memory Usage Cost ($/MB) | 0.1–0.5
Storage Usage Cost ($/MB) | 0.1–0.5
Table 2. Attributes of Tasks.
Attribute | Value
Number of tasks | {500, 1000, 1500, 2000, 2500, 3000}
Critical level | [1, 3]
Arrival Time (Time Unit) | [0, 20]
Deadline (Time Unit) | [2, 10]
Length (Instruction) | [5, 50]
Required Capacity (GB) | [5, 50]
Required Memory (MB) | [5, 50]
Table 3. Comparison results on 6 VMs.
No. of Tasks | Total Makespan (Time Unit): TCDC / DARA / DRAM / MRR | Average Response Time (Time Unit): TCDC / DARA / DRAM / MRR
500 | 506 / 1613 / 2405 / 966 | 3.989 / 12.50 / 15.095 / 7.706
1000 | 1001 / 2066 / 3132 / 2007 | 4.262 / 8.067 / 9.753 / 8.015
1500 | 1570 / 3315 / 3955 / 2926 | 4.060 / 8.221 / 8.331 / 7.774
2000 | 2003 / 3542 / 4250 / 3963 | 4.064 / 6.667 / 7.327 / 7.894
2500 | 2084 / 2902 / 4733 / 4981 | 4.316 / 4.62 / 6.909 / 7.975
3000 | 2071 / 4351 / 4857 / 5935 | 4.002 / 5.707 / 6.575 / 7.902
Table 4. Comparison results on 6 VMs (continued).
No. of Tasks | Resource Utilization (%): TCDC / DARA / DRAM / MRR | Throughput (Tasks/Time Unit): TCDC / DARA / DRAM / MRR
500 | 0.941 / 0.326 / 0.264 / 0.958 | 0.988 / 0.309 / 0.207 / 0.517
1000 | 0.933 / 0.302 / 0.410 / 0.958 | 0.999 / 0.484 / 0.319 / 0.498
1500 | 0.914 / 0.294 / 0.419 / 0.969 | 0.955 / 0.452 / 0.379 / 0.512
2000 | 0.958 / 0.337 / 0.466 / 0.976 | 0.998 / 0.564 / 0.470 / 0.504
2500 | 0.935 / 0.577 / 0.473 / 0.968 | 1.199 / 0.861 / 0.528 / 0.501
3000 | 0.965 / 0.479 / 0.491 / 0.971 | 1.448 / 0.689 / 0.617 / 0.505
Table 5. Comparison results on 6 VMs (continued).
No. of Tasks | Load Balancing (Time Unit): TCDC / DARA / DRAM / MRR | Total Cost: TCDC / DARA / DRAM / MRR
500 | 923.833 / 567.5 / 925.833 / 1830.833 | 1,950,047.25 / 613,302.85 / 1,691,217.28 / 44,466
1000 | 1803.666 / 1171.833 / 1883.5 / 3834.5 | 1,972,928.75 / 620,397.45 / 1,712,019.98 / 90,132
1500 | 2302.166 / 1753 / 2486.5 / 5653.333 | 1,999,602.55 / 633,209.45 / 1,738,439.78 / 134,268
2000 | 2753.333 / 2301.5 / 3123 / 7719.5 | 2,029,980.95 / 654,435.05 / 1,769,740.88 / 180,306
2500 | 2846.333 / 2878.666 / 3567.833 / 9639.333 | 2,061,332.85 / 686,209.65 / 1,801,603.48 / 217,089
3000 | 2820 / 3444 / 3719.666 / 11,515.166 | 2,093,075.35 / 728,197.05 / 1,833,458.28 / 273,289
Table 6. Comparison results on 8 VMs.
No. of Tasks | Total Makespan (Time Unit): TCDC / DARA / DRAM / MRR | Average Response Time (Time Unit): TCDC / DARA / DRAM / MRR
500 | 426 / 1100 / 2405 / 966 | 3.258 / 7.482 / 15.095 / 7.706
1000 | 756 / 1778 / 3132 / 2007 | 3.262 / 6.482 / 9.753 / 8.015
1500 | 1035 / 2270 / 3955 / 2926 | 3.040 / 5.317 / 8.331 / 7.774
2000 | 1098 / 1931 / 4250 / 3963 | 3.205 / 3.572 / 7.327 / 7.894
2500 | 1003 / 2488 / 4733 / 4981 | 3.222 / 3.931 / 6.909 / 7.975
3000 | 964 / 2703 / 4857 / 5935 | 3.136 / 3.675 / 6.575 / 7.902
Table 7. Comparison results on 8 VMs (continued).
No. of Tasks | Throughput (Tasks/Time Unit): TCDC / DARA / DRAM / MRR | Resource Utilization (%): TCDC / DARA / DRAM / MRR
500 | 1.173 / 0.454 / 0.207 / 0.517 | 0.873 / 0.231 / 0.264 / 0.958
1000 | 1.322 / 0.562 / 0.319 / 0.498 | 0.910 / 0.260 / 0.410 / 0.958
1500 | 1.449 / 0.660 / 0.379 / 0.512 | 0.943 / 0.379 / 0.419 / 0.969
2000 | 1.821 / 1.035 / 0.470 / 0.504 | 0.944 / 0.471 / 0.466 / 0.976
2500 | 2.492 / 1.004 / 0.528 / 0.501 | 0.902 / 0.458 / 0.473 / 0.968
3000 | 3.112 / 1.109 / 0.617 / 0.505 | 0.948 / 0.479 / 0.491 / 0.971
Table 8. Comparison results on 8 VMs (continued).
No. of Tasks | Load Balancing (Time Unit): TCDC / DARA / DRAM / MRR | Total Cost: TCDC / DARA / DRAM / MRR
500 | 691.875 / 425.125 / 925.833 / 1830.833 | 1,860,783.75 / 268,582.35 / 1,540,802.18 / 44,466
1000 | 1125.25 / 878.375 / 1883.5 / 3834.5 | 1,874,113.85 / 279,034.95 / 1,561,604.88 / 90,132
1500 | 1389.375 / 1313.75 / 2486.5 / 5653.333 | 1,889,980.65 / 300,243.95 / 1,588,024.68 / 134,268
2000 | 1454.375 / 1724.625 / 3123 / 7719.5 | 1,906,143.55 / 315,780.75 / 1,619,325.78 / 180,306
2500 | 1363.25 / 1984.25 / 3567.833 / 9639.333 | 1,922,333.85 / 335,729.95 / 1,651,188.38 / 217,089
3000 | 1258.375 / 2201.25 / 3719.666 / 11,515.166 | 1,938,212.75 / 360,727.65 / 1,683,043.18 / 273,289
Table 9. Comparison results on 10 VMs.
No. of Tasks | Total Makespan (Time Unit): TCDC / DARA / DRAM / MRR | Average Response Time (Time Unit): TCDC / DARA / DRAM / MRR
500 | 320 / 1100 / 2405 / 966 | 2.529 / 7.482 / 15.095 / 7.7063
1000 | 809 / 1778 / 3132 / 2007 | 2.769 / 6.482 / 9.753 / 8.015
1500 | 1285 / 2351 / 3955 / 2926 | 2.987 / 5.299 / 8.331 / 7.774
2000 | 1484 / 1954 / 4250 / 3963 | 3.394 / 3.597 / 7.327 / 7.894
2500 | 2391 / 2536 / 4733 / 4981 | 3.711 / 4.043 / 6.909 / 7.975
3000 | 3068 / 2796 / 4857 / 5935 | 4.028 / 3.445 / 6.575 / 7.902
Table 10. Comparison results on 10 VMs (continued).
No. of Tasks | Throughput (Tasks/Time Unit): TCDC / DARA / DRAM / MRR | Resource Utilization (%): TCDC / DARA / DRAM / MRR
500 | 1.562 / 0.454 / 0.207 / 0.517 | 0.916 / 0.185 / 0.264 / 0.958
1000 | 1.236 / 0.562 / 0.319 / 0.498 | 0.766 / 0.208 / 0.410 / 0.958
1500 | 1.167 / 0.638 / 0.379 / 0.512 | 0.740 / 0.264 / 0.419 / 0.969
2000 | 1.347 / 1.023 / 0.470 / 0.504 | 0.782 / 0.371 / 0.466 / 0.976
2500 | 1.045 / 0.985 / 0.528 / 0.501 | 0.623 / 0.358 / 0.473 / 0.968
3000 | 0.977 / 1.072 / 0.617 / 0.505 | 0.543 / 0.395 / 0.491 / 0.971
Table 11. Comparison results on 10 VMs (continued).
No. of Tasks | Load Balancing (Time Unit): TCDC / DARA / DRAM / MRR | Total Cost: TCDC / DARA / DRAM / MRR
500 | 552.7 / 340.1 / 925.833 / 1830.833 | 1,737,064.85 / 163,129.27 / 1,390,387.08 / 44,466
1000 | 1151 / 702.7 / 1883.5 / 3834.5 | 1,751,487.95 / 173,581.87 / 1,411,189.78 / 90,132
1500 | 1699.5 / 1030.1 / 2486.5 / 5653.333 | 1,773,106.75 / 194,790.87 / 1,437,609.58 / 134,268
2000 | 2035.4 / 1151.8 / 3123 / 7719.5 | 1,797,640.25 / 225,864.47 / 1,468,910.68 / 180,306
2500 | 2246.9 / 1403 / 3567.833 / 9639.333 | 1,822,535.35 / 245,786.97 / 1,500,773.28 / 217,089
3000 | 2479.2 / 1737.7 / 3719.666 / 11,515.166 | 1,851,791.75 / 271,110.27 / 1,532,628.08 / 273,289
Table 12. QoS analysis of TCDC, DARA, DRAM, and MRR.
Algorithm | Load Balancing | Total Cost | Resource Utilization | Throughput | Makespan | Response Time
TCDC | Moderate | High | High | High | Low | Low
DARA | Best | Low | Low | Moderate | Moderate | Moderate
DRAM | Poor | Moderate | Low | Low | High | High
MRR | Worst | Low | High | Moderate | High | Moderate
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
