Article

IDN-MOTSCC: Integration of Deep Neural Network with Hybrid Meta-Heuristic Model for Multi-Objective Task Scheduling in Cloud Computing

1 Amity School of Engineering & Technology, Amity University Jharkhand, Ranchi 835303, Jharkhand, India
2 Department of Computer Science and Engineering, SRM University Delhi NCR, Sonipat 131001, Haryana, India
3 Department of Computer Science and Engineering, Lloyd Institute of Engineering and Technology, Greater Noida 201310, Uttar Pradesh, India
4 G.L. Bajaj Institute of Technology and Management, Greater Noida 201310, Uttar Pradesh, India
5 School of Computer Science Engineering & Technology, Bennett University, Greater Noida 201310, Uttar Pradesh, India
6 Department of Biotechnology, School of Engineering and Applied Sciences, Bennett University, Greater Noida 201310, Uttar Pradesh, India
7 CINBIO, Universidade de Vigo, Campus Universitario As Lagoas Marcosende, 36310 Vigo, Spain
* Authors to whom correspondence should be addressed.
Computers 2026, 15(1), 57; https://doi.org/10.3390/computers15010057
Submission received: 1 December 2025 / Revised: 5 January 2026 / Accepted: 6 January 2026 / Published: 14 January 2026
(This article belongs to the Special Issue Operations Research: Trends and Applications)

Abstract

Cloud computing covers a wide range of practical applications and diverse domains, yet resource scheduling and task scheduling remain significant challenges. To address them, task scheduling algorithms are implemented across various computing systems to allocate tasks to machines, thereby enhancing performance through data mapping. This paper proposes a novel task scheduling model that integrates a deep learning approach with a hybrid meta-heuristic: a Deep Neural Network (DNN) fine-tuned by Improved Grey Wolf–Horse Herd Optimization, with the aim of optimizing cloud-based task allocation under makespan constraints. Initially, a user initiates a task or request within the cloud environment, and these tasks are assigned to Virtual Machines (VMs). Since the scheduling algorithm is constrained by the makespan objective, an optimized DNN model is developed to perform optimal task scheduling. Random solutions are provided to the optimized DNN, whose hidden neuron count is tuned optimally by the proposed Improved Grey Wolf–Horse Herd Optimization (IGW-HHO) algorithm, derived from both conventional Grey Wolf Optimization (GWO) and Horse Herd Optimization (HHO). The optimal solutions acquired from the optimized DNN are processed by the proposed algorithm to efficiently allocate tasks to VMs. The experimental results are validated using various error measures and convergence analysis. The proposed DNN-IGW-HHO model achieved a lower cost function than other optimization methods, with reductions of 1% compared to PSO, 3.5% compared to WOA, 2.7% compared to GWO, and 0.7% compared to HHO. The proposed task scheduling model also achieved the minimal Mean Absolute Error (MAE), with performance improvements of 31% over PSO, 20.16% over WOA, 41.72% over GWO, and 9.11% over HHO.

1. Introduction

Cloud computing provides significant services through various models, such as Platform as a Service (PaaS), Software as a Service (SaaS), and Infrastructure as a Service (IaaS) [1]. Although cloud computing produces quality results using the aforementioned services, it faces notable constraints, such as task and resource scheduling. Resource and task scheduling in the cloud is imperative for optimizing computational resources, minimizing costs, and ensuring efficient utilization, ultimately enhancing overall performance and user satisfaction. The issues that arise in task scheduling include sustaining the reliability of cloud services, operating costs, and the resource utilization rate corresponding to the service quality. Multi-objective task scheduling is another problem in cloud computing with realistic constraints [2,3,4]. In previous studies, some active and tentative strategies have been used to manage resources and their loads. Poor management of these characteristics may cause significant damage to resources and service performance. The utilized resources are wasted when the resource load is low, whereas the service performance degrades if the resource load is high [5]. Moreover, cloud computing is an evaluation prototype that considers the requirements of cloud users regarding computation and storage [6]. In order to further enhance performance, cloud-based cloudlets are applied to meet high demands. In addition to this, efficient task scheduling plays a prominent role in cloud computing to attain a higher throughput value, a lower response time, optimum results for resource utilization, and enhanced energy conservation [7]. Some heuristic-based algorithms are also utilized for resolving scheduling issues; however, they require more computational power when users face larger workloads [8]. Recently, task scheduling has been extensively used in various applications based on a cloud environment. 
Even though past studies have explored ways to provide better results for solving the scheduling issue, it remains an NP-hard problem. Data centers are globally distributed in cloud computing regions [9]. Hence, every cloudlet comprises thousands of servers, in which a single server is partitioned into many VMs that are structured with CPU, memory, and storage. Generally, tasks can be performed by grouping two or more VMs, where scheduling generation is the primary issue. This requires including task characteristics like length, dependency, instance size, and execution time before processing by virtual resources. So far, no algorithm has been developed that can solve the issue in an optimal way, and it still struggles with NP-completeness problems [10]. Therefore, the primary aspect of employing task scheduling is to minimize the execution time and cost function while allocating the appropriate VM to process the task. Since VMs have varied features, task execution becomes challenging [11]. Thus, many challenges remain in cloud computing regarding the provision of an effective scheduling mechanism [12]. Despite the plethora of studies, current task scheduling methods share several shortcomings: a lack of scalability to dynamic workloads, an inability to exploit VM heterogeneity, and a failure to combine learning-based decision making with meta-heuristic search. These shortcomings motivate a unified framework that can learn task–VM relations and make optimal scheduling decisions in a multi-objective fashion. Another challenge is to maintain quality prerequisites across different data centers with deadline limitations using a combination of cloud resources [13]. Diverse approaches have been designed to solve the scheduling problem on cloud platforms. Also, some standard machine learning techniques are employed for task scheduling.
While achieving the shortest execution time, learning techniques have been utilized to optimize scheduling for each cloudlet [14]. This also enhances the load balancing performance, and classical learning models are applied to acquire the desired results. Task scheduling becomes an important strategy when employed with appropriate parameters. Some of the parameters, like multi-objective functions, are used to develop an effective scheduling mechanism [15,16]. A bartering double auction resource allocation model was introduced for Cloud Service Providers (CSPs), facilitating resource exchange without monetary transactions. Varied deployment of learning algorithms yields significantly different outcomes in terms of overall performance and adaptability. Heuristic and meta-heuristic approaches are widely used in cloud optimization algorithms [17,18], including techniques such as ACO, PSO, GWO, WOA, and FPA for optimal task–resource mapping. The authors of [19] proposed a Hybrid Genetic Algorithm (HGA) for reliable and cost-efficient workflow task scheduling in heterogeneous cloud environments. The method addresses the NP-hard problem by introducing new crossover and mutation operators along with a local search enhancement procedure to optimize makespan, monetary cost, and failure cost. Heuristic algorithms are used to identify the finest solution based on optimization rules and problem size; however, they fail to obtain feasible solutions that satisfy all challenges [20]. In contrast, meta-heuristic methods fuse obtained solutions into a single group to rectify individual issues [21]. Algorithms such as Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), BAT algorithm, Genetic Algorithm (GA), and Symbiotic Organisms Search have been adopted to achieve better optimization results [22]. Compared to traditional algorithms such as honey bee colony, PSO, differential evolution, and GA, the symbiotic organisms search algorithm provides superior performance [23].
Existing task scheduling approaches often focus on a single optimization goal [24], which degrades performance when multiple factors are considered. Due to differences between traditional computing and cloud systems, multi-objective optimization has become a challenging issue, motivating the development of new task scheduling methodologies [25,26,27,28]. Hybrid meta-heuristic algorithms offer advantages over clustering approaches, including enhanced global optimization, dynamic adaptability, consideration of task dependencies, and reduced makespan constraints. In this context, ref. [29] proposed the Advanced Phasmatodea Population Evolution (APPE) algorithm for heterogeneous cloud environments using task–VM mappings to balance load while minimizing makespan and cost under multiple constraints. APPE, an extension of the original phasmatodea population evolution strategy, demonstrated improved convergence and global exploration, outperforming state-of-the-art meta-heuristic schedulers on benchmark task sets. Current hybrid and meta-heuristic scheduling strategies mainly enhance either exploration or exploitation and rely on predetermined scheduling policies. Conversely, the proposed method integrates a Deep Neural Network (DNN) with a hybrid meta-heuristic optimizer to enable data-driven scheduling and optimize the learning architecture. Cloud computing presents significant challenges in resource and task scheduling due to its dynamic and heterogeneous nature. Efficient resource allocation is essential to meet Service-Level Agreements (SLAs) and minimize costs; however, balancing workload distribution, resource utilization, and energy efficiency is complex. Cloud environments also face constraints such as limited bandwidth, fluctuating resource availability, and varying user demands, making optimal allocation difficult. Traditional scheduling algorithms often struggle to adapt to these dynamics, resulting in suboptimal utilization and increased latency.
These challenges highlight the research gap in developing scheduling models that integrate deep learning capabilities with hybrid meta-heuristic optimization to address the NP-hard nature of cloud task scheduling. This paper proposes a novel model integrating deep learning with a hybrid meta-heuristic algorithm, Improved Grey Wolf–Horse Herd Optimization (IGW-HHO), to improve task scheduling in cloud computing. The model minimizes makespan and processing time by optimizing task allocation to Virtual Machines (VMs). The IGW-HHO algorithm combines Grey Wolf Optimization (GWO) and Horse Herd Optimization (HHO) to generate optimal scheduling solutions. A deep neural network is employed for scheduling, with the number of hidden neurons optimized using IGW-HHO to mitigate offloading and overfitting issues. Unlike existing hybrid models that treat learning and optimization independently, the proposed IGW-HHO + DNN framework tightly integrates both processes, allowing the hybrid optimizer to directly tune the DNN architecture and enhance scheduling performance and generalization. The significant contributions of the research paper are described as follows.
  • To implement an optimal task scheduling model using a hybrid meta-heuristic algorithm that is incorporated with a DNN for deriving multi-objective optimization in a cloud environment.
  • To develop a hybrid meta-heuristic algorithm named IGW-HHO, in which the existing algorithm GWO is integrated with HHO. It is mainly used to render optimal solutions to enhance scheduling performance and compute the objective function.
  • To develop an optimized DNN-based scheduling network, where the number of hidden neurons is optimized by the proposed IGW-HHO algorithm. It is used to obtain the best solution to reduce offloading and overfitting problems while scheduling tasks to VMs.
  • To optimize certain factors, an objective function is derived for task scheduling. The derived function is mainly focused on minimizing the makespan and also reducing the processing time for task allocation.
  • The performance is analyzed using different error metrics, and a comparative analysis is carried out for convergence with existing optimization algorithms, leading to lower error in optimal task scheduling.
The rest of the paper is organized as follows. Section 2 surveys the existing work on task scheduling in cloud computing. Section 3 elucidates the challenges and problems of task scheduling. The generation of a solution by an optimized DNN is elaborated in Section 4. The development of the hybrid meta-heuristic algorithm is explored in Section 5. Section 6 illustrates the results and discussion of the proposed task scheduling model. Finally, Section 7 concludes the paper.

2. Literature Review

2.1. Related Works

Xueying Guo et al., 2021 [30] have described a multi-objective measure for optimization using a fuzzy-based self-defense mechanism in cloud computing. The resource level in load balancing and less time were chosen during the multi-objective function. This functionality was dependent on task scheduling with the invented formulation and determined the objective function in terms of diverse constraints. Finally, the performance analysis proved that the proposed mechanism could enhance scheduling performance with respect to “maximum completion time, deadline violation rate, and VM resource utilization” compared to existing methodologies. Al-Maytami et al., 2019 [31] have explored the “Directed Acyclic Graph (DAG) based on the Prediction of Tasks Computation Time algorithm (PTCT)” for task scheduling in cloud environments. Consequently, the proposed framework utilized Principal Component Analysis (PCA) and the Expected Time to Compute (ETC) matrix for mitigating structural complexity and improving the makespan. Compared to traditional scheduling techniques such as Max–Min, MiM-MaM, Min–Min, and Quality of Service (QoS)-Guide, the simulation results showed that the proposed work performed better regarding speedup, efficiency, and schedule length ratio.
Moon et al., 2017 [32] have designed a new algorithm with the aid of ACO for scheduling in cloud computing. The proposed work was employed to generate tasks for cloud users in the form of VMs. Subsequently, two criteria, namely reinforcement and diversification, were conducted for improving performance with the assistance of slave ants. Finally, through experimental results, the proposed system resolved the global optimization issue by evading the wrong paths created by ants. Sreenivasulu, G. et al., 2021 [33] have developed a hybrid model for task scheduling in cloud computing using a hierarchical approach. The conventional scheduling method of Bandwidth-aware Divisible Task (BAT) was altered by imposing the Bar system model mechanism, thereby building the hybrid model. The minimum lease policy and minimum overload were adopted to reduce the overload complexity of VMs. The experimentation was processed using cloud data with the aid of diverse metrics and showed the efficient performance of the proposed hybrid system.
Jing et al., 2020 [34] have proposed a novel model using QoS scheduling, where cloud features were integrated. The main objective of the proposed model was to detect flaws that could be endured during task execution. Further, a novel QoS-aware scheduling algorithm named QoS-DPSO was deployed to satisfy QoS prerequisites in cloud computing. Finally, the simulation results demonstrated that the proposed QoS approach improved outcomes with respect to reliability, time, and cost when compared to traditional scheduling mechanisms.
Shirvani et al., 2021 [35] presented a novel hybrid heuristic-based list scheduling algorithm (HH-LiSch) designed for heterogeneous cloud computing environments. This algorithm addresses the NP-hard problem of scheduling dependent tasks onto heterogeneous resources by introducing new task priority strategies, insertion-based policies, and task duplication techniques to optimize makespan. The effectiveness of HH-LiSch was validated through experiments using six real-world scientific workflows and a random task graph, showing significant improvements.
Sohaib et al., 2021 [36] have presented a new heuristic algorithm known as the hybrid ant genetic algorithm for scheduling in cloud computing. The proposed algorithm was developed by incorporating the genetic algorithm and ACO. Initially, key features were extracted from both algorithms, which were then utilized for segmenting tasks into different VM groups. By achieving segmented tasks, the proposed model reduced the solution space and predicted the load, where pheromones were supplemented with VMs. With a reduced solution space, the model mitigated response and convergence time. Therefore, the proposed work provided a flexible mechanism to reduce time consumption for processing task workflows. Compared to conventional methods, the proposed heuristic algorithm achieved reductions of 11% in data-center costs across the entire data sets and 64% in execution time.
Seifhosseini et al., 2024 [37] presented a multi-objective optimization approach for task scheduling in fog computing environments targeting IoT applications. It introduces a new Scheduling Failure Factor (SFF) metric to measure resource reliability, alongside execution and monetary cost models. Utilizing a Multi-objective Discrete Grey Wolf Optimization (MoDGWO) algorithm, the proposed method effectively balances execution time, cost, and reliability, demonstrating superior performance against state-of-the-art algorithms through extensive simulations.
Dubey, K. et al., 2021 [38] have described a new hybrid algorithm known as Chemical Reaction PSO (CR-PSO) to resolve scheduling problems in cloud environments. The proposed work was applied to allocate many independent tasks to VMs. The hybrid algorithm was constructed using conventional algorithms of both particle swarm optimization and chemical reaction optimization, in which optimal features were fused sequentially. Moreover, tasks in the proposed algorithm were generated based on deadlines and demand, thereby increasing performance in terms of energy, cost, and makespan. The experimentation was carried out using the CloudSim toolkit and compared with classical methods. The performance of average execution time was validated by varying the number of tasks and VMs. In such cases, execution time was reduced by 1–6%, with 5–12% higher efficiency, a 2–10% reduction in total cost, and a 1–9% improvement in energy consumption.
Wei, X. et al., 2020 [39] have explored an enhanced ACO algorithm for task scheduling in cloud computing. The key factor of using the improved algorithm was to rectify the issue of local optima during scheduling execution. Further, the proposed model influenced three factors, namely “shortest waiting time, degree of resource load balance, and cost of task completion,” to choose the finest solution. In ACO, punishment and reward coefficients were adopted to tune pheromone rules. Then, an overall analysis of the model was conducted by employing a volatility coefficient, and a load weight coefficient was used for updating pheromones. The experimentation was carried out using CloudSim, and performance analysis was evaluated using different metrics. The simulation results achieved maximum convergence speed and load balance, lower execution time, and higher utilization compared to traditional scheduling approaches. Beyond cloud scheduling, several recent works in other domains highlight the importance of advanced optimization techniques for complex resource allocation problems. Falsafain et al., 2022 [40] proposed a branch-and-price framework for a variant of the cognitive radio resource allocation problem, formulating spectrum assignment as a large-scale integer programming model and solving it via column generation to efficiently satisfy interference and quality-of-service constraints. Ahadi et al., 2024 [41] developed an Expected Cross Value (ECV) criterion and corresponding integer linear programming formulations for mate selection in plant breeding, enabling simultaneous optimization of multiple phenotypic traits while controlling inbreeding levels. Saghezchi et al., 2024 [42] presented a comprehensive optimization approach for financial resource allocation in scale-ups using a mean–variance-based model and reinforcement learning ideas to derive an optimal cash-flow allocation strategy across investing, operating, and financing activities. 
These studies collectively emphasize that multi-objective, constraint-rich allocation problems across diverse domains benefit from sophisticated hybrid and mathematical optimization frameworks, motivating the adoption of similar approaches for multi-objective task scheduling in cloud computing.

2.2. Research Gaps and Challenges

Cloud computing is widely employed in a wide range of applications and areas, but task and resource scheduling remains an open problem. Task scheduling methods, which assign incoming tasks to machines, are required in a heterogeneous computing system to meet high-performance data mapping needs. Makespan is reduced and resource utilization is maximized when resources and tasks are properly mapped. Table 1 shows the features and challenges of existing task scheduling methods in cloud computing. The fuzzy self-defense algorithm proposed by Guo, X. et al., 2021 [30] shows that the rate of resource utilization on virtual machines is mostly adequate and the deadline violation rate is low. However, it does not benchmark real-world problems. PTCT [31] minimizes the overall makespan and also reduces task execution time. Still, it does not consider dynamic scheduling for real-world application graphs. ACO [32] develops minimal preprocessing overhead and also maximizes cloud server utilization. Yet, it does not consider heterogeneous clusters. The hybrid optimization algorithm [33] minimizes the load on VMs and is effective with respect to memory utilization, bandwidth utilization, and resource utilization. However, it is not used with real-time workflows. QoS-DPSO [34] offers users satisfactory services for time-sensitive needs and also efficiently handles IoT and terminal applications. Still, a prediction model is not introduced for investigating host system logs. The hybrid ant genetic algorithm [36] detects the load on VMs and performs better in makespan and convergence optimization. However, it does not schedule VMs on the server. CR-PSO [38] generates optimal solutions by minimizing makespan and execution time and is more economical, enhancing overall cloud system performance. Still, it does not work on dependent task scheduling.
The enhanced ACO [39] improves resource utilization associated with VMs and also minimizes convergence time and completion time. Yet, it has not been researched in terms of energy consumption minimization and QoS assurance in cloud data centers. Thus, it is necessary to develop a novel optimization model for performing effective task scheduling in cloud computing platforms.

3. Task Scheduling Problem in Cloud Computing: Solution Based on Soft Computing and Deep Learning

3.1. Task Scheduling in Cloud Computing

Cloud computing is a vast environment in which myriad cloud users participate and services such as IaaS, PaaS, and SaaS are offered to improve capacity and functionality. The cloud environment is operated by integrating cloud resources, which enhances system efficiency. It also facilitates remote access for manipulating hardware and software resources. Some noteworthy features of cloud computing are as follows.
  • Offers services on the basis of request and response.
  • Ease of accessing a wide-area network.
  • Resource utilization of multiple clients or tenants.
  • Increasing the reliability, elasticity, and scalability.
  • Monitors the provision of services.
In general, three types of cloud are utilized: public cloud, private cloud, and hybrid cloud. Public cloud is the primary type, in which a large number of cloud service providers are engaged in computing resources. It can be accomplished through SaaS applications and tends to assign services to all VMs. Over the public internet, cloud users are rendered on-demand services that are maintained by third-party providers and shared among various organizations. Resources can be accessed through free or pay-per-use models. Private cloud refers to accessing operational and sensitive data by only one user. It is the most secure one, where only authorized users are involved, and it does not permit third-party providers. Yet, it enhances reliability, scalability, and faster delivery by controlling resource utilization and security. Some examples of private cloud are Ubuntu, Microsoft, HP Cloudlet, and so on. The hybrid cloud is defined as mixed computing, where applications are processed in a combined manner using public and private cloud resources. It has the tendency to provide cost-effective services compared to the other two clouds. It manages workload and involves on-premises data centers.
In a cloud environment, task scheduling is defined as the process of accounting for requests or tasks given by users, which are assigned to every VM with access to resource utilization. Users can send requests online, which are used to stimulate services over the Internet. Normally, the scheduling algorithm consists of two levels: (i) first level: the user sends tasks to be shared among many VMs, and (ii) second level: every task is assigned to each VM. Figure 1 shows task scheduling over cloud computing.
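The two-level flow just described (users submit tasks, then each task is assigned to a VM) can be illustrated with a minimal greedy sketch. The function name and the min-completion-time rule below are our own illustrative choices, not an algorithm from the paper:

```python
def greedy_schedule(task_lengths, vm_capacities):
    """Level 1: collect the user-submitted tasks; Level 2: assign each task
    to the VM that would finish it earliest (greedy min-completion-time)."""
    finish = [0.0] * len(vm_capacities)   # current finish time of each VM
    assignment = []
    for length in task_lengths:
        # completion time of this task on each VM: current load + exec time
        candidates = [finish[q] + length / vm_capacities[q]
                      for q in range(len(vm_capacities))]
        q_best = min(range(len(vm_capacities)), key=lambda q: candidates[q])
        finish[q_best] = candidates[q_best]
        assignment.append(q_best)
    return assignment, max(finish)   # makespan = latest VM finish time

assignment, makespan = greedy_schedule([100, 200, 300], [10, 20])
# the faster VM absorbs the first two tasks before the slower one is used
```

Real schedulers must also weigh cost, energy, and priorities, which is what motivates the multi-objective formulation developed later in the paper.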
Some of the key advantages of task scheduling are mentioned below.
  • Provides better QoS services.
  • Manages CPU and memory.
  • Standard scheduling algorithms increase resource utilization while mitigating task processing time.
  • Enhances performance by completing all the tasks.
  • Suitable for real-time applications.
  • Attains high throughput.
  • Balances workload issues.
On the other hand, many challenges and issues still exist in the cloud environment. In existing studies, scheduling algorithms suffer from ineffective makespan, high energy consumption, and a lack of reliability. In order to overcome this, it is important to develop new task scheduling algorithms for cloud computing.

3.2. Proposed Task Scheduling Model

In cloud computing, there is a need to optimize various objective functions such as memory and consumption cost, processing time, makespan, energy conservation, and so on. On account of allocating a task to particular VMs, a scheduler is aware of the resource utility. Most of the problems occurring in existing scheduling algorithms are NP-hard or NP-complete, where more time is required to obtain the optimal solution since there is a wide solution space. Though all cloud users are engaged, “maximizing resource utilization and minimizing makespan” is one of the constraints in task scheduling models. The downside of some scheduling algorithms is that they do not distribute all tasks to all machines in cloud computing and cannot handle large-scale task issues. Some remarkable issues are found during task scheduling regarding the heterogeneity of user requests, resource scheduling, and maintenance of QoS parameters. Then, self-management of services becomes another issue since it allocates and exhibits cloud resources. Enhancing energy efficiency, security, scalability, and providing services with high quality and performance are challenging aspects. In order to overcome these flaws, a newly developed task scheduling algorithm is proposed using an optimized DNN with a hybrid algorithm, which is illustrated in Figure 2.
Task scheduling is the prime role for computing services in a cloud environment, which endeavors to increase the utilization of VMs as well as mitigate operational costs. It leads to achieving effective enhancement of scheduling performance. In general, to overcome the current shortcomings of scheduling algorithms, a novel scheduling model is proposed for enhancing and handling optimization problems and task allocation issues with the aid of a novel IGW-HHO algorithm and an optimized DNN. Thus, the proposed method is used to provide optimal task scheduling with respect to the requests of cloud users. Initially, the cloud user sends requests or tasks to be performed by VMs. Since makespan is one of the constraints, the proposed model is used to derive a mathematical formulation for minimizing makespan as well as cost and energy. Subsequently, the DNN model is employed to generate optimal solutions for scheduling tasks, where the hidden neurons are optimized by the proposed IGW-HHO algorithm. It is mainly used to mitigate scheduling, offloading computation, and overfitting issues. The proposed novel meta-heuristic algorithm, IGW-HHO, is developed in a hybrid manner, where the conventional GWO is superimposed with the HHO algorithm. Finally, the objective function is determined to acquire optimal solutions, such as tasks efficiently allocated to VMs in cloud computing. Thus, task scheduling is performed in a significant manner with respect to user requests.
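The pipeline above couples a scheduling DNN with a hybrid optimizer that tunes its hidden-neuron count. As a rough, self-contained sketch of that idea (not the authors' IGW-HHO update rules), a generic leader-guided population search over the neuron count against an arbitrary cost function could look like the following; all names and the toy cost surface are invented for illustration:

```python
import random

def tune_hidden_neurons(cost_fn, lo=4, hi=128, pop_size=6, iters=40, seed=0):
    """Population-based search over the hidden-neuron count, standing in
    for the IGW-HHO optimizer. Candidates drift toward the current best
    count, loosely mimicking leader-guided position updates in GWO/HHO."""
    rng = random.Random(seed)
    pop = [rng.randint(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=cost_fn)
    for _ in range(iters):
        new_pop = []
        for n in pop:
            # move toward the leader with a random step plus small noise
            step = (best - n) * rng.random() + rng.randint(-3, 3)
            new_pop.append(max(lo, min(hi, int(n + step))))
        pop = new_pop
        best = min(pop + [best], key=cost_fn)   # elitist: keep best-so-far
    return best

# toy cost surface: pretend 32 hidden neurons minimizes scheduling error
toy_cost = lambda n: (n - 32) ** 2
```

In the actual model, `cost_fn` would evaluate the scheduling error of a DNN trained with the candidate neuron count, so the optimizer directly shapes the learning architecture.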

3.3. Problem Formulation

On account of including many challenges and limitations, the proposed model focuses on handling the issue by optimizing diverse objective parameters such as energy, makespan, cost, and so on. To achieve this, the model considers a set of tasks and Virtual Machines (VMs) for deriving the scheduling expressions. Let T_p denote the p-th task submitted by the cloud user, where p = 1, 2, …, N_T, and let VM_q denote the q-th virtual machine, where q = 1, 2, …, N_M. The mathematical formulation is derived by considering a number of physical machines, where each physical machine hosts several virtual machines. Each VM is characterized by different computing factors such as storage capacity, memory capability, CPU power, and network bandwidth. Assuming that tasks are assigned to VMs, each task is represented by the following feature vector, as given in Equation (1):
T_p = [sn_p, leng_p, ins_p, pref_p]    (1)
Here, sn_p denotes the serial number of the p-th task, leng_p represents the task length, ins_p indicates the estimated execution time of task p under nominal conditions, whereas the actual execution time of task p on VM q is denoted by ins_(p,q), and pref_p denotes the priority of the p-th task. Here, p = 1, 2, …, N_T. Furthermore, the matrix computation is given in Equation (2) for representing the execution instances of tasks across different virtual machines.
ins = [ ins_{1,1}     ins_{1,2}     ⋯   ins_{1,N_M}
        ins_{2,1}     ins_{2,2}     ⋯   ins_{2,N_M}
        ⋮             ⋮                 ⋮
        ins_{N_T,1}   ins_{N_T,2}   ⋯   ins_{N_T,N_M} ]
Here, ins_{p,q} represents the execution time of the p-th task on the q-th virtual machine, where p = 1, 2, …, N_T and q = 1, 2, …, N_M; it is computed using Equation (3).
ins_{p,q} = leng_p / v_q
Here, v_q denotes the execution capacity of the q-th virtual machine.
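Equations (2) and (3) can be sketched directly; the task lengths and VM capacities below are illustrative values, not taken from the paper's experiments.

```python
import numpy as np

def execution_time_matrix(leng, v):
    """Build the N_T x N_M matrix of Equation (2), where each entry
    ins[p, q] = leng[p] / v[q] follows Equation (3)."""
    leng = np.asarray(leng, dtype=float)   # task lengths, leng_p
    v = np.asarray(v, dtype=float)         # VM execution capacities, v_q
    return leng[:, None] / v[None, :]      # broadcast to N_T x N_M

# Hypothetical example: 3 tasks, 2 VMs
ins = execution_time_matrix([100, 200, 300], [10, 20])
# ins[0, 0] = 100 / 10 = 10.0
```

Broadcasting the length column against the capacity row produces the full matrix in one step, avoiding an explicit double loop.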

4. Solution Generated by Optimized Deep Neural Network for Optimal Task Scheduling

4.1. DNN Model

Deep Neural Networks (DNNs) offer significant advantages across various domains, as evidenced by numerous examples and case studies. In healthcare, DNNs have shown remarkable performance in medical image analysis, such as detecting abnormalities in X-rays and MRIs with high accuracy, aiding early diagnosis and treatment planning. In finance, DNNs are employed for fraud detection, where they learn intricate patterns in transaction data and flag suspicious activities more effectively than traditional methods. In autonomous driving, DNNs enable vehicles to perceive and interpret their surroundings, making real-time decisions to navigate safely through complex environments. A DNN [44] is a kind of Artificial Neural Network (ANN) built with many layers, known as hidden layers, between the input and output layers, each composed of multiple neurons. Each neuron takes its input, processes it, and transfers the result to the neurons of the next layer; in this way, the network renders problem-solving solutions that depend on the given input. The DNN also includes activation functions and weights: every neuron applies a small function, named an activation function, that determines the signal passed to neighboring neurons, while the weights are the parameters that scale the data shared between the hidden layers. The key advantages of a DNN are as follows:
(a) Irrelevant data are reduced and optimized to acquire the best outcome.
(b) It avoids excessive time consumption.
(c) It enhances the robustness of the system.
(d) It is adaptable in nature and used for many applications.
The formulation for the DNN model is given in Equation (4).
Y = AF( Σ_{x=1}^{X} W_{x,z} · I_z + B_x )
The activation function and bias are denoted by AF and B_x, respectively; I_z defines the input to the DNN model and W_{x,z} is the weight parameter. Although the DNN provides good outcomes, it still has shortcomings that degrade performance; thus, the proposed model utilizes an optimized DNN for obtaining the optimum solution to allocate tasks to VMs. To ensure reproducible results and methodological transparency, this study clearly specifies the Deep Neural Network (DNN) architecture utilized for optimal task scheduling in the cloud environment. The proposed model is a fully connected feed-forward DNN trained to estimate the nonlinear mapping between task–VM features and optimal scheduling decisions. The input layer comprises features based on the task–VM formulation defined in Section 3, namely the task length, the estimated execution time, the task priority, and the VM execution-capacity parameters.
DNN architecture: The architecture of the DNN consists of an input layer, three hidden layers, and one output layer. The size of each hidden layer is optimally adjusted based on the proposed IGW-HHO algorithm within a range of 5 to 255 to balance scheduling accuracy and computational complexity. Each hidden layer uses the ReLU activation function to enhance convergence and prevent the vanishing gradient problem, and the output layer adopts a linear activation function to produce the final task-to-VM scheduling decision. The Adam optimizer is used to train the DNN with a learning rate of 0.001, a batch size of 32, and 100 epochs, and it minimizes the Mean Square Error (MSE) loss. To reduce overfitting and improve generalization performance, dropout regularization (rate = 0.3) and batch normalization are implemented at the end of every hidden layer. Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) are used to measure model performance. Table 2 briefly demonstrates the full architecture of the optimized DNN model.
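As a rough illustration of the described forward structure (three ReLU hidden layers followed by a linear output), the following numpy sketch uses randomly initialized placeholder weights and a hypothetical hidden size of 64 (within the 5–255 range that IGW-HHO would tune); it is not the trained model, and dropout/batch normalization are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def init_layer(n_in, n_out):
    # Small random weights stand in for the trained parameters.
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

# Input features per Section 3: task length, estimated execution time,
# priority, and VM execution capacity. Hidden sizes are placeholders.
sizes = [4, 64, 64, 64, 1]
layers = [init_layer(a, b) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for w, b in layers[:-1]:
        x = relu(x @ w + b)        # hidden layers: ReLU activation
    w, b = layers[-1]
    return x @ w + b               # output layer: linear activation

pred = forward(np.ones((8, 4)))    # batch of 8 task-VM feature vectors
```

In practice the network would be trained with Adam (learning rate 0.001, batch size 32, 100 epochs) against an MSE loss, as stated above.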

4.2. Optimized DNN Model for Solution Generation

The DNN architecture described in Section 4.1 serves as the base learning model, in which the proposed IGW-HHO algorithm dynamically optimizes the hidden neuron count to enhance task scheduling performance. In general, DNNs are prone to overfitting and involve computational offloading when implementing task scheduling in cloud computing; they also require a large amount of data to provide good outcomes, which increases computational complexity and, in turn, the cost to cloud users. To overcome these issues, the proposed model employs the IGW-HHO algorithm to provide the finest solutions: by optimally tuning the number of hidden neurons, the optimized DNN reduces complexity and evades offloading and overfitting issues for tasks. The potential of the optimized DNN is thus to mitigate errors and costs and enhance scheduling performance, generating optimized solutions that are followed by the assignment of tasks to VMs. The optimized DNN-assisted task scheduling consists of different layers and multiple neurons, in which every layer processes the training phase with the aid of tasks and VMs; the numbers of tasks and virtual machines, denoted by N_T and N_M, respectively, are used for the training and testing phases of the neural network. During the training phase, random solutions are given to the input layer, where tasks and VMs are processed by the different layers and neurons with activation functions, weights, and bias terms. Figure 3 demonstrates the optimized DNN model for solution generation.
The best solution can be generated while training the model with multiple neurons. Further, the processing time to provide the solution is computed, which is given in Equation (5).
PT = [ PT_{1,1}   ⋯   PT_{1,n}
       ⋮              ⋮
       PT_{s,1}   ⋯   PT_{s,n} ]
Here, PT refers to the processing-time matrix with s solutions and n neurons. The objective function of the optimized DNN model is derived using Equation (6).
F_obj = min_{H_n^{DNN}} (RMSE)
RMSE is selected as the primary optimization objective because it strongly penalizes large prediction errors, which directly influence task–VM allocation quality and makespan, while MAE is used only as a complementary evaluation metric. In the above equation, the hidden neuron count is defined as H_n^{DNN}, which is optimized by the proposed IGW-HHO algorithm and lies in the range between 5 and 255.
Here, RMSE is used for evaluating the error measure for the scheduling model. It is expressed in Equation (7).
RMSE = √( (1/N) Σ_{j=1}^{N} (C_j^{actual} − C_j^{predicted})² )
MAE is a measure of errors between the final observed value and true value, which is derived using Equation (8).
MAE = (1/N) Σ_{j=1}^{N} | C_j^{actual} − C_j^{predicted} |
where C_j^{actual} represents the actual scheduling cost derived from the objective function, C_j^{predicted} denotes the corresponding DNN-predicted cost, and N is the total number of task–VM samples.
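Equations (7) and (8) can be computed directly; the cost values below are hypothetical.

```python
import numpy as np

def rmse(actual, predicted):
    # Equation (7): root mean square error over N samples
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((a - p) ** 2)))

def mae(actual, predicted):
    # Equation (8): mean absolute error over N samples
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(a - p)))

# Hypothetical actual vs. DNN-predicted scheduling costs
actual = [10.0, 12.0, 9.0]
predicted = [11.0, 12.0, 8.0]
# errors are (-1, 0, 1): MAE = 2/3, RMSE = sqrt(2/3)
```

Because squaring magnifies large deviations, RMSE is always ≥ MAE on the same data, which is consistent with its choice as the stricter primary objective above.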

5. Development of Hybrid Meta-Heuristic Algorithm for Task Scheduling in Cloud Computing

5.1. Proposed Optimization

The proposed IGW-HHO algorithm is mainly used to determine the optimum solution for improving scheduling performance. Grey Wolf Optimization (GWO) has the benefits of faster convergence and lower memory usage, whereas Horse Herd Optimization (HHO) can be used in discrete and sequential optimization tasks. In spite of these merits, both algorithms have some weaknesses, including a lack of resistance to local optima and the inability to directly derive objective functions. To overcome these challenges, the two algorithms are combined in a hybrid manner with the help of random vectors, which leads to the proposed IGW-HHO algorithm. The trade-off between exploration and exploitation is also greatly improved through hybridization in optimization. The exploitative power of HHO in adapting and exploiting is combined with the fast convergence properties of GWO; thus, the proposed algorithm overcomes the drawbacks of premature convergence and local optimum stagnation. Consequently, IGW-HHO has better convergence speed, lower time complexity, and better optimization of objective functions.
IGW-HHO is also used together with the optimized DNN in the proposed framework, which optimizes performance in task scheduling. The DNN computes the predicted cost of scheduling task–VM pairs according to task characteristics and VM capacities, and the predictions are optimized against an objective function based on RMSE. The obtained error value is used as the fitness of the respective candidate solution in the IGW-HHO optimization loop. During execution, the algorithm initializes the population and maximum iterations, calculates random control vectors, and determines the best-performing (alpha) and second-best (beta) solutions. When the random control value is greater than a preset threshold, the solution update is implemented using GWO to enhance global exploration; otherwise, HHO is used to enhance local exploitation.
The GWO stage updates solution positions using the alpha, beta, and gamma wolves, whereas the HHO stage refines solutions using herd fitness-based behaviors such as grazing, defense, hierarchy, sociability, imitation, and roaming among horses of various ages. This closed-loop interaction continuously refines the DNN structure and the prediction accuracy used by the IGW-HHO algorithm, resulting in efficient decision making in task–VM scheduling. By combining the faster convergence and reduced storage of conventional GWO with the strength of HHO in solving discrete and sequential optimization problems, the hybrid algorithm addresses limitations such as slow convergence and local-optima stagnation. This synergistic approach harnesses the strengths of both algorithms, enhancing exploration for thorough coverage of the solution space and exploitation for efficiently deriving optimal solutions; the resulting advantages are higher convergence speed, reduced time complexity, and the ability to derive objective functions.
The proposed IGW-HHO algorithm aims to optimize task scheduling performance by combining the strengths of conventional Grey Wolf Optimization (GWO) and Horse Herd Optimization (HHO) algorithms. Initially, the algorithm initializes the total population size and maximum number of iterations. It then computes random vectors to indicate positions for targeting prey, where the fittest solution is identified as alpha and the second-best solution as beta. If a random value exceeds a predefined threshold, the solution is updated using GWO; otherwise, HHO is employed. In the GWO phase, inspired by wolf behavior, positions are updated using alpha, beta, and gamma wolves, and the hunting position is determined using coefficient vectors. In the HHO phase, inspired by horse herd behavior, positions are updated based on fitness evaluations and arranged according to ascending fitness values. Motion vectors are computed for horses of different ages and iterations, reflecting various behaviors such as grazing, defense mechanisms, hierarchy, sociability, imitation, and roaming. The steps to be explored are as follows:
Step 1. Initialization: The proposed algorithm mimics the behavior of encircling and hunting prey. In this step, the total population size and the maximum number of iterations are assigned: G refers to the total population and I_max to the total number of iterations. Here, the population is defined as the resultant solutions of the DNN model.
Step 2. Random vector computation: Random vectors are determined to indicate the position for targeting the prey. The fittest solution is designated alpha (α) and the second-best solution is designated beta (β). Two random vectors, R_1 and R_2, are formulated using the best alpha and beta scores and the fitness value, as expressed in Equations (9) and (10):
R_1 = best(α_Score) / mean(fit)
R_2 = best(β_Score) / mean(fit)
The result of two random values is used to estimate the average value, which is given in Equation (11):
R = (R_1 + R_2) / 2
Using the above equation, the value of R is computed. If R > 0.5, the solution is updated using the GWO algorithm; otherwise, the HHO algorithm is employed to update the position.
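Step 2 and the branching rule can be sketched as follows; the alpha/beta scores and population fitness values are illustrative, and the interpretation of Equations (9) and (10) as score-over-mean-fitness ratios follows the formulas above.

```python
import numpy as np

def control_value(alpha_score, beta_score, fitness):
    """Equations (9)-(11): scale the two best scores by the mean
    population fitness, then average them into the control value R."""
    mean_fit = float(np.mean(fitness))
    r1 = alpha_score / mean_fit    # Equation (9)
    r2 = beta_score / mean_fit     # Equation (10)
    return (r1 + r2) / 2.0         # Equation (11)

# Hypothetical fitness values (lower is better); mean = 1.1
fitness = [0.8, 1.0, 1.2, 1.4]
R = control_value(0.8, 1.0, fitness)
update = "GWO" if R > 0.5 else "HHO"   # branching rule of Step 2
```

Here R = (0.8 + 1.0) / (2 × 1.1) ≈ 0.82 > 0.5, so this iteration would take the GWO exploration branch.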
Step 3. Exploration using GWO: The GWO [45] algorithm is inspired by the natural behavior of wolves. Grey wolves have the potential to detect the location of prey and encircle it. This hunting phase is carried out by alpha, beta, and gamma wolves. By using three obtained best solutions, the position is updated using Equation (12).
Ŵ_{i+1} = (Ŵ_1 + Ŵ_2 + Ŵ_3) / 3
Here, W ^ 1 , W ^ 2 , and W ^ 3 represent the position vectors of alpha, beta, and gamma solutions. They are expressed in Equation (13):
Ŵ_1 = Ŵ_a − Û_1 · V̂_a
Ŵ_2 = Ŵ_b − Û_2 · V̂_b
Ŵ_3 = Ŵ_c − Û_3 · V̂_c
Here, V̂_a, V̂_b, and V̂_c define the hunting positions relative to the prey, evaluated in Equation (14):
V̂_a = | Ŷ_1 · Ŵ_a − Ŵ |
V̂_b = | Ŷ_2 · Ŵ_b − Ŵ |
V̂_c = | Ŷ_3 · Ŵ_c − Ŵ |
Here, Û and Ŷ are coefficient vectors derived using Equation (15):
Û = 2d · R_1 − d,   Ŷ = 2 · R_2
Thus, the position is updated and acquires the new optimal solution.
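The GWO update of Equations (12)–(15) can be sketched as follows; the leader positions and the control parameter d are illustrative (in standard GWO, d decays linearly from 2 to 0 over the iterations).

```python
import numpy as np

rng = np.random.default_rng(1)

def gwo_update(w, w_a, w_b, w_c, d):
    """Move a solution w toward the alpha, beta, and gamma leaders.
    For each leader: U = 2*d*r1 - d and Y = 2*r2 (Equation (15)),
    V = |Y*leader - w| (Equation (14)), candidate = leader - U*V
    (Equation (13)); the new position averages the three candidates
    (Equation (12))."""
    parts = []
    for leader in (w_a, w_b, w_c):
        r1, r2 = rng.random(w.shape), rng.random(w.shape)
        U = 2.0 * d * r1 - d
        Y = 2.0 * r2
        V = np.abs(Y * leader - w)
        parts.append(leader - U * V)
    return sum(parts) / 3.0

w = np.array([0.5, 0.5])                       # current solution
w_new = gwo_update(w, np.array([1.0, 1.0]),    # alpha
                   np.array([0.9, 0.8]),       # beta
                   np.array([0.7, 0.6]),       # gamma
                   d=0.5)
```

Averaging the three leader-guided candidates keeps the search biased toward the best regions found so far while retaining stochastic exploration through r1 and r2.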
Step 4. Exploitation using HHO: The HHO [46] algorithm mimics the behavior of a horse herd. Fitness evaluation is performed, and the horses are arranged in ascending order of fitness. The positions are updated with respect to the horses' age groups a_1, a_2, a_3, and a_4. Equation (16) illustrates the position update of the horses.
H_g^(A,i) = H_g^(A,i−1) + F_g^(A,i)
In the aforementioned equation, A and i denote the age group of the horse and the current iteration, respectively, and F_g^(A,i) denotes the velocity of each horse. The motion vectors are then computed for horses in every age group, as expressed in Equation (17).
F_g^(a1,i) = J_g^(a1,i) + K_g^(a1,i)
F_g^(a2,i) = J_g^(a2,i) + L_g^(a2,i) + M_g^(a2,i) + K_g^(a2,i)
F_g^(a3,i) = J_g^(a3,i) + L_g^(a3,i) + N_g^(a3,i) + O_g^(a3,i) + M_g^(a3,i) + K_g^(a3,i)
F_g^(a4,i) = J_g^(a4,i) + N_g^(a4,i) + O_g^(a4,i)
The individual motion vectors used in Equation (17) are mathematically defined in Equation (18):
J_g^(a,i) = r_1 · (X_best − X_i)
K_g^(a,i) = r_2 · (X_mean − X_i)
L_g^(a,i) = r_3 · (X_leader − X_i)
M_g^(a,i) = r_4 · (X_rand − X_i)
N_g^(a,i) = r_5 · μ(0,1)
O_g^(a,i) = r_6 · (UB − LB)
where X_i denotes the current position of the i-th horse (solution), X_best is the global best solution, X_mean is the mean position of the population, X_leader represents the dominant horses determined by fitness ranking, X_rand is a randomly selected solution, UB and LB are the upper and lower bounds of the search space, r_1, …, r_6 ∈ [0, 1] are uniformly distributed random numbers, and μ(0,1) is Gaussian noise that maintains exploration.
Here, the terms J_g^(A,i), K_g^(A,i), L_g^(A,i), M_g^(A,i), N_g^(A,i), and O_g^(A,i) mathematically model the grazing, defense, hierarchy, sociability, imitation, and roaming behaviors of horses, respectively, with their influence varying across age groups and iterations as defined in Equation (17). The pseudocode of the proposed IGW-HHO algorithm is presented in Algorithm 1. The time complexity is expressed as O(M · n), where M is the maximum number of iterations and n is the number of tasks.
Algorithm 1: Proposed IGW-HHO algorithm
Initialize the population of G solutions and the maximum iteration I_max
Compute the random vectors R_1 and R_2
Determine the objective function
For (i < I_max)
Compute the control value R for the solutions using Equation (11)
If ( R > 0.5 )
Solution update by GWO
Search agents are alpha, beta, and gamma
Calculate the fitness of each agent
Update the position of each search agent using Equation (12)
Update the position for alpha, beta, and gamma wolves
Else
Solution update by HHO
Assume four different age groups for horses
Update the position vector using Equation (16)
Using Equation (17), the velocity is computed
End if
Obtain the optimal solution
i = i + 1 ;
End for
Return the best solution
The flow chart diagram of the proposed IGW-HHO algorithm is demonstrated in Figure 4.
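The HHO exploitation step (Equations (16)–(18)) can be sketched for a single age-a2 horse as follows; the positions are illustrative, and the bound clipping in the position update is an added safety assumption, not stated in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def hho_velocity(x, x_best, x_mean, x_leader, x_rand):
    """Motion terms of Equation (18), combined as in Equation (17)
    for an age-a2 horse: grazing, hierarchy, sociability, defense."""
    r = rng.random(4)
    J = r[0] * (x_best - x)      # grazing toward the global best
    K = r[1] * (x_mean - x)      # defense relative to the herd mean
    L = r[2] * (x_leader - x)    # hierarchy: follow dominant horses
    M = r[3] * (x_rand - x)      # sociability toward a random horse
    return J + L + M + K         # F_g^(a2,i) of Equation (17)

def hho_update(x, velocity, lb, ub):
    # Equation (16), with bound clipping added as a safety assumption.
    return np.clip(x + velocity, lb, ub)

x = np.zeros(3)                                   # current solution
v = hho_velocity(x, np.ones(3), 0.5 * np.ones(3),
                 0.8 * np.ones(3), rng.random(3))
x_new = hho_update(x, v, lb=np.zeros(3), ub=np.ones(3))
```

Older and younger age groups simply add or drop terms (imitation N and roaming O) per Equation (17), so the same helper structure covers all four groups.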

5.2. Derived Objective Function for Optimal Task Scheduling

Optimal task scheduling is performed by deriving the objective function. Task scheduling is mainly used to optimize certain objectives such as cost, execution time, machine capacity to process tasks, especially makespan, and so on. In general, scheduling algorithms contain a huge solution space that requires more time to identify the optimal solution. Thus, the proposed IGW-HHO algorithm is employed to determine the best solution for performing task-to-VM scheduling. Due to the determination of optimum results, the proposed scheduling approach minimizes the makespan, assisted by processing time in cloud computing. In the proposed model, the scheduling considers P tasks and Q VMs. The solution encoding of tasks and VMs using the proposed algorithm is represented in Figure 5.
In the above figure, scheduling is performed by allocating two, three, or more tasks to a single VM. A VM can become overloaded when many tasks submitted by cloud users are assigned to it; thus, scheduling capacity is an important objective, which is addressed through estimation of the makespan function. The term "makespan" is defined as the length of time that elapses between the start and end of the tasks. By reducing the makespan, the proposed model attains better scheduling performance: it reduces processing time, eases the capacity burden on machines, and, most importantly, enhances the resource utilization ratio. This remains feasible even when a large number of hosts participate, as tasks can be appropriately allocated to the respective VMs in the cloud environment. Thus, the objective function for optimally scheduling tasks is formulated in Equation (19).
M_span = max_{q = 1, 2, …, N_M} Σ_{p=1}^{N_T} ins_{p,q}
Here, N_T denotes the total number of tasks, N_M denotes the total number of virtual machines, and ins_{p,q} represents the execution time of the p-th task on the q-th virtual machine. The makespan is defined as the maximum cumulative execution time among all VMs, and minimizing this value leads to improved task scheduling performance.
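Equation (19) can be evaluated once a task-to-VM assignment is fixed; the sketch below assumes a simple assignment vector mapping each task to one VM (since the equation implicitly sums only the instances of tasks placed on each VM), with illustrative execution times.

```python
import numpy as np

def makespan(ins, assignment):
    """Equation (19): the makespan is the largest total execution
    time accumulated on any VM under a given task-to-VM assignment."""
    ins = np.asarray(ins, dtype=float)   # N_T x N_M execution times
    load = np.zeros(ins.shape[1])        # per-VM accumulated time
    for p, q in enumerate(assignment):   # task p runs on VM q
        load[q] += ins[p, q]
    return float(load.max())

# Hypothetical 3-task x 2-VM execution-time matrix
ins = [[4.0, 2.0], [3.0, 6.0], [5.0, 2.5]]
# Assign tasks 0 and 2 to VM 1, task 1 to VM 0:
# VM0 load = 3.0, VM1 load = 2.0 + 2.5 = 4.5 -> makespan 4.5
ms = makespan(ins, [1, 0, 1])
```

A scheduler that instead placed all three tasks on one VM would accumulate a much larger load there, which is exactly what minimizing the maximum per-VM load penalizes.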

6. Results and Analysis

6.1. Experimental Setup

The proposed task scheduling model was implemented using MATLAB 2020a, and the simulation results were studied. Several metrics were employed to assess scheduling performance: MEP, MASE, SMAPE, MAE, RMSE, One-Norm, Two-Norm, and Infinity-Norm. The proposed algorithm considered a population size of 10 and a maximum number of iterations of 10. A total of 100 VMs were used, and the number of tasks ranged from 200 to 1200, with tasks and VMs segmented into five variations of limits. The different scenarios and cases are given in Table 3 and Table 4. The proposed model was compared with traditional heuristic algorithms such as PSO [13], GWO [45], HHO [46], and WOA [47].
In order to make the results statistically reliable and to compare the results fairly, every experiment was carried out in several independent runs under the same calculation conditions. Each algorithm was tested with the same population size (10) and the same stopping condition (a maximum of 10 iterations), thus having equal computational resources. The different task variation scenarios and VM variation cases considered in the experiments are presented in Table 3 and Table 4, respectively. The reported results indicate the cumulative performance over multiple runs, and statistical metrics including best, mean, and standard deviation are reported in Tables 7 and 8, indicating the stability and consistency of the proposed DNN-IGW-HHO algorithm.

6.2. Performance Metrics

Some of the validating metrics used for the task scheduling model are expressed below.
MEP: The mean error percentage is evaluated as the average of the percentage errors between the actual and final values, using Equation (20).
MEP = (100%/d) Σ_{j=1}^{d} (b_j − a_j) / b_j
SMAPE: Symmetric mean absolute percentage error is an accuracy measure based on percentage (or relative) errors. It is given in Equation (21).
SMAPE = (100%/d) Σ_{j=1}^{d} |a_j − b_j| / ((|a_j| + |b_j|)/2)
MASE: The mean absolute scaled error measures forecast accuracy as the mean absolute forecast error scaled by a baseline error. It is derived in Equation (22).
MASE = (1/d) Σ_{j=1}^{d} | (b_j − a_j) / b_j |
MAE: It is derived in Equation (8).
RMSE: It is given in Equation (7).
One-Norm: It is defined as the sum of the magnitudes of the vector elements, as given in Equation (23).
1N = Σ_j |m_j|
Two-Norm: It is the shortest distance to go from one point to another. It can be a Euclidean norm. It is expressed in Equation (24).
2N = ( Σ_{j=1}^{d} m_j² )^{1/2}
Infinity-Norm: The length of the vector can be evaluated using the maximum norm. It is expressed in Equation (25).
InfN = max_{1 ≤ j ≤ d} |m_j|
In all the above equations, m_j denotes the elements of the error vector, d is the number of observations, and a and b are the final and actual values, respectively.
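Several of the above metrics can be sketched directly, using d observations and an illustrative error vector; the a/b naming follows the text (a = final value, b = actual value).

```python
import numpy as np

def mep(actual, final):
    # Equation (20): mean error percentage over d observations
    b, a = np.asarray(actual, float), np.asarray(final, float)
    return 100.0 * np.mean((b - a) / b)

def smape(actual, final):
    # Equation (21): symmetric mean absolute percentage error
    b, a = np.asarray(actual, float), np.asarray(final, float)
    return 100.0 * np.mean(np.abs(a - b) / ((np.abs(a) + np.abs(b)) / 2.0))

def norms(m):
    # Equations (23)-(25): one-, two-, and infinity-norms of a vector
    m = np.asarray(m, float)
    return (float(np.sum(np.abs(m))),
            float(np.sqrt(np.sum(m ** 2))),
            float(np.max(np.abs(m))))

one, two, inf = norms([3.0, -4.0])
# one = 7.0, two = 5.0, inf = 4.0
```

SMAPE's denominator averages the magnitudes of both values, which makes it symmetric in over- and under-prediction, unlike MEP.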

6.3. Convergence Analysis on Task Variation over Heuristic Algorithms

The convergence analysis of the proposed scheduling model is demonstrated in Figure 6 with respect to the number of iterations and different scenarios. In Scenario 4, at the 994th iteration, the proposed model achieves cost function values that are 1%, 3.5%, 2.7%, and 7.5% lower than those of PSO, WOA, GWO, and HHO, respectively, as shown in Figure 6h. Similarly, Figure 6i shows the convergence analysis for Scenario 5, where the proposed DNN-IGW-HHO model attains a cost function value 0.3% lower than PSO, 1% lower than WOA, 2.3% lower than GWO, and 1.1% lower than HHO at the 990th iteration. Thus, the task-variation results indicate that the proposed model obtains a lower cost function value, leading to higher convergence speed and better scheduling performance.

6.4. Convergence Analysis on VM Variation over Heuristic Algorithms

The convergence analysis of the proposed scheduling model is demonstrated in Figure 7 with respect to the number of iterations and different VM cases. In Case 1, with 1000 iterations, the value obtained by the proposed IGW-HHO algorithm is 0.2% lower than PSO, 2% lower than WOA, 0.15% lower than GWO, and 0.6% lower than HHO, as demonstrated in Figure 7a. Likewise, Figure 7e shows the convergence analysis of VM variation Case 4, in which the proposed model attains a cost function value 1% lower than PSO, 3.5% lower than WOA, 2.7% lower than GWO, and 0.7% lower than HHO at the 998th iteration. The analysis of Figure 7 shows that the proposed DNN-IGW-HHO model outperforms the existing algorithms in terms of convergence speed. Thus, the proposed scheduling model ensures that tasks are assigned to VMs effectively.

6.5. Overall Analysis of Task Variation over Heuristic Algorithms

In all scenarios, the proposed IGW-HHO consistently outperforms other algorithms in terms of most performance metrics, including MEP, SMAPE, MASE, MAE, RMSE, and all norm metrics (L1-NORM, L2-NORM, and L-INF-NORM). This indicates that IGW-HHO achieves better task scheduling results with lower error rates and improved accuracy compared to PSO, WOA, GWO, and HHO algorithms. Specifically, IGW-HHO demonstrates its robustness by consistently achieving the lowest error rates across all scenarios. However, it is essential to acknowledge that, while IGW-HHO shows promising results in these scenarios, its performance may vary in other scenarios or real-world applications. Table 5 evaluates the overall performance analysis of the proposed task scheduling model with various scenarios of task variation and their limits. Standard error metrics are also used for validating the performance and are compared with conventional algorithms. When considering the third scenario of task variation, the proposed algorithm achieves a minimal value of MASE, which is 17.15% lower than PSO, 0.64% lower than WOA, and 13.86% and 11.02% lower than the GWO and HHO algorithms, respectively. Subsequently, the analysis of the L1-norm is given in the table for Scenario 5, where the values are 26.64% for PSO, 1.9% for WOA, 7.57% for GWO, and 3.84% for HHO, which are higher than those of the proposed algorithm. Thus, the proposed DNN-IGW-HHO algorithm acquires minimal error values, leading to enhanced performance in task scheduling.

6.6. Overall Analysis of VM Variation over Heuristic Algorithms

The overall performance analysis of the proposed task scheduling model with different cases of VM variation and their limits is given in Table 6. When considering the second case of VM variation, the proposed algorithm achieves a minimal value of MAE, which is 31% lower than PSO, 20.16% lower than WOA, and 41.72% and 9.11% lower than the GWO and HHO algorithms, respectively. Subsequently, the analysis of RMSE is given in the table for Case 1, where the values are 15.6% for PSO, 42.5% for WOA, 54.8% for GWO, and 50.5% for HHO, which are higher than those of the proposed algorithm. Thus, the proposed DNN-IGW-HHO model acquires lower MAE and RMSE values, demonstrating effective task allocation to VMs in cloud computing.

6.7. Statistical Analysis of Task Variation over Heuristic Algorithms

The statistical analysis provided in Table 7 and Table 8 offers valuable insights into the performance of the proposed DNN-IGW-HHO algorithm compared with existing heuristic algorithms across different scenarios of task and VM variation. Measures such as best, mean, and standard deviation help evaluate scheduling performance comprehensively. In both task and VM variation scenarios, the proposed DNN-IGW-HHO consistently outperforms the other algorithms across all metrics: its lower best, mean, and standard deviation values indicate better task scheduling outcomes with less variability than PSO, WOA, GWO, and HHO. The consistency of results across the various task-variation scenarios in Table 7 highlights the robustness and effectiveness of the DNN-IGW-HHO algorithm in handling different complexities and strengthens confidence in its applicability to real-world settings.

6.8. Statistical Analysis of VM Variation over Heuristic Algorithms

The statistical analysis of the proposed scheduling model is given in Table 8. Measures like best, mean, and standard deviation are employed to evaluate the scheduling performance. By varying the different cases of VM variations, the proposed DNN-IGW-HHO algorithm attains the desired outcome in contrast with existing algorithms.

7. Conclusions and Future Direction

This paper has presented an optimal task scheduling model using a hybrid meta-heuristic algorithm and a DNN. The main intention of the proposed model was to assign tasks given by users to VMs: in the cloud environment, cloud users send tasks to VMs, and objective criteria were derived to improve scheduling performance. Consequently, an optimized DNN was developed, in which the hidden neuron count was optimized by the proposed IGW-HHO algorithm. The optimized DNN was mainly used to reduce the error rate and resolve the computation-offloading problem. The best solutions generated by the optimized DNN were fed into the novel IGW-HHO algorithm; this combination of the DNN with IGW-HHO effectively enhanced task scheduling performance, demonstrating significant cost reductions and higher convergence rates. The proposed IGW-HHO algorithm was designed using both the GWO and HHO algorithms and was used to yield optimum solutions for performance enhancement. Furthermore, objective functions were derived for optimal task scheduling to mitigate the makespan of task-to-VM allocations. The experimental results were validated using heterogeneous strategies across different scenarios of task variation and various cases of VM variation. Through the convergence analysis of VM variation, the proposed DNN-IGW-HHO model attained cost function values 1% lower than PSO, 3.5% lower than WOA, 2.7% lower than GWO, and 0.7% lower than HHO. Hence, the proposed scheduling model rendered higher convergence results for assigning tasks to VMs effectively. The current study assumes static task arrivals and homogeneous VM configurations, and the model is evaluated under simulated environments, which may limit direct generalization to highly dynamic real-world cloud systems.
Future work will explore the scalability and adaptability of the proposed DNN-IGW-HHO model for larger and dynamic cloud environments, providing insights into its real-world applicability. Investigating the integration of additional optimization techniques or considering multi-objective optimization objectives could further enhance the model’s performance and versatility.

Author Contributions

Conceptualization, M.K. and R.K.; methodology, M.K. and B.K.G.; software, A.K.; validation, A.K., R.K., and A.S.; formal analysis, M.K.; investigation, R.K.; resources, M.K.; data curation, A.S.; writing—original draft preparation, M.K.; writing—review and editing, K.K. and R.K.; visualization, K.K. and B.K.G.; supervision, K.K. and A.K.; project administration, R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Panda, S.K.; Jana, P.K. Efficient task scheduling algorithms for heterogeneous multi-cloud environment. J. Supercomput. 2015, 71, 1505–1533. [Google Scholar] [CrossRef]
  2. Munir, E.U.; Li, J.; Shi, S. QoS sufferage heuristic for independent task scheduling in grid. Inf. Technol. J. 2007, 6, 1166–1170. [Google Scholar] [CrossRef]
  3. Kumar, A.; Kumar, M.; Mahapatra, R.P.; Bhattacharya, P.; Le, T.T.H.; Verma, S.; Kavita; Mohiuddin, K. Flamingo-Optimization-Based Deep Convolutional Neural Network for IoT-Based Arrhythmia Classification. Sensors 2023, 23, 4353. [Google Scholar] [CrossRef]
  4. Kaur, K.; Verma, S.; Bansal, A. Iot big data analytics in healthcare: Benefits and challenges. In Proceedings of the 6th International Conference on Signal Processing, Computing and Control (ISPCC), Waknaghat, India, 7–9 October 2021; IEEE: New York, NY, USA, 2021; pp. 176–181. [Google Scholar]
  5. He, X.; Sun, X.; Von Laszewski, G. QoS guided min-min heuristic for grid task scheduling. J. Comput. Sci. Technol. 2003, 18, 442–451. [Google Scholar]
  6. Duan, K.; Fong, S.; Siu, S.W.; Song, W.; Guan, S.S.U. Adaptive incremental genetic algorithm for task scheduling in cloud environments. Symmetry 2018, 10, 168. [Google Scholar] [CrossRef]
  7. Milan, S.T.; Rajabion, L.; Darwesh, A.; Hosseinzadeh, M.; Navimipour, N.J. Priority-based task scheduling method over cloudlet using a swarm intelligence algorithm. Clust. Comput. 2020, 23, 663–671. [Google Scholar] [CrossRef]
  8. Topcuoglu, H.; Hariri, S.; Wu, M.Y. Performance-effective and low-complexity task scheduling for heterogeneous computing. IEEE Trans. Parallel Distrib. Syst. 2002, 13, 260–274. [Google Scholar] [CrossRef]
  9. Kfatheen, S.V.; Marimuthu, A. ETS: An efficient task scheduling algorithm for grid computing. Adv. Comput. Sci. Technol. 2017, 10, 2911–2925. [Google Scholar]
  10. Zhang, Y.; Xu, B. Task scheduling algorithm based-on QoS constrains in cloud computing. Int. J. Grid Distrib. Comput. 2015, 8, 269–280. [Google Scholar] [CrossRef]
  11. Sinnen, O.; Sousa, L.A. Communication contention in task scheduling. IEEE Trans. Parallel Distrib. Syst. 2005, 16, 503–515. [Google Scholar] [CrossRef]
  12. Jang, S.H.; Kim, T.Y.; Kim, J.K.; Lee, J.S. The study of genetic algorithm-based task scheduling for cloud computing. Int. J. Control Autom. 2012, 5, 157–162. [Google Scholar]
  13. Agarwal, M.; Srivastava, G.M.S. A PSO algorithm based task scheduling in cloud computing. Int. J. Appl. Metaheuristic Comput. 2019, 10, 1–17. [Google Scholar] [CrossRef]
  14. Baital, K.; Chakrabarti, A. Dynamic scheduling of real-time tasks in heterogeneous multicore systems. IEEE Embed. Syst. Lett. 2018, 11, 29–32. [Google Scholar] [CrossRef]
  15. Lathigara, A.; Aluvalu, R. Clustering based EO with MRF technique for effective load balancing in cloud computing. Int. J. Pervasive Comput. Commun. 2021, 20, 168–192. [Google Scholar]
  16. Wang, G.; Yu, H.C. Task scheduling algorithm based on improved Min-Min algorithm in cloud computing environment. In Applied Mechanics and Materials; Trans Tech Publications Ltd.: Baech, Switzerland, 2013; Volume 303, pp. 2429–2432. [Google Scholar]
  17. Ghasemian Koochaksaraei, M.H.; Toroghi Haghighat, A.; Rezvani, M.H. A bartering double auction resource allocation model in cloud environments. Concurr. Comput. Pract. Exp. 2022, 34, e7024. [Google Scholar] [CrossRef]
  18. Kumar, R.; Bhagwan, J. A comparative study of meta-heuristic-based task scheduling in cloud computing. In Artificial Intelligence and Sustainable Computing: Proceedings of ICSISCET 2020; Springer: Singapore, 2022; pp. 129–141. [Google Scholar]
  19. Khademi Dehnavi, M.; Broumandnia, A.; Hosseini Shirvani, M.; Ahanian, I. A hybrid genetic-based task scheduling algorithm for cost-efficient workflow execution in heterogeneous cloud computing environment. Clust. Comput. 2024, 27, 10833–10858. [Google Scholar] [CrossRef]
  20. Kumar, M.; Kumar, A.; Kumar, S.; Chauhan, P.; Selvarajan, S. An African vulture optimization algorithm based energy efficient clustering scheme in wireless sensor networks. Sci. Rep. 2024, 14, 31412. [Google Scholar] [CrossRef]
  21. Madni, S.H.H.; Abd Latiff, M.S.; Abdullahi, M.; Abdulhamid, S.I.M.; Usman, M.J. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment. PLoS ONE 2017, 12, e0176321. [Google Scholar] [CrossRef]
  22. Zhan, S.; Huo, H. Improved PSO-based task scheduling algorithm in cloud computing. J. Inf. Comput. Sci. 2012, 9, 3821–3829. [Google Scholar]
  23. Kumar, M.; Mukherjee, P.; Verma, S.; Kavita; Shafi, J.; Wozniak, M.; Ijaz, M.F. A smart privacy preserving framework for industrial IoT using hybrid meta-heuristic algorithm. Sci. Rep. 2023, 13, 5372. [Google Scholar] [CrossRef]
  24. Kfatheen, S.V.; Banu, M.N. MiM-MaM: A new task scheduling algorithm for grid environment. In Proceedings of the 2015 International Conference on Advances in Computer Engineering and Applications, Ghaziabad, India, 19–20 March 2015; IEEE: New York, NY, USA, 2015; pp. 695–699. [Google Scholar]
  25. Li, G.; Wu, Z. Ant colony optimization task scheduling algorithm for SWIM based on load balancing. Future Internet 2019, 11, 90. [Google Scholar] [CrossRef]
  26. Arif, M.S.; Iqbal, Z.; Tariq, R.; Aadil, F.; Awais, M. Parental prioritization-based task scheduling in heterogeneous systems. Arab. J. Sci. Eng. 2019, 44, 3943–3952. [Google Scholar] [CrossRef]
  27. Kothi Laxman, R.R.; Lathigara, A.; Aluvalu, R.; Viswanadhula, U.M. PGWO-AVS-RDA: An intelligent optimization and clustering based load balancing model in cloud. Concurr. Comput. Pract. Exp. 2022, 34, e7136. [Google Scholar] [CrossRef]
  28. Raghavender Reddy, K.L.; Lathigara, A.; Aluvalu, R.; Viswanadhula, U.M. Scheduling the Tasks and Balancing the Loads in Cloud Computing Using African Vultures-Aquila Optimization Model. In Proceedings of the International Conference on Intelligent Computing and Networking, Hangzhou, China, 17–18 November 2023; Springer Nature: Singapore, 2023; pp. 197–219. [Google Scholar]
  29. Zhang, A.N.; Chu, S.C.; Song, P.C.; Wang, H.; Pan, J.S. Task scheduling in cloud computing environment using advanced phasmatodea population evolution algorithms. Electronics 2022, 11, 1451. [Google Scholar] [CrossRef]
  30. Guo, X. Multi-objective task scheduling optimization in cloud computing based on fuzzy self-defense algorithm. Alex. Eng. J. 2021, 60, 5603–5609. [Google Scholar] [CrossRef]
  31. Al-Maytami, B.A.; Fan, P.; Hussain, A.; Baker, T.; Liatsis, P. A task scheduling algorithm with improved makespan based on prediction of tasks computation time algorithm for cloud computing. IEEE Access 2019, 7, 160916–160926. [Google Scholar] [CrossRef]
  32. Moon, Y.; Yu, H.; Gil, J.M.; Lim, J. A slave ants based ant colony optimization algorithm for task scheduling in cloud computing environments. Hum.-Centric Comput. Inf. Sci. 2017, 7, 28. [Google Scholar] [CrossRef]
  33. Sreenivasulu, G.; Paramasivam, I. Hybrid optimization algorithm for task scheduling and virtual machine allocation in cloud computing. Evol. Intell. 2021, 14, 1015–1022. [Google Scholar] [CrossRef]
  34. Jing, W.; Zhao, C.; Miao, Q.; Song, H.; Chen, G. QoS-DPSO: QoS-aware task scheduling for cloud computing system. J. Netw. Syst. Manag. 2021, 29, 5. [Google Scholar] [CrossRef]
  35. Shirvani, M.H.; Talouki, R.N. A novel hybrid heuristic-based list scheduling algorithm in heterogeneous cloud computing environment for makespan optimization. Parallel Comput. 2021, 108, 102828. [Google Scholar] [CrossRef]
  36. Ajmal, M.S.; Iqbal, Z.; Khan, F.Z.; Ahmad, M.; Ahmad, I.; Gupta, B.B. Hybrid ant genetic algorithm for efficient task scheduling in cloud data centers. Comput. Electr. Eng. 2021, 95, 107419. [Google Scholar] [CrossRef]
  37. Seifhosseini, S.; Shirvani, M.H.; Ramzanpoor, Y. Multi-objective cost-aware bag-of-tasks scheduling optimization model for IoT applications running on heterogeneous fog environment. Comput. Netw. 2024, 240, 110161. [Google Scholar] [CrossRef]
  38. Dubey, K.; Sharma, S.C. A novel multi-objective CR-PSO task scheduling algorithm with deadline constraint in cloud computing. Sustain. Comput. Inform. Syst. 2021, 32, 100605. [Google Scholar] [CrossRef]
  39. Wei, X. Task scheduling optimization strategy using improved ant colony optimization algorithm in cloud computing. J. Ambient. Intell. Humaniz. Comput. 2020, 1–12. [Google Scholar] [CrossRef]
  40. Falsafain, H.; Heidarpour, M.R.; Vahidi, S. A branch-and-price approach to a variant of the cognitive radio resource allocation problem. Ad Hoc Netw. 2022, 132, 102871. [Google Scholar] [CrossRef]
  41. Ahadi, P.; Balasundaram, B.; Borrero, J.S.; Chen, C. Development and optimization of expected cross value for mate selection problems. Heredity 2024, 133, 113–125. [Google Scholar] [CrossRef] [PubMed]
  42. Saghezchi, A.; Kashani, V.G.; Ghodratizadeh, F. A Comprehensive Optimization Approach on Financial Resource Allocation in Scale-Ups. J. Bus. Manag. Stud. 2024, 6, 62. [Google Scholar] [CrossRef]
  43. Jin, W.; Rezaeipanah, A. Dynamic task allocation in fog computing using enhanced fuzzy logic approaches. Sci. Rep. 2025, 15, 18513. [Google Scholar] [CrossRef] [PubMed]
  44. Chen, Z.; Hu, J.; Chen, X.; Hu, J.; Zheng, X.; Min, G. Computation offloading and task scheduling for DNN-based applications in cloud-edge computing. IEEE Access 2020, 8, 115537–115547. [Google Scholar]
  45. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  46. MiarNaemi, F.; Azizyan, G.; Rashki, M. Horse herd optimization algorithm: A nature-inspired algorithm for high-dimensional optimization problems. Knowl.-Based Syst. 2021, 213, 106711. [Google Scholar]
  47. Chen, X.; Cheng, L.; Liu, C.; Liu, Q.; Liu, J.; Mao, Y.; Murphy, J. A WOA-based optimization approach for task scheduling in cloud computing systems. IEEE Syst. J. 2020, 14, 3117–3128. [Google Scholar] [CrossRef]
Figure 1. Diagrammatic representation of task scheduling over cloud computing.
Figure 2. Architecture diagram of proposed task scheduling model with optimized DNN and hybrid algorithm.
Figure 3. Diagrammatic representation of optimized DNN for solution generation.
Figure 4. Flow chart diagram of proposed IGW-HHO algorithm.
Figure 5. Solution diagram for task and VM using proposed IGW-HHO algorithm.
Figure 6. Convergence analysis of proposed task scheduling algorithm with different scenarios of task variation over existing algorithms in terms of (a) scenario 1, (b) Zoomed-in view of scenario 1, (c) scenario 2, (d) Zoomed-in view of scenario 2, (e) scenario 3, (f) Zoomed-in view of scenario 3, (g) scenario 4, (h) Zoomed-in view of scenario 4, (i) scenario 5, (j) Zoomed-in view of scenario 5.
Figure 7. Convergence analysis of proposed task scheduling algorithm with different cases of VM variation over existing algorithms in terms of (a) case 1, (b) Zoomed-in view of case 1, (c) case 2, (d) Zoomed-in view of case 2, (e) case 3, (f) Zoomed-in view of case 3, (g) case 4, (h) Zoomed-in view of case 4, (i) case 5, (j) Zoomed-in view of case 5.
Table 1. Features and challenges of existing task scheduling in cloud computing methods.
Author [Citation] | Methodology | Features | Challenges
Guo, 2021 [30] | Fuzzy self-defense algorithm
  • The rate of resource utilization on the VM is mostly adequate.
  • The deadline violation rate is low.
  • It is not benchmarked on real-world problems.
Al-Maytami et al., 2019 [31] | PTCT
  • The task execution time is minimized.
  • The overall makespan is minimized.
  • Dynamic scheduling is not considered for real-world application graphs.
Moon et al., 2017 [32] | ACO
  • The cloud server utilization is maximized.
  • The preprocessing overhead is minimal.
  • Heterogeneous clusters are not considered.
Sreenivasulu and Paramasivam, 2021 [33] | Hybrid optimization algorithm
  • It is effective with respect to memory utilization, bandwidth utilization, and resource utilization.
  • The load on the VMs is minimized.
  • It is not used with real-time workflows.
Jing et al., 2021 [34] | PSO
  • It efficiently handles IoT and terminal applications.
  • Users are offered satisfactory, timely services regarding their needs.
  • It does not introduce a prediction model for investigating host system logs.
Ajmal et al., 2021 [36] | ACO and GA
  • It is better in makespan and convergence optimization.
  • The load is detected on the VM.
  • The VMs are not scheduled on the server.
Dubey and Sharma, 2021 [38] | CR-PSO
  • It is much more economical and enhances overall cloud system performance.
  • Optimal solutions are generated with minimal makespan and execution time.
  • It does not work on dependent task scheduling.
Wei, 2020 [39] | ACO
  • The convergence time and completion time are minimized.
  • The resource utilization associated with the VMs is enhanced.
  • Energy consumption minimization and QoS assurance in cloud data centers are not investigated.
Jin et al., 2025 [43] | Enhanced fuzzy-logic-based dynamic task allocation (DTA-FLE) in a fog–cloud environment
  • Uses a hierarchical fuzzy logic mechanism to decide whether tasks execute on fog nodes or are offloaded to the cloud, improving deadline satisfaction and responsiveness for latency-sensitive applications.
  • Achieves lower execution time, better resource utilization, and reduced latency compared with conventional scheduling strategies in iFogSim-based experiments.
  • Focuses on fog–cloud IoT scenarios; does not extensively analyze pure large-scale cloud data center workloads or diverse multi-objective trade-offs such as explicit energy–cost–reliability optimization.
Table 2. Optimized DNN Configuration.
Parameter | Value
Architecture | Fully Connected DNN
Hidden Layers | 3
Hidden Neurons | Optimized (5–255)
Activation (Hidden) | ReLU
Activation (Output) | Linear
Optimizer | Adam
Learning Rate | 0.001
Batch Size | 32
Epochs | 100
Regularization | Dropout (0.3), Batch Norm
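A network with the shape given in Table 2 can be sketched, for illustration, as the following NumPy forward pass. The input width of 10 and hidden width of 64 are arbitrary example values (the hidden width is the quantity the IGW-HHO search would tune within the 5–255 range); Adam, dropout, and batch normalization appear in the configuration but are not implemented in this minimal sketch.

```python
import numpy as np

def build_mlp(layer_sizes, seed=0):
    """Weights and biases for a fully connected net; He init suits ReLU layers."""
    rng = np.random.default_rng(seed)
    params = []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out))
        params.append((W, np.zeros(n_out)))
    return params

def forward(params, x):
    """ReLU on the hidden layers, linear output, matching Table 2."""
    for W, b in params[:-1]:
        x = np.maximum(0.0, x @ W + b)
    W, b = params[-1]
    return x @ W + b

# Table 2 shape: input -> 3 hidden layers -> 1 output
params = build_mlp([10, 64, 64, 64, 1])
y = forward(params, np.ones((4, 10)))
```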
Table 3. Different scenarios for task variation and limits.
Scenarios | Task Limit
Scenario 1 | 200 to 400
Scenario 2 | 400 to 600
Scenario 3 | 600 to 800
Scenario 4 | 800 to 1000
Scenario 5 | 1000 to 1200
Table 4. Different cases for VM variation and their numbers.
Cases | Number of VMs
Case 1 | 20
Case 2 | 40
Case 3 | 60
Case 4 | 80
Case 5 | 100
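The makespan objective that constrains the scheduler is the completion time of the busiest VM under a given task-to-VM assignment. A minimal sketch follows; the task lengths and VM speeds are hypothetical example values, not data from the experiments.

```python
def makespan(assignment, task_lengths, vm_speeds):
    """Makespan = completion time of the busiest VM, where each VM's
    completion time is the total length of its tasks divided by its speed."""
    loads = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        loads[vm] += task_lengths[task]
    return max(load / speed for load, speed in zip(loads, vm_speeds))

# Hypothetical example: 5 tasks mapped onto 2 equal-speed VMs.
# VM0 receives lengths 4 and 3 (load 7); VM1 receives 2, 1, 2 (load 5).
ms = makespan([0, 1, 0, 1, 1], [4, 2, 3, 1, 2], [1.0, 1.0])
# -> 7.0
```

A scheduler that minimizes this quantity balances load across the VMs; the scenarios of Table 3 and the VM counts of Table 4 vary the sizes of `assignment` and `vm_speeds`.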
Table 5. Overall estimation of proposed task scheduling model for different scenarios of task variation over heuristic algorithms.
Terms | PSO | WOA | GWO | HHO | IGW-HHO
Scenario 1
MEP41.84941.61740.13135.331.817
SMAPE0.47830.47560.45860.40340.3636
MASE0.55970.38180.52680.49840.3882
MAE505.6283.19365.47272.26263.88
RMSE940.49611.01684.07527.26543.52
L1-NORM50562831.93654.72722.62638.8
L2-NORM2974.11932.22163.21667.41718.8
L-INF-NORM2156.91609.31663.81308.91407.1
Scenario 2
MEP17.67318.09318.29815.51513.543
SMAPE0.2020.20680.20910.17730.1548
MASE0.21260.10610.15040.13460.1253
MAE455.18125.24195.05177.18149.99
RMSE879.28344.75413.55390.32343.16
L1-NORM4551.81252.41950.51771.81499.9
L2-NORM2780.51090.21307.81234.31085.2
L-INF-NORM2066.21067.91134.61082.4959.3
Scenario 3
MEP10.20910.37111.0759.22927.9777
SMAPE0.11670.11850.12660.10550.0912
MASE0.16420.06090.10870.09020.0784
MAE411.5254.061193.05171.71154.64
RMSE769.35147.51415.83390.59363.02
L1-NORM4115.2540.611930.51717.11546.4
L2-NORM2432.9466.4613151235.21148
L-INF-NORM1823.1446.21126.11094.61016.2
Scenario 4
MEP7.94218.31688.9147.4496.2967
SMAPE0.09080.0950.10190.08510.072
MASE0.12920.06050.08010.06340.0533
MAE375.9170.948192.17162.45141.73
RMSE725.19187.07419.95369.48342.16
L1-NORM3759.1709.481921.71624.51417.3
L2-NORM2293.2591.5813281168.41082
L-INF-NORM1762.5556.781147.91023.3977
Scenario 5
MEP6.4236.80667.54776.23135.0543
SMAPE0.07340.07780.08630.07120.0578
MASE0.09160.04440.0680.05640.043
MAE343.1876.713152.46115.1595.953
RMSE691.57215.56334.19270.04237.32
L1-NORM3431.8959.531524.61151.5767.13
L2-NORM2186.9681.651056.8853.93750.46
L-INF-NORM1726.2661.44921.35767.27685.22
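The error measures reported in Tables 5 and 6 follow standard definitions; a sketch of the unambiguous ones is below (MEP and MASE additionally require a percentage base and a naive-forecast baseline, so they are omitted here).

```python
import numpy as np

def error_measures(actual, predicted):
    """MAE, RMSE, SMAPE, and the error-vector norms from Tables 5-6,
    using their conventional definitions."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    e = a - p
    return {
        "MAE": float(np.mean(np.abs(e))),
        "RMSE": float(np.sqrt(np.mean(e ** 2))),
        "SMAPE": float(np.mean(2.0 * np.abs(e) / (np.abs(a) + np.abs(p)))),
        "L1-NORM": float(np.sum(np.abs(e))),
        "L2-NORM": float(np.sqrt(np.sum(e ** 2))),
        "L-INF-NORM": float(np.max(np.abs(e))),
    }

# Illustrative values only
m = error_measures([10.0, 20.0, 30.0], [12.0, 18.0, 30.0])
```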
Table 6. Overall estimation of proposed task scheduling model for different cases of VM variation over heuristic algorithms.
Terms | PSO | WOA | GWO | HHO | IGW-HHO
Case 1
MEP15.02915.02314.82412.85311.495
SMAPE0.17180.17170.16940.14690.1314
MASE0.26571.22550.19140.16190.1329
MAE100.029.591936.38232.90527.621
RMSE171.7857.78970.14865.83815.25
L1-NORM1000.295.919363.82329.05276.21
L2-NORM543.2348.224221.83208.2182.75
L-INF-NORM383.425.446183.13173.72156.44
Case 2
MEP17.58518.13218.25915.44713.416
SMAPE0.2010.20720.20870.17650.1533
MASE0.21620.10580.17470.15320.123
MAE448.76158.24179.8147.19138.08
RMSE850.46437.79375.3330.68318.2
L1-NORM4487.61582.417981471.91380.8
L2-NORM2689.41384.41186.81045.71006.2
L-INF-NORM2024.61308.21019.6931.91898.92
Case 3
MEP10.15810.63811.1079.26868.0997
SMAPE0.11610.12160.12690.10590.0926
MASE0.17330.07010.09740.07910.0717
MAE404.57138.54196.81176.2155.06
RMSE747.49341.53424.05398.2363.87
L1-NORM4045.71385.41968.117621550.6
L2-NORM2363.8108013411259.21150.7
L-INF-NORM1755.1976.591151.21102.91028
Case 4
MEP8.09148.38448.88597.53076.2855
SMAPE0.09250.09580.10160.08610.0718
MASE0.12880.0590.08470.06880.0576
MAE389.76124.59196.44165.91136.9
RMSE750.86300.78431.85375.42325.81
L1-NORM3897.61245.91964.41659.11369
L2-NORM2374.4951.161365.61187.21030.3
L-INF-NORM1814856.031178.31034.4924.64
Case 5
MEP6.36646.69987.51936.21825.0717
SMAPE0.07280.07660.08590.07110.058
MASE0.09790.19680.06820.05780.0423
MAE339.948.9432157.37133.65105.02
RMSE685.9916.695349.24310.59259.52
L1-NORM3399.489.4321573.71336.51050.2
L2-NORM2169.352.7941104.4982.17820.67
L-INF-NORM1711.638.176967.21875.48755.1
Table 7. Statistical analysis of proposed task scheduling model for different scenarios of task variation over heuristic algorithms.
Algorithms | Best | Mean | Standard Deviation
Scenario 1
PSO | 6.3664 | 0.0728 | 0.0979
WOA | 6.6998 | 0.0766 | 0.1968
GWO | 7.5193 | 0.0859 | 0.0682
HHO | 6.2182 | 0.0711 | 0.0578
IGW-HHO | 5.0717 | 0.058 | 0.0423
Scenario 2
PSO | 6.3664 | 0.0728 | 0.0979
WOA | 6.6998 | 0.0766 | 0.1968
GWO | 7.5193 | 0.0859 | 0.0682
HHO | 6.2182 | 0.0711 | 0.0578
IGW-HHO | 5.0717 | 0.058 | 0.0423
Scenario 3
PSO | 6.3664 | 0.0728 | 0.0979
WOA | 6.6998 | 0.0766 | 0.1968
GWO | 7.5193 | 0.0859 | 0.0682
HHO | 6.2182 | 0.0711 | 0.0578
IGW-HHO | 5.0717 | 0.058 | 0.0423
Scenario 4
PSO | 6.3664 | 0.0728 | 0.0979
WOA | 6.6998 | 0.0766 | 0.1968
GWO | 7.5193 | 0.0859 | 0.0682
HHO | 6.2182 | 0.0711 | 0.0578
IGW-HHO | 5.0717 | 0.058 | 0.0423
Scenario 5
PSO | 6.3664 | 0.0728 | 0.0979
WOA | 6.6998 | 0.0766 | 0.1968
GWO | 7.5193 | 0.0859 | 0.0682
HHO | 6.2182 | 0.0711 | 0.0578
IGW-HHO | 5.0717 | 0.058 | 0.0423
Table 8. Statistical analysis of proposed task scheduling model for different cases of VM variation over heuristic algorithms.
Algorithms | Best | Mean | Standard Deviation
Case 1
PSO | 6.3664 | 0.0728 | 0.0979
WOA | 6.6998 | 0.0766 | 0.1968
GWO | 7.5193 | 0.0859 | 0.0682
HHO | 6.2182 | 0.0711 | 0.0578
IGW-HHO | 5.0717 | 0.058 | 0.0423
Case 2
PSO | 6.3664 | 0.0728 | 0.0979
WOA | 6.6998 | 0.0766 | 0.1968
GWO | 7.5193 | 0.0859 | 0.0682
HHO | 6.2182 | 0.0711 | 0.0578
IGW-HHO | 5.0717 | 0.058 | 0.0423
Case 3
PSO | 6.3664 | 0.0728 | 0.0979
WOA | 6.6998 | 0.0766 | 0.1968
GWO | 7.5193 | 0.0859 | 0.0682
HHO | 6.2182 | 0.0711 | 0.0578
IGW-HHO | 5.0717 | 0.058 | 0.0423
Case 4
PSO | 6.3664 | 0.0728 | 0.0979
WOA | 6.6998 | 0.0766 | 0.1968
GWO | 7.5193 | 0.0859 | 0.0682
HHO | 6.2182 | 0.0711 | 0.0578
IGW-HHO | 5.0717 | 0.058 | 0.0423
Case 5
PSO | 6.3664 | 0.0728 | 0.0979
WOA | 6.6998 | 0.0766 | 0.1968
GWO | 7.5193 | 0.0859 | 0.0682
HHO | 6.2182 | 0.0711 | 0.0578
IGW-HHO | 5.0717 | 0.058 | 0.0423
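Best/mean/standard-deviation summaries such as those in Tables 7 and 8 are conventionally obtained by repeating each stochastic optimizer over several independent runs and aggregating the final costs. A minimal sketch, with a stand-in deterministic "optimizer" and an illustrative run count:

```python
import statistics

def run_statistics(optimizer, runs=30):
    """Summarize the final cost over repeated independent runs,
    one run per seed, as best / mean / standard deviation."""
    finals = [optimizer(seed) for seed in range(runs)]
    return {
        "best": min(finals),                 # minimization: best = lowest cost
        "mean": statistics.fmean(finals),
        "std": statistics.stdev(finals),
    }

# Stand-in optimizer: deterministic toy costs derived from the seed
stats = run_statistics(lambda seed: 5.0 + 0.1 * (seed % 3))
```

In practice, `optimizer` would run a full IGW-HHO (or baseline) search with the given seed and return its final cost function value.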
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Kumar, M.; Kant, R.; Gupta, B.K.; Shadab, A.; Kumar, A.; Kant, K. IDN-MOTSCC: Integration of Deep Neural Network with Hybrid Meta-Heuristic Model for Multi-Objective Task Scheduling in Cloud Computing. Computers 2026, 15, 57. https://doi.org/10.3390/computers15010057

