Article

Optimizing Cloud Service Composition with Cuckoo Optimization Algorithm for Enhanced Resource Allocation and Energy Efficiency

1 Department of Computer Information Systems, Faculty of Information Technology and Systems, The University of Jordan, Aqaba 77110, Jordan
2 Computer Science Department, Faculty of Science and Information Technology, Al-Zaytoonah University of Jordan, Amman 11733, Jordan
3 Department of Cybersecurity, Faculty of Information Technology, Zarqa University, Zarqa 13110, Jordan
4 Department of Artificial Intelligence, The Faculty of Science and Information Technology, Irbid National University, Irbid 21110, Jordan
5 Department of Public Administration, School of Business, The University of Jordan, Amman 11942, Jordan
* Author to whom correspondence should be addressed.
Future Internet 2025, 17(11), 526; https://doi.org/10.3390/fi17110526
Submission received: 20 October 2025 / Revised: 9 November 2025 / Accepted: 12 November 2025 / Published: 18 November 2025

Abstract

The composition of cloud services plays a vital role in optimizing resource allocation, load balancing, task scheduling, and energy management. However, it remains a significant challenge due to the dynamic nature of workloads and the variability of resource demands, and addressing this challenge is essential for ensuring seamless service delivery. This research investigated the implementation of the Cuckoo Optimization Algorithm (COA) in a cloud computing environment to optimize service composition. In the proposed approach, each service was treated as an egg: high-demand services represented the host's original eggs, while low-demand services represented the cuckoo bird's eggs competing for the same resources. This formulation enabled the algorithm to balance workloads dynamically and allocate resources efficiently while optimizing task scheduling, cost, processing and response times, system stability, and energy management. The simulations were conducted using CloudSim 5.0, and the results were compared with the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms across key performance metrics. Experimental results demonstrate that the COA outperformed both PSO and ACO across all evaluated metrics. The COA achieved higher efficiency in task scheduling, dynamic load balancing, and energy-aware resource allocation. It consistently maintained lower operational costs, reduced SLA violations, and achieved superior task completion and VM utilization rates. These findings underscore the COA's potential as a robust and scalable approach for optimizing cloud service composition in dynamic and resource-constrained environments.

1. Introduction

Cloud computing is a technology used to develop systems based on dynamic resource sharing, enabling the integration of multiple systems and services to provide end-to-end services [1]. Within its Service-Oriented Architecture (SOA), cloud computing offers three types of services: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) [2]. Cloud computing provides several advantages, including reduced operational costs, scalability, security, mobility, flexibility, availability, disaster recovery, improved system performance, quality control, faster deployment, automated software upgrades, and sustainability [3,4,5]. It also delivers business benefits such as seamless business service delivery, increased efficiency and productivity through automation, access to vast amounts of information and resources, optimized resource utilization, and the ability to connect and collaborate with others from anywhere in the world, as explained in [6,7]. Scholars [8,9] stated that the complexity of cloud service composition is influenced by several factors, including the heterogeneous nature of cloud environments, unpredictable workload demands, and the need to balance competing optimization goals [10]. Studies [8,11,12,13,14,15,16] argue that, due to the dynamic nature of cloud resources, conventional methods and approaches may not be able to achieve the ideal service composition; hence, metaheuristic methods are widely employed to discover near-optimal solutions within acceptable computational time, resources, and budget.
This study implemented the COA in a cloud computing environment to optimize the service composition using the CloudSim 5.0 simulation environment. The COA is a nature-inspired metaheuristic algorithm that employs Lévy-flight-based exploration for random search and uses host birds’ nest replacement strategies to find optimal solutions [14,15,16]. The COA has gained significant attention as one of the promising nature-inspired algorithms due to its simplicity, robust search capabilities, and effectiveness in handling complex optimization problems [17,18]. Inspired by the brood parasitism behavior of cuckoos in nature, the COA dynamically explores, discovers, and exploits promising solutions, which makes it effective for optimizing cloud computing resources and load balancing [3,19,20]. Nevertheless, it is essential to compare and evaluate the COA’s performance against other optimization methods to determine its effectiveness and suitability for cloud service composition.
In this study, the results of the proposed COA are compared with PSO and ACO results. PSO is inspired by swarm intelligence in nature, where PSO dynamically adjusts service selection based on both individual and collective experience, making it effective for optimizing cloud resource allocation and load balancing [21,22]. On the other hand, ACO is a popular bio-inspired algorithm that simulates the pheromone-based foraging behavior of ants [23,24]. ACO demonstrates effectiveness in cloud optimization [25,26]. The comparison between the COA, PSO, and ACO is conducted by analyzing key cloud computing performance metrics, including execution time, power consumption, cost, SLA violations, latency, response time, load-balancing efficiency, VM utilization, and task completion rate.
Performance metrics of cloud computing play a crucial role in evaluating the efficiency and cost-effectiveness of cloud-based services [27,28]. In this environment, execution time represents the total time needed to process and complete a requested task or a group of tasks (cloudlets) [29,30,31]. Execution time encompasses the time needed for resource allocation, data transfer, computation, and result retrieval, which directly impacts the user experience and application performance [32,33,34,35]. Cost is recognized as one of the critical metrics in the cloud computing environment, encompassing not only the direct financial charges related to the use of cloud resources but also indirect costs associated with data transfer, storage, and potential over-provisioning [30,31]. Power consumption has gained increasing attention due to its environmental impact; according to [32], power consumption refers to the total energy consumed by a data center's physical and virtual machines, network infrastructure, and other resources. Researchers [29,32,33] stated that optimizing energy use can reduce operational costs and carbon footprints, making it key to sustainable cloud computing.
According to [36,37,38], an SLA is defined as a formal contract between a cloud service provider and a client that sets out the overall performance parameters. SLA violations arise when cloud vendors fail to fulfill predefined performance guarantees and requirements, such as availability, throughput, response time, or resource allocation [36,37]. To reduce SLA violations and enhance the utilization of cloud resources, advanced load-balancing techniques have been implemented and adopted [38]; these techniques dynamically distribute workloads across servers and VMs to ensure optimal performance [39,40,41]. Studies [42,43] identify load balancing as the process of distributing workloads across multiple VMs to prevent resource overutilization or underutilization. Another important metric in the cloud computing environment is response time, which refers to the duration a cloud service takes to respond to a user request [40,44]. The response time includes network latency, data retrieval, and processing times, in addition to any delays in communication between different cloud components, as stated by [43,45,46]. Studies [47,48,49] emphasized that response time has a direct impact on user experience and application efficiency, and that minimizing it is essential for ensuring high-quality service delivery and reducing SLA violations.
One of the most significant cloud computing challenges is task scheduling, according to [1,50]. In fact, [49] stated that task scheduling determines how tasks are assigned to available VMs and resources; therefore task scheduling is a critical factor in achieving a high task completion rate [50], where the task completion rate is the ratio of the number of successfully completed tasks to the total number of submitted tasks within a specific timeframe [51]. Studies [52,53,54] stated that the primary objective of task scheduling algorithms is to maximize VM throughput and to achieve effective VM resource utilization. VM resource utilization refers to the effective use of computing resources allocated in the data center; such resources include CPU, memory, storage, and network bandwidth [1,55]. Scholars [56,57] asserted that VM resource utilization directly influences cost efficiency, energy consumption, and quality of service; accordingly, it is considered a key performance metric in cloud computing.
The metrics mentioned in the previous studies provide a comprehensive view of cloud performance, which supports organizations in making informed decisions about resource allocation, workload management, service quality, and optimization [43,52,53,55,56,57,58]. However, no previous study has combined all these metrics within a single framework or evaluated their effectiveness through comparison with the outcomes of an applied optimization algorithm. This gap emphasizes the need for a more integrated and comparative approach, which the current research seeks to fulfill.
While efficient service composition plays a critical role in maintaining quality of service (QoS) and minimizing operational costs in cloud environments, it continues to face persistent challenges in achieving efficient resource allocation, load balancing, and task scheduling [4,5,36,45,59,60,61]. This is primarily due to the dynamic and heterogeneous nature of cloud environments, as explained in [53,61], where traditional approaches often fail to adapt to the complexity and variability of cloud-based workloads [42,43,45,49,50,55]. The proposed study introduces a dynamic optimization model that simulates high-demand services as original eggs and low-demand services as cuckoo bird eggs, utilizing the COA to enhance service placement efficiency and overall system responsiveness in the cloud computing environment.
This study evaluates the efficiency of the COA in improving cloud service composition and optimizing cloud resource management. The following are the study objectives and key contributions:
  • Develop a COA-based cloud service composition model that provides effective and efficient resource allocation and power management.
  • Compare the performance of the COA against PSO and ACO, highlighting the effectiveness of swarm-intelligence-based methods in cloud optimization.
  • Evaluate important cloud performance metrics, such as execution time, power consumption, response time, costs, and SLA violations.
  • Demonstrate the applicability and the capabilities of the COA in cloud services, presenting insights into its capability for actual, real-world cloud service control and management.
By addressing these objectives, this study contributes to the advancement of cloud service optimization, allowing greater efficiency and sustainable cloud service delivery.

2. Literature Review

This section reviews the existing literature on optimization algorithms used in cloud computing. Various heuristic and metaheuristic algorithms, such as Genetic Algorithms (GAs), PSO, and ACO, have been adopted to optimize the cloud computing environment, including load-balancing efficiency, resource allocation, energy management, service level, execution time, response time, latency, cost optimization, task scheduling, VM utilization, and overall system performance [61,62,63,64,65]. Kurdi et al. [22] proposed a bio-inspired optimization algorithm based on cuckoo birds to efficiently find the optimal service composition in a multi-cloud environment (MCE). The researchers claimed that MultiCuckoo significantly improved service composition efficiency in the MCE by reducing the number of examined services, minimizing cloud provider selection, and lowering execution time. Tarawneh et al. [8] proposed a model that employs the Spider Monkey Optimization (SMO) algorithm to enhance cloud service composition; the proposed model intelligently ensures an optimal balance between VMs' resources, processing capacity, and service composition capabilities. In addition, it improves the utilization of service resources and effectively optimizes their reusability. Krishnadoss et al. [66] introduced a new algorithm called the Rider Cuckoo Optimization Algorithm (RCOA), which optimizes the task scheduling process; its main objective is to reduce energy consumption and task execution time. Rallabandi et al. [3] proposed an optimized efficient job scheduling resource (OEJSR) approach using cuckoo and grey wolf optimization to enhance resource search in a cloud environment.
Azari et al. [67] proposed service composition in the cloud environment using the Cuckoo Optimization and Artificial Bee Colony algorithms to find the best combination of services. The researchers claimed that the proposed method improved the fitness function, and the results were close to the optimal solution with enhanced running time. Dahan and Alwabel [68] introduced a hybrid Artificial Bee Colony with a Cuckoo Search algorithm to solve the service composition problem in cloud computing. The proposed algorithm overcomes the Artificial Bee Colony's limitations by allowing the abandoned bees to enhance their search and escape local optima using the Cuckoo Search. Subbulakshmi et al. [69] proposed an automatic approach for weight calculation using a Genetic Algorithm (GA) and the COA to find the weight of QoS attributes and improve web service QoS values, where the services with high QoS values are taken as candidate services for service composition, and the cuckoo-based algorithm is used to identify optimal web service combinations. Tawfeek et al. [70] introduced a cloud task scheduling approach based on ACO, aiming to minimize makespan for a given set of tasks. Their ACO-based algorithm was compared with traditional methods such as FCFS and round-robin, and the experimental results confirmed that ACO outperformed both in terms of efficiency. Nabi et al. [71] introduced AdPSO, an adaptive task scheduling approach for cloud computing based on the PSO algorithm. Their method employs a Linearly Descending and Adaptive Inertia Weight strategy to balance local and global search and to improve task execution time, throughput, and resource utilization. Sharma et al. [72] implemented ACO for QoS-based task scheduling in a cloud computing environment; the proposed approach combines ACO with a neural network to enhance global search capabilities and optimize multi-objective scheduling, leading to more efficient task organization and better resource utilization. Dubey and Sharma [73] proposed a multi-objective CR-PSO task scheduling algorithm with a deadline constraint in cloud computing; the proposed algorithm integrates features of Chemical Reaction Optimization and PSO to optimize task allocation on VMs, improving quality in terms of cost, energy, and makespan. Pradhan and Bisoy [74] introduced a load-balancing technique for cloud computing platforms based on modified PSO, called LBMPSO; the proposed technique focuses on reducing makespan and enhancing resource utilization by achieving balanced task allocation and efficient task–resource coordination. Alsaidy et al. [75] proposed improved initialization of PSO using heuristic algorithms, such as Longest Job to Fastest Processor (LJFP) and Minimum Completion Time (MCT), to enhance task scheduling in cloud computing by minimizing makespan, total execution time, and energy consumption. The researchers in [72] presented a technique that focuses on optimizing QoS-based task scheduling in cloud computing environments, employing ACO together with a neural network approach for optimal global search; the results demonstrated that the proposed algorithm is effective and fast, providing improved mean access times and more consistent job assignments in the dynamic nature of cloud environments. Dogani and Khunjush [76] proposed a method using a combination of the Genetic Algorithm and PSO for cloud service composition. Nazif et al. [15] presented a fuzzy-based PSO algorithm for composing cloud services. Nezafat Tabalvandani et al. [77] suggested a multi-objective PSO algorithm to solve the combinatorial problem in the large search space of a multi-cloud environment (MCE) by maximizing reliability while minimizing the cost of services and producing Pareto-optimal points. Zavieh et al. [23] combined the Cuckoo algorithm and PSO to choose the best VMs that could be assigned to each host in the cloud computing infrastructure. Alhadid et al. [12] proposed the Smart Multistage Forward Search (SMFS) as an effective technique to select and construct the service composition; it searches for a service with available resources to create a new service composition, even when web services are integrated within other compositions. Tarawneh et al. [8] proposed a model that enhances the processes of cloud service selection and composition by minimizing the number of integrated services using Multistage Forward Search (MSF); in addition, the proposed model uses the Spider Monkey Optimization (SMO) algorithm to improve the service composition. Table 1 shows a comparative overview of optimization techniques applied in cloud computing. However, no prior COA-based approach has applied similar adaptive mechanisms for real-time adjustment in cloud environments, highlighting a research gap in COA–ML integration for self-optimized service composition.
According to previous studies, cloud computing still faces numerous challenges; such challenges include task scheduling, resource allocation, load balancing, consolidation, VM placement, service composition, energy consumption, and replication, where most of the challenges are classified as NP-hard problems. In this study, researchers integrate multiple performance metrics related to the cloud computing environment, including execution time, power consumption, cost, SLA violations, latency, response time, load-balancing efficiency, VM utilization, and task completion rate, providing a more comprehensive evaluation. Moreover, as shown in Table 1, no prior study has addressed all these performance metrics within a single framework. This highlights a significant gap in the literature and justifies the need for a more integrative and holistic optimization approach, as proposed in this study. Unlike prior metaheuristic and hybrid approaches that focused on isolated objectives, this study’s model unifies multiple metrics within one optimization framework, positioning it as a more comprehensive solution for cloud service composition.
Also, this study contributes to the existing literature by implementing and evaluating COA for cloud service composition, with a direct performance comparison against PSO and ACO. Furthermore, by utilizing CloudSim 5.0 for rigorous simulation and direct comparative analysis between COA, PSO, and ACO, this research establishes a more scalable and efficient approach to cloud service composition. This holistic optimization strategy addresses key limitations in prior work, contributing significantly to the advancement of intelligent cloud resource management, where the findings will help cloud service providers and researchers select the most suitable optimization technique for efficient cloud resource management.

3. Methodology

The COA is an algorithm inspired by the brood parasitism behavior of cuckoo birds [14]. To implement the COA, we used the CloudSim (5.0) simulation environment. Figure 1 illustrates the COA mechanisms, including the Cuckoo Egg Placement (Lévy flight), fitness function, Nest Selection and Replacement, and elitism [14,15,16]. Meanwhile, Figure 2 shows the COA pseudocode.

3.1. Implementation of the COA

The implementation of the COA in the CloudSim 5.0 simulation environment involves modeling service requests; applying Lévy flights for exploration; and optimizing resource allocation, task scheduling, and energy management using an iterative evolutionary process. The algorithm proceeds through the following stages.

3.1.1. Initialization of Parameters

The primary parameters related to the COA in CloudSim (5.0) must be initialized to simulate the cloud service composition environment, as follows:
  • n: Number of host nests (VMs);
  • m: Number of service requests (cuckoo eggs);
  • MaxGen: Maximum number of generations (iterations);
  • Pa: Probability of discovering alien eggs (abandonment rate);
  • w1 to w5: Weight coefficients for multi-objective fitness;
  • α: Step size scaling factor for Lévy flights;
  • Nests (VMs): Each nest represents a possible service composition (VM allocation);
  • Eggs (services):
    • Original eggs (high-demand services): Services with strict QoS requirements.
    • Cuckoo eggs (low-demand services): Services that can be dynamically replaced for better efficiency.
Figure 3 illustrates the pseudocode for Defining High-Demand and Low-Demand Services.
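To make this setup concrete, the following minimal Python sketch shows one way the parameters and the high-/low-demand service split could be represented. It is illustrative only and is not the CloudSim/Java implementation used in the experiments: apart from Pa = 0.25, the numeric defaults are placeholders, and the `demand` field and its threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class COAParams:
    n: int = 20                 # number of host nests (VMs); placeholder value
    m: int = 100                # number of service requests (cuckoo eggs); placeholder
    max_gen: int = 200          # maximum number of generations; placeholder
    pa: float = 0.25            # probability of discovering alien eggs (as stated above)
    alpha: float = 0.01         # Levy-flight step size scaling factor; placeholder
    weights: tuple = (0.3, 0.2, 0.2, 0.2, 0.1)  # w1..w5 fitness weights; hypothetical priorities

def split_services(services, demand_threshold):
    """Classify services into high-demand (host's original eggs) and
    low-demand (cuckoo eggs) using a hypothetical 'demand' score."""
    high = [s for s in services if s["demand"] >= demand_threshold]
    low = [s for s in services if s["demand"] < demand_threshold]
    return high, low
```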

3.1.2. Initial Population Generation

Cloud service compositions (N nests) are randomly generated, where each nest represents a VM allocation strategy. In this phase, each nest represents a service composition plan, mapping service requests to VMs, as shown in Equation (1):
$X_i = \{ x_{i1}, x_{i2}, x_{i3}, \ldots, x_{id} \}$
where
$x_{ij}$: the assignment of service $j$ to a VM in solution $i$;
$d$: total number of services;
$i = 1, 2, \ldots, n$: index of the candidate solution (nest).
Figure 4 shows the Initial Population Generation.
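A minimal sketch of this step is given below, assuming a nest is simply a list of VM indices as in Equation (1); it is illustrative Python, not the authors' CloudSim code.

```python
import random

def init_population(n_nests, n_services, n_vms, seed=None):
    """Build the initial nests of Equation (1): each nest X_i is a list
    [x_i1, ..., x_id] giving the VM index assigned to every service j."""
    rng = random.Random(seed)
    return [[rng.randrange(n_vms) for _ in range(n_services)]
            for _ in range(n_nests)]

# Example: 10 candidate compositions mapping 50 services onto 8 VMs
nests = init_population(n_nests=10, n_services=50, n_vms=8, seed=42)
```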

3.1.3. Generating New Solutions (Lévy-Flight-Based)

The Lévy flight is used to explore the search space globally, while the probabilistic step ensures that both short steps and long jumps are taken, allowing for exploitation and exploration. As shown in Equation (2), the current solution $X_i^{(t)}$ is used to generate the new solution $X_i^{(t+1)}$:
$X_i^{(t+1)} = X_i^{(t)} + \alpha \oplus \mathrm{Lévy}(\lambda)$
where
$\alpha$: step size scaling factor;
$\oplus$: entry-wise multiplication;
$\mathrm{Lévy}(\lambda)$: random step drawn from the Lévy distribution in Equation (3), where $\lambda$ is the Lévy distribution parameter:
$\mathrm{Lévy} \sim u = t^{-\lambda}, \quad 1 < \lambda \leq 3$
Figure 5 shows the Lévy-flight-based generation of new solutions.
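The sketch below illustrates one possible realization of Equations (2) and (3); Mantegna's algorithm is a common way to draw Lévy-distributed steps, but the paper only constrains $1 < \lambda \leq 3$, so the choice of $\lambda$ = 1.5 and the rounding/clamping of the perturbed VM indices are assumptions of this sketch rather than the authors' exact procedure.

```python
import math
import random

def levy_step(lmbda=1.5, rng=random):
    """One Levy-distributed step via Mantegna's algorithm (a common
    realization of Levy(lambda))."""
    sigma = (math.gamma(1 + lmbda) * math.sin(math.pi * lmbda / 2) /
             (math.gamma((1 + lmbda) / 2) * lmbda * 2 ** ((lmbda - 1) / 2))) ** (1 / lmbda)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / lmbda)

def new_solution(x, alpha, n_vms, lmbda=1.5, rng=random):
    """Equation (2): X_i(t+1) = X_i(t) + alpha (entry-wise) Levy(lambda).
    Because decision variables here are discrete VM indices, each perturbed
    entry is rounded and clamped back to a valid index."""
    return [min(max(int(round(xj + alpha * levy_step(lmbda, rng))), 0), n_vms - 1)
            for xj in x]
```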

3.1.4. Fitness Function

A multi-objective fitness function is used to assess each generated solution, optimizing several objectives at the same time, as shown in Equation (4):
$\mathrm{Fitness}(x) = \omega_1 \cdot ET(x) + \omega_2 \cdot PC(x) + \omega_3 \cdot SLA(x) + \omega_4 \cdot Cost(x) + \omega_5 \cdot Latency(x)$
where
$ET(x)$: execution time of the service composition;
$PC(x)$: power consumption (Watts);
$SLA(x)$: SLA violation rate;
$Cost(x)$: operational cost (e.g., per CPU-hour);
$Latency(x)$: average communication delay between composed services;
$\omega_1, \omega_2, \omega_3, \omega_4, \omega_5$: weight coefficients.
According to Equation (4), the weight coefficients $\omega_1, \omega_2, \omega_3, \omega_4, \omega_5$ are adjusted to reflect the priority assigned to each objective (e.g., for energy-aware optimization: $\omega_2 > \omega_4$).
The following equations are used to calculate the value for each objective in the fitness function: Equation (5) is used to find the execution time (ET) [29,30].
$ET = \sum_{i=1}^{n} \frac{TaskLength_i}{VMSpeed_i}$
Equation (6) is used to find the cloud service composition power consumption (PC) [32].
$PC = \sum_{i=1}^{m} \left( P_{idle} + (P_{max} - P_{idle}) \cdot U_i \right)$
where
$U_i$: utilization of VM $i$;
$P_{idle}$: idle power consumption;
$P_{max}$: maximum power at full utilization.
Cloud service composition SLA violation (SLAV) is calculated using Equation (7) [41,42].
$SLAV = \frac{\text{Number of missed deadlines}}{\text{Total tasks}}$
Cloud service composition operational cost (C) can be calculated using Equation (8) [30].
$C = \sum_{i=1}^{m} \left( Cost_{VM_i} \cdot Runtime_i \right)$
Equation (9) is used to find the cloud service latency [43].
$L = \sum_{i=1}^{n} \left( NetworkDelay_i + ProcessingDelay_i \right)$
Figure 6 represents the fitness function pseudocode.
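The following sketch evaluates Equation (4) by combining Equations (5)–(9). It is a simplified illustration, not the CloudSim implementation: the service and VM field names (`length`, `mips`, `net_delay`, `deadline`, `cost_per_hour`, `p_idle`, `p_max`) are assumptions, the utilization estimate is deliberately crude, and in practice the individual terms would need to be normalized to comparable scales before weighting.

```python
def fitness(nest, services, vms, weights):
    """Weighted multi-objective fitness (Equation (4)); lower is better.
    nest[j] is the index of the VM assigned to service j."""
    w1, w2, w3, w4, w5 = weights
    et = pc = cost = latency = 0.0
    missed = 0
    busy = [0.0] * len(vms)                       # accumulated busy time per VM
    for j, vm_idx in enumerate(nest):
        s, vm = services[j], vms[vm_idx]
        t = s["length"] / vm["mips"]              # Eq. (5): task length / VM speed
        et += t
        busy[vm_idx] += t
        cost += vm["cost_per_hour"] * t / 3600.0  # Eq. (8): VM cost rate * runtime
        latency += s["net_delay"] + t             # Eq. (9): network + processing delay
        if t > s["deadline"]:
            missed += 1
    for i, vm in enumerate(vms):                  # Eq. (6): idle + (max - idle) * U
        u = min(busy[i] / max(et, 1e-9), 1.0)     # crude utilization estimate
        pc += vm["p_idle"] + (vm["p_max"] - vm["p_idle"]) * u
    slav = missed / len(services)                 # Eq. (7): missed-deadline ratio
    return w1 * et + w2 * pc + w3 * slav + w4 * cost + w5 * latency
```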

3.1.5. Discovery and Replacement Strategy

According to Figure 7, if the probability of replacement $P_a$ is greater than the generated random number (rand), the host identifies an alien egg and replaces it. Accordingly, poor solutions $X_i$ are discarded and new compositions $X_i^{new}$ are introduced, which enhances the diversity of the solutions,
where
rand: random number in (0, 1);
$P_a$: probability of replacement = 0.25 (empirically set);
$X_i$: existing solution (nest);
$X_i^{new}$: new solution (nest).
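A minimal sketch of this discovery-and-replacement step is shown below, assuming nests are lists of VM indices and `fitness_fn` is an Equation (4)-style evaluator; the policy of always accepting the fresh random composition is an illustrative simplification.

```python
import random

def discover_and_replace(nests, fitnesses, pa, n_vms, fitness_fn, rng=None):
    """With probability pa, treat a nest as a discovered alien egg and
    replace it with a freshly generated random composition (Section 3.1.5)."""
    rng = rng or random.Random()
    n_services = len(nests[0])
    for i in range(len(nests)):
        if rng.random() < pa:
            nests[i] = [rng.randrange(n_vms) for _ in range(n_services)]
            fitnesses[i] = fitness_fn(nests[i])
    return nests, fitnesses
```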

3.1.6. Nest Selection and Elitism

To prevent the loss of best-found solutions, elitism is applied by storing the topK best solutions across iterations after ranking the nests based on their fitness, and VM migration is implemented to ensure dynamic load balancing. Figure 8 shows the pseudocode of the nest selection and elitism process.
To ensure continuous improvement and convergence toward optimality of the solution generated, these elite solutions (topK) are preserved and reinserted into the next generation.
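A compact sketch of this elitism step, under the same list-based nest representation assumed above, could look as follows.

```python
def select_elites(nests, fitnesses, top_k):
    """Rank nests by fitness (lower is better) and keep the top_k elites,
    which are reinserted unchanged into the next generation (Section 3.1.6)."""
    ranked = sorted(range(len(nests)), key=lambda i: fitnesses[i])
    return [nests[i] for i in ranked[:top_k]]

# Usage: next_generation = select_elites(nests, fitnesses, top_k=3) + offspring
```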

3.1.7. Task Scheduling Strategy

The goal of the task scheduling strategy is to assign each service $s_i$ to the VM with the best available performance-to-energy ratio. Equation (10) is used to find the VM that satisfies this requirement:
Select $vm_j$ such that $\frac{ET(s_i, vm_j)}{1 + U(vm_j)}$ is minimized
where
$s_i$: cloud service $i$, where $s_i \in \{ s_1, s_2, \ldots, s_n \}$;
$vm_j$: VM $j$, where $vm_j \in \{ vm_1, vm_2, \ldots, vm_m \}$;
$ET(s_i, vm_j)$: expected execution time of service $s_i$ on $vm_j$;
$U(vm_j)$: current utilization of $vm_j$.
Figure 9 illustrates the pseudocode of the task scheduling optimization strategy.
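The selection rule of Equation (10) reduces to a single comparison per VM, as in the sketch below; the `length`, `mips`, and `utilization` fields are illustrative assumptions.

```python
def select_vm(service, vms):
    """Equation (10): choose the VM minimizing ET(s_i, vm_j) / (1 + U(vm_j)),
    i.e. the best trade-off between expected execution time and current load."""
    def score(vm):
        et = service["length"] / vm["mips"]      # expected execution time on this VM
        return et / (1.0 + vm["utilization"])    # penalize already-loaded VMs
    return min(vms, key=score)
```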

3.1.8. Energy-Aware VM Consolidation

Energy-aware VM consolidation is adopted to minimize energy consumption and optimize resource utilization while maintaining the required service levels, where Equation (11) is employed either to place underutilized VMs in standby mode or to migrate tasks to balance the load between VMs.
If $\sum_{i=1}^{m} U_i < \beta \cdot m$, then shut down $m - m'$ VMs
where
$\beta$ = 0.6 (consolidation threshold);
$m'$: number of VMs retained after consolidation.
Figure 10 shows the Energy-Aware VM Consolidation pseudocode.
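The sketch below applies the threshold test of Equation (11); how many VMs to retain and which ones to power down are policy details not fixed by the equation, so the choices made here (keep the most-loaded VMs, size the active set so that average utilization stays near $\beta$) are assumptions of this illustration.

```python
import math

def consolidate(vms, beta=0.6):
    """Equation (11): if sum(U_i) < beta * m, shrink the active set and mark
    the remaining VMs for standby after their tasks are migrated.
    Returns (active_vms, standby_candidates)."""
    m = len(vms)
    total_u = sum(vm["utilization"] for vm in vms)
    if total_u >= beta * m:
        return vms, []                                  # load high enough; keep all VMs
    m_keep = max(1, math.ceil(total_u / beta))          # smallest m' with total_u <= beta * m'
    ranked = sorted(vms, key=lambda vm: vm["utilization"], reverse=True)
    return ranked[:m_keep], ranked[m_keep:]
```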

3.1.9. Energy Management Model

The power consumption $P_{vm}$ of a VM is modeled as shown in Equation (12):
$P_{vm} = P_{idle} + (P_{max} - P_{idle}) \cdot U(vm)$
where
$P_{idle}$: idle power consumption;
$P_{max}$: maximum power at full utilization;
$U(vm) \in [0, 1]$: utilization factor of the VM.
The total energy over time $E_{total}$ is calculated using Equation (13):
$E_{total} = \sum_{i=1}^{n} \int_{0}^{T} P_{vm_i}(t) \, dt$
Furthermore, minimizing the value of the total energy (Etotal) is an implicit goal of the fitness function.
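In a discrete-time simulation, the integral of Equation (13) is naturally approximated by a sum over utilization samples, as in the sketch below; the sampling interval and the example power figures are illustrative.

```python
def vm_power(p_idle, p_max, utilization):
    """Equation (12): linear power model P_vm = P_idle + (P_max - P_idle) * U(vm)."""
    return p_idle + (p_max - p_idle) * utilization

def total_energy(utilization_traces, p_idle, p_max, dt):
    """Equation (13), approximated as a discrete sum: integrate each VM's
    power over time from utilization samples taken every dt seconds."""
    return sum(vm_power(p_idle, p_max, u) * dt
               for trace in utilization_traces
               for u in trace)

# Example: two VMs sampled every 60 s, idle power 120 W, peak power 250 W
print(total_energy([[0.2, 0.5, 0.8], [0.1, 0.1, 0.4]], 120.0, 250.0, 60.0))
```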
Finally, Figure 11 illustrates the algorithm termination process.
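Putting the stages of Sections 3.1.2–3.1.6 together, the following self-contained Python skeleton shows how the loop could be organized. It is a sketch under simplifying assumptions, not the CloudSim implementation evaluated in Section 4: the Lévy-flight move is replaced by a single random reassignment, and elitism is reduced to retaining the best solution found so far.

```python
import random

def run_coa(n_nests, n_services, n_vms, max_gen, pa, fitness_fn, seed=0):
    """End-to-end COA skeleton: random initialization, per-nest perturbation,
    abandonment of discovered nests, and retention of the best solution."""
    rng = random.Random(seed)
    nests = [[rng.randrange(n_vms) for _ in range(n_services)] for _ in range(n_nests)]
    best = min(nests, key=fitness_fn)
    for _ in range(max_gen):
        for i, nest in enumerate(nests):
            cand = nest[:]
            cand[rng.randrange(n_services)] = rng.randrange(n_vms)   # simplified move
            if fitness_fn(cand) < fitness_fn(nest):
                nests[i] = cand
        for i in range(n_nests):                                     # abandonment step
            if rng.random() < pa:
                nests[i] = [rng.randrange(n_vms) for _ in range(n_services)]
        gen_best = min(nests, key=fitness_fn)
        if fitness_fn(gen_best) < fitness_fn(best):
            best = gen_best[:]
    return best

# Toy run: minimize a synthetic total execution time over 20 services and 5 VMs
lengths = [random.uniform(1, 10) for _ in range(20)]
speeds = [random.uniform(1, 5) for _ in range(5)]
fitness = lambda nest: sum(lengths[j] / speeds[v] for j, v in enumerate(nest))
print(fitness(run_coa(10, 20, 5, 100, 0.25, fitness)))
```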

4. Simulation, Results and Discussion

4.1. Simulation

In this study, a cloud computing environment was modeled and simulated using the CloudSim 5.0 framework [85]. All simulation experiments were performed on a machine equipped with an Intel Core i7 processor (12th Gen) running at 3.6 GHz, 16 GB of DDR4 RAM, a 512 GB NVMe SSD, and the Windows 11 operating system (64-bit), using Java Development Kit (JDK) 8 to run CloudSim 5.0.
Table 2 summarizes the key parameter configuration of the CloudSim 5.0 simulation, including the number of requests, cloudlets, VMs, services per composition, and other important parameters used in the experiments for testing and comparing the COA, PSO, and ACO in terms of resource allocation, execution time, power consumption, load-balancing efficiency, SLA violations, and response time. These system specifications ensure consistency and reliability in evaluating how each algorithm optimizes cloud service composition and resource management.
Due to the deterministic nature of CloudSim 5.0 simulations, which do not rely on random sampling, traditional statistical significance testing methods, such as ANOVA, were not utilized. The consistent trends observed across multiple runs confirm the robustness of the results.

4.2. Results and Discussion

The study experiments were conducted using CloudSim 5.0, which provides a comprehensive comparison of the COA, PSO, and ACO algorithms’ performances in a cloud computing environment. The evaluation metrics used include key performance indicators such as execution time, power consumption, load balancing, SLA violations, cost, latency, response time, VM utilization, and task completion rate. The data collected over ten runs are analyzed and discussed in detail below.
The COA delivered the best execution times compared to PSO and ACO in all experiments. As evidenced by Figure 12, the COA completed the tasks in the least amount of time, executing within 110 to 120 ms. Comparatively, PSO had higher execution times of 121 to 132 ms, whereas ACO had the longest execution times of 128 to 140 ms. This suggests that the COA is more capable of completing tasks within the cloud environment and is therefore better suited to time-critical systems.
During the execution time analysis across the experimental runs, the COA consistently demonstrated the lowest execution time, which clearly reflects the efficiency achieved in task scheduling. PSO operated moderately effectively with some delay in execution time, while ACO showed the highest execution time of the three algorithms, meaning it takes longer to find the best composition solution. The key insight lies in the COA's faster convergence, driven by its effective prioritization of high-demand services, which makes it the best of the three methods.
The COA also demonstrated superior performance in terms of power consumption in comparison with PSO and ACO, where the results shown in Figure 13 illustrate that the COA is more energy-efficient, with the ability to optimize resource usage effectively, which is critical for operational expenses and environmental effects in data centers.
Figure 14 shows that the COA's load-balancing results are better than those of PSO and ACO, where the COA scores 90–99, while the PSO and ACO scores range from 80 to 89 and 70 to 79, respectively. The higher load-balancing score indicates that the workload is distributed more efficiently across the VMs and resources are better utilized than with the other algorithms, which ultimately reduces the risk of VM overloading. Because of the progressive learning and adaptation of the optimization algorithms over multiple simulation runs, the load-balancing results start with a low value and then increase.
Figure 15 illustrates that the COA achieved the lowest number of SLA violations compared to the results of PSO and ACO, which highlights the reliability and efficiency of the proposed approach in meeting clients’ SLA requirements, and confirms the COA’s capability to dynamically adjust resource allocation, which is crucial for maintaining user satisfaction, building trust, and ensuring high SLA compliance in cloud computing environments.
In terms of cost, the COA proved to be the most economical, as shown in Figure 16, where the COA consistently maintains the lowest operational cost throughout all simulation runs due to its ability to optimize resource allocation and minimize unnecessary VM usage. This cost-effectiveness establishes the COA as the most cost-efficient solution among the three algorithms, and as a result, makes it the most attractive option for cost-sensitive cloud applications.
As illustrated in Figure 16, the three algorithms’ operational costs are initially high due to suboptimal resource allocation and over-provisioning, which leads to inefficient task scheduling and unnecessary VM usage. Over time, the COA outperforms PSO and ACO by precisely optimizing resources and reducing unnecessary VM usage, making it the most cost-efficient algorithm.
As shown in Figure 17, due to the COA’s ability to prioritize high-demand services effectively, the COA consistently achieves lower latency and response times compared to PSO and ACO, where the results of the COA demonstrate the lowest delay in processing requests, ensuring that tasks are executed quickly and efficiently; this makes the COA the most efficient in reducing latency and ensuring timely request processing in cloud environments.
In the evaluation of response time among the COA, PSO, and ACO, Figure 18 illustrates that the COA achieves the fastest task completion, ensuring that services are delivered more quickly and efficiently. PSO demonstrates moderate response times, which are not as fast as those of the COA, while ACO recorded the longest response times, resulting in inefficient task execution and slower service delivery. The COA's superior load-balancing and scheduling capabilities, discussed above, explain its faster response times and its efficiency in managing tasks and optimizing performance in cloud environments.
Figure 19 shows that the COA also demonstrated higher VM utilization, where the COA leverages the Lévy flight mechanism to intelligently assign cloudlets to VMs, resulting in optimal workload distribution and preventing underutilization or overloading. On the other hand, the results show that PSO and ACO struggle under high workloads, leading to uneven resource distribution where some VMs are overloaded while others remain underutilized.
Figure 20 shows that the COA achieves the highest VM utilization average (88.1%), ensuring optimal workload distribution, while PSO and ACO results show lower efficiency (77.1% and 69.1%, respectively) due to uneven task allocation.
Figure 21 shows the results of the simulation runs related to the task completion rate, where the COA ranged from 92 to 99%, compared to 83–90% for PSO and 70–77% for ACO.
The results illustrated in Figure 21 and Figure 22 show that the COA achieves the highest task completion rate, where most assigned tasks are completed within their deadlines due to its efficiency and effectiveness in task scheduling and cloud resource management. On the other hand, PSO and ACO exhibit task scheduling inefficiencies that lead to delays and some unfinished tasks under high workloads. They may also struggle to adapt effectively to dynamic and heavily loaded cloud environments.
Overall, the results confirm that the COA consistently outperforms both PSO and ACO across all evaluated performance metrics, establishing it as the most effective and reliable approach for cloud service composition.

5. Conclusions

This study investigated the application of the COA for optimizing cloud service composition across multiple critical performance dimensions, including execution time, power consumption, SLA violations, cost, latency, response time, VM utilization, load balancing, and task completion rate. Through a series of simulation experiments using CloudSim 5.0, the performance of the COA was benchmarked against two widely used algorithms: PSO and ACO.
The findings conclusively show that the COA consistently outperforms both PSO and ACO across all evaluated metrics. The COA demonstrated superior efficiency in task scheduling, energy-aware resource allocation, and dynamic load balancing. It maintained the lowest operational cost, minimized SLA violations, and achieved the highest task completion and VM utilization rates. These outcomes highlight the COA’s potential as a robust and scalable solution for intelligent cloud service composition, especially in dynamic and resource-sensitive cloud environments.
Despite the promising results, this study has a few limitations. First, it relies entirely on simulations conducted in the CloudSim 5.0 environment. While this provides a controlled and flexible testing framework, it may not fully capture the complexities and unpredictability of real-world cloud infrastructures. Second, the research compared the COA only with the PSO and ACO algorithms; although these are widely used, other recent and advanced algorithms, such as the Grey Wolf Optimizer and the Whale Optimization Algorithm, were not included in the comparative analysis. Moreover, even though the COA was shown to outperform PSO and ACO, further work is needed to assess its effectiveness across variations in service demand and different cloud architectures. In addition, some simulation parameters, such as VM capacities and service request patterns, were predefined and static, whereas real cloud environments are often more volatile and require adaptive strategies. Power consumption in the CloudSim simulation environment was estimated using standard utilization-based formulas, which may not reflect actual hardware-level energy behavior in physical data centers. Finally, other critical aspects of cloud services, such as security limitations and data privacy, were not addressed in this study.
Future research could explore several promising directions to extend this study. These include hybrid optimization approaches, such as combining the COA with machine learning-based predictive models to enhance dynamic service composition and workload forecasting. Additionally, expanding the proposed model to support multi-cloud and edge computing environments would increase its applicability to real-world, distributed cloud infrastructures. Incorporating adaptive and self-tuning mechanisms into the COA could further improve its responsiveness to dynamic workloads and resource variability. Comparative analysis with other advanced metaheuristic algorithms, such as the Grey Wolf Optimizer (GWO), Artificial Bee Colony (ABC), and Deep Reinforcement Learning (DRL)-based schedulers, may also offer valuable insights. Future research may also focus on tuning and optimizing the COA hyperparameters, analyzing convergence, and studying the proposed COA implementation in large-scale or real-time cloud environments; this would enhance its robustness, scalability, and practical applicability, in addition to preventing premature stagnation.
Moreover, addressing critical concerns such as cloud security policies, regulatory compliance, and privacy protection will be essential for deploying the model in sensitive domains like healthcare or finance. Future studies may also benefit from incorporating accurate real-time workload prediction to improve resource allocation and scalability, particularly during fluctuating demand scenarios. Finally, deploying and evaluating the proposed approach on commercial cloud platforms such as Amazon Web Services (AWS) and Microsoft Azure will provide deeper insights into its practical effectiveness under operational constraints. However, it is important to understand that the COA might involve significant computational intensity in real-time implementation or large-scale service environments, which could impact its scalability and responsiveness. Such limitations provide a clearer understanding of the COA’s applicability and can be considered as a potential direction for future research.

Author Contributions

Conceptualization, I.A. and E.A.-T.; Methodology, S.K., I.A., A.A., and M.E.D.; Software, S.A. and R.S.A.; Validation, R.S.A., S.A., and S.K.; Formal Analysis, R.S.A., I.A., E.A.-T., and M.E.D.; Investigation, I.A. and E.A.-T.; Resources, A.A., M.A.R., and R.S.A.; Data Curation, A.A., S.K., and R.S.A.; Writing—Original Draft Preparation, E.A.-T., I.A., and R.S.A.; Writing—Review and Editing, S.K., E.A.-T., R.S.A., M.A.R., and I.A.; Visualization, R.S.A., E.A.-T., I.A., and M.A.R.; Supervision, I.A., M.A.R., and S.A.; Project Administration, I.A.; Funding Acquisition, R.S.A. and I.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SLA: Service Level Agreement
VM: Virtual Machine
COA: Cuckoo Optimization Algorithm
PSO: Particle Swarm Optimization
ACO: Ant Colony Optimization
SaaS: Software as a Service
PaaS: Platform as a Service
IaaS: Infrastructure as a Service
SOA: Service-Oriented Architecture
CPU: Central Processing Unit
MCE: Multi-Cloud Environment
GA: Genetic Algorithm
OEJSR: Optimized Efficient Job Scheduling Resource
LJFP: Longest Job to Fastest Processor
MCT: Minimum Completion Time
MSF: Multistage Forward Search
SMO: Spider Monkey Optimization
AWCO: Advanced Willow Catkin Optimization
GWO: Grey Wolf Optimization
AWS: Amazon Web Services

References

  1. Mahdizadeh, M.; Montazerolghaem, A.; Jamshidi, K. Task Scheduling and Load Balancing in SDN-Based Cloud Computing: A Review of Relevant Research. J. Eng. Res. 2024, S2307187724002773. [Google Scholar] [CrossRef]
  2. Choudhary, V.; Vithayathil, J. The Impact of Cloud Computing: Should the IT Department Be Organized as a Cost Center or a Profit Center? J. Manag. Inf. Syst. 2013, 30, 67–100. [Google Scholar] [CrossRef]
  3. Rallabandi, V.S.S.S.N.; Gottumukkala, P.; Singh, N.; Shah, S.K. Optimized Efficient Job Scheduling Resource (OEJSR) Approach Using Cuckoo and Grey Wolf Job Optimization to Enhance Resource Search in Cloud Environment. Cogent Eng. 2024, 11, 2335363. [Google Scholar] [CrossRef]
  4. Avram, M.G. Advantages and Challenges of Adopting Cloud Computing from an Enterprise Perspective. Procedia Technol. 2014, 12, 529–534. [Google Scholar] [CrossRef]
  5. Biswas, D.; Jahan, S.; Saha, S.; Samsuddoha, M. A Succinct State-of-the-Art Survey on Green Cloud Computing: Challenges, Strategies, and Future Directions. Sustain. Comput. Inform. Syst. 2024, 44, 101036. [Google Scholar] [CrossRef]
  6. Mumtaz, R.; Samawi, V.; Alhroob, A.; Alzyadat, W.; Almukahel, I. PDIS: A Service Layer for Privacy and Detecting Intrusions in Cloud Computing. Int. J. Adv. Soft Comput. Its Appl. 2022, 14, 15–35. [Google Scholar] [CrossRef]
  7. Idahosa, M.E.; Eireyi-Edewede, S.; Eruanga, C.E. Application of Cloud Computing Technology for Enhances E-Government Services in Edo State. In Advances in Electronic Government, Digital Divide, and Regional Development; Lytras, M.D., Alkhaldi, A.N., Ordóñez De Pablos, P., Eds.; IGI Global: Hershey, PA, USA, 2024; pp. 221–278. ISBN 979-8-3693-7678-2. [Google Scholar]
  8. Tarawneh, H.; Alhadid, I.; Khwaldeh, S.; Afaneh, S. An Intelligent Cloud Service Composition Optimization Using Spider Monkey and Multistage Forward Search Algorithms. Symmetry 2022, 14, 82. [Google Scholar] [CrossRef]
  9. Ma, H.; Chen, Y.; Zhu, H.; Zhang, H.; Tang, W. Optimization of Cloud Service Composition for Data-Intensive Applications via E-CARGO. In Proceedings of the 2018 IEEE 22nd International Conference on Computer Supported Cooperative Work in Design (CSCWD), Nanjing, China, 9–11 May 2018; IEEE: Piscataway, NJ, USA; pp. 785–789. [Google Scholar]
  10. Wajid, U.; Marin, C.A.; Karageorgos, A. Optimizing Energy Efficiency in the Cloud Using Service Composition and Runtime Adaptation Techniques. In Proceedings of the 2013 IEEE International Conference on Systems, Man, and Cybernetics, Manchester, UK, 13–16 October 2013; IEEE: Piscataway, NJ, USA; pp. 115–120. [Google Scholar]
  11. Khwaldeh, S.; Abu-taieh, E.; Alhadid, I.; Alkhawaldeh, R.S.; Masa’deh, R. DyOrch: Dynamic Orchestrator for Improving Web Services Composition. In Proceedings of the 33rd International Business Information Management Association Conference, IBIMA 2019, Granada, Spain, 10–11 April 2019; pp. 6030–6047. [Google Scholar]
  12. Alhadid, I.; Tarawneh, H.; Kaabneh, K.; Masa’deh, R.; Hamadneh, N.N.; Tahir, M.; Khwaldeh, S. Optimizing Service Composition (SC) Using Smart Multistage Forward Search (SMFS). Intell. Autom. Soft Comput. 2021, 28, 321–336. [Google Scholar] [CrossRef]
  13. AlHadid, I.; Abu-Taieh, E. Web Services Composition Using Dynamic Classification and Simulated Annealing. MAS 2018, 12, 395. [Google Scholar] [CrossRef]
  14. Masdari, M.; Nozad Bonab, M.; Ozdemir, S. QoS-Driven Metaheuristic Service Composition Schemes: A Comprehensive Overview. Artif. Intell. Rev. 2021, 54, 3749–3816. [Google Scholar] [CrossRef]
  15. Nazif, H.; Nassr, M.; Al-Khafaji, H.M.R.; Jafari Navimipour, N.; Unal, M. A Cloud Service Composition Method Using a Fuzzy-Based Particle Swarm Optimization Algorithm. Multimed. Tools Appl. 2023, 83, 56275–56302. [Google Scholar] [CrossRef]
  16. Sreeramulu, M.D.; Mohammed, A.S.; Kalla, D.; Boddapati, N.; Natarajan, Y. AI-Driven Dynamic Workload Balancing for Real-Time Applications on Cloud Infrastructure. In Proceedings of the 2024 7th International Conference on Contemporary Computing and Informatics (IC3I), Greater Noida, India, 18 September 2024; IEEE: Piscataway, NJ, USA; pp. 1660–1665. [Google Scholar]
  17. Fister, I.; Yang, X.-S.; Fister, D.; Fister, I. Cuckoo Search: A Brief Literature Review. In Cuckoo Search and Firefly Algorithm; Yang, X.-S., Ed.; Studies in Computational Intelligence; Springer International Publishing: Cham, Switzerland, 2014; Volume 516, pp. 49–62. ISBN 978-3-319-02140-9. [Google Scholar]
  18. Rajabioun, R. Cuckoo Optimization Algorithm. Appl. Soft Comput. 2011, 11, 5508–5518. [Google Scholar] [CrossRef]
  19. Yang, X.-S.; Deb, S. Cuckoo Search: Recent Advances and Applications. Neural Comput Applic 2014, 24, 169–174. [Google Scholar] [CrossRef]
  20. Chiroma, H.; Herawan, T.; Fister, I.; Fister, I.; Abdulkareem, S.; Shuib, L.; Hamza, M.F.; Saadi, Y.; Abubakar, A. Bio-Inspired Computation: Recent Development on the Modifications of the Cuckoo Search Algorithm. Appl. Soft Comput. 2017, 61, 149–173. [Google Scholar] [CrossRef]
  21. Yang, X.-S. (Ed.) Cuckoo Search and Firefly Algorithm: Theory and Applications; Studies in Computational Intelligence; Springer International Publishing: Cham, Switzerland, 2014; Volume 516, ISBN 978-3-319-02140-9. [Google Scholar]
  22. Kurdi, H.; Ezzat, F.; Altoaimy, L.; Ahmed, S.H.; Youcef-Toumi, K. MultiCuckoo: Multi-Cloud Service Composition Using a Cuckoo-Inspired Algorithm for the Internet of Things Applications. IEEE Access 2018, 6, 56737–56749. [Google Scholar] [CrossRef]
  23. Zavieh, H.; Javadpour, A.; Li, Y.; Ja’fari, F.; Nasseri, S.H.; Rostami, A.S. Task Processing Optimization Using Cuckoo Particle Swarm (CPS) Algorithm in Cloud Computing Infrastructure. Clust. Comput. 2023, 26, 745–769. [Google Scholar] [CrossRef]
  24. Liang, H.T.; Kang, F.H. Adaptive Mutation Particle Swarm Algorithm with Dynamic Nonlinear Changed Inertia Weight. Optik 2016, 127, 8036–8042. [Google Scholar] [CrossRef]
  25. Dordaie, N.; Navimipour, N.J. A Hybrid Particle Swarm Optimization and Hill Climbing Algorithm for Task Scheduling in the Cloud Environments. ICT Express 2018, 4, 199–202. [Google Scholar] [CrossRef]
  26. Guntsch, M.; Middendorf, M. A Population Based Approach for ACO. In Applications of Evolutionary Computing; Cagnoni, S., Gottlieb, J., Hart, E., Middendorf, M., Raidl, G.R., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2002; Volume 2279, pp. 72–81. ISBN 978-3-540-43432-0. [Google Scholar]
  27. Stützle, T.; Dorigo, M. ACO Algorithms for the Traveling Salesman Problem. Evol. Algorithms Eng. Comput. Sci. 1999, 4, 163–183. [Google Scholar]
  28. Yin, C.; Li, S.; Li, X. An Optimization Method of Cloud Manufacturing Service Composition Based on Matching-Collaboration Degree. Int. J. Adv. Manuf. Technol. 2024, 131, 343–353. [Google Scholar] [CrossRef]
  29. Bei, L.; Wenlin, L.; Xin, S.; Xibin, X. An Improved ACO Based Service Composition Algorithm in Multi-Cloud Networks. J Cloud Comp 2024, 13, 17. [Google Scholar] [CrossRef]
  30. Prity, F.S.; Uddin, K.M.A.; Nath, N. Exploring Swarm Intelligence Optimization Techniques for Task Scheduling in Cloud Computing: Algorithms, Performance Analysis, and Future Prospects. Iran J. Comput. Sci. 2024, 7, 337–358. [Google Scholar] [CrossRef]
  31. Gupta, S.; Tripathi, S. A Comprehensive Survey on Cloud Computing Scheduling Techniques. Multimed. Tools Appl. 2023, 83, 53581–53634. [Google Scholar] [CrossRef]
  32. Aslanpour, M.S.; Gill, S.S.; Toosi, A.N. Performance Evaluation Metrics for Cloud, Fog and Edge Computing: A Review, Taxonomy, Benchmarks and Standards for Future Research. Internet Things 2020, 12, 100273. [Google Scholar] [CrossRef]
  33. Kumar, M.; Sharma, S.C.; Goel, A.; Singh, S.P. A Comprehensive Survey for Scheduling Techniques in Cloud Computing. J. Netw. Comput. Appl. 2019, 143, 1–33. [Google Scholar] [CrossRef]
  34. Assudani, P.; Abimannan, S. Cost Efficient Resource Scheduling in Cloud Computing: A Survey. Int. J. Eng. Technol. 2018, 7, 38–43. [Google Scholar]
  35. Katal, A.; Dahiya, S.; Choudhury, T. Energy Efficiency in Cloud Computing Data Centers: A Survey on Software Technologies. Clust. Comput. 2023, 26, 1845–1875. [Google Scholar] [CrossRef]
  36. Mao, M.; Humphrey, M. A Performance Study on the VM Startup Time in the Cloud. In Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing, Honolulu, HI, USA, 24–29 June 2012; IEEE: Piscataway, NJ, USA; pp. 423–430. [Google Scholar]
  37. Yakubu, I.Z.; Musa, Z.A.; Muhammed, L.; Ja’afaru, B.; Shittu, F.; Matinja, Z.I. Service Level Agreement Violation Preventive Task Scheduling for Quality of Service Delivery in Cloud Computing Environment. Procedia Comput. Sci. 2020, 178, 375–385. [Google Scholar] [CrossRef]
  38. Ahmad, M.; Abawajy, J.H. Service Level Agreements for the Digital Library. Procedia—Soc. Behav. Sci. 2014, 147, 237–243. [Google Scholar] [CrossRef]
  39. Qazi, F.; Kwak, D.; Khan, F.G.; Ali, F.; Khan, S.U. Service Level Agreement in Cloud Computing: Taxonomy, Prospects, and Challenges. Internet Things 2024, 25, 101126. [Google Scholar] [CrossRef]
  40. Ghandour, O.; El Kafhali, S.; Hanini, M. Adaptive Workload Management in Cloud Computing for Service Level Agreements Compliance and Resource Optimization. Comput. Electr. Eng. 2024, 120, 109712. [Google Scholar] [CrossRef]
  41. Mondal, B. Load Balancing in Cloud Computing Using Cuckoo Search Algorithm. Int. J. Cloud Comput. 2024, 13, 267–284. [Google Scholar] [CrossRef]
  42. Ala’anzy, M.; Othman, M. Load Balancing and Server Consolidation in Cloud Computing Environments: A Meta-Study. IEEE Access 2019, 7, 141868–141887. [Google Scholar] [CrossRef]
  43. Devi, N.; Dalal, S.; Solanki, K.; Dalal, S.; Lilhore, U.K.; Simaiya, S.; Nuristani, N. A Systematic Literature Review for Load Balancing and Task Scheduling Techniques in Cloud Computing. Artif. Intell. Rev. 2024, 57, 276. [Google Scholar] [CrossRef]
  44. Mahmoud, H.; Thabet, M.; Khafagy, M.H.; Omara, F.A. An Efficient Load Balancing Technique for Task Scheduling in Heterogeneous Cloud Environment. Clust. Comput. 2021, 24, 3405–3419. [Google Scholar] [CrossRef]
  45. Shafiq, D.A.; Jhanjhi, N.Z.; Abdullah, A. Load Balancing Techniques in Cloud Computing Environment: A Review. J. King Saud Univ.—Comput. Inf. Sci. 2022, 34, 3910–3933. [Google Scholar] [CrossRef]
  46. Milani, A.S.; Navimipour, N.J. Load Balancing Mechanisms and Techniques in the Cloud Environments: Systematic Literature Review and Future Trends. J. Netw. Comput. Appl. 2016, 71, 86–98. [Google Scholar] [CrossRef]
  47. Jayaswal, C.J.; Bindulal, P. Edge Computing: Applications, Challenges and Opportunities. J. Comput. Technol. Appl. 2023, 9, 1–4. [Google Scholar] [CrossRef]
  48. Zhou, Z.; Li, F.; Zhu, H.; Xie, H.; Abawajy, J.H.; Chowdhury, M.U. An Improved Genetic Algorithm Using Greedy Strategy toward Task Scheduling Optimization in Cloud Environments. Neural Comput. Appl. 2020, 32, 1531–1541. [Google Scholar] [CrossRef]
  49. Kadarla, K.; Sharma, S.C.; Bhardwaj, T.; Chaudhary, A. A Simulation Study of Response Times in Cloud Environment for IoT-Based Healthcare Workloads. In Proceedings of the 2017 IEEE 14th International Conference on Mobile Ad Hoc and Sensor Systems (MASS), Orlando, FL, USA, 22–25 October 2017; IEEE: Piscataway, NJ, USA; pp. 678–683. [Google Scholar]
  50. Ebadifard, F.; Babamir, S.M. PSO Based Task Scheduling Algorithm Improved Using a Load—Balancing Technique for the Cloud Computing Environment. Concurr. Comput. 2018, 30, e4368. [Google Scholar] [CrossRef]
  51. Abbasi, A.A.; Abbasi, A.; Shamshirband, S.; Chronopoulos, A.T.; Persico, V.; Pescape, A. Software-Defined Cloud Computing: A Systematic Review on Latest Trends and Developments. IEEE Access 2019, 7, 93294–93314. [Google Scholar] [CrossRef]
  52. Yin, H.; Huang, X.; Cao, E. A Cloud-Edge-Based Multi-Objective Task Scheduling Approach for Smart Manufacturing Lines. J. Grid Comput. 2024, 22, 9. [Google Scholar] [CrossRef]
  53. Behera, I.; Sobhanayak, S. Task Scheduling Optimization in Heterogeneous Cloud Computing Environments: A Hybrid GA-GWO Approach. J. Parallel Distrib. Comput. 2024, 183, 104766. [Google Scholar] [CrossRef]
  54. Sarrafzade, N.; Entezari-Maleki, R.; Sousa, L. A Genetic-Based Approach for Service Placement in Fog Computing. J. Supercomput. 2022, 78, 10854–10875. [Google Scholar] [CrossRef]
  55. Kruekaew, B.; Kimpan, W. Multi-Objective Task Scheduling Optimization for Load Balancing in Cloud Computing Environment Using Hybrid Artificial Bee Colony Algorithm With Reinforcement Learning. IEEE Access 2022, 10, 17803–17818. [Google Scholar] [CrossRef]
  56. Wang, D.; Wang, J.; Liu, X.; Yu, J.; Gu, H.; Wang, C.; Liu, J.; Zhang, Y. Towards Dynamic Virtual Machine Placement Based on Safety Parameters and Resource Utilization Fluctuation for Energy Savings and QoS Improvement in Cloud Computing. Future Gener. Comput. Syst. 2025, 171, 107853. [Google Scholar] [CrossRef]
  57. Banerjee, S.; Roy, S.; Khatua, S. Towards Energy and QoS Aware Dynamic VM Consolidation in a Multi-Resource Cloud. Future Gener. Comput. Syst. 2024, 157, 376–391. [Google Scholar] [CrossRef]
  58. Muhairat, M.; Abdallah, M.; Althunibat, A. Cloud Computing in Higher Educational Institutions. Compusoft 2019, 8, 3507–3513. [Google Scholar]
  59. Du, T.; Xiao, G.; Chen, J.; Zhang, C.; Sun, H.; Li, W.; Geng, Y. A Combined Priority Scheduling Method for Distributed Machine Learning. J. Wireless Com Network 2023, 2023, 45. [Google Scholar] [CrossRef]
  60. Rawajbeh, M.A. Performance Evaluation of a Computer Network in a Cloud Computing Environment. ICIC Express Lett. 2019, 13, 719–727. [Google Scholar]
  61. Al-Dwairi, R.N.; Jditawi, W. The Role of Cloud Computing on the Governmental Units Performance and E-Participation (Empirical Study). Int. J. Adv. Soft Comput. Its Appl. 2022, 14, 79–93. [Google Scholar] [CrossRef]
  62. Alwasouf, A.A.; Kumar, D. Research Challenges of Web Service Composition. In Software Engineering; Hoda, M.N., Chauhan, N., Quadri, S.M.K., Srivastava, P.R., Eds.; Advances in Intelligent Systems and Computing; Springer: Singapore, 2019; Volume 731, pp. 681–689. ISBN 978-981-10-8847-6. [Google Scholar]
  63. Houssein, E.H.; Gad, A.G.; Wazery, Y.M.; Suganthan, P.N. Task Scheduling in Cloud Computing Based on Meta-Heuristics: Review, Taxonomy, Open Challenges, and Future Trends. Swarm Evol. Comput. 2021, 62, 100841. [Google Scholar] [CrossRef]
  64. Shao, K.; Fu, H.; Wang, B. An Efficient Combination of Genetic Algorithm and Particle Swarm Optimization for Scheduling Data-Intensive Tasks in Heterogeneous Cloud Computing. Electronics 2023, 12, 3450. [Google Scholar] [CrossRef]
  65. Asghari, S.; Jafari Navimipour, N. The Role of an Ant Colony Optimisation Algorithm in Solving the Major Issues of the Cloud Computing. J. Exp. Theor. Artif. Intell. 2023, 35, 755–790. [Google Scholar] [CrossRef]
  66. Krishnadoss, P.; Chandrashekar, C.; Poornachary, V.K. RCOA Scheduler: Rider Cuckoo Optimization Algorithm for Task Scheduling in Cloud Computing. Int. J. Intell. Eng. Syst. 2022, 15, 505–514. [Google Scholar] [CrossRef]
  67. Azari, M.S.; Bouyer, A.; Zadeh, N.F. Service Composition with Knowledge of Quality in the Cloud Environment Using the Cuckoo Optimization and Artificial Bee Colony Algorithms. In Proceedings of the 2015 2nd International Conference on Knowledge-Based Engineering and Innovation (KBEI), Tehran, Iran, 5–6 November 2015; IEEE: Piscataway, NJ, USA; pp. 539–545. [Google Scholar]
  68. Dahan, F.; Alwabel, A. Artificial Bee Colony with Cuckoo Search for Solving Service Composition. Intell. Autom. Soft Comput. 2023, 35, 3385–3402. [Google Scholar] [CrossRef]
  69. Subbulakshmi, S.; Ramar, K.; Saji, A.E.; Chandran, G. Optimized Web Service Composition Using Evolutionary Computation Techniques. In Intelligent Data Communication Technologies and Internet of Things; Hemanth, J., Bestak, R., Chen, J.I.-Z., Eds.; Lecture Notes on Data Engineering and Communications Technologies; Springer: Singapore, 2021; Volume 57, pp. 457–470. ISBN 978-981-15-9508-0. [Google Scholar]
  70. Tawfeek, M.A.; El-Sisi, A.; Keshk, A.E.; Torkey, F.A. Cloud Task Scheduling Based on Ant Colony Optimization. In Proceedings of the 2013 8th International Conference on Computer Engineering & Systems (ICCES), Cairo, Egypt, 26–27 November 2013; IEEE: Piscataway, NJ, USA; pp. 64–69. [Google Scholar]
  71. Nabi, S.; Ahmad, M.; Ibrahim, M.; Hamam, H. AdPSO: Adaptive PSO-Based Task Scheduling Approach for Cloud Computing. Sensors 2022, 22, 920. [Google Scholar] [CrossRef]
  72. Sharma, N.; Sonal; Garg, P. Ant Colony Based Optimization Model for QoS-Based Task Scheduling in Cloud Computing Environment. Meas. Sens. 2022, 24, 100531. [Google Scholar] [CrossRef]
  73. Dubey, K.; Sharma, S.C. A Novel Multi-Objective CR-PSO Task Scheduling Algorithm with Deadline Constraint in Cloud Computing. Sustain. Comput. Inform. Syst. 2021, 32, 100605. [Google Scholar] [CrossRef]
  74. Pradhan, A.; Bisoy, S.K. A Novel Load Balancing Technique for Cloud Computing Platform Based on PSO. J. King Saud Univ.—Comput. Inf. Sci. 2022, 34, 3988–3995. [Google Scholar] [CrossRef]
  75. Alsaidy, S.A.; Abbood, A.D.; Sahib, M.A. Heuristic Initialization of PSO Task Scheduling Algorithm in Cloud Computing. J. King Saud Univ.—Comput. Inf. Sci. 2022, 34, 2370–2382. [Google Scholar] [CrossRef]
  76. Dogani, J.; Khunjush, F. Cloud Service Composition Using Genetic Algorithm and Particle Swarm Optimization. In Proceedings of the 2021 11th International Conference on Computer Engineering and Knowledge (ICCKE), Mashhad, Iran, 28 October 2021; IEEE: Piscataway, NJ, USA; pp. 98–104. [Google Scholar]
  77. Nezafat Tabalvandani, M.A.; Hosseini Shirvani, M.; Motameni, H. Reliability-Aware Web Service Composition with Cost Minimization Perspective: A Multi-Objective Particle Swarm Optimization Model in Multi-Cloud Scenarios. Soft Comput. 2024, 28, 5173–5196. [Google Scholar] [CrossRef]
  78. Jangu, N.; Raza, Z. Improved Jellyfish Algorithm-Based Multi-Aspect Task Scheduling Model for IoT Tasks over Fog Integrated Cloud Environment. J. Cloud Comput. 2022, 11, 98. [Google Scholar] [CrossRef]
  79. Li, Q.; Peng, Z.; Cui, D.; Lin, J.; Zhang, H. UDL: A Cloud Task Scheduling Framework Based on Multiple Deep Neural Networks. J. Cloud Comput. 2023, 12, 114. [Google Scholar] [CrossRef]
  80. Kadhim, Q.K.; Yusof, R.; Mahdi, H.S.; Ali Al-shami, S.S.; Selamat, S.R. A Review Study on Cloud Computing Issues. J. Phys. Conf. Ser. 2018, 1018, 012006. [Google Scholar] [CrossRef]
  81. Cheikh, S.; Walker, J.J. Solving Task Scheduling Problem in the Cloud Using a Hybrid Particle Swarm Optimization Approach. Int. J. Appl. Metaheuristic Comput. 2021, 13, 1–25. [Google Scholar] [CrossRef]
  82. Wei, X. Task Scheduling Optimization Strategy Using Improved Ant Colony Optimization Algorithm in Cloud Computing. J. Ambient. Intell. Hum. Comput. 2020. [Google Scholar] [CrossRef]
  83. Chahal, P.K.; Kumar, K.; Soodan, B.S. Grey Wolf Algorithm for Cost Optimization of Cloud Computing Repairable System with N-Policy, Discouragement and Two-Level Bernoulli Feedback. Math. Comput. Simul. 2024, 225, 545–569. [Google Scholar] [CrossRef]
  84. Yu, N.; Zhang, A.-N.; Chu, S.-C.; Pan, J.-S.; Yan, B.; Watada, J. Innovative Approaches to Task Scheduling in Cloud Computing Environments Using an Advanced Willow Catkin Optimization Algorithm. Comput. Mater. Contin. 2025, 82, 2495–2520. [Google Scholar] [CrossRef]
  85. Calheiros, R.N.; Ranjan, R.; Beloglazov, A.; De Rose, C.A.F.; Buyya, R. CloudSim: A Toolkit for Modeling and Simulation of Cloud Computing Environments and Evaluation of Resource Provisioning Algorithms. Softw. Pract. Exp. 2011, 41, 23–50. [Google Scholar] [CrossRef]
Figure 1. COA process.
Figure 2. COA pseudocode.
Figure 3. Defining High-Demand and Low-Demand Services pseudocode.
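The original figure is an image and is not reproduced here. As a rough companion to the caption above, the following Python sketch shows one way the demand-based split could be expressed; the "demand" field, the median cut-off, and the dict encoding are illustrative assumptions, not the paper's exact rule.

```python
def split_by_demand(services, threshold=None):
    # services: list of dicts, each with a numeric "demand" value
    # (e.g., requests per second). Default cut-off: the median demand.
    demands = sorted(s["demand"] for s in services)
    if threshold is None:
        threshold = demands[len(demands) // 2]
    host_eggs = [s for s in services if s["demand"] >= threshold]    # high-demand services
    cuckoo_eggs = [s for s in services if s["demand"] < threshold]   # low-demand services
    return host_eggs, cuckoo_eggs
```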
Figure 4. Initial Population Generation pseudocode.
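A minimal sketch of initial population generation, assuming each nest (candidate composition) is encoded as a vector of VM indices, one per service; this encoding is an assumption made only for illustration.

```python
import random

def initial_population(num_nests, num_services, num_vms):
    # One nest = one candidate composition: a randomly chosen VM index
    # for every service in the composition.
    return [[random.randrange(num_vms) for _ in range(num_services)]
            for _ in range(num_nests)]
```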
Figure 5. Lévy-flight-based implementation pseudocode.
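The Lévy-flight step is standard in cuckoo-style search; the sketch below uses Mantegna's method for the step length and the usual move toward the best-known nest. The step scale alpha, the exponent beta = 1.5, and the integer clamping are assumptions for this integer-encoded illustration, not the paper's exact parameters.

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's method: draw a heavy-tailed, Lévy-distributed step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def levy_move(nest, best, num_vms, alpha=1.0):
    # Perturb the current nest toward the best-known nest with Lévy steps,
    # then round/clamp each dimension back to a valid VM index.
    new_nest = []
    for x, b in zip(nest, best):
        step = alpha * levy_step() * (x - b)
        new_nest.append(min(num_vms - 1, max(0, round(x + step))))
    return new_nest
```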
Figure 6. Fitness function pseudocode.
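The paper's fitness function is given only as a figure; a plausible weighted-sum form over the objectives named in the abstract (cost, response time, load balance, energy) is sketched below. The weights and metric names are assumptions; candidate metrics are assumed pre-normalized to [0, 1], with lower fitness being better.

```python
def fitness(metrics, weights=(0.3, 0.3, 0.2, 0.2)):
    # metrics: dict of normalized objective values for one candidate composition.
    w_cost, w_time, w_load, w_energy = weights
    return (w_cost * metrics["cost"] +
            w_time * metrics["response_time"] +
            w_load * metrics["load_imbalance"] +
            w_energy * metrics["energy"])
```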
Figure 7. Discovery and Replacement Strategy pseudocode.
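A sketch of the discovery-and-replacement step under the usual cuckoo-search convention: with probability pa a nest is "discovered" and rebuilt as a fresh random composition. The value pa = 0.25 and the helper functions random_nest and evaluate are placeholders, not the paper's parameters.

```python
import random

def discover_and_replace(nests, fitnesses, random_nest, evaluate, pa=0.25):
    # With probability pa the host bird discovers the foreign egg; that nest is
    # abandoned and replaced by a freshly generated random composition.
    for i in range(len(nests)):
        if random.random() < pa:
            nests[i] = random_nest()
            fitnesses[i] = evaluate(nests[i])
    return nests, fitnesses
```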
Figure 8. Nest Selection and elitism pseudocode.
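Nest selection with elitism is sketched below under the common greedy rule: a candidate replaces its nest only if it is fitter (lower fitness), and the best nest found so far is always retained. This is an assumed reading of the step named in the caption, not a transcription of the figure.

```python
def select_with_elitism(nests, fitnesses, candidates, cand_fitnesses):
    # Greedy per-nest replacement, then report the elite (best) nest.
    for i, (cand, cf) in enumerate(zip(candidates, cand_fitnesses)):
        if cf < fitnesses[i]:
            nests[i], fitnesses[i] = cand, cf
    best = min(range(len(nests)), key=lambda i: fitnesses[i])
    return nests, fitnesses, nests[best], fitnesses[best]
```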
Figure 9. Optimizing Task Scheduling pseudocode.
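For the task-scheduling step, the sketch below decodes a solution vector into per-VM loads and a makespan estimate; the task-length/MIPS timing model is a simplification assumed only for illustration.

```python
def decode_schedule(solution, task_lengths, vm_mips):
    # solution[i] = index of the VM assigned to task i.
    # Returns the estimated finish time of each VM and the overall makespan.
    finish = [0.0] * len(vm_mips)
    for task, vm in enumerate(solution):
        finish[vm] += task_lengths[task] / vm_mips[vm]
    return finish, max(finish)
```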
Figure 10. Energy-Aware VM Consolidation pseudocode.
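A sketch of energy-aware VM consolidation in the spirit of the caption: under-loaded hosts hand their VMs to the busiest host that still stays below an upper utilization threshold, so the emptied hosts can be put to sleep. The 20%/80% thresholds and the utilization bookkeeping are assumptions, not the paper's settings.

```python
def consolidation_plan(hosts, low=0.2, high=0.8):
    # hosts: dict host_id -> list of per-VM CPU utilization fractions.
    plan = []
    for src in list(hosts):
        if sum(hosts[src]) >= low:
            continue  # host is busy enough; leave it alone
        for util in list(hosts[src]):
            targets = [h for h in hosts
                       if h != src and sum(hosts[h]) + util <= high]
            if not targets:
                continue  # nowhere to place this VM without overloading
            dst = max(targets, key=lambda h: sum(hosts[h]))
            hosts[src].remove(util)
            hosts[dst].append(util)
            plan.append((src, dst, util))
    return plan  # list of (source_host, destination_host, vm_utilization) migrations
```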
Figure 11. Algorithm termination.
Figure 12. Execution time comparison.
Figure 13. Power consumption.
Figure 14. Load-balancing efficiency.
Figure 15. SLA violations.
Figure 16. Cost analysis.
Figure 17. Latency (delay in processing requests).
Figure 18. Response time.
Figure 19. VM utilization.
Figure 20. Average VM utilization comparison (COA, PSO, and ACO).
Figure 21. Task completion rate comparison.
Figure 22. Average task completion rate comparison (COA, PSO, and ACO).
Table 1. Optimization techniques and evaluated performance metrics in cloud computing.
Performance Metrics Evaluated | Author(s)/Year | Optimization Technique/Algorithm
Response Time | Kurdi et al., 2018 [22] | Bio-inspired optimization algorithm based on cuckoo birds
Load Balance, VM Utilization | Tarawneh et al., 2022 [8] | Spider Monkey Optimization (SMO) algorithm
Latency and Response Time | Bei et al., 2024 [29] | ACO
Task Scheduling, SLA | Jangu and Raza, 2022 [78] | Improved Jellyfish Algorithm
Task Scheduling, Execution Time | Li et al., 2023 [79] | Multiple deep neural networks
Load Balancing, Execution Time, Cost | Kadhim et al., 2018 [80] | Limitations of Traditional Rule-Based Approaches
QoS Optimization, Avoid Local Optima | Bei et al., 2024 [29] | Enhanced ACO + Genetic-Algorithm-Inspired Mutation
Data-Intensive Task Scheduling | Shao et al., 2023 [64] | PGSAO (GA + PSO) hybrid
Task Scheduling Across VMs | Cheikh and Walker, 2021 [81] | Hybrid Particle Swarm Optimization
Task Scheduling, Load Balancing | Wei, 2022 [82] | Improved ACO with Satisfaction Function
Fault Tolerance, Cost Optimization | Chahal et al., 2024 [83] | Grey Wolf Algorithm (GWA)
Multi-objective Task Scheduling | Behera and Sobhanayak, 2024 [53] | Hybrid Genetic Algorithm + Grey Wolf Optimization (GA-GWO)
Task Scheduling, Load Balancing | Yu et al., 2025 [84] | Advanced Willow Catkin Optimization (AWCO)
Task Scheduling, Load Balancing | Pradhan and Bisoy, 2022 [74] | PSO
Task Scheduling | Sharma et al., 2022 [72] | ACO
Cost | Ahmad et al., 2023 [85] | Cost optimization in a cloud/fog environment based on Task Deadline
Table 2. CloudSim configuration.
Parameter | Value
Number of Requests | 1000–5000 (varies)
Number of Cloudlets | 100–500 (varies)
Number of VMs | 10–50
Number of Data Centers | 1–3
Host Configuration | Quad-core, 16 GB RAM
VM Configuration | 2 vCPUs, 4 GB RAM, 1000 MIPS
Scheduling Policy | Time-Shared/Space-Shared
Load Balancing Algorithm | COA, PSO, ACO
Simulation Runs | 10
Energy-Aware Mechanism | Enabled
Number of Services per Composition | 5–20
Service Categories | Computation-intensive, storage-intensive, latency-sensitive, energy-efficient
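Table 2 reports parameter ranges rather than a fixed grid. As a hedged illustration only, the sketch below enumerates one possible experiment sweep over those ranges; the sampled points, field names, and algorithm list are assumptions, since the exact combinations used in the simulations are not listed here.

```python
from itertools import product

# Assumed sample points inside the ranges reported in Table 2.
requests = [1000, 2000, 3000, 4000, 5000]
cloudlets = [100, 200, 300, 400, 500]
vms = [10, 20, 30, 40, 50]

experiments = [
    {"requests": r, "cloudlets": c, "vms": v,
     "vm_spec": {"vcpus": 2, "ram_gb": 4, "mips": 1000},
     "runs": 10, "algorithms": ["COA", "PSO", "ACO"]}
    for r, c, v in product(requests, cloudlets, vms)
]
print(len(experiments), "experiment configurations")
```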