1. Introduction
In the last few years, we have seen a rapid evolution in the field of information technology, where everything is interconnected. The Internet of Things (IoT) enables seamless communication between people, processes, and objects through embedded devices that connect them to the internet. The IoT also poses a significant challenge for the future: connecting a huge set of objects and data streams through the internet. This large-scale interconnection necessitates a robust infrastructure to manage and support numerous connected devices. Thus, cloud computing is essential for storing, processing, and analyzing the vast data streams generated by IoT devices [1].
In cloud computing, client requests are often routed through an intermediary that acts as a bridge between clients and cloud service providers. This intermediary, known as a cloud broker, is defined by the International Organization for Standardization as a service that streamlines interactions between cloud service users and providers [2]. Cloud service providers also leverage cloud brokers to enhance resource selection and optimize service delivery.
Cloud brokering presents a significant challenge in resource allocation within cloud environments, as it must account for the growing complexity and heterogeneity of cloud services. This challenge is closely tied to dynamic user demands, requiring the selection of the most suitable offer from multiple service providers responding to the same request. As cloud computing continues to evolve and power everything from enterprise systems to consumer applications, the number and variety of services offered by providers have expanded dramatically. For businesses and developers, choosing the right combination of resources, one that balances cost, performance, energy efficiency, and reliability, has become increasingly complex. This is where cloud service brokers (CSBs) come into play. Acting as intelligent intermediaries, CSBs simplify the resource selection process, negotiate service-level agreements (SLAs), and optimize deployments across multiple providers. Prior research has emphasized the strategic importance of CSBs in enabling agile, market-oriented cloud management [3].
The cloud brokerage problem is thus modeled as a multi-objective optimization task with three primary goals: minimizing the response time for user requests, reducing energy consumption, and maximizing broker profits. These goals often conflict, and the underlying allocation problem is NP-hard, making exact solutions intractable for large-scale, real-time scenarios [4]. As a result, researchers have turned to approximation methods, particularly heuristic and metaheuristic algorithms. Among the most prominent approaches are genetic algorithms (GAs) [5,6,7] and the Non-dominated Sorting Genetic Algorithm II (NSGA-II), often enhanced with local searches [8]. Hybrid techniques, such as combining GAs with simulated annealing [9], have also been introduced to boost the solution quality and convergence speed [10]. However, most existing approaches tend to emphasize specific Quality-of-Service (QoS) parameters, such as the execution time or energy consumption, without effectively balancing all the objectives [11]. To address these limitations, this study proposes an enhanced version of the NSGA-III algorithm integrated with a Genetic K-means clustering method (NSGA-III-GKM). NSGA-III, originally introduced by Deb et al. [12], was designed to overcome NSGA-II's weaknesses, particularly in diversity maintenance and scalability. Building on these fundamentals, our framework leverages clustering to guide the solution evolution and improve the convergence. Such hybrid models reflect the growing trend toward more robust, adaptive optimization frameworks that can better handle complex, high-dimensional problems in cloud environments [12].
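To make the nature of the tradeoff concrete, the brokerage decision can be written, in simplified form, as the following tri-objective assignment problem; the notation (x, t, e, p, c) is introduced here purely for illustration and does not reproduce the exact model of the cited works.

```latex
% Illustrative tri-objective model; x_{ij}, t_{ij}, e_{ij}, p_{i}, and c_{ij}
% are notation introduced here, not taken from the cited works.
\begin{align}
  \min_{x}\ & T(x)=\sum_{i}\sum_{j} t_{ij}\,x_{ij} \quad \text{(total response time)}\\
  \min_{x}\ & E(x)=\sum_{i}\sum_{j} e_{ij}\,x_{ij} \quad \text{(energy consumption)}\\
  \max_{x}\ & P(x)=\sum_{i}\sum_{j} \left(p_{i}-c_{ij}\right)x_{ij} \quad \text{(broker profit)}\\
  \text{s.t.}\ & \sum_{j} x_{ij}=1 \ \ \forall i, \qquad x_{ij}\in\{0,1\},
\end{align}
```

where x_{ij} = 1 assigns customer request i to provider offer j, t_{ij} and e_{ij} are that offer's response time and energy cost, p_i is the price charged to the customer, and c_{ij} is the amount the broker pays the provider.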
Building upon the work of [13], where the NSGA-III-GKM algorithm leverages K-means clustering to group initial reference points and refines cluster centers using a genetic algorithm, we enhance this approach by integrating K-means++. Recognized for its intelligent cluster initialization, K-means++ mitigates the risk of local minima, improves the clustering quality, and accelerates the convergence [14]. This enhancement leads to our proposed NSGA-III-GKM++ algorithm, designed to effectively balance conflicting objectives and generate high-quality solutions in complex optimization scenarios.
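To illustrate the seeding idea behind this choice, the short sketch below implements the standard D²-weighted initialization that K-means++ is known for; the function name and the use of NumPy are our own, and the sketch is independent of the NSGA-III-GKM++ pipeline described later.

```python
import numpy as np

def kmeanspp_init(points: np.ndarray, k: int, rng=None) -> np.ndarray:
    """D^2-weighted seeding: a sketch of K-means++ centre initialization.

    points : (n, d) array of candidate solutions (e.g., objective vectors).
    k      : number of clusters / reference groups.
    Returns a (k, d) array of initial centres.
    """
    rng = np.random.default_rng(rng)
    n = points.shape[0]
    # First centre: chosen uniformly at random.
    centres = [points[rng.integers(n)]]
    for _ in range(1, k):
        # Squared distance of every point to its nearest chosen centre.
        d2 = np.min(
            ((points[:, None, :] - np.asarray(centres)[None, :, :]) ** 2).sum(-1),
            axis=1,
        )
        # Next centre: sampled with probability proportional to d^2, which
        # pushes centres apart and reduces the chance of poor local minima.
        probs = d2 / d2.sum()
        centres.append(points[rng.choice(n, p=probs)])
    return np.asarray(centres)
```

After this seeding, the usual K-means assignment/update iterations (or, in our case, the genetic refinement of centres) proceed unchanged; only the starting centres differ from plain K-means.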
Our primary goal is to evaluate the superiority of NSGA-III-GKM++ over established algorithms, including Multi-Objective Particle Swarm Optimization (MOPSO), NSGA-II, and the standard NSGA-III-GKM. This optimization is particularly critical in the context of the exponential growth of IoT devices and cloud computing services, which impose increasing demands on efficient resource management. In this regard, cloud brokerage, serving as an intermediary between service providers and consumers, presents a multi-faceted challenge that necessitates a delicate balance among the response time, energy consumption, and broker profitability.
This paper is structured as follows: Section 2 presents a comprehensive literature review, highlighting related work and existing solutions. Section 3 details our proposed methodology and the components of the NSGA-III-GKM++ approach. Section 4 focuses on the implementation and experimental evaluation, while Section 5 discusses the results and concludes with insights into the effectiveness of our approach for optimizing cloud brokerage.
2. Related Works
The rapid proliferation of connected devices across networks generates an ever-increasing volume of data, creating a growing demand for efficient analysis and storage and making local, temporary storage impractical. The IoT consists of small, widely distributed devices with limited storage and processing capacities, leading to challenges in reliability, performance, security, and privacy. The integration of the Internet of Things (IoT) with Cloud Computing (CC) has led to the emergence of the Cloud of Things (CoT), also known as the CloudIoT, a paradigm that merges the data generation capabilities of smart objects with the processing and storage power of cloud infrastructures. The CoT allows the functions of smart devices to be offered as services, making them accessible to multiple applications simultaneously [15]. This synergy enables scalable, real-time data processing and centralized control, which are crucial for application domains such as smart cities, industrial automation, connected healthcare, and autonomous transportation [16,17,18,19]. By offloading heavy computational tasks to the cloud, the CloudIoT enhances the overall capabilities of IoT systems, effectively overcoming the inherent limitations of resource-constrained edge devices. Despite these advantages, the convergence of the IoT and cloud computing also introduces several operational and architectural challenges, including dynamic resource allocation, latency control, energy consumption, and data security. Such challenges are especially critical in large-scale and distributed environments, where response times and bandwidth constraints must be carefully managed. Moreover, in multi-cloud ecosystems, where services are distributed across multiple providers, these challenges are further exacerbated by heterogeneous infrastructures, fluctuating workloads, and diverse QoS requirements. As a result, the success of the CloudIoT heavily relies on the development of intelligent and adaptive cloud brokerage mechanisms capable of handling these complex, multi-objective tradeoffs.
Most existing cloud brokerage schemes overlook optimization techniques for service deployment, and several genetic-algorithm-based approaches have been proposed to address this issue. Early approaches to cloud brokerage primarily focused on heuristic and genetic algorithms (GAs). For instance, refs. [5,6] applied GAs to optimize virtual machine (VM) selection based on response times and costs. Ref. [7] introduced a genetic method for multi-cloud service brokerage, optimizing VM selection for low latency and cost using the CloudSim framework. While outperforming prior methods, it fails to consider service request localization. Similarly, QoS-aware genetic cloud brokering (QBROKAGE) [5] optimizes Infrastructure-as-a-Service (IaaS) resource selection based on QoS metrics, like response times and throughput, achieving near-optimal solutions even with hundreds of providers. However, it lacks adaptability for dynamic applications requiring resource elasticity.
Hybrid techniques combining GAs with simulated annealing or local searches [20] have shown improvements in the convergence speed and solution quality but have often neglected aspects like service request localization or profit maximization. To further optimize cloud brokerage, previous work, such as [21], has formulated the problem as a cost and execution time minimization task, employing a biased random key genetic algorithm (BRK-GA) to achieve fast decision making and high-quality solutions. However, similar to other early approaches, it failed to consider critical factors, like the geographical distribution of service requests, limiting adaptability in heterogeneous cloud environments. These limitations underscore the broader demand for more adaptive, context-aware optimization strategies. Recent studies have responded by emphasizing the need for intelligent cloud brokerage mechanisms capable of efficiently allocating multi-cloud resources while addressing conflicting objectives, such as cost, latency, and energy consumption. Moreover, the evolution of cloud brokerage has expanded beyond algorithmic efficiency to include architectural challenges, such as interoperability, SLA negotiation, and elasticity management [22], highlighting the complexity and multidimensionality of modern cloud service brokerage.
A multi-objective genetic algorithm in cloud brokerage (MOGA-CB) [6] distributes VM requests to minimize the cost and response time using an evolutionary search, achieving Pareto-optimal solutions. However, it does not optimize VM instance-level pricing. NSGA-II-based approaches [23] have also been employed to handle multi-cloud brokerage problems by simultaneously considering the cost, response time, and availability, with CDOXplorer achieving up to 60% improvement over traditional methods [24]. Despite their promising results, these approaches often overlook service request localization and face scalability limitations in dynamic environments. Evolutionary multi-objective optimization (EMO) algorithms, such as NSGA-II and NSGA-III, remain widely adopted due to their strong capabilities to approximate the Pareto front across multiple objectives [12,25,26]. However, in high-dimensional objective spaces, their convergence behavior can deteriorate. To address this, clustering-enhanced strategies have emerged. For instance, ref. [27] proposed a K-means-based EMO that improves convergence by preserving the population diversity. These advances are conceptually aligned with decomposition-based approaches, like MOEA/D [28], which focus on dividing the objective space to simplify optimization and promote convergence across diverse solution regions.
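As background for these EMO methods, the following minimal helper (our own illustrative code, assuming all objectives are to be minimized) shows the Pareto dominance test on which non-dominated sorting in NSGA-II and NSGA-III is built: a solution survives on the approximated front only if no other solution is at least as good on every objective and strictly better on at least one.

```python
import numpy as np

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(objs: np.ndarray) -> np.ndarray:
    """Return the indices of the non-dominated rows of an (n, m) objective matrix."""
    n = objs.shape[0]
    keep = [i for i in range(n)
            if not any(dominates(objs[j], objs[i]) for j in range(n) if j != i)]
    return np.array(keep)
```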
To enhance scheduling and resource management, Extreme NSGA-III (E-NSGA-III) [29] optimizes scientific workflows, outperforming NSGA-II and NSGA-III. Similarly, NSGA-III-based Virtual Machine Placement (VMP) [30] improves resource allocation by 7% and enhances environmental efficiency. In content delivery networks (CDNs), SMS-EMOA [31] reduces cloud resource costs by up to 10.6% while maintaining high QoS, showing that such an approach can deploy cloud-based CDNs at a lower cost without compromising service quality.
The main objective of the cloud service provider is to increase the profit of the cloud infrastructure, while cloud users want to run their applications at the minimum cost and in the minimum execution time. Reducing the energy consumption and achieving the maximum profit in a cloud environment are challenging problems due to the mismatch between the capacities of the workstations (physical machines) and unpredictable user demands.
PSO-COGENT [32] improves scheduling by reducing the execution time, cost, and energy consumption but lacks QoS considerations. Similarly, CLOUDRB [33] integrates PSO for resource allocation, outperforming GAs, the ant colony optimization algorithm (ACO), and the Reformative Bat Algorithm (RBA) in cost and deadline adherence; the reported results show that it executes tasks within the specified deadlines while achieving significant reductions in both the execution time and cost. Artificial-Bee-Colony (ABC)-based scheduling [34] dynamically adjusts the VM allocation, optimizing costs while balancing the workload distribution. However, these approaches face limitations in adaptability and QoS optimization. To meet these objectives, characterizing the relationship between the service provider's profit and the customer's satisfaction through a utility model makes it possible to address the two main criteria, the broker's profit and the customer's satisfaction, at the same time.
In the Software-as-a-Service (SaaS) cloud market, multiple equivalent services exist, each with different QoS attributes. Selecting and configuring services to meet diverse tenant needs is an NP-hard multi-objective problem. Existing approaches focus on QoS adaptation but overlook multi-tenant deployment flexibility.
Ref. [35] proposed a dynamic service composition framework based on a decomposition-based multi-objective evolutionary algorithm with stable matching (MOEA/D-STM) and NSGA-II, with results showing MOEA/D-STM outperforming NSGA-II in the solution quality and computation time. Cloud service providers (CSPs) must reduce the energy consumption while maximizing profits, but minimizing active servers and optimizing the VM placement are conflicting objectives. The maximum VM placement with the minimum power consumption (MVMP) [36] addresses this using simulated annealing (SA), outperforming five state-of-the-art algorithms in energy efficiency, profit, and execution time. However, VM migration remains time consuming.
For multi-cloud service brokerage, ref. [37] proposed a large neighborhood search (LNS) approach, reducing costs by 10–12% compared to those of a greedy heuristic. While LNS avoids local optima by enlarging the neighborhood size, it remains an individual-based method, since it maintains only a single solution in the search space at any time.
Selecting the optimal cloud resources while meeting QoS requirements is challenging due to service heterogeneity. Ref. [13] proposed a hybrid NSGA-II approach with local searches to minimize the deployment costs and response time, outperforming NSGA-II and the Strength Pareto Evolutionary Algorithm 2 (SPEA2), though it neglects the VM location's impact. In a cloud IaaS environment, selecting VMs from different data centers with multiple objectives, like minimizing the response time, cost, and energy consumption, is a challenging problem due to the heterogeneity of services in terms of resources and technology.
The combination of NSGA-II and the Gravitational Search Algorithm (GSA) [38] improves VM selection for scheduling but prioritizes task scheduling over choosing the optimal computing resource, that is, the VM on which the execution should take place.
Ref. [10] proposed a novel parallel evolutionary algorithm to address the problem of subleasing virtual machines in CC in order to optimize the cloud broker's earnings. The problem concerns the efficient allocation of a set of client VM requests to resources pre-reserved by the cloud broker. The parallel algorithm uses a distributed subpopulation model and a simulated annealing operator and was compared with a greedy heuristic using real data from cloud providers, significantly outperforming the best existing results in the literature. However, neither method was able to simultaneously optimize the response time and the profit of the cloud broker.
Cloud computing environments offer clients a variety of on-demand services and facilitate resource sharing. Business processes are managed through the cloud using workflow technology, which presents challenges for efficient resource utilization due to task dependencies [39]. To address this, the hybrid GA-PSO algorithm proposed in [39] is designed to assign tasks to resources efficiently. This algorithm aims to reduce the duration and cost and balance the load of dependent tasks across heterogeneous resources in cloud computing environments. The experimental results show that the GA-PSO algorithm reduces the total execution time of workflow tasks compared with GAs, PSO, the hybrid simplex-genetic algorithm (HSGA), the weighted-sum genetic algorithm (WSGA), and MTCT algorithms. In addition, it reduces the implementation cost and improves the load balancing of the workflow application with the available resources. The results demonstrate that the proposed algorithm not only reaches good solutions more quickly but also delivers higher-quality outcomes compared to those of other algorithms.
This section critically evaluates established cloud brokerage optimization techniques, including GAs, swarm intelligence (SI), and hybrid models, emphasizing their efficacy in addressing specific facets of multi-objective optimization [40]. Genetic algorithms demonstrate robust performance in balancing competing objectives, such as response times and energy efficiency, through mechanisms like Pareto front exploration, enabling systematic tradeoff analyses between these criteria. Swarm intelligence methods, such as particle swarm optimization, excel in dynamic environments by leveraging decentralized search mechanisms to iteratively refine resource allocation, thereby optimizing energy consumption without compromising service latency. Hybrid approaches further enhance optimization outcomes by combining the global search capabilities of GAs with the adaptability of SI, allowing them to effectively navigate complex solution spaces. A notable refinement within this context is the integration of advanced clustering methods, such as K-means++, which improves centroid initialization by reducing the likelihood of suboptimal local minima [41]. When embedded into metaheuristic frameworks, K-means++ has been shown to accelerate the convergence and enhance the solution quality by promoting better population structuring and maintaining diversity [42].
However, despite promising results in controlled scenarios, these hybrid methods often fail to fully address real-world constraints. Scalability, critical for large-scale and heterogeneous cloud infrastructures, is frequently under-evaluated, especially under dynamic workloads or in geographically distributed environments. Likewise, adaptability to unpredictable operational conditions, such as workload surges or hardware failures, remains insufficiently explored, raising questions about their practical deployment.
To bridge these gaps in scalability, adaptability, and clustering effectiveness, we propose the NSGA-III-GKM++ framework, which tightly integrates K-means++ clustering with reference-point-based evolutionary optimization. By grouping objectives or resources into coherent clusters, K-means++ reduces the computational complexity and enhances scalability, while NSGA-III's reference-point-driven selection ensures responsiveness in dynamic, multi-cloud scenarios. This dual strategy retains the strengths of existing methods, such as the optimization of the response time and energy efficiency, while addressing core limitations related to the convergence speed, diversity, and operational robustness, ultimately offering a more comprehensive and practical solution for real-world cloud brokerage challenges.
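As a rough architectural sketch of how clustering can steer survival selection in such a framework, the code below combines a simple dominance ranking with a K-means++-seeded clustering of the objective vectors. The helper names, the use of scikit-learn's KMeans, and the per-cluster selection rule are illustrative assumptions and deliberately simplify NSGA-III's reference-point association, so this should be read as a sketch of the idea rather than the exact NSGA-III-GKM++ procedure.

```python
import numpy as np
from sklearn.cluster import KMeans  # provides K-means++ seeding via init="k-means++"

def domination_count(F: np.ndarray) -> np.ndarray:
    """For each solution, count how many others dominate it (0 = non-dominated).

    F is an (n, m) matrix of objective values, all to be minimized; the count is
    used below as a simple stand-in for full non-dominated sorting.
    """
    n = len(F)
    counts = np.zeros(n, dtype=int)
    for i in range(n):
        counts[i] = sum(
            1 for j in range(n)
            if j != i and np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
        )
    return counts

def cluster_guided_survival(X, F, pop_size, n_clusters):
    """Keep pop_size solutions, using K-means++ clusters in objective space for spread."""
    counts = domination_count(F)
    order = np.argsort(counts)                      # least-dominated solutions first
    pool = order[: 2 * pop_size]                    # truncate to a manageable candidate pool
    km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=5).fit(F[pool])
    survivors = []
    for c in range(n_clusters):                     # one representative per cluster first
        members = pool[km.labels_ == c]
        if members.size:
            survivors.append(members[np.argmin(counts[members])])
    for i in order:                                 # then fill the rest by dominance quality
        if len(survivors) >= pop_size:
            break
        if i not in survivors:
            survivors.append(i)
    idx = np.array(survivors[:pop_size])
    return X[idx], F[idx]
```

In the full algorithm, a step of this kind would sit inside the usual generational loop (genetic variation, evaluation of the three brokerage objectives, survival), with NSGA-III's reference-point niching complementing or replacing the simple per-cluster pick used here.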
5. Conclusions
In this paper, we have introduced one of the fundamental problems of cloud computing, namely, the cloud brokerage problem. In CC, a customer's request can sometimes be submitted to the cloud through an intermediary, which resides between the customers and the cloud service providers. It is this intermediary, called the cloud broker, that optimizes the resource selection in CC. The brokerage problem consists of choosing the best proposal among the offers received from different providers that respond to the same call, where the call is the request prepared by the customer to specify its requirements. The problem therefore has the objectives of reducing users' request response times, cutting down on CC's energy usage, and optimizing the cloud broker's profits.
For this, we have proposed a new multi-objective optimization approach, named NSGA-III-GKM++, that combines Genetic K-means++ clustering with the improved non-dominated sorting genetic algorithm III (NSGA-III) in order to find the necessary connections between customers and service providers. Extensive simulations have been conducted, and the results show that NSGA-III-GKM++ is able to find suitable solution sets for cloud brokerage. We have compared the performance of the NSGA-III-GKM++ algorithm with those of NSGA-III-GKM, MOPSO, and NSGA-II.
The results demonstrate that, in comparison to the other algorithms, the suggested algorithm effectively lowers the system's response time and energy consumption while also increasing the cloud broker's profit. In future work, we plan to extend our experimental evaluations using widely adopted and comprehensive cloud simulation environments, such as CloudSim, iCanCloud, and GreenCloud. These platforms will enable a deeper exploration of performance under dynamic workloads, pricing models, and infrastructure constraints. Furthermore, to improve the practical usability and deployment of our framework, we intend to develop it as a modular plugin or API that can integrate with existing cloud management platforms (e.g., OpenStack, Kubernetes-based brokers, or public cloud APIs). This will allow cloud providers and brokers to adopt our optimization approach directly within their orchestration systems, thereby broadening its real-world applicability and operational impact. We can also apply our approach to solve the dynamic cloud brokerage problem, and we intend to further improve our algorithm by considering the use of other niching strategies.