Article

Multi-Cloud Security Optimization Using Novel Hybrid JADE-Geometric Mean Optimizer

by Ahmad K. Al Hwaitat 1,* and Hussam N. Fakhouri 2
1 King Abdullah the II IT School, Department of Computer Science, The University of Jordan, Amman 11942, Jordan
2 Data Science and Artificial Intelligence Department, Faculty of Information Technology, University of Petra, Amman 11196, Jordan
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(4), 503; https://doi.org/10.3390/sym17040503
Submission received: 22 January 2025 / Revised: 13 February 2025 / Accepted: 14 February 2025 / Published: 26 March 2025
(This article belongs to the Special Issue Symmetry in Intelligent Algorithms)

Abstract: This paper proposes a novel hybrid metaheuristic, called JADEGMO, that combines the adaptive parameter control of adaptive differential evolution with optional external archive (JADE) with the search strategies of the geometric mean optimizer (GMO). The goal is to enhance both exploration and exploitation strategies for solving complex optimization tasks. JADEGMO inherits JADE’s adaptive mutation and crossover strategies while leveraging GMO’s swarm-inspired velocity updates guided by elite solutions. The experimental evaluations on the IEEE CEC2022 benchmark suite demonstrate that JADEGMO not only achieves superior average performance compared to multiple state-of-the-art methods but also exhibits low variance across repeated runs. Convergence curves, box plots, and rank analyses confirm that JADEGMO consistently finds high-quality solutions while maintaining diversity and avoiding premature convergence. To highlight its applicability, we employ JADEGMO in a real-world multi-cloud security configuration scenario. This problem models the trade-offs among baseline risk, encryption overhead, open ports, privilege levels, and subscription-based security features across three cloud platforms. JADEGMO outperforms other common metaheuristics in locating cost-efficient configurations that minimize risk while balancing overhead and subscription expenses.

1. Introduction

Metaheuristics have become essential for solving complex computational problems that are otherwise intractable using exact methods within practical time constraints [1,2]. By drawing inspiration from natural phenomena such as evolution, swarm intelligence, and adaptive behaviors, metaheuristics offer flexible, high-level frameworks for navigating vast and dynamic solution spaces [3]. This adaptability and robustness have made them indispensable in diverse fields—from logistics and engineering design to machine learning and data analytics—spurring continuous innovation and the hybridization of established approaches.
The recent shift toward multi-cloud environments has introduced additional layers of complexity, particularly in security. In such environments, organizations rely on multiple cloud service providers to avoid vendor lock-in, enhance reliability, and optimize costs [4,5]. However, each provider presents unique operational protocols, cost structures, and compliance standards, making the harmonization of security measures both intricate and critical [6]. The need to maintain consistent security configurations, identity and access controls, data encryption protocols, and disaster recovery strategies across different platforms presents a multi-objective optimization challenge that often involves conflicting requirements for cost, performance, and compliance [7,8]. Traditional optimization methods frequently encounter difficulties in these large-scale, high-dimensional, and discrete problem spaces [9], whereas metaheuristics excel at escaping local optima and adapting to evolving constraints [10,11].
In this context, this paper introduces the novel JADEGMO algorithm for multi-cloud security optimization. The proposed method leverages the self-adaptive control parameters of JADE (an adaptive differential evolution technique) and the intensification–diversification strengths of the geometric mean optimizer (GMO). By uniting these complementary features, JADEGMO balances exploration and exploitation, allowing it to respond dynamically to shifting security threats and changing performance requirements. This adaptive balance is vital in multi-cloud security scenarios, where real-time updates to workloads, cost models, and regulatory demands necessitate continuous, near-optimal reconfiguration.
Beyond the technical aspects, this research is motivated by the growing complexity of hybrid and multi-cloud infrastructure, which increasingly integrates on-premises resources, legacy systems, and various public or private clouds. Addressing security in these heterogeneous settings demands advanced optimization algorithms capable of encompassing additional factors such as latency, interoperability, data locality, and resiliency. The new JADEGMO method aims to fill this gap by providing a robust, scalable, and adaptive approach that can holistically optimize security while balancing multiple, often competing, objectives.

Motivation and Contributions

  • Novel hybrid metaheuristic: We propose a new method—JADEGMO—that combines JADE’s self-adaptive properties with GMO-based global search techniques.
  • Multi-cloud security optimization: The algorithm is specifically tailored to handle security requirements, cost constraints, performance trade-offs, and regulatory factors in multi-cloud environments.
  • Adaptive and scalable: By blending exploration and exploitation, JADEGMO is designed to adapt to dynamic changes in workload demands and evolving cyber threats, offering robust performance in large-scale scenarios.
  • Benchmarking and validation: Comprehensive experiments, using both synthetic and real-world multi-cloud case studies, demonstrate JADEGMO’s effectiveness compared to existing metaheuristics and mathematical programming approaches.

2. Literature Review

The optimization of resource allocation and task scheduling within multi-cloud environments has garnered significant attention in recent years due to the rapid growth in cloud computing and its diverse applications. Several research works have addressed the challenges associated with ensuring efficiency, security, and cost-effectiveness in such environments [12].

2.1. Multi-Cloud Optimization

The problem of service composition across multiple clouds has been widely studied. Various decision-making algorithms have been proposed to select and integrate cloud services while minimizing cost and satisfying quality of service (QoS) constraints. Shirvani [13] introduced a bi-objective genetic optimization algorithm for web service composition in multi-cloud environments, optimizing cost and performance. Similarly, Amirthayogam et al. [14] proposed a hybrid optimization algorithm to enhance QoS-aware service composition. Other researchers, such as Wang et al. [15], explored game-theoretic approaches for scheduling workflows in heterogeneous cloud environments. These studies demonstrate that multi-cloud service composition can reduce costs and improve performance by intelligently distributing components. However, real-world validation remains a challenge, and adapting these methods to dynamic environments like edge-cloud computing is an open issue.

2.2. Scheduling and Resource Allocation

Scheduling in multi-cloud environments is a complex problem that has been addressed through various optimization techniques. Toinard et al. [16] proposed a cloud brokering framework using the PROMETHEE method, incorporating trust and assurance criteria. Díaz et al. [17] focused on the optimal allocation of virtual machines (VMs) with pricing considerations, while Peng et al. [18] addressed cost minimization for cloudlet-based workflows. More recently, Kaur et al. [19] explored bio-inspired algorithms for scheduling in multi-cloud environments. Machine learning techniques have also been integrated into scheduling frameworks, as demonstrated by Cui et al. [20], who proposed a dynamic load-balancing approach based on learning mechanisms. Despite these advances, real-time adaptability and fault tolerance in scheduling remain ongoing research challenges.

2.3. Security and Trust in Multiple Clouds

Security remains a major concern in multi-cloud environments. Casola et al. [21] proposed an optimization approach for security-by-design in multi-cloud applications, ensuring compliance with security policies. John et al. [22] examined attribute-based access control for secure collaborations between multiple clouds, addressing dynamic authorization challenges. Similarly, Yang et al. [23] introduced a secure and cost-effective storage policy using NSGA-II-C optimization. These studies highlight the need for robust security frameworks to ensure trust and confidentiality in multi-cloud operations. However, interoperability and compliance issues continue to pose challenges, requiring further standardization efforts.

2.4. Cloud Brokering and Resource Management

Cloud brokering is a critical aspect of multi-cloud management. Ramamurthy et al. [24] investigated the selection of cloud service providers for web applications in a multi-cloud setting. Pandey et al. [25] developed a knowledge-engineered multi-cloud resource broker to optimize application workflows. Additionally, Addya et al. [26] explored optimal VM coalition strategies for multi-tier applications. While these studies have provided efficient resource management strategies, seamless interoperability between different cloud providers remains a significant challenge. A comparative summary of the key literature on multi-cloud environment strategies is presented in Table 1.
In  [27], the authors propose an optimized encryption-integrated container scheduling framework to enhance the orchestration of the containers in multi-cloud data centers. The study formulated container scheduling as an optimization problem, incorporating constraints related to energy consumption and server consolidation. A novel encryption model based on container attributes was introduced to ensure the security of the migrated containers, while the cost implications of encryption were carefully integrated into the scheduling process. The experimental results demonstrated substantial reductions in the number of active servers and power consumption, alongside balanced server loads, showcasing the efficacy of the proposed approach in real-world scenarios.
The significance of secure data sharing and task management in distributed systems was also emphasized in [28], where a range of methodologies were explored to address data security and privacy concerns. The work highlighted advancements in information flow containment, cooperative data access in multi-cloud settings, and privacy-preserving data mining. Such efforts underline the growing need for integrated security policies and multi-party authorization frameworks to foster trust and collaboration in distributed systems.
Furthermore, the optimization of task scheduling in multi-cloud environments was explored [29], where an improved optimization theory was applied to address the complexities of task allocation among multiple cloud service providers (CSPs). The proposed framework introduces a secure task scheduling paradigm that ensures collaboration among the CSPs while overcoming the limitations of the existing solutions. By addressing interoperability and security concerns, the work contributed to the development of robust scheduling mechanisms tailored to the multi-cloud landscape. Jawade and Ramachandram [30] introduced a secure task scheduling scheme for multi-cloud environments, addressing the issue of risk probability during task execution. The study utilized a hybrid dragon-aided grey wolf optimization (DAGWO) technique to optimize task allocation while considering metrics such as makespan, execution time, utilization cost, and security constraints. The experimental results highlighted the effectiveness of DAGWO in achieving optimal resource utilization and improved security compared to existing methods.
Massonet et al. [31] explored the integration of industrial standard security control frameworks into the multi-cloud deployment process. The study emphasized the importance of selecting cloud service providers (CSPs) based on their security capabilities, including their ability to monitor and meet predefined security requirements. By modeling security requirements as constraints within deployment objectives, the proposed approach aimed to generate optimal deployment plans. The research underscored the critical role of cloud security standards in achieving efficient and secure deployments across multiple CSPs.
Jogdand et al. [32] addressed the security challenges associated with cloud storage, specifically focusing on data integrity, public verifiability, and availability in multi-cloud environments. The proposed solution combined the Merkle Hash Tree with the DepSky system model to ensure robust verification mechanisms and reliable data availability. The work advanced the field by incorporating inter-cloud collaboration to tackle the pressing issue of trustworthy and secure data management in multi-cloud systems.
Ref. [33] provided a platform for diverse studies in cloud computing and related fields. The topics discussed included dynamic resource allocation, image-based password mechanisms, and innovative cloud computing models. These contributions highlight the ongoing efforts to address the multifaceted challenges of cloud computing, particularly in environments that require secure and efficient communication and data management.

2.5. Metaheuristic Algorithms in Edge and Cloud Computing

Metaheuristic algorithms have been extensively employed in edge and cloud computing environments to address a wide range of optimization challenges [34]. These challenges include resource allocation, task scheduling, data routing, and service deployment. The inherent ability of metaheuristics to explore large and complex search spaces efficiently makes them particularly suitable for scenarios where traditional or exact methods are often infeasible due to high computational cost, frequent system reconfigurations, or rapidly changing problem constraints.
Differential evolution (DE) stands out among these metaheuristic techniques because of its simplicity, robustness, and powerful exploration capabilities [35]. One noteworthy variation of DE involves adapting its population size dynamically (i.e., a variable population size), which helps to maintain a balance between the exploration of new solutions and the exploitation of high-quality solutions [36]. This approach is especially useful in UAV-assisted IoT data collection systems, where deployment optimization frequently revolves around determining optimal UAV trajectories and placements. These optimizations aim to maximize data throughput or minimize energy consumption while respecting constraints such as UAV battery limitations and heterogeneous IoT device distributions [37].
Beyond differential evolution, computational intelligence (CI) techniques (e.g., fuzzy systems, neural networks, evolutionary algorithms, and swarm intelligence) have also played a significant role in tackling issues specific to cloud and edge computing [38]. For instance, resource management and scheduling solutions that incorporate CI-based approaches (e.g., genetic algorithms, deep reinforcement learning) can effectively balance energy consumption, latency requirements, and service-level agreements [39]. Similarly, evolutionary strategies for service placement and migration in cloud-edge hierarchies can dynamically reposition services or virtual machines based on changing user demands [40]. Swarm-based techniques, such as ant colony optimization and particle swarm optimization, have proven valuable in optimizing network paths, mitigating congestion, and achieving load balancing in distributed and often volatile edge networks [41].

3. Proposed JADEGMO Hybrid Algorithm

The hybrid algorithm combines the strengths of JADE (adaptive differential evolution with optional external archive) [42] and GMO (geometric mean optimizer) to create an efficient and robust optimization method. JADE is known for its adaptive parameter control and the use of an external archive to maintain diversity in the population, which helps with avoiding premature convergence. GMO, on the other hand, utilizes the concept of the geometric mean for guiding the search process and employs velocity updates similar to particle swarm optimization (PSO) to explore the solution space effectively.
By leveraging the adaptive mutation and crossover mechanisms from JADE along with the elite-guided search and velocity updates from GMO, the algorithm aims to optimize complex functions more effectively. This hybrid approach seeks to enhance both the exploration and exploitation capabilities of the optimization process. The adaptive parameters in JADE ensure a dynamic response to the current state of the search process, while the elite-guided mechanism in GMO helps with focusing the search around the promising regions of the solution space.
  • JADE Operations:
Initialization: The initial population P of size N is generated randomly within the bounds $[lb, ub]$. Each individual $P_i$ is a potential solution in the search space.
Fitness evaluation: The fitness of each individual $P_i$ is evaluated using an objective function $f(P_i)$.
Adaptive parameter control: The crossover rate $CR_i$ and scaling factor $F_i$ for each individual are adapted using normal and Cauchy distributions, as shown in Equations (1) and (2):

$CR_i \sim \mathcal{N}(\mu_{CR}, 0.1)$  (1)

$F_i \sim \mathcal{C}(\mu_F, 0.1)$  (2)

where $\mu_{CR}$ and $\mu_F$ are the mean crossover rate and scaling factor, respectively.
Mutation: The mutant vector $V_i$ for each individual $P_i$ is generated as shown in Equation (3):

$V_i = P_i + F_i \cdot (P_{best} - P_i) + F_i \cdot (P_1 - P_2)$  (3)

where $P_{best}$ is the best individual, and $P_1$ and $P_2$ are randomly selected individuals from the population or archive.
Crossover: The trial vector $U_i$ is created by combining $V_i$ and $P_i$, as shown in Equation (4):

$U_{i,j} = \begin{cases} V_{i,j} & \text{if } \mathrm{rand}(0,1) \le CR_i \text{ or } j = j_{rand} \\ P_{i,j} & \text{otherwise} \end{cases}$  (4)

where $CR_i$ is the crossover rate, and $j_{rand}$ is a randomly chosen index.
Selection: The individual $P_i$ is replaced if $U_i$ has a better fitness, as shown in Equation (5):

$P_i = \begin{cases} U_i & \text{if } f(U_i) < f(P_i) \\ P_i & \text{otherwise} \end{cases}$  (5)
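The JADE operations above can be sketched in Python. This is an illustrative reimplementation, not the authors' released MATLAB code: the function name `jade_generation`, the truncation of $F_i$ to (0, 1], and taking the best individual at the start of the generation are our own assumptions.

```python
import numpy as np

def jade_generation(P, fitness, f_obj, mu_cr, mu_f, archive, lb, ub):
    """One JADE generation: adaptive CR/F sampling (Eqs. 1-2),
    current-to-best mutation (Eq. 3), binomial crossover (Eq. 4),
    and greedy selection with an external archive (Eq. 5)."""
    N, D = P.shape
    best = P[np.argmin(fitness)]            # best individual at generation start
    s_cr, s_f = [], []                      # successful parameters (used to adapt the means)
    for i in range(N):
        # Eqs. (1)-(2): CR_i ~ N(mu_CR, 0.1), F_i ~ Cauchy(mu_F, 0.1)
        cr_i = float(np.clip(np.random.normal(mu_cr, 0.1), 0.0, 1.0))
        f_i = 0.0
        while f_i <= 0.0:                   # resample non-positive F, truncate at 1
            f_i = mu_f + 0.1 * np.random.standard_cauchy()
        f_i = min(f_i, 1.0)
        # Eq. (3): mutate toward the best; difference pair drawn from pop + archive
        pool = np.vstack([P, np.asarray(archive)]) if archive else P
        r1 = P[np.random.randint(N)]
        r2 = pool[np.random.randint(len(pool))]
        v = P[i] + f_i * (best - P[i]) + f_i * (r1 - r2)
        # Eq. (4): binomial crossover with a guaranteed component j_rand
        mask = np.random.rand(D) <= cr_i
        mask[np.random.randint(D)] = True
        u = np.clip(np.where(mask, v, P[i]), lb, ub)
        # Eq. (5): greedy selection; replaced parents enter the archive
        fu = f_obj(u)
        if fu < fitness[i]:
            archive.append(P[i].copy())
            P[i], fitness[i] = u, fu
            s_cr.append(cr_i)
            s_f.append(f_i)
    del archive[:-N]                        # keep the archive no larger than the population
    return P, fitness, archive, s_cr, s_f
```

Because selection is greedy, the best fitness in the population can never worsen from one generation to the next, which is the property JADE relies on when adapting its parameter means.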
  • GMO Operations:
Initialization: The initial positions $pp$ and velocities $pv$ of the particles are generated randomly within their respective bounds $[lb, ub]$ and $[vel_{min}, vel_{max}]$.
Fitness evaluation: The fitness of each particle $pp_i$ is evaluated using an objective function $f(pp_i)$.
Elite selection: The best individuals (elite solutions) are selected based on their fitness values. The number of elite solutions k decreases linearly from $k_{max}$ to $k_{min}$ over the iterations, as shown in Equation (6):

$k = k_{max} - (k_{max} - k_{min}) \cdot \frac{t}{Max\_iter}$  (6)

where t is the current iteration number.
Dual-fitness index (DFI): The DFI for each solution is calculated to assess its quality, as shown in Equation (7):

$DFI_j = \prod_{l \ne j} \frac{1}{1 + \exp\left( \frac{4}{\sigma \sqrt{e}} \left( f(P_l) - \mu \right) \right)}$  (7)

where $\sigma$ is the standard deviation of the fitness values, $\mu$ is the mean fitness, and $f(P_l)$ is the fitness of the l-th solution.
Guide solution calculation: The guide solution $G_i$ is computed as shown in Equation (8):

$G_i = \sum_{j=1}^{k} \frac{DFI_j}{\sum_{l=1}^{k} DFI_l} \cdot P_j$  (8)

where $DFI_j$ are the dual-fitness indices, and k is the number of elite solutions.
Velocity update: The velocity $V_i$ of each particle is updated as shown in Equation (9):

$V_i = w \cdot V_i + \left( 1 + (2 \cdot \mathrm{rand}(0,1) - 1) \cdot w \right) \cdot (M_i - P_i)$  (9)

where $M_i$ is the mutated guide solution, and $w$ is an inertia weight factor.
Position update: The position $P_i$ is updated using the new velocity, as shown in Equation (10):

$P_i = P_i + V_i$  (10)
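A hedged sketch of one GMO iteration, tying Equations (6)-(10) together. The name `gmo_step`, the linearly decaying inertia weight $w = 1 - t/Max\_iter$, and the small Gaussian mutation used to form $M_i$ are our illustrative assumptions, not specifics from this paper.

```python
import numpy as np

def gmo_step(pp, pv, fit, pbest, pbest_fit, t, max_iter,
             lb, ub, vmax, k_max, k_min, f_obj):
    """One GMO iteration: shrinking elite set (Eq. 6), dual-fitness
    index (Eq. 7), DFI-weighted guide (Eq. 8), velocity and position
    updates (Eqs. 9-10)."""
    N, D = pp.shape
    # Eq. (6): number of elites shrinks linearly from k_max to k_min
    k = max(1, int(round(k_max - (k_max - k_min) * t / max_iter)))
    elite = np.argsort(pbest_fit)[:k]
    # Eq. (7): sigmoid membership of each personal-best fitness ...
    mu, sigma = pbest_fit.mean(), pbest_fit.std() + 1e-12
    z = 1.0 / (1.0 + np.exp((4.0 / (sigma * np.sqrt(np.e))) * (pbest_fit - mu)))
    # ... and DFI_j as the product over all other solutions' memberships
    dfi = np.array([np.prod(np.delete(z, j)) for j in range(N)])
    # Eq. (8): guide = DFI-weighted average of the elite personal bests
    wts = dfi[elite] / (dfi[elite].sum() + 1e-12)
    guide = (wts[:, None] * pbest[elite]).sum(axis=0)
    w = 1.0 - t / max_iter                 # inertia weight decays over iterations
    for i in range(N):
        # M_i: a small Gaussian mutation of the guide solution (assumed scale)
        m = guide + 0.01 * (ub - lb) * w * np.random.randn(D)
        # Eqs. (9)-(10): velocity then position update, with bound handling
        pv[i] = w * pv[i] + (1.0 + (2.0 * np.random.rand() - 1.0) * w) * (m - pp[i])
        pv[i] = np.clip(pv[i], -vmax, vmax)
        pp[i] = np.clip(pp[i] + pv[i], lb, ub)
        fit[i] = f_obj(pp[i])
        if fit[i] < pbest_fit[i]:          # per-particle best-so-far bookkeeping
            pbest_fit[i], pbest[i] = fit[i], pp[i].copy()
    return pp, pv, fit, pbest, pbest_fit
```

Note that the guide pulls every particle toward a consensus of the elites weighted by DFI, so late in the run (small k, small w) the swarm contracts around the best-found region.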

3.1. Algorithm Description and Pseudocode

JADEGMO integrates JADE and GMO operations into a cohesive optimization process. The pseudocode outlining the main steps of the algorithm is shown in Algorithm 1, and the corresponding flow chart is shown in Figure 1.
JADEGMO begins with the initialization of the population and velocities within predefined bounds. It then evaluates the fitness of the initial population and initializes an external archive. During each iteration, the JADE operations adaptively adjust the crossover rate and scaling factor, perform mutation and crossover to generate trial vectors, handle boundary conditions, and select better solutions to replace the current population, updating the archive accordingly. Concurrently, the GMO operations update particle velocities and positions, handle boundary conditions, evaluate fitness, and calculate guide solutions based on elite individuals. This iterative process continues until the maximum number of iterations is reached, ultimately returning the best solution found. The hybrid approach effectively explores the solution space, maintains diversity, and improves convergence towards optimal solutions. The source code is available at https://www.mathworks.com/matlabcentral/fileexchange/180068-hybrid-jade-gmo-optimizer (accessed on 20 January 2025).
Algorithm 1 Hybrid JADE-GMO algorithm.
 1: Initialize population P and velocities V randomly within bounds
 2: Evaluate initial fitness of population
 3: Initialize archive A
 4: for iteration = 1 to Max_iter do
 5:     for each individual i in P do
 6:         Adapt CR_i and F_i according to Equations (1) and (2)
 7:         Mutation: Generate V_i using best individual and random individuals as in Equation (3)
 8:         Crossover: Generate U_i by combining V_i and P_i according to Equation (4)
 9:         Boundary handling for U_i
10:         Selection: Update P_i if U_i is better according to Equation (5)
11:         Add replaced individuals to archive A
12:     end for
13:     for each particle i in pp do
14:         Update velocities V using guide solutions and inertia weight as in Equation (9)
15:         Update positions P using new velocities according to Equation (10)
16:         Boundary handling for updated positions
17:         Evaluate fitness of updated positions
18:         Update best-so-far solutions and objectives
19:         Calculate guide solutions using elite solutions as in Equation (8)
20:     end for
21: end for
22: return The best solution found
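To connect the pseudocode to working code, the whole loop can be sketched as a compact, self-contained Python function. This is our illustrative reimplementation under stated assumptions (the greedy acceptance in the GMO phase, the 0.01 mutation scale, and the 0.9/0.1 mean-update weights are simplifications); the released MATLAB version should be treated as authoritative.

```python
import numpy as np

def jadegmo(f, lb, ub, dim, n=30, max_iter=150, seed=0):
    """Compact JADEGMO sketch: a JADE pass (lines 5-12 of Algorithm 1)
    followed by a GMO pass (lines 13-20) in every iteration."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(lb, ub, (n, dim))
    V = np.zeros((n, dim))
    fit = np.array([f(x) for x in P])
    archive, mu_cr, mu_f = [], 0.5, 0.5
    for t in range(max_iter):
        best = P[np.argmin(fit)]
        s_cr, s_f = [], []
        for i in range(n):                    # ---- JADE phase ----
            cr = float(np.clip(rng.normal(mu_cr, 0.1), 0.0, 1.0))
            fi = 0.0
            while fi <= 0.0:                  # resample non-positive F
                fi = mu_f + 0.1 * rng.standard_cauchy()
            fi = min(fi, 1.0)
            pool = np.vstack([P, np.asarray(archive)]) if archive else P
            r1, r2 = P[rng.integers(n)], pool[rng.integers(len(pool))]
            v = P[i] + fi * (best - P[i]) + fi * (r1 - r2)
            mask = rng.random(dim) <= cr
            mask[rng.integers(dim)] = True    # guaranteed crossover component
            u = np.clip(np.where(mask, v, P[i]), lb, ub)
            fu = f(u)
            if fu < fit[i]:                   # greedy selection + archive update
                archive.append(P[i].copy())
                P[i], fit[i] = u, fu
                s_cr.append(cr); s_f.append(fi)
        if s_cr:                              # feedback update of the sampling means
            mu_cr = 0.9 * mu_cr + 0.1 * float(np.mean(s_cr))
            mu_f = 0.9 * mu_f + 0.1 * float(np.sum(np.square(s_f)) / np.sum(s_f))
        del archive[:-n]
        # ---- GMO phase ----
        k = max(2, int(round(n - (n - 2) * t / max_iter)))
        elite = np.argsort(fit)[:k]
        mu_, sg = fit.mean(), fit.std() + 1e-12
        z = 1.0 / (1.0 + np.exp((4.0 / (sg * np.sqrt(np.e))) * (fit - mu_)))
        dfi = np.array([np.prod(np.delete(z, j)) for j in range(n)])
        wts = dfi[elite] / (dfi[elite].sum() + 1e-12)
        guide = (wts[:, None] * P[elite]).sum(axis=0)
        w = 1.0 - t / max_iter
        for i in range(n):
            m = guide + 0.01 * (ub - lb) * w * rng.standard_normal(dim)
            V[i] = w * V[i] + (1.0 + (2.0 * rng.random() - 1.0) * w) * (m - P[i])
            cand = np.clip(P[i] + V[i], lb, ub)
            fc = f(cand)
            if fc < fit[i]:                   # greedy acceptance (a simplification)
                P[i], fit[i] = cand, fc
    b = int(np.argmin(fit))
    return P[b], float(fit[b])
```

Running the sketch on a simple sphere function, e.g. `jadegmo(lambda v: float(np.sum(v * v)), -5.0, 5.0, 5)`, illustrates the behavior described above: a steep early drop driven by the JADE phase, then contraction around the guide solutions.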

3.2. Exploration and Exploitation Behavior of JADEGMO

The exploration phase of JADEGMO is primarily driven by the JADE component. JADE’s adaptive parameter control and external archive significantly contribute to maintaining diversity in the population and preventing premature convergence. JADE dynamically adjusts the crossover rate $CR_i$ and scaling factor $F_i$ for each individual using normal and Cauchy distributions, as described in Equations (1) and (2). This adaptability allows the algorithm to explore different regions of the search space effectively.
The mutation process, as shown in Equation (3), generates mutant vectors by combining the best individual and randomly selected individuals from the population or archive. This helps with exploring new solutions that are not confined to the vicinity of the current population. Additionally, JADE maintains an external archive A of superior solutions that are not part of the current population. This archive enhances the algorithm’s exploratory capability by providing a diverse set of solutions that can be used in the mutation process, ensuring that the search does not become stuck in local optima. During the mutation process, the algorithm randomly selects individuals from the population and the archive. This random selection introduces variability and helps with exploring a wide range of solutions.

Exploitation Behavior

The exploitation phase of JADEGMO is significantly enhanced by the GMO component, which focuses on refining the search around the promising regions of the solution space. GMO selects the best individuals (elite solutions) based on their fitness values. The number of elite solutions decreases linearly from $k_{max}$ to $k_{min}$ over the iterations, as shown in Equation (6). This focus on elite solutions ensures that the algorithm exploits the most promising areas of the search space.
The guide solutions $G_i$, computed using the dual-fitness indices (DFIs) as shown in Equation (8), direct the search towards regions with high potential. These guide solutions help refine the search and improve the quality of the solutions. GMO updates the velocities of the particles using an inertia weight factor $w$ and the difference between the mutated guide solutions and current positions, as shown in Equation (9). This mechanism ensures that the particles are directed towards better solutions, enhancing the exploitation of the search space. The position update, as described in Equation (10), moves the particles closer to the guide solutions, further refining the search around the best-found solutions.
The DFI, calculated as shown in Equation (7), assesses the quality of each solution by considering the fitness values of the other solutions in the population. This index helps with identifying the most promising solutions for exploitation, ensuring that the algorithm focuses its efforts on the best areas of the search space.

3.3. Rationale for Hybridizing JADE and GMO

In this section, we detail the theoretical and empirical justifications for combining JADE (adaptive differential evolution with optional external archive) [42] with GMO (the geometric mean optimizer) to form the proposed hybrid algorithm. This design choice stemmed from our objective of achieving a robust balance between exploration and exploitation while minimizing extensive parameter tuning and avoiding premature convergence.

3.3.1. Adaptive Parameter Control in JADE

JADE is distinguished from classic differential evolution (DE) by its built-in adaptive mechanism for parameter tuning. Specifically, JADE updates the mutation factor ($F$) and crossover rate ($CR$) through a feedback process that relies on the performance of successful mutations. This adaptation reduces the dependency on manual parameter selection and enables JADE to effectively navigate different problem landscapes. As a result, JADE demonstrates an aptitude for refining candidate solutions around promising regions, which strengthens the exploitation phase of the search process.
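This feedback process can be sketched in a few lines, following the published JADE scheme: the means that generate $CR$ and $F$ drift toward the values that produced successful trial vectors. The function name and the learning-rate constant c = 0.1 here are illustrative choices, not taken from this paper.

```python
import numpy as np

def update_means(mu_cr, mu_f, s_cr, s_f, c=0.1):
    """Drift the sampling means toward parameter values that produced
    successful trials. mu_CR uses the arithmetic mean; mu_F uses the
    Lehmer mean, which biases toward larger, more explorative F."""
    if not s_cr:                 # no successful trials this generation: keep the means
        return mu_cr, mu_f
    mean_cr = float(np.mean(s_cr))
    lehmer_f = float(np.sum(np.square(s_f)) / np.sum(s_f))
    return (1 - c) * mu_cr + c * mean_cr, (1 - c) * mu_f + c * lehmer_f
```

For example, `update_means(0.5, 0.5, [0.9, 0.9], [1.0, 1.0])` moves the means to roughly (0.54, 0.55): generations whose successes used large parameters nudge future sampling in that direction.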

3.3.2. Diversity Preservation in GMO

GMO introduces geometric strategies aimed at maintaining a diverse set of solutions. Its mutation operators are designed to mitigate premature convergence by introducing higher levels of variation in the population. This mechanism is particularly beneficial in multi-modal and complex optimization problems where the risk of stagnation is significant. By preserving a broad search horizon, GMO ensures that the global search capability of JADEGMO remains potent.

3.3.3. Strengths and Synergy

Combining JADE and GMO leverages the complementary strengths of these two algorithms:
  • Adaptive exploitation (JADE): The adaptive parameter control in JADE helps fine-tune solutions once promising regions of the search space are located.
  • Enhanced exploration (GMO): GMO’s diverse mutation strategies promote continuous exploration across the search space, reducing the likelihood of local optima entrapment.
When integrated, JADE’s feedback-driven refinements and GMO’s diversity-oriented approach reinforce one another, forming a balanced hybrid that can adapt to both simple and complex problem landscapes.

3.3.4. Empirical Evidence from Preliminary Studies

Prior to finalizing the proposed hybrid algorithm, we conducted several pilot experiments to evaluate different potential pairings of evolutionary optimizers. In these preliminary studies, the JADEGMO combination consistently achieved superior performance in terms of convergence speed and solution quality. We attribute this efficacy to the effective interplay between JADE’s adaptive parameter adjustments and GMO’s population diversity mechanisms. These findings informed our decision to focus on the JADEGMO hybrid for the main experimental phase of this research.

4. Experimental Results and Testing

4.1. Overview of IEEE Congress on Evolutionary Computation (CEC2022)

The IEEE Congress on Evolutionary Computation (CEC) is a premier international event dedicated to the advancement and dissemination of research in evolutionary computation. Held annually, it brings together researchers, practitioners, and industry experts from diverse fields to present their latest findings, exchange ideas, and discuss emerging trends and challenges in the domain, covering topics such as genetic algorithms, swarm intelligence, evolutionary multi-objective optimization, and real-world applications of evolutionary computation. The CEC2022 benchmark suite used in this study (see Figure 2) comprises twelve test functions, F1–F12, spanning unimodal, basic multi-modal, hybrid, and composition problems.

4.2. Results of CEC2022

As can be seen in Table 2 and Table 3, JADEGMO consistently shows strong performance across a range of test functions. For instance, on functions F1, F2, F3, and F6, it achieves very low mean values and attains top-two overall ranks, reflecting its robustness in both unimodal and multi-modal landscapes. Notably, JADEGMO ranks first on F2, underscoring its ability to escape local optima and converge effectively under that function’s conditions. Even on more challenging problems, such as F4 or F7, JADEGMO’s rankings remain near the upper quartile, indicating that its adaptive differential evolution strategies enable it to maintain competitiveness against a wide array of powerful modern optimizers (e.g., MFO, CMAES, and L_SHADE).
In comparison with the other optimizers, JADEGMO often outperforms prominent metaheuristics including FLO, STOA, and SPBO, which frequently obtain larger mean values or rank lower. However, a few algorithms occasionally outperform JADEGMO on selected test functions (e.g., L_SHADE on F1 and F3), illustrating that no single optimizer is universally dominant. Nevertheless, JADEGMO’s consistently low standard deviations and standard errors demonstrate stable convergence behavior. Overall, the results highlight JADEGMO as a top-tier method, exhibiting both reliability and strong optimization capability across a diverse set of benchmark functions.

4.3. JADEGMO Convergence Diagram

In Figure 3 and Figure 4, the convergence curves of functions F3, F5, and F6 demonstrate a pattern characterized by an initial rapid decrease in the objective function value, followed by phases of stagnation. This behavior suggests that the JADEGMO algorithm encounters challenges in further refining the solution after achieving an initial improvement, which is indicative of complex landscapes with numerous local optima. The intermittent plateaus observed in these curves imply that the algorithm might benefit from enhanced diversification strategies to avoid premature convergence. Moreover, the gradual downward trend seen in the later stages highlights the need for adaptive parameter tuning mechanisms that can dynamically adjust the exploration–exploitation balance to sustain progress and achieve better final solutions.
The convergence curves for JADEGMO on functions F1 to F12 of the CEC2022 benchmark exhibit a general pattern of rapid initial improvement, which stabilizes as the iterations increase. Specifically, the sharp initial drop observed in the curves suggests that JADEGMO efficiently identifies regions of significant improvement early in the search process. Functions F1, F2, and F3 show very steep declines within the first few iterations, indicating that the algorithm quickly approaches near-optimal solutions. This is indicative of JADEGMO’s effectiveness in exploring and exploiting the search space for functions with simpler landscapes or fewer local optima.
As the function indices increase, particularly from F7 to F12, the convergence curves become gradually smoother after the initial drop, reflecting a slower rate of improvement as the algorithm fine-tunes the solutions in more complex problem landscapes. This variance across different function types highlights JADEGMO’s adaptive capability but also underscores challenges in handling functions with potentially deceptive or rugged landscapes.

Comparison of JADEGMO Convergence with Other Optimizers

In Figure 5 and Figure 6, JADEGMO demonstrates superior convergence characteristics compared to most of the other optimizers across multiple test functions. The convergence plots indicate that JADEGMO consistently achieves a rapid decline in the objective function value within the initial iterations, suggesting strong exploration capabilities. Unlike some other optimizers such as STOA and SPBO, which exhibit stagnation in certain iterations, JADEGMO maintains a smooth and steady decline, indicating a balanced trade-off between exploration and exploitation. For instance, in functions such as F12 and F11, JADEGMO reaches an optimal value significantly faster than TTHHO and Chimp, which show delayed improvements or premature convergence. This suggests that JADEGMO is more resilient in navigating complex landscapes, avoiding local minima more effectively. Additionally, its final optimized values remain competitive, often outperforming traditional optimizers like ROA and WOA.

4.4. JADEGMO Search History Diagram

In Figure 7 and Figure 8, the search history plots of JADEGMO on CEC2022 functions F1 to F12 demonstrate varied explorative and exploitative behaviors across two-dimensional search spaces. For F1 and F8, the algorithm disperses widely, suggesting a broad exploratory strategy, possibly due to flatter landscapes or deceptive optima. In contrast, F2, F3, and F4 exhibit dense clustering around the optimum, indicating strong local exploitation and possibly smoother landscapes. F6 shows more focused exploration, hinting at the presence of narrower global optima. F9 and F11 indicate targeted explorative behavior with notable spread, suggesting a mix of landscape features. Finally, F10 and F12 show a denser, centralized search pattern, reflecting effective exploitation amidst potentially challenging search landscapes with multiple local optima.

4.5. JADEGMO Average Fitness Diagram

In Figure 9 and Figure 10, the progression of the average fitness for JADEGMO from functions F1 to F12 on the CEC2022 benchmark demonstrates varied rates of convergence and stabilization. Functions F1, F2, and F4 display rapid initial decreases in average fitness, indicating a swift convergence towards better solutions in the early iterations. The steady decline through the subsequent iterations suggests continuous improvement, albeit at a decreasing rate, pointing towards the algorithm’s efficiency in refining solutions. In contrast, functions F5 and F7 exhibit a more gradual descent, potentially highlighting a more challenging landscape or slower optimization progress. Functions F8 and F9 show significant fluctuations, which may indicate the algorithm’s struggle with complex landscapes characterized by multiple local optima or steep gradients. Functions F10, F11, and F12 reveal a more consistent decrease after an initial steep drop, suggesting effective exploitation of the search space after rapidly escaping inferior regions.

4.6. JADEGMO Exploitation Diagram

In Figure 11 and Figure 12, the exploitation metrics of JADEGMO over the CEC2022 benchmark for functions F1 to F12 reveal significant insights into its performance dynamics. Initially, for all functions, there is a steep drop in the exploitation metric, indicating rapid convergence towards regions of higher exploitation in the search space. This is particularly evident in the initial iterations, where the algorithm aggressively exploits the potential solutions. As the iterations progress, the curves tend to flatten, suggesting a reduction in the rate of exploitation improvement. This behavior is consistent across all functions, with F1 and F2 showing a sharp initial decline before stabilizing, while functions like F5 and F11 display occasional spikes in later iterations, indicating intermittent explorations that potentially escape local optima.

4.7. JADEGMO Diversity Diagram

In Figure 13 and Figure 14, the hybrid JADEGMO algorithm’s behavior on the CEC2022 benchmark for functions F1 to F12 illustrates a pronounced trend in diversity decay across all dimensions, as observed in the standard deviation of the particles’ positions over iterations. The diversity within the swarm shows a steep drop within the initial iterations, reflecting a quick convergence towards regions of interest. This pattern is most prominent in the dimensions where the initial diversity is the highest; typically, these initial high values indicate exploration, where the algorithm tests a wide variety of possible solutions. As the iterations progress, the decrease in standard deviation across all dimensions signifies a shift from exploration to exploitation, narrowing the search to promising regions. This transition is visible across all test functions, suggesting the consistent performance characteristic of JADEGMO in maintaining sufficient exploration before converging, which is critical for avoiding local minima in complex optimization landscapes.

4.8. Box-Plot Analysis

In Figure 15 and Figure 16, the box plot analysis of the hybrid JADEGMO algorithm across various benchmark functions (F1 to F12) from the CEC2022 suite demonstrates a consistent performance with varied ranges of fitness scores. For simpler functions like F2 and F3, the box plots reveal tightly grouped fitness scores, indicating a robust optimization capability with minimal deviation among the runs. In contrast, more complex functions such as F6 and F12 display wider interquartile ranges and outliers, suggesting challenges in achieving consistent optima across runs. Functions like F4 and F5 show minimal outliers, implying that the algorithm manages to maintain stability across these problem landscapes.

4.9. JADEGMO Heatmap Analysis

In Figure 17 and Figure 18, the sensitivity analysis heatmaps for the JADEGMO algorithm for functions F1 to F12 within the CEC2022 benchmark suite provide insight into the algorithm’s performance sensitivity relative to the number of search agents and maximum iterations. Notably, the performance tends to stabilize with an increasing number of search agents and iterations, as evidenced by narrower color gradients in the heatmaps for higher values. For example, F1 shows drastic changes at lower numbers of search agents but stabilizes as iterations increase. In contrast, functions like F6 display extreme sensitivity across both dimensions, with significant performance variation even at higher numbers of iterations. Functions such as F12 demonstrate minor variations across different setups, indicating robustness against changes in the number of agents and iterations.

4.10. JADEGMO Histogram Analysis

In Figure 19 and Figure 20, we can observe that JADEGMO yields consistently favorable outcomes across the 12 benchmark functions (F1–F12) of the CEC2022 suite. In particular, for functions such as F1, F2, F3, and F5, the final fitness values cluster tightly around a relatively low region, suggesting both high solution quality and low variance. This tight clustering indicates that JADEGMO not only converges to promising minima but does so reliably across multiple runs. On the moderately more difficult cases, for example, F4 and F7, the histograms still show a unimodal or near-unimodal pattern around a good fitness level, reinforcing the stability of the algorithm’s performance.
In the more challenging functions (e.g., F6, F9, and F10), the histograms widen somewhat, reflecting a larger spread in the final solutions. Nonetheless, JADEGMO still produces predominantly low fitness values, indicating that while the search space may be more complex, the algorithm consistently locates promising regions. Overall, these histograms underscore JADEGMO’s robustness in handling a variety of CEC2022 functions under constrained function evaluations. Its ability to converge with comparatively small variance highlights the effectiveness of its adaptive strategies in maintaining a balance between exploration and exploitation.

5. Application of JADEGMO in Multi-Cloud Security Configuration

Cloud computing has evolved into a dominant paradigm for delivering scalable and on-demand IT services to organizations of all sizes. With the ability to spin up virtual machines, containers, serverless functions, and specialized software stacks at will, businesses can adapt much more efficiently to fluctuating computational demands. However, the popularity of cloud environments has also led to increasingly complex security requirements. As organizations embrace hybrid and multi-cloud strategies—where resources are distributed across different providers, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP)—the challenge of managing security configurations grows in scope and difficulty.
Traditionally, security in cloud environments involves configuring firewalls, identity and access management (IAM) policies, encryption schemes, patching routines, and specialized threat detection. Yet, the interplay of these measures can be intricate. For instance, enabling stronger encryption can drastically reduce certain vulnerabilities at the cost of higher computational overhead. Restricting the number of open ports can limit an attack surface but also degrade performance for valid use cases. Adding advanced security subscriptions (e.g., analytics modules or auto-patching) may reduce risk but at an increased financial or performance expense. These push-and-pull relationships motivate the need to systematically find the best compromise between risk mitigation and operational cost.
Metaheuristic algorithms, known for tackling computationally challenging optimization tasks in a wide range of fields, offer a promising approach. They excel at exploring high-dimensional or nonlinear search spaces while avoiding local minima by design. However, no single metaheuristic consistently outperforms all others in every context, as articulated by the “No Free Lunch” theorem [43]. This reality underscores the importance of algorithmic comparison and problem-specific tuning, supported by a robust experimental methodology.
We solved a multi-cloud security configuration problem with three major cloud platforms. Each platform had five discrete control parameters, leading to a 15-dimensional decision space. Our goal was to minimize an enhanced security-cost function that merged various risk and cost components. The main contributions are as follows:
Problem formulation: We designed an objective function that models baseline risk, port usage trade-offs, encryption overhead, privilege-level effects, and subscription-based features. The function ensures no trivial zero-cost solution exists, forcing nontrivial exploration.
Empirical evaluation: We compared JADEGMO with six other well-known or newly proposed metaheuristics over 30 runs each. We recorded standard and advanced statistics, visualized convergence, conducted rank analyses, and provide a detailed discussion of performance.

6. Problem Definition

6.1. Multi-Cloud Configuration Parameters

We consider a scenario involving three primary cloud platforms, each with five parameters that control security-related settings. This results in a 15-dimensional vector representing the entire problem space, as shown in Equation (11):
$$\mathbf{x} = \left( x_{1,1}, x_{1,2}, x_{1,3}, x_{1,4}, x_{1,5},\; x_{2,1}, x_{2,2}, x_{2,3}, x_{2,4}, x_{2,5},\; x_{3,1}, x_{3,2}, x_{3,3}, x_{3,4}, x_{3,5} \right), \tag{11}$$
where indices 1, 2, and 3 correspond to AWS, Azure, and GCP, respectively. The five parameters for each platform are as follows: encryption level ($0 \le x_{i,1} \le 3$, integer), number of open ports ($0 \le x_{i,2} \le 10$, integer), privilege level ($0 \le x_{i,3} \le 5$, integer), advanced analytics subscription ($x_{i,4} \in \{0,1\}$, binary), and auto-patching subscription ($x_{i,5} \in \{0,1\}$, binary). Even when real-coded optimizers generate continuous values, each parameter was rounded and clamped within the function evaluation to ensure validity. Equation (11) encapsulates the configuration vector for this multi-cloud setup.
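The rounding-and-clamping repair described above can be sketched as follows; the `decode` function and bound arrays are our own naming, not taken from the original implementation:

```python
import numpy as np

# Per-cloud bounds: encryption (0-3), open ports (0-10), privilege (0-5),
# and two binary subscription toggles, repeated for AWS, Azure, and GCP.
LOWER = np.array([0, 0, 0, 0, 0] * 3)
UPPER = np.array([3, 10, 5, 1, 1] * 3)

def decode(x):
    """Round a real-coded 15-dimensional vector to the nearest integers
    and clamp into the valid ranges; one row per cloud platform."""
    xi = np.rint(np.asarray(x, dtype=float)).astype(int)
    return np.clip(xi, LOWER, UPPER).reshape(3, 5)

# Out-of-range continuous values are repaired rather than rejected:
repaired = decode([3.7, 11.2, -0.4, 0.6, 1.9] * 3)
```

This keeps the search space continuous for real-coded optimizers while guaranteeing that every evaluated configuration is feasible.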

6.2. Enhanced Security-Cost Function

One of the contributions of this study is an enhanced cost function that discourages trivial or naive solutions. The cost for each cloud was computed through multiple terms:

6.2.1. Baseline Risk

A minimal level of risk is always present. We chose a baseline of 5 units of risk.

6.2.2. Port Risk

An excessively high number of open ports increases the attack surface, while too few ports may hamper legitimate operations. We thus imposed a penalty for fewer than 2 ports and for more than 5 ports, as shown in Equation (12):
$$\text{portRisk} = \begin{cases} 2 \times (2 - \text{openPorts}), & \text{if } \text{openPorts} < 2, \\ 1, & \text{if } 2 \le \text{openPorts} \le 5, \\ 3 \times (\text{openPorts} - 5), & \text{if } \text{openPorts} > 5. \end{cases} \tag{12}$$

6.2.3. Privilege-Level Risk

The privilege-level risk was modeled nonlinearly to reflect the steep rise in danger as privileges accumulate. If p is the privilege level, we use Equation (13):
$$\text{privRisk} = p^{1.5}. \tag{13}$$

6.2.4. Encryption Effects

Encryption can drastically lower risk, but at a resource and administrative overhead. If e is the encryption level (0 to 3), the risk and overhead pair is given by Equation (14):
$$(\text{encRisk}, \text{encOverheadCost}) = \begin{cases} (20, 0), & \text{if } e = 0, \\ (10, 2), & \text{if } e = 1, \\ (5, 5), & \text{if } e = 2, \\ (2, 10), & \text{if } e = 3. \end{cases} \tag{14}$$
Higher encryption levels decrease residual risk but increase overhead cost.

6.2.5. Subscriptions

Two binary options are available:
$\text{advAnalyticsSub}$ and $\text{autoPatchingSub}$.
If advAnalyticsSub = 1 , we reduce the risk by 5 and add a subscription cost of 15. If autoPatchingSub = 1 , we reduce the risk by 3 and add a subscription cost of 8.

6.2.6. Aggregation

Summing these contributions, and subtracting the risk reduction contributed by the subscriptions, yields a partial risk:
$$\text{partialRisk} = \text{baselineRisk} + \text{portRisk} + \text{privRisk} + \text{encRisk} - \text{subscriptionRiskReduction}.$$
We then impose a residual floor:
$$\text{finalRisk} = \max(1, \text{partialRisk}).$$
The final cost for a single cloud is
$$\text{cloudCost} = \text{finalRisk} + \text{encOverheadCost} + \text{subscriptionCost}.$$
Finally, as shown in Equation (15), the cost across all three clouds is
$$\text{totalCost}(\mathbf{x}) = \sum_{i=1}^{3} \text{cloudCost}(x_{i,1}, x_{i,2}, \ldots, x_{i,5}). \tag{15}$$
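A direct translation of Equations (12)–(15) into code may clarify how the terms combine. The function names and argument order below are our own sketch of the evaluation; the constants are those stated in the text:

```python
def cloud_cost(enc, ports, priv, analytics, patching):
    """Per-cloud security cost: baseline, port, privilege, and encryption
    risk, minus subscription risk reduction, plus overhead and fees."""
    baseline = 5
    # Port risk (Equation (12)): penalize < 2 ports (functionality)
    # and > 5 ports (attack surface).
    if ports < 2:
        port_risk = 2 * (2 - ports)
    elif ports <= 5:
        port_risk = 1
    else:
        port_risk = 3 * (ports - 5)
    priv_risk = priv ** 1.5                    # Equation (13): superlinear
    enc_table = {0: (20, 0), 1: (10, 2), 2: (5, 5), 3: (2, 10)}
    enc_risk, enc_overhead = enc_table[enc]    # Equation (14)
    risk_reduction = 5 * analytics + 3 * patching
    sub_cost = 15 * analytics + 8 * patching
    partial = baseline + port_risk + priv_risk + enc_risk - risk_reduction
    final_risk = max(1, partial)               # residual risk floor
    return final_risk + enc_overhead + sub_cost

def total_cost(x):
    """Equation (15): sum the per-cloud cost over the three platforms,
    given a flat 15-element configuration vector."""
    return sum(cloud_cost(*x[i:i + 5]) for i in range(0, 15, 5))
```

For example, an all-zero configuration (no encryption, no ports, no subscriptions) incurs the full encryption risk of 20 plus the closed-port penalty on every cloud, so it is far from free; this is precisely the nontriviality the formulation aims for.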

6.3. Why This Formulation Is Nontrivial

The above cost function has multiple opposing forces. Decreasing open ports beyond a certain threshold triggers a functionality penalty, while opening more than five ports triggers a security penalty. Selecting the highest encryption level eliminates most of the risk but imposes a large overhead. Subscribing to advanced features helps reduce the risk further, yet it inflates costs. Privilege-level risk grows faster than linearly. By carefully balancing these factors, we ensure there is no single, obvious “best” extreme solution (e.g., everything turned off or on), and metaheuristics must explore a nuanced configuration space.

7. Methodology and Experimental Setup

We employed consistent settings across all algorithms: a population size of 20 search agents, a problem dimension of 15 (5 parameters per cloud multiplied by 3 clouds), a maximum of 200 iterations, and 30 independent runs per algorithm. For each run, we recorded the best solution (lowest cost) found by the algorithm, the convergence curve of the best objective value over the iterations, and the execution time. Overall, the dataset comprised a total of 30 × 7 = 210 runs.
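The protocol above can be expressed as a small driver loop. Here `optimize` is a placeholder for any of the seven metaheuristics, and its signature is our assumption, not the authors' interface:

```python
import numpy as np

def run_experiment(optimize, objective, runs=30, agents=20, dim=15,
                   iters=200, seed=0):
    """Run one optimizer `runs` times independently, recording the best
    cost and the per-iteration convergence curve of each run."""
    rng = np.random.default_rng(seed)
    best_costs, curves = [], []
    for _ in range(runs):
        best, curve = optimize(objective, agents, dim, iters, rng)
        best_costs.append(best)
        curves.append(curve)
    # Shapes: (runs,) best costs and (runs, iters) convergence curves.
    return np.array(best_costs), np.array(curves)
```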

Statistical Analysis

After collecting all runs, extended statistical metrics were computed for each optimizer to evaluate their performance comprehensively. These metrics included the single best cost (minimum) and the worst cost (maximum) found across all runs, as well as the mean and median best costs to represent central tendencies. Dispersion was analyzed using the standard deviation and the interquartile range (IQR), which captures the range between the 75th and 25th percentiles of the best costs. Additionally, the mean and standard deviation of the execution times per run (MeanTime and StdTime) were calculated. To compare algorithms, an average rank was determined by ranking them from 1 to 7 based on the final best cost in each of the 30 runs, with ties assigned an average rank. These ranks were then averaged across runs to provide an overall performance indicator. The average rank is particularly insightful as it highlights an algorithm’s general performance and consistency, with a lower rank indicating stronger or more reliable outcomes.
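The tie-aware average rank described above can be computed as follows; this is a minimal sketch in which `rank_with_ties` replicates the "ties share the average rank" rule without external dependencies:

```python
import numpy as np

def rank_with_ties(costs):
    """Rank one run's final costs from 1 (best) upward; tied costs all
    receive the average of the rank positions they span."""
    costs = np.asarray(costs, dtype=float)
    order = np.argsort(costs, kind="stable")
    ranks = np.empty(len(costs))
    i = 0
    while i < len(costs):
        j = i
        while j + 1 < len(costs) and costs[order[j + 1]] == costs[order[i]]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # average of 1-based positions
        i = j + 1
    return ranks

def average_ranks(best_costs):
    """best_costs: (runs, n_algorithms) array of final best costs.
    Returns the mean rank per algorithm; lower indicates stronger results."""
    return np.vstack([rank_with_ties(run) for run in best_costs]).mean(axis=0)
```

For instance, final costs `[3, 1, 1, 2]` in one run yield ranks `[4, 1.5, 1.5, 3]`, so two algorithms tied for best each receive rank 1.5.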

8. Multi-Cloud Configuration Parameters Results

Table 4 displays the aggregated statistics over 30 runs for each of the seven algorithms. JADEGMO emerges as the top contender, achieving a minimum best cost of 42, a mean best cost near 47.47, and the most favorable average rank (1.40). SCA places second in average rank, while algorithms like HLOA, MVO, and WOA lie in the middle range with slightly higher costs and/or greater variability. FLO and AOA generally yield even higher final costs.
Figure 21 shows the best convergence curves for each algorithm selected from the run that yielded the lowest final cost for that method. JADEGMO converges quickly in the early iterations and continues to refine its solution. Notably, MVO’s best run shows an aggressive improvement early on but eventually levels out at a higher cost. FLO tends to plateau as well, while SCA demonstrates generally smooth and effective convergence to a moderate cost.
Figure 22 presents a box plot of the final best costs across all 30 runs. JADEGMO displays a comparatively lower median and a tight interquartile range. SCA’s distribution is also relatively compact but shifted upward. The others, including HLOA, MVO, WOA, FLO, and AOA, exhibit noticeably wider spreads or higher medians.
In Figure 23, we show the average ranks for each algorithm across the 30 runs. JADEGMO’s rank is roughly 1.40, indicating that it often achieves the best or near-best result. SCA is next-best ranked at around 2.35. HLOA, MVO, and WOA cluster around ranks of four to five, whereas FLO and AOA near or exceed an average rank of five. The grouped bar chart in Figure 24 further illustrates the best, mean, and median cost values per optimizer.
As can be seen in Table 4, our proposed JADEGMO strategy outperforms the other algorithms by a clear margin, whether judged by best solution, mean performance, median performance, or ranking metrics. SCA emerges as the second strongest competitor, though it rarely achieves results on par with JADEGMO.
From a practical viewpoint, in a multi-cloud environment, consistently obtaining lower-cost solutions means discovering security configurations that remain robust to a variety of threats while incurring lower subscription costs and overhead. JADEGMO’s hybrid structure—which blends adaptive parameter updates (from JADE) with GMO’s elite-guided velocity updates for diversity—is likely key in achieving these solutions. The early rapid improvements in the convergence curve suggest that the algorithm quickly identifies and locks onto promising configurations, which subsequent iterations refine further.
The box plots and standard deviation values reveal important information about stability. Algorithms that have wide cost distributions or large standard deviations (e.g., MVO, HLOA) produce excellent solutions in certain runs but fail to converge in other runs, resulting in worse final costs. JADEGMO’s tight distribution underscores its reliability. In security-critical contexts where consistent results matter more than best-case solutions, such reliability is paramount.
One interesting observation is how the algorithms handle the subscription toggles. Including advanced analytics (−5 risk, cost 15) or auto-patching (−3 risk, cost 8) can drastically change the partial risk. Some solutions with no subscriptions attempt to compensate with high encryption or minimal open ports, but these might raise overhead or functionality penalties. The best approach usually involves a carefully balanced combination, often turning on at least one subscription, selecting moderate encryption, and keeping open ports in a safe yet functional range.
The presence of a minimum risk floor (set to one in our formula) ensures that solutions cannot trivialize the objective by piling on every risk reduction measure for free. The overhead and subscription costs become the limiting factors. In real settings, organizations must weigh the intangible benefits of advanced services or patching automation (e.g., reduced labor). Our cost model performs an approximate job of capturing these aspects, but future expansions could consider a more nuanced subscription synergy.

9. Conclusions

We introduced JADEGMO, a hybrid optimization framework designed to blend JADE’s adaptive memory-driven evolutionary mechanisms with GMO’s elite-guided velocity-based exploration. Through extensive comparisons on the CEC2022 and CEC2017 benchmark problems, JADEGMO demonstrated robust convergence properties, achieving best-in-class performance on many test functions and exhibiting minimal variance across multiple runs. The convergence curves and rank analyses consistently ranked JADEGMO at or near the top, confirming that adaptive parameter tuning plus swarm-inspired guidance yields strong outcomes.
To validate its real-world relevance, we applied JADEGMO to a multi-cloud security configuration problem, constructing a novel cost function that balances security risk, overhead costs, and subscription-based features. The results showed that JADEGMO navigated these conflicting objectives effectively, discovering configurations that lower risk without incurring excessive expenses. This highlights the algorithm’s adaptability and potential as a decision-support tool in modern cloud infrastructures. Future work can extend JADEGMO to higher dimensions, dynamic security scenarios, and multi-objective formulations, broadening its applicability in both industry and research contexts.

Author Contributions

Conceptualization, H.N.F.; methodology, A.K.A.H. and H.N.F.; formal analysis, A.K.A.H. and H.N.F.; resources, A.K.A.H.; writing—original draft, A.K.A.H. and H.N.F.; review and editing, A.K.A.H. and H.N.F. All authors have read and agreed to the published version of this manuscript.

Funding

This research is funded by the Security Management Technology Group (SMT).

Data Availability Statement

Data are contained within this article.

Acknowledgments

We thank Samir M. Abu Tahoun, Security Management Technology Group (SMT) (http://www.smtgroup.org/, accessed on 1 July 2024), for supporting our research project.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fakhouri, H.N.; Awaysheh, F.M.; Alawadi, S.; Alkhalaileh, M.; Hamad, F. Four vector intelligent metaheuristic for data optimization. Computing 2024, 106, 2321–2359. [Google Scholar]
  2. Blum, C.; Roli, A. Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM Comput. Surv. 2003, 35, 268–308. [Google Scholar] [CrossRef]
  3. Gogna, A.; Tayal, A. Metaheuristics: Review and application. J. Exp. Theor. Artif. Intell. 2013, 25, 503–526. [Google Scholar] [CrossRef]
  4. Panda, S.K.; Jana, P.K. Efficient task scheduling algorithms for heterogeneous multi-cloud environment. J. Supercomput. 2015, 71, 1505–1533. [Google Scholar]
  5. Hwaitat, A.K.A.; Fakhouri, H.N. The OX optimizer: A novel optimization algorithm and its application in enhancing support vector machine performance for attack detection. Symmetry 2024, 16, 966. [Google Scholar] [CrossRef]
  6. Dubey, M.; Singh, K. Multi-Cloud Management Strategies—A Comprehensive Review. Res. Rev. J. Embed. Syst. Appl. 2019, 9, 289–299. [Google Scholar]
  7. Sekar, J. Multi-Cloud Strategies for Distributed AI Workflows and Application. J. Emerg. Technol. Innov. Res. 2023, 10, 600–610. [Google Scholar]
  8. Duncan, R. A multi-cloud world requires a multi-cloud security approach. Comput. Fraud. Secur. 2020, 2020, 11–12. [Google Scholar] [CrossRef]
  9. Salhi, S. Heuristic Search Methods; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 1998. [Google Scholar]
  10. Shanmugapriya, M.; Manivannan, K.K. Compare the Performance of Meta-Heuristics Algorithm: A Review. In Metaheuristics Algorithm and Optimization of Engineering and Complex Systems; IGI Global: Hershey, PA, USA, 2024; pp. 242–253. [Google Scholar]
  11. Fakhouri, H.N.; Alawadi, S.; Awaysheh, F.M.; Hamad, F. Novel hybrid success history intelligent optimizer with Gaussian transformation: Application in CNN hyperparameter tuning. Clust. Comput. 2024, 27, 3717–3739. [Google Scholar]
  12. Fakhouri, H.N.; Alawadi, S.; Awaysheh, F.M.; Hani, I.B.; Alkhalaileh, M.; Hamad, F. A comprehensive study on the role of machine learning in 5G security: Challenges, technologies, and solutions. Electronics 2023, 12, 4604. [Google Scholar] [CrossRef]
  13. Shirvani, M.H. Web Service Composition in multi-cloud environment: A bi-objective genetic optimization algorithm. In Proceedings of the 2018 IEEE (SMC) International Conference on Innovations in Intelligent Systems and Applications (INISTA), Thessaloniki, Greece, 3–5 July 2018. [Google Scholar]
  14. Amirthayogam, G.; Ananth, C.A.; Elango, P. QOS Aware Web Services Composition Problem in Multi-Cloud Environment Using Hybrid Optimization Algorithm. J. Theor. Appl. Inf. Technol. 2022, 100, 5562–5577. [Google Scholar]
  15. Wang, Y.; Jiang, J.; Xia, Y.; Wu, Q.; Luo, X.; Zhu, Q. A multi-stage dynamic game-theoretic approach for multi-workflow scheduling on heterogeneous virtual machines from multiple infrastructure-as-a-service clouds. Lect. Notes Comput. Sci. 2018, 10969, 137–152. [Google Scholar]
  16. Toinard, C.; Ravier, T.; Cérin, C.; Ngoko, Y. The PROMETHEE Method for Cloud Brokering with Trust and Assurance Criteria. In Proceedings of the 2015 IEEE 29th International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Hyderabad, India, 25–29 May 2015; pp. 1109–1118. [Google Scholar]
  17. Díaz, J.L.; Entrialgo, J.; García, M.; García, J.; García, D.F. Optimal allocation of virtual machines in multi-cloud environments with reserved and on-demand pricing. Future Gener. Comput. Syst. 2017, 71, 129–144. [Google Scholar]
  18. Peng, G.; Qiu, M. Cost Minimization for Music Uploading to a Cloudlet. In Proceedings of the 2020 7th IEEE International Conference on Cyber Security and Cloud Computing and 2020 6th IEEE International Conference on Edge Computing and Scalable Cloud (CSCloud-EdgeCom), New York, NY, USA, 1–3 August 2020; pp. 168–173. [Google Scholar]
  19. Kaur, R.; Anand, D.; Kaur, U.; Verma, S. Analysis and Evaluation of Bio-Inspired Algorithmic Framework, Potential Application in Cloud/Multi-Cloud Environment. In Proceedings of the 5th IEEE International Conference on Cybernetics, Cognition and Machine Learning Applications (ICCCMLA), Hamburg, Germany, 7–8 October 2023; pp. 354–361. [Google Scholar] [CrossRef]
  20. Cui, J.; Chen, P.; Yu, G. A learning-based dynamic load balancing approach for microservice systems in multi-cloud environment. In Proceedings of the International Conference on Parallel and Distributed Systems (ICPADS), Hong Kong, 2–4 December 2020; pp. 334–341. [Google Scholar]
  21. Casola, V.; Benedictis, A.D.; Rak, M.; Villano, U. Security-by-design in multi-cloud applications: An optimization approach. Inf. Sci. 2018, 454–455, 344–362. [Google Scholar] [CrossRef]
  22. John, J.C.; Sural, S.; Gupta, A. Optimal Rule Mining for Dynamic Authorization Management in Collaborating Clouds Using Attribute-Based Access Control. In Proceedings of the 2017 IEEE International Conference on Cloud Computing (CLOUD), Honolulu, HI, USA, 25–30 June 2017; pp. 739–742. [Google Scholar] [CrossRef]
  23. Yang, J.; Zhu, H.; Liu, T. Secure and economical multi-cloud storage policy with NSGA-II-C. Appl. Soft Comput. 2019, 83, 105649. [Google Scholar] [CrossRef]
  24. Ramamurthy, A.; Saurabh, S.; Gharote, M.; Lodha, S. Selection of cloud service providers for hosting web applications in a multi-cloud environment. In Proceedings of the 2020 IEEE 13th International Conference on Services Computing (SCC), Beijing, China, 18–24 October 2020; pp. 202–209. [Google Scholar]
  25. Pandey, A.; Calyam, P.; Lyu, Z.; Wang, S.; Chemodanov, D.; Joshi, T. Knowledge-Engineered Multi-Cloud Resource Brokering for Application Workflow Optimization. IEEE Trans. Netw. Serv. Manag. 2023, 20, 3072–3088. [Google Scholar] [CrossRef]
  26. Addya, S.K.; Satpathy, A.; Chakraborty, S.; Ghosh, S.K. Optimal VM Coalition for Multi-Tier Applications over Multi-Cloud Broker Environments. In Proceedings of the 2019 11th International Conference on Communication Systems & Networks (COMSNETS), Bangalore, India, 7–11 January 2019; pp. 141–148. [Google Scholar]
  27. Altahat, M.A.; Daradkeh, T.; Agarwal, A. Optimized Encryption-Integrated Strategy for Containers Scheduling and Secure Migration in Multi-Cloud Data Centers. IEEE Access 2024, 12, 51330–51345. [Google Scholar] [CrossRef]
  28. Halchenko, Y.O.; Meyer, K.; Poldrack, B.; Solanky, D.S.; Wagner, A.S.; Gors, J.; MacFarlane, D.; Pustina, D.; Sochat, V.; Ghosh, S.S.; et al. DataLad: Distributed system for joint management of code, data, and their relationship. J. Open Source Softw. 2021, 6, 3262. [Google Scholar]
  29. Jawade, P.B.; Ramachandram, S. Task scheduling in multi-cloud environment via improved optimisation theory. Int. J. Wirel. Mob. Comput. 2024, 27, 64–77. [Google Scholar] [CrossRef]
  30. Jawade, P.B.; Ramachandram, S. DAGWO based secure task scheduling in Multi-Cloud environment with risk probability. Multimed. Tools Appl. 2024, 83, 2527–2550. [Google Scholar]
  31. Massonet, P.; Luna, J.; Pannetrat, A.; Trapero, R. Idea: Optimising multi-cloud deployments with security controls as constraints. Lect. Notes Comput. Sci. 2015, 8978, 102–110. [Google Scholar]
  32. Jogdand, R.M.; Goudar, R.H.; Sayed, G.B.; Dhamanekar, P.B. Enabling public verifiability and availability for secure data storage in cloud computing. Evol. Intell. 2015, 6, 55–65. [Google Scholar] [CrossRef]
  33. Senyo, P.K.; Addae, E.; Boateng, R. Cloud computing research: A review of research themes, frameworks, methods, and future research directions. Int. J. Inf. Manag. 2018, 38, 128–139. [Google Scholar] [CrossRef]
  34. Shahidinejad, A.; Ghobaei-Arani, M. A metaheuristic-based computation offloading in edge-cloud environment. J. Ambient. Intell. Humaniz. Comput. 2022, 13, 2785–2794. [Google Scholar] [CrossRef]
  35. Al-Dabbagh, R.D.; Neri, F.; Idris, N.; Baba, M.S. Algorithmic design issues in adaptive differential evolution schemes: Review and taxonomy. Swarm Evol. Comput. 2018, 43, 284–311. [Google Scholar] [CrossRef]
  36. Rauf, H.T.; Bangyal, W.H.K.; Lali, M.I. An adaptive hybrid differential evolution algorithm for continuous optimization and classification problems. Neural Comput. Appl. 2021, 33, 10841–10867. [Google Scholar] [CrossRef]
  37. Huang, P.-Q.; Wang, Y.; Wang, K.; Yang, K. Differential evolution with a variable population size for deployment optimization in a UAV-assisted IoT data collection system. IEEE Trans. Emerg. Top. Comput. Intell. 2019, 4, 324–335. [Google Scholar] [CrossRef]
  38. Fotovatikhah, F.; Herrera, M.; Shamshirband, S.; Chau, K.-W.; Ardabili, S.F.; Piran, M.J. Survey of computational intelligence as basis to big flood management: Challenges, research directions and future work. Eng. Appl. Comput. Fluid Mech. 2018, 12, 411–437. [Google Scholar] [CrossRef]
  39. Such, F.P.; Madhavan, V.; Conti, E.; Lehman, J.; Stanley, K.O.; Clune, J. Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. arXiv 2017, arXiv:1712.06567. [Google Scholar]
  40. Heng, L.; Yin, G.; Zhao, X. Energy aware cloud-edge service placement approaches in the Internet of Things communications. Int. J. Commun. Syst. 2022, 35, e4899. [Google Scholar] [CrossRef]
  41. Asim, M.; Wang, Y.; Wang, K.; Huang, P.-Q. A review on computational intelligence techniques in cloud and edge computing. IEEE Trans. Emerg. Top. Comput. Intell. 2020, 4, 742–763. [Google Scholar]
  42. Zhang, Z.; Zhu, J.; Nie, F. A novel hybrid adaptive differential evolution for global optimization. Sci. Rep. 2024, 14, 19697. [Google Scholar]
  43. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar]
Figure 1. Hybrid JADEGMO flowchart.
Figure 2. Illustration of the CEC2022 benchmark functions (F1–F6).
Figure 3. Convergence curve analysis for CEC2022 benchmark functions (F1–F6).
Figure 4. Convergence curve analysis for CEC2022 benchmark functions (F7–F12).
Figure 5. Convergence curve analysis for CEC2022 benchmark functions of all optimizers (F1–F6).
Figure 6. Convergence curve analysis for CEC2022 benchmark functions of all optimizers (F7–F12).
Figure 7. Search history analysis for CEC2022 (F1–F6).
Figure 8. Search history analysis for CEC2022 (F7–F12).
Figure 9. Average fitness analysis for CEC2022 (F1–F6).
Figure 10. Average fitness analysis for CEC2022 (F7–F12).
Figure 11. Exploratory analysis for CEC2022 (F1–F6).
Figure 12. Exploratory analysis for CEC2022 (F7–F12).
Figure 13. Diversity analysis for CEC2022 (F1–F6).
Figure 14. Diversity analysis for CEC2022 (F7–F12).
Figure 15. Box plot analysis for CEC2022 (F1–F6).
Figure 16. Box plot analysis for CEC2022 (F7–F12).
Figure 17. Sensitivity analysis for CEC2022 (F1–F6).
Figure 18. Sensitivity analysis for CEC2022 (F7–F12).
Figure 19. Histogram analysis for CEC2022 (F1–F6).
Figure 20. Histogram analysis for CEC2022 (F7–F12).
Figure 21. Best convergence curves for each optimizer.
Figure 22. Distribution of best scores over 30 runs (box plot).
Figure 23. Average rank (Friedman-like) across 30 runs.
Figure 24. Grouped bar chart of best, mean, and median scores.
Table 1. Comparative summary of the key literature on multi-cloud environment strategies.
Reference | Multi-Cloud Focus | Approach/Methodology | Key Contributions/Findings | Limitations/Gaps
[27] | Container orchestration in multi-cloud data centers | Optimized encryption-integrated container scheduling framework | Balances server load, minimizes energy consumption, considers encryption cost in scheduling | Limited exploration of broader CSP-level interoperability; mainly focused on container-based deployments
[28] | Data security and privacy in distributed systems | Various methodologies for information flow containment and multi-cloud data access | Early emphasis on trust and collaboration; advocates integrated security policies | Limited real-world validation in large-scale multi-cloud contexts; primarily conceptual frameworks
[29,30] | Secure task scheduling among multiple CSPs | Hybrid Dragon Aided Grey Wolf Optimization (DAGWO) | Optimizes task allocation under security constraints; reduces makespan, execution time, and utilization cost | Primarily tested on limited-scale scenarios; does not extensively address multi-objective trade-offs (e.g., compliance, QoS)
[31] | Secure multi-cloud deployments based on CSP capabilities | Security control frameworks integrated into the deployment process | Emphasizes industrial security standards; models security requirements as constraints; generates optimal deployment plans | Lacks detailed cost-performance trade-off analysis; security constraints not extensively tested in diverse scenarios
[32] | Data integrity and availability in multi-cloud storage | Combines the Merkle Hash Tree with the DepSky model | Ensures robust verification and availability; highlights inter-cloud collaboration benefits | Limited focus on cost or performance metrics; assumes minimal variation in CSP reliability
[33] | General cloud computing models and resource allocation | Conference proceedings covering dynamic resource allocation and security mechanisms | Demonstrates the need for secure and dynamic solutions; encourages innovative models for industry adoption | Mostly position papers and preliminary studies; lacks integrated frameworks addressing the full multi-cloud scope
Table 2. Statistical results of CEC2022 (F1–F12) with FES = 1000 and 30 independent runs.
Function | Measure | JADEGMO | FLO | STOA | SOA | SPBO | AO | SSOA | TTHHO | Chimp | CPO | ROA | GWO
F1 | Mean | 342.57 | 8756.60 | 1093.04 | 1733.13 | 33,543.35 | 1995.92 | 13,807.63 | 621.32 | 2594.40 | 5562.31 | 8657.79 | 2041.67
F1 | Std | 18.20 | 1080.52 | 689.26 | 1752.46 | 7395.27 | 1887.49 | 3740.11 | 183.93 | 1094.81 | 7007.70 | 1718.68 | 1736.33
F1 | SEM | 8.14 | 483.22 | 308.25 | 783.72 | 3307.26 | 844.11 | 1672.63 | 82.26 | 489.62 | 3133.94 | 768.62 | 776.51
F1 | Rank | 2 | 17 | 5 | 7 | 23 | 9 | 20 | 4 | 11 | 15 | 16 | 10
F2 | Mean | 407.71 | 1775.03 | 441.96 | 441.97 | 1070.92 | 418.87 | 1603.85 | 430.46 | 613.40 | 455.41 | 715.50 | 442.50
F2 | Std | 1.66 | 386.12 | 22.48 | 30.99 | 137.51 | 19.76 | 234.73 | 34.76 | 93.82 | 46.35 | 171.50 | 28.40
F2 | SEM | 0.74 | 172.68 | 10.05 | 13.86 | 61.50 | 8.84 | 104.97 | 15.55 | 41.96 | 20.73 | 76.70 | 12.70
F2 | Rank | 1 | 23 | 8 | 9 | 21 | 3 | 22 | 7 | 17 | 13 | 20 | 10
F3 | Mean | 600.03 | 645.37 | 609.29 | 608.96 | 675.60 | 615.58 | 661.38 | 636.47 | 639.75 | 649.02 | 646.70 | 601.44
F3 | Std | 0.02 | 11.24 | 6.93 | 2.89 | 4.90 | 7.36 | 9.89 | 13.07 | 7.83 | 13.77 | 19.50 | 1.54
F3 | SEM | 0.01 | 5.03 | 3.10 | 1.29 | 2.19 | 3.29 | 4.42 | 5.85 | 3.50 | 6.16 | 8.72 | 0.69
F3 | Rank | 2 | 19 | 7 | 6 | 23 | 8 | 22 | 17 | 18 | 21 | 20 | 4
F4 | Mean | 821.80 | 851.75 | 822.96 | 825.12 | 901.05 | 821.87 | 867.06 | 828.42 | 837.26 | 832.44 | 850.84 | 814.49
F4 | Std | 3.04 | 5.48 | 7.65 | 6.95 | 14.58 | 7.79 | 6.01 | 6.83 | 7.01 | 0.54 | 11.06 | 2.60
F4 | SEM | 1.36 | 2.45 | 3.42 | 3.11 | 6.52 | 3.48 | 2.69 | 3.05 | 3.14 | 0.24 | 4.95 | 1.16
F4 | Rank | 5 | 20 | 9 | 10 | 23 | 6 | 22 | 11 | 15 | 14 | 19 | 3
F5 | Mean | 900.00 | 1612.41 | 968.02 | 1032.22 | 3501.02 | 1007.07 | 1735.75 | 1435.17 | 1317.29 | 1512.39 | 1455.05 | 905.45
F5 | Std | 0.00 | 169.52 | 33.32 | 123.31 | 454.17 | 53.80 | 353.48 | 150.01 | 128.08 | 226.27 | 168.69 | 7.45
F5 | SEM | 0.00 | 75.81 | 14.90 | 55.14 | 203.11 | 24.06 | 158.08 | 67.09 | 57.28 | 101.19 | 75.44 | 3.33
F5 | Rank | 3 | 20 | 6 | 10 | 23 | 9 | 22 | 17 | 16 | 19 | 18 | 4
F6 | Mean | 2113.08 | 80,492,297.74 | 29,036.39 | 27,418.66 | 778,300,770.60 | 16,841.83 | 693,916,508.64 | 5693.38 | 867,567.55 | 4801.87 | 2,567,288.39 | 5734.89
F6 | Std | 426.00 | 68,552,616.33 | 23,491.18 | 23,278.33 | 413,432,204.28 | 10,840.05 | 580,001,088.44 | 2104.06 | 569,541.30 | 3067.45 | 3,098,896.00 | 2626.24
F6 | SEM | 190.51 | 30,657,662.03 | 10,505.58 | 10,410.38 | 184,892,502.57 | 4847.82 | 259,384,372.15 | 940.96 | 254,706.61 | 1371.80 | 1,385,868.42 | 1174.49
F6 | Rank | 2 | 21 | 15 | 14 | 23 | 13 | 22 | 11 | 16 | 9 | 19 | 12
F7 | Mean | 2020.83 | 2120.10 | 2037.02 | 2033.14 | 2124.34 | 2043.44 | 2153.94 | 2059.87 | 2058.52 | 2199.10 | 2065.43 | 2037.94
F7 | Std | 3.72 | 22.92 | 7.48 | 8.29 | 23.06 | 17.68 | 27.36 | 14.81 | 7.55 | 67.42 | 23.67 | 5.52
F7 | SEM | 1.66 | 10.25 | 3.35 | 3.71 | 10.31 | 7.91 | 12.24 | 6.62 | 3.38 | 30.15 | 10.59 | 2.47
F7 | Rank | 2 | 20 | 5 | 4 | 21 | 7 | 22 | 14 | 13 | 23 | 15 | 6
F8 | Mean | 2219.38 | 2245.58 | 2229.98 | 2226.03 | 2487.80 | 2226.94 | 2437.36 | 2227.84 | 2305.23 | 2272.49 | 2243.51 | 2223.59
F8 | Std | 7.06 | 14.33 | 1.49 | 1.74 | 216.50 | 1.35 | 67.16 | 1.51 | 63.19 | 66.43 | 18.42 | 1.58
F8 | SEM | 3.16 | 6.41 | 0.67 | 0.78 | 96.82 | 0.60 | 30.04 | 0.68 | 28.26 | 29.71 | 8.24 | 0.71
F8 | Rank | 2 | 16 | 10 | 4 | 23 | 6 | 22 | 8 | 21 | 20 | 15 | 3
F9 | Mean | 2529.28 | 2813.16 | 2554.56 | 2587.50 | 2849.17 | 2581.51 | 2820.32 | 2677.10 | 2601.23 | 2586.06 | 2699.96 | 2565.08
F9 | Std | 0.00 | 71.48 | 22.84 | 37.51 | 65.62 | 51.71 | 110.61 | 1.54 | 30.06 | 65.45 | 62.35 | 20.49
F9 | SEM | 0.00 | 31.96 | 10.21 | 16.78 | 29.35 | 23.13 | 49.47 | 0.69 | 13.44 | 29.27 | 27.88 | 9.16
F9 | Rank | 2 | 21 | 5 | 11 | 23 | 9 | 22 | 19 | 14 | 10 | 20 | 6
F10 | Mean | 2576.14 | 3210.69 | 2500.64 | 2500.69 | 2602.66 | 2547.75 | 3145.48 | 2613.53 | 3089.47 | 2587.39 | 2540.54 | 2679.77
F10 | Std | 53.59 | 535.37 | 0.11 | 0.14 | 33.36 | 63.54 | 705.68 | 62.52 | 675.22 | 80.05 | 54.29 | 157.06
F10 | SEM | 23.97 | 239.42 | 0.05 | 0.06 | 14.92 | 28.41 | 315.59 | 27.96 | 301.97 | 35.80 | 24.28 | 70.24
F10 | Rank | 9 | 23 | 1 | 2 | 15 | 8 | 22 | 16 | 21 | 12 | 7 | 18
F11 | Mean | 2600.10 | 3667.97 | 2830.69 | 2803.48 | 3931.29 | 2695.79 | 3680.65 | 2786.58 | 3365.48 | 2722.50 | 3389.27 | 2913.81
F11 | Std | 0.05 | 515.55 | 178.66 | 196.04 | 380.46 | 75.64 | 552.33 | 79.44 | 218.30 | 167.65 | 610.42 | 177.15
F11 | SEM | 0.02 | 230.56 | 79.90 | 87.67 | 170.15 | 33.83 | 247.01 | 35.53 | 97.62 | 74.97 | 272.99 | 79.22
F11 | Rank | 1 | 21 | 10 | 9 | 23 | 3 | 22 | 8 | 18 | 4 | 19 | 16
F12 | Mean | 2863.13 | 3245.93 | 2864.20 | 2863.83 | 2897.41 | 2866.67 | 3028.81 | 2958.11 | 2875.32 | 3001.54 | 2994.02 | 2864.97
F12 | Std | 1.85 | 161.75 | 0.82 | 1.12 | 11.84 | 2.57 | 52.82 | 81.06 | 20.52 | 102.52 | 89.76 | 0.91
F12 | SEM | 0.83 | 72.34 | 0.37 | 0.50 | 5.30 | 1.15 | 23.62 | 36.25 | 9.18 | 45.85 | 40.14 | 0.41
F12 | Rank | 2 | 23 | 5 | 3 | 15 | 7 | 22 | 19 | 10 | 21 | 20 | 6
Table 3. Statistical results of CEC2022 (F1–F12) with FES = 1000 and 30 independent runs (continued: remaining optimizers).
Function | Measure | WOA | MFO | SHIO | ZOA | MTDE | FVIM | SCA | DOA | SCSO | CMAES | L_SHADE
F1 | Mean | 31,903.11 | 9063.86 | 5032.82 | 490.48 | 11,952.30 | 4224.47 | 1612.42 | 3765.29 | 1969.27 | 27,374.01 | 300.00
F1 | Std | 8700.17 | 5460.18 | 1714.11 | 124.58 | 4874.54 | 1885.44 | 995.59 | 1483.41 | 2291.82 | 14,968.24 | 0.00
F1 | SEM | 3890.83 | 2441.87 | 766.57 | 55.71 | 2179.96 | 843.19 | 445.24 | 663.40 | 1024.93 | 27,074.01 | 0.00
F1 | Rank | 22 | 18 | 14 | 3 | 19 | 13 | 6 | 12 | 8 | 21 | 1
F2 | Mean | 451.99 | 428.00 | 444.18 | 460.42 | 501.34 | 425.53 | 456.88 | 650.42 | 429.75 | 676.19 | 407.93
F2 | Std | 33.82 | 31.42 | 33.59 | 38.34 | 26.26 | 25.67 | 15.40 | 224.20 | 28.71 | 48.75 | 2.20
F2 | SEM | 15.12 | 14.05 | 15.02 | 17.14 | 11.74 | 11.48 | 6.89 | 100.26 | 12.84 | 276.19 | 7.93
F2 | Rank | 12 | 5 | 11 | 15 | 16 | 4 | 14 | 18 | 6 | 19 | 2
F3 | Mean | 631.22 | 601.11 | 617.14 | 619.18 | 616.43 | 603.10 | 617.77 | 632.80 | 618.62 | 621.05 | 600.00
F3 | Std | 20.72 | 1.54 | 10.10 | 8.43 | 4.16 | 3.34 | 3.78 | 18.36 | 6.16 | 28.91 | 0.00
F3 | SEM | 9.27 | 0.69 | 4.52 | 3.77 | 1.86 | 1.49 | 1.69 | 8.21 | 2.76 | 21.05 | 0.00
F3 | Rank | 15 | 3 | 10 | 13 | 9 | 5 | 11 | 16 | 12 | 14 | 1
F4 | Mean | 839.48 | 830.49 | 821.96 | 811.68 | 863.64 | 822.50 | 841.15 | 838.65 | 831.92 | 816.55 | 804.42
F4 | Std | 24.70 | 12.00 | 10.09 | 3.54 | 11.88 | 9.56 | 7.45 | 13.70 | 6.41 | 7.79 | 1.13
F4 | SEM | 11.05 | 5.37 | 4.51 | 1.58 | 5.32 | 4.28 | 3.33 | 6.13 | 2.87 | 16.55 | 4.42
F4 | Rank | 17 | 12 | 7 | 2 | 21 | 8 | 18 | 16 | 13 | 4 | 1
F5 | Mean | 1656.31 | 937.15 | 1069.35 | 1109.45 | 1278.91 | 1002.62 | 992.07 | 1151.12 | 1242.71 | 900.00 | 900.00
F5 | Std | 620.60 | 54.83 | 158.90 | 135.64 | 165.98 | 188.84 | 13.41 | 76.13 | 203.23 | 0.00 | 0.00
F5 | SEM | 277.54 | 24.52 | 71.06 | 60.66 | 74.23 | 84.45 | 6.00 | 34.05 | 90.89 | 0.00 | 0.00
F5 | Rank | 21 | 5 | 11 | 12 | 15 | 8 | 7 | 13 | 14 | 1 | 1
F6 | Mean | 4797.57 | 4558.04 | 4187.23 | 3743.27 | 1,527,883.05 | 5618.25 | 2,126,593.19 | 3832.83 | 3822.24 | 29,800,376.20 | 1800.80
F6 | Std | 1810.41 | 2579.77 | 2497.37 | 2031.83 | 985,318.25 | 2528.84 | 1,094,236.39 | 2741.65 | 1661.52 | 30,491,803.08 | 0.78
F6 | SEM | 809.64 | 1153.71 | 1116.86 | 908.66 | 440,647.72 | 1130.93 | 489,357.39 | 1226.10 | 743.05 | 29,798,576.20 | 0.80
F6 | Rank | 8 | 7 | 6 | 3 | 17 | 10 | 18 | 5 | 4 | 20 | 1
F7 | Mean | 2068.63 | 2025.59 | 2047.53 | 2054.32 | 2082.27 | 2058.02 | 2058.32 | 2076.63 | 2045.24 | 2099.88 | 2000.11
F7 | Std | 27.38 | 5.11 | 18.08 | 14.30 | 16.70 | 26.76 | 8.57 | 34.23 | 10.63 | 97.19 | 0.13
F7 | SEM | 12.25 | 2.29 | 8.09 | 6.40 | 7.47 | 11.97 | 3.83 | 15.31 | 4.75 | 99.88 | 0.11
F7 | Rank | 16 | 3 | 9 | 10 | 18 | 11 | 12 | 17 | 8 | 19 | 1
F8 | Mean | 2240.31 | 2226.33 | 2228.40 | 2248.99 | 2236.56 | 2247.78 | 2234.06 | 2237.51 | 2227.53 | 2264.80 | 2214.71
F8 | Std | 11.09 | 6.31 | 4.40 | 52.97 | 5.67 | 53.39 | 3.26 | 11.11 | 2.11 | 6.92 | 6.86
F8 | SEM | 4.96 | 2.82 | 1.97 | 23.69 | 2.53 | 23.87 | 1.46 | 4.97 | 0.94 | 64.80 | 14.71
F8 | Rank | 14 | 5 | 9 | 18 | 12 | 17 | 11 | 13 | 7 | 19 | 1
F9 | Mean | 2596.67 | 2547.64 | 2636.89 | 2656.78 | 2579.55 | 2600.83 | 2567.58 | 2618.54 | 2619.39 | 2540.50 | 2529.28
F9 | Std | 56.94 | 29.54 | 40.85 | 52.29 | 15.98 | 50.26 | 10.17 | 108.66 | 51.26 | 13.07 | 0.00
F9 | SEM | 25.46 | 13.21 | 18.27 | 23.38 | 7.15 | 22.48 | 4.55 | 48.60 | 22.93 | 240.50 | 229.28
F9 | Rank | 12 | 4 | 17 | 18 | 8 | 13 | 7 | 15 | 16 | 3 | 1
F10 | Mean | 2706.63 | 2577.83 | 2588.54 | 2578.70 | 2507.80 | 2587.42 | 2502.21 | 2632.03 | 2524.85 | 3026.93 | 2521.47
F10 | Std | 386.46 | 160.54 | 49.35 | 67.67 | 3.27 | 48.86 | 1.41 | 69.25 | 54.34 | 658.75 | 47.26
F10 | SEM | 172.83 | 71.79 | 22.07 | 30.26 | 1.46 | 21.85 | 0.63 | 30.97 | 24.30 | 626.93 | 121.47
F10 | Rank | 19 | 10 | 14 | 11 | 4 | 13 | 3 | 17 | 6 | 20 | 5
F11 | Mean | 2866.22 | 2874.41 | 2891.12 | 2777.39 | 2838.26 | 2775.78 | 2781.46 | 3436.25 | 2838.98 | 3081.30 | 2660.00
F11 | Std | 171.85 | 178.99 | 284.68 | 32.97 | 101.58 | 119.93 | 9.98 | 562.19 | 219.46 | 220.00 | 134.16
F11 | SEM | 76.85 | 80.05 | 127.31 | 14.75 | 45.43 | 53.63 | 4.47 | 251.42 | 98.14 | 481.30 | 60.00
F11 | Rank | 13 | 14 | 15 | 6 | 11 | 5 | 7 | 20 | 12 | 17 | 2
F12 | Mean | 2929.09 | 2863.96 | 2883.23 | 2936.24 | 2878.29 | 2884.99 | 2870.62 | 2905.43 | 2866.97 | 2877.93 | 2860.15
F12 | Std | 61.72 | 0.24 | 32.09 | 22.55 | 4.26 | 17.90 | 1.16 | 31.20 | 5.48 | 7.99 | 1.22
F12 | SEM | 27.60 | 0.11 | 14.35 | 10.09 | 1.90 | 8.01 | 0.52 | 13.95 | 2.45 | 177.93 | 160.15
F12 | Rank | 17 | 4 | 13 | 18 | 12 | 14 | 9 | 16 | 8 | 11 | 1
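The Mean, Std, and SEM columns in Tables 2 and 3 summarize the final objective values from 30 independent runs per optimizer. A minimal sketch of how such a summary could be computed is shown below; the function name `summarize` and the sample values are illustrative, not taken from the paper.

```python
import statistics

def summarize(runs):
    """Summarize a list of best objective values from independent runs.

    Returns (mean, sample standard deviation, standard error of the mean),
    corresponding to the Mean/Std/SEM measures reported per function.
    """
    n = len(runs)
    mean = statistics.fmean(runs)
    std = statistics.stdev(runs)   # sample standard deviation (n - 1 denominator)
    sem = std / n ** 0.5           # standard error of the mean
    return mean, std, sem

# Example with hypothetical final fitness values from 5 runs of one optimizer
mean, std, sem = summarize([342.1, 343.0, 342.6, 342.4, 342.8])
```

Reporting SEM alongside Std is useful here because it scales the run-to-run spread by the number of runs, indicating how precisely the reported mean estimates the optimizer's expected performance.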
Table 4. Statistical results (over 30 runs) for compared optimizers.
Optimizer | Best | Worst | Mean | Median | Std | IQR | MeanTime | StdTime | AvgRank
JADEGMO | 42 | 57 | 47.47 | 47 | 4.08 | 5 | 0.1550 | 0.0281 | 1.40
SCA | 44 | 57 | 51.27 | 52 | 3.81 | 5 | 0.0206 | 0.0025 | 2.35
HLOA | 44 | 82 | 60.57 | 59.83 | 8.82 | 13 | 0.0946 | 0.0060 | 4.43
MVO | 46 | 85.18 | 63.16 | 64 | 10.31 | 17 | 0.0381 | 0.0044 | 4.75
WOA | 47 | 77 | 60.97 | 62 | 7.57 | 10 | 0.0166 | 0.0033 | 4.42
FLO | 55 | 87 | 66.83 | 67 | 8.17 | 13 | 0.0316 | 0.0080 | 5.52
AOA | 56 | 74 | 64.16 | 63.50 | 4.51 | 6 | 0.0275 | 0.0066 | 5.13
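The AvgRank column in Table 4 (and Figure 23) reflects a Friedman-like ranking: in each run, optimizers are ranked by their best score, and the per-run ranks are averaged. A minimal sketch of this procedure is given below; the function name `average_ranks` and the sample scores are illustrative assumptions, not the paper's implementation.

```python
def average_ranks(scores_per_run):
    """Compute Friedman-like average ranks across independent runs.

    scores_per_run: list of dicts, one per run, mapping optimizer name to
    its best score in that run (lower is better). Tied scores receive the
    average of the positions they span. Returns optimizer -> average rank.
    """
    names = list(scores_per_run[0])
    totals = {name: 0.0 for name in names}
    for run in scores_per_run:
        ordered = sorted(names, key=lambda n: run[n])
        for name in names:
            # positions occupied by scores equal to this optimizer's score
            tied = [p for p, m in enumerate(ordered, start=1) if run[m] == run[name]]
            totals[name] += sum(tied) / len(tied)
    return {name: totals[name] / len(scores_per_run) for name in names}

# Hypothetical scores from two runs of three optimizers
runs = [{"JADEGMO": 42, "SCA": 44, "WOA": 47},
        {"JADEGMO": 45, "SCA": 44, "WOA": 50}]
ranks = average_ranks(runs)  # JADEGMO: 1.5, SCA: 1.5, WOA: 3.0
```

Averaging ranks rather than raw scores makes the comparison scale-free across benchmark functions, which is why rank aggregation is the customary way to combine results over repeated runs.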
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Al Hwaitat, A.K.; Fakhouri, H.N. Multi-Cloud Security Optimization Using Novel Hybrid JADE-Geometric Mean Optimizer. Symmetry 2025, 17, 503. https://doi.org/10.3390/sym17040503

