Article

Multi-Threshold Art Symmetry Image Segmentation and Numerical Optimization Based on the Modified Golden Jackal Optimization

1 College of Design, Hanyang University, Ansan 15588, Gyeonggi-do, Republic of Korea
2 College of Art, Sungkyunkwan University, Seoul 03063, Republic of Korea
3 Spatial Design Department, Graduate School of Industrial Art, Hongik University, Wausan-ro, Mapo-gu, Seoul 04066, Republic of Korea
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(12), 2130; https://doi.org/10.3390/sym17122130
Submission received: 29 October 2025 / Revised: 16 November 2025 / Accepted: 5 December 2025 / Published: 11 December 2025
(This article belongs to the Section Engineering and Materials)

Abstract

To address the issues of uneven population initialization, insufficient individual information interaction, and passive boundary handling in the standard Golden Jackal Optimization (GJO) algorithm, while improving the accuracy and efficiency of multilevel thresholding in artistic image segmentation, this paper proposes an improved Golden Jackal Optimization algorithm (MGJO) and applies it to this task. MGJO introduces a high-quality point set for population initialization, ensuring a more uniform distribution of initial individuals in the search space and better adaptation to the complex grayscale characteristics of artistic images. A dual crossover strategy, integrating horizontal and vertical information exchange, is designed to enhance individual information sharing and fine-grained dimensional search, catering to the segmentation needs of artistic image textures and color layers. Furthermore, a global-optimum-based boundary handling mechanism is constructed to prevent information loss when boundaries are exceeded, thereby preserving the boundary details of artistic images. The performance of MGJO was evaluated on the CEC2017 (dim = 30, 100) and CEC2022 (dim = 10, 20) benchmark suites against seven algorithms, including GWO and IWOA. Population diversity analysis, exploration–exploitation balance assessment, Wilcoxon rank-sum tests, and Friedman mean-rank tests all demonstrate that MGJO significantly outperforms the comparison algorithms in optimization accuracy, stability, and statistical reliability. In multilevel thresholding for artistic image segmentation, using Otsu’s between-class variance as the objective function, MGJO achieves higher fitness values (approaching Otsu’s optimal values) across various artistic images with complex textures and colors, as well as benchmark images such as Baboon, Camera, and Lena, in 4-, 6-, 8-, and 10-level thresholding tasks. 
The resulting segmented images exhibit superior peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM) compared to other algorithms, more precisely preserving brushstroke details and color layers. Friedman average rankings consistently place MGJO in the lead. These experimental results indicate that MGJO effectively overcomes the performance limitations of the standard GJO, demonstrating excellent performance in both numerical optimization and multilevel thresholding artistic image segmentation. It provides an efficient solution for high-dimensional complex optimization problems and practical demands in artistic image processing.

1. Introduction

In the field of computer vision and image processing, image segmentation serves as a crucial bridge between image acquisition and high-level semantic analysis. Its primary objective is to partition an image into non-overlapping regions with clear semantic meaning, thereby providing a foundation for subsequent tasks such as object detection, feature extraction, and image understanding [1,2]. With the growing complexity of image data—such as high-resolution images, medical imaging, and remote sensing imagery—traditional single-threshold segmentation methods are increasingly insufficient for meeting the demands of fine-grained segmentation. In this context, multilevel thresholding has emerged as a research hotspot due to its ability to more accurately capture the characteristics of image gray-level distributions [3,4]. The core of multilevel thresholding lies in identifying the optimal combination of thresholds that maximizes inter-class differences or minimizes intra-class variance. Commonly employed objective functions include Otsu’s between-class variance method and Kapur’s entropy method. However, as the number of thresholds increases, the search space for optimal thresholds grows exponentially. Consequently, traditional approaches such as exhaustive search and greedy algorithms suffer from high computational complexity and are prone to local optima, making them inefficient for solving high-dimensional threshold optimization problems [5,6,7].
The task of multilevel thresholding is an NP-hard combinatorial problem, where the required computational effort surges dramatically with a higher count of thresholds. While traditional solvers tend to be slow and produce mediocre results [8,9], modern metaheuristics draw inspiration from the self-organizing principles of animal groups. This bio-inspired class of algorithms efficiently locates near-optimal solutions to complex thresholding issues within manageable computation periods.
With the advancement of research, a wide variety of swarm intelligence algorithms have emerged. Examples include Particle Swarm Optimization (PSO), inspired by the social behavior of bird flocks and fish schools, where particles adjust their direction and velocity based on personal and neighborhood experiences to search for optimal solutions [10]; the Gray Wolf Optimizer (GWO), modeled on the social hierarchy and hunting strategies of gray wolves [11]; the Crested Porcupine Optimizer (CPO), inspired by the diverse defensive behaviors of crested porcupines [12]; Ant Colony Optimization (ACO), founded upon the cooperative foraging of ant populations [13]; the Bat Algorithm (BA), which abstracts the echolocation principles of bats [14]; the Animated Oat Optimization algorithm (AOO), a simulation of the dynamic movements exhibited by wild oats [15]; the Snake Optimizer (SO), designed according to the foraging and reproductive behaviors of snakes [16]; the Sparrow Search Algorithm (SSA), modeled after sparrows' foraging and anti-predation behaviors [17]; the Grasshopper Optimization Algorithm (GOA), inspired by grasshopper swarming behaviors [18]; the Remora Optimization Algorithm (ROA), derived from the symbiotic feeding behaviors of remoras attached to different host species [19]; and the Black Widow Optimization algorithm (BWO), based on the distinctive reproductive strategies of black widow spiders [20]. Other sophisticated methods also draw on animal-kingdom behaviors: the Dung Beetle Optimizer (DBO) captures the insect's resource management, from locomotion to reproduction [21]; the Golden Eagle Optimizer (GEO) emulates the raptor's dynamic control of attack speed during its distinctive spiral descent [22]; and the Salp Swarm Algorithm replicates the coordinated movement and group formation of salps in their marine habitat [23]. Further notable examples include the Chimpanzee Optimization Algorithm (ChOA), which simulates chimpanzees' cooperative hunting strategies (attacking, chasing, blocking, and driving prey) [24], and Artificial Rabbits Optimization (ARO), based on rabbits' survival strategies such as detour foraging and random hiding [25].
Moreover, many researchers have applied swarm intelligence algorithms to image segmentation. For example, Mohamed employed the Whale Optimization Algorithm (WOA) and Moth-Flame Optimization (MFO) to determine the optimal multilevel thresholds for image segmentation [26]. Mookiah et al. proposed an Enhanced Sine Cosine Algorithm (ESCA) to identify thresholds for segmenting color images [27]. Aranguren et al. introduced a multilevel thresholding method based on LSHADE for the segmentation of magnetic resonance brain images, and validated the approach using three sets of reference images [28]. Hairu Guo et al. [8] developed an Improved Aquila Optimizer (IAO) for multilevel thresholding in image segmentation, and experimental results on different benchmark images demonstrated that IAO achieved strong segmentation performance in most cases. Krishna Gopal Dhal et al. [29] employed nature-inspired optimization algorithms (NIOA) as an alternative for solving multilevel thresholding problems, and provided a comprehensive review of the most significant NIOAs applied in multilevel threshold-based image segmentation. Ziqi Jiang et al. [30] proposed a novel Ensemble Multi-Teaching-Learning-Based Optimization (EMTLBO), which integrates three different variants of the Teaching-Learning-Based Optimization (TLBO) algorithm—the original TLBO, its neighborhood search variant, and its differential variant—and applied it to image segmentation. Mohamed Abd Elaziz et al. [31] introduced an alternative multilevel thresholding method for polyp image segmentation (MPOA). Their approach enhances the Planetary Optimization Algorithm by incorporating operators from the Reptile Search Algorithm, thereby improving segmentation accuracy.
Despite the effectiveness of the aforementioned swarm intelligence-based multilevel thresholding methods in certain scenarios, they still share several common drawbacks. Most approaches rely heavily on random initialization strategies during the initial population generation stage, which often leads to uneven population distribution within the search space. As a result, the potential optimal threshold regions may not be adequately covered, causing insufficient global exploration and subsequently reducing optimization efficiency [32]. In addition, the boundary-handling mechanisms adopted by these methods are typically simple strategies, such as resetting solutions to the boundary or applying random adjustments. Such passive treatments not only interrupt the natural search trajectories of individuals and discard valuable directional information but may also overlook optimal thresholds located near the boundaries, leading to wasted computational resources. Furthermore, these methods generally lack adaptive adjustment mechanisms tailored to varying gray-level distribution characteristics of images. This limitation makes it difficult to achieve a balance among segmentation accuracy, stability, and computational efficiency, thereby restricting their generalizability and robustness when applied to complex real-world scenarios [33].
Introduced by Nitish Chopra and Mohsin Ansari in 2022, the Golden Jackal Optimization (GJO) algorithm [34] is a modern metaheuristic technique grounded in swarm intelligence. It emulates the cooperative hunting strategies of mated golden jackal pairs. The method's core mechanics involve a two-stage process: an exploration stage, where the algorithm conducts a global search for promising regions, and an exploitation stage, where it focuses on locally encircling and refining solutions [35]. Compared with traditional algorithms, GJO demonstrates advantages in terms of convergence speed and optimization accuracy. However, it still exhibits several noticeable shortcomings in practical applications [36]. First, its reliance on purely random initialization leads to a scattered distribution of individuals in the search space, limiting the coverage of potential optimal regions and reducing global exploration efficiency. Second, the individual update mechanism is solely guided by the male and female jackals (i.e., the global best and second-best solutions), which restricts effective information sharing and collaboration within the population. As a result, GJO tends to suffer from insufficient local exploitation and slow convergence in the later stages, especially for high-dimensional problems. Finally, the boundary-handling mechanism adopts a passive reset-to-boundary strategy, which interrupts natural search trajectories, discards valuable search information, and risks overlooking optimal solutions near the boundaries, further constraining its overall performance [36,37].
To overcome these limitations, the present study introduces an enhanced Golden Jackal Optimization (MGJO) approach, augmented through multiple strategic improvements, and employs it specifically for image segmentation using multilevel thresholding. The principal innovations of this research are delineated below, encompassing both algorithmic advancements and practical implementations. Theoretically, MGJO integrates good-point set initialization, a dual crossover strategy, and a boundary-handling mechanism guided by the global best solution to overcome the limitations of standard GJO, enrich the improvement strategies of swarm intelligence algorithms, and provide new approaches for solving high-dimensional and complex optimization problems. From an application perspective, MGJO is employed in multilevel threshold image segmentation to enhance segmentation accuracy and efficiency, thereby offering technical support for practical domains such as medical image analysis, remote sensing interpretation, and industrial quality inspection.
To distinguish MGJO from existing improved GJO variants and highlight its innovative value, we provide a comparative analysis table (Table 1) that systematically contrasts MGJO with representative enhanced GJO methods (IGJO, AGJO, EGJO, DE-GJO) in terms of improvement strategies, core innovations, and performance gains. Conceptually, MGJO differs from existing methods by integrating three synergistic strategies (good-point set initialization, dual crossover, global-optimum boundary handling) rather than single-module improvements; functionally, the dual crossover (horizontal + vertical) enables both inter-individual information sharing and dimension-level fine-grained search, while the global-optimum boundary mechanism avoids information loss, addressing limitations of existing methods that focus only on exploration or exploitation.
The main contributions of this paper are as follows:
(1).
We propose a Modified Golden Jackal Optimization (MGJO) algorithm by integrating three innovative strategies to address the inherent limitations of the standard GJO. First, the good-point set population initialization strategy replaces random initialization, ensuring the initial population is more uniformly and widely distributed in the search space, which lays a solid foundation for global exploration. Second, the dual crossover strategy—combining horizontal crossover (promoting information sharing among individuals to expand search coverage) and vertical crossover (enabling fine-grained search at the dimension level to deepen local exploitation)—effectively enhances the algorithm’s ability to balance exploration and exploitation, especially for high-dimensional complex problems. Third, the global-optimum-based boundary handling mechanism replaces the passive “reset-to-boundary” strategy, guiding out-of-bound individuals toward the global optimal region to retain valid search information and improve the utilization of boundary-area search resources. These three strategies work synergistically to overcome the shortcomings of standard GJO, including uneven initialization, insufficient information exchange, and passive boundary handling, enriching the improvement framework of swarm intelligence algorithms.
(2).
We conduct comprehensive validation of MGJO’s performance through numerical optimization experiments and multilevel threshold image segmentation applications. On the CEC2017 (dim = 30, 100) and CEC2022 (dim = 10, 20) benchmark suites, MGJO outperforms seven mainstream algorithms (e.g., GWO, IWOA, GJO) in optimization accuracy, stability, and exploration–exploitation balance, as verified by population diversity analysis, ablation experiments, and statistical tests (Wilcoxon rank-sum test, Friedman mean-rank test). When applied to multilevel threshold segmentation of artistic images and benchmark images (e.g., Baboon, Lena) with Otsu’s between-class variance as the objective function, MGJO achieves higher fitness values (closer to Otsu’s optimal values) across 4-, 6-, 8-, and 10-level threshold tasks, and the segmented images exhibit superior peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM), effectively preserving brushstroke details and color layers. This dual validation confirms MGJO’s effectiveness in both numerical optimization and practical image segmentation scenarios, providing an efficient solution for high-dimensional complex optimization problems and artistic image processing demands.
The remainder of this paper is organized as follows: Section 2 introduces the basic principles of the standard Golden Jackal Optimization algorithm and presents the three enhancement strategies and mathematical model of MGJO. Section 3 reports numerical experiments on the CEC2017 and CEC2022 benchmark suites to validate the optimization performance of MGJO, along with comparative analysis against mainstream algorithms. Section 4 applies MGJO to multilevel threshold image segmentation, using Otsu’s method as the objective function, and evaluates segmentation performance with PSNR, FSIM, and SSIM metrics. Section 5 concludes the study and discusses potential future research directions.

2. Golden Jackal Optimization and the Proposed Methodology

2.1. Golden Jackal Optimization (GJO)

Proposed by Nitish Chopra and Mohsin Ansari in 2022, the Golden Jackal Optimization (GJO) algorithm is a metaheuristic inspired by the cooperative hunting tactics of the golden jackal (Canis aureus), a medium-sized canid widely distributed across North Africa, Eastern Europe, the Middle East, and parts of Asia. Characterized by their relatively small body size and long, slender legs, golden jackals are well adapted for long-distance running to pursue prey. Notably, they often hunt in male–female pairs, a cooperative strategy that serves as the biological foundation for the design of GJO [34].

2.1.1. Search Space Formulation

As a swarm intelligence method, the GJO algorithm begins by creating an initial set of potential solutions, which are randomly positioned throughout the entire search space to serve as starting points for the optimization process. The initial solution Z_0 can be expressed as follows [34]:
Z_0 = (UB − LB) · rand + LB
where LB and UB denote the lower and upper bounds of the search space, respectively, and rand represents a uniformly distributed random number in the interval [0, 1].
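For concreteness, the random initialization of Equation (1) can be sketched in Python as follows. This is a minimal NumPy illustration; the function name and the optional rng argument are our own choices, not part of the original method:

```python
import numpy as np

def init_population(pop, dim, lb, ub, rng=None):
    """Random initialization of Equation (1): Z0 = (UB - LB) * rand + LB."""
    rng = np.random.default_rng() if rng is None else rng
    # One uniform draw per individual per dimension, scaled into [lb, ub].
    return (ub - lb) * rng.random((pop, dim)) + lb
```

Each row of the returned matrix is one prey (candidate solution), matching the Prey matrix defined below.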
At this stage, an initial population matrix is formed, representing the prey set, which can be expressed as:
Prey =
⎡ Z_{1,1}    Z_{1,2}    ⋯   Z_{1,dim}  ⎤
⎢ Z_{2,1}    Z_{2,2}    ⋯   Z_{2,dim}  ⎥
⎢    ⋮          ⋮       ⋱      ⋮       ⎥
⎣ Z_{pop,1}  Z_{pop,2}  ⋯   Z_{pop,dim} ⎦
where Z_{i,j} represents the j-th element of the i-th prey, pop is the total number of prey, and dim denotes the dimensionality of the decision variables. Each parameter of a solution is referred to as the position of the prey. During the optimization process, the fitness of each prey is evaluated using the objective function.
The fitness values of all prey are calculated as follows:
Fit =
⎡ f(Z_{1,1}, Z_{1,2}, …, Z_{1,dim}) ⎤
⎢ f(Z_{2,1}, Z_{2,2}, …, Z_{2,dim}) ⎥
⎢                ⋮                  ⎥
⎣ f(Z_{pop,1}, Z_{pop,2}, …, Z_{pop,dim}) ⎦
where F i t is the fitness matrix containing all prey fitness values, and f · represents the cost function. Among all fitness values, the prey with the best fitness is considered the male golden jackal, while the prey with the second-best fitness is designated as the female golden jackal.

2.1.2. Exploration Stage (Target Prospecting and Discovery)

The golden jackal possesses the ability to detect and track prey; however, in certain situations, the prey may escape. In such cases, the golden jackal chooses to wait and search for new prey. During the hunting process, the male jackal typically takes the lead, while the female jackal follows behind. This process can be mathematically expressed as follows [34]:
Z_1(t) = Z_M(t) − E · |Z_M(t) − RL × Prey(t)|
Z_2(t) = Z_{FM}(t) − E · |Z_{FM}(t) − RL × Prey(t)|
where Z_M(t) and Z_{FM}(t) denote the positions of the male and female jackals in the search space, respectively. E represents the escaping energy of the prey, which is calculated as shown in Equation (5). RL is a random vector based on the Lévy flight distribution, defined in Equation (6).
E = E_1 × E_0,    E_0 = 2 × r − 1,    E_1 = c_1 × (1 − t/T)
LF(z) = 0.01 × (μ × σ) / |v|^{1/β},    σ = [ Γ(1 + β) × sin(πβ/2) / ( Γ((1 + β)/2) × β × 2^{(β−1)/2} ) ]^{1/β}
where r is a random number within the interval [0, 1], c_1 is a constant (set to 1.5), t denotes the current iteration, and T is the maximum number of iterations. During the iterative process, E_1 decreases linearly from 1.5 to 0. Additionally, μ and v are random numbers within [0, 1], and β is a constant set to 1.5.
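The escaping energy of Equation (5) and the Lévy-flight vector of Equation (6) can be sketched as below. This is a hedged reading: Equation (6) follows Mantegna's construction, and we draw μ and v uniformly from [0, 1) as the text states; the function names and the rng parameter are our own:

```python
import math
import numpy as np

def escaping_energy(t, T, c1=1.5, rng=None):
    """Equation (5): E = E1 * E0, with E0 = 2*r - 1 and E1 = c1 * (1 - t/T)."""
    rng = np.random.default_rng() if rng is None else rng
    e0 = 2.0 * rng.random() - 1.0   # r ~ U(0, 1), so E0 lies in [-1, 1]
    e1 = c1 * (1.0 - t / T)         # decreases linearly from c1 to 0
    return e1 * e0

def levy_vector(dim, beta=1.5, rng=None):
    """Equation (6), read as a Mantegna-style Levy step: 0.01 * mu * sigma / |v|^(1/beta)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = rng.random(dim)            # per the text, mu and v are random in [0, 1]
    v = rng.random(dim)
    return 0.01 * mu * sigma / np.abs(v) ** (1 / beta)
```

With c1 = 1.5, the magnitude of E is bounded by E1, which is why |E| ≥ 1 is only possible early in the run.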
Finally, the position of the golden jackal is updated according to:
Z(t + 1) = (Z_1(t) + Z_2(t)) / 2

2.1.3. Intensification Phase (Prey Capture and Constriction)

When the prey is disturbed by the golden jackals, its escaping ability gradually diminishes. At this stage, the male and female jackals begin to encircle the prey detected during the earlier phase. Once the encirclement is completed, they pounce on the prey and capture it. Based on this predatory behavior of the male and female jackals, the following mathematical model can be established [34]:
Z_1(t) = Z_M(t) − E · |RL × Z_M(t) − Prey(t)|
Z_2(t) = Z_{FM}(t) − E · |RL × Z_{FM}(t) − Prey(t)|
In Equation (8), RL introduces random behavior during the exploitation phase, enhancing the algorithm's exploration capability and helping to prevent it from becoming trapped in local optima.

2.1.4. Transitioning from Exploration to Exploitation

In the GJO algorithm, the transition from the exploration phase to the exploitation phase is controlled by the value of E. During the prey's escape, its energy level decreases significantly: as E_0 decreases from 0 to −1, the prey's stamina gradually weakens; conversely, as E_0 increases from 0 to 1, the prey's stamina gradually strengthens. When |E| ≥ 1, the male and female jackals disperse to different regions to search for prey, indicating that the algorithm is in the exploration phase. When |E| < 1, the jackals initiate an attack on the prey, and the algorithm transitions into the exploitation phase [34].

2.2. Proposed Golden Jackal Optimization

2.2.1. Good-Point-Set Population Initialization

To address the limitation of the standard GJO in which the population distribution is insufficiently uniform and extensive—resulting in incomplete exploration of the search space, potentially missing better solutions or slowing convergence—we introduce a good-point set (GPS) for initializing the golden jackal population, ensuring a wider and more uniform coverage across the entire search space. The good-point set can be computed as follows:
G_n(z) = { ({o_1 × l}, {o_2 × l}, …, {o_dim × l}), 1 ≤ l ≤ n }
φ(z) = C(r, ε) × n^{−1+ε}
o_i = 2 × cos(2πi/p),    1 ≤ i ≤ dim
where C(r, ε) is a constant dependent only on r and ε (ε > 0), dim denotes the problem dimension, G_n(z) represents the good-point set, o_i is an element of its generating vector, and p is the smallest prime number satisfying (p − 3)/2 ≥ dim. φ(z) is the discrepancy of the good-point set G_n(z), which quantifies the uniformity of the point set distribution in the search space; a smaller φ(z) indicates a more uniform distribution. n denotes the size of the good-point set, which is equal to the population size (n = pop) in this study and is set to 30, consistent with the experimental parameter settings.
Finally, the population initialization using the good-point set is expressed as:
Z_0 = G_n(z) · (UB − LB) + LB
Figure 1 illustrates the population distributions of GJO and MGJO. It can be observed that MGJO, initialized using the good-point set, exhibits a broader and more uniform population distribution, which enhances the exploration capability of the algorithm.
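Under the construction above, the good-point-set initialization of Equations (9) and (10) might be implemented as follows. This is an illustrative NumPy sketch (the function names are ours), using the smallest prime p with (p − 3)/2 ≥ dim:

```python
import math
import numpy as np

def good_point_set(pop, dim):
    """Equation (9): row l has coordinates frac(l * 2*cos(2*pi*i/p)), i = 1..dim,
    where p is the smallest prime satisfying (p - 3)/2 >= dim."""
    p = 2 * dim + 3
    while any(p % k == 0 for k in range(2, int(math.isqrt(p)) + 1)):
        p += 1                                   # advance to the next prime
    r = 2.0 * np.cos(2.0 * np.pi * np.arange(1, dim + 1) / p)  # generating vector
    l = np.arange(1, pop + 1).reshape(-1, 1)
    return np.mod(l * r, 1.0)                    # fractional parts lie in [0, 1]

def init_population_gps(pop, dim, lb, ub):
    """Equation (10): map the good-point set onto [LB, UB]."""
    return good_point_set(pop, dim) * (ub - lb) + lb
```

Unlike pure random sampling, the rows of this deterministic lattice fill the unit cube with low discrepancy, which is the property Figure 1 visualizes.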

2.2.2. Double-Crossover Strategy

In the standard GJO algorithm, individual updates primarily rely on interactions between individuals and the global best solution, while effective information exchange among individuals within the population is lacking. This limitation is particularly pronounced in high-dimensional and complex optimization problems, leading to insufficient local exploitation and slow convergence in later stages. Moreover, when handling variables of different dimensions, the standard algorithm employs a uniform update strategy, which cannot perform fine-grained search tailored to the unique characteristics of each dimension, thereby significantly affecting overall performance. To address these challenges, this study innovatively integrates horizontal and vertical crossover strategies, forming a dual-crossover mechanism designed to enhance the search capability of the algorithm comprehensively. The double-crossover strategy is adopted because horizontal crossover promotes inter-individual information sharing to expand search coverage, while vertical crossover enables intra-individual dimensional fine-tuning to deepen local exploitation, jointly addressing the balance between exploration and exploitation in high-dimensional optimization [41].
(1) Horizontal crossover: enhancing information sharing among individuals
The horizontal crossover strategy focuses on promoting information flow and integration between different individuals within the population. Specifically, the population indices are first randomly permuted to generate parent pairs ( p a r e n t 1 and p a r e n t 2 ). For each parent pair, two offspring are generated as follows:
offspring_1 = r_1 · parent_1 + (1 − r_1) · parent_2 + γ · (parent_1 − parent_2)
offspring_2 = r_2 · parent_2 + (1 − r_2) · parent_1 + γ · (parent_2 − parent_1)
where r_1, r_2 ~ U(0, 1) are uniformly distributed random weights that flexibly adjust the contribution of each parent in generating offspring. The perturbation factor γ ∈ [0, 1] introduces a random disturbance term, effectively preventing the offspring from being too similar to their parents and thereby enhancing population diversity.
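A minimal sketch of the horizontal crossover in Equation (11), assuming pairs are formed by randomly permuting the population as described. The function name and the per-dimension random draws are our own choices, and any greedy selection between parents and offspring is omitted:

```python
import numpy as np

def horizontal_crossover(population, rng=None):
    """Equation (11): pair randomly permuted individuals and blend each pair
    into two offspring using random weights r1, r2 and perturbation gamma."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = population.shape
    idx = rng.permutation(n)                 # random parent pairing
    offspring = population.copy()
    for k in range(0, n - 1, 2):
        p1, p2 = population[idx[k]], population[idx[k + 1]]
        r1, r2 = rng.random(dim), rng.random(dim)
        gamma = rng.random(dim)              # perturbation factor in [0, 1], per the text
        offspring[idx[k]] = r1 * p1 + (1 - r1) * p2 + gamma * (p1 - p2)
        offspring[idx[k + 1]] = r2 * p2 + (1 - r2) * p1 + gamma * (p2 - p1)
    return offspring
```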
(2) Vertical crossover: enabling dimension-level fine-grained search
The vertical crossover strategy emphasizes information exchange and optimization across different dimensions within an individual. For each individual in the population, two distinct dimensions j_1 and j_2 (j_1 ≠ j_2) are randomly selected, and a new solution is generated by performing crossover on these dimensions as follows:
X_{vc}^{j_1} = β · X_{i,j_1} + (1 − β) · X_{i,j_2}
where β ~ U(0, 1) is a random parameter used to flexibly adjust the weight allocation during dimension-level crossover. This approach enables the algorithm to explore potential relationships among different dimensions within an individual, allowing fine-tuned adjustments that respect the unique characteristics of high-dimensional problems.
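The vertical crossover of Equation (12) can be sketched similarly; again, the function name and NumPy details are ours:

```python
import numpy as np

def vertical_crossover(population, rng=None):
    """Equation (12): for each individual, blend two distinct random
    dimensions j1 != j2 using a random weight beta ~ U(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    out = population.copy()
    n, dim = population.shape
    for i in range(n):
        j1, j2 = rng.choice(dim, size=2, replace=False)
        beta = rng.random()
        # Only dimension j1 is overwritten; j2 contributes but keeps its value.
        out[i, j1] = beta * population[i, j1] + (1 - beta) * population[i, j2]
    return out
```

Because only one coordinate per individual changes, this operator perturbs solutions far more gently than the horizontal crossover, which is what gives it its local, dimension-level character.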

2.2.3. Boundary Processing Mechanism Based on Global Optimization

In swarm intelligence algorithms, boundary control plays a crucial role by constraining the search space, thereby preventing invalid searches, premature convergence, and entrapment in local optima [42]. Such issues may arise when the search process deviates from feasible solutions or exceeds problem constraints. If the search space is set too narrowly (i.e., the range between L B and U B is excessively small, failing to cover potential optimal solution regions), it may hinder the algorithm from fully exploring the broader solution space, increasing the likelihood of being trapped in local optima [43]. Therefore, defining an appropriate boundary correction mechanism can enhance the algorithm’s ability to find the global optimum, prevent wasted search within infeasible regions, and avoid unnecessary computational effort. In some cases, individuals exceeding the boundaries may restrict the search process, preventing the algorithm from fully leveraging information carried by the current population [44].
Within the conventional Golden Jackal Optimizer, a rudimentary constraint-management technique is typically applied, exemplified by repositioning straying search agents directly onto the feasible region’s perimeter. This abrupt correction severs the agent’s natural exploratory path, effectively discarding the directional intelligence gathered from its current iteration. This fundamentally reactive “corrective” strategy not only squanders computational effort but also introduces the risk of overlooking promising, near-boundary optimal solutions that the agent’s original trajectory was approaching.
To reconceptualize constraint management from a restrictive punitive measure into a proactive directional aid, this work introduces a boundary guidance mechanism anchored to the swarm’s best-known position. As depicted in Figure 2, should an agent’s position exceed the feasible domain, the methodology does not merely reposition it at random. Instead, it actively steers the individual towards the vicinity of the incumbent global optimum. This technique reframes an invalid boundary violation as a productive refinement step, channeling the search effort into promising high-fitness regions. By simultaneously correcting infeasible solutions and capitalizing on the collective search history, the mechanism markedly enhances the convergence rate. The mathematical formulation for this process is given by:
Z_i = Z_best + α · (UB − Z_best),   if Z_i > UB
Z_i = Z_best − α · (Z_best − LB),   if Z_i < LB
where Z_best denotes the global best solution, and α is a uniformly distributed random number in the interval [0, 0.5].
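A sketch of the boundary repair in Equation (13), drawing α ~ U(0, 0.5) per component; the vectorized form and the function name are our own:

```python
import numpy as np

def repair_bounds(z, z_best, lb, ub, rng=None):
    """Equation (13): pull out-of-bound components toward the global best
    solution instead of clamping them onto the boundary."""
    rng = np.random.default_rng() if rng is None else rng
    z = np.asarray(z, dtype=float).copy()
    alpha = rng.uniform(0.0, 0.5, z.shape)
    over, under = z > ub, z < lb
    # Violating components land between z_best and the violated bound.
    z[over] = (z_best + alpha * (ub - z_best))[over]
    z[under] = (z_best - alpha * (z_best - lb))[under]
    return z
```

Components already inside [LB, UB] are left untouched, so the repair only redirects the invalid part of a move rather than discarding the whole step.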
In summary, the pseudo-code of the MGJO algorithm is presented in Algorithm 1.
Algorithm 1. Pseudocode of the Modified Golden Jackal Optimizer (MGJO)
1:    Inputs: the population size pop and the maximum number of iterations T
2:    Inputs: the prey population positions Z_i (i = 1, 2, …, pop)
3:    while t < T do
4:      Compute the objective function values of all candidate solutions
5:      Z_1 = the best prey (male jackal position)
6:      Z_2 = the second-best prey (female jackal position)
7:      for i = 1 : pop do
8:        Update the escaping energy E using Equation (5)
9:        Update RL using Equation (6)
10:       if |E| ≥ 1 then (Exploration)
11:         Update the prey position using Equations (4) and (7)
12:         Apply Equation (13) for boundary adjustment
13:       else (|E| < 1) (Exploitation)
14:         Update the prey position using Equations (7) and (8)
15:         Apply Equation (13) to perform solution-boundary correction
16:       end if
17:     end for
18:     Apply the double-crossover strategy using Equations (11) and (12)
19:   end while
20:   Return Z_1

2.3. Time Complexity Analysis

The run-time complexity of MGJO depends on four core processes: good-point-set initialization, fitness evaluation, golden jackal position update, and the double-crossover strategy (the global-optimum-based boundary handling has the same complexity as GJO's passive reset and adds no overhead). For a population size n, maximum iterations t, and problem dimension d: good-point-set initialization costs O(n × d) (the same as GJO's random initialization, since each individual's d-dimensional parameters are generated via the good-point-set mapping); fitness evaluation costs O(n × d × f) (the same as GJO, with the objective function complexity f treated as a constant); the position update costs O(t × n) + O(t × n × d) (the same as GJO, covering the search for the best positions and the update of individual vectors); and the double-crossover strategy adds O(t × n × d) (horizontal crossover for inter-individual information sharing and vertical crossover for dimension-level fine-grained search, both with linear overhead). The total run-time complexity of MGJO is therefore O(n × d × (2t + 1) + t × n), which is of the same asymptotic order O(n × d × t) as the standard GJO, ensuring no excessive computational cost while improving performance.

3. Numerical Experiments

3.1. Configuration of Algorithmic Parameters

This section assesses the efficacy of the proposed MGJO methodology by employing the highly complex CEC2017 [45] and CEC2022 [46] numerical benchmark suites for comparative analysis against established metaheuristic algorithms. The comparison algorithms include: Grey Wolf Optimizer (GWO) [11], Improved Whale Optimization Algorithm (IWOA) [47], Autonomous Groups Particle Swarm Optimization (AGPSO) [48], Dung Beetle Optimizer (DBO) [21], Holistic Swarm Optimization (HSO) [49], Birds of Prey-Based Optimization (BPBO) [50], and Golden Jackal Optimization (GJO) [34]. The specific parameter settings for each comparison algorithm are listed in Table 2.
To ensure experimental fairness and account for stochastic effects, all benchmarked methods used a uniform population size of 30 and a ceiling of 500 iterations. Each algorithm was run 30 times independently, and the arithmetic mean (Ave), standard deviation (Std), and performance rank (Rank) were recorded, with optimal values typeset in bold. All experiments were run on Windows 10 with a 13th-generation Intel Core i5-13400 processor (2.5 GHz) and 16 GB of DDR4 memory, using MATLAB R2024b.

3.2. Qualitative Analysis of MGJO

This segment presents a conceptual evaluation of the developed MGJO framework. First, we analyze the solution distribution diversity—a vital characteristic for comprehensive search space coverage. Subsequently, we investigate the equilibrium between global scanning and local refinement, noting that earlier cycles demand broader investigation while subsequent stages require intensified concentration. The method’s search capabilities are verified through dedicated exploration and exploitation metrics. Furthermore, component ablation studies are conducted to quantify the contribution of each enhancement. Comprehensive discussions follow in subsequent sections.

3.2.1. Analysis of the Population Diversity

Within the domain of metaheuristic search methodologies, population diversity quantifies the dispersion characteristics among constituent agents in a solution set [51], with each agent characteristically representing a potential configuration within the feasible domain. A decline in population diversity can lead to premature convergence to a local optimum, thereby constraining the algorithm’s global search capability. On the other hand, preserving higher diversity facilitates exploration across broader areas of the solution space and improves the probability of locating the global optimum. In this subsection, we evaluate the population diversity of the MGJO algorithm based on Equation (14) [52].
I C ( t ) = \sum_{i=1}^{N} \sum_{d=1}^{D} \left( x_{id}^{t} - c_{d}^{t} \right)^{2}
where IC(t) characterizes the collective dispersion of the solution set, N is the total number of agents, D is the dimensionality of the search space, and x_id^t is the coordinate of the i-th candidate solution along the d-th axis at iteration t. The quantity c_d^t is the d-th coordinate of the population centroid at iteration t, computed through Equation (15).
c_{d}^{t} = \frac{1}{N} \sum_{i=1}^{N} x_{id}^{t}
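Equations (14) and (15) can be evaluated in a few lines. A minimal sketch, taking c_d^t as the per-dimension mean over the N individuals (the population centroid); the function name is illustrative:

```python
def population_diversity(X):
    """IC(t): sum of squared distances of all individuals from the centroid.

    X is a list of N individuals, each a list of D coordinates. Higher
    values mean the population is more spread out in the search space.
    """
    N, D = len(X), len(X[0])
    centroid = [sum(X[i][d] for i in range(N)) / N for d in range(D)]
    return sum((X[i][d] - centroid[d]) ** 2
               for i in range(N) for d in range(D))
```

A fully collapsed population (all individuals identical) gives IC(t) = 0, the premature-convergence signature discussed below.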
Figure 3 presents the evolution curves of population diversity for MGJO and the original GJO on the CEC2017 test suite. Population diversity is quantified using the I C t metric defined in Equation (14), which reflects the dispersion of individuals within the search space. Higher values indicate stronger diversity, which is beneficial for exploring unknown regions and avoiding local optima.
As shown in the figure, across all test functions, the diversity curve of MGJO consistently remains above that of GJO, and this advantage persists throughout the entire iteration process. In the early stage of iteration (first 50 generations), MGJO, aided by the good-point-set initialization strategy, distributes the initial population more uniformly and broadly across the search space, resulting in significantly higher I C t values than GJO and laying a solid foundation for subsequent global exploration. During the middle stage of iteration (50–300 generations), the dual crossover strategy (Section 2.2.2) promotes information sharing among individuals through horizontal crossover and enables fine-grained, dimension-level search via vertical crossover. This effectively maintains population diversity and prevents the rapid decline observed in GJO, which heavily relies on the global best solution for individual updates. In the late stage of iteration (after 300 generations), although the I C t values of both algorithms gradually decrease with convergence, MGJO still maintains a higher diversity level. This is attributed to the global best-based boundary handling mechanism (Section 2.2.3)—when individuals exceed the search boundaries, they are guided toward regions near the global optimum rather than simply being reset. This approach preserves the search direction of individuals while injecting new exploratory vitality into the population.
For instance, in the CEC2017-F8 function, after 500 iterations, the I C t value of MGJO is approximately 900, whereas that of GJO is only 600, a difference of 33.3%. In the CEC2017-F29 function, MGJO maintains I C t above 3700 throughout, while GJO drops below 3000 in the later iterations. These results fully demonstrate that MGJO, through its three key improvements, significantly enhances population diversity and effectively mitigates the premature convergence problem common in traditional GJO, providing crucial support for efficient exploration of the search space and discovery of global optima.

3.2.2. Examination of Global Search and Local Refinement Dynamics

Within heuristic optimization frameworks, global search and local intensification represent two fundamental and complementary phases. Global search encompasses extensive investigation throughout the solution domain to identify promising regions potentially containing optimal solutions, thereby ensuring the algorithm does not prematurely converge to suboptimal areas. Conversely, local intensification entails concentrated examination and refinement of previously discovered high-potential solutions, progressively enhancing solution quality through meticulous neighborhood investigation. This latter phase strategically utilizes accumulated knowledge to systematically exploit the most promising search regions.
Overemphasis on exploration may result in inefficient allocation of computational resources, as the algorithm scans widely without making concentrated improvements, potentially missing opportunities to refine solutions in specific zones. Conversely, excessive exploitation can cause the algorithm to converge prematurely to a local optimum, limiting its capacity to identify better solutions elsewhere in the search domain [53]. Hence, achieving an appropriate balance between these two processes is vital for the algorithm’s performance. In this section, we examine the exploration and exploitation properties of the MGJO algorithm, which are quantitatively evaluated using Equations (16) and (17) [12].
Exploration\% = \frac{Div(t)}{Div_{max}} \times 100\%
Exploitation\% = \frac{\left| Div(t) - Div_{max} \right|}{Div_{max}} \times 100\%
where Div(t) characterizes the population dispersion at iteration t, computed through Equation (18), and Div_max is the peak dispersion observed throughout the optimization process.
Div(t) = \frac{1}{D} \sum_{d=1}^{D} \frac{1}{N} \sum_{i=1}^{N} \left| median\left( x_{d}^{t} \right) - x_{id}^{t} \right|
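Equations (16)–(18) reduce to simple array operations once the per-iteration Div(t) values are recorded; a sketch with illustrative names:

```python
def div_t(X):
    """Div(t) (Eq. 18): dimension-wise mean absolute deviation from the median.

    X is a list of N individuals, each a list of D coordinates.
    """
    N, D = len(X), len(X[0])
    total = 0.0
    for d in range(D):
        col = sorted(X[i][d] for i in range(N))
        med = col[N // 2] if N % 2 else 0.5 * (col[N // 2 - 1] + col[N // 2])
        total += sum(abs(med - X[i][d]) for i in range(N)) / N
    return total / D

def exploration_exploitation(div_history):
    """Percentages of Eqs. (16) and (17) for each recorded iteration."""
    div_max = max(div_history)
    xpl = [100.0 * dv / div_max for dv in div_history]
    xpt = [100.0 * abs(dv - div_max) / div_max for dv in div_history]
    return xpl, xpt
```

The two percentages sum to 100% at every iteration, so plotting either curve (as in Figure 4) fully describes the exploration–exploitation balance.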
Figure 4 illustrates the evolution of exploration and exploitation capabilities of MGJO on the CEC2017 test suite, focusing on 10 representative functions (e.g., F1, F3, F6). These capabilities are quantified using Equations (16)–(18), where the exploration rate reflects the algorithm’s global search ability, and the exploitation rate indicates its capacity for fine-grained search around high-quality solutions.
In the early stage of iteration (first 100 generations), MGJO’s exploration rate rapidly rises to 80–90%; for example, F1 reaches 85% at generation 50, and F3 exceeds 80%. This is attributed to the broad population distribution from the good-point-set initialization combined with perturbations from Lévy flight random vectors, enabling rapid coverage of critical regions in the search space. During the middle stage of iteration (100–300 generations), the exploration rate gradually decreases to 40–60%, while the exploitation rate increases to 50–70%. Taking F11 as an example, at generation 200, the exploration and exploitation rates stabilize around 50% and 45%, respectively. This behavior results from the dynamic adjustment of the E value, which facilitates a smooth transition from global search to localized capture, while the dual crossover strategy enhances exploitation by sharing information through horizontal crossover and optimizing dimensional correlations via vertical crossover. In the late stage of iteration (after 300 generations), the exploitation rate maintains a high level of 60–80%; for instance, F23 reaches 75% and F28 exceeds 70% by generation 500, without falling into local optima. This performance benefits from the global best-based boundary handling mechanism, which guides out-of-bound individuals toward advantageous regions, ensuring high exploitation efficiency while maintaining moderate exploration.
Overall, MGJO effectively balances exploration and exploitation, preventing resource wastage and loss of the global optimum, thereby providing robust performance for high-dimensional and complex optimization problems.

3.2.3. Impact Analysis of the Modification

To quantitatively assess both the isolated impacts and combined efficacy of the three proposed improvement mechanisms—Good-Point-Set Population Initialization (S1), Double-Crossover Strategy (S2), and Boundary Processing Mechanism Based on Global Optimization (S3)—on the baseline GJO, ablation experiments were conducted using the CEC2017 benchmark functions with dimension dim = 30. Five comparative variants were designed: GJO (standard), GJO-S1 (only S1), GJO-S2 (only S2), GJO-S3 (only S3), and MGJO, which integrates all strategies. The experimental results are presented in Figure 5 and Figure 6.
From the convergence curves in Figure 5, it is evident that all variants incorporating at least one enhancement outperform the standard GJO, with MGJO demonstrating the most pronounced advantage. For functions such as F1, F10, and F16, MGJO consistently achieves the lowest fitness values (objective function values) and the fastest convergence. After 500 iterations, MGJO attains a fitness value of only 4.1805 × 10^3 on F1, far lower than the standard GJO (1.3244 × 10^10) and single-strategy variants (e.g., GJO-S1 at 9.5699 × 10^8). On complex multimodal functions such as F22 and F27, MGJO exhibits smoother convergence curves in the later stages, without significant oscillations, whereas single-strategy variants (e.g., GJO-S3) still show fitness rebounds. This indicates that individual strategies may only improve performance at specific stages and cannot consistently optimize throughout the entire process. Furthermore, GJO-S1 demonstrates faster early-stage convergence than standard GJO (e.g., over the first 100 iterations on F30), confirming that the good-point-set initialization enhances initial exploration efficiency. GJO-S2 achieves higher mid-stage optimization accuracy (e.g., iterations 200–300 on F12), reflecting the value of information sharing via the double-crossover strategy. GJO-S3 performs more efficient late-stage boundary-region search (e.g., after 400 iterations on F29), validating the guiding role of the boundary processing mechanism.
The radar chart of average rankings in Figure 6 further quantifies the contribution of each strategy. MGJO achieves an overall average rank of only 1.20, significantly better than standard GJO (4.33) and the single-strategy variants (GJO-S1: 3.43, GJO-S2: 1.97, GJO-S3: 1.50). Among the individual strategies, S3 contributes most to ranking improvement (GJO-S3: 1.50), followed by S2 (GJO-S2: 1.97) and S1 (GJO-S1: 3.43). When integrated, the three strategies exhibit a strong synergistic effect, allowing MGJO to achieve the top ranking. This demonstrates that the three strategies effectively address GJO’s weaknesses in initialization, information exchange, and boundary handling, respectively, and complement each other: good-point-set initialization establishes a broad exploration foundation, the double-crossover strategy enhances mid-stage information utilization, and the boundary processing mechanism ensures late-stage search accuracy. Combined, they form a “full-process optimization loop,” resulting in an overall performance leap for the algorithm.
Table 3 reports the optimization performance of the standard GJO, its three enhanced variants incorporating different improvement strategies (GJO-S1, GJO-S2, GJO-S3), and the combined MGJO on the CEC2017 benchmark set (dim = 30). The mean (Ave) and standard deviation (Std) provide an intuitive overview of each algorithm’s accuracy and stability. According to the results, all single-strategy variants outperform the original GJO. Among them, GJO-S3 (global best–guided boundary handling) achieves particularly strong performance on most functions. For example, on F1, its Ave drops to 6.9047 × 10^8, a substantial reduction compared with GJO’s 1.3149 × 10^10. GJO-S2 (dual-crossover strategy) shows powerful local exploitation on complex multimodal functions such as F9, achieving an Ave of 2.9440 × 10^3, better than GJO’s 5.4780 × 10^3. GJO-S1 (good-point-set initialization) improves global exploration by optimizing the initial population distribution; for instance, on the high-dimensional F2 function, the Ave decreases from 3.7800 × 10^36 to 3.1681 × 10^33.
MGJO, which integrates all three strategies, produces a clear synergistic effect and achieves the best results across all test functions. Not only does it yield the lowest Ave values on functions such as F1 and F2 (3.7651 × 10^3 and 2.2475 × 10^14, respectively), but it also exhibits greater stability, with Std values consistently smaller than those of the single-strategy variants—for example, an Std of only 3.9619 × 10^−2 on F6. These outcomes demonstrate the complementary strengths of good-point-set initialization, the dual-crossover strategy, and the global best–guided boundary handling mechanism. Their integrated use effectively overcomes issues such as uneven initialization, insufficient information exchange, and passive boundary handling in GJO, laying a solid foundation for MGJO’s superior performance on high-dimensional and complex optimization problems.

3.3. Performance Evaluation and Discussion on CEC2017 and CEC2022 Benchmark Sets

This segment presents a rigorous performance validation of the MGJO framework utilizing the standardized CEC2017 and CEC2022 evaluation suites. To ensure comprehensive capability assessment, the experimental design incorporates multidimensional testing: CEC2017 functions at dimensional configurations of 30 and 100, complemented by CEC2022 evaluations at 10 and 20 dimensions. Computational outcomes are systematically documented in Table 4, Table 5, Table 6 and Table 7, presenting both central tendency (Mean) and variability (Standard Deviation) metrics across optimization runs, where superior results are typographically emphasized. Evolutionary characteristics are further visualized through convergence trajectory analysis in Figure 7.
In the CEC2017 benchmark suite (dim = 30, Table 4), MGJO demonstrates optimal performance on most functions. For F1, MGJO achieves an Ave value of 4.1805 × 10^3, far lower than the second-best algorithm HSO (8.3879 × 10^3) and standard GJO (1.3244 × 10^10), with a Std of 3.9175 × 10^3, only 0.54 times that of HSO, demonstrating high precision and stability. On F2, MGJO attains an Ave of 1.2005 × 10^14, which is more than 20 orders of magnitude lower than standard GJO (1.7645 × 10^35), and even compared with HSO (1.3768 × 10^17), it is still 3 orders of magnitude lower. For complex multimodal functions such as F9, F12, and F30, MGJO also maintains superiority; for instance, the Ave of F30 is 1.1595 × 10^4, only 0.03% of standard GJO (3.8595 × 10^7), with a Std of 3.9623 × 10^3, much lower than other comparison algorithms, verifying its robustness in high-dimensional complex problems.
In CEC2017 (dim = 100, Table 5), with increasing dimensionality, MGJO’s advantages become even more significant. For F1, the Ave is 2.5979 × 10^8, only 0.2% of standard GJO (1.3279 × 10^11), and one order of magnitude lower than HSO (3.2404 × 10^9). On F4, the Ave is 1.1419 × 10^3, 40.4% lower than IWOA (1.9157 × 10^3), and the Std is 8.5415 × 10^1, only 27.5% of IWOA. For functions such as F15 and F19, MGJO maintains the lowest Ave and Std values, proving its efficient optimization capability even in ultra-high-dimensional problems.
In the CEC2022 benchmark suite (dim = 10, 20, Table 6 and Table 7), MGJO still exhibits outstanding performance. For dim = 10, F3 reaches the theoretical optimum with an Ave of 6.0000 × 10^2 and a Std of 8.3378 × 10^−4, far better than comparison algorithms such as GWO (6.0165 × 10^2). For F5, Ave is 9.0166 × 10^2, lower than AGPSO (9.0181 × 10^2), and Std is 2.9296 × 10^0, only 1.5% of IWOA (1.9366 × 10^2). For dim = 20, F3 also reaches the theoretical optimum (Ave 6.0000 × 10^2), while F10 achieves an Ave of 2.5828 × 10^3, 4.2% lower than IWOA (2.6964 × 10^3), with a Std of 1.6528 × 10^2, the lowest among all comparison algorithms, demonstrating its generalization across different benchmarks.
Figure 7 shows the convergence curves of MGJO and comparison algorithms on representative functions from CEC2017 (dim = 30, 100) and CEC2022 (dim = 10, 20), visually reflecting the optimization speed and final accuracy. On CEC2017 (dim = 30), MGJO converges fastest and reaches the highest final precision. For F1, the fitness value drops below 1.0 × 10^4 within 100 iterations, while standard GJO remains above 1.0 × 10^10; after 500 iterations, MGJO outperforms GJO by more than 2900 times. For F30, after 200 iterations, the convergence curve stabilizes, with a final fitness of 1.1595 × 10^4, 87.9% lower than HSO (9.5764 × 10^4). In CEC2017 (dim = 100), for F1 and F12, MGJO shows the steepest decline in the first 50 iterations; for example, F12 reaches below 1.0 × 10^8 at iteration 50, while GJO remains at 4.7622 × 10^10. Later-stage convergence is smooth, demonstrating fast convergence without premature stagnation. On CEC2022 (dim = 10, 20), MGJO maintains advantages: for dim = 10, F3 reaches the theoretical optimum within 100 iterations and remains stable, whereas GWO requires more than 300 iterations and fails to achieve the optimum; for dim = 20, F10 reaches 2.5828 × 10^3 after 300 iterations, 36.1% lower than BPBO (4.0409 × 10^3), with stable convergence.
Furthermore, Figure 8 presents radar charts of rankings for all algorithms across CEC2017 (dim = 30, 100) and CEC2022 (dim = 10, 20), where smaller values indicate better performance. In CEC2017 (dim = 30, 100), MGJO ranks first on all functions, with minimal radar coverage concentrated at the center. For key functions such as F1, F2, and F10, MGJO ranks 1, significantly outperforming standard GJO (mostly rank 8) and other algorithms (e.g., GWO ranks 2–5, IWOA ranks 3–7). Its ranking advantage is especially pronounced for high-dimensional functions (dim = 100), where F12 and F19 remain ranked 1, while other algorithms generally rank higher (e.g., AGPSO ranks 5), confirming MGJO’s superiority in high-dimensional scenarios. In CEC2022 (dim = 10, 20), MGJO still ranks first; for F3, F5, and F10, the rank is 1, and for F2 and F4, it is 2, while standard GJO mostly ranks 7–8 and GWO ranks 3–4, further demonstrating MGJO’s consistent advantage across different benchmarks and dimensions, providing reliable support for practical applications.
In summary, the quantitative results in Table 4, Table 5, Table 6 and Table 7, together with convergence curves (Figure 7) and ranking visualizations (Figure 8), consistently show that MGJO significantly outperforms comparison algorithms in terms of optimization accuracy, stability, convergence speed, and generality, providing a solid foundation for its subsequent application in multilevel threshold image segmentation.

3.4. Stability Analysis

Statistical analysis plays a vital role in algorithm optimization, enabling researchers to evaluate and contrast the efficacy of various algorithmic approaches. This facilitates the selection of the most suitable method for specific research challenges. In this section, the performance of the MGJO algorithm is assessed using the Wilcoxon rank-sum test and the Friedman test, with detailed procedures and results presented accordingly.

3.4.1. Statistical Analysis Using Wilcoxon Rank-Sum Test

This section utilizes the Wilcoxon rank-sum test [54] to determine statistical significance in performance differentials between the proposed MGJO and comparative algorithms. This non-parametric approach does not require normal distribution assumptions, offering enhanced robustness compared to parametric alternatives like the t-test when handling non-Gaussian data or anomalous observations. The calculation of the Wilcoxon test statistic W follows the formulation presented in Equation (19).
W = \sum_{i=1}^{n_1} R\left( X_i \right)
In this procedure, each data point X i is assigned a rank R X i based on its value within the entire sample. The final test statistic U is calculated according to the expression given in Equation (20).
U = W - \frac{n_1 \left( n_1 + 1 \right)}{2}
With sufficiently large samples, the distribution of U approaches normality, with mean and variance given by Equations (21) and (22), respectively.
\mu_U = \frac{n_1 n_2}{2}
\sigma_U = \sqrt{\frac{n_1 n_2 \left( n_1 + n_2 + 1 \right)}{12}}
and the standardized statistic Z is calculated by Equation (23).
Z = \frac{U - \mu_U}{\sigma_U}
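Equations (19)–(23) assemble into the normal-approximation form of the rank-sum test. A self-contained sketch (ties receive average ranks; no continuity correction is applied):

```python
import math

def wilcoxon_ranksum_z(sample1, sample2):
    """Rank-sum W, Mann-Whitney U, and standardized Z per Eqs. (19)-(23)."""
    n1, n2 = len(sample1), len(sample2)
    pooled = sorted(list(sample1) + list(sample2))
    # Average rank for each distinct value (handles ties)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2.0   # mean of ranks i+1 .. j
        i = j
    W = sum(ranks[v] for v in sample1)          # Eq. (19)
    U = W - n1 * (n1 + 1) / 2.0                 # Eq. (20)
    mu_U = n1 * n2 / 2.0                        # Eq. (21)
    sigma_U = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # Eq. (22)
    Z = (U - mu_U) / sigma_U                    # Eq. (23)
    return W, U, Z
```

At the 0.05 significance level used in Table 8, |Z| > 1.96 would indicate a significant difference between the two result samples.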
Table 8 presents the Wilcoxon rank-sum test results (significance level 0.05) for MGJO compared with seven benchmark algorithms on the CEC2017 (dim = 30, 100) and CEC2022 (dim = 10, 20) test suites. The performance differences are indicated using “+/=/−” (“+” means MGJO performs better, “−” means worse). On CEC2017 dim = 30, MGJO achieves “+”/“−” values of 29/1 against DBO and BPBO, and 28/2 against GJO. For dim = 100, MGJO attains “+”/“−” values of 29/1 against DBO, BPBO, and GJO, demonstrating an even more pronounced advantage. On CEC2022 dim = 10, MGJO records “+”/“−” values of 11/1 against GWO, HSO, and GJO. For dim = 20, it achieves “+”/“−” values of 12/0 against IWOA, DBO, and GJO, achieving complete dominance. Overall, MGJO significantly outperforms the comparison algorithms in most scenarios, with strong statistical reliability.

3.4.2. Friedman Average Ranking Assessment

This segment employs the Friedman examination [55] to establish comprehensive performance hierarchies among all evaluated methodologies. Operating as a non-parametric technique, this procedure effectively compares central tendency disparities across multiple correlated samples. Particularly advantageous for blocked experimental designs, it serves as a powerful substitute for conventional ANOVA when normality prerequisites remain unfulfilled. The computation of the Friedman metric follows the formulation specified in Equation (24) [55].
Q = \frac{12}{n k \left( k + 1 \right)} \sum_{j=1}^{k} R_j^2 - 3 n \left( k + 1 \right)
where n is the number of blocks, k is the number of groups, and R_j is the rank sum of the j-th group. When n and k are large, Q approximately follows a χ² distribution with k − 1 degrees of freedom.
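The Friedman statistic of Equation (24) can be computed directly from the per-problem scores of the compared algorithms; a sketch in which rows are blocks (benchmark functions) and columns are groups (algorithms), with lower scores ranked better:

```python
def friedman_q(blocks):
    """Friedman statistic Q (Eq. 24).

    blocks: list of n rows (one per block/problem), each containing k
    group scores; lower score = better rank. Ties get average ranks.
    """
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for row in blocks:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j < k and row[order[j]] == row[order[i]]:
                j += 1
            avg = (i + 1 + j) / 2.0             # average rank of tie group
            for m in range(i, j):
                ranks[order[m]] = avg
            i = j
        for j in range(k):
            rank_sums[j] += ranks[j]
    return 12.0 * sum(r * r for r in rank_sums) / (n * k * (k + 1)) \
        - 3.0 * n * (k + 1)
```

The mean ranks reported in Table 9 are simply rank_sums[j] / n for each algorithm j.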
Table 9 presents the Friedman mean rank test results for MGJO and seven comparison algorithms on the CEC2017 (dim = 30, 100) and CEC2022 (dim = 10, 20) benchmark suites. The overall performance of each algorithm is quantified by the mean rank ( M . R ) and final rank ( T . R ), with smaller values indicating better performance. Across all test scenarios, MGJO achieves the lowest mean ranks and consistently secures the first final rank. Specifically, for CEC2017 dim = 30, MGJO attains an M . R of 1.90, far lower than the second-best AGPSO (3.10) and standard GJO (6.50). For dim = 100, MGJO’s M . R is 1.77, still leading GWO (3.47), while standard GJO has an M . R of 6.40 and a T . R of 7. On CEC2022 dim = 10, MGJO achieves an M . R of 2.42, outperforming AGPSO (2.50), with standard GJO at 6.75 and T . R 8. For dim = 20, MGJO attains an M.R of 2.00, significantly ahead of other algorithms, while standard GJO has an M . R of 5.92 and T . R 7. These results, from the perspective of overall ranking, confirm that MGJO’s comprehensive optimization performance is significantly superior to comparison algorithms across different test suites and dimensionalities, further validating its effectiveness.

4. MGJO for Multilevel Thresholding Art Image Segmentation

In image segmentation through thresholding, Otsu’s approach and Kapur’s entropy criterion represent two widely adopted techniques for selecting optimal separation values. When a specific threshold value TH is applied, the inter-class variance between foreground and background regions varies accordingly. The ideal threshold ( T b e s t ) is identified as the value that maximizes this inter-class separation. Alternatively, Kapur’s method calculates the combined entropy of foreground and background pixel distributions, seeking the threshold that optimizes this entropy measure. Both techniques process image pixel data as input and derive their respective optimal thresholds through distinct computational frameworks [54,55,56]. For the experimental phase of this research, Otsu’s methodology has been implemented for threshold selection.
Otsu’s technique establishes an optimal segmentation threshold by maximizing inter-class separation. Consider a digital image I possessing L distinct intensity values, where each tonal value i contains n i pixels. The aggregate pixel population is consequently determined as:
N = \sum_{i=0}^{L-1} n_i
and the probability of a pixel having gray level i is:
P_i = \frac{n_i}{N}, \quad i = 0, 1, \ldots, L-1
where P_i ≥ 0 and P_0 + P_1 + ⋯ + P_{L−1} = 1.
Suppose the number of thresholds is k, and consider first the single-threshold case. If a gray level t is selected as a threshold, it divides the image into two regions: pixels with gray levels in [0, t] form the object, and pixels with gray levels in [t + 1, L − 1] form the background. Let the proportion of object pixels be ω₀ with mean gray level μ₀, and the proportion of background pixels be ω₁ with mean gray level μ₁. Denote the overall mean gray level of the image by μ and the inter-class variance by v. Then ω₀, μ₀, ω₁, μ₁, μ, and v are calculated as follows [56,57,58]:
\omega_0 = \sum_{i=0}^{t} P_i , \qquad \mu_0 = \frac{1}{\omega_0} \sum_{i=0}^{t} i P_i , \qquad \omega_1 = \sum_{i=t+1}^{L-1} P_i , \qquad \mu_1 = \frac{1}{\omega_1} \sum_{i=t+1}^{L-1} i P_i , \qquad \mu = \sum_{i=0}^{L-1} i P_i ,
v(t) = \omega_0 \left( \mu_0 - \mu \right)^2 + \omega_1 \left( \mu_1 - \mu \right)^2 = \omega_0 \omega_1 \left( \mu_0 - \mu_1 \right)^2
For multi-threshold segmentation, the inter-class variance is extended as:
v(t_1, t_2, \ldots, t_k) = \omega_0 \omega_1 (\mu_0 - \mu_1)^2 + \omega_0 \omega_2 (\mu_0 - \mu_2)^2 + \cdots + \omega_0 \omega_k (\mu_0 - \mu_k)^2 + \omega_1 \omega_2 (\mu_1 - \mu_2)^2 + \omega_1 \omega_3 (\mu_1 - \mu_3)^2 + \cdots + \omega_{k-1} \omega_k (\mu_{k-1} - \mu_k)^2
where ω i and μ i are calculated as:
\omega_{i-1} = \sum_{j = t_{i-1}+1}^{t_i} P_j , \qquad \mu_{i-1} = \frac{1}{\omega_{i-1}} \sum_{j = t_{i-1}+1}^{t_i} j P_j , \qquad 1 \le i \le k+1 ,
with the conventions t_0 = −1 and t_{k+1} = L − 1.
The optimal thresholds T b e s t are determined by:
T_{best} = \arg\max_{0 \le t_1 < t_2 < \cdots < t_k \le L-1} v(t_1, t_2, \ldots, t_k)
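The multi-threshold objective above can be computed from the gray-level histogram using the equivalent class-variance form Σᵢ ωᵢ(μᵢ − μ)², which equals the pairwise sum when the ωᵢ sum to 1. A sketch, together with a brute-force version of the arg max that is feasible only for tiny L and k (in practice, MGJO searches the threshold space instead of enumerating it):

```python
from itertools import combinations

def between_class_variance(P, thresholds):
    """Otsu objective v(t1,...,tk) for gray-level probabilities P."""
    L = len(P)
    bounds = [-1] + sorted(thresholds) + [L - 1]   # t_0 = -1, t_{k+1} = L-1
    mu_total = sum(i * p for i, p in enumerate(P))
    v = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(P[lo + 1:hi + 1])                  # omega_i: class mass
        if w == 0.0:
            continue                               # empty class contributes 0
        mu = sum(i * P[i] for i in range(lo + 1, hi + 1)) / w
        v += w * (mu - mu_total) ** 2
    return v

def exhaustive_best_thresholds(P, k):
    """T_best by exhaustive search -- illustrative only, O(L choose k)."""
    return max(combinations(range(len(P) - 1), k),
               key=lambda ths: between_class_variance(P, list(ths)))
```

In the experiments below, between_class_variance plays the role of the fitness function that each swarm algorithm maximizes over candidate threshold vectors.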
In this study, the performance of the MGJO algorithm in multilevel thresholding is evaluated through comparative experiments. Seven improved swarm intelligence algorithms are selected as benchmarks. All algorithms are set with a population size of p o p = 30 and a maximum iteration number of T = 20, using Otsu’s method as the objective function. Experiments are conducted on eight benchmark images of different styles with threshold numbers of 4, 6, 8, and 10, as shown in Figure 9. Each experiment is independently run 30 times, and the mean (Ave) and standard deviation (Std) of the objective function values are reported to assess algorithm accuracy and stability. All algorithm parameters are kept consistent with previous sections to ensure a fair comparison.

4.1. Evaluation Index

Standard quantitative measures for assessing segmentation efficacy comprise Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Feature Similarity Index (FSIM). Elevated PSNR and SSIM readings correspond to reduced distortion and enhanced visual fidelity in processed images, while superior FSIM scores reflect diminished feature-level discrepancies. The computational formulations for these metrics are specified in references [59,60] as follows:
The PSNR is calculated as:
PSNR = 10 \log_{10} \left( \frac{255^2}{MSE} \right)
where the Mean Square Error (MSE) quantifies the cumulative squared deviation between the source image I and its processed counterpart I′. The MSE calculation is defined as:
MSE = \frac{1}{M N} \sum_{j=1}^{M} \sum_{k=1}^{N} \left[ I(j, k) - I'(j, k) \right]^2
where (M × N) represents the image dimensions, I and I′ denote the original and segmented images, respectively, I(j, k) is the gray value of the original image at pixel (j, k), and I′(j, k) is the gray value of the segmented image at the same pixel.
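The PSNR and MSE definitions translate directly into code; a minimal sketch for 8-bit grayscale images stored as nested lists:

```python
import math

def mse(img1, img2):
    """Mean squared error between two equally sized grayscale images."""
    M, N = len(img1), len(img1[0])
    return sum((img1[j][k] - img2[j][k]) ** 2
               for j in range(M) for k in range(N)) / (M * N)

def psnr(img1, img2):
    """Peak signal-to-noise ratio in dB for 8-bit images (peak value 255)."""
    err = mse(img1, img2)
    return float("inf") if err == 0.0 else 10.0 * math.log10(255.0 ** 2 / err)
```

Identical images give MSE = 0 and an unbounded PSNR, which is why PSNR is reported only for lossy (segmented) outputs.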
SSIM measures the structural similarity between two images by comparing their luminance, contrast, and structural information. Given two images I and I′, SSIM is calculated as:
SSIM(I, I') = \frac{\left( 2 \mu_I \mu_{I'} + C_1 \right) \left( 2 \sigma_{I I'} + C_2 \right)}{\left( \mu_I^2 + \mu_{I'}^2 + C_1 \right) \left( \sigma_I^2 + \sigma_{I'}^2 + C_2 \right)}
where μ_I and μ_{I′} are the mean gray values of I and I′, σ_{II′} is the covariance between I and I′, and σ_I² and σ_{I′}² are the variances of I and I′, respectively. Typically C_1 = (K_1 L)², C_2 = (K_2 L)², with K_1 = 0.01, K_2 = 0.03, and L the maximum gray level.
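A global (single-window) reading of the SSIM formula can be sketched as follows; practical implementations usually evaluate it over local windows and average the results, so this simplification is illustrative only:

```python
def ssim_global(img1, img2, L=255, K1=0.01, K2=0.03):
    """Global SSIM over whole images (no local windowing).

    Uses C1 = (K1*L)^2 and C2 = (K2*L)^2 as stabilizing constants.
    """
    x = [v for row in img1 for v in row]
    y = [v for row in img2 for v in row]
    n = len(x)
    mu_x, mu_y = sum(x) / n, sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    return ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```

SSIM equals 1 only for identical images and decreases as luminance, contrast, or structure diverge.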
Human visual perception is sensitive to points with high phase consistency; therefore, FSIM uses phase congruency as the primary feature to assess image quality. FSIM is computed as:
FSIM(I, I') = \frac{\sum_{x \in \Omega} S_L(x) \, PC_m(x)}{\sum_{x \in \Omega} PC_m(x)}
where x denotes a pixel, Ω represents the entire image domain, and PC_m(x) = max(PC_1(x), PC_2(x)), with PC_1(x) and PC_2(x) the phase congruency values of the original and segmented images, respectively.

4.2. Evaluation of Otsu Thresholding Performance Using MGJO Framework

The proposed MGJO framework was utilized to identify optimal segmentation thresholds through Otsu’s criterion as the fitness function for multi-level image thresholding across eight test images. Algorithm efficacy is quantified through the maximized fitness score alongside PSNR, FSIM, and SSIM metrics, where superior values across these indicators correspond to enhanced segmentation quality. Experimental outcomes confirm that the developed MGJO technique consistently obtains superior performance across all evaluation criteria—fitness function values, PSNR, FSIM, and SSIM—when benchmarked against established optimization methods.
Table 10 displays the spatial distribution of optimized thresholds across image histograms. Table 11 summarizes the central tendency and variability of optimal fitness values achieved by all compared methods using Otsu’s criterion, accompanied by Friedman ranking outcomes. Table 12, Table 13 and Table 14 systematically document the performance statistics (mean and standard deviation) for PSNR, FSIM, and SSIM metrics, respectively, under Otsu-based optimization, with integrated Friedman ranking comparisons. Experiments were conducted for threshold numbers of 4, 6, 8, and 10. Figure 10, Figure 11, Figure 12 and Figure 13 visually illustrate the comprehensive performance of each algorithm using Friedman average rankings, collectively confirming the significant superiority of MGJO in multilevel image segmentation tasks.
From the optimal fitness values reported in Table 11, MGJO demonstrates the highest search accuracy and strongest stability across all image and threshold combinations. For instance, under a low threshold (TH = 4) on the Camera image, MGJO achieves an Ave value of 4.6001 × 10^3, slightly higher than the standard GJO’s 4.5950 × 10^3, while its Std value of 1.0296 × 10^0 is only 31.7% of GJO’s, indicating much smaller variability across multiple runs. Under a high threshold (TH = 10) on the Lena image, MGJO attains an Ave value of 3.8118 × 10^3, surpassing GWO’s 3.8035 × 10^3, with a Std value of 1.9341 × 10^0, lower than AGPSO’s 3.2966 × 10^0. For a complex-texture image like Baboon (TH = 8), MGJO achieves an Ave of 3.3992 × 10^3, higher than HSO’s 3.3671 × 10^3, and a Std of 1.1804 × 10^0, only 10.6% of HSO’s. These results confirm that MGJO consistently attains higher Ave values (closer to the optimal Otsu objective function) and lower Std values (smaller fluctuations over repeated runs), demonstrating superior accuracy and stability in identifying optimal segmentation thresholds.
The image quality evaluation metrics reported in Table 12, Table 13 and Table 14 further highlight MGJO's advantages. In terms of PSNR (higher values indicate lower distortion), for the Face image at TH = 10, MGJO reaches a PSNR of 26.4192, exceeding GJO's 24.9027, with the lowest Std of 0.2050 among all algorithms. For FSIM (higher values indicate greater feature similarity to the original image), for the Hunter image at TH = 10, MGJO achieves 0.9529, higher than GJO's 0.9377, with a Std of 0.0024, substantially lower than other algorithms. For SSIM (higher values indicate greater structural similarity to the original image), for the Saturn image at TH = 10, MGJO attains 0.9267, outperforming GJO's 0.9119, with a Std of 0.0025, only 2.7% of GJO's. These results indicate that MGJO not only produces segmented images with low distortion but also preserves the original features and structural information more effectively.
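These quality metrics are standard. As an illustration, PSNR and a simplified single-window SSIM can be sketched as follows; note that reference SSIM implementations average over local (e.g., 11 × 11 Gaussian) windows, so this global variant is only an approximation, and the function names are hypothetical:

```python
import numpy as np

def psnr(ref, seg, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a segmented image."""
    mse = np.mean((ref.astype(float) - seg.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, seg, peak=255.0):
    """SSIM computed over a single global window (approximation of windowed SSIM)."""
    x, y = ref.astype(float), seg.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2   # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical images, PSNR is infinite and SSIM equals 1; both decrease as the segmented image diverges from the original.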
Figure 10, Figure 11, Figure 12 and Figure 13, showing the Friedman average rankings, provide a comprehensive validation of MGJO’s superiority. In Figure 10 (fitness ranking), MGJO consistently ranks first across all thresholds, far ahead of GJO, which mostly ranks 5–6. In Figure 11 (PSNR ranking), MGJO maintains the top position with an average rank of 1.54, while the next best, GWO, only achieves 2.82. In Figure 12 (FSIM ranking) and Figure 13 (SSIM ranking), MGJO achieves average ranks of 1.93 and 2.65, respectively, consistently outperforming all other algorithms. Notably, as the threshold number increases (TH = 8, 10), MGJO’s ranking advantage is not diminished; rather, the synergistic effects of the double-crossover strategy and the global-optimum-based boundary handling mechanism make its superiority even more pronounced under high-threshold conditions. These findings further confirm MGJO’s overall excellence in multilevel image segmentation tasks, providing strong support for its application in practical image segmentation scenarios.
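The Friedman average ranks shown in Figures 10–13 are obtained by ranking all algorithms on each image–threshold case and averaging each algorithm's rank over the cases. A small sketch (hypothetical helper; it assumes higher scores are better, as for fitness, PSNR, FSIM, and SSIM):

```python
import numpy as np

def friedman_avg_ranks(scores, higher_is_better=True):
    """scores: (n_cases, n_algorithms) array of metric values.

    Returns the average rank of each algorithm over all cases,
    where rank 1 is the best on a case and ties receive averaged ranks.
    """
    s = np.asarray(scores, dtype=float)
    if higher_is_better:
        s = -s                       # so that smaller = better for ranking
    ranks = np.empty_like(s)
    for i, row in enumerate(s):
        order = row.argsort()        # ascending = best first
        r = np.empty(len(row))
        r[order] = np.arange(1, len(row) + 1)
        for v in np.unique(row):     # average ranks over tied values
            mask = row == v
            r[mask] = r[mask].mean()
        ranks[i] = r
    return ranks.mean(axis=0)
```

An algorithm that dominates across all cases, as MGJO does for fitness, would obtain an average rank close to 1.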

5. Summary and Prospect

This study addresses several limitations of the standard Golden Jackal Optimization (GJO) algorithm, including uneven population initialization, insufficient information exchange among individuals, and passive boundary handling, by proposing an improved GJO algorithm (MGJO) and applying it to multilevel image thresholding tasks. At the algorithmic level, MGJO employs a good-point-set population initialization strategy, which enables a wider and more uniform distribution of the initial population within the search space, overcoming the limited exploration range of traditional random initialization. A double-crossover strategy, integrating horizontal and vertical crossovers, not only promotes information sharing among individuals within the population but also achieves fine-grained, dimension-level search, enhancing optimization accuracy in high-dimensional problems. Meanwhile, the global-optimum-based boundary handling mechanism replaces the traditional “resetting-boundary” passive approach, preserving the search direction information of individuals while improving search efficiency near boundary regions.
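Two of these three strategies can be illustrated compactly. The sketch below shows one common good-point-set construction (directions r_j = 2cos(2πj/p) for a prime p ≥ 2·dim + 3) and a plausible reading of the global-optimum-based boundary rule, in which out-of-range components are resampled between the violated bound and the current global best rather than clamped to the bound; the paper's exact formulas may differ:

```python
import math
import numpy as np

def good_point_set(n, dim, lb, ub):
    """Good-point-set initialization (one common construction).

    Produces n points that fill [lb, ub]^dim more evenly than
    uniform random sampling; the paper's variant may differ.
    """
    p = 2 * dim + 3
    while any(p % k == 0 for k in range(2, math.isqrt(p) + 1)):
        p += 1                                    # smallest prime >= 2*dim + 3
    j = np.arange(1, dim + 1)
    r = 2.0 * np.cos(2.0 * np.pi * j / p)         # good-point directions
    i = np.arange(1, n + 1).reshape(-1, 1)
    frac = (i * r) % 1.0                          # fractional parts in [0, 1)
    return lb + frac * (ub - lb)

def clamp_toward_best(x, best, lb, ub, rng):
    """Global-optimum-based boundary handling (sketch): components leaving
    [lb, ub] are resampled between the violated bound and the global best,
    preserving search-direction information instead of resetting to the bound."""
    x = x.copy()
    low, high = x < lb, x > ub
    u = rng.random(x.shape)
    x[low] = lb + u[low] * (best[low] - lb)
    x[high] = ub - u[high] * (ub - best[high])
    return x
```

The initialization spreads the population before the first iteration, while the boundary rule is applied after each position update.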
Comparative experiments on the CEC2017 (dimensions 30 and 100) and CEC2022 (dimensions 10 and 20) test suites against seven mainstream optimization algorithms demonstrate that MGJO excels in optimization accuracy, stability, and exploration–exploitation balance. Statistical analyses using Wilcoxon rank-sum tests and Friedman mean rank tests further confirm the reliability of its performance. In the application of multilevel image thresholding, with Otsu’s method as the objective function, MGJO achieves higher fitness values in 4-, 6-, 8-, and 10-level thresholding tasks on eight benchmark artistic images. The segmented images exhibit superior quality in terms of PSNR, FSIM, and SSIM metrics compared with other algorithms, and MGJO consistently ranks first in Friedman average rankings, fully demonstrating its practical effectiveness in image segmentation tasks.
Despite the outstanding performance of MGJO in numerical optimization and image segmentation, there remains room for further improvement and extension. Future research may proceed in several directions. In terms of algorithmic performance, the current MGJO parameters are mostly fixed or linearly varied; adaptive parameter adjustment mechanisms can be designed using reinforcement learning or chaos theory to adjust parameters in real time according to the characteristics of the optimization problem. MGJO can also be extended to multi-objective optimization to address complex scenarios with multiple conflicting objectives. In terms of application scenarios, the algorithm can be tailored to medical image segmentation by optimizing objective functions for images with uneven grayscale distribution or high noise, supporting clinical diagnosis. Temporal optimization strategies can also be developed to leverage correlations between video frames, enabling efficient video sequence segmentation. Furthermore, MGJO can be integrated with deep learning models to optimize network hyperparameters or initial weights, forming a hybrid framework. Lightweight adaptations and hardware acceleration techniques may be applied for deployment on embedded devices. From a theoretical perspective, establishing a convergence analysis framework would provide rigorous support for algorithmic improvements and further enhance the value of MGJO in engineering optimization problems.

Author Contributions

Conceptualization, X.Z. and Z.B.; methodology, X.Z. and Z.B.; software, X.Z. and Z.B.; validation, X.L. and J.W.; formal analysis, X.L. and J.W.; investigation, X.L. and J.W.; resources, X.Z. and Z.B.; data curation, X.Z. and Z.B.; writing—original draft preparation, X.Z. and Z.B.; writing—review and editing, X.Z. and Z.B.; visualization, X.Z. and Z.B.; supervision, X.Z. and Z.B.; funding acquisition, X.Z. and Z.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All data in this paper are included in the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhao, S.; Meng, F.; Cai, L.; Yang, R. Boomerang aerodynamic ellipse optimizer: A human game-inspired optimization technique for numerical optimization and multilevel thresholding image segmentation. Math. Comput. Simul. 2025, 238, 604–636. [Google Scholar] [CrossRef]
  2. Wang, X.; Snášel, V.; Mirjalili, S.; Pan, J.-S. MAAPO: An innovative membrane algorithm based on artificial protozoa optimizer for multilevel threshold image segmentation. Artif. Intell. Rev. 2025, 58, 324. [Google Scholar] [CrossRef]
  3. Rao, H.; Jia, H.; Zhang, X.; Abualigah, L. Hybrid Adaptive Crayfish Optimization with Differential Evolution for Color Multi-Threshold Image Segmentation. Biomimetics 2025, 10, 218. [Google Scholar] [CrossRef]
  4. Huo, Y.; Gang, S.; Guan, C. FCIHMRT: Feature Cross-Layer Interaction Hybrid Method Based on Res2Net and Transformer for Remote Sensing Scene Classification. Electronics 2023, 12, 4362. [Google Scholar] [CrossRef]
  5. Tamilarasan, A.; Rajamani, D. Towards efficient image segmentation: A fuzzy entropy-based approach using the snake optimizer algorithm. Results Eng. 2025, 26, 105335. [Google Scholar] [CrossRef]
  6. Ramos-Frutos, J.; Oliva, D.; Miguel-Andrés, I.; Casas-Ordaz, A.; Ramos-Soto, O.; Aranguren, I.; Zapotecas-Martínez, S. Multi-population estimation of distribution algorithm for multilevel thresholding in image segmentation. Neurocomputing 2025, 641, 130325. [Google Scholar] [CrossRef]
  7. Qiao, Z.; Wu, L.; Heidari, A.A.; Zhao, X.; Chen, H. An enhanced tree-seed algorithm for global optimization and neural architecture search optimization in medical image segmentation. Biomed. Signal Process. Control 2025, 104, 107457. [Google Scholar] [CrossRef]
  8. Guo, H.; Wang, J.G.; Liu, Y. Multi-threshold image segmentation algorithm based on Aquila optimization. Vis. Comput. 2023, 40, 2905–2932. [Google Scholar] [CrossRef]
  9. Zheng, J.; Gao, Y.; Zhang, H.; Lei, Y.; Zhang, J. OTSU Multi-Threshold Image Segmentation Based on Improved Particle Swarm Algorithm. Appl. Sci. 2022, 12, 11514. [Google Scholar] [CrossRef]
  10. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  11. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  12. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowl.-Based Syst. 2024, 284, 111257. [Google Scholar]
  13. Dorigo, M.; Birattari, M.; Stützle, T. Ant Colony Optimization. Comput. Intell. Mag. IEEE 2006, 1, 28–39. [Google Scholar]
  14. Yang, X.-S.; He, X. Bat algorithm: Literature review and applications. Int. J. Bio-Inspired Comput. 2013, 5, 141–149. [Google Scholar] [CrossRef]
  15. Wang, R.-B.; Hu, R.-B.; Geng, F.-D.; Xu, L.; Chu, S.-C.; Pan, J.-S.; Meng, Z.-Y.; Mirjalili, S. The Animated Oat Optimization Algorithm: A nature-inspired metaheuristic for engineering optimization and a case study on Wireless Sensor Networks. Knowl.-Based Syst. 2025, 318, 113589. [Google Scholar]
  16. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl. Based Syst. 2022, 242, 108320. [Google Scholar]
  17. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  18. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef]
  19. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  20. Hayyolalam, V.; Kazem, A.A.P. Black widow optimization algorithm: A novel meta-heuristic approach for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103249. [Google Scholar] [CrossRef]
  21. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2022, 79, 7305–7336. [Google Scholar] [CrossRef]
  22. Mohammadi-Balani, A.; Nayeri, M.D.; Azar, A.; Taghizadeh-Yazdi, M. Golden eagle optimizer: A nature-inspired metaheuristic algorithm. Comput. Ind. Eng. 2021, 152, 107050. [Google Scholar] [CrossRef]
  23. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  24. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  25. Wang, L.; Cao, Q.; Zhang, Z.; Mirjalili, S.; Zhao, W. Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2022, 114, 105082. [Google Scholar] [CrossRef]
  26. Aziz, M.A.E.; Ewees, A.A.; Hassanien, A.E. Whale Optimization Algorithm and Moth-Flame Optimization for multilevel thresholding image segmentation. Expert Syst. Appl. 2017, 83, 242–256. [Google Scholar] [CrossRef]
  27. Mookiah, S.; Parasuraman, K.; Chandar, S.K. Color image segmentation based on improved sine cosine optimization algorithm. Soft Comput. 2022, 26, 13193–13203. [Google Scholar] [CrossRef]
  28. Aranguren, I.; Valdivia, A.; Morales-Castañeda, B.; Oliva, D.; Elaziz, M.A.; Perez-Cisneros, M. Improving the segmentation of magnetic resonance brain images using the LSHADE optimization algorithm. Biomed. Signal Process. Control 2021, 64, 102259. [Google Scholar] [CrossRef]
  29. Dhal, K.G.; Das, A.; Ray, S.; Gálvez, J.; Das, S. Nature-Inspired Optimization Algorithms and Their Application in Multi-Thresholding Image Segmentation. Arch. Comput. Methods Eng. 2019, 27, 855–888. [Google Scholar] [CrossRef]
  30. Jiang, Z.; Zou, F.; Chen, D.; Cao, S.; Liu, H.; Guo, W. An ensemble multi-swarm teaching–learning-based optimization algorithm for function optimization and image segmentation. Appl. Soft Comput. 2022, 130, 109653. [Google Scholar] [CrossRef]
  31. Elaziz, M.A.; Al-qaness, M.A.A.; Al-Betar, M.A.; Ewees, A.A. Polyp image segmentation based on improved planet optimization algorithm using reptile search algorithm. Neural Comput. Appl. 2025, 37, 6327–6349. [Google Scholar] [CrossRef]
  32. Premalatha, R.; Dhanalakshmi, P. An innovative segmentation algorithm based on enhanced fuzzy optimization of skin cancer images. Multimed. Tools Appl. 2025, 84, 40425–40448. [Google Scholar] [CrossRef]
  33. Abdel-Salam, M.; Houssein, E.H.; Emam, M.M.; Samee, N.A.; Gharehchopogh, F.S.; Bacanin, N. EATHOA: Elite-evolved hiking algorithm for global optimization and precise multi-thresholding image segmentation in intracerebral hemorrhage images. Comput. Biol. Med. 2025, 196, 110835. [Google Scholar] [CrossRef]
  34. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  35. Yang, W.; Lai, T.; Fang, Y. Multi-Strategy Golden Jackal Optimization for engineering design. J. Supercomput. 2025, 81, 1–60. [Google Scholar] [CrossRef]
  36. Mohapatra, S.; Mohapatra, P. Fast random opposition-based learning Golden Jackal Optimization algorithm. Knowl. Based Syst. 2023, 275, 110679. [Google Scholar] [CrossRef]
  37. Hu, G.; Chen, L.; Wei, G. Enhanced golden jackal optimizer-based shape optimization of complex CSGC-Ball surfaces. Artif. Intell. Rev. 2023, 56, 2407–2475. [Google Scholar] [CrossRef]
  38. Devi, R.M.; Premkumar, M.; Kiruthiga, G.; Sowmya, R. IGJO: An Improved Golden Jackel Optimization Algorithm Using Local Escaping Operator for Feature Selection Problems. Neural Process. Lett. 2023, 55, 6443–6531. [Google Scholar] [CrossRef]
  39. Bai, J.; Khatir, S.; Abualigah, L.; Wahab, M.A. Ameliorated Golden jackal optimization (AGJO) with enhanced movement and multi-angle position updating strategy for solving engineering problems. Adv. Eng. Softw. 2024, 194, 103665. [Google Scholar] [CrossRef]
  40. Meng, X.; Tan, L.; Wang, Y. An efficient hybrid differential evolution-golden jackal optimization algorithm for multilevel thresholding image segmentation. PeerJ Comput. Sci. 2024, 10, e2121. [Google Scholar] [CrossRef]
  41. Mai, X.; Zhong, Y.; Li, L. The Crossover strategy integrated Secretary Bird Optimization Algorithm and its application in engineering design problems. Electron. Res. Arch. 2025, 33, 471–512. [Google Scholar] [CrossRef]
  42. Fu, Y.; Liu, D.; Fu, S.; Chen, J. Enhanced aquila optimizer based on tent chaotic mapping and new rules. Sci. Rep. 2024, 14, 3013. [Google Scholar] [CrossRef] [PubMed]
  43. Huang, H.; Wu, R.; Huang, H.; Wei, J.; Han, Z.; Wen, L.; Yuan, Y. Multi-strategy improved artificial rabbit optimization algorithm based on fusion centroid and elite guidance mechanisms. Comput. Methods Appl. Mech. Eng. 2024, 425, 116915. [Google Scholar] [CrossRef]
  44. Xie, J.; He, J.; Gao, Z.; Wang, S.; Liu, J.; Fan, H.J.H. An enhanced snow ablation optimizer for UAV swarm path planning and engineering design problems. Heliyon 2024, 10, e37819. [Google Scholar] [CrossRef]
  45. Wu, G.; Mallipeddi, R.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization. Nanyang Technol. Univ. Singap. Tech. Rep. 2016, 1–18. [Google Scholar]
  46. Luo, W.; Lin, X.; Li, C.; Yang, S.; Shi, Y. Benchmark functions for CEC 2022 competition on seeking multiple optima in dynamic environments. arXiv 2022, arXiv:2201.00523. [Google Scholar] [CrossRef]
  47. Huang, S.; Liu, D.; Fu, Y.; Chen, J.; He, L.; Yan, J.; Yang, D. Prediction of Self-Care Behaviors in Patients Using High-Density Surface Electromyography Signals and an Improved Whale Optimization Algorithm-Based LSTM Model. J. Bionic Eng. 2025, 22, 1963–1984. [Google Scholar] [CrossRef]
  48. Mirjalili, S.; Lewis, A.; Sadiq, A.S. Autonomous particles groups for particle swarm optimization. Arab. J. Sci. Eng. 2014, 39, 4683–4697. [Google Scholar] [CrossRef]
  49. Akbari, E.; Rahimnejad, A.; Gadsden, S.A. Holistic swarm optimization: A novel metaphor-less algorithm guided by whole population information for addressing exploration-exploitation dilemma. Comput. Methods Appl. Mech. Eng. 2025, 445, 118208. [Google Scholar] [CrossRef]
  50. Ghasemi, M.; Akbari, M.A.; Zare, M.; Mirjalili, S.; Deriche, M.; Abualigah, L. Birds of prey-based optimization (BPBO): A metaheuristic algorithm for optimization. Evol. Intell. 2025, 18, 88. [Google Scholar] [CrossRef]
  51. Ou, Y.; Qin, F.; Zhou, K.-Q.; Yin, P.-F.; Mo, L.-P.; Zain, A.J.S.M. An improved grey wolf optimizer with multi-strategies coverage in wireless sensor networks. Symmetry 2024, 16, 286. [Google Scholar] [CrossRef]
  52. Wang, W.-C.; Tian, W.-C.; Xu, D.-M.; Zang, H.-F. Arctic puffin optimization: A bio-inspired metaheuristic algorithm for solving engineering design optimization. Adv. Eng. Softw. 2024, 195, 103694. [Google Scholar] [CrossRef]
  53. Mohammed, B.O.; Aghdasi, H.S.; Salehpour, P. Dhole optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Clust. Comput. 2025, 28, 430. [Google Scholar] [CrossRef]
  54. Chai, X.; Wu, Z.; Li, W.; Fan, H.; Sun, X.; Xu, J. Image Segmentation Based on the Optimized K-Means Algorithm with the Improved Hybrid Grey Wolf Optimization: Application in Ore Particle Size Detection. Sensors 2025, 28, 2785. [Google Scholar] [CrossRef] [PubMed]
  55. Abualigah, L.; Al-Okbi, N.K.; Alomari, S.A.; Almomani, M.H.; Moneam, S.; Yousif, M.A.; Snasel, V.; Saleem, K.; Smerat, A. Optimized image segmentation using an improved reptile search algorithm with Gbest operator for multi-level thresholding. Sci. Rep. 2025, 15, 12713. [Google Scholar] [CrossRef]
  56. Zhang, Y.; Wang, J.; Zhang, X.; Wang, B. ACPOA: An Adaptive Cooperative Pelican Optimization Algorithm for Global Optimization and Multilevel Thresholding Image Segmentation. Biomimetics 2025, 15, 596. [Google Scholar] [CrossRef]
  57. Shi, J.; Chen, Y.; Wang, C.; Heidari, A.A.; Liu, L.; Chen, H.; Chen, X.; Sun, L. Multi-threshold image segmentation using new strategies enhanced whale optimization for lupus nephritis pathological images. Displays 2024, 84, 102799. [Google Scholar] [CrossRef]
  58. Abdel-salam, M.; Houssein, E.H.; Emam, M.M.; Samee, N.A.; Azam, M.T. A novel dynamic Nelder-based Electric Eel Foraging algorithm for global optimization and pathological colorectal cancer image segmentation. Comput. Biol. Med. 2025, 197, 110982. [Google Scholar] [CrossRef] [PubMed]
  59. Wang, J.; Bei, J.; Song, H.; Zhang, H.; Zhang, P. A whale optimization algorithm with combined mutation and removing similarity for global optimization and multilevel thresholding image segmentation. Appl. Soft Comput. 2023, 137, 110130. [Google Scholar] [CrossRef]
  60. Jiang, Y.; Yeh, W.-C.; Hao, Z.; Yang, Z. A cooperative honey bee mating algorithm and its application in multi-threshold image segmentation. Inf. Sci. 2016, 369, 171–183. [Google Scholar] [CrossRef]
Figure 1. Comparison of two initial population distribution methods.
Figure 2. Schematic diagram of the processing mechanism based on global optimization.
Figure 3. Comparative analysis of swarm diversity distribution between MGJO and GJO methodologies.
Figure 4. Performance analysis of exploration versus exploitation in MGJO.
Figure 5. Performance evaluation of various enhancement methodologies.
Figure 6. Comparative performance ranking of GJO variants with different enhancements.
Figure 7. Part of the test function iterates convergence curves.
Figure 8. Performance distribution and relative ranking of the evaluated algorithms.
Figure 9. The set of benchmark images.
Figure 10. Friedman average rank of fitness in Otsu.
Figure 11. Friedman average rank of PSNR in Otsu.
Figure 12. Friedman average rank of FSIM in Otsu.
Figure 13. Friedman average rank of SSIM in Otsu.
Table 1. Comparison of MGJO with Existing Improved GJO Variants.
Algorithm | Improvement Strategies | Core Novelty | Reference
IGJO | Operators in gradient-based optimizers | Enhances initial population diversity via opposition sampling | [38]
AGJO | Enhanced movement strategy, global search strategy, and multi-angle position update strategy for prey | Improves convergence speed | [39]
EGJO | Opposition-based learning combined with spring vibration-based adaptive mutation and a binomial-based cross-evolution strategy | Enhances local exploitation via elite information guidance | [37]
DE-GJO | Hybrid differential evolution (DE) crossover | Introduces DE operators to expand the search range | [40]
MGJO | Good-point-set initialization, dual crossover (horizontal + vertical), and global-optimum boundary handling | Synergistic strategy integration, dimension-level fine-grained search, and boundary information retention | This work
Table 2. Comparison of algorithm parameter settings.
Algorithms | Name of the Parameter | Value of the Parameter
GWO | a | [0, 2]
IWOA | r, l, a | [0, 1], [−1, 1], linear reduction from 2 to 1
AGPSO | w1, w2, c1, c2 | 0.9, 0.4, [2.55, 0.5], [1.25, 2.25]
HSO | α | 3
DBO | P_percent | 0.2
BPBO | P_i | 0.7
GJO | c1 | 1.5
MGJO | c1, α, β, γ | 1.5, [0, 1], [0, 1], [−1, 1]
Table 3. Experimental findings on the CEC 2017 benchmark suite (dim = 30).
Function | Metric | GJO | GJO-S1 | GJO-S2 | GJO-S3 | MGJO
F1 | Ave | 1.3149 × 10^10 | 2.7855 × 10^09 | 2.4665 × 10^04 | 6.9047 × 10^08 | 3.7651 × 10^03
F1 | Std | 4.2973 × 10^09 | 2.8560 × 10^09 | 2.6383 × 10^04 | 4.1807 × 10^09 | 3.6750 × 10^03
F2 | Ave | 3.7800 × 10^36 | 3.1681 × 10^33 | 2.7613 × 10^16 | 2.0410 × 10^37 | 2.2475 × 10^14
F2 | Std | 2.4403 × 10^37 | 1.5166 × 10^34 | 1.1158 × 10^17 | 1.3717 × 10^38 | 9.1683 × 10^14
F3 | Ave | 6.1534 × 10^04 | 1.3289 × 10^05 | 9.6556 × 10^04 | 1.3319 × 10^05 | 8.2440 × 10^04
F3 | Std | 1.0117 × 10^04 | 3.8009 × 10^04 | 2.7842 × 10^04 | 4.0889 × 10^04 | 2.2117 × 10^04
F4 | Ave | 1.3075 × 10^03 | 8.3526 × 10^02 | 5.1802 × 10^02 | 9.6771 × 10^02 | 5.1436 × 10^02
F4 | Std | 7.3982 × 10^02 | 9.9451 × 10^02 | 2.5429 × 10^01 | 1.0191 × 10^03 | 1.9166 × 10^01
F5 | Ave | 7.1420 × 10^02 | 6.7542 × 10^02 | 6.5085 × 10^02 | 6.9936 × 10^02 | 6.3749 × 10^02
F5 | Std | 4.5854 × 10^01 | 3.3407 × 10^01 | 3.5259 × 10^01 | 2.9212 × 10^01 | 2.9944 × 10^01
F6 | Ave | 6.3982 × 10^02 | 6.2638 × 10^02 | 6.0290 × 10^02 | 6.2545 × 10^02 | 6.0006 × 10^02
F6 | Std | 8.1038 × 10^00 | 1.1338 × 10^01 | 6.5810 × 10^00 | 1.0352 × 10^01 | 3.9619 × 10^−02
F7 | Ave | 1.0474 × 10^03 | 1.0361 × 10^03 | 9.3598 × 10^02 | 1.1160 × 10^03 | 8.9236 × 10^02
F7 | Std | 5.9275 × 10^01 | 9.9329 × 10^01 | 6.1129 × 10^01 | 1.0098 × 10^02 | 4.1074 × 10^01
F8 | Ave | 9.8532 × 10^02 | 9.4049 × 10^02 | 9.3302 × 10^02 | 9.5380 × 10^02 | 9.1312 × 10^02
F8 | Std | 4.1609 × 10^01 | 3.2855 × 10^01 | 2.4688 × 10^01 | 2.1915 × 10^01 | 2.9506 × 10^01
F9 | Ave | 5.4780 × 10^03 | 5.0697 × 10^03 | 2.9440 × 10^03 | 4.9617 × 10^03 | 1.9042 × 10^03
F9 | Std | 1.7382 × 10^03 | 2.2601 × 10^03 | 1.0698 × 10^03 | 1.0561 × 10^03 | 7.5839 × 10^02
F10 | Ave | 6.5879 × 10^03 | 5.9690 × 10^03 | 5.4494 × 10^03 | 5.4013 × 10^03 | 5.0730 × 10^03
F10 | Std | 1.3246 × 10^03 | 1.4901 × 10^03 | 4.7779 × 10^02 | 3.4518 × 10^02 | 6.4052 × 10^02
F11 | Ave | 4.1974 × 10^03 | 3.5650 × 10^03 | 2.8659 × 10^03 | 7.3527 × 10^03 | 1.5479 × 10^03
F11 | Std | 1.5985 × 10^03 | 2.5612 × 10^03 | 2.2137 × 10^03 | 5.2060 × 10^03 | 7.1265 × 10^02
F12 | Ave | 9.3605 × 10^08 | 1.9871 × 10^07 | 2.6930 × 10^06 | 3.6412 × 10^08 | 1.9249 × 10^06
F12 | Std | 6.8215 × 10^08 | 4.4238 × 10^07 | 1.7841 × 10^06 | 1.7187 × 10^09 | 1.1241 × 10^06
F13 | Ave | 2.7718 × 10^08 | 8.3133 × 10^05 | 2.7489 × 10^04 | 2.9417 × 10^08 | 1.1212 × 10^04
F13 | Std | 4.7454 × 10^08 | 5.7531 × 10^06 | 1.3150 × 10^05 | 6.7097 × 10^08 | 1.0753 × 10^04
F14 | Ave | 8.4081 × 10^05 | 1.6046 × 10^06 | 1.2498 × 10^06 | 2.6088 × 10^06 | 9.1322 × 10^05
F14 | Std | 8.8546 × 10^05 | 1.8701 × 10^06 | 1.3323 × 10^06 | 2.3320 × 10^06 | 9.2958 × 10^05
F15 | Ave | 1.8554 × 10^07 | 8.1784 × 10^03 | 8.6849 × 10^03 | 3.6982 × 10^07 | 8.8123 × 10^03
F15 | Std | 4.5626 × 10^07 | 7.5381 × 10^03 | 9.0449 × 10^03 | 1.4626 × 10^08 | 9.2655 × 10^03
F16 | Ave | 3.1143 × 10^03 | 2.8244 × 10^03 | 2.6633 × 10^03 | 3.0309 × 10^03 | 2.6426 × 10^03
F16 | Std | 4.8034 × 10^02 | 3.2298 × 10^02 | 3.1538 × 10^02 | 2.9006 × 10^02 | 3.5112 × 10^02
F17 | Ave | 2.2924 × 10^03 | 2.3491 × 10^03 | 2.1714 × 10^03 | 2.4346 × 10^03 | 2.1400 × 10^03
F17 | Std | 2.6367 × 10^02 | 2.1885 × 10^02 | 2.3330 × 10^02 | 2.6166 × 10^02 | 2.4677 × 10^02
F18 | Ave | 2.1356 × 10^06 | 2.9990 × 10^06 | 1.7875 × 10^06 | 3.4259 × 10^06 | 2.1382 × 10^06
F18 | Std | 2.3536 × 10^06 | 4.4497 × 10^06 | 2.7113 × 10^06 | 3.6139 × 10^06 | 3.2935 × 10^06
F19 | Ave | 1.1452 × 10^07 | 9.6457 × 10^03 | 7.7706 × 10^03 | 1.0082 × 10^07 | 1.0154 × 10^04
F19 | Std | 2.8041 × 10^07 | 9.6409 × 10^03 | 7.0285 × 10^03 | 3.1141 × 10^07 | 1.0111 × 10^04
F20 | Ave | 2.6030 × 10^03 | 2.5925 × 10^03 | 2.4519 × 10^03 | 2.6399 × 10^03 | 2.3504 × 10^03
F20 | Std | 2.2812 × 10^02 | 2.4554 × 10^02 | 2.0489 × 10^02 | 2.1367 × 10^02 | 1.7585 × 10^02
F21 | Ave | 2.4941 × 10^03 | 2.4507 × 10^03 | 2.4476 × 10^03 | 2.4814 × 10^03 | 2.4161 × 10^03
F21 | Std | 3.6677 × 10^01 | 3.6714 × 10^01 | 3.5470 × 10^01 | 3.2290 × 10^01 | 3.1822 × 10^01
F22 | Ave | 5.8895 × 10^03 | 4.6699 × 10^03 | 4.3228 × 10^03 | 5.6522 × 10^03 | 3.8092 × 10^03
F22 | Std | 2.6166 × 10^03 | 1.8649 × 10^03 | 2.2942 × 10^03 | 2.1266 × 10^03 | 2.1337 × 10^03
F23 | Ave | 2.8997 × 10^03 | 2.8925 × 10^03 | 2.8070 × 10^03 | 2.9863 × 10^03 | 2.7775 × 10^03
F23 | Std | 4.6097 × 10^01 | 8.5296 × 10^01 | 4.1910 × 10^01 | 1.1897 × 10^02 | 3.8988 × 10^01
F24 | Ave | 3.0975 × 10^03 | 3.0929 × 10^03 | 3.0128 × 10^03 | 3.3616 × 10^03 | 2.9714 × 10^03
F24 | Std | 5.7202 × 10^01 | 9.9099 × 10^01 | 6.4108 × 10^01 | 1.8881 × 10^02 | 5.0339 × 10^01
F25 | Ave | 3.1982 × 10^03 | 3.0288 × 10^03 | 2.9148 × 10^03 | 3.0772 × 10^03 | 2.9022 × 10^03
F25 | Std | 1.1142 × 10^02 | 8.5251 × 10^01 | 2.2361 × 10^01 | 1.4755 × 10^02 | 1.7223 × 10^01
F26 | Ave | 6.1427 × 10^03 | 6.2560 × 10^03 | 5.2839 × 10^03 | 5.6075 × 10^03 | 4.7852 × 10^03
F26 | Std | 6.7740 × 10^02 | 9.9379 × 10^02 | 1.0448 × 10^03 | 1.5807 × 10^03 | 7.6806 × 10^02
F27 | Ave | 3.3794 × 10^03 | 3.3559 × 10^03 | 3.2393 × 10^03 | 3.3330 × 10^03 | 3.2328 × 10^03
F27 | Std | 6.8760 × 10^01 | 8.4396 × 10^01 | 1.2581 × 10^01 | 7.5706 × 10^01 | 1.1582 × 10^01
F28 | Ave | 3.9168 × 10^03 | 3.6032 × 10^03 | 3.2889 × 10^03 | 3.5823 × 10^03 | 3.2635 × 10^03
F28 | Std | 3.6641 × 10^02 | 2.5536 × 10^02 | 2.2982 × 10^01 | 4.9194 × 10^02 | 2.2143 × 10^01
F29 | Ave | 4.3638 × 10^03 | 4.1281 × 10^03 | 3.7857 × 10^03 | 4.1023 × 10^03 | 3.6922 × 10^03
F29 | Std | 2.3958 × 10^02 | 3.0785 × 10^02 | 1.9756 × 10^02 | 2.2164 × 10^02 | 2.0932 × 10^02
F30 | Ave | 3.4545 × 10^07 | 9.1420 × 10^04 | 1.6467 × 10^04 | 2.9009 × 10^06 | 1.4314 × 10^04
F30 | Std | 2.9010 × 10^07 | 2.5561 × 10^05 | 1.0593 × 10^04 | 4.0542 × 10^06 | 7.9527 × 10^03
Table 4. Experimental findings on the CEC 2017 benchmark suite (dim = 30).
Function | Metric | GWO | IWOA | AGPSO | HSO | DBO | BPBO | GJO | MGJO
F1 | Ave | 2.8587 × 10^09 | 1.8562 × 10^06 | 9.5699 × 10^08 | 8.3879 × 10^03 | 2.4029 × 10^08 | 5.1272 × 10^08 | 1.3244 × 10^10 | 4.1805 × 10^03
F1 | Std | 1.9458 × 10^09 | 1.1626 × 10^06 | 1.5751 × 10^09 | 7.1732 × 10^03 | 1.4977 × 10^08 | 2.2170 × 10^08 | 5.7246 × 10^09 | 3.9175 × 10^03
F2 | Ave | 9.3924 × 10^32 | 2.7939 × 10^20 | 2.0504 × 10^30 | 1.3768 × 10^17 | 2.2866 × 10^32 | 1.9198 × 10^31 | 1.7645 × 10^35 | 1.2005 × 10^14
F2 | Std | 4.3273 × 10^33 | 5.3890 × 10^20 | 6.3237 × 10^30 | 6.7570 × 10^17 | 1.0634 × 10^33 | 9.3037 × 10^31 | 4.9362 × 10^35 | 4.0515 × 10^14
F3 | Ave | 6.0705 × 10^04 | 1.5353 × 10^05 | 8.7115 × 10^04 | 4.7849 × 10^04 | 9.3870 × 10^04 | 6.8247 × 10^04 | 6.2327 × 10^04 | 8.4737 × 10^04
F3 | Std | 1.1635 × 10^04 | 3.8715 × 10^04 | 2.4003 × 10^04 | 1.1588 × 10^04 | 1.9516 × 10^04 | 8.6188 × 10^03 | 9.8584 × 10^03 | 1.8095 × 10^04
F4 | Ave | 6.4657 × 10^02 | 5.3391 × 10^02 | 6.3697 × 10^02 | 6.9766 × 10^02 | 6.7833 × 10^02 | 6.8037 × 10^02 | 1.1891 × 10^03 | 5.1346 × 10^02
F4 | Std | 1.3296 × 10^02 | 3.3118 × 10^01 | 1.9760 × 10^02 | 9.3288 × 10^01 | 1.3805 × 10^02 | 6.3945 × 10^01 | 4.0519 × 10^02 | 2.6729 × 10^01
F5 | Ave | 6.2901 × 10^02 | 7.1825 × 10^02 | 5.9782 × 10^02 | 6.9318 × 10^02 | 7.4236 × 10^02 | 7.6778 × 10^02 | 7.3026 × 10^02 | 6.4444 × 10^02
F5 | Std | 2.8424 × 10^01 | 5.2546 × 10^01 | 2.5716 × 10^01 | 2.5869 × 10^01 | 4.5681 × 10^01 | 4.2068 × 10^01 | 5.6271 × 10^01 | 2.5330 × 10^01
F6 | Ave | 6.1185 × 10^02 | 6.5630 × 10^02 | 6.1126 × 10^02 | 6.5217 × 10^02 | 6.5132 × 10^02 | 6.6112 × 10^02 | 6.4144 × 10^02 | 6.0006 × 10^02
F6 | Std | 3.8852 × 10^00 | 1.3225 × 10^01 | 4.6539 × 10^00 | 4.9124 × 10^00 | 1.0821 × 10^01 | 7.6243 × 10^00 | 1.1849 × 10^01 | 3.9338 × 10^−02
F7 | Ave | 9.0323 × 10^02 | 1.0665 × 10^03 | 8.7635 × 10^02 | 1.0360 × 10^03 | 1.0412 × 10^03 | 1.2208 × 10^03 | 1.0589 × 10^03 | 8.8558 × 10^02
F7 | Std | 5.2116 × 10^01 | 1.0675 × 10^02 | 4.6356 × 10^01 | 7.2858 × 10^01 | 7.8647 × 10^01 | 9.1684 × 10^01 | 5.4250 × 10^01 | 4.3810 × 10^01
F8 | Ave | 9.0326 × 10^02 | 9.9943 × 10^02 | 9.0543 × 10^02 | 1.0364 × 10^03 | 1.0460 × 10^03 | 1.0096 × 10^03 | 9.8746 × 10^02 | 9.0705 × 10^02
F8 | Std | 2.6267 × 10^01 | 3.4959 × 10^01 | 2.9645 × 10^01 | 2.0496 × 10^01 | 5.0985 × 10^01 | 3.7934 × 10^01 | 3.7172 × 10^01 | 2.7901 × 10^01
F9 | Ave | 2.7554 × 10^03 | 5.1194 × 10^03 | 2.6894 × 10^03 | 2.4603 × 10^03 | 7.0971 × 10^03 | 6.9897 × 10^03 | 5.5220 × 10^03 | 2.2982 × 10^03
F9 | Std | 1.0419 × 10^03 | 1.0454 × 10^03 | 1.2678 × 10^03 | 7.2450 × 10^02 | 1.8701 × 10^03 | 2.0065 × 10^03 | 1.5915 × 10^03 | 1.2869 × 10^03
F10 | Ave | 5.8476 × 10^03 | 5.6942 × 10^03 | 5.4738 × 10^03 | 4.3099 × 10^03 | 6.6552 × 10^03 | 7.2016 × 10^03 | 6.1774 × 10^03 | 4.9462 × 10^03
F10 | Std | 1.9893 × 10^03 | 8.0042 × 10^02 | 7.2619 × 10^02 | 5.4256 × 10^02 | 1.3293 × 10^03 | 1.2367 × 10^03 | 1.3516 × 10^03 | 6.8427 × 10^02
F11 | Ave | 2.3487 × 10^03 | 1.3924 × 10^03 | 1.3576 × 10^03 | 1.7803 × 10^03 | 1.9510 × 10^03 | 1.6641 × 10^03 | 3.3628 × 10^03 | 1.4763 × 10^03
F11 | Std | 8.7859 × 10^02 | 9.3472 × 10^01 | 1.0288 × 10^02 | 2.0754 × 10^02 | 7.3794 × 10^02 | 1.6930 × 10^02 | 1.4209 × 10^03 | 4.8927 × 10^02
F12 | Ave | 8.7820 × 10^07 | 4.5389 × 10^06 | 1.2355 × 10^08 | 5.5934 × 10^05 | 8.4389 × 10^07 | 3.7381 × 10^07 | 7.7143 × 10^08 | 2.0464 × 10^06
F12 | Std | 8.5028 × 10^07 | 3.9906 × 10^06 | 3.7092 × 10^08 | 6.5551 × 10^05 | 1.8851 × 10^08 | 2.8545 × 10^07 | 6.9451 × 10^08 | 1.4525 × 10^06
F13 | Ave | 4.8114 × 10^07 | 4.0409 × 10^04 | 2.7694 × 10^06 | 2.7394 × 10^04 | 2.1502 × 10^07 | 3.1509 × 10^06 | 2.8251 × 10^08 | 1.0267 × 10^04
F13 | Std | 9.6117 × 10^07 | 3.5830 × 10^04 | 1.3071 × 10^07 | 1.8549 × 10^04 | 3.7718 × 10^07 | 3.5023 × 10^06 | 3.4263 × 10^08 | 1.0813 × 10^04
F14 | Ave | 5.7297 × 10^05 | 2.2773 × 10^05 | 7.0773 × 10^04 | 1.3045 × 10^04 | 2.1921 × 10^05 | 7.3482 × 10^05 | 1.0858 × 10^06 | 8.1641 × 10^05
F14 | Std | 6.0529 × 10^05 | 1.9910 × 10^05 | 7.6866 × 10^04 | 1.0568 × 10^04 | 3.6042 × 10^05 | 6.5540 × 10^05 | 1.0922 × 10^06 | 9.2234 × 10^05
F15 | Ave | 1.2636 × 10^06 | 1.0741 × 10^04 | 1.5913 × 10^04 | 8.7839 × 10^03 | 1.0247 × 10^05 | 9.8141 × 10^04 | 1.4573 × 10^07 | 9.1519 × 10^03
F15 | Std | 1.8713 × 10^06 | 1.2041 × 10^04 | 1.6669 × 10^04 | 3.8447 × 10^03 | 1.1228 × 10^05 | 1.0401 × 10^05 | 2.5383 × 10^07 | 8.9950 × 10^03
F16 | Ave | 2.6622 × 10^03 | 2.8848 × 10^03 | 2.6586 × 10^03 | 2.9848 × 10^03 | 3.4392 × 10^03 | 3.3697 × 10^03 | 3.1208 × 10^03 | 2.6913 × 10^03
F16 | Std | 2.7627 × 10^02 | 3.4864 × 10^02 | 3.0234 × 10^02 | 3.3070 × 10^02 | 4.7743 × 10^02 | 3.6599 × 10^02 | 2.8909 × 10^02 | 2.9750 × 10^02
F17 | Ave | 2.0425 × 10^03 | 2.5122 × 10^03 | 2.2104 × 10^03 | 2.5545 × 10^03 | 2.6511 × 10^03 | 2.6071 × 10^03 | 2.2896 × 10^03 | 2.1777 × 10^03
F17 | Std | 1.5540 × 10^02 | 2.6725 × 10^02 | 2.1800 × 10^02 | 3.2060 × 10^02 | 3.3119 × 10^02 | 2.9519 × 10^02 | 2.5708 × 10^02 | 2.1467 × 10^02
F18 | Ave | 2.0248 × 10^06 | 2.8307 × 10^06 | 1.6002 × 10^06 | 1.2607 × 10^05 | 3.2405 × 10^06 | 2.4422 × 10^06 | 3.6235 × 10^06 | 1.3430 × 10^06
F18 | Std | 2.3149 × 10^06 | 3.1961 × 10^06 | 1.7270 × 10^06 | 8.2646 × 10^04 | 3.8582 × 10^06 | 2.2565 × 10^06 | 5.8186 × 10^06 | 2.0981 × 10^06
F19 | Ave | 1.1478 × 10^06 | 7.5279 × 10^03 | 3.5759 × 10^04 | 1.4449 × 10^04 | 3.0782 × 10^06 | 2.7309 × 10^06 | 6.4510 × 10^06 | 9.3737 × 10^03
F19 | Std | 1.3803 × 10^06 | 1.0003 × 10^04 | 7.9795 × 10^04 | 1.7084 × 10^04 | 7.2016 × 10^06 | 3.3243 × 10^06 | 1.1668 × 10^07 | 1.1237 × 10^04
F20 | Ave | 2.4838 × 10^03 | 2.7435 × 10^03 | 2.5174 × 10^03 | 2.5551 × 10^03 | 2.7188 × 10^03 | 2.7472 × 10^03 | 2.6782 × 10^03 | 2.3669 × 10^03
F20 | Std | 1.6592 × 10^02 | 2.4968 × 10^02 | 1.9336 × 10^02 | 1.8983 × 10^02 | 2.4175 × 10^02 | 1.9688 × 10^02 | 2.0408 × 10^02 | 1.6869 × 10^02
F21 | Ave | 2.4030 × 10^03 | 2.5003 × 10^03 | 2.4104 × 10^03 | 2.5667 × 10^03 | 2.5519 × 10^03 | 2.4997 × 10^03 | 2.4999 × 10^03 | 2.4172 × 10^03
F21 | Std | 2.0699 × 10^01 | 4.3511 × 10^01 | 3.0560 × 10^01 | 1.6294 × 10^01 | 5.1257 × 10^01 | 4.3093 × 10^01 | 5.1069 × 10^01 | 3.4323 × 10^01
F22 | Ave | 4.8809 × 10^03 | 4.2986 × 10^03 | 5.0618 × 10^03 | 4.8193 × 10^03 | 4.9684 × 10^03 | 3.6110 × 10^03 | 6.1869 × 10^03 | 3.8435 × 10^03
F22 | Std | 1.8781 × 10^03 | 2.3464 × 10^03 | 2.1159 × 10^03 | 1.5939 × 10^03 | 2.4981 × 10^03 | 2.2249 × 10^03 | 2.3096 × 10^03 | 2.1135 × 10^03
F23 | Ave | 2.7842 × 10^03 | 2.8797 × 10^03 | 2.8785 × 10^03 | 2.9091 × 10^03 | 3.0026 × 10^03 | 2.9595 × 10^03 | 2.9265 × 10^03 | 2.7787 × 10^03
F23 | Std | 4.1240 × 10^01 | 7.4256 × 10^01 | 8.4955 × 10^01 | 1.6186 × 10^01 | 1.2427 × 10^02 | 6.1837 × 10^01 | 6.7901 × 10^01 | 4.1438 × 10^01
F24 | Ave | 2.9662 × 10^03 | 3.0308 × 10^03 | 3.1053 × 10^03 | 3.0433 × 10^03 | 3.1998 × 10^03 | 3.0512 × 10^03 | 3.1054 × 10^03 | 2.9615 × 10^03
F24 | Std | 6.0541 × 10^01 | 7.6090 × 10^01 | 9.1501 × 10^01 | 1.2406 × 10^01 | 9.8925 × 10^01 | 5.8957 × 10^01 | 6.5715 × 10^01 | 5.2031 × 10^01
F25 | Ave | 3.0391 × 10^03 | 2.9138 × 10^03 | 2.9395 × 10^03 | 3.1840 × 10^03 | 2.9761 × 10^03 | 3.0795 × 10^03 | 3.2267 × 10^03 | 2.9044 × 10^03
F25 | Std | 1.2102 × 10^02 | 1.6781 × 10^01 | 4.6393 × 10^01 | 8.6748 × 10^01 | 4.1779 × 10^01 | 4.5472 × 10^01 | 1.2932 × 10^02 | 2.2886 × 10^01
F26 | Ave | 5.0335 × 10^03 | 5.6594 × 10^03 | 5.1083 × 10^03 | 5.2881 × 10^03 | 7.2851 × 10^03 | 6.8372 × 10^03 | 6.1636 × 10^03 | 4.6601 × 10^03
F26 | Std | 5.1045 × 10^02 | 1.4276 × 10^03 | 8.4084 × 10^02 | 2.6234 × 10^02 | 9.1932 × 10^02 | 1.8118 × 10^03 | 7.8118 × 10^02 | 9.7846 × 10^02
F27 | Ave | 3.2728 × 10^03 | 3.4502 × 10^03 | 3.2774 × 10^03 | 3.3577 × 10^03 | 3.3446 × 10^03 | 3.4334 × 10^03 | 3.3911 × 10^03 | 3.2336 × 10^03
F27 | Std | 3.6129 × 10^01 | 1.4546 × 10^02 | 3.4694 × 10^01 | 7.9813 × 10^01 | 5.5498 × 10^01 | 9.6239 × 10^01 | 7.9391 × 10^01 | 1.1126 × 10^01
F28 | Ave | 3.4582 × 10^03 | 4.4756 × 10^03 | 3.6015 × 10^03 | 3.7861 × 10^03 | 3.5739 × 10^03 | 3.4549 × 10^03 | 3.7787 × 10^03 | 3.2607 × 10^03
F28 | Std | 1.4650 × 10^02 | 1.0548 × 10^03 | 8.4307 × 10^02 | 5.6812 × 10^02 | 3.9794 × 10^02 | 7.9759 × 10^01 | 2.3915 × 10^02 | 2.4565 × 10^01
F29 | Ave | 3.9170 × 10^03 | 4.2740 × 10^03 | 3.9372 × 10^03 | 4.4687 × 10^03 | 4.3476 × 10^03 | 4.9087 × 10^03 | 4.3690 × 10^03 | 3.7215 × 10^03
F29 | Std | 1.6033 × 10^02 | 2.4192 × 10^02 | 2.2151 × 10^02 | 2.3190 × 10^02 | 3.8157 × 10^02 | 3.5212 × 10^02 | 3.2438 × 10^02 | 2.1116 × 10^02
F30 | Ave | 1.3933 × 10^07 | 1.2169 × 10^05 | 4.0847 × 10^05 | 9.5764 × 10^04 | 5.7551 × 10^06 | 1.3332 × 10^07 | 3.8595 × 10^07 | 1.1595 × 10^04
F30 | Std | 1.3980 × 10^07 | 2.1137 × 10^05 | 1.2051 × 10^06 | 1.1256 × 10^05 | 7.2456 × 10^06 | 8.4294 × 10^06 | 3.1914 × 10^07 | 3.9623 × 10^03
Table 5. Experimental findings on the CEC 2017 benchmark suite (dim = 100).
Function | Metric | GWO | IWOA | AGPSO | HSO | DBO | BPBO | GJO | MGJO
F1 | Ave | 5.3510 × 10^10 | 8.9745 × 10^09 | 2.7509 × 10^10 | 3.2404 × 10^09 | 8.5312 × 10^10 | 5.2731 × 10^10 | 1.3279 × 10^11 | 2.5979 × 10^08
F1 | Std | 8.7034 × 10^09 | 2.6775 × 10^09 | 1.0784 × 10^10 | 1.6339 × 10^09 | 7.0504 × 10^10 | 8.8060 × 10^09 | 1.4218 × 10^10 | 1.1627 × 10^08
F2 | Ave | 1.2050 × 10^135 | 1.2511 × 10^133 | 3.5868 × 10^139 | 3.1190 × 10^201 | 6.1137 × 10^163 | 9.3962 × 10^154 | 3.1194 × 10^149 | 1.3655 × 10^101
F2 | Std | 6.5262 × 10^135 | 6.1838 × 10^133 | 1.9620 × 10^140 | 6.5535 × 10^04 | 6.5535 × 10^04 | 6.5535 × 10^04 | 1.4435 × 10^150 | 5.1024 × 10^101
F3 | Ave | 5.2603 × 10^05 | 9.4038 × 10^05 | 7.0265 × 10^05 | 3.4666 × 10^05 | 6.1803 × 10^05 | 3.5749 × 10^05 | 3.8842 × 10^05 | 5.1192 × 10^05
F3 | Std | 8.9976 × 10^04 | 1.0343 × 10^05 | 9.5974 × 10^04 | 3.4786 × 10^04 | 2.4925 × 10^05 | 3.0824 × 10^04 | 3.5893 × 10^04 | 6.3321 × 10^04
F4 | Ave | 6.4441 × 10^03 | 1.9157 × 10^03 | 5.9269 × 10^03 | 3.0318 × 10^03 | 1.8998 × 10^04 | 7.4066 × 10^03 | 2.0314 × 10^04 | 1.1419 × 10^03
F4 | Std | 1.8268 × 10^03 | 3.1130 × 10^02 | 2.9983 × 10^03 | 8.8704 × 10^02 | 1.4422 × 10^04 | 1.6252 × 10^03 | 4.7254 × 10^03 | 8.5415 × 10^01
F5 | Ave | 1.2442 × 10^03 | 1.4804 × 10^03 | 1.3223 × 10^03 | 1.6868 × 10^03 | 1.6858 × 10^03 | 1.7620 × 10^03 | 1.5917 × 10^03 | 1.3195 × 10^03
F5 | Std | 7.7468 × 10^01 | 6.0703 × 10^01 | 9.1677 × 10^01 | 8.7948 × 10^01 | 2.0586 × 10^02 | 6.7735 × 10^01 | 1.1573 × 10^02 | 1.5512 × 10^02
F6 | Ave | 6.4522 × 10^02 | 6.7360 × 10^02 | 6.5670 × 10^02 | 6.9061 × 10^02 | 6.8058 × 10^02 | 6.9312 × 10^02 | 6.7476 × 10^02 | 6.5154 × 10^02
F6 | Std | 5.0389 × 10^00 | 3.2847 × 10^00 | 7.7674 × 10^00 | 5.1499 × 10^00 | 1.1371 × 10^01 | 4.0328 × 10^00 | 5.9583 × 10^00 | 7.1434 × 10^00
F7 | Ave | 2.1978 × 10^03 | 3.0995 × 10^03 | 3.1326 × 10^03 | 4.8226 × 10^03 | 3.0094 × 10^03 | 3.7014 × 10^03 | 2.9671 × 10^03 | 2.3527 × 10^03
F7 | Std | 1.3159 × 10^02 | 2.4569 × 10^02 | 2.6520 × 10^02 | 6.5307 × 10^02 | 2.0699 × 10^02 | 9.2563 × 10^01 | 1.2136 × 10^02 | 3.0908 × 10^02
F8 | Ave | 1.5856 × 10^03 | 1.9067 × 10^03 | 1.6318 × 10^03 | 2.0420 × 10^03 | 2.1809 × 10^03 | 2.2433 × 10^03 | 1.9478 × 10^03 | 1.6619 × 10^03
F8 | Std | 1.4423 × 10^02 | 6.8833 × 10^01 | 1.0815 × 10^02 | 7.4633 × 10^01 | 2.3425 × 10^02 | 7.0908 × 10^01 | 1.0835 × 10^02 | 1.8880 × 10^02
F9 | Ave | 4.5865 × 10^04 | 3.5318 × 10^04 | 3.8231 × 10^04 | 6.3337 × 10^04 | 7.6876 × 10^04 | 7.0544 × 10^04 | 6.3521 × 10^04 | 4.3473 × 10^04
F9 | Std | 1.3947 × 10^04 | 3.8803 × 10^03 | 1.0973 × 10^04 | 1.6184 × 10^04 | 8.5930 × 10^03 | 1.0140 × 10^04 | 9.0921 × 10^03 | 3.6714 × 10^03
F10 | Ave | 1.9070 × 10^04 | 1.9403 × 10^04 | 2.4635 × 10^04 | 2.3683 × 10^04 | 2.8480 × 10^04 | 2.7304 × 10^04 | 2.6966 × 10^04 | 2.2392 × 10^04
F10 | Std | 4.1712 × 10^03 | 1.4057 × 10^03 | 2.8272 × 10^03 | 1.8427 × 10^03 | 4.3763 × 10^03 | 2.5923 × 10^03 | 4.9114 × 10^03 | 1.8762 × 10^03
F11 | Ave | 9.2253 × 10^04 | 1.5751 × 10^05 | 1.0379 × 10^05 | 5.7994 × 10^04 | 2.4113 × 10^05 | 1.4358 × 10^05 | 1.0863 × 10^05 | 8.3907 × 10^04
F11 | Std | 1.7735 × 10^04 | 5.3303 × 10^04 | 3.6954 × 10^04 | 2.3412 × 10^04 | 6.8023 × 10^04 | 2.8041 × 10^04 | 2.0842 × 10^04 | 2.3945 × 10^04
F12 | Ave | 1.3411 × 10^10 | 8.7825 × 10^08 | 9.9960 × 10^09 | 1.6939 × 10^08 | 6.5324 × 10^09 | 7.0752 × 10^09 | 4.7622 × 10^10 | 1.2354 × 10^08
F12 | Std | 5.3199 × 10^09 | 3.4747 × 10^08 | 6.2775 × 10^09 | 1.0477 × 10^08 | 2.1057 × 10^09 | 2.0407 × 10^09 | 1.0009 × 10^10 | 3.4286 × 10^07
F13 | Ave | 1.4569 × 10^09 | 7.6721 × 10^05 | 1.2596 × 10^09 | 7.4976 × 10^04 | 3.4423 × 10^08 | 2.6518 × 10^08 | 9.4564 × 10^09 | 1.0271 × 10^04
F13 | Std | 9.0110 × 10^08 | 4.5610 × 10^05 | 1.6198 × 10^09 | 3.4890 × 10^04 | 2.1799 × 10^08 | 1.1801 × 10^08 | 3.9091 × 10^09 | 6.3155 × 10^03
F14 | Ave | 9.0447 × 10^06 | 4.2857 × 10^06 | 7.9435 × 10^06 | 9.7047 × 10^05 | 1.6980 × 10^07 | 1.4402 × 10^07 | 1.7311 × 10^07 | 6.9310 × 10^06
F14 | Std | 5.5295 × 10^06 | 1.6898 × 10^06 | 6.7914 × 10^06 | 6.9679 × 10^05 | 1.0376 × 10^07 | 5.8969 × 10^06 | 1.1189 × 10^07 | 2.6292 × 10^06
F15 | Ave | 3.6823 × 10^08 | 7.5246 × 10^04 | 1.5117 × 10^08 | 2.9811 × 10^04 | 6.8445 × 10^07 | 2.5749 × 10^07 | 3.2105 × 10^09 | 6.3068 × 10^03
F15 | Std | 5.6759 × 10^08 | 3.6070 × 10^04 | 3.4836 × 10^08 | 1.0704 × 10^04 | 8.9388 × 10^07 | 1.7288 × 10^07 | 2.4150 × 10^09 | 1.1720 × 10^04
F16 | Ave | 6.5838 × 10^03 | 6.9150 × 10^03 | 7.3807 × 10^03 | 7.2401 × 10^03 | 9.3081 × 10^03 | 1.1244 × 10^04 | 9.9269 × 10^03 | 5.7588 × 10^03
F16 | Std | 5.4314 × 10^02 | 7.6111 × 10^02 | 1.0740 × 10^03 | 9.2612 × 10^02 | 1.4833 × 10^03 | 1.2453 × 10^03 | 1.2241 × 10^03 | 7.7678 × 10^02
F17 | Ave | 5.5501 × 10^03 | 6.3968 × 10^03 | 7.5824 × 10^03 | 5.6223 × 10^03 | 9.1235 × 10^03 | 7.9666 × 10^03 | 3.1351 × 10^04 | 4.9930 × 10^03
F17 | Std | 9.6950 × 10^02 | 6.1720 × 10^02 | 3.1625 × 10^03 | 4.2981 × 10^02 | 1.4514 × 10^03 | 7.8680 × 10^02 | 5.1530 × 10^04 | 6.4923 × 10^02
F18 | Ave | 9.4181 × 10^06 | 7.1481 × 10^06 | 1.0695 × 10^07 | 2.2247 × 10^06 | 2.7972 × 10^07 | 1.5441 × 10^07 | 1.9089 × 10^07 | 6.1371 × 10^06
F18 | Std | 5.2724 × 10^06 | 3.5548 × 10^06 | 6.7932 × 10^06 | 1.7877 × 10^06 | 1.5624 × 10^07 | 6.1604 × 10^06 | 1.0315 × 10^07 | 2.9256 × 10^06
F19 | Ave | 3.0200 × 10^08 | 5.9721 × 10^05 | 2.1440 × 10^08 | 2.0822 × 10^04 | 7.3010 × 10^07 | 3.9518 × 10^07 | 2.5227 × 10^09 | 5.1664 × 10^03
F19 | Std | 4.4671 × 10^08 | 5.8884 × 10^05 | 5.7456 × 10^08 | 2.0347 × 10^04 | 7.9364 × 10^07 | 2.7621 × 10^07 | 1.8096 × 10^09 | 2.6068 × 10^03
F20 | Ave | 5.6596 × 10^03 | 5.9349 × 10^03 | 6.3321 × 10^03 | 4.8819 × 10^03 | 7.3061 × 10^03 | 6.4297 × 10^03 | 6.4377 × 10^03 | 5.8606 × 10^03
F20 | Std | 1.0939 × 10^03 | 5.0468 × 10^02 | 6.4152 × 10^02 | 4.8240 × 10^02 | 7.5312 × 10^02 | 6.4935 × 10^02 | 1.0003 × 10^03 | 1.0923 × 10^03
F21 | Ave | 3.0936 × 10^03 | 3.6431 × 10^03 | 3.3348 × 10^03 | 3.7133 × 10^03 | 4.0461 × 10^03 | 3.9040 × 10^03 | 3.5464 × 10^03 | 3.2024 × 10^03
F21 | Std | 6.6627 × 10^01 | 1.8877 × 10^02 | 1.2260 × 10^02 | 7.0588 × 10^01 | 1.4972 × 10^02 | 1.6236 × 10^02 | 1.2394 × 10^02 | 2.1602 × 10^02
F22 | Ave | 2.2917 × 10^04 | 2.2472 × 10^04 | 2.6821 × 10^04 | 2.5741 × 10^04 | 2.9813 × 10^04 | 3.0436 × 10^04 | 2.9424 × 10^04 | 2.4632 × 10^04
F22 | Std | 5.2026 × 10^03 | 1.5296 × 10^03 | 2.8580 × 10^03 | 1.7428 × 10^03 | 4.6555 × 10^03 | 3.0252 × 10^03 | 4.2920 × 10^03 | 2.7559 × 10^03
F23 | Ave | 3.7025 × 10^03 | 4.1565 × 10^03 | 4.5492 × 10^03 | 4.0114 × 10^03 | 4.8542 × 10^03 | 4.5816 × 10^03 | 4.5843 × 10^03 | 3.4829 × 10^03
F23 | Std | 6.9705 × 10^01 | 1.8345 × 10^02 | 2.5633 × 10^02 | 5.1409 × 10^01 | 1.8344 × 10^02 | 2.3162 × 10^02 | 2.0724 × 10^02 | 2.0993 × 10^02
F24 | Ave | 4.4320 × 10^03 | 4.9823 × 10^03 | 6.6914 × 10^03 | 4.6083 × 10^03 | 6.1674 × 10^03 | 5.5866 × 10^03 | 6.0391 × 10^03 | 4.0212 × 10^03
F24 | Std | 1.7256 × 10^02 | 2.7333 × 10^02 | 5.7435 × 10^02 | 6.4584 × 10^01 | 4.1475 × 10^02 | 2.6685 × 10^02 | 4.4634 × 10^02 | 1.6803 × 10^02
F25 | Ave | 7.1507 × 10^03 | 4.4340 × 10^03 | 6.1216 × 10^03 | 6.5600 × 10^03 | 8.8138 × 10^03 | 7.8050 × 10^03 | 1.2755 × 10^04 | 3.8202 × 10^03
F25 | Std | 1.1525 × 10^03 | 1.9410 × 10^02 | 9.5531 × 10^02 | 7.6190 × 10^02 | 5.2496 × 10^03 | 7.1229 × 10^02 | 1.7622 × 10^03 | 7.2848 × 10^01
F26 | Ave | 1.7659 × 10^04 | 2.2212 × 10^04 | 2.5527 × 10^04 | 1.9129 × 10^04 | 2.6686 × 10^04 | 3.2665 × 10^04 | 2.8358 × 10^04 | 1.6988 × 10^04
F26 | Std | 1.4143 × 10^03 | 4.2171 × 10^03 | 4.5872 × 10^03 | 1.0966 × 10^03 | 3.4511 × 10^03 | 3.9829 × 10^03 | 2.0930 × 10^03 | 2.2942 × 10^03
F27 | Ave | 4.3566 × 10^03 | 6.6007 × 10^03 | 4.7403 × 10^03 | 4.2438 × 10^03 | 4.6931 × 10^03 | 5.6618 × 10^03 | 5.8454 × 10^03 | 3.6943 × 10^03
F27 | Std | 1.9124 × 10^02 | 1.5056 × 10^03 | 6.2080 × 10^02 | 1.9753 × 10^02 | 3.8149 × 10^02 | 6.2630 × 10^02 | 6.0483 × 10^02 | 9.3333 × 10^01
F28 | Ave | 9.5038 × 10^03 | 1.7317 × 10^04 | 1.2574 × 10^04 | 1.5498 × 10^04 | 1.8865 × 10^04 | 1.1127 × 10^04 | 1.6434 × 10^04 | 4.0417 × 10^03
F28 | Std | 1.2968 × 10^03 | 6.9142 × 10^03 | 3.9998 × 10^03 | 4.5155 × 10^03 | 5.4083 × 10^03 | 1.1913 × 10^03 | 2.1791 × 10^03 | 1.3769 × 10^02
F29 | Ave | 9.4609 × 10^03 | 8.4494 × 10^03 | 9.2722 × 10^03 | 9.5580 × 10^03 | 1.2496 × 10^04 | 1.4890 × 10^04 | 1.5871 × 10^04 | 6.7427 × 10^03
F29 | Std | 8.2399 × 10^02 | 7.6366 × 10^02 | 8.5661 × 10^02 | 7.1012 × 10^02 | 3.0845 × 10^03 | 1.4930 × 10^03 | 4.8154 × 10^03 | 5.1538 × 10^02
F30 | Ave | 1.1000 × 10^09 | 1.2532 × 10^07 | 7.6521 × 10^08 | 1.2125 × 10^06 | 2.6261 × 10^08 | 6.6309 × 10^08 | 7.6570 × 10^09 | 2.3546 × 10^05
F30 | Std | 7.4027 × 10^08 | 8.9771 × 10^06 | 9.5393 × 10^08 | 1.0580 × 10^06 | 2.3539 × 10^08 | 3.1762 × 10^08 | 3.4330 × 10^09 | 1.2030 × 10^05
Table 6. Experimental findings on the CEC 2022 benchmark suite (dim = 10).
Function | Metric | GWO | IWOA | AGPSO | HSO | DBO | BPBO | GJO | MGJO
F1 | Ave | 3.5415 × 10^03 | 5.8385 × 10^02 | 3.0011 × 10^02 | 1.3698 × 10^03 | 1.9787 × 10^03 | 1.0500 × 10^03 | 3.3052 × 10^03 | 1.2035 × 10^03
F1 | Std | 2.8924 × 10^03 | 2.5780 × 10^02 | 4.8818 × 10^−01 | 2.0562 × 10^02 | 3.2939 × 10^03 | 7.1886 × 10^02 | 2.3852 × 10^03 | 9.4838 × 10^02
F2 | Ave | 4.3341 × 10^02 | 4.1967 × 10^02 | 4.3021 × 10^02 | 4.4484 × 10^02 | 4.3446 × 10^02 | 4.3632 × 10^02 | 4.5063 × 10^02 | 4.1308 × 10^02
F2 | Std | 2.2603 × 10^01 | 2.7659 × 10^01 | 3.2172 × 10^01 | 2.9376 × 10^01 | 3.6068 × 10^01 | 3.1836 × 10^01 | 2.7100 × 10^01 | 2.3071 × 10^01
F3 | Ave | 6.0165 × 10^02 | 6.1014 × 10^02 | 6.0010 × 10^02 | 6.1898 × 10^02 | 6.1035 × 10^02 | 6.2235 × 10^02 | 6.0927 × 10^02 | 6.0000 × 10^02
F3 | Std | 2.0177 × 10^00 | 6.9742 × 10^00 | 3.8896 × 10^−01 | 4.1618 × 10^00 | 7.1251 × 10^00 | 9.9230 × 10^00 | 5.4324 × 10^00 | 1.8248 × 10^−07
F4 | Ave | 8.1728 × 10^02 | 8.3292 × 10^02 | 8.1462 × 10^02 | 8.3944 × 10^02 | 8.3566 × 10^02 | 8.2214 × 10^02 | 8.3139 × 10^02 | 8.1481 × 10^02
F4 | Std | 8.1755 × 10^00 | 1.2004 × 10^01 | 7.3628 × 10^00 | 4.5694 × 10^00 | 1.2153 × 10^01 | 7.8466 × 10^00 | 9.3321 × 10^00 | 7.3113 × 10^00
F5 | Ave | 9.3258 × 10^02 | 1.0575 × 10^03 | 9.0181 × 10^02 | 9.1048 × 10^02 | 9.7112 × 10^02 | 9.9164 × 10^02 | 1.0114 × 10^03 | 9.0166 × 10^02
F5 | Std | 6.0653 × 10^01 | 1.9366 × 10^02 | 2.8773 × 10^00 | 9.5826 × 10^00 | 7.5771 × 10^01 | 8.6615 × 10^01 | 1.1416 × 10^02 | 2.9296 × 10^00
F6 | Ave | 5.9301 × 10^03 | 2.4035 × 10^03 | 4.8602 × 10^03 | 2.8979 × 10^03 | 5.9127 × 10^03 | 4.1557 × 10^03 | 1.0893 × 10^04 | 3.1694 × 10^03
F6 | Std | 2.4751 × 10^03 | 9.7532 × 10^02 | 2.2146 × 10^03 | 1.4517 × 10^03 | 2.4702 × 10^03 | 2.2729 × 10^03 | 5.3408 × 10^03 | 1.8218 × 10^03
F7 | Ave | 2.0330 × 10^03 | 2.0365 × 10^03 | 2.0190 × 10^03 | 2.0781 × 10^03 | 2.0393 × 10^03 | 2.0603 × 10^03 | 2.0462 × 10^03 | 2.0098 × 10^03
F7 | Std | 1.4634 × 10^01 | 1.6189 × 10^01 | 6.6032 × 10^00 | 2.9937 × 10^01 | 2.1218 × 10^01 | 1.8419 × 10^01 | 2.4871 × 10^01 | 9.7780 × 10^00
F8 | Ave | 2.2280 × 10^03 | 2.2195 × 10^03 | 2.2229 × 10^03 | 2.2790 × 10^03 | 2.2341 × 10^03 | 2.2335 × 10^03 | 2.2282 × 10^03 | 2.2260 × 10^03
F8 | Std | 2.2821 × 10^01 | 9.5001 × 10^00 | 5.0381 × 10^00 | 6.7574 × 10^01 | 3.3128 × 10^01 | 2.0924 × 10^01 | 3.1546 × 10^00 | 4.8810 × 10^00
F9 | Ave | 2.5692 × 10^03 | 2.5487 × 10^03 | 2.5342 × 10^03 | 2.6927 × 10^03 | 2.5552 × 10^03 | 2.5749 × 10^03 | 2.6032 × 10^03 | 2.5293 × 10^03
F9 | Std | 3.7133 × 10^01 | 4.3650 × 10^01 | 2.6816 × 10^01 | 4.4243 × 10^01 | 4.3793 × 10^01 | 3.7994 × 10^01 | 4.1553 × 10^01 | 1.7324 × 10^−01
F10 | Ave | 2.5867 × 10^03 | 2.5489 × 10^03 | 2.5570 × 10^03 | 2.6071 × 10^03 | 2.5547 × 10^03 | 2.5766 × 10^03 | 2.5887 × 10^03 | 2.5008 × 10^03
F10 | Std | 6.2669 × 10^01 | 5.6479 × 10^01 | 6.1452 × 10^01 | 1.1611 × 10^02 | 6.6522 × 10^01 | 6.3166 × 10^01 | 6.3634 × 10^01 | 1.8645 × 10^−01
F11 | Ave | 2.8748 × 10^03 | 2.7401 × 10^03 | 2.8376 × 10^03 | 2.9730 × 10^03 | 2.7700 × 10^03 | 2.7744 × 10^03 | 2.9859 × 10^03 | 2.6839 × 10^03
F11 | Std | 2.2466 × 10^02 | 1.7449 × 10^02 | 1.8726 × 10^02 | 2.0572 × 10^02 | 1.2654 × 10^02 | 1.7069 × 10^02 | 2.3982 × 10^02 | 9.6289 × 10^01
F12 | Ave | 2.8695 × 10^03 | 2.8717 × 10^03 | 2.8712 × 10^03 | 2.8669 × 10^03 | 2.8772 × 10^03 | 2.8695 × 10^03 | 2.8766 × 10^03 | 2.8650 × 10^03
F12 | Std | 1.1415 × 10^01 | 2.0568 × 10^00 | 1.7011 × 10^01 | 1.0262 × 10^01 | 2.3393 × 10^01 | 1.2292 × 10^01 | 1.5871 × 10^01 | 1.0730 × 10^01
Table 7. Experimental findings on the CEC 2022 benchmark suite (dim = 20).
Function | Metric | GWO | IWOA | AGPSO | HSO | DBO | BPBO | GJO | MGJO
F1 | Ave | 1.6530 × 10^04 | 2.0859 × 10^04 | 1.2046 × 10^04 | 7.8069 × 10^03 | 3.4305 × 10^04 | 2.3080 × 10^04 | 1.6040 × 10^04 | 2.1086 × 10^04
F1 | Std | 5.3662 × 10^03 | 6.8295 × 10^03 | 7.9329 × 10^03 | 4.4438 × 10^03 | 1.0268 × 10^04 | 8.1011 × 10^03 | 5.4810 × 10^03 | 9.1187 × 10^03
F2 | Ave | 4.9755 × 10^02 | 4.6415 × 10^02 | 4.7620 × 10^02 | 5.5925 × 10^02 | 4.9042 × 10^02 | 5.6247 × 10^02 | 6.1581 × 10^02 | 4.6682 × 10^02
F2 | Std | 3.9461 × 10^01 | 1.7315 × 10^01 | 4.3064 × 10^01 | 6.3565 × 10^01 | 5.1359 × 10^01 | 5.3795 × 10^01 | 7.3198 × 10^01 | 2.6211 × 10^01
F3 | Ave | 6.0711 × 10^02 | 6.4070 × 10^02 | 6.0446 × 10^02 | 6.3690 × 10^02 | 6.3486 × 10^02 | 6.4817 × 10^02 | 6.3112 × 10^02 | 6.0000 × 10^02
F3 | Std | 4.4861 × 10^00 | 1.8949 × 10^01 | 3.4282 × 10^00 | 5.2175 × 10^00 | 9.3797 × 10^00 | 1.3872 × 10^01 | 8.6800 × 10^00 | 8.3378 × 10^−04
F4 | Ave | 8.6034 × 10^02 | 8.8687 × 10^02 | 8.5159 × 10^02 | 9.1771 × 10^02 | 9.0819 × 10^02 | 8.8617 × 10^02 | 9.0381 × 10^02 | 8.5679 × 10^02
F4 | Std | 3.0605 × 10^01 | 2.2540 × 10^01 | 1.8976 × 10^01 | 1.1865 × 10^01 | 3.0499 × 10^01 | 1.2952 × 10^01 | 3.1246 × 10^01 | 1.8707 × 10^01
F5 | Ave | 1.2909 × 10^03 | 2.3854 × 10^03 | 1.1391 × 10^03 | 1.1600 × 10^03 | 2.1677 × 10^03 | 2.6117 × 10^03 | 1.8898 × 10^03 | 1.2240 × 10^03
F5 | Std | 3.2993 × 10^02 | 4.9386 × 10^02 | 1.1466 × 10^02 | 2.7274 × 10^02 | 6.0993 × 10^02 | 5.8900 × 10^02 | 4.9049 × 10^02 | 3.5288 × 10^02
F6 | Ave | 2.9536 × 10^06 | 7.1596 × 10^03 | 2.0715 × 10^05 | 4.7155 × 10^03 | 7.2529 × 10^05 | 1.5692 × 10^05 | 1.9905 × 10^07 | 5.3478 × 10^03
F6 | Std | 1.2837 × 10^07 | 6.5420 × 10^03 | 5.1192 × 10^05 | 3.2456 × 10^03 | 1.6150 × 10^06 | 2.8491 × 10^05 | 3.2789 × 10^07 | 3.6362 × 10^03
F7 | Ave | 2.1033 × 10^03 | 2.1340 × 10^03 | 2.0673 × 10^03 | 2.1410 × 10^03 | 2.1268 × 10^03 | 2.1661 × 10^03 | 2.1212 × 10^03 | 2.0613 × 10^03
F7 | Std | 5.1832 × 10^01 | 4.1502 × 10^01 | 3.6736 × 10^01 | 4.9555 × 10^01 | 4.6537 × 10^01 | 4.5945 × 10^01 | 4.1601 × 10^01 | 3.1778 × 10^01
F8 | Ave | 2.2723 × 10^03 | 2.2514 × 10^03 | 2.2457 × 10^03 | 2.4639 × 10^03 | 2.3381 × 10^03 | 2.3025 × 10^03 | 2.2658 × 10^03 | 2.2389 × 10^03
F8 | Std | 5.7417 × 10^01 | 3.8673 × 10^01 | 3.5503 × 10^01 | 1.2185 × 10^02 | 7.0948 × 10^01 | 8.0681 × 10^01 | 5.6463 × 10^01 | 3.8289 × 10^01
F9 | Ave | 2.5315 × 10^03 | 2.4809 × 10^03 | 2.4970 × 10^03 | 2.6989 × 10^03 | 2.5141 × 10^03 | 2.5216 × 10^03 | 2.5919 × 10^03 | 2.4866 × 10^03
F9 | Std | 3.7290 × 10^01 | 1.2906 × 10^−01 | 2.4913 × 10^01 | 7.6430 × 10^01 | 3.2234 × 10^01 | 3.0870 × 10^01 | 4.6168 × 10^01 | 2.6549 × 10^00
F10 | Ave | 3.5982 × 10^03 | 2.6964 × 10^03 | 2.9915 × 10^03 | 3.7925 × 10^03 | 3.6767 × 10^03 | 4.0409 × 10^03 | 4.0490 × 10^03 | 2.5828 × 10^03
F10 | Std | 1.0419 × 10^03 | 5.5138 × 10^02 | 5.8868 × 10^02 | 6.6111 × 10^02 | 1.3246 × 10^03 | 1.2880 × 10^03 | 1.5033 × 10^03 | 1.6528 × 10^02
F11 | Ave | 3.5272 × 10^03 | 2.9012 × 10^03 | 3.4571 × 10^03 | 3.5842 × 10^03 | 3.1253 × 10^03 | 3.2194 × 10^03 | 4.4994 × 10^03 | 2.8934 × 10^03
F11 | Std | 2.6310 × 10^02 | 1.0844 × 10^02 | 4.7210 × 10^02 | 2.8166 × 10^02 | 1.5453 × 10^02 | 1.6347 × 10^02 | 6.3837 × 10^02 | 1.0808 × 10^02
F12 | Ave | 2.9824 × 10^03 | 3.0454 × 10^03 | 3.0078 × 10^03 | 3.0052 × 10^03 | 3.0268 × 10^03 | 3.0509 × 10^03 | 3.0278 × 10^03 | 2.9679 × 10^03
F12 | Std | 2.7524 × 10^01 | 1.6390 × 10^01 | 5.0636 × 10^01 | 3.8326 × 10^01 | 6.9680 × 10^01 | 5.1204 × 10^01 | 5.9206 × 10^01 | 2.3602 × 10^01
Table 8. Wilcoxon rank-sum comparison results (+/=/−) of MGJO against each algorithm on the CEC2017 and CEC2022 suites.
Statistical Results | GWO | IWOA | AGPSO | HSO | DBO | BPBO | GJO
CEC2017 dim = 30 (+/=/−) | (20/0/10) | (27/0/3) | (20/0/10) | (27/0/3) | (29/0/1) | (29/0/1) | (28/0/2)
CEC2017 dim = 100 (+/=/−) | (27/0/3) | (28/0/2) | (25/0/5) | (28/0/2) | (29/0/1) | (29/0/1) | (29/0/1)
CEC2022 dim = 10 (+/=/−) | (11/0/1) | (10/0/2) | (10/0/2) | (11/0/1) | (8/0/4) | (9/0/3) | (11/0/1)
CEC2022 dim = 20 (+/=/−) | (11/0/1) | (12/0/0) | (9/0/3) | (10/0/2) | (12/0/0) | (6/0/6) | (12/0/0)
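The (+/=/−) counts in Table 8 come from pairwise Wilcoxon rank-sum tests at the 0.05 significance level. As a minimal sketch of how one such pairwise mark could be produced (the run data here are synthetic, not the paper's results; variable names are illustrative):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical final fitness values over 30 independent runs (minimization)
mgjo_runs = rng.normal(100.0, 1.0, 30)
gjo_runs = rng.normal(105.0, 1.0, 30)

stat, p = ranksums(mgjo_runs, gjo_runs)
if p >= 0.05:
    mark = "="                                  # no significant difference
elif mgjo_runs.mean() < gjo_runs.mean():
    mark = "+"                                  # MGJO significantly better
else:
    mark = "-"                                  # MGJO significantly worse
```

Repeating this for every benchmark function and summing the marks yields one (+/=/−) triple per competitor.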
Table 9. Outcomes of the Friedman average ranking assessment.
Suites | CEC2017 | CEC2022
Dimensions | 30 | 100 | 10 | 20
Algorithms | M.R | T.R | M.R | T.R | M.R | T.R | M.R | T.R
GWO | 3.70 | 3 | 3.47 | 2 | 4.50 | 4 | 4.33 | 4
IWOA | 4.40 | 5 | 3.70 | 4 | 3.08 | 3 | 4.17 | 3
AGPSO | 3.10 | 2 | 4.27 | 5 | 2.50 | 2 | 2.58 | 2
HSO | 4.03 | 4 | 3.60 | 3 | 6.50 | 7 | 5.42 | 6
DBO | 6.00 | 6 | 6.37 | 6 | 5.00 | 5 | 5.25 | 5
BPBO | 6.37 | 7 | 6.43 | 8 | 5.25 | 6 | 6.33 | 8
GJO | 6.50 | 8 | 6.40 | 7 | 6.75 | 8 | 5.92 | 7
MGJO | 1.90 | 1 | 1.77 | 1 | 2.42 | 1 | 2.00 | 1
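The mean ranks (M.R) in Table 9 follow the standard Friedman procedure: rank the algorithms on each benchmark function, then average the ranks per algorithm. A self-contained sketch with a hypothetical 3-function, 3-algorithm score matrix (the data are illustrative only):

```python
import numpy as np
from scipy.stats import rankdata

# Rows = benchmark functions, columns = algorithms; lower fitness is better
scores = np.array([
    [3.0, 1.0, 2.0],
    [2.5, 1.2, 3.1],
    [4.0, 0.9, 2.2],
])
# Rank within each row (rank 1 = best), then average down the columns
ranks = np.apply_along_axis(rankdata, 1, scores)
mean_ranks = ranks.mean(axis=0)   # one Friedman mean rank per algorithm
```

The total rank (T.R) is then just the ordering of the algorithms by their mean ranks.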
Table 10. Multi-level thresholding outcomes using MGJO with Otsu's criterion as the objective function.
Images | TH = 4 | TH = 6 | TH = 8 | TH = 10
Baboon | Symmetry 17 02130 i001 | Symmetry 17 02130 i002 | Symmetry 17 02130 i003 | Symmetry 17 02130 i004
Baboon | Symmetry 17 02130 i005 | Symmetry 17 02130 i006 | Symmetry 17 02130 i007 | Symmetry 17 02130 i008
Camera | Symmetry 17 02130 i009 | Symmetry 17 02130 i010 | Symmetry 17 02130 i011 | Symmetry 17 02130 i012
Camera | Symmetry 17 02130 i013 | Symmetry 17 02130 i014 | Symmetry 17 02130 i015 | Symmetry 17 02130 i016
Face | Symmetry 17 02130 i017 | Symmetry 17 02130 i018 | Symmetry 17 02130 i019 | Symmetry 17 02130 i020
Face | Symmetry 17 02130 i021 | Symmetry 17 02130 i022 | Symmetry 17 02130 i023 | Symmetry 17 02130 i024
Girl | Symmetry 17 02130 i025 | Symmetry 17 02130 i026 | Symmetry 17 02130 i027 | Symmetry 17 02130 i028
Girl | Symmetry 17 02130 i029 | Symmetry 17 02130 i030 | Symmetry 17 02130 i031 | Symmetry 17 02130 i032
Hunter | Symmetry 17 02130 i033 | Symmetry 17 02130 i034 | Symmetry 17 02130 i035 | Symmetry 17 02130 i036
Hunter | Symmetry 17 02130 i037 | Symmetry 17 02130 i038 | Symmetry 17 02130 i039 | Symmetry 17 02130 i040
Lena | Symmetry 17 02130 i041 | Symmetry 17 02130 i042 | Symmetry 17 02130 i043 | Symmetry 17 02130 i044
Lena | Symmetry 17 02130 i045 | Symmetry 17 02130 i046 | Symmetry 17 02130 i047 | Symmetry 17 02130 i048
Saturn | Symmetry 17 02130 i049 | Symmetry 17 02130 i050 | Symmetry 17 02130 i051 | Symmetry 17 02130 i052
Saturn | Symmetry 17 02130 i053 | Symmetry 17 02130 i054 | Symmetry 17 02130 i055 | Symmetry 17 02130 i056
Terrace | Symmetry 17 02130 i057 | Symmetry 17 02130 i058 | Symmetry 17 02130 i059 | Symmetry 17 02130 i060
Terrace | Symmetry 17 02130 i061 | Symmetry 17 02130 i062 | Symmetry 17 02130 i063 | Symmetry 17 02130 i064
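The segmentations in Table 10, and the fitness values reported in the following tables, use Otsu's between-class variance as the objective to be maximized. A self-contained sketch of that objective for an arbitrary set of thresholds over a 256-bin grayscale histogram (the function name and the synthetic bimodal test histogram are ours, for illustration):

```python
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """Between-class variance for multilevel thresholds over a 256-bin histogram."""
    p = hist / hist.sum()                     # normalized histogram (probabilities)
    levels = np.arange(len(p))
    mu_total = (p * levels).sum()             # global mean gray level
    edges = [0] + sorted(int(t) for t in thresholds) + [len(p)]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                    # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2   # weighted squared deviation
    return var

# Synthetic bimodal histogram: modes around gray levels 50 and 190
hist = np.zeros(256)
hist[40:60] = 10.0
hist[180:200] = 10.0
```

An optimizer such as MGJO searches over the threshold vector to maximize this value; a threshold in the valley between the two modes (e.g., 120 here) scores higher than one cutting through a mode (e.g., 50).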
Table 11. Ave and Std of the optimal fitness values with Otsu as the objective function.
Images | TH | Metrics | GWO | IWOA | AGPSO | HSO | DBO | BPBO | GJO | MGJO
Baboon | 4 | Ave | 3.2985 × 10^03 | 3.2958 × 10^03 | 3.3001 × 10^03 | 3.2652 × 10^03 | 3.2993 × 10^03 | 3.2945 × 10^03 | 3.2933 × 10^03 | 3.3008 × 10^03
Baboon | 4 | Std | 4.1827 × 10^00 | 2.8020 × 10^00 | 6.4510 × 10^−01 | 1.9685 × 10^01 | 1.6491 × 10^00 | 4.6041 × 10^00 | 6.1253 × 10^00 | 3.9126 × 10^−02
Baboon | 6 | Ave | 3.3648 × 10^03 | 3.3594 × 10^03 | 3.3674 × 10^03 | 3.3327 × 10^03 | 3.3604 × 10^03 | 3.3615 × 10^03 | 3.3611 × 10^03 | 3.3718 × 10^03
Baboon | 6 | Std | 8.5391 × 10^00 | 8.3701 × 10^00 | 3.3757 × 10^00 | 1.0698 × 10^01 | 1.1013 × 10^01 | 5.4107 × 10^00 | 4.8473 × 10^00 | 4.9081 × 10^−01
Baboon | 8 | Ave | 3.3936 × 10^03 | 3.3869 × 10^03 | 3.3937 × 10^03 | 3.3671 × 10^03 | 3.3876 × 10^03 | 3.3862 × 10^03 | 3.3851 × 10^03 | 3.3992 × 10^03
Baboon | 8 | Std | 4.1562 × 10^00 | 4.7990 × 10^00 | 3.6513 × 10^00 | 1.1122 × 10^01 | 5.2377 × 10^00 | 5.6342 × 10^00 | 7.9354 × 10^00 | 1.1804 × 10^00
Baboon | 10 | Ave | 3.4079 × 10^03 | 3.4029 × 10^03 | 3.4081 × 10^03 | 3.3900 × 10^03 | 3.4026 × 10^03 | 3.4042 × 10^03 | 3.4019 × 10^03 | 3.4139 × 10^03
Baboon | 10 | Std | 3.1677 × 10^00 | 3.6528 × 10^00 | 2.9531 × 10^00 | 7.1104 × 10^00 | 5.4729 × 10^00 | 3.6108 × 10^00 | 4.1270 × 10^00 | 1.0583 × 10^00
Camera | 4 | Ave | 4.5975 × 10^03 | 4.5975 × 10^03 | 4.5990 × 10^03 | 4.5836 × 10^03 | 4.5983 × 10^03 | 4.5968 × 10^03 | 4.5950 × 10^03 | 4.6001 × 10^03
Camera | 4 | Std | 2.6817 × 10^00 | 1.9470 × 10^00 | 1.1715 × 10^00 | 6.7691 × 10^00 | 1.5377 × 10^00 | 2.3036 × 10^00 | 3.2502 × 10^00 | 1.0296 × 10^00
Camera | 6 | Ave | 4.6459 × 10^03 | 4.6416 × 10^03 | 4.6477 × 10^03 | 4.6189 × 10^03 | 4.6407 × 10^03 | 4.6371 × 10^03 | 4.6396 × 10^03 | 4.6512 × 10^03
Camera | 6 | Std | 4.9420 × 10^00 | 4.9456 × 10^00 | 4.2107 × 10^00 | 8.4537 × 10^00 | 5.7329 × 10^00 | 5.4444 × 10^00 | 5.8709 × 10^00 | 3.7977 × 10^−01
Camera | 8 | Ave | 4.6603 × 10^03 | 4.6605 × 10^03 | 4.6648 × 10^03 | 4.6416 × 10^03 | 4.6567 × 10^03 | 4.6597 × 10^03 | 4.6587 × 10^03 | 4.6690 × 10^03
Camera | 8 | Std | 6.2075 × 10^00 | 3.9347 × 10^00 | 2.7189 × 10^00 | 9.5247 × 10^00 | 5.5192 × 10^00 | 5.1934 × 10^00 | 4.4859 × 10^00 | 1.0893 × 10^00
Camera | 10 | Ave | 4.6739 × 10^03 | 4.6706 × 10^03 | 4.6739 × 10^03 | 4.6527 × 10^03 | 4.6669 × 10^03 | 4.6688 × 10^03 | 4.6692 × 10^03 | 4.6791 × 10^03
Camera | 10 | Std | 3.6129 × 10^00 | 2.9887 × 10^00 | 1.9996 × 10^00 | 8.4757 × 10^00 | 4.4742 × 10^00 | 3.2907 × 10^00 | 3.9017 × 10^00 | 9.6454 × 10^−01
Face | 4 | Ave | 2.1204 × 10^03 | 2.1163 × 10^03 | 2.1217 × 10^03 | 2.0907 × 10^03 | 2.1198 × 10^03 | 2.1180 × 10^03 | 2.1127 × 10^03 | 2.1224 × 10^03
Face | 4 | Std | 2.5080 × 10^00 | 4.3746 × 10^00 | 1.3134 × 10^00 | 1.5501 × 10^01 | 3.1669 × 10^00 | 3.5967 × 10^00 | 1.2110 × 10^01 | 8.0498 × 10^−02
Face | 6 | Ave | 2.1764 × 10^03 | 2.1740 × 10^03 | 2.1803 × 10^03 | 2.1483 × 10^03 | 2.1763 × 10^03 | 2.1760 × 10^03 | 2.1666 × 10^03 | 2.1843 × 10^03
Face | 6 | Std | 7.3609 × 10^00 | 7.4735 × 10^00 | 3.2680 × 10^00 | 1.2176 × 10^01 | 6.8784 × 10^00 | 5.4467 × 10^00 | 9.2604 × 10^00 | 5.6488 × 10^−01
Face | 8 | Ave | 2.2014 × 10^03 | 2.1973 × 10^03 | 2.2035 × 10^03 | 2.1789 × 10^03 | 2.1994 × 10^03 | 2.2020 × 10^03 | 2.1936 × 10^03 | 2.2094 × 10^03
Face | 8 | Std | 6.3197 × 10^00 | 5.3907 × 10^00 | 3.5669 × 10^00 | 9.7466 × 10^00 | 5.1068 × 10^00 | 4.1424 × 10^00 | 6.4159 × 10^00 | 1.3020 × 10^00
Face | 10 | Ave | 2.2132 × 10^03 | 2.2107 × 10^03 | 2.2173 × 10^03 | 2.1972 × 10^03 | 2.2106 × 10^03 | 2.2137 × 10^03 | 2.2073 × 10^03 | 2.2225 × 10^03
Face | 10 | Std | 4.9300 × 10^00 | 5.5702 × 10^00 | 2.8225 × 10^00 | 7.6236 × 10^00 | 7.2712 × 10^00 | 4.5883 × 10^00 | 6.9676 × 10^00 | 9.0505 × 10^−01
Girl | 4 | Ave | 2.5331 × 10^03 | 2.5294 × 10^03 | 2.5331 × 10^03 | 2.5096 × 10^03 | 2.5331 × 10^03 | 2.5312 × 10^03 | 2.5304 × 10^03 | 2.5339 × 10^03
Girl | 4 | Std | 2.2156 × 10^00 | 3.8668 × 10^00 | 2.3908 × 10^00 | 1.3297 × 10^01 | 1.1515 × 10^00 | 3.2110 × 10^00 | 3.2500 × 10^00 | 9.2841 × 10^−03
Girl | 6 | Ave | 2.5824 × 10^03 | 2.5761 × 10^03 | 2.5813 × 10^03 | 2.5522 × 10^03 | 2.5806 × 10^03 | 2.5775 × 10^03 | 2.5775 × 10^03 | 2.5843 × 10^03
Girl | 6 | Std | 2.3014 × 10^00 | 5.6223 × 10^00 | 2.7930 × 10^00 | 1.2912 × 10^01 | 2.6832 × 10^00 | 4.1286 × 10^00 | 4.1278 × 10^00 | 3.5990 × 10^−01
Girl | 8 | Ave | 2.6021 × 10^03 | 2.5950 × 10^03 | 2.6019 × 10^03 | 2.5821 × 10^03 | 2.5967 × 10^03 | 2.5963 × 10^03 | 2.5963 × 10^03 | 2.6057 × 10^03
Girl | 8 | Std | 3.8462 × 10^00 | 4.5544 × 10^00 | 2.8120 × 10^00 | 8.0226 × 10^00 | 4.4712 × 10^00 | 4.2578 × 10^00 | 4.1632 × 10^00 | 1.2443 × 10^00
Girl | 10 | Ave | 2.6120 × 10^03 | 2.6065 × 10^03 | 2.6116 × 10^03 | 2.5040 × 10^03 | 2.6073 × 10^03 | 2.6074 × 10^03 | 2.6087 × 10^03 | 2.6166 × 10^03
Girl | 10 | Std | 3.1581 × 10^00 | 3.8739 × 10^00 | 2.3522 × 10^00 | 4.7301 × 10^02 | 3.7702 × 10^00 | 3.8689 × 10^00 | 2.9101 × 10^00 | 8.5354 × 10^−01
Hunter | 4 | Ave | 3.1889 × 10^03 | 3.1866 × 10^03 | 3.1895 × 10^03 | 3.1472 × 10^03 | 3.1892 × 10^03 | 3.1865 × 10^03 | 3.1864 × 10^03 | 3.1902 × 10^03
Hunter | 4 | Std | 2.1547 × 10^00 | 2.5499 × 10^00 | 7.3977 × 10^−01 | 2.8846 × 10^01 | 9.6951 × 10^−01 | 4.0963 × 10^00 | 2.5061 × 10^00 | 1.9189 × 10^−01
Hunter | 6 | Ave | 3.2433 × 10^03 | 3.2385 × 10^03 | 3.2439 × 10^03 | 3.2103 × 10^03 | 3.2411 × 10^03 | 3.2396 × 10^03 | 3.2404 × 10^03 | 3.2464 × 10^03
Hunter | 6 | Std | 4.4935 × 10^00 | 4.3160 × 10^00 | 1.4189 × 10^00 | 1.6430 × 10^01 | 4.7173 × 10^00 | 4.9492 × 10^00 | 3.6477 × 10^00 | 7.5295 × 10^−01
Hunter | 8 | Ave | 3.2684 × 10^03 | 3.2598 × 10^03 | 3.2663 × 10^03 | 3.2386 × 10^03 | 3.2613 × 10^03 | 3.2612 × 10^03 | 3.2629 × 10^03 | 3.2715 × 10^03
Hunter | 8 | Std | 2.8558 × 10^00 | 4.3686 × 10^00 | 3.2022 × 10^00 | 1.2175 × 10^01 | 5.5053 × 10^00 | 4.2536 × 10^00 | 4.0888 × 10^00 | 9.2072 × 10^−01
Hunter | 10 | Ave | 3.2787 × 10^03 | 3.2728 × 10^03 | 3.2777 × 10^03 | 3.2506 × 10^03 | 3.2725 × 10^03 | 3.2734 × 10^03 | 3.2742 × 10^03 | 3.2836 × 10^03
Hunter | 10 | Std | 3.2829 × 10^00 | 4.5857 × 10^00 | 3.5896 × 10^00 | 1.0999 × 10^01 | 4.5371 × 10^00 | 4.0687 × 10^00 | 3.7927 × 10^00 | 8.4336 × 10^−01
Lena | 4 | Ave | 3.6827 × 10^03 | 3.6811 × 10^03 | 3.6851 × 10^03 | 3.6374 × 10^03 | 3.6842 × 10^03 | 3.6792 × 10^03 | 3.6780 × 10^03 | 3.6860 × 10^03
Lena | 4 | Std | 4.3455 × 10^00 | 4.8787 × 10^00 | 1.5975 × 10^00 | 3.3474 × 10^01 | 1.9209 × 10^00 | 7.4090 × 10^00 | 5.3111 × 10^00 | 1.1916 × 10^−01
Lena | 6 | Ave | 3.7573 × 10^03 | 3.7507 × 10^03 | 3.7581 × 10^03 | 3.7236 × 10^03 | 3.7556 × 10^03 | 3.7517 × 10^03 | 3.7487 × 10^03 | 3.7654 × 10^03
Lena | 6 | Std | 8.8347 × 10^00 | 7.9039 × 10^00 | 5.8364 × 10^00 | 1.3862 × 10^01 | 7.5451 × 10^00 | 8.2557 × 10^00 | 1.0989 × 10^01 | 5.2392 × 10^−01
Lena | 8 | Ave | 3.7870 × 10^03 | 3.7825 × 10^03 | 3.7896 × 10^03 | 3.7610 × 10^03 | 3.7840 × 10^03 | 3.7823 × 10^03 | 3.7790 × 10^03 | 3.7946 × 10^03
Lena | 8 | Std | 4.3173 × 10^00 | 5.0347 × 10^00 | 3.3579 × 10^00 | 1.1018 × 10^01 | 6.2895 × 10^00 | 5.9568 × 10^00 | 6.3455 × 10^00 | 1.0259 × 10^00
Lena | 10 | Ave | 3.8035 × 10^03 | 3.7996 × 10^03 | 3.8053 × 10^03 | 3.7832 × 10^03 | 3.7985 × 10^03 | 3.7992 × 10^03 | 3.7967 × 10^03 | 3.8118 × 10^03
Lena | 10 | Std | 5.5683 × 10^00 | 3.8848 × 10^00 | 3.2966 × 10^00 | 7.9172 × 10^00 | 5.1814 × 10^00 | 5.4822 × 10^00 | 4.5858 × 10^00 | 1.9341 × 10^00
Saturn | 4 | Ave | 5.2205 × 10^03 | 5.2183 × 10^03 | 5.2215 × 10^03 | 5.1955 × 10^03 | 5.2208 × 10^03 | 5.2197 × 10^03 | 5.2163 × 10^03 | 5.2220 × 10^03
Saturn | 4 | Std | 2.7184 × 10^00 | 3.2853 × 10^00 | 7.1242 × 10^−01 | 1.4455 × 10^01 | 1.5341 × 10^00 | 2.6001 × 10^00 | 7.1545 × 10^00 | 3.0003 × 10^−02
Saturn | 6 | Ave | 5.2692 × 10^03 | 5.2659 × 10^03 | 5.2713 × 10^03 | 5.2475 × 10^03 | 5.2686 × 10^03 | 5.2665 × 10^03 | 5.2660 × 10^03 | 5.2726 × 10^03
Saturn | 6 | Std | 3.1010 × 10^00 | 3.0999 × 10^00 | 1.7429 × 10^00 | 9.6328 × 10^00 | 3.3621 × 10^00 | 3.6739 × 10^00 | 3.9684 × 10^00 | 4.9814 × 10^−01
Saturn | 8 | Ave | 5.2889 × 10^03 | 5.2860 × 10^03 | 5.2898 × 10^03 | 5.2741 × 10^03 | 5.2871 × 10^03 | 5.2846 × 10^03 | 5.2848 × 10^03 | 5.2927 × 10^03
Saturn | 8 | Std | 2.5168 × 10^00 | 2.4244 × 10^00 | 2.0422 × 10^00 | 6.7971 × 10^00 | 3.3044 × 10^00 | 3.3184 × 10^00 | 3.6748 × 10^00 | 6.0088 × 10^−01
Saturn | 10 | Ave | 5.2992 × 10^03 | 5.2967 × 10^03 | 5.2985 × 10^03 | 5.2840 × 10^03 | 5.2955 × 10^03 | 5.2958 × 10^03 | 5.2953 × 10^03 | 5.3026 × 10^03
Saturn | 10 | Std | 2.0853 × 10^00 | 1.9785 × 10^00 | 2.5973 × 10^00 | 5.1445 × 10^00 | 3.0911 × 10^00 | 3.4434 × 10^00 | 3.2255 × 10^00 | 5.9890 × 10^−01
Terrace | 4 | Ave | 2.6390 × 10^03 | 2.6352 × 10^03 | 2.6392 × 10^03 | 2.5956 × 10^03 | 2.6385 × 10^03 | 2.6352 × 10^03 | 2.6361 × 10^03 | 2.6402 × 10^03
Terrace | 4 | Std | 3.3119 × 10^00 | 5.8176 × 10^00 | 1.1456 × 10^00 | 2.2690 × 10^01 | 1.9837 × 10^00 | 4.7550 × 10^00 | 3.1223 × 10^00 | 3.7343 × 10^−02
Terrace | 6 | Ave | 2.7000 × 10^03 | 2.6900 × 10^03 | 2.6978 × 10^03 | 2.6594 × 10^03 | 2.6961 × 10^03 | 2.6915 × 10^03 | 2.6914 × 10^03 | 2.7020 × 10^03
Terrace | 6 | Std | 2.3293 × 10^00 | 5.6849 × 10^00 | 3.1792 × 10^00 | 1.5664 × 10^01 | 5.2586 × 10^00 | 5.9177 × 10^00 | 6.8622 × 10^00 | 4.1380 × 10^−01
Terrace | 8 | Ave | 2.7248 × 10^03 | 2.7156 × 10^03 | 2.7224 × 10^03 | 2.6972 × 10^03 | 2.7195 × 10^03 | 2.7198 × 10^03 | 2.7190 × 10^03 | 2.7284 × 10^03
Terrace | 8 | Std | 4.1674 × 10^00 | 5.3877 × 10^00 | 3.7844 × 10^00 | 9.5988 × 10^00 | 5.3308 × 10^00 | 3.6885 × 10^00 | 3.5037 × 10^00 | 1.0277 × 10^00
Terrace | 10 | Ave | 2.7384 × 10^03 | 2.7316 × 10^03 | 2.7359 × 10^03 | 2.7091 × 10^03 | 2.7335 × 10^03 | 2.7315 × 10^03 | 2.7325 × 10^03 | 2.7419 × 10^03
Terrace | 10 | Std | 2.7462 × 10^00 | 4.7626 × 10^00 | 2.7346 × 10^00 | 1.1469 × 10^01 | 4.1832 × 10^00 | 4.3540 × 10^00 | 3.5022 × 10^00 | 1.1363 × 10^00
Friedman-Rank | 2.73 | 5.70 | 3.49 | 7.93 | 4.30 | 5.32 | 5.37 | 1.18
Final-Rank | 2 | 7 | 3 | 8 | 4 | 5 | 6 | 1
Table 12. Ave and Std of PSNR over all test images under the Otsu criterion.
Images | TH | Metrics | GWO | IWOA | AGPSO | HSO | DBO | BPBO | GJO | MGJO
Baboon | 4 | Ave | 18.0952 | 18.2554 | 18.1909 | 17.4612 | 18.1520 | 17.7155 | 18.0750 | 18.2235
Baboon | 4 | Std | 0.2659 | 0.3598 | 0.1669 | 1.0028 | 0.2494 | 0.3909 | 0.4987 | 0.0587
Baboon | 6 | Ave | 21.0145 | 20.9345 | 21.2826 | 19.4113 | 20.7927 | 20.2304 | 20.8563 | 21.3266
Baboon | 6 | Std | 0.5899 | 0.7698 | 0.3768 | 0.9375 | 0.8806 | 0.6951 | 0.7546 | 0.1942
Baboon | 8 | Ave | 22.9893 | 22.6465 | 23.0175 | 20.9897 | 22.4766 | 21.6938 | 22.6795 | 23.5610
Baboon | 8 | Std | 0.6046 | 0.7081 | 0.5819 | 0.8642 | 0.8183 | 0.8491 | 0.7449 | 0.3852
Baboon | 10 | Ave | 24.3626 | 24.0177 | 24.4181 | 22.8657 | 24.2954 | 23.3296 | 24.1569 | 25.0596
Baboon | 10 | Std | 0.6461 | 0.7130 | 0.6644 | 1.0584 | 0.8705 | 0.8327 | 0.8141 | 0.4374
Camera | 4 | Ave | 18.3413 | 19.0060 | 18.3878 | 17.5884 | 18.5210 | 18.2794 | 18.2820 | 18.9996
Camera | 4 | Std | 0.8146 | 1.0353 | 0.7431 | 0.5742 | 0.9090 | 0.8633 | 0.7464 | 0.8982
Camera | 6 | Ave | 21.1833 | 21.1172 | 21.7305 | 19.8444 | 21.0829 | 20.0684 | 21.1714 | 21.8160
Camera | 6 | Std | 1.0512 | 0.9556 | 0.4051 | 1.4722 | 1.0282 | 1.1818 | 0.9735 | 0.1918
Camera | 8 | Ave | 22.5215 | 22.5343 | 23.0863 | 21.4453 | 22.6065 | 22.0485 | 22.5461 | 23.0673
Camera | 8 | Std | 1.0347 | 0.7697 | 0.6265 | 1.5412 | 0.9137 | 0.9112 | 0.8603 | 0.3722
Camera | 10 | Ave | 23.7482 | 23.5751 | 23.7985 | 22.4346 | 23.8518 | 22.7772 | 23.8941 | 24.0101
Camera | 10 | Std | 0.5901 | 1.0687 | 0.8536 | 1.5501 | 1.1107 | 0.8033 | 1.0044 | 0.3680
Face | 4 | Ave | 19.6190 | 19.5317 | 19.7119 | 18.7452 | 19.6555 | 19.4749 | 19.4372 | 19.7431
Face | 4 | Std | 0.2258 | 0.3417 | 0.1033 | 0.5727 | 0.1728 | 0.3023 | 0.4106 | 0.0407
Face | 6 | Ave | 22.1843 | 21.8667 | 22.4044 | 20.9382 | 22.1876 | 22.0046 | 21.7645 | 22.5966
Face | 6 | Std | 0.3766 | 0.7107 | 0.3096 | 0.6649 | 0.5024 | 0.5694 | 0.4976 | 0.1161
Face | 8 | Ave | 24.2104 | 23.8621 | 24.2547 | 22.5695 | 24.0278 | 24.0344 | 23.6361 | 24.6478
Face | 8 | Std | 0.4971 | 0.4271 | 0.5197 | 0.7039 | 0.4714 | 0.4733 | 0.4647 | 0.2921
Face | 10 | Ave | 25.3571 | 25.0458 | 25.7182 | 23.7160 | 25.1385 | 25.2690 | 24.9027 | 26.4192
Face | 10 | Std | 0.5372 | 0.7170 | 0.3808 | 0.6686 | 0.7911 | 0.6107 | 0.6236 | 0.2050
Girl | 4 | Ave | 21.9857 | 21.8110 | 21.9611 | 21.1045 | 22.0105 | 22.1478 | 21.8203 | 21.9595
Girl | 4 | Std | 0.2566 | 0.3911 | 0.2828 | 0.8421 | 0.1957 | 0.2841 | 0.4064 | 0.0575
Girl | 6 | Ave | 24.2986 | 23.8934 | 24.2860 | 22.8888 | 24.1695 | 24.2970 | 23.9450 | 24.4533
Girl | 6 | Std | 0.4115 | 0.6289 | 0.3916 | 1.1162 | 0.5167 | 0.4098 | 0.5812 | 0.2265
Girl | 8 | Ave | 26.2020 | 25.4631 | 26.1407 | 24.3075 | 25.6081 | 25.9612 | 25.5383 | 26.3961
Girl | 8 | Std | 0.4272 | 0.7363 | 0.4635 | 0.8497 | 0.6244 | 0.5376 | 0.6101 | 0.2812
Girl | 10 | Ave | 27.5148 | 26.6413 | 27.3021 | 25.1911 | 26.8166 | 27.2625 | 26.7874 | 28.0996
Girl | 10 | Std | 0.4306 | 0.6467 | 0.5666 | 1.5980 | 0.6751 | 0.5098 | 0.6573 | 0.2191
Hunter | 4 | Ave | 21.9108 | 21.8017 | 21.9201 | 20.8587 | 21.9017 | 21.9194 | 21.7408 | 21.9927
Hunter | 4 | Std | 0.1196 | 0.1763 | 0.0913 | 0.7714 | 0.0954 | 0.1103 | 0.1617 | 0.0288
Hunter | 6 | Ave | 24.4434 | 24.0101 | 24.3554 | 22.6226 | 24.2966 | 24.4141 | 24.1654 | 24.5299
Hunter | 6 | Std | 0.3119 | 0.3344 | 0.2244 | 0.7563 | 0.3138 | 0.2544 | 0.3551 | 0.2920
Hunter | 8 | Ave | 26.0975 | 25.2994 | 25.8516 | 23.9899 | 25.5459 | 25.8217 | 25.6038 | 26.2694
Hunter | 8 | Std | 0.2557 | 0.4485 | 0.3325 | 0.7630 | 0.4435 | 0.3720 | 0.3582 | 0.1212
Hunter | 10 | Ave | 27.2447 | 26.5114 | 27.0846 | 24.7966 | 26.6295 | 26.9010 | 26.6213 | 27.8330
Hunter | 10 | Std | 0.4310 | 0.5125 | 0.4322 | 0.7187 | 0.4018 | 0.4739 | 0.4719 | 0.1634
Lena | 4 | Ave | 19.0565 | 19.0102 | 19.0952 | 18.2606 | 19.0721 | 19.0340 | 18.9694 | 19.1212
Lena | 4 | Std | 0.0888 | 0.1206 | 0.0605 | 0.6663 | 0.0848 | 0.1201 | 0.1461 | 0.0279
Lena | 6 | Ave | 21.5413 | 21.2753 | 21.5354 | 20.3493 | 21.3816 | 21.2560 | 21.1731 | 21.8512
Lena | 6 | Std | 0.3328 | 0.2984 | 0.3216 | 0.5040 | 0.3738 | 0.2894 | 0.4290 | 0.0518
Lena | 8 | Ave | 23.1264 | 22.8547 | 23.2917 | 21.8235 | 22.8829 | 22.7026 | 22.7378 | 23.6057
Lena | 8 | Std | 0.4001 | 0.4382 | 0.3825 | 0.6423 | 0.3833 | 0.4045 | 0.4917 | 0.2887
Lena | 10 | Ave | 24.2832 | 24.1340 | 24.5612 | 22.9175 | 23.9934 | 23.7670 | 23.8262 | 24.9759
Lena | 10 | Std | 0.4871 | 0.4338 | 0.3794 | 0.5785 | 0.5769 | 0.5505 | 0.4946 | 0.3618
Saturn | 4 | Ave | 22.2645 | 22.2029 | 22.3277 | 21.4109 | 22.3166 | 22.3655 | 22.1494 | 22.3423
Saturn | 4 | Std | 0.1317 | 0.2373 | 0.0585 | 0.5927 | 0.1163 | 0.1700 | 0.3784 | 0.0270
Saturn | 6 | Ave | 25.0762 | 24.7413 | 25.1943 | 23.5879 | 24.9502 | 25.1228 | 24.7550 | 25.3021
Saturn | 6 | Std | 0.3315 | 0.3935 | 0.2412 | 0.7636 | 0.3529 | 0.2672 | 0.3721 | 0.0929
Saturn | 8 | Ave | 26.9601 | 26.5835 | 27.0003 | 25.4044 | 26.7132 | 26.6869 | 26.5320 | 27.4439
Saturn | 8 | Std | 0.3644 | 0.3530 | 0.3017 | 0.7246 | 0.3923 | 0.5373 | 0.3931 | 0.1670
Saturn | 10 | Ave | 28.4526 | 27.9390 | 28.3402 | 26.4258 | 27.8525 | 28.2296 | 27.8216 | 29.0811
Saturn | 10 | Std | 0.3701 | 0.3439 | 0.4948 | 0.7156 | 0.5970 | 0.4880 | 0.4451 | 0.1862
Terrace | 4 | Ave | 21.4487 | 21.3472 | 21.4543 | 20.2352 | 21.4414 | 21.3513 | 21.3498 | 21.4807
Terrace | 4 | Std | 0.0912 | 0.1680 | 0.0446 | 0.5816 | 0.0605 | 0.1477 | 0.1189 | 0.0057
Terrace | 6 | Ave | 23.8957 | 23.4246 | 23.7389 | 22.0499 | 23.6955 | 23.6429 | 23.4807 | 24.0005
Terrace | 6 | Std | 0.1441 | 0.3004 | 0.2232 | 0.6379 | 0.2887 | 0.2593 | 0.3648 | 0.0381
Terrace | 8 | Ave | 25.5812 | 24.9050 | 25.3399 | 23.7264 | 25.1479 | 25.2346 | 25.1673 | 25.8515
Terrace | 8 | Std | 0.2904 | 0.4396 | 0.3309 | 0.5738 | 0.4494 | 0.3141 | 0.2904 | 0.0847
Terrace | 10 | Ave | 26.9638 | 26.1866 | 26.6473 | 24.4053 | 26.4007 | 26.3419 | 26.2958 | 27.3360
Terrace | 10 | Std | 0.3448 | 0.4855 | 0.3269 | 0.8057 | 0.4211 | 0.3865 | 0.3352 | 0.1720
Friedman-Rank | 2.82 | 5.52 | 3.74 | 7.94 | 4.33 | 4.71 | 5.40 | 1.54
Final-Rank | 2 | 7 | 3 | 8 | 4 | 5 | 6 | 1
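The PSNR values above compare each thresholded image against its original. For 8-bit grayscale images the standard definition can be sketched as follows (the helper name and the toy arrays are ours, for illustration):

```python
import numpy as np

def psnr(original, segmented, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized grayscale images."""
    diff = original.astype(np.float64) - segmented.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")       # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a constant offset of 16 gray levels gives MSE = 256
a = np.zeros((8, 8))
b = np.full((8, 8), 16.0)
value = psnr(a, b)                # about 24.05 dB
```

Higher PSNR indicates a segmented image that deviates less from the original, which is why the larger entries in Table 12 mark the better segmentations.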
Table 13. Ave and Std of FSIM over all test images under the Otsu criterion.
Images | TH | Metrics | GWO | IWOA | AGPSO | HSO | DBO | BPBO | GJO | MGJO
Baboon4Ave0.8176 0.82180.8196 0.8040 0.8187 0.8098 0.8184 0.8200
Std0.0066 0.0107 0.0046 0.0274 0.0062 0.0088 0.0116 0.0014
6Ave0.8795 0.8775 0.8840 0.8459 0.8760 0.8613 0.8755 0.8847
Std0.0136 0.0209 0.0092 0.0250 0.0224 0.0160 0.0201 0.0042
8Ave0.9092 0.9043 0.9092 0.8759 0.9012 0.8846 0.9074 0.9198
Std0.0140 0.0195 0.0137 0.0207 0.0202 0.0192 0.0186 0.0087
10Ave0.9264 0.9230 0.9280 0.9050 0.9295 0.9091 0.9279 0.9363
Std0.0139 0.0150 0.0141 0.0237 0.0170 0.0161 0.0171 0.0082
Camera4Ave0.8350 0.8327 0.83610.8185 0.8318 0.8311 0.8288 0.8358
Std0.0078 0.0085 0.0059 0.0138 0.0084 0.0074 0.0096 0.0044
6Ave0.8700 0.8661 0.8746 0.8473 0.8671 0.8624 0.8664 0.8770
Std0.0088 0.0118 0.0061 0.0150 0.0088 0.0082 0.0133 0.0033
8Ave0.8881 0.8868 0.8960 0.8686 0.8864 0.8823 0.8870 0.9006
Std0.0106 0.0118 0.0073 0.0173 0.0103 0.0106 0.0096 0.0039
10Ave0.9077 0.9016 0.9058 0.8823 0.9014 0.8938 0.9037 0.9148
Std0.0063 0.0113 0.0078 0.0150 0.0100 0.0098 0.0111 0.0040
Face4Ave0.7519 0.7518 0.7536 0.7300 0.7526 0.75410.7476 0.7539
Images TH Metrics GWO IWOA AGPSO HSO DBO BPBO GJO MGJO
Std 0.0054 0.0059 0.0021 0.0196 0.0043 0.0052 0.0128 0.0009
6 Ave 0.8285 0.8249 0.8363 0.7929 0.8304 0.8321 0.8119 0.8434
Std 0.0132 0.0160 0.0065 0.0223 0.0123 0.0109 0.0162 0.0014
8 Ave 0.8742 0.8648 0.8762 0.8289 0.8700 0.8729 0.8577 0.8912
Std 0.0147 0.0107 0.0092 0.0204 0.0102 0.0108 0.0118 0.0035
10 Ave 0.8936 0.8875 0.9043 0.8602 0.8880 0.8962 0.8829 0.9203
Std 0.0126 0.0149 0.0076 0.0160 0.0205 0.0118 0.0136 0.0036
Girl 4 Ave 0.8283 0.8255 0.8286 0.8019 0.8278 0.8281 0.8262 0.8295
Std 0.0039 0.0095 0.0067 0.0141 0.0043 0.0069 0.0067 0.0009
6 Ave 0.8668 0.8612 0.8666 0.8402 0.8678 0.8610 0.8643 0.8691
Std 0.0049 0.0095 0.0053 0.0154 0.0059 0.0072 0.0073 0.0045
8 Ave 0.8979 0.8887 0.8971 0.8629 0.8914 0.8883 0.8906 0.9042
Std 0.0078 0.0080 0.0058 0.0153 0.0085 0.0099 0.0082 0.0024
10 Ave 0.9167 0.9039 0.9155 0.8760 0.9090 0.9070 0.9097 0.9274
Std 0.0065 0.0093 0.0059 0.0212 0.0092 0.0083 0.0073 0.0023
Hunter 4 Ave 0.8502 0.8486 0.8508 0.8150 0.8501 0.8466 0.8477 0.8526
Std 0.0030 0.0036 0.0025 0.0200 0.0030 0.0052 0.0038 0.0009
6 Ave 0.9022 0.8953 0.9028 0.8641 0.8993 0.8969 0.8979 0.9065
Std 0.0060 0.0052 0.0030 0.0169 0.0065 0.0063 0.0047 0.0019
8 Ave 0.9313 0.9193 0.9288 0.8910 0.9215 0.9206 0.9236 0.9364
Std 0.0042 0.0069 0.0052 0.0135 0.0081 0.0065 0.0069 0.0021
10 Ave 0.9440 0.9345 0.9415 0.9037 0.9349 0.9348 0.9377 0.9529
Std 0.0053 0.0082 0.0070 0.0140 0.0070 0.0075 0.0067 0.0024
Lena 4 Ave 0.7790 0.7792 0.7816 0.7608 0.7802 0.7758 0.7775 0.7800
Std 0.0032 0.0038 0.0021 0.0197 0.0024 0.0057 0.0051 0.0013
6 Ave 0.8403 0.8346 0.8406 0.8102 0.8367 0.8313 0.8328 0.8486
Std 0.0102 0.0097 0.0102 0.0144 0.0099 0.0093 0.0137 0.0036
8 Ave 0.8705 0.8653 0.8737 0.8409 0.8671 0.8612 0.8627 0.8811
Std 0.0103 0.0116 0.0084 0.0173 0.0087 0.0118 0.0096 0.0050
10 Ave 0.8897 0.8863 0.8934 0.8642 0.8841 0.8817 0.8818 0.9015
Std 0.0092 0.0088 0.0063 0.0154 0.0099 0.0106 0.0116 0.0052
Saturn 4 Ave 0.8472 0.8490 0.8482 0.8456 0.8487 0.8507 0.8481 0.8480
Std 0.0037 0.0062 0.0019 0.0116 0.0035 0.0045 0.0074 0.0005
6 Ave 0.8827 0.8788 0.8840 0.8688 0.8821 0.8847 0.8810 0.8849
Std 0.0049 0.0073 0.0046 0.0130 0.0076 0.0058 0.0071 0.0025
8 Ave 0.9086 0.9049 0.9085 0.8900 0.9047 0.9051 0.9048 0.9126
Std 0.0061 0.0073 0.0046 0.0122 0.0072 0.0070 0.0050 0.0035
10 Ave 0.9258 0.9188 0.9246 0.9031 0.9175 0.9227 0.9173 0.9317
Std 0.0050 0.0056 0.0060 0.0108 0.0076 0.0073 0.0072 0.0021
Terrace 4 Ave 0.8433 0.8397 0.8441 0.8064 0.8430 0.8388 0.8410 0.8446
Std 0.0040 0.0086 0.0023 0.0211 0.0040 0.0071 0.0050 0.0006
6 Ave 0.9016 0.8895 0.8983 0.8561 0.8986 0.8855 0.8922 0.9044
Std 0.0048 0.0107 0.0059 0.0191 0.0067 0.0092 0.0110 0.0020
8 Ave 0.9286 0.9178 0.9264 0.8915 0.9219 0.9170 0.9227 0.9349
Std 0.0100 0.0107 0.0075 0.0151 0.0068 0.0083 0.0083 0.0043
10 Ave 0.9441 0.9367 0.9417 0.9050 0.9367 0.9311 0.9386 0.9534
Std 0.0066 0.0099 0.0066 0.0175 0.0102 0.0103 0.0054 0.0041
Friedman-Rank 3.29 4.96 3.68 7.72 4.21 5.62 4.60 1.93
Final-Rank 2 6 3 8 4 7 5 1
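The Friedman-Rank row is the mean of each algorithm's per-case rank over all image/threshold-level combinations (rank 1 = best average metric value in a case), and the Final-Rank row simply orders the algorithms by that mean. A minimal sketch of the mean-rank computation, using toy scores rather than the paper's data (ties are ignored for brevity; a full Friedman test would average tied ranks):

```python
import numpy as np

def friedman_mean_ranks(scores):
    """Mean Friedman rank per algorithm (column) of an
    (n_cases, n_algorithms) score matrix, higher = better.
    Rank 1 is assigned to the best algorithm in each case (row)."""
    scores = np.asarray(scores, dtype=float)
    n_cases, n_alg = scores.shape
    # Column indices sorted from best to worst score in each row.
    order = np.argsort(-scores, axis=1)
    ranks = np.empty_like(scores)
    rows = np.arange(n_cases)[:, None]
    # The j-th best algorithm in a row receives rank j+1.
    ranks[rows, order] = np.arange(1, n_alg + 1)
    return ranks.mean(axis=0)

# Two toy cases, three algorithms: the third wins both cases.
scores = [[0.90, 0.85, 0.95],
          [0.80, 0.70, 0.88]]
print(friedman_mean_ranks(scores))  # -> [2. 3. 1.]
```

Applied to the per-case metric averages, the lowest mean rank (MGJO, 1.93 here) yields Final-Rank 1.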
Table 14. Ave and Std of all test images for SSIM in Otsu.
Images TH Metrics GWO IWOA AGPSO HSO DBO BPBO GJO MGJO
Baboon 4 Ave 0.7189 0.7276 0.7233 0.6908 0.7208 0.7033 0.7174 0.7251
Std 0.0123 0.0169 0.0084 0.0512 0.0122 0.0182 0.0239 0.0028
6 Ave 0.8238 0.8166 0.8315 0.7672 0.8158 0.7976 0.8168 0.8327
Std 0.0218 0.0289 0.0152 0.0397 0.0355 0.0271 0.0294 0.0081
8 Ave 0.8710 0.8612 0.8717 0.8175 0.8571 0.8368 0.8676 0.8856
Std 0.0195 0.0246 0.0175 0.0302 0.0311 0.0278 0.0247 0.0123
10 Ave 0.8979 0.8901 0.8970 0.8633 0.9011 0.8753 0.9000 0.9088
Std 0.0168 0.0200 0.0185 0.0339 0.0206 0.0229 0.0218 0.0105
Camera 4 Ave 0.6960 0.7248 0.7008 0.6702 0.7026 0.6940 0.6895 0.7248
Std 0.0407 0.0442 0.0296 0.0420 0.0387 0.0350 0.0403 0.0344
6 Ave 0.7821 0.7796 0.7987 0.7427 0.7795 0.7466 0.7825 0.8013
Std 0.0303 0.0351 0.0155 0.0612 0.0372 0.0365 0.0371 0.0075
8 Ave 0.8154 0.8117 0.8296 0.7861 0.8198 0.8015 0.8171 0.8311
Std 0.0349 0.0329 0.0216 0.0585 0.0322 0.0278 0.0292 0.0106
10 Ave 0.8404 0.8355 0.8393 0.8183 0.8461 0.8190 0.8457 0.8525
Std 0.0206 0.0320 0.0236 0.0410 0.0277 0.0239 0.0284 0.0093
Face 4 Ave 0.7033 0.7056 0.7064 0.6829 0.7055 0.7075 0.7006 0.7066
Std 0.0104 0.0121 0.0043 0.0266 0.0093 0.0100 0.0176 0.0019
6 Ave 0.7859 0.7804 0.7930 0.7546 0.7882 0.7903 0.7690 0.7996
Std 0.0131 0.0209 0.0083 0.0294 0.0138 0.0133 0.0190 0.0019
8 Ave 0.8396 0.8284 0.8388 0.7935 0.8340 0.8381 0.8220 0.8537
Std 0.0157 0.0122 0.0100 0.0239 0.0113 0.0114 0.0135 0.0041
10 Ave 0.8617 0.8526 0.8730 0.8281 0.8556 0.8661 0.8503 0.8890
Std 0.0128 0.0182 0.0088 0.0207 0.0236 0.0133 0.0156 0.0047
Girl 4 Ave 0.7141 0.7136 0.7132 0.6763 0.7128 0.7120 0.7110 0.7141
Std 0.0036 0.0144 0.0110 0.0272 0.0066 0.0099 0.0097 0.0007
6 Ave 0.7530 0.7512 0.7531 0.7280 0.7652 0.7473 0.7601 0.7566
Std 0.0140 0.0211 0.0165 0.0229 0.0196 0.0138 0.0206 0.0148
8 Ave 0.7973 0.7875 0.7925 0.7499 0.8008 0.7804 0.8034 0.8079
Std 0.0152 0.0178 0.0132 0.0236 0.0165 0.0205 0.0196 0.0106
10 Ave 0.8258 0.8135 0.8217 0.7707 0.8265 0.8016 0.8258 0.8365
Std 0.0164 0.0258 0.0171 0.0322 0.0229 0.0134 0.0157 0.0111
Hunter 4 Ave 0.6959 0.6976 0.6963 0.6446 0.6961 0.6856 0.6995 0.7037
Std 0.0146 0.0158 0.0125 0.0374 0.0167 0.0181 0.0175 0.0064
6 Ave 0.7722 0.7593 0.7722 0.7133 0.7667 0.7643 0.7706 0.7771
Std 0.0170 0.0172 0.0156 0.0358 0.0198 0.0148 0.0128 0.0104
8 Ave 0.8122 0.8005 0.8111 0.7603 0.8106 0.7947 0.8109 0.8228
Std 0.0109 0.0216 0.0128 0.0283 0.0152 0.0197 0.0162 0.0091
10 Ave 0.8411 0.8304 0.8352 0.7806 0.8373 0.8182 0.8410 0.8532
Std 0.0135 0.0167 0.0195 0.0326 0.0239 0.0176 0.0178 0.0100
Lena 4 Ave 0.6743 0.6738 0.6757 0.6575 0.6759 0.6728 0.6727 0.6756
Std 0.0044 0.0046 0.0015 0.0242 0.0030 0.0056 0.0057 0.0007
6 Ave 0.7472 0.7458 0.7468 0.7179 0.7417 0.7366 0.7413 0.7568
Std 0.0179 0.0173 0.0142 0.0206 0.0161 0.0133 0.0239 0.0055
8 Ave 0.7930 0.7857 0.7959 0.7624 0.7837 0.7813 0.7893 0.8053
Std 0.0225 0.0223 0.0198 0.0324 0.0162 0.0189 0.0272 0.0158
10 Ave 0.8207 0.8227 0.8303 0.7902 0.8174 0.8059 0.8151 0.8364
Std 0.0197 0.0221 0.0190 0.0269 0.0239 0.0191 0.0286 0.0141
Saturn 4 Ave 0.8302 0.8316 0.8316 0.8236 0.8319 0.8343 0.8303 0.8310
Std 0.0040 0.0076 0.0034 0.0137 0.0049 0.0056 0.0105 0.0007
6 Ave 0.8752 0.8694 0.8783 0.8561 0.8739 0.8748 0.8729 0.8806
Std 0.0060 0.0094 0.0058 0.0162 0.0107 0.0074 0.0076 0.0023
8 Ave 0.9024 0.8981 0.9023 0.8822 0.8983 0.8966 0.8983 0.9069
Std 0.0065 0.0094 0.0054 0.0138 0.0087 0.0090 0.0059 0.0027
10 Ave 0.9213 0.9144 0.9195 0.8978 0.9105 0.9167 0.9119 0.9267
Std 0.0058 0.0057 0.0061 0.0099 0.0092 0.0063 0.0085 0.0025
Terrace 4 Ave 0.7184 0.7145 0.7195 0.6644 0.7184 0.7101 0.7139 0.7190
Std 0.0048 0.0144 0.0053 0.0344 0.0070 0.0155 0.0101 0.0017
6 Ave 0.8010 0.7915 0.7954 0.7296 0.8020 0.7825 0.7955 0.8054
Std 0.0120 0.0244 0.0137 0.0339 0.0142 0.0163 0.0189 0.0053
8 Ave 0.8452 0.8329 0.8393 0.7892 0.8383 0.8232 0.8458 0.8543
Std 0.0166 0.0233 0.0177 0.0300 0.0175 0.0166 0.0173 0.0129
10 Ave 0.8735 0.8659 0.8714 0.8160 0.8661 0.8540 0.8773 0.8894
Std 0.0147 0.0210 0.0160 0.0347 0.0210 0.0194 0.0174 0.0112
Friedman-Rank 3.58 4.48 4.14 7.55 4.06 5.58 3.97 2.65
Final-Rank 2 6 5 8 4 7 3 1
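The SSIM values reported above score each thresholded result against the original image on luminance, contrast, and structure. As a hedged illustration only, the sketch below computes a single-window (global) SSIM with the standard constants from Wang et al.; practical SSIM implementations, likely including the one behind Table 14, instead average the index over local sliding windows:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window (global) SSIM between two grayscale images.

    C1 and C2 are the conventional stabilizing constants
    (0.01*L)^2 and (0.03*L)^2 for dynamic range L. Real SSIM
    averages this quantity over local windows; this global
    version is a simplified illustration."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0, and any distortion (for example a uniform brightness shift in the segmented output) pulls the score below 1, which is why values closer to 1 in the table indicate segmentations more faithful to the source image.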
Zhang, X.; Bao, Z.; Li, X.; Wang, J. Multi-Threshold Art Symmetry Image Segmentation and Numerical Optimization Based on the Modified Golden Jackal Optimization. Symmetry 2025, 17, 2130. https://doi.org/10.3390/sym17122130