
Large-Scale Sparse Multimodal Multiobjective Optimization via Multi-Stage Search and RL-Assisted Environmental Selection

1 School of Computer and Electronics and Information, Guangxi University, Nanning 530004, China
2 Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, China
3 Key Laboratory of Parallel, Distributed and Intelligent Computing (Guangxi University), Education Department of Guangxi Zhuang Autonomous Region, Nanning 530004, China
* Author to whom correspondence should be addressed.
Electronics 2026, 15(3), 616; https://doi.org/10.3390/electronics15030616
Submission received: 5 January 2026 / Revised: 28 January 2026 / Accepted: 28 January 2026 / Published: 30 January 2026

Abstract

Multimodal multiobjective optimization problems (MMOPs) are widely encountered in real-world applications. While numerous evolutionary algorithms have been developed to locate equivalent Pareto-optimal solutions, existing Multimodal Multiobjective Evolutionary Algorithms (MMOEAs) often struggle to handle large-scale decision variables and sparse Pareto sets due to the curse of dimensionality and unknown sparsity. To address these challenges, this paper proposes a novel approach named MASR-MMEA, which stands for Large-scale Sparse Multimodal Multiobjective Optimization via Multi-stage Search and Reinforcement Learning (RL)-assisted Environmental Selection. Specifically, to enhance search efficiency, a multi-stage framework is established incorporating three key innovations. First, a dual-strategy genetic operator based on improved hybrid encoding is designed, employing sparse-sensing dynamic redistribution for binary vectors and a sparse fuzzy decision framework for real vectors. Second, an affinity-based elite strategy utilizing Mahalanobis distance is introduced to pair real vectors with compatible binary vectors, increasing the probability of generating superior offspring. Finally, an adaptive sparse environmental selection strategy assisted by Multilayer Perceptron (MLP) reinforcement learning is developed. By utilizing the MLP-generated Guiding Vector (GDV) to direct the evolutionary search toward efficient regions and employing an iteration-based adaptive mechanism to regulate genetic operators, this strategy accelerates convergence. Furthermore, it dynamically quantifies population-level sparsity and adjusts selection pressure through a modified crowding distance mechanism to filter structural redundancy, thereby effectively balancing convergence and multimodal diversity. 
Comparative studies against six state-of-the-art methods demonstrate that MASR-MMEA significantly outperforms existing approaches in terms of both solution quality and convergence speed on large-scale sparse MMOPs.

1. Introduction

Multiobjective optimization problems (MOPs) [1,2,3,4,5] are prevalent in the real world and require simultaneous optimization of multiple objective functions. To effectively address these challenges, many multiobjective evolutionary algorithms (MOEAs) have been proposed, including well-known methods such as NSGA-II [6] and MOEA/D [7], which have proven effective in various applications. The primary goals of most MOEAs are to achieve a well-distributed and widely covered Pareto Front (PF), maintain diversity, and ensure high convergence quality, enabling decision-makers (DMs) to select the final solution from the PF based on their individual preferences. Effective diversity preservation mechanisms also play a crucial role in enhancing the search capability of MOEAs. In certain MOPs, multiple Pareto-optimal solutions in the decision space can correspond to the same point on the PF. Such problems are referred to as multimodal multiobjective optimization problems (MMOPs) [8]. Many real-world problems exhibit multimodal features, such as diet design [9], spacecraft mission design [10], rocket engine design [11], and functional brain imaging [12]. A simple example of an MMOP with two objectives and two decision variables is shown in Figure 1, where points $x_a$ and $x_b$ are distant in the decision space but share the same objective vector. Identifying all optimal solutions helps DMs better understand the problem. Additionally, if constraints emerge, it becomes easier to switch to alternative solutions. Traditional MOEAs have been found to be ineffective in solving MMOPs [13]. To address these challenges, several multimodal multiobjective evolutionary algorithms (MMEAs) have been proposed, demonstrating strong performance on benchmark problems.
Examples include the Omni-optimizer [14,15], Dual Niche Evolutionary Algorithm (DNEA) [16], Decision Space Niche-based NSGA-II (DN-NSGA-II) [17], Multiobjective Particle Swarm Optimization with Ring Topology and Special Crowding Distance (MO_Ring_PSO_SCD) [4], Convergence-Promoting Density Evolutionary Algorithm (CPDEA) [18], and Multiobjective Evolutionary Algorithm with Weighted Indicators (MMEAWI) [19], among others. The aforementioned algorithms are specifically designed to solve MOPs with multiple PSs and a single common PF. However, these methods face considerable challenges when addressing large-scale MMOPs, as they are not specifically designed for such problems. Moreover, due to the curse of dimensionality, it is challenging for these algorithms to effectively search for all equivalent Pareto sets in high-dimensional spaces [20]. In addition, sparse optimization problems, in which only a few decision variables take non-zero values, are commonly encountered among large-scale MMOPs, such as feature selection [21], neural architecture search [22], and neural network training [23]. Several specialized MOEAs have been developed to tackle large-scale sparse MOPs, including approaches utilizing decision variable grouping [24,25], decision space reduction [26,27], and innovative search strategies [28,29]. Large-scale MOPs with sparse Pareto-optimal solutions present significant challenges, and current research on algorithms capable of solving them is limited. MP-MMEA [30] was the first algorithm designed for such tasks, utilizing multiple subpopulations guided by reference vectors. Building on this, HHC-MMEA [31] introduced hierarchical clustering to differentiate resource allocation among subpopulations and added an adaptive mutation method for more efficient co-evolution. Existing algorithms typically use hybrid encoding to generate sparse solutions, applying separate optimization to binary and real vectors.
However, while considerable effort has been put into improving binary vector strategies, enhancements for real vectors remain minimal, often limited to traditional Simulated Binary Crossover (SBX) and Polynomial Mutation (PM) operators. This imbalance may hinder the overall effectiveness of these algorithms, given the crucial role of real vectors in practical applications. Additionally, hybrid encoding often overlooks the potential interaction between binary and real vectors, limiting effective solution space exploration and ultimately impacting the quality of the final solutions. Therefore, to address the issues mentioned above, this paper proposes a dual-strategy genetic operator based on an improved hybrid encoding approach, as well as an affinity-based elite strategy. Additionally, to enhance the algorithm’s generalizability in practical applications, an adaptive sparse environmental selection strategy based on multilayer perceptron (MLP) reinforcement learning is developed. Specifically, the proposed MASR-MMEA makes the following three key contributions.
1.
Dual-Strategy Genetic Operator Based on Improved Hybrid Encoding: To optimize the binary vector and maintain the sparsity of the solutions, a dynamic redistribution strategy based on sparse sensing is proposed. The sparse fuzzy decision variables framework, in turn, is tailored for real vectors: it adjusts parameters dynamically based on the sparsity of the solution space, enabling more precise determination of the search step sizes in the fuzzy evolutionary algorithm.
2.
Affinity-Based Elite Strategy: This strategy updates the sets of real and binary vectors by calculating affinity using random selection and the Mahalanobis distance. By ensuring that real vectors are paired with the most compatible binary vectors, this method increases the likelihood of generating superior offspring solutions that are closer to the Pareto-optimal set.
3.
Adaptive sparse environment selection strategy based on multilayer perceptron (MLP) reinforcement learning: This strategy introduces an MLP-based reinforcement learning mechanism into the environmental selection phase to dynamically model and update solution sparsity. By adaptively adjusting selection pressure according to the learned sparsity distribution and gradient-descent direction information, it effectively balances convergence and diversity, enhances the preservation of sparse Pareto-optimal solutions, and accelerates convergence in large-scale multimodal multiobjective optimization problems.
The remainder of this paper is organized as follows: In Section 2, the paper provides a detailed discussion of related work in large-scale sparse multiobjective optimization and large-scale multimodal multiobjective optimization. Section 3 presents a comprehensive explanation of the MASR-MMEA algorithm. In Section 4, the paper lists the experimental results and conducts a detailed analysis. Finally, Section 5 concludes the paper.

2. Related Works

2.1. Large-Scale MMOPs

Large-scale multiobjective problems [32] may be mathematically defined as follows:
$$\min_{x} \; f(x) = \big(f_1(x), \ldots, f_m(x)\big), \quad \text{s.t. } x \in \Omega$$
where $x = (x_1, x_2, \ldots, x_D) \in \Omega$ represents a solution composed of $D$ decision variables within the decision space $\Omega$. The function $f: \Omega \to \Lambda \subseteq \mathbb{R}^m$ consists of $m$ objectives, where $\Lambda$ is the objective space and $\Omega \subseteq \mathbb{R}^D$ is the decision space. The dominance relationship between two solutions is defined as follows:
$$\forall i: f_i(x_1) \le f_i(x_2) \;\wedge\; \exists j: f_j(x_1) < f_j(x_2)$$
If this condition is satisfied, then $x_1$ is said to dominate $x_2$, where $i, j \in \{1, 2, \ldots, m\}$. A Pareto-optimal solution is not dominated by any other solution in $\Omega$. A multiobjective optimization problem is defined as a multimodal multiobjective optimization problem (MMOP) [13] if it satisfies at least one of the following conditions:
1.
The problem contains at least one local Pareto-optimal solution.
2.
The problem contains at least two equivalent global Pareto-optimal solutions that correspond to the same point on the Pareto front (PF).
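As a concrete illustration, the dominance condition of Equation (2) can be checked with a few lines of Python (assuming minimization, as in the formulation above):

```python
def dominates(f1, f2):
    """Return True if objective vector f1 Pareto-dominates f2 (minimization):
    f1 is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))
```

For example, `dominates([1.0, 2.0], [2.0, 3.0])` holds, while two identical vectors, or two vectors that each win on one objective, are mutually non-dominated.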
Large-scale MMOPs involve numerous decision variables, adding extra complexity to the algorithm design. In these problems, particularly those with sparse Pareto-optimal solutions, the primary challenge lies in effectively exploring the sparse solution space, which is characterized by most decision variables taking zero values. The sparse nature of these solutions makes it difficult for traditional search and optimization algorithms to be applied effectively, as they usually assume that the solution space is uniformly distributed across all dimensions.
To overcome these challenges, it is essential to develop new algorithms and techniques that are capable of adapting to both high-dimensional and sparse characteristics and remain flexible in the face of unknown environmental changes. The development of these methods will be discussed in detail in the next subsection, where the paper will explore the latest advancements in current research and outline future research directions.

2.2. Existing Sparse Large-Scale Multiobjective Algorithms

Large-scale sparse multiobjective optimization problems have attracted growing attention in recent years, resulting in the development of numerous multiobjective evolutionary algorithms (MOEAs) to address these challenges. These algorithms are broadly categorized into two main types. The first category comprises sparse large-scale MOEAs based on Pareto dominance methods. SparseEA [28] is a pioneering algorithm in this domain, employing a framework similar to NSGA-II [6]. Its primary innovation lies in the use of a hybrid encoding strategy to handle sparse solutions, which is then applied within genetic operators; by calculating the importance of decision variables, the algorithm dynamically flips the zero and non-zero values in the mask, effectively optimizing sparse solutions. MSKEA [33] is a multi-stage knowledge-guided evolutionary algorithm that estimates the sparse distribution of Pareto-optimal solutions by designing three different types of knowledge-guided vectors, and evolves through stages based on these three knowledge-guided phases. The second category consists of sparse large-scale MOEAs using dimensionality reduction techniques. MOEA/PSL [27] leverages a denoising autoencoder (DAE) and a restricted Boltzmann machine (RBM) to develop a Pareto-optimal subspace learning-based MOEA; the DAE learns compact representations, while the RBM captures the sparse distribution of decision variables, and dimensionality is reduced by operating in the hidden layers of the two networks. PM-MOEA [26] uses pattern mining to find the largest and smallest sets of non-zero decision variables within the population, followed by specialized genetic operators for dimensionality reduction.

2.3. Existing Multimodal Multiobjective Algorithms

Since the concept of multimodal multiobjective optimization was formally introduced in 2005 [12,14], numerous algorithms have been developed. Early foundational works primarily focused on Pareto-based strategies. For instance, the classic Omni-optimizer [14] introduced crowding distance in both decision and objective spaces to maintain diversity. Following this, DN-NSGA-II [17] utilized a decision-space-based neighborhood method to identify equivalent points on the Pareto front, while MO_Ring_PSO_SCD [4] adopted a ring topology to refine diversity measurements. While these methods established the groundwork, recent research has shifted towards more sophisticated ranking and evolutionary mechanisms to handle increasing complexity. In recent years, several advanced approaches have emerged. HREA [34] employs a hierarchical ranking method combined with a crowding balance strategy, utilizing a user-controlled parameter γ to balance local solution quality with global convergence. To further enhance search efficiency, GSEA [35] proposes a grid-based adaptive exploration method coupled with a two-stage environmental selection strategy. In parallel, addressing the uneven clustering of populations, the NDE-HC algorithm [36] incorporates the Hilbert curve into a niching differential evolution framework. It features a Hilbert curve-based neighborhood reproduction method to capture promising regions and employs a convergence-based density indicator to effectively distinguish and balance convergence and diversity solutions in the decision space. Similarly, MMOEA/TSC [37] divides the evolutionary process into diversity-oriented and convergence-oriented species conservation stages to locate global and local Pareto sets. Addressing the challenge of simultaneous distribution management, DsDMEA [38] integrates a dual-space distribution metric (DsDM) into environmental selection. 
By removing bi-noncontributing solutions, it effectively balances decision space convergence with objective space distribution. Furthermore, MMOEA-EGR [39] integrates reinforcement learning to dynamically select evolutionary stages, coupled with a remodeling population distribution strategy to promote efficient collaboration and uniform distribution across the decision space. In a similar vein of integrating machine learning, CoSOMEA [40] utilizes a Self-Organizing Map (SOM) neural network to extract decision space information for better identification of both global and local optima. This is coupled with a coevolutionary mechanism that balances exploration and exploitation to prevent the algorithm from being trapped in local regions. To tackle the specific challenges of large-scale environments, LMMODE [41] proposes a differential evolution framework based on fuzzy clustering. It employs the Fuzzy C-means (FCM) algorithm to partition the high-dimensional search space into multiple subspaces and utilizes a multi-stage optimization strategy to balance the performance between the objective and decision spaces. Focusing on constrained environments, CMMOGA_DLF [42] introduces a dynamic local fitness evaluation to leverage local optima for diversity, while employing a local dominance criterion to guide the search through infeasible regions. Additionally, BOEA [43] transforms MMOPs into a bi-objective problem, innovatively using boundary crossover and diversity metrics to effectively capture multiple Pareto sets. Decomposition-based methods have also evolved significantly. While MOEA/D-AD [44] pioneered this category, newer algorithms have improved upon the allocation and retention strategies [45]. A notable recent advancement is MMEA-DES [46], which introduces the Dynamic Niche Distance (DND) metric. 
MMEA-DES applies a dual-elite subpopulation strategy based on decomposition and DND, ensuring a well-distributed solution set in both decision and objective spaces, surpassing the limitations of static niche definitions found in earlier works like the dual clustering MOEA [47]. Further advancing decomposition strategies, MOEA/D-BDN [48] integrates a bi-dynamic niche strategy with adaptive weight decomposition. It employs an archiving mechanism to preserve historical individuals and uses a bi-dynamic niche distance (BDN) metric to evaluate density across both decision and objective spaces, while an adaptive weight adjustment strategy dynamically guides the search to improve distribution uniformity.
Despite these advancements, existing multimodal multiobjective algorithms face significant challenges in solving large-scale problems. Most current MOEAs focus on maintaining diversity in the decision space to find equivalent Pareto sets (PSs). However, this approach becomes computationally prohibitive in large-scale MMOPs, where measuring population diversity and identifying all equivalent PSs within a vast search space is difficult. Therefore, there is a pressing need to develop new strategies specifically designed to address the complexities of large-scale multimodal multiobjective optimization.

2.4. Motivation

To maintain population diversity in the decision space, existing multimodal multiobjective optimization algorithms aim to find equivalent Pareto-optimal solutions. However, this approach faces challenges when dealing with large-scale multimodal multiobjective optimization problems (MMOPs). In high-dimensional decision spaces, traditional diversity metrics such as crowding distance and Euclidean distance lose effectiveness, making it difficult for conventional algorithms to accurately estimate solution diversity and reducing their performance on large-scale MMOPs. To address these challenges, multi-population co-evolutionary mechanisms have been proposed as a promising strategy for improving solution diversity. However, existing MOEAs that employ multi-subpopulation strategies often struggle with large-scale MMOPs due to the inefficiency of Euclidean distance in high-dimensional spaces and the inability to effectively distinguish solutions between subpopulations. The MP-MMEA algorithm [30] provides an effective solution for large-scale MMOPs with sparse Pareto-optimal solutions. It utilizes multiple subpopulations that evolve independently, guided by adaptively updated guiding vectors, which accelerates convergence and diversifies search directions. However, its performance declines as the number of equivalent Pareto sets increases. HHC-MMEA [31] employs hybrid hierarchical clustering to classify subpopulations, integrating both local and global guidance information. It enhances convergence speed in large search spaces and increases population diversity by using an adaptive compilation method and an environmental selection method based on local guidance. However, it tends to overlook local Pareto sets.
This study introduces a dual-strategy genetic operator based on hybrid encoding, designed to function across multiple stages. At each stage, specialized strategies are employed to independently optimize binary and real-valued vectors. Additionally, an affinity-based elite strategy is proposed to efficiently combine binary and real vectors, facilitating the generation of higher-quality offspring. To further enhance convergence speed and improve the algorithm’s adaptability to real-world applications, an adaptive sparse environment selection strategy based on multilayer perceptron (MLP) reinforcement learning has also been developed.

3. The Proposed MASR-MMEA

3.1. Framework of MASR-MMEA

To depict the overall framework and main components, the pseudocode and flowchart of the proposed algorithm are presented in Algorithm 1 and Figure 2, respectively. The process begins with the initialization and preprocessing of the population. Subsequently, a zero matrix of the same size as the population and decision variable dimensions is initialized to represent the initial binary vectors, along with a guidance vector $gv$ initialized to all ones. Based on the predefined number of subpopulations K, each subpopulation is initialized, followed by environmental selection and the updating of the guidance vector $gv$. While the population has not yet met the termination condition, subpopulation ranking is first performed and the ranking information is stored in the rank array. To balance exploration and exploitation across different phases of the search process, a threshold is calculated and adjusted; specifically, a conditional check tests whether the current number of function evaluations is less than this threshold. The threshold $\lambda$ is a percentage of the maximum function evaluations, dynamically adjusted through a formula related to the number of objectives $M$ of the problem (see Equation (3)). Before reaching $\lambda$, the $gv$ values corresponding to each subpopulation are updated, followed by generating offspring solutions using the Dual-Strategy Genetic Operator (Algorithm 2) and optimizing the newly generated offspring using the Affinity-Based Elite Strategy (Algorithm 3). After exceeding $\lambda$, the decision variables (Decs) are processed and $gv$ is updated using the MLP Training mechanism (Algorithm 4). In the final stage, crossover and mutation operations are performed using the method from MP-MMEA, followed by an environmental selection operation. When the generation count is a multiple of T, a merge-split operation is performed on the population. Finally, all subpopulations are merged, and the population P is returned.
$$\lambda = \mathrm{Rate} + 0.2 \times \frac{1}{3M}$$
Algorithm 1: Framework of MASR-MMEA
Electronics 15 00616 i001
Algorithm 2: Dual-Strategy Genetic Operator
Electronics 15 00616 i002
Algorithm 3: Affinity-Based Elite Strategy
Electronics 15 00616 i003
Algorithm 4: MLP Training
Electronics 15 00616 i004

3.2. Dual-Strategy Genetic Operator

To effectively promote the generation of sparse solutions in both population initialization and genetic operators, a hybrid encoding strategy is employed in this paper. Each solution x comprises a real vector and a binary vector, where $x_i = real_i \times bin_i$. Since the real vector represents variables in the continuous decision space and the binary vector guides the search direction in the binary space, optimizing the real vector allows for effective adjustment of decision variable values, while optimizing the binary vector facilitates the generation of more zero-valued decision variables, thereby maintaining the sparsity of the solutions. Considering the characteristics of hybrid encoding, this paper proposes an improved dual-strategy genetic operator. Specifically, the Dynamic Redistribution Strategy (Algorithm 5) is employed to optimize the binary vector, while the Sparse Fuzzy Decision Variables Framework (Algorithm 6) is tailored for the real vector. By integrating these components, the Dual-Strategy Genetic Operator (Algorithm 2) effectively leverages the advantages of continuous and discrete variables to adapt to diverse optimization environments.
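A minimal sketch of how a hybrid-encoded solution is decoded; the variable names are illustrative, not the paper's implementation:

```python
import numpy as np

def decode(real_vec, bin_vec):
    # element-wise product: the binary vector zeroes out inactive variables,
    # the real vector supplies the magnitudes of the active ones
    return real_vec * bin_vec

real = np.array([0.7, -1.2, 0.3, 2.5])
mask = np.array([1, 0, 0, 1])
x = decode(real, mask)              # -> [0.7, 0.0, 0.0, 2.5]
sparsity = mask.sum() / mask.size   # fraction of non-zero variables: 0.5
```

Only the positions where the mask is 1 contribute to the decoded solution, which is what keeps the final solutions sparse regardless of the real vector's values.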
Algorithm 5: Dynamic Redistribution Strategy
Electronics 15 00616 i005
Algorithm 6: Sparsity-Based Adaptive Fuzzy Evolutionary Algorithm
Electronics 15 00616 i006

3.2.1. Dynamic Redistribution Strategy

To optimize the bin vector and maintain the sparsity of the solutions, the proposed operator employs a dynamic redistribution strategy for binary vectors based on sparse sensing. This strategy starts with a comprehensive analysis of the effectiveness of decision variables across all individuals by calculating the average sparsity of the solution vectors. The average sparsity is defined as the ratio of non-zero decision variables to the total number of decision variables. Based on a deep analysis of the average sparsity of solutions, the population size, and the evolutionary state of the population, the number of redistributed mask vectors is collaboratively determined. This approach fully considers the complexity and diversity of the population’s evolutionary dynamics while ensuring an effective balance between exploration and exploitation in high-dimensional optimization problems. The core of this strategy lies in dynamically adjusting the binary vector structure to optimize search efficiency and algorithm adaptability. By employing this dynamic adjustment, the algorithm is able to better meet the optimization needs at different evolutionary stages. The specific calculation formula is as follows:
$$\mathit{Sparsity} = \frac{\sum_{i=1}^{N} \sum_{j=1}^{D} \mathit{Mask}_{ij}}{N}$$
$$\mathit{DynamicS} = \frac{\mathit{Sparsity}}{D + \phi}$$
$$\mathit{Iter} = \frac{FE}{\mathit{MaxFE}}$$
$$\mathit{AdaptiveMask} = \begin{cases} K & \text{if } \mathit{AdaptiveMask} \le K \\[4pt] \left\lceil \dfrac{\mathit{Iter} \times N \times \mathit{DynamicS}^{\gamma}}{\lambda} \right\rceil & \text{otherwise} \end{cases}$$
In this context, Sparsity refers to the average number of non-zero decision variables per individual within the current population. Its value gradually approaches $D \times \theta$, where $D$ is the total number of decision variables and $\theta$ represents the sparsity of the Pareto-optimal solutions. DynamicS represents the ratio of non-zero decision variables to the total number of decision variables across all individuals in the population; this ratio is a critical indicator for determining how the binary vectors are redistributed based on the current dynamic coefficient status. $\phi$ is a small constant used to prevent division by zero. AdaptiveMask is the number of groups in the dynamic redistribution of binary vectors, Iter represents the algorithm’s iteration progress, $\gamma$ is an exponent used to adjust the weight of DynamicS, and $\lambda$ is a scaling factor for normalizing the calculation results. When AdaptiveMask $\le$ K, where K is the default number of subpopulations, it is directly set to K to ensure the basic operation of the algorithm; this setting is typically suitable for smaller or simpler optimization problems. When AdaptiveMask > K, the redistribution must additionally consider factors such as iteration progress, population size, and average sparsity, enabling a more complex dynamic redistribution of binary vectors and enhancing the algorithm’s flexibility and efficiency on high-dimensional problems. The value of AdaptiveMask calculated in Figure 3a,b is 3, resulting in the entire binary vector matrix being divided into 4 groups. These groups are ranked by comparing the number of non-zero vectors in each group, with groups containing more non-zero vectors ranked higher, as illustrated in Figure 3a. By dynamically adjusting the population structure, the algorithm is able to explore the solution space more effectively, reducing resource waste.
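The quantities of Equations (4)–(7) can be sketched as follows; the values of `phi`, `gamma`, and `lam`, and the ceiling used to round the fraction, are illustrative assumptions rather than the paper's exact settings:

```python
import numpy as np

def adaptive_mask(mask, fe, max_fe, K, phi=1e-9, gamma=1.0, lam=1.0):
    """Sketch of Eqs. (4)-(7): number of groups for mask redistribution."""
    N, D = mask.shape
    sparsity = mask.sum() / N            # Eq. (4): avg. non-zero variables per individual
    dynamic_s = sparsity / (D + phi)     # Eq. (5): non-zero ratio across the population
    it = fe / max_fe                     # Eq. (6): iteration progress
    am = int(np.ceil(it * N * dynamic_s ** gamma / lam))
    return K if am <= K else am          # Eq. (7): fall back to the K subpopulations
```

For a fully dense 10-by-20 mask at the halfway point of the evaluation budget, this yields 5 groups, whereas an all-zero mask falls back to K.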
Through the dynamic redistribution strategy described above, the Mask is obtained for the subsequent crossover and mutation operations. First, the parent mask matrix is divided into two parts, Parent1Mask and Parent2Mask, and the offspring mask is initialized to Parent1Mask. Then, for each parent individual’s mask, the algorithm identifies the positions index1 where the mask is 1 in Parent1Mask and 0 in Parent2Mask, and the positions index2 where the mask is 0 in Parent1Mask and 1 in Parent2Mask. A random number array of the same size as Parent1Mask is generated, with each value ranging from 0 to 1. If the value at index1 in Parent1Mask is less than the corresponding value in the random number array, index1 is not swapped and OffspringMask is set to 0 at index1; conversely, if it is greater than or equal to the corresponding random value, a swap occurs and OffspringMask is set to 1 at index1. Similarly, if the value at index2 in Parent2Mask is greater than the corresponding value in the random number array, a swap occurs and OffspringMask is set to 1 at index2.
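A hedged sketch of this mask crossover; interpreting the comparisons against the uniform random array as an approximately 0.5 swap probability per differing bit is an assumption made for illustration:

```python
import numpy as np

def mask_crossover(parent1, parent2, rng):
    """Sketch of the mask crossover: bits shared by both parents are kept,
    bits unique to one parent are exchanged probabilistically."""
    off = parent1.copy()                     # offspring initialized to Parent1Mask
    r = rng.random(parent1.shape)            # uniform random array in [0, 1)
    idx1 = (parent1 == 1) & (parent2 == 0)   # bits set only in Parent1Mask
    idx2 = (parent1 == 0) & (parent2 == 1)   # bits set only in Parent2Mask
    off[idx1 & (r < 0.5)] = 0                # probabilistically drop parent-1-only bits
    off[idx2 & (r >= 0.5)] = 1               # probabilistically inherit parent-2-only bits
    return off
```

Positions where the two parents agree are always inherited unchanged, so the operator only perturbs the disagreeing bits.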
Before performing the mutation operation, the algorithm first checks whether the REAL variable is true. If so, the algorithm iterates through the first half of the offspring individuals, comparing the value of rand with Iter; if rand is less than Iter, the i-th individual undergoes mutation. It is important to note that if the value of rand is less than 0.5, the algorithm searches for indices in the current individual’s mask that are 1 and in the Maskindex array. Then, through binary tournament selection, the index with the highest Score among these candidates is selected for mutation, and its mask value is changed from 1 to 0. If the value of rand is greater than 0.5, the algorithm searches for indices in the current individual’s mask that are 0 and in the Maskindex array; in this second mutation method, the binary tournament selects the index with the lowest Score, and its mask value is changed from 0 to 1.
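The score-guided bit flip can be sketched as follows; the tournament of size 2 follows the binary tournament described above, while the helper name and the restriction to a single row are illustrative simplifications (the Maskindex filtering is omitted):

```python
import numpy as np

def score_guided_flip(mask, scores, rng):
    """One bit-flip mutation on a single mask row, guided by variable scores."""
    if rng.random() < 0.5:
        ones = np.flatnonzero(mask == 1)
        if ones.size > 0:
            a, b = rng.choice(ones, 2)                      # tournament among active bits
            mask[a if scores[a] >= scores[b] else b] = 0    # highest score: 1 -> 0
    else:
        zeros = np.flatnonzero(mask == 0)
        if zeros.size > 0:
            a, b = rng.choice(zeros, 2)                     # tournament among inactive bits
            mask[a if scores[a] <= scores[b] else b] = 1    # lowest score: 0 -> 1
    return mask
```

Each call flips at most one bit, so the sparsity of the mask drifts slowly rather than jumping.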

3.2.2. Sparse Fuzzy Decision Variables Framework (SFDV)

In scenarios where decision space dimensions are large, existing large-scale algorithms often suffer from poor convergence. To address this issue, researchers have proposed the FDV framework [49,50], which employs a fuzzy evolutionary approach to narrow down the search space. In this framework, fuzzy evolution is introduced such that each decision variable can belong to two fuzzy sets simultaneously and is updated to the value represented by the fuzzy set with the higher membership degree. Once all decision variable values are updated, the original solution is transformed into a fuzzy solution.
The evolutionary process of SFDV consists of two stages: fuzzy evolution and precise evolution. The fuzzy evolution stage involves substage division and fuzzy operations. The mathematical formula for dividing fuzzy evolution substages is as follows:
$$\mathit{Avgs} = \frac{\mathit{Sparsity}}{D}$$
$$S = \begin{cases} \dfrac{N}{D} & \text{if } \mathit{iter} > 0.7 \text{ and } \dfrac{N}{D} > \mathit{Sparsity} \\[6pt] 2 \cdot \dfrac{N}{D} & \text{otherwise} \end{cases}$$
$$\mathit{Step}(i) = S \cdot i + \frac{i^2}{2} \cdot \mathit{Acc}$$
$$\mathit{Acc} = \begin{cases} \mathit{Acc} - \mathit{Avgs} & \text{if } \mathit{Sparsity} \le \alpha \\ \mathit{Acc} + \mathit{Avgs} & \text{otherwise} \end{cases}$$
In Equations (9) and (10), S represents the number of substages into which the fuzzy evolution phase is divided, and Step(i) refers to the cumulative step size of the first i substages in the fuzzy evolution phase, with its default value set to 0, i.e., Step(0) = 0. Additionally, Total = 1 denotes the total step size for the entire evolution process, and Rate represents the proportion of the fuzzy evolution phase in the entire evolutionary process, referred to as the fuzzy evolution rate. When Acc > 0, this value represents the phase acceleration, which regulates the rate of change in step size between different substages. It is important to note that if Step(s) < Rate, then Step(s + 1) is set to Rate. Additionally, the average sparsity is incorporated to allow the entire fuzzy evolution phase to adapt to the sparsity of the problem.
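A minimal sketch of the acceleration update and the cumulative step size; note that the exact forms of Equations (10) and (11) are reconstructed from the surrounding text here (in particular, reading Step(i) as a base step per substage plus a quadratic acceleration term is an assumption):

```python
def update_acc(acc, sparsity, avgs, alpha):
    # Eq. (11) as reconstructed: decelerate when the population is already
    # sparse enough (Sparsity <= alpha), otherwise accelerate
    return acc - avgs if sparsity <= alpha else acc + avgs

def cumulative_step(i, S, acc):
    # Eq. (10) as reconstructed: cumulative step size of the first i substages,
    # with Step(0) = 0 as stated in the text
    return S * i + (i ** 2 / 2.0) * acc
```

With these definitions, `cumulative_step(0, S, acc)` is 0 for any S and acc, matching the default Step(0) = 0.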
The SFDV adjusts its parameters by dynamically accounting for the sparsity of the solution space, allowing a more precise determination of search steps within the fuzzy evolutionary algorithm. The key advantage of this improvement is that the algorithm adapts to the structural characteristics of the problem rather than relying on fixed step sizes or evolutionary parameters. When Sparsity is low, S is increased, so the search step size grows and the algorithm can skip broad ineffective areas and move quickly to potentially valuable regions. Balancing global and local search is crucial: a larger Step promotes global search and prevents premature convergence to local optima, while a smaller Step supports detailed local search, which is particularly important in the later stages of optimization. By reducing ineffective searches in low-potential areas and searching promising regions more precisely, the overall efficiency of the algorithm is improved.
The Sparse Fuzzy Decision Variables Framework (SFDV) primarily targets the real vectors. Inspired by the literature [50], the algorithm uses the non-dominance ratio (ND) and iter to decide whether to apply the fuzzy evolutionary strategy. Specifically, when iter is below the threshold Rate and ND is below 0.6, the SFDV optimization process is applied to OffDec. Additionally, the algorithm recomputes Sparsity via Equation (4) every time the evaluation count reaches a multiple of 10, and then derives Avgs from Equation (8). Using Avgs and Sparsity, Acc and Step are dynamically updated, as specifically shown in Algorithm 1.
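The substage bookkeeping of Equations (8)-(10) and the Acc update can be sketched as follows. This is an illustrative reading of the (partially garbled) formulas, not the paper's code: the sign of the acceleration term in Step, the direction of the Sparsity-versus-alpha comparison, and the default threshold alpha are our assumptions.

```python
def sfdv_params(sparsity, D, nd_ratio, iter_ratio, acc, alpha=0.5):
    """Sketch of the SFDV substage parameters (our reading of Eqs. (8)-(10))."""
    avgs = sparsity / D                             # Eq. (8): average sparsity
    if iter_ratio > 0.7 and nd_ratio > sparsity:    # Eq. (9): substage count
        s = nd_ratio
    else:
        s = 2 * nd_ratio
    if sparsity >= alpha:                           # assumed comparison direction
        acc -= avgs                                 # dampen the acceleration
    else:
        acc += avgs                                 # grow the acceleration
    def step(i):                                    # Eq. (10): cumulative step
        return s * i + (i ** 2 / 2) * acc           # '+' is an assumption
    return avgs, s, acc, step
```

The sketch shows the intent of the adaptive design: a sparser problem shrinks Avgs and thus nudges Acc less aggressively, while the iteration ratio and non-dominance ratio control how many substages the fuzzy phase receives.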

3.3. Affinity-Based Elite Strategy

To respect the sparsity characteristics, large-scale sparse algorithms typically use a hybrid encoding during population initialization, represented as x_i = real_i × bin_i. After the crossover operation, new binary and real vectors are returned and combined to generate offspring. Since the binary vector entries must be 0 or 1, it suffices to optimize the binary vectors and find the most suitable real vectors to combine with them. Therefore, this paper calculates the affinity between the binary vector bin_i and the real vectors to find the real vector best matched to bin_i; the selected real vector is denoted x_real, and the final solution is given by x_i = x_real × bin_i. It is worth noting that if the problem involves only binary variables, the real vector x_real is fixed as an all-one vector; in this case only the binary vector bin_i needs to be optimized, which simplifies the optimization process for purely binary problems and makes bin_i the sole focus of optimization.
\[
d_m(x, y) = \sqrt{(x - y)\, S^{-1} (x - y)^{T}} \tag{12}
\]
First, the real-vector matrix P_real of the subpopulation is normalized, and its rows are assigned to the set X_real; the rows of the binary matrix P_bin are assigned to the set X_bin. While X_bin is not empty, the following loop is performed: (1) Randomly select a binary vector x_bin from X_bin and compute its affinity via the Mahalanobis distance d_m, as shown in Equation (12), where x and y represent the selected x_bin and a candidate x_real, respectively, and S is the covariance matrix. Initialize the minimum distance to infinity and a minimum-index variable (minidx) to store the index of the closest x_real in X_real. (2) Iterate through each vector in X_real and calculate its Mahalanobis distance to x_bin; whenever a smaller distance is found, update the minimum distance and minidx. (3) Using minidx, select the x_real with the smallest distance, representing the highest affinity, from X_real and denote it x_r′. Update the corresponding position in OffDec with x_r′ and the corresponding position in OffMask with x_bin. (4) Remove the selected x_bin and x_r′ from X_bin and X_real, respectively. Finally, the offspring of the new population are generated by combining the OffDec and OffMask obtained after affinity selection. Through random selection and the principle of maximum affinity under the Mahalanobis distance, real vectors are paired with their most compatible binary vectors to generate elite solutions, which are more likely to approach the Pareto front.
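The pairing loop above can be sketched in Python as follows. This is an illustrative sketch, not the paper's implementation: the function name is ours, the covariance is estimated from the normalized real vectors, and the small ridge term that keeps S invertible for near-degenerate sparse data is our assumption.

```python
import numpy as np

def pair_by_affinity(real_pop, bin_pop):
    """Greedily match each binary mask (without replacement) to the real
    vector at smallest Mahalanobis distance, per Eq. (12)."""
    X = np.asarray(real_pop, dtype=float)
    # normalize the real vectors to [0, 1] per dimension
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    X = (X - X.min(axis=0)) / span
    # covariance of the normalized population; ridge term is an assumption
    S = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    S_inv = np.linalg.inv(S)
    free = list(range(len(X)))          # indices of still-unpaired real vectors
    pairs = []
    for bi, b in enumerate(np.asarray(bin_pop, dtype=float)):
        # Mahalanobis distance from this mask to every free real vector
        d = [np.sqrt((b - X[r]) @ S_inv @ (b - X[r])) for r in free]
        best = free[int(np.argmin(d))]  # highest affinity = smallest distance
        pairs.append((bi, best))
        free.remove(best)               # each real vector is used only once
    return pairs
```

Removing each matched real vector from the candidate pool is what lets a promising vector such as X_r3 wait for a better-matched mask in a later pairing instead of being consumed by a poor combination.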
The Mahalanobis distance is used to measure the distance between the binary vector X_b1 and the real vectors X_r1 and X_r2. Under the Euclidean metric, the distances from X_b1 to X_r1 and X_r2 are identical, as shown in Figure 4a; the Euclidean distance does not account for the distribution of the remaining points in the dataset and therefore fails to reflect the true distances between points. The Mahalanobis computation standardizes the data: the point cloud is rotated according to the eigenvector directions and scaled by the eigenvalues, which rescales the distances from X_r1 and X_r2 to X_b1 proportionally. After this processing, X_r1 is observed to be closer to X_b1, indicating a higher affinity between X_b1 and X_r1; consequently, X_r1 is more likely to be selected for combination with X_b1 to generate a new offspring solution X. Meanwhile, X_r3, which is closer to the Pareto front but not selected, remains available as an effective parent in subsequent generations, giving it the opportunity to combine with a binary vector of higher affinity, such as X_b2, in later evolutions, as shown in Figure 4b. This avoids the naive combination of X_r3 and X_b1, which could yield a solution dominated by more advantageous solutions. By combining hybrid encoding with affinity-based offspring selection, potentially valuable solutions such as X_r3 not only have a higher survival probability but also fully realize their genetic potential, creating favorable conditions for high-quality offspring and significantly driving the algorithm's effective convergence toward the Pareto front.

3.4. Adaptive Sparse Environment Selection Strategy Based on MLP

A neural network is a mathematical model that mimics the learning process of the human brain and plays a vital role in artificial intelligence. Structurally, it consists of multiple layers of neurons arranged hierarchically. A feedforward neural network, or multilayer perceptron (MLP) [51], processes information layer by layer, with neurons in each layer handling signals from the previous layer. An MLP includes an input layer, an output layer, and one or more hidden layers, which extract essential features from input data.
Applying MLPs to large-scale sparse problems significantly improves the processing of sparse data, which often involves high-dimensional features with many empty values. Traditional dense neural networks struggle with computational inefficiency in such cases. In contrast, MLPs, through their hidden layers, are able to learn complex relationships and extract meaningful patterns from sparse data. Additionally, MLPs effectively reduce the dimensionality of high-dimensional sparse data, transforming it into lower-dimensional dense representations, which improves efficiency and enhances solution quality.
To train the MLP, a dataset D = {(x_i, x_i^l)}_{i=1}^{M} must be prepared, where M denotes the number of samples and each input x_i is paired with a corresponding target output label x_i^l. The MLP undergoes supervised learning to acquire a guiding direction vector (GDV) for each input x; this GDV guides the search in a manner akin to gradient descent, accelerating convergence toward an optimal solution. To create the training dataset, the current population P is split into two subsets, SP1 and SP2, where SP2 contains the better-converging first half of the population and SP1 the remaining solutions. Each solution x from SP1 serves as a training input, while the corresponding target output label x^l comes from SP2. The target output x^l is computed using the following formula:
\[
x^{l} = \arg\min_{y \in SP_2} \theta(x, y) \tag{13}
\]
In this equation, θ ( x , y ) represents the acute angle between x and y. It is calculated using the following formula:
\[
\theta(x, y) = \arccos \frac{F'(x) \cdot F'(y)}{\lVert F'(x) \rVert \, \lVert F'(y) \rVert} \tag{14}
\]
where F′(x) = (f′_1(x), …, f′_m(x)) is the normalized vector of the m objective values for solution x. The normalized objective value f′_i(x) is computed as follows:
\[
f'_i(x) = \frac{f_i(x) - f_i^{\min}}{f_i^{\max} - f_i^{\min}} \tag{15}
\]
In this equation, f_i^min and f_i^max are the minimum and maximum values of the i-th objective across the entire solution set P, respectively. The specific implementation is detailed in the pseudocode provided in Algorithm 4.
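The dataset construction above (split, normalization, angle-based labeling) can be sketched in Python as follows. This is an illustrative sketch under the assumption that the population is already sorted so that the better-converging half comes first; the function and variable names are ours, not the paper's.

```python
import numpy as np

def build_training_pairs(pop_dec, pop_obj):
    """Build (input, label) pairs for the guiding MLP: inputs from SP1,
    labels from SP2 chosen by the smallest acute angle in objective space."""
    F = np.asarray(pop_obj, dtype=float)
    # per-objective min-max normalization over the whole solution set
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    span = np.where(fmax > fmin, fmax - fmin, 1.0)
    Fn = (F - fmin) / span
    n = len(F) // 2
    sp2, sp1 = range(n), range(n, len(F))   # SP2 = better-converging half
    pairs = []
    for i in sp1:                           # each SP1 solution is an input
        angles = []
        for j in sp2:                       # candidate labels come from SP2
            den = np.linalg.norm(Fn[i]) * np.linalg.norm(Fn[j])
            cos = Fn[i] @ Fn[j] / max(den, 1e-12)
            angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
        best = int(np.argmin(angles))       # label = smallest acute angle
        pairs.append((pop_dec[i], pop_dec[best]))
    return pairs
```

Pairing each worse solution with the better solution that lies in the most similar objective direction is what lets the trained MLP emit a convergence direction rather than an arbitrary target.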
To address the challenges of large-scale sparse multimodal multi-objective optimization problems (LSMMOPs), this paper proposes an adaptive sparse environmental selection mechanism integrated with MLP reinforcement learning within the MP-MMEA framework. During the evolutionary phase, the forward-propagation output of the trained MLP model is used as the Guiding Vector (GDV). By analyzing the distribution characteristics of the parent population, the GDV effectively guides the decision vectors (Dec) toward efficient search regions, thereby accelerating convergence and facilitating escape from local optima. Furthermore, an adaptive strategy based on the iteration ratio (iter) is introduced to dynamically regulate the genetic operators. Specifically, when the condition rand ≤ (1 − iter) is met, the algorithm employs a Differential Evolution (DE) operator that uses the GDV in place of the standard guiding vector (gv) to perform crossover and mutation, thus enhancing population diversity and genetic characteristics according to the evolutionary stage. Subsequently, the parent and offspring populations undergo adaptive sparse environmental selection. This phase first quantifies the sparse distribution in high-dimensional space by statistically analyzing the binary mask vectors to derive the population-level Sparsity and AvgS. To balance convergence with multimodal diversity, the selection pressure is adaptively regulated by combining a modified crowding distance with sparsity thresholds: solutions whose crowding distance is below the average and whose sparsity is excessive are identified as structurally redundant and assigned lower retention priority. Finally, elite preservation is achieved through sparsity-regulated non-dominated sorting, effectively coupling evolutionary guidance with environmental screening to significantly improve search efficiency and solution representativeness.
This coevolutionary strategy integrated with MLP reinforcement learning and adaptive sparse perception not only accelerates convergence through GDV-guided search but also rigorously filters structural redundancy via sparsity-aware environmental selection, thereby enhancing the algorithm’s robustness in solving large-scale sparse multimodal problems. It effectively balances rapid convergence with the preservation of multimodal diversity.
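The two mechanisms above, the iteration-based operator switch and the sparsity-aware redundancy filter, might be sketched as follows. This is a loose illustration of the described logic, not the paper's implementation; in particular, reading "excessive sparsity" as "fewer active mask bits than the population average" is our assumption.

```python
import random
import numpy as np

def use_gdv_de(iter_ratio):
    """Operator switch: early in the run (small iter_ratio) the
    GDV-guided DE operator fires more often; later, standard GA
    operators take over. Condition: rand <= (1 - iter)."""
    return random.random() <= 1.0 - iter_ratio

def flag_redundant(crowding, mask_pop):
    """Redundancy filter: flag solutions whose modified crowding
    distance is below the population average AND whose binary mask is
    sparser than average; flagged solutions get lower retention priority."""
    cd = np.asarray(crowding, dtype=float)
    ones = np.asarray([int(np.sum(m)) for m in mask_pop], dtype=float)
    return (cd < cd.mean()) & (ones < ones.mean())
```

Coupling the two tests means a solution is only demoted when it is both poorly spread and structurally near-empty, which is what preserves distinct equivalent Pareto subsets while still pruning clutter.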

4. Experimental Studies on Benchmark Problems

4.1. Experimental Settings

1.
All experiments in this paper were conducted on a PC with the following configuration: 13th Gen Intel(R) Core(TM) i7-13700F, 32 GB RAM, running the Windows 11 Home operating system (Version 25H2), and MATLAB R2023a. The code for all benchmark problems, evaluation metrics, and comparison algorithms was provided by the PlatEMO platform.
2.
Benchmark Problems: In recent years, multimodal multiobjective optimization problems (MMOPs) have received increasing attention in algorithm testing. However, most existing test problems have several limitations: they often lack scalability, lack sparse Pareto-optimal solutions, and have low-dimensional decision variables that fail to reflect the complexity of real-world problems. As a result, there is a significant shortage of test problems that effectively combine large-scale decision variables, multimodality, and sparsity. To address this gap, the SMMOP1-SMMOP8 test suite proposed by MP-MMEA has emerged as a valuable benchmark for researchers. In this study, we use the SMMOP1-SMMOP8 test suite to assess the effectiveness of our proposed algorithm in solving large-scale MMOPs with sparse solutions.
3.
Parameter Settings: The population sizes for the SMMOP1-SMMOP8 test problems were set based on the number of equivalent Pareto sets (PS). Specifically, the population sizes N were set to 400, 600, and 800 for n p = 4 , 6, and 8, respectively, to ensure sufficient solutions for each equivalent Pareto optimal set. To ensure a fair comparison, the maximum number of evaluations for all multiobjective evolutionary algorithms (MOEAs) was set to 250,000, 400,000, and 500,000 for problems with 100, 200, and 500 decision variables, respectively. For SMMOP1-SMMOP8, the merge and split operations of MASR-MMEA were performed every 10 generations. The initial ACC values were set to 0.4, 0.7, and 0.8 for n p = 4 , 6, and 8, respectively. The compared MOEAs were configured using parameters and genetic operators as suggested in the PlatEMO platform or in the original papers. In the MO_Ring_PSO_SCD algorithm, particle swarm optimization was used to generate offspring, with acceleration coefficients C 1 and C 2 set to 0.5 and inertia weight W set to 0.1. In DN-NSGA-II, the crowding distance factor was set to half of the population size to maintain an even distribution of solutions and prevent clustering. For SparseEA and MSKEA, simulated binary crossover [52] and polynomial mutation [53] were used to generate offspring, with crossover and mutation probabilities set to 1 and 1 / D , respectively, and distribution indices set to 20. MP-MMEA’s merge and split operations were conducted every 50 generations, while HHC-MMEA applied hybrid hierarchical clustering every 10 generations.
IGDX [54], IGD [55], and HV [56] are used as performance evaluation metrics in this paper. IGDX is primarily used to quantitatively assess the quality of the solution set in the decision space when solving MMOPs with multiobjective evolutionary algorithms (MOEAs). It evaluates the distribution and proximity of solutions by calculating the average minimum distance between reference points on the true Pareto set and the solutions generated by the algorithm. The specific calculation formula is as follows:
\[
\mathrm{IGDX}(P^{*}, P) = \frac{\sum_{v \in P^{*}} \mathrm{dis}(v, P)}{|P^{*}|}
\]
where P* represents a set of uniformly distributed reference points on the Pareto set in the decision space, P denotes the obtained solution set in the decision space, and dis(v, P) is the minimum Euclidean distance between point v and the solutions in P. To calculate IGDX, 10,000 reference points are uniformly sampled from the Pareto set of each test problem. IGD is used to evaluate the quality of the solution set in the objective space, and its definition is analogous to IGDX.
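The IGDX definition above reduces to a few lines of NumPy; this sketch (function and argument names are ours) averages, over the reference set, the minimum Euclidean distance to the obtained set.

```python
import numpy as np

def igdx(ref_set, solutions):
    """IGDX: mean over reference points of the minimum Euclidean
    distance (in decision space) to the obtained solution set."""
    R = np.asarray(ref_set, dtype=float)
    P = np.asarray(solutions, dtype=float)
    # pairwise distances: (n_ref, n_sol) via broadcasting
    d = np.linalg.norm(R[:, None, :] - P[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

IGD in the objective space is computed identically, with objective vectors in place of decision vectors.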
Since the Pareto front of real-world problems is unknown, the HV (Hypervolume) metric with a reference point of ( 1 , 1 ) is used for result comparison in real-world problems. A smaller value of IGDX and IGD indicates better algorithm performance, while a larger value of HV indicates better performance. Each algorithm is independently run 30 times on each test problem, and the mean and standard deviation of the 30 runs are recorded. Statistical analysis is performed using the Wilcoxon rank-sum test with a significance level of 0.05, where “+”, “−”, and “=” indicate that MASR-MMEA performs better, worse, or equal to the comparison algorithm, respectively.
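The "+"/"−"/"=" labeling used in the tables can be illustrated with a small self-contained Wilcoxon rank-sum comparison. This sketch uses the normal approximation without tie correction (an assumption; the paper presumably relies on a standard statistics implementation), and labels sample a as "+" when its values, e.g. 30 IGDX results, are significantly smaller.

```python
from math import erf, sqrt

def ranksum_label(a, b, alpha=0.05):
    """Two-sided rank-sum test (normal approximation, no tie correction):
    '+' if sample a is significantly smaller than b, '-' if significantly
    larger, '=' otherwise."""
    n1, n2 = len(a), len(b)
    pooled = sorted((v, i) for i, v in enumerate(list(a) + list(b)))
    ranks = {idx: rank for rank, (_, idx) in enumerate(pooled, start=1)}
    w = sum(ranks[i] for i in range(n1))          # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2                   # mean of W under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)    # std of W under H0
    z = (w - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value
    if p >= alpha:
        return "="
    return "+" if w < mu else "-"
```

With 30 runs per algorithm, n1 = n2 = 30 and the normal approximation is adequate for untied samples.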

4.2. Benchmark Algorithms

To validate the performance advantages of the proposed algorithm, this paper conducts a comprehensive comparative analysis of MASR-MMEA against eight other algorithms on the SMMOP1-SMMOP8 benchmark problems. Among these, SparseEA [28] and MSKEA [33] are state-of-the-art multiobjective evolutionary algorithms (MOEAs) specifically designed for large-scale SMOPs, and they excel in handling sparsity issues. Meanwhile, MO_Ring_PSO_SCD [4] and DN_NSGA_II [17] are algorithms specifically tailored for MMOPs, demonstrating exceptional performance in complex scenarios with multiple local optima. Furthermore, MP-MMEA [30] and HHC-MMEA [31] are optimized for large-scale MMOPs with sparse Pareto optimal solutions, enabling effective identification and maintenance of sparse solutions in high-dimensional spaces.
In addition to these established methods, two latest algorithms are included to ensure the timeliness of the comparison. LMMODE [41] is specifically designed for large-scale multimodal problems using fuzzy C-means clustering, providing a direct benchmark for the proposed method’s scalability. Meanwhile, CMMOGA_DLF [42] represents the latest advancement in balancing convergence and diversity through a dynamic local fitness evaluation mechanism.
Through detailed comparative analysis, this study not only evaluates the relative performance of MASR-MMEA in addressing the SMMOP1-SMMOP8 benchmark problems but also further reveals the strengths and weaknesses of the algorithm in dealing with multimodality, sparsity, and high-dimensionality issues. Finally, the parameter settings for all comparison algorithms are kept consistent with those recommended in their original papers to ensure a fair comparison.

4.3. Comparison Experiments

Table 1 presents the IGDX comparison results between the proposed algorithm and eight other advanced MOEAs on the SMMOP1-SMMOP8 problems with 4 equivalent PSs. The data analysis in Table 1 reveals that MASR-MMEA significantly outperforms the other eight MOEAs in handling test problems with 4 equivalent PSs. The mean and standard deviation of IGDX obtained over 30 independent runs are consistently superior. Considering that SMMOP1-SMMOP8 are specifically designed for large-scale MMOPs, although SparseEA and MSKEA are advanced sparse algorithms tailored for large-scale problems, they perform poorly in handling multimodal characteristics. On the other hand, while MO_Ring_PSO_SCD and DN_NSGA_II possess the capability to address multimodal multiobjective problems, they struggle to cope with the complexity of large-scale problems. It is worth noting that LMMODE is a state-of-the-art algorithm specifically designed for large-scale multimodal multi-objective optimization. However, despite its specialized design, it exhibits inferior performance compared to MASR-MMEA on the SMMOP benchmarks. As the dimension increases (e.g., from D = 100 to D = 500 ), the IGDX values of LMMODE and CMMOGA_DLF deteriorate significantly. This indicates that their search strategies struggle to effectively locate and maintain multiple equivalent Pareto optimal subsets in high-dimensional decision spaces, whereas MASR-MMEA demonstrates superior capability in handling such complexity.
Table 2 further presents the comparison results of IGDX values in terms of the mean and standard deviation obtained by MASR-MMEA and eight other MOEAs over 30 independent runs on test problems with 6 equivalent PSs. Specifically, out of the 24 test results, MASR-MMEA achieved the best results in 18 cases, while HHC-MMEA only obtained 4. Even in cases where the results were similar to HHC-MMEA, MASR-MMEA still exhibited a slight advantage in most scenarios. Moreover, although MP-MMEA was designed to handle large-scale, sparse, multimodal multiobjective problems, its performance declined when dealing with test problems involving multiple equivalent PSs, indicating potential limitations in practical applications.
It is worth noting that LMMODE is specifically designed to handle large-scale multimodal multi-objective problems. However, despite its specialized design, it exhibits inferior performance compared to MASR-MMEA on the SMMOP benchmarks with 6 equivalent PSs. Similarly, the performance of CMMOGA_DLF is also unsatisfactory. The significant gap in IGDX values suggests that the search strategies of these algorithms are less effective than MASR-MMEA in simultaneously locating multiple equivalent Pareto optimal subsets in high-dimensional decision spaces. For more detailed and high-resolution tables, please refer to Appendix A Table A1, Table A2, Table A3 and Table A4.
To further investigate the effectiveness of the algorithm under different numbers of equivalent PSs, as well as the superiority of MASR-MMEA in high-dimensional spaces, the following comparative experiments were designed. The number of equivalent PSs was set to 4, 6, and 8, with D fixed at 500 and the maximum number of evaluations set to 500,000. The mean and standard deviation of IGDX were obtained over 30 independent runs. According to the data in Table 3, MASR-MMEA achieved the best results in all 24 experimental outcomes. This is primarily due to the SFDV strategy, which requires longer runs to reach optimal performance. For more detailed and high-resolution tables, please refer to Appendix A Table A5 and Table A6. As shown in Table 2, when the number of decision variables D was increased to 200 with 400,000 evaluations, the number of best results obtained by MASR-MMEA increased; when D was further increased to 500 with 500,000 evaluations, MASR-MMEA outperformed all the comparative algorithms.
To provide a more comprehensive evaluation and demonstration of the performance of the proposed algorithm, Figure 5 illustrates the population distribution of median IGDX values obtained by MASR-MMEA and the six compared algorithms on SMMOP1, SMMOP2, SMMOP5, and SMMOP6 with 100 decision variables and 4 equivalent PSs. For more detailed and high-resolution figures, please refer to Appendix A Figure A1, Figure A2, Figure A3 and Figure A4.
SMMOP1 and SMMOP2 feature linear Pareto fronts and unimodal landscapes, with SMMOP2 being deceptive. In contrast, SMMOP5 and SMMOP6 have convex linear Pareto fronts, highly multimodal landscapes, and a greater number of local optima. As shown in Figure 5, the MO_Ring_PSO_SCD algorithm fails to achieve effective convergence on any of the test problems, while DN-NSGA-II, SparseEA, and MSKEA converge to only a single Pareto front. In contrast, MP-MMEA and HHC-MMEA, which focus on large-scale sparse multimodal multiobjective problems, as well as MASR-MMEA, are capable of converging to all Pareto fronts, with MASR-MMEA achieving IGDX values superior to those of the other two algorithms. In addition, IGD was used to evaluate the quality of the solution distribution in the objective space. Table 4 presents the comparative IGD results between MASR-MMEA and the other algorithms. For more detailed and high-resolution tables, please refer to Appendix A Table A7 and Table A8. Analysis of Table 4 reveals that SparseEA and MSKEA, both focused on large-scale sparse multiobjective optimization, performed excellently, with MSKEA particularly outstanding. The primary reason is that multimodal multiobjective algorithms emphasize performance in the decision space, which can adversely affect their performance in the objective space; as a result, compared to MSKEA and SparseEA, MASR-MMEA, HHC-MMEA, and MP-MMEA do not hold an advantage in the IGD comparison. Nevertheless, the proposed MASR-MMEA remains strongly competitive in high-dimensional spaces (D = 500), showing advantages over similar algorithms such as HHC-MMEA and MP-MMEA.
Consequently, MASR-MMEA not only achieves superior solution distribution in the decision space but also exhibits competitive solution distribution in the objective space. This effectively balances the demands of exploration and exploitation, thereby enhancing the overall performance and applicability of the algorithm.
Figure 6 presents the convergence curves of IGDX values obtained by MASR-MMEA and all the compared algorithms on the SMMOP1-SMMOP8 test problems ( n p = 4 , D = 100 ). As shown in the figure, MASR-MMEA outperforms the other algorithms in both convergence and efficiency.
Therefore, the proposed MASR-MMEA algorithm is not only effective in handling test problems with different numbers of equivalent PSs but is also well-suited for solving higher-dimensional problems. This effectiveness is primarily attributed to the high diversity provided by the dual-strategy genetic operator with hybrid encoding and the affinity-based elitism strategy, which enables the algorithm to quickly locate multiple PFs. Additionally, the final use of the MLP further accelerates convergence, resulting in more precise and efficient optimization performance in high-dimensional spaces.

4.4. Ablation Study

In this section, three distinct ablation experiments were designed. Specifically, the first ablation experiment was conducted to validate the effectiveness of the dual-strategy genetic operator proposed for the hybrid encoding. The second ablation experiment aimed to verify the effectiveness of the elite vector strategy based on Mahalanobis distance metric. The third ablation experiment focused on validating the effectiveness of the adaptive sparse environment selection strategy based on MLP. To ensure the fairness of the comparison, the parameter settings of the variant algorithms are kept consistent with the original algorithm, except for the ablated components.

4.4.1. The Effectiveness of the Improved Dual-Strategy Genetic Operator Based on Hybrid Encoding

First, this subsection conducts an ablation experiment to validate the effectiveness of the proposed dual-strategy genetic operator based on hybrid encoding. In this experiment, MASR-MMEA-1 does not use the proposed strategy but instead employs the genetic operators from MP-MMEA. The symbols “+”, “−”, and “=” indicate whether the hybrid encoding-based dual-strategy genetic operator is significantly superior, inferior, or equivalent to the MP-MMEA genetic operators, respectively, based on the Wilcoxon rank-sum test at a 0.05 significance level. As shown by the data in Table 5, the proposed algorithm achieves the best IGDX values across the SMMOP1-SMMOP8 test suite, demonstrating significant improvement on all test problems with four equivalent PSs and D = 100, 200, and 500. Additionally, Figure 7 illustrates that the proposed strategy enables the population to approximate the true PFs more effectively. Within the SMMOP1-SMMOP8 test suite, SMMOP3, SMMOP6, and SMMOP8 represent the highest level of difficulty among the three types of functions. On SMMOP3, although MASR-MMEA-1 is capable of finding all PFs, many individuals fail to converge to the PF. On SMMOP6, the overall convergence curve of MASR-MMEA-1 is not as smooth as that of MASR-MMEA. Furthermore, on SMMOP8, MASR-MMEA-1 is unable to locate all PFs. Moreover, Figure 8 presents the convergence curves of the median IGDX values over 30 runs with 100 decision variables for the four ablation experiments on SMMOP1-SMMOP8. It is evident from the figure that MASR-MMEA achieves better IGDX values and faster convergence than MASR-MMEA-1. In summary, this validates the effectiveness of employing different strategies for the two distinct vectors in the hybrid encoding.

4.4.2. Effectiveness of the Elite Vector Strategy Based on Mahalanobis Distance Metric

Secondly, this subsection aims to validate the effectiveness of the proposed elite vector strategy based on the Mahalanobis distance metric. Specifically, an ablation experiment compares the MASR-MMEA-2 variant with the original MASR-MMEA. In MASR-MMEA-2, the elite vector strategy based on the Mahalanobis distance metric is not employed, while all other components remain the same as in the original algorithm. The data in Table 5 demonstrate that the proposed strategy significantly enhances the algorithm, with MASR-MMEA achieving the best results in all 24 IGDX comparisons. Additionally, as shown in Figure 7, after removing the proposed strategy, the variant algorithm, while able to find all PSs on SMMOP3, SMMOP6, and SMMOP8, fails to converge effectively to all PFs. Figure 8 indicates that the convergence speed of the variant algorithm is significantly slower than that of MASR-MMEA, which also achieves better IGDX values. In summary, the proposed strategy effectively optimizes both the binary and real vectors, providing the algorithm with superior convergence performance.

4.4.3. Effectiveness of the Adaptive Sparse Environment Selection Strategy Based on MLP

Finally, to investigate the specific contribution of the adaptive sparse environment selection strategy based on MLP, we examine the effectiveness of the variant MASR-MMEA-3. This variant adopts the traditional environmental selection mechanism from the MP-MMEA framework and uses the original guiding vector (gv) for crossover and mutation, serving as a baseline for comparison. The data in Table 5 show that MASR-MMEA slightly outperforms its variant MASR-MMEA-3, particularly when the number of decision variables is 100 or 200, where MASR-MMEA achieves more favorable IGDX values. This reflects not only the computational efficiency of the algorithm but also its design optimization for practical applications, especially in resource-constrained environments. Moreover, as shown in Figure 7 and Figure 8, MASR-MMEA converges faster than the variant algorithm, with significant improvements in the smoothness and uniformity of its convergence curve. This performance enhancement, particularly on complex real-world problems, demonstrates the efficiency and practicality of the multilayer perceptron learning-guided strategy. Therefore, the effectiveness of the proposed multilayer perceptron learning strategy is validated.

4.5. Experimental Studies on Real-World Applications

To verify the effectiveness of the proposed algorithm in real-world applications, Table 6 and Table 7 list the mean and standard deviation of hypervolume (HV) values obtained by MASR-MMEA, MP-MMEA, and HHC-MMEA on critical node detection (CN1–CN4) [57], instance selection (IS1–IS3) [58], and community detection (CD1–CD3) [59], together with the information of the test suites. For all practical application test problems, the population size is set to 100, with 50,000 evaluations. The other parameter settings for MASR-MMEA are rate = 0.8, acc = 0.4, and T = 10, with T set to 100 for CN1–CN4. The data analysis from Table 7 shows that the MASR-MMEA algorithm performs exceptionally well on the CD1–CD3 test suites. Specifically, while MASR-MMEA achieves significant improvements on CD2, its HV values on CD1 and CD3 are close to those of HHC-MMEA. This similarity can be attributed to the inherent topological characteristics of the datasets. These real-world networks typically exhibit well-defined modular structures and natural sparsity; consequently, the search landscape is relatively smooth with significant gradient information, allowing basic sparse operators to locate the Pareto front sufficiently well. In such scenarios, the advanced MLP guidance strategy of MASR-MMEA reaches a performance ceiling similar to HHC-MMEA. On the IS1–IS3 test suites, MASR-MMEA demonstrates strong performance, particularly on IS1 and IS3; on IS2, however, its performance is slightly inferior to HHC-MMEA. This limitation stems from the complex data distribution inherent to specific datasets from the UCI Machine Learning Repository. Unlike structured networks, UCI datasets often contain noise, outliers, or complex feature correlations, and the Mahalanobis distance used in the elite strategy may struggle to fully capture the highly non-linear feature dependencies present in this classification task, leading to suboptimal convergence compared to HHC-MMEA.
Additionally, MASR-MMEA excels on the CN1–CN4 test suites, achieving significant improvements over both HHC-MMEA and MP-MMEA. In summary, despite minor limitations on specific deceptive instances, the proposed MASR-MMEA demonstrates significantly greater robustness and adaptability on real-world application problems than other large-scale sparse multimodal multiobjective optimization algorithms.

5. Conclusions

The curse of dimensionality and the unknown sparsity of the solution space make it extremely challenging to find multiple equivalent sparse solution sets in the large search spaces of large-scale MMOPs. To address this issue, this paper proposes a multi-stage, genetic-operator-based large-scale multimodal multiobjective evolutionary algorithm centered on sparsity partitioning. A dual-strategy genetic operator built on improved hybrid encoding applies different optimization strategies to binary and real-valued vectors, which avoids uneven resource allocation and prevents the optimization focus from being biased toward the binary vectors. Furthermore, by estimating the sparsity of the solution space, the genetic operators are divided into multiple stages, and the optimization strategy applied to each vector type is switched according to the measured sparsity. An elite selection strategy based on the Mahalanobis distance is also proposed: the affinity between binary and real-valued vectors is computed from their Mahalanobis distance, and the smaller the distance, the greater the affinity, increasing the likelihood that the generated offspring lie closer to the optimal solutions. Finally, an MLP-assisted adaptive sparse environmental selection strategy further enhances the convergence and diversity of the algorithm. Experimental results demonstrate that MASR-MMEA effectively solves large-scale MMOPs, significantly outperforming six comparison algorithms on the SMMOP test suite, and it achieves notable results on real-world application problems relative to the benchmark algorithms.
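The Mahalanobis-distance affinity summarized above can be sketched as follows. This is an illustrative reading, not the paper's exact procedure: the covariance estimate over the binary population, the ridge term, the population sizes, and the helper name `pair_by_mahalanobis` are all assumptions.

```python
import numpy as np

def pair_by_mahalanobis(real_vec, binary_pop):
    """Return the index of the binary vector with the smallest
    Mahalanobis distance to `real_vec` (smallest distance = highest
    affinity). Covariance is estimated from the binary population;
    a small ridge keeps the inverse well defined for sparse 0/1 data."""
    B = np.asarray(binary_pop, dtype=float)
    cov = np.cov(B, rowvar=False) + 1e-6 * np.eye(B.shape[1])
    inv_cov = np.linalg.inv(cov)
    diffs = B - np.asarray(real_vec, dtype=float)
    # squared Mahalanobis distance of each row: d_i = x_i^T C^{-1} x_i
    d2 = np.einsum('ij,jk,ik->i', diffs, inv_cov, diffs)
    return int(np.argmin(d2))

rng = np.random.default_rng(0)
binary_pop = (rng.random((20, 8)) < 0.3).astype(float)  # sparse masks
real_vec = rng.random(8)
print(pair_by_mahalanobis(real_vec, binary_pop))
```

Unlike the Euclidean distance, the Mahalanobis distance accounts for correlations between decision variables, which is the rationale for using it to pair compatible vectors before crossover.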
Despite the superior performance of MASR-MMEA in the decision space, it lacks dedicated optimization in the objective space. In future work, we plan to integrate advanced deep learning models to better capture complex dependencies in the objective space, and to design novel guiding-operator mechanisms tailored to objective-space exploration. By combining these learning models with adaptive guidance strategies, we expect to further improve the algorithm's performance on complex multimodal multiobjective problems. Additionally, coupling the partitioned sub-populations with the sparsity of the solution space to achieve adaptive adjustment and reduce the number of algorithm parameters is another challenge we aim to address in future research.

Author Contributions

Conceptualization, Y.S. and B.C.; methodology, B.C.; validation, Y.S. and B.C.; formal analysis, B.C. and B.H.; investigation, B.C.; resources, B.C.; data curation, B.C. and B.H.; writing—original draft preparation, B.C.; writing—review and editing, Y.S. and B.C.; visualization, B.C. and B.H.; project administration, Y.S.; funding acquisition, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (Grant Nos. 61763002 and 62072124), the Guangxi Major Projects of Science and Technology (Grant No. 2020AA21077021), and the Foundation of Guangxi Experiment Center of Information Science (Grant No. KF1401).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The code for this manuscript has been uploaded to GitHub, available at https://github.com/RLXGAKKI/MASR-MMEA (accessed on 26 January 2026).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MOPs: Multiobjective Optimization Problems
MMOPs: Multimodal Multiobjective Optimization Problems
RL: Reinforcement Learning
LSMMOPs: Large-Scale Multimodal Multiobjective Optimization Problems
MOEAs: Multiobjective Evolutionary Algorithms
MMEAs: Multimodal Multiobjective Evolutionary Algorithms
PF: Pareto Front
PS: Pareto Set
DMs: Decision Makers
MLP: Multilayer Perceptron
SBX: Simulated Binary Crossover
PM: Polynomial Mutation
GDV: Gradient-Descent-like Direction Vector
SFDV: Sparse Fuzzy Decision Variables
IGD: Inverted Generational Distance
IGDX: Inverted Generational Distance in Decision Space
HV: Hypervolume
CN: Critical Node Detection
IS: Instance Selection
CD: Community Detection

Appendix A

Figure A1. Results with the median IGDX values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, MP-MMEA, HHC-MMEA and MASR-MMEA on SMMOP1 and SMMOP2 with 100 decision variables and 4 equivalent PSs.
Figure A2. Results with the median IGDX values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, MP-MMEA, HHC-MMEA and MASR-MMEA on SMMOP5 and SMMOP6 with 100 decision variables and 4 equivalent PSs.
Figure A3. Results with the median IGDX values obtained by MSparseEA, MSKEA and MASR-MMEA on SMMOP5 and SMMOP6 with 100 decision variables and 4 equivalent PSs.
Figure A4. Results with the median IGDX values obtained by SparseEA, MSKEA and MASR-MMEA on SMMOP5 and SMMOP6 with 100 decision variables and 4 equivalent PSs.
Table A1. IGDX values obtained by various algorithms on SMMOP1-SMMOP8 with 4 equivalent PSs (Part I). The best result in each row is highlighted in bold.
Problem | D | MP-MMEA | HHC-MMEA | MO_Ring_PSO_SCD | DN-NSGA-II | MASR-MMEA
SMMOP1 | 100 | 3.4262e-1 (2.28e-1) - | 3.0936e-1 (2.88e-1) - | 5.8388e+0 (6.63e-2) - | 3.3931e+0 (1.49e-2) - | 4.2845e-2 (1.16e-2)
SMMOP2 | 100 | 3.8698e-1 (6.20e-2) - | 2.1168e-1 (2.15e-1) - | 7.3142e+0 (2.09e-1) - | 6.3899e+0 (6.35e-1) - | 3.6793e-2 (2.83e-2)
SMMOP3 | 100 | 6.9891e-1 (4.21e-1) - | 3.0060e-1 (3.09e-1) - | 7.1963e+0 (1.68e-1) - | 6.4346e+0 (7.43e-1) - | 1.3464e-1 (1.41e-1)
SMMOP4 | 100 | 3.1727e-1 (1.01e-1) - | 2.8843e-1 (2.27e-1) - | 6.3233e+0 (8.93e-2) - | 3.3693e+0 (1.61e-2) - | 7.5132e-2 (4.85e-2)
SMMOP5 | 100 | 3.2076e-1 (8.17e-2) - | 2.3373e-1 (1.36e-1) - | 6.0677e+0 (8.64e-2) - | 3.5518e+0 (4.39e-2) - | 8.0524e-2 (5.11e-2)
SMMOP6 | 100 | 5.4760e-1 (4.78e-1) - | 3.5769e-1 (1.71e-1) - | 6.2384e+0 (1.11e-1) - | 3.4437e+0 (1.11e-2) - | 1.5823e-1 (1.46e-1)
SMMOP7 | 100 | 6.4131e-1 (5.10e-1) - | 3.8089e-1 (4.02e-1) - | 5.6062e+0 (7.23e-2) - | 3.5394e+0 (3.31e-2) - | 2.1894e-1 (2.58e-1)
SMMOP8 | 100 | 1.1029e+0 (6.06e-1) - | 5.5143e-1 (3.97e-1) - | 5.8528e+0 (8.10e-2) - | 3.5313e+0 (2.80e-2) - | 3.7537e-1 (3.41e-1)
SMMOP1 | 200 | 7.7396e-1 (1.07e-1) - | 9.2270e-1 (2.52e-1) - | 9.2113e+0 (7.86e-2) - | 4.8336e+0 (2.19e-2) - | 1.2676e-1 (1.62e-1)
SMMOP2 | 200 | 8.4577e-1 (1.79e-1) - | 9.2762e-1 (5.19e-1) - | 1.0940e+1 (1.92e-1) - | 1.1671e+1 (7.17e-1) - | 6.7412e-2 (7.19e-2)
SMMOP3 | 200 | 1.4738e+0 (4.87e-1) - | 1.1286e+0 (5.13e-1) - | 1.0816e+1 (1.98e-1) - | 1.2181e+1 (7.71e-1) - | 6.5747e-1 (3.40e-1)
SMMOP4 | 200 | 9.1303e-1 (1.78e-1) - | 9.5634e-1 (3.72e-1) - | 9.7588e+0 (9.21e-2) - | 4.8240e+0 (2.14e-2) - | 1.7992e-1 (1.20e-1)
SMMOP5 | 200 | 8.8605e-1 (1.98e-1) - | 9.4876e-1 (2.38e-1) - | 9.4533e+0 (1.15e-1) - | 5.2524e+0 (5.40e-2) - | 2.2129e-1 (1.73e-1)
SMMOP6 | 200 | 1.3005e+0 (4.81e-1) - | 1.3137e+0 (2.95e-1) - | 9.7308e+0 (1.29e-1) - | 4.8713e+0 (6.49e-3) - | 6.4268e-1 (2.39e-1)
SMMOP7 | 200 | 2.1195e+0 (7.68e-1) - | 1.0386e+0 (2.47e-1) - | 8.9003e+0 (8.21e-2) - | 5.2633e+0 (3.99e-2) - | 6.9430e-1 (3.60e-1)
SMMOP8 | 200 | 2.0776e+0 (6.18e-1) - | 1.7249e+0 (2.32e-1) - | 9.1180e+0 (1.03e-1) - | 5.2816e+0 (3.75e-2) - | 9.8649e-1 (4.67e-1)
SMMOP1 | 500 | 2.9461e+0 (4.20e-1) - | 4.5552e+0 (4.20e-1) - | 1.6310e+1 (1.74e-1) - | 7.7364e+0 (5.11e-2) - | 6.9194e-1 (2.22e-1)
SMMOP2 | 500 | 2.8773e+0 (5.30e-1) - | 3.8229e+0 (1.17e+0) - | 1.8101e+1 (3.40e-1) - | 2.3144e+1 (5.87e-1) - | 7.6296e-1 (4.71e-1)
SMMOP3 | 500 | 3.7201e+0 (9.39e-1) - | 4.6911e+0 (8.10e-1) - | 1.7917e+1 (4.63e-1) - | 2.3862e+1 (6.13e-1) - | 2.0437e+0 (2.94e-1)
SMMOP4 | 500 | 3.1952e+0 (2.10e-1) - | 4.5826e+0 (3.72e-1) - | 1.7150e+1 (1.83e-1) - | 7.8032e+0 (7.87e-2) - | 1.1166e+0 (5.99e-1)
SMMOP5 | 500 | 3.2697e+0 (1.68e-1) - | 4.7510e+0 (4.57e-1) - | 1.6523e+1 (2.69e-1) - | 9.1196e+0 (1.09e-1) - | 1.1432e+0 (5.25e-1)
SMMOP6 | 500 | 3.7158e+0 (8.42e-1) - | 5.1628e+0 (2.23e-1) - | 1.7065e+1 (3.29e-1) - | 7.8978e+0 (7.20e-2) - | 2.2132e+0 (4.33e-1)
SMMOP7 | 500 | 4.5286e+0 (1.16e+0) - | 4.3086e+0 (3.64e-1) - | 1.6076e+1 (1.89e-1) - | 8.9965e+0 (6.87e-2) - | 2.1422e+0 (5.08e-1)
SMMOP8 | 500 | 4.5375e+0 (6.88e-1) - | 5.5221e+0 (5.57e-1) - | 1.6087e+1 (1.65e-1) - | 9.0192e+0 (7.95e-2) - | 2.7611e+0 (5.83e-1)
+/−/= | | 0/24/0 | 0/24/0 | 0/24/0 | 0/24/0 |
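The +/−/= rows in these tables summarize pairwise statistical comparisons against MASR-MMEA. As a hedged sketch of the usual convention (the paper's exact test settings are not restated in this excerpt, and the helper name `compare` is illustrative), a Wilcoxon rank-sum test at the 5% significance level decides whether a difference over independent runs is significant, and the mean decides its direction:

```python
import numpy as np
from scipy.stats import ranksums

def compare(algo_runs, baseline_runs, alpha=0.05, minimize=True):
    """Return '+', '-', or '=' for algo vs. baseline over independent
    runs: '=' if the rank-sum test finds no significant difference at
    level alpha, otherwise the sign of the (mean) difference, with
    smaller-is-better when minimize=True (as for IGDX)."""
    stat, p = ranksums(algo_runs, baseline_runs)
    if p >= alpha:
        return '='
    algo_better = np.mean(algo_runs) < np.mean(baseline_runs)
    return '+' if algo_better == minimize else '-'

# e.g. 10 IGDX runs clearly better than the baseline's
print(compare([0.1] * 10, [0.5] * 10))  # → '+'
```

Under this reading, a row entry such as "0/24/0" means the column algorithm was significantly better on 0 instances, worse on 24, and statistically equivalent on 0.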
Table A2. IGDX values obtained by various algorithms on SMMOP1-SMMOP8 with 4 equivalent PSs (Part II). The best result in each row is highlighted in bold.
Problem | D | SparseEA | MSKEA | LMMODE | CMMOGA_DLF | MASR-MMEA
SMMOP1 | 100 | 3.3974e+0 (3.17e-2) - | 3.4412e+0 (8.45e-3) - | 3.5751e+0 (1.10e-1) - | 2.7494e+0 (3.59e-1) - | 4.2845e-2 (1.16e-2)
SMMOP2 | 100 | 3.3842e+0 (4.30e-2) - | 3.4416e+0 (9.20e-3) - | 1.1859e+1 (6.61e-1) - | 4.4228e+0 (8.37e-1) - | 3.6793e-2 (2.83e-2)
SMMOP3 | 100 | 3.4510e+0 (3.89e-2) - | 3.4675e+0 (2.71e-3) - | 1.1849e+1 (9.03e-1) - | 4.8078e+0 (7.94e-1) - | 1.3464e-1 (1.41e-1)
SMMOP4 | 100 | 3.3744e+0 (3.44e-2) - | 3.4361e+0 (1.11e-2) - | 3.6438e+0 (1.19e-1) - | 3.0826e+0 (4.19e-1) - | 7.5132e-2 (4.85e-2)
SMMOP5 | 100 | 3.3815e+0 (3.99e-2) - | 3.4315e+0 (1.22e-2) - | 3.6675e+0 (1.65e-1) - | 2.9943e+0 (2.75e-1) - | 8.0524e-2 (5.11e-2)
SMMOP6 | 100 | 3.4306e+0 (1.25e-2) - | 3.4662e+0 (3.22e-3) - | 3.7680e+0 (1.09e-1) - | 3.1536e+0 (4.08e-1) - | 1.5823e-1 (1.46e-1)
SMMOP7 | 100 | 3.4341e+0 (7.99e-3) - | 3.4689e+0 (1.71e-3) - | 3.8143e+0 (1.91e-1) - | 3.0328e+0 (2.50e-1) - | 2.1894e-1 (2.58e-1)
SMMOP8 | 100 | 3.4458e+0 (3.74e-2) - | 3.4704e+0 (9.59e-4) - | 3.7400e+0 (1.21e-1) - | 3.2197e+0 (2.17e-1) - | 3.7537e-1 (3.41e-1)
SMMOP1 | 200 | 4.8637e+0 (4.34e-2) - | 4.8829e+0 (6.55e-3) - | 5.6313e+0 (1.38e-1) - | 4.9087e+0 (2.94e-1) - | 1.2676e-1 (1.62e-1)
SMMOP2 | 200 | 4.8673e+0 (7.17e-2) - | 4.8825e+0 (8.42e-3) - | 1.5126e+1 (1.01e+0) - | 8.2305e+0 (1.27e+0) - | 6.7412e-2 (7.19e-2)
SMMOP3 | 200 | 4.9325e+0 (9.30e-2) - | 4.9136e+0 (3.55e-2) - | 1.5801e+1 (1.08e+0) - | 8.6252e+0 (1.10e+0) - | 6.5747e-1 (3.40e-1)
SMMOP4 | 200 | 4.8088e+0 (3.96e-2) - | 4.8800e+0 (6.65e-3) - | 5.7135e+0 (1.74e-1) - | 5.8068e+0 (6.04e-1) - | 1.7992e-1 (1.20e-1)
SMMOP5 | 200 | 4.8244e+0 (5.15e-2) - | 4.8775e+0 (1.00e-2) - | 5.6258e+0 (1.38e-1) - | 5.0778e+0 (5.05e-1) - | 2.2129e-1 (1.73e-1)
SMMOP6 | 200 | 4.8883e+0 (6.08e-2) - | 4.8969e+0 (9.60e-3) - | 6.0442e+0 (1.57e-1) - | 5.2675e+0 (5.92e-1) - | 6.4268e-1 (2.39e-1)
SMMOP7 | 200 | 4.8862e+0 (3.22e-2) - | 4.9204e+0 (4.30e-2) - | 5.9194e+0 (1.43e-1) - | 5.4506e+0 (4.62e-1) - | 6.9430e-1 (3.60e-1)
SMMOP8 | 200 | 4.9187e+0 (6.39e-2) - | 4.9121e+0 (2.51e-2) - | 5.6880e+0 (1.86e-1) - | 5.5831e+0 (2.68e-1) - | 9.8649e-1 (4.67e-1)
SMMOP1 | 500 | 7.8736e+0 (9.11e-2) - | 7.8164e+0 (8.47e-2) - | 9.9065e+0 (1.52e-1) - | 9.6053e+0 (5.47e-1) - | 6.9194e-1 (2.22e-1)
SMMOP2 | 500 | 7.8818e+0 (8.43e-2) - | 7.7813e+0 (6.31e-2) - | 2.1269e+1 (1.22e+0) - | 1.5700e+1 (1.24e+0) - | 7.6296e-1 (4.71e-1)
SMMOP3 | 500 | 8.0429e+0 (1.49e-1) - | 7.8448e+0 (7.75e-2) - | 2.0781e+1 (1.24e+0) - | 1.6089e+1 (1.45e+0) - | 2.0437e+0 (2.94e-1)
SMMOP4 | 500 | 7.8387e+0 (7.12e-2) - | 7.7967e+0 (8.34e-2) - | 1.0370e+1 (2.64e-1) - | 1.0310e+1 (9.73e-1) - | 1.1166e+0 (5.99e-1)
SMMOP5 | 500 | 7.8049e+0 (7.36e-2) - | 7.8057e+0 (8.48e-2) - | 1.0089e+1 (1.66e-1) - | 9.6278e+0 (2.31e-1) - | 1.1432e+0 (5.25e-1)
SMMOP6 | 500 | 7.9518e+0 (1.16e-1) - | 7.8973e+0 (7.17e-2) - | 1.0958e+1 (2.02e-1) - | 1.0025e+1 (9.75e-1) - | 2.2132e+0 (4.33e-1)
SMMOP7 | 500 | 8.0283e+0 (1.19e-1) - | 7.9004e+0 (8.35e-2) - | 1.0341e+1 (2.08e-1) - | 9.9188e+0 (2.11e-1) - | 2.1422e+0 (5.08e-1)
SMMOP8 | 500 | 8.0035e+0 (8.74e-2) - | 7.8974e+0 (8.66e-2) - | 1.0228e+1 (1.73e-1) - | 1.0024e+1 (2.20e-1) - | 2.7611e+0 (5.83e-1)
+/−/= | | 0/24/0 | 0/24/0 | 0/24/0 | 0/24/0 |
Table A3. IGDX values obtained by various algorithms on SMMOP1-SMMOP8 with 6 equivalent PSs (Part I). The best result in each row is highlighted in bold.
Problem | D | MP-MMEA | HHC-MMEA | DN-NSGA-II | MO_Ring_PSO_SCD | MASR-MMEA
SMMOP1 | 100 | 9.7345e-1 (4.48e-1) - | 2.7030e-1 (1.37e-1) - | 3.7502e+0 (1.32e-2) - | 5.9342e+0 (6.46e-2) - | 1.9128e-1 (1.48e-1)
SMMOP2 | 100 | 1.0859e+0 (3.11e-1) - | 2.1617e-1 (1.04e-1) = | 5.2651e+0 (5.75e-1) - | 7.5018e+0 (1.81e-1) - | 2.0909e-1 (1.74e-1)
SMMOP3 | 100 | 1.7703e+0 (5.04e-1) - | 3.6931e-1 (1.73e-1) + | 5.5116e+0 (6.12e-1) - | 7.3038e+0 (2.21e-1) - | 6.7422e-1 (4.27e-1)
SMMOP4 | 100 | 7.1385e-1 (2.84e-1) - | 2.9138e-1 (1.52e-1) - | 3.7206e+0 (1.77e-2) - | 6.3386e+0 (8.97e-2) - | 2.4262e-1 (2.08e-1)
SMMOP5 | 100 | 7.0377e-1 (2.44e-1) - | 3.5367e-1 (1.37e-1) - | 3.8341e+0 (3.44e-2) - | 6.0951e+0 (8.99e-2) - | 2.0381e-1 (1.46e-1)
SMMOP6 | 100 | 1.0922e+0 (4.66e-1) - | 4.9778e-1 (1.89e-1) = | 3.8124e+0 (1.07e-2) - | 6.3841e+0 (7.58e-2) - | 7.1485e-1 (4.41e-1)
SMMOP7 | 100 | 1.9863e+0 (5.51e-1) - | 4.0807e-1 (1.94e-1) + | 3.9332e+0 (3.10e-2) - | 5.8067e+0 (7.46e-2) - | 1.1490e+0 (4.99e-1)
SMMOP8 | 100 | 2.0794e+0 (3.88e-1) - | 8.2172e-1 (2.18e-1) + | 3.9447e+0 (2.48e-2) - | 5.9415e+0 (8.22e-2) - | 1.1518e+0 (3.06e-1)
SMMOP1 | 200 | 1.7560e+0 (5.83e-1) - | 1.0882e+0 (1.78e-1) - | 9.2870e+0 (6.90e-2) - | 5.3382e+0 (1.73e-2) - | 4.6280e-1 (2.59e-1)
SMMOP2 | 200 | 1.7899e+0 (5.25e-1) - | 1.1839e+0 (4.60e-1) - | 1.0917e+1 (2.10e-1) - | 9.9155e+0 (7.59e-1) - | 5.0743e-1 (2.99e-1)
SMMOP3 | 200 | 2.9591e+0 (5.50e-1) - | 1.5321e+0 (3.74e-1) = | 1.0860e+1 (2.69e-1) - | 1.0231e+1 (7.72e-1) - | 1.3753e+0 (5.46e-1)
SMMOP4 | 200 | 1.2341e+0 (3.67e-1) - | 1.2724e+0 (2.87e-1) - | 9.8350e+0 (6.84e-2) - | 5.3218e+0 (2.23e-2) - | 4.2234e-1 (2.02e-1)
SMMOP5 | 200 | 1.5868e+0 (5.55e-1) - | 1.2653e+0 (2.12e-1) - | 9.5371e+0 (5.90e-2) - | 5.6487e+0 (4.92e-2) - | 4.0819e-1 (2.58e-1)
SMMOP6 | 200 | 2.2113e+0 (5.01e-1) - | 1.7450e+0 (1.86e-1) - | 9.8589e+0 (1.04e-1) - | 5.3928e+0 (8.82e-3) - | 1.2986e+0 (4.91e-1)
SMMOP7 | 200 | 3.2023e+0 (6.43e-1) - | 1.3385e+0 (3.06e-1) + | 9.1777e+0 (1.04e-1) - | 5.7180e+0 (3.10e-2) - | 1.8395e+0 (5.50e-1)
SMMOP8 | 200 | 3.3584e+0 (5.88e-1) - | 2.2428e+0 (2.62e-1) - | 9.2691e+0 (7.96e-2) - | 5.7159e+0 (3.68e-2) - | 1.7540e+0 (3.97e-1)
SMMOP1 | 500 | 4.3189e+0 (6.26e-1) - | 5.2167e+0 (4.29e-1) - | 1.6374e+1 (2.38e-1) - | 8.4914e+0 (6.36e-2) - | 2.014e+0 (7.90e-1)
SMMOP2 | 500 | 4.5447e+0 (7.82e-1) - | 4.4129e+0 (4.37e-1) - | 1.8031e+1 (4.71e-1) - | 2.0719e+1 (8.66e-1) - | 2.1136e+0 (5.80e-1)
SMMOP3 | 500 | 5.5325e+0 (7.37e-1) - | 5.0509e+0 (5.51e-1) - | 1.8150e+1 (3.50e-1) - | 2.1238e+1 (8.05e-1) - | 3.5194e+0 (7.87e-1)
SMMOP4 | 500 | 4.1007e+0 (5.47e-1) - | 5.3318e+0 (2.57e-1) - | 1.7139e+1 (2.56e-1) - | 8.5364e+0 (5.60e-2) - | 2.0131e+0 (6.07e-1)
SMMOP5 | 500 | 4.2686e+0 (5.88e-1) - | 5.3089e+0 (2.22e-1) - | 1.6693e+1 (2.90e-1) - | 9.4947e+0 (1.06e-1) - | 2.0532e+0 (6.00e-1)
SMMOP6 | 500 | 5.2012e+0 (7.38e-1) - | 5.4838e+0 (1.19e-1) - | 1.7121e+1 (2.77e-1) - | 8.5726e+0 (2.40e-2) - | 3.2368e+0 (4.60e-1)
SMMOP7 | 500 | 6.0529e+0 (6.76e-1) - | 4.9813e+0 (3.42e-1) - | 1.6272e+1 (1.93e-1) - | 9.5697e+0 (7.23e-2) - | 3.7809e+0 (7.22e-1)
SMMOP8 | 500 | 6.0207e+0 (5.43e-1) - | 5.7797e+0 (2.65e-1) - | 1.6232e+1 (2.32e-1) - | 9.5594e+0 (7.10e-2) - | 4.0224e+0 (6.43e-1)
+/−/= | | 0/24/0 | 4/17/3 | 0/24/0 | 0/24/0 |
Table A4. IGDX values obtained by various algorithms on SMMOP1-SMMOP8 with 6 equivalent PSs (Part II). The best result in each row is highlighted in bold.
Problem | D | SparseEA | MSKEA | LMMODE | CMMOGA_DLF | MASR-MMEA
SMMOP1 | 100 | 3.7372e+0 (4.82e-2) - | 3.8017e+0 (1.18e-2) - | 3.6026e+0 (1.39e-1) - | 2.8100e+0 (3.95e-1) - | 1.9128e-1 (1.48e-1)
SMMOP2 | 100 | 3.7022e+0 (5.21e-2) - | 3.8027e+0 (1.36e-2) - | 1.1543e+1 (6.58e-1) - | 3.9713e+0 (5.07e-1) - | 2.0909e-1 (1.74e-1)
SMMOP3 | 100 | 3.8009e+0 (2.48e-2) - | 3.8362e+0 (1.91e-3) - | 1.1837e+1 (6.39e-1) - | 4.1925e+0 (4.90e-1) - | 6.7422e-1 (4.27e-1)
SMMOP4 | 100 | 3.7223e+0 (4.60e-2) - | 3.7989e+0 (9.73e-3) - | 3.8137e+0 (1.88e-1) - | 3.0370e+0 (3.17e-1) - | 2.4262e-1 (2.08e-1)
SMMOP5 | 100 | 3.7066e+0 (4.67e-2) - | 3.7958e+0 (1.25e-2) - | 3.8193e+0 (1.74e-1) - | 3.7933e+0 (1.45e-1) - | 2.0381e-1 (1.46e-1)
SMMOP6 | 100 | 3.8035e+0 (1.95e-2) - | 3.8336e+0 (2.91e-3) - | 4.1158e+0 (1.01e-1) - | 3.0806e+0 (3.71e-1) - | 7.1485e-1 (4.41e-1)
SMMOP7 | 100 | 3.8045e+0 (7.63e-3) - | 3.8382e+0 (1.42e-3) - | 4.0134e+0 (1.54e-1) - | 3.8569e+0 (1.40e-1) - | 1.1490e+0 (4.99e-1)
SMMOP8 | 100 | 3.8020e+0 (1.74e-2) - | 3.8393e+0 (1.28e-3) - | 3.7050e+0 (2.26e-1) - | 3.8124e+0 (1.45e-1) - | 1.1518e+0 (3.06e-1)
SMMOP1 | 200 | 5.3079e+0 (4.71e-2) - | 5.3994e+0 (9.04e-3) - | 5.5189e+0 (1.15e-1) - | 5.0945e+0 (1.74e-1) - | 4.6280e-1 (2.59e-1)
SMMOP2 | 200 | 5.3513e+0 (7.76e-2) - | 5.4024e+0 (6.75e-3) - | 1.5441e+1 (8.50e-1) - | 7.2784e+0 (8.58e-1) - | 5.0743e-1 (2.99e-1)
SMMOP3 | 200 | 5.4056e+0 (6.70e-2) - | 5.4269e+0 (2.08e-3) - | 1.5590e+1 (9.41e-1) - | 7.6269e+0 (8.09e-1) - | 1.3753e+0 (5.46e-1)
SMMOP4 | 200 | 5.3234e+0 (6.22e-2) - | 5.3912e+0 (1.27e-2) - | 5.8900e+0 (1.55e-1) - | 5.4175e+0 (4.66e-1) - | 4.2234e-1 (2.02e-1)
SMMOP5 | 200 | 5.3343e+0 (5.05e-2) - | 5.3996e+0 (1.99e-2) - | 5.6803e+0 (1.40e-1) - | 5.8077e+0 (1.28e-1) - | 4.0819e-1 (2.58e-1)
SMMOP6 | 200 | 5.3797e+0 (2.15e-2) - | 5.4239e+0 (2.67e-3) - | 6.2864e+0 (1.22e-1) - | 5.2485e+0 (5.05e-1) - | 1.2986e+0 (4.91e-1)
SMMOP7 | 200 | 5.3907e+0 (1.49e-2) - | 5.4329e+0 (1.64e-2) - | 5.9643e+0 (1.36e-1) - | 5.8498e+0 (1.67e-1) - | 1.8395e+0 (5.50e-1)
SMMOP8 | 200 | 5.3978e+0 (4.74e-2) - | 5.4297e+0 (1.10e-3) - | 5.5922e+0 (1.78e-1) - | 5.8187e+0 (1.90e-1) - | 1.7540e+0 (3.97e-1)
SMMOP1 | 500 | 8.5727e+0 (7.76e-2) - | 8.5758e+0 (2.68e-2) - | 9.5916e+0 (1.59e-1) - | 8.8795e+0 (4.89e-1) - | 2.014e+0 (7.90e-1)
SMMOP2 | 500 | 8.5969e+0 (1.12e-1) - | 8.5727e+0 (3.10e-2) - | 2.1420e+1 (1.16e+0) - | 1.2676e+1 (8.66e-1) - | 2.1136e+0 (5.80e-1)
SMMOP3 | 500 | 8.7225e+0 (1.25e-1) - | 8.6100e+0 (3.90e-2) - | 2.1546e+1 (1.11e+0) - | 1.2862e+1 (7.11e-1) - | 3.5194e+0 (7.87e-1)
SMMOP4 | 500 | 8.5709e+0 (9.21e-2) - | 8.5853e+0 (3.78e-2) - | 1.0421e+1 (2.33e-1) - | 9.1403e+0 (6.91e-1) - | 2.0131e+0 (6.07e-1)
SMMOP5 | 500 | 8.5806e+0 (9.14e-2) - | 8.5807e+0 (3.69e-2) - | 9.8896e+0 (1.13e-1) - | 9.3563e+0 (2.31e-1) - | 2.0532e+0 (6.00e-1)
SMMOP6 | 500 | 8.6528e+0 (1.15e-1) - | 8.6243e+0 (4.08e-2) - | 1.0924e+1 (2.31e-1) - | 9.4130e+0 (9.06e-1) - | 3.2368e+0 (4.60e-1)
SMMOP7 | 500 | 8.7187e+0 (1.19e-1) - | 8.6404e+0 (4.10e-2) - | 1.0113e+1 (1.64e-1) - | 9.2413e+0 (2.55e-1) - | 3.7809e+0 (7.22e-1)
SMMOP8 | 500 | 8.6553e+0 (1.04e-1) - | 8.6224e+0 (4.40e-2) - | 9.8571e+0 (1.57e-1) - | 9.5258e+0 (3.24e-1) - | 4.0224e+0 (6.43e-1)
+/−/= | | 0/24/0 | 0/24/0 | 0/24/0 | 0/24/0 |
Table A5. IGDX values obtained by various algorithms on SMMOP1-SMMOP8 with 4, 6, 8 equivalent PSs (Part I). The best result in each row is highlighted in bold.
Problem | np | D | MP-MMEA | HHC-MMEA | MO_Ring_PSO_SCD | DN-NSGA-II | MASR-MMEA
SMMOP1 | 4 | 500 | 2.9461e+0 (4.20e-1) - | 4.5552e+0 (4.20e-1) - | 1.6310e+1 (1.74e-1) - | 7.7364e+0 (5.11e-2) - | 6.8094e-1 (2.22e-1)
SMMOP2 | 4 | 500 | 2.8773e+0 (5.30e-1) - | 3.8229e+0 (1.17e+0) - | 1.8101e+1 (3.40e-1) - | 2.3144e+1 (5.87e-1) - | 7.5096e-1 (4.71e-1)
SMMOP3 | 4 | 500 | 3.7201e+0 (9.39e-1) - | 4.6911e+0 (8.10e-1) - | 1.7917e+1 (4.63e-1) - | 2.3862e+1 (6.13e-1) - | 2.0137e+0 (2.94e-1)
SMMOP4 | 4 | 500 | 3.1952e+0 (2.10e-1) - | 4.5826e+0 (3.72e-1) - | 1.7150e+1 (1.83e-1) - | 7.8032e+0 (7.87e-2) - | 1.0266e+0 (5.99e-1)
SMMOP5 | 4 | 500 | 3.2697e+0 (1.68e-1) - | 4.7510e+0 (4.57e-1) - | 1.6523e+1 (2.69e-1) - | 9.1196e+0 (1.09e-1) - | 1.1532e+0 (5.25e-1)
SMMOP6 | 4 | 500 | 3.7158e+0 (8.42e-1) - | 5.1628e+0 (2.23e-1) - | 1.7065e+1 (3.29e-1) - | 7.8978e+0 (7.20e-2) - | 2.1232e+0 (4.33e-1)
SMMOP7 | 4 | 500 | 4.5286e+0 (1.16e+0) - | 4.3086e+0 (3.64e-1) - | 1.6076e+1 (1.89e-1) - | 8.9965e+0 (6.87e-2) - | 2.0522e+0 (5.08e-1)
SMMOP8 | 4 | 500 | 4.5375e+0 (6.88e-1) - | 5.5221e+0 (5.57e-1) - | 1.6087e+1 (1.65e-1) - | 9.0192e+0 (7.95e-2) - | 2.6711e+0 (5.83e-1)
SMMOP1 | 6 | 500 | 4.3189e+0 (6.26e-1) - | 5.2167e+0 (4.29e-1) - | 1.6374e+1 (2.38e-1) - | 8.4914e+0 (6.36e-2) - | 2.1114e+0 (7.90e-1)
SMMOP2 | 6 | 500 | 4.5447e+0 (7.82e-1) - | 4.4129e+0 (4.37e-1) - | 1.8031e+1 (4.71e-1) - | 2.0719e+1 (8.66e-1) - | 2.2036e+0 (5.80e-1)
SMMOP3 | 6 | 500 | 5.5325e+0 (7.37e-1) - | 5.0509e+0 (5.51e-1) - | 1.8150e+1 (3.50e-1) - | 2.1238e+1 (8.05e-1) - | 3.5194e+0 (7.87e-1)
SMMOP4 | 6 | 500 | 4.1007e+0 (5.47e-1) - | 5.3318e+0 (2.57e-1) - | 1.7139e+1 (2.56e-1) - | 8.5364e+0 (5.60e-2) - | 2.0131e+0 (6.07e-1)
SMMOP5 | 6 | 500 | 4.2686e+0 (5.88e-1) - | 5.3089e+0 (2.22e-1) - | 1.6693e+1 (2.90e-1) - | 9.4947e+0 (1.06e-1) - | 2.0532e+0 (6.00e-1)
SMMOP6 | 6 | 500 | 5.2012e+0 (7.38e-1) - | 5.4838e+0 (1.19e-1) - | 1.7121e+1 (2.77e-1) - | 8.5726e+0 (2.40e-2) - | 3.2368e+0 (4.60e-1)
SMMOP7 | 6 | 500 | 6.0529e+0 (6.76e-1) - | 4.9813e+0 (3.42e-1) - | 1.6272e+1 (1.93e-1) - | 9.5697e+0 (7.23e-2) - | 3.7809e+0 (7.22e-1)
SMMOP8 | 6 | 500 | 6.0207e+0 (5.43e-1) - | 5.7797e+0 (2.65e-1) - | 1.6232e+1 (2.32e-1) - | 9.5594e+0 (7.10e-2) - | 4.0224e+0 (6.43e-1)
SMMOP1 | 8 | 500 | 5.4735e+0 (7.95e-1) - | 5.5678e+0 (2.17e-1) - | 1.6393e+1 (2.29e-1) - | 8.8323e+0 (7.92e-2) - | 3.3729e+0 (5.64e-1)
SMMOP2 | 8 | 500 | 5.5305e+0 (5.84e-1) - | 5.1211e+0 (7.04e-1) - | 1.8036e+1 (4.42e-1) - | 1.8420e+1 (7.10e-1) - | 3.2861e+0 (5.67e-1)
SMMOP3 | 8 | 500 | 6.7319e+0 (6.50e-1) - | 5.3189e+0 (3.18e-1) - | 1.7989e+1 (5.32e-1) - | 1.9095e+1 (7.18e-1) - | 4.5030e+0 (6.43e-1)
SMMOP4 | 8 | 500 | 4.9510e+0 (4.88e-1) - | 5.6880e+0 (1.79e-1) - | 1.7129e+1 (2.38e-1) - | 8.8994e+0 (6.70e-2) - | 3.3106e+0 (6.69e-1)
SMMOP5 | 8 | 500 | 2.5692e+0 (7.54e-1) - | 5.6244e+0 (1.90e-1) - | 1.6625e+1 (2.76e-1) - | 8.8543e+0 (8.34e-2) - | 1.2242e+0 (4.30e-1)
SMMOP6 | 8 | 500 | 6.1969e+0 (8.43e-1) - | 5.8609e+0 (1.66e-1) - | 1.7184e+1 (2.63e-1) - | 8.9711e+0 (1.85e-2) - | 4.2785e+0 (5.68e-1)
SMMOP7 | 8 | 500 | 7.0664e+0 (6.89e-1) - | 5.2678e+0 (2.51e-1) - | 1.6391e+1 (1.62e-1) - | 9.8715e+0 (7.42e-2) - | 5.0512e+0 (4.30e-1)
SMMOP8 | 8 | 500 | 6.6190e+0 (7.00e-1) - | 6.0049e+0 (1.41e-1) - | 1.6305e+1 (1.76e-1) - | 9.8741e+0 (6.47e-2) - | 4.7933e+0 (4.88e-1)
+/−/= | | | 0/24/0 | 0/24/0 | 0/24/0 | 0/24/0 |
Table A6. IGDX values obtained by various algorithms on SMMOP1-SMMOP8 with 4, 6, 8 equivalent PSs (Part II). The best result in each row is highlighted in bold.
Problem | np | D | SparseEA | MSKEA | MASR-MMEA
SMMOP1 | 4 | 500 | 7.8736e+0 (9.11e-2) - | 7.8164e+0 (8.47e-2) - | 6.8094e-1 (2.22e-1)
SMMOP2 | 4 | 500 | 7.8818e+0 (8.43e-2) - | 7.7813e+0 (6.31e-2) - | 7.5096e-1 (4.71e-1)
SMMOP3 | 4 | 500 | 8.0429e+0 (1.49e-1) - | 7.8448e+0 (7.75e-2) - | 2.0137e+0 (2.94e-1)
SMMOP4 | 4 | 500 | 7.8387e+0 (7.12e-2) - | 7.7967e+0 (8.34e-2) - | 1.0266e+0 (5.99e-1)
SMMOP5 | 4 | 500 | 7.8049e+0 (7.36e-2) - | 7.8057e+0 (8.48e-2) - | 1.1532e+0 (5.25e-1)
SMMOP6 | 4 | 500 | 7.9518e+0 (1.16e-1) - | 7.8973e+0 (7.17e-2) - | 2.1232e+0 (4.33e-1)
SMMOP7 | 4 | 500 | 8.0283e+0 (1.19e-1) - | 7.9004e+0 (8.35e-2) - | 2.0522e+0 (5.08e-1)
SMMOP8 | 4 | 500 | 8.0035e+0 (8.74e-2) - | 7.8974e+0 (8.66e-2) - | 2.6711e+0 (5.83e-1)
SMMOP1 | 6 | 500 | 8.5727e+0 (7.76e-2) - | 8.5758e+0 (2.68e-2) - | 2.1114e+0 (7.90e-1)
SMMOP2 | 6 | 500 | 8.5969e+0 (1.12e-1) - | 8.5727e+0 (3.10e-2) - | 2.2036e+0 (5.80e-1)
SMMOP3 | 6 | 500 | 8.7225e+0 (1.25e-1) - | 8.6100e+0 (3.90e-2) - | 3.5194e+0 (7.87e-1)
SMMOP4 | 6 | 500 | 8.5709e+0 (9.21e-2) - | 8.5853e+0 (3.78e-2) - | 2.0131e+0 (6.07e-1)
SMMOP5 | 6 | 500 | 8.5806e+0 (9.14e-2) - | 8.5807e+0 (3.69e-2) - | 2.0532e+0 (6.00e-1)
SMMOP6 | 6 | 500 | 8.6528e+0 (1.15e-1) - | 8.6243e+0 (4.08e-2) - | 3.2368e+0 (4.60e-1)
SMMOP7 | 6 | 500 | 8.7187e+0 (1.19e-1) - | 8.6404e+0 (4.10e-2) - | 3.7809e+0 (7.22e-1)
SMMOP8 | 6 | 500 | 8.6553e+0 (1.04e-1) - | 8.6224e+0 (4.40e-2) - | 4.0224e+0 (6.43e-1)
SMMOP1 | 8 | 500 | 8.9960e+0 (9.01e-2) - | 8.9789e+0 (1.56e-2) - | 3.3729e+0 (5.64e-1)
SMMOP2 | 8 | 500 | 8.9588e+0 (1.19e-1) - | 8.9790e+0 (1.25e-2) - | 3.2861e+0 (5.67e-1)
SMMOP3 | 8 | 500 | 9.0542e+0 (8.19e-2) - | 9.0075e+0 (1.95e-2) - | 4.5030e+0 (6.43e-1)
SMMOP4 | 8 | 500 | 8.9575e+0 (9.63e-2) - | 8.9729e+0 (1.16e-2) - | 3.3106e+0 (6.69e-1)
SMMOP5 | 8 | 500 | 7.8160e+0 (7.27e-2) - | 7.7468e+0 (3.68e-2) - | 1.2242e+0 (4.30e-1)
SMMOP6 | 8 | 500 | 9.0124e+0 (1.23e-1) - | 8.9974e+0 (1.92e-2) - | 4.2785e+0 (5.68e-1)
SMMOP7 | 8 | 500 | 9.0454e+0 (1.05e-1) - | 9.0116e+0 (2.08e-2) - | 5.0512e+0 (4.30e-1)
SMMOP8 | 8 | 500 | 8.9144e+0 (1.18e-1) - | 9.0075e+0 (1.92e-2) - | 4.7933e+0 (4.88e-1)
+/−/= | | | 0/24/0 | 0/24/0 |
Table A7. IGD values obtained by various algorithms on SMMOP1-SMMOP8 with 4 equivalent PSs (Part I). The best result in each row is highlighted in bold.
Problem | D | MP-MMEA | HHC-MMEA | MO_Ring_PSO_SCD | MASR-MMEA
SMMOP1 | 100 | 2.6535e-3 (4.27e-4) + | 1.7463e-3 (3.47e-4) + | 7.4965e-1 (1.08e-2) - | 5.3183e-3 (1.07e-3)
SMMOP2 | 100 | 2.9874e-3 (3.67e-4) + | 1.6070e-3 (2.37e-4) + | 2.0301e+0 (2.29e-2) - | 5.1625e-3 (1.29e-3)
SMMOP3 | 100 | 4.4374e-3 (1.17e-3) + | 1.7955e-3 (3.54e-4) + | 2.1004e+0 (2.37e-2) - | 1.1246e-2 (5.05e-3)
SMMOP4 | 100 | 2.9300e-3 (4.12e-4) + | 2.0064e-3 (7.11e-4) + | 3.6976e-1 (3.89e-3) - | 5.2911e-3 (4.00e-4)
SMMOP5 | 100 | 2.8852e-3 (3.97e-4) + | 1.9255e-3 (5.36e-4) + | 3.6174e-1 (5.48e-3) - | 5.2092e-3 (4.08e-4)
SMMOP6 | 100 | 3.4504e-3 (6.45e-4) + | 2.8252e-3 (1.43e-3) + | 4.0424e-1 (5.47e-3) - | 8.6154e-3 (1.89e-3)
SMMOP7 | 100 | 3.7606e-3 (7.92e-4) + | 2.1633e-3 (4.27e-4) + | 9.3392e-1 (1.51e-2) - | 1.4345e-2 (6.26e-3)
SMMOP8 | 100 | 4.6457e-3 (1.95e-3) + | 2.9630e-3 (9.50e-4) + | 8.7713e-1 (1.10e-2) - | 1.5657e-2 (8.50e-3)
SMMOP1 | 200 | 3.9339e-3 (4.77e-4) + | 3.3987e-3 (9.29e-4) + | 8.6003e-1 (9.59e-3) - | 6.2139e-3 (8.93e-4)
SMMOP2 | 200 | 4.4837e-3 (6.34e-4) + | 2.6846e-3 (7.31e-4) + | 2.0948e+0 (1.34e-2) - | 6.0765e-3 (1.19e-3)
SMMOP3 | 200 | 9.0102e-3 (2.42e-3) + | 4.2077e-3 (2.15e-3) + | 2.1756e+0 (1.14e-2) - | 1.1487e-2 (5.33e-3)
SMMOP4 | 200 | 3.7343e-3 (4.34e-4) + | 2.9477e-3 (1.07e-3) + | 3.9627e-1 (3.12e-3) - | 6.1050e-3 (2.89e-4)
SMMOP5 | 200 | 3.6821e-3 (4.89e-4) + | 3.3440e-3 (1.23e-3) + | 4.2140e-1 (6.19e-3) - | 6.1994e-3 (3.60e-4)
SMMOP6 | 200 | 5.2639e-3 (1.36e-3) + | 7.4593e-3 (2.86e-3) = | 4.4210e-1 (5.10e-3) - | 8.5517e-3 (2.89e-3)
SMMOP7 | 200 | 6.9965e-3 (3.75e-3) + | 7.2076e-3 (4.09e-3) + | 1.1013e+0 (1.48e-2) - | 1.3756e-2 (6.81e-3)
SMMOP8 | 200 | 1.0094e-2 (4.66e-3) + | 1.0388e-2 (3.85e-3) + | 1.0130e+0 (1.24e-2) - | 1.8444e-2 (9.38e-3)
SMMOP1 | 500 | 1.0399e-2 (2.40e-3) - | 2.7242e-2 (8.19e-3) - | 1.0258e+0 (1.15e-2) - | 7.4726e-3 (4.14e-4)
SMMOP2 | 500 | 9.4700e-3 (2.20e-3) - | 9.6629e-3 (5.28e-3) = | 2.1125e+0 (8.28e-3) - | 7.1695e-3 (7.30e-4)
SMMOP3 | 500 | 2.6246e-2 (6.65e-3) - | 4.2189e-2 (1.54e-2) - | 2.2000e+0 (1.08e-2) - | 2.3844e-2 (4.07e-3)
SMMOP4 | 500 | 7.8683e-3 (8.12e-4) - | 1.5551e-2 (2.53e-3) - | 4.2728e-1 (2.31e-3) - | 7.2379e-3 (8.79e-4)
SMMOP5 | 500 | 8.3333e-3 (1.11e-3) - | 1.6771e-2 (3.62e-3) - | 5.1897e-1 (7.03e-3) - | 7.1165e-3 (7.16e-4)
SMMOP6 | 500 | 1.4926e-2 (3.54e-3) = | 3.8779e-2 (7.77e-3) - | 4.7865e-1 (3.06e-3) - | 1.3862e-2 (3.74e-3)
SMMOP7 | 500 | 3.3410e-2 (7.67e-3) = | 6.9908e-2 (1.84e-2) - | 1.3719e+0 (1.80e-2) - | 2.9276e-2 (9.06e-3)
SMMOP8 | 500 | 3.3146e-2 (5.21e-3) = | 5.7682e-2 (1.53e-2) - | 1.2550e+0 (1.26e-2) - | 3.3033e-2 (9.66e-3)
+/−/= | | 24/0/0 | 24/0/0 | 24/0/0 |
Table A8. IGD values obtained by various algorithms on SMMOP1-SMMOP8 with 4 equivalent PSs (Part II). The best result in each row is highlighted in bold.
Problem | D | DN-NSGA-II | SparseEA | MSKEA | MASR-MMEA
SMMOP1 | 100 | 2.6018e-3 (8.34e-5) + | 1.1972e-3 (9.51e-5) + | 9.2136e-4 (2.17e-6) + | 5.3183e-3 (1.07e-3)
SMMOP2 | 100 | 1.0816e-1 (3.13e-2) - | 1.4263e-3 (2.11e-4) + | 9.2144e-4 (1.87e-6) + | 5.1625e-3 (1.29e-3)
SMMOP3 | 100 | 1.1100e-1 (3.72e-2) - | 1.7380e-3 (8.83e-4) + | 9.2150e-4 (2.63e-6) + | 1.1246e-2 (5.05e-3)
SMMOP4 | 100 | 3.0921e-3 (1.30e-4) + | 1.2029e-3 (3.63e-5) + | 1.0285e-3 (9.18e-6) + | 5.2911e-3 (4.00e-4)
SMMOP5 | 100 | 6.5908e-3 (6.86e-4) - | 1.2203e-3 (3.17e-5) + | 1.0269e-3 (8.44e-6) + | 5.2092e-3 (4.08e-4)
SMMOP6 | 100 | 2.9585e-3 (1.33e-4) + | 1.2136e-3 (4.85e-5) + | 1.0264e-3 (9.06e-6) + | 8.6154e-3 (1.89e-3)
SMMOP7 | 100 | 8.6335e-3 (1.42e-3) + | 1.6861e-3 (1.90e-4) + | 1.0195e-3 (4.10e-6) + | 1.4345e-2 (6.26e-3)
SMMOP8 | 100 | 8.6800e-3 (1.27e-3) + | 2.2438e-3 (1.06e-3) + | 1.0193e-3 (3.85e-6) + | 1.5657e-2 (8.50e-3)
SMMOP1 | 200 | 2.7808e-3 (7.43e-5) + | 1.5171e-3 (6.07e-4) + | 9.2137e-4 (2.53e-6) + | 6.2139e-3 (8.93e-4)
SMMOP2 | 200 | 2.0950e-1 (3.28e-2) - | 1.7347e-3 (3.43e-4) + | 9.2096e-4 (2.89e-6) + | 6.0765e-3 (1.19e-3)
SMMOP3 | 200 | 2.3633e-1 (3.70e-2) - | 5.6382e-3 (2.31e-3) + | 1.4051e-3 (1.84e-3) + | 1.1487e-2 (5.33e-3)
SMMOP4 | 200 | 3.2798e-3 (1.21e-4) + | 1.3327e-3 (3.93e-5) + | 1.0257e-3 (9.23e-6) + | 6.1050e-3 (2.89e-4)
SMMOP5 | 200 | 8.7662e-3 (6.38e-4) - | 1.3675e-3 (4.87e-5) + | 1.0258e-3 (7.43e-6) + | 6.1994e-3 (3.60e-4)
SMMOP6 | 200 | 3.1730e-3 (1.03e-4) + | 1.9956e-3 (1.21e-3) + | 1.0281e-3 (1.02e-5) + | 8.5517e-3 (2.89e-3)
SMMOP7 | 200 | 1.7260e-2 (1.14e-3) = | 2.1939e-3 (8.24e-4) + | 2.0686e-3 (3.20e-3) + | 1.3756e-2 (6.81e-3)
SMMOP8 | 200 | 1.8020e-2 (1.08e-3) = | 3.6776e-3 (2.01e-3) + | 1.1782e-3 (8.77e-4) + | 1.8444e-2 (9.38e-3)
SMMOP1 | 500 | 4.2043e-3 (3.89e-4) + | 4.6775e-3 (1.89e-3) + | 1.6543e-3 (1.09e-3) + | 7.4726e-3 (4.14e-4)
SMMOP2 | 500 | 3.6411e-1 (2.08e-2) - | 4.9703e-3 (1.54e-3) + | 1.1748e-3 (4.28e-4) + | 7.1695e-3 (7.30e-4)
SMMOP3 | 500 | 3.9087e-1 (2.31e-2) - | 1.8315e-2 (4.15e-3) + | 2.8355e-3 (1.89e-3) + | 2.3844e-2 (4.07e-3)
SMMOP4 | 500 | 5.0755e-3 (5.40e-4) + | 2.7394e-3 (5.78e-4) + | 1.2338e-3 (3.42e-4) + | 7.2379e-3 (8.79e-4)
SMMOP5 | 500 | 1.7423e-2 (7.96e-4) - | 2.5422e-3 (6.70e-4) + | 1.2605e-3 (3.81e-4) + | 7.1165e-3 (7.16e-4)
SMMOP6 | 500 | 5.7934e-3 (7.92e-4) + | 5.7592e-3 (1.74e-3) + | 2.8044e-3 (1.63e-3) + | 1.3862e-2 (3.74e-3)
SMMOP7 | 500 | 3.4293e-2 (1.64e-3) - | 1.7044e-2 (6.47e-3) + | 6.9944e-3 (5.85e-3) + | 2.9276e-2 (9.06e-3)
SMMOP8 | 500 | 3.5284e-2 (2.23e-3) = | 1.5129e-2 (3.58e-3) + | 3.5752e-3 (2.13e-3) + | 3.3033e-2 (9.66e-3)
+/−/= | | 12/9/3 | 24/0/0 | 24/0/0 | 0/24/0

References

  1. Han, Y.; Gong, D.; Jin, Y.; Pan, Q. Evolutionary Multiobjective Blocking Lot-Streaming Flow Shop Scheduling with Machine Breakdowns. IEEE Trans. Cybern. 2019, 49, 184–197. [Google Scholar] [CrossRef] [PubMed]
  2. Babaee Tirkolaee, E.; Goli, A.; Weber, G.W. Fuzzy Mathematical Programming and Self-Adaptive Artificial Fish Swarm Algorithm for Just-in-Time Energy-Aware Flow Shop Scheduling Problem with Outsourcing Option. IEEE Trans. Fuzzy Syst. 2020, 28, 2772–2783. [Google Scholar] [CrossRef]
  3. Chen, M.R.; Zeng, G.Q.; Lu, K.D. A many-objective population extremal optimization algorithm with an adaptive hybrid mutation operation. Inf. Sci. 2019, 498, 62–90. [Google Scholar] [CrossRef]
  4. Yue, C.; Qu, B.; Liang, J. A Multiobjective Particle Swarm Optimizer Using Ring Topology for Solving Multimodal Multiobjective Problems. IEEE Trans. Evol. Comput. 2018, 22, 805–817. [Google Scholar] [CrossRef]
  5. Abdel-Basset, M.; Mohamed, R.; Mirjalili, S. A novel Whale Optimization Algorithm integrated with Nelder–Mead simplex for multi-objective optimization problems. Knowl.-Based Syst. 2021, 212, 106619. [Google Scholar] [CrossRef]
  6. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  7. Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  8. Liu, Y.; Yen, G.G.; Gong, D. A Multimodal Multiobjective Evolutionary Algorithm Using Two-Archive and Recombination Strategies. IEEE Trans. Evol. Comput. 2019, 23, 660–674. [Google Scholar] [CrossRef]
  9. Rudolph, G.; Naujoks, B.; Preuss, M. Capabilities of EMOA to Detect and Preserve Equivalent Pareto Subsets. In Evolutionary Multi-Criterion Optimization; Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T., Murata, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 36–50. [Google Scholar]
  10. Schutze, O.; Vasile, M.; Coello, C.A.C. Computing the Set of Epsilon-Efficient Solutions in Multiobjective Space Mission Design. J. Aerosp. Comput. Inf. Commun. 2011, 8, 53–70. [Google Scholar] [CrossRef]
  11. Kudo, F.; Yoshikawa, T.; Furuhashi, T. A study on analysis of design variables in Pareto solutions for conceptual design optimization problem of hybrid rocket engine. In Proceedings of the 2011 IEEE Congress of Evolutionary Computation (CEC), New Orleans, LA, USA, 5–8 June 2011; pp. 2558–2562. [Google Scholar] [CrossRef]
  12. Sebag, M.; Tarrisson, N.; Teytaud, O.; Lefevre, J.; Baillet, S. A multi-objective multi-modal optimization approach for mining stable spatio-temporal patterns. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, San Francisco, CA, USA, 30 July–5 August 2005; IJCAI’05. pp. 859–864. [Google Scholar]
  13. Tanabe, R.; Ishibuchi, H. A Review of Evolutionary Multimodal Multiobjective Optimization. IEEE Trans. Evol. Comput. 2020, 24, 193–200. [Google Scholar] [CrossRef]
  14. Deb, K.; Tiwari, S. Omni-optimizer: A Procedure for Single and Multi-objective Optimization. In Evolutionary Multi-Criterion Optimization; Coello, C.A., Hernández Aguirre, A., Zitzler, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 47–61. [Google Scholar]
  15. Deb, K.; Tiwari, S. Omni-optimizer: A generic evolutionary algorithm for single and multi-objective optimization. Eur. J. Oper. Res. 2008, 185, 1062–1087. [Google Scholar] [CrossRef]
  16. Liu, Y.; Ishibuchi, H.; Nojima, Y.; Masuyama, N.; Shang, K. A Double-Niched Evolutionary Algorithm and Its Behavior on Polygon-Based Problems. In Parallel Problem Solving from Nature—PPSN XV; Auger, A., Fonseca, C.M., Lourenço, N., Machado, P., Paquete, L., Whitley, D., Eds.; Springer: Cham, Switzerland, 2018; pp. 262–273. [Google Scholar]
  17. Liang, J.J.; Yue, C.T.; Qu, B.Y. Multimodal multi-objective optimization: A preliminary study. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 2454–2461. [Google Scholar] [CrossRef]
  18. Liu, Y.; Ishibuchi, H.; Yen, G.G.; Nojima, Y.; Masuyama, N. Handling Imbalance Between Convergence and Diversity in the Decision Space in Evolutionary Multimodal Multiobjective Optimization. IEEE Trans. Evol. Comput. 2020, 24, 551–565. [Google Scholar] [CrossRef]
  19. Li, W.; Zhang, T.; Wang, R.; Ishibuchi, H. Weighted Indicator-Based Evolutionary Algorithm for Multimodal Multiobjective Optimization. IEEE Trans. Evol. Comput. 2021, 25, 1064–1078. [Google Scholar] [CrossRef]
  20. Tian, Y.; Zheng, X.; Zhang, X.; Jin, Y. Efficient Large-Scale Multiobjective Optimization Based on a Competitive Swarm Optimizer. IEEE Trans. Cybern. 2020, 50, 3696–3708. [Google Scholar] [CrossRef] [PubMed]
  21. Yue, C.T.; Liang, J.J.; Qu, B.Y.; Yu, K.J.; Song, H. Multimodal Multiobjective Optimization in Feature Selection. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 302–309. [Google Scholar] [CrossRef]
  22. Zoph, B.; Le, Q.V. Neural Architecture Search with Reinforcement Learning. arXiv 2016, arXiv:1611.01578. [Google Scholar]
  23. Jin, Y.; Sendhoff, B. Pareto-Based Multiobjective Machine Learning: An Overview and Case Studies. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2008, 38, 397–415. [Google Scholar] [CrossRef]
  24. Ma, X.; Liu, F.; Qi, Y.; Wang, X.; Li, L.; Jiao, L.; Yin, M.; Gong, M. A Multiobjective Evolutionary Algorithm Based on Decision Variable Analyses for Multiobjective Optimization Problems with Large-Scale Variables. IEEE Trans. Evol. Comput. 2016, 20, 275–298. [Google Scholar] [CrossRef]
  25. Zhang, X.; Tian, Y.; Cheng, R.; Jin, Y. A Decision Variable Clustering-Based Evolutionary Algorithm for Large-Scale Many-Objective Optimization. IEEE Trans. Evol. Comput. 2018, 22, 97–112. [Google Scholar] [CrossRef]
  26. Tian, Y.; Lu, C.; Zhang, X.; Cheng, F.; Jin, Y. A Pattern Mining-Based Evolutionary Algorithm for Large-Scale Sparse Multiobjective Optimization Problems. IEEE Trans. Cybern. 2022, 52, 6784–6797. [Google Scholar] [CrossRef] [PubMed]
  27. Tian, Y.; Lu, C.; Zhang, X.; Tan, K.C.; Jin, Y. Solving Large-Scale Multiobjective Optimization Problems with Sparse Optimal Solutions via Unsupervised Neural Networks. IEEE Trans. Cybern. 2021, 51, 3115–3128. [Google Scholar] [CrossRef]
  28. Tian, Y.; Zhang, X.; Wang, C.; Jin, Y. An Evolutionary Algorithm for Large-Scale Sparse Multiobjective Optimization Problems. IEEE Trans. Evol. Comput. 2020, 24, 380–393. [Google Scholar] [CrossRef]
  29. Qi, S.; Zou, J.; Yang, S.; Jin, Y.; Zheng, J.; Yang, X. A self-exploratory competitive swarm optimization algorithm for large-scale multiobjective optimization. Inf. Sci. 2022, 609, 1601–1620. [Google Scholar] [CrossRef]
  30. Tian, Y.; Liu, R.; Zhang, X.; Ma, H.; Tan, K.C.; Jin, Y. A Multipopulation Evolutionary Algorithm for Solving Large-Scale Multimodal Multiobjective Optimization Problems. IEEE Trans. Evol. Comput. 2021, 25, 405–418. [Google Scholar] [CrossRef]
  31. Ding, Z.; Cao, L.; Chen, L.; Sun, D.; Zhang, X.; Tao, Z. Large-scale multimodal multiobjective evolutionary optimization based on hybrid hierarchical clustering. Knowl.-Based Syst. 2023, 266, 110398. [Google Scholar] [CrossRef]
  32. Ioannis Giagkiozis, R.C.P.; Fleming, P.J. An overview of population-based algorithms for multi-objective optimisation. Int. J. Syst. Sci. 2015, 46, 1572–1599. [Google Scholar] [CrossRef]
  33. Ding, Z.; Chen, L.; Sun, D.; Zhang, X. A multi-stage knowledge-guided evolutionary algorithm for large-scale sparse multi-objective optimization problems. Swarm Evol. Comput. 2022, 73, 101119. [Google Scholar] [CrossRef]
  34. Li, W.; Yao, X.; Zhang, T.; Wang, R.; Wang, L. Hierarchy Ranking Method for Multimodal Multiobjective Optimization with Local Pareto Fronts. IEEE Trans. Evol. Comput. 2023, 27, 98–110. [Google Scholar] [CrossRef]
  35. Zou, J.; Yang, X.; Deng, Q.; Liu, Y.; Xia, Y.; Wu, Z. A grid self-adaptive exploration-based algorithm for multimodal multiobjective optimization. Appl. Soft Comput. 2024, 166, 112153. [Google Scholar] [CrossRef]
  36. Li, G.; Li, W.; He, L.; Gao, C. A niching differential evolution with Hilbert curve for multimodal multi-objective optimization. Swarm Evol. Comput. 2025, 95, 101952. [Google Scholar] [CrossRef]
  37. Yang, C.; Wu, T.; Ji, J. Two-stage species conservation for multimodal multi-objective optimization with local Pareto sets. Inf. Sci. 2023, 639, 118990. [Google Scholar] [CrossRef]
  38. Cao, J.; Liu, Q.; Chen, Z.; Zhang, J.; Qi, Z. Dual-space distribution metric-based evolutionary algorithm for multimodal multi-objective optimization. Expert Syst. Appl. 2025, 262, 125596. [Google Scholar] [CrossRef]
  39. Liu, H.; Deng, S.; Li, K.; Shi, H.; Li, M. Enhanced goal direction and remodeling of population distributions multimodal multi-objective evolutionary algorithm. Expert Syst. Appl. 2026, 295, 128913. [Google Scholar] [CrossRef]
  40. Liu, Z.; Yang, Y.; Cao, J.; Zhang, J.; Chen, Z.; Liu, Q. A coevolutionary algorithm using Self-organizing map approach for multimodal multi-objective optimization. Appl. Soft Comput. 2024, 164, 111954. [Google Scholar] [CrossRef]
  41. Wu, L.; Zhao, X.; Ye, L.; Qiao, Z.; Zuo, X. Fuzzy clustering-based large-scale multimodal multi-objective differential evolution algorithm. Swarm Evol. Comput. 2025, 93, 101856. [Google Scholar] [CrossRef]
  42. Yue, C.; Wu, W.; Liang, J.; Bi, Y.; Yu, K.; Chen, K.; Guo, W. An improved generalized evolutionary algorithm for constrained multimodal multiobjective optimization. Complex Intell. Syst. 2026, 12, 48. [Google Scholar] [CrossRef]
  43. Wei, Z.; Gao, W.; Gong, M.; Yen, G.G. A Bi-Objective Evolutionary Algorithm for Multimodal Multiobjective Optimization. IEEE Trans. Evol. Comput. 2024, 28, 168–177. [Google Scholar] [CrossRef]
  44. Tanabe, R.; Ishibuchi, H. A Decomposition-Based Evolutionary Algorithm for Multi-modal Multi-objective Optimization. In Parallel Problem Solving from Nature—PPSN XV; Auger, A., Fonseca, C.M., Lourenço, N., Machado, P., Paquete, L., Whitley, D., Eds.; Springer: Cham, Switzerland, 2018; pp. 249–261. [Google Scholar]
  45. Tanabe, R.; Ishibuchi, H. A Framework to Handle Multimodal Multiobjective Optimization in Decomposition-Based Evolutionary Algorithms. IEEE Trans. Evol. Comput. 2020, 24, 720–734. [Google Scholar] [CrossRef]
  46. Sun, Y.; Zhang, S. A decomposition and dynamic niching distance-based dual elite subpopulation evolutionary algorithm for multimodal multiobjective optimization. Expert Syst. Appl. 2023, 231, 120738. [Google Scholar] [CrossRef]
  47. Parsons, L.; Haque, E.; Liu, H. Subspace clustering for high dimensional data: A review. SIGKDD Explor. Newsl. 2004, 6, 90–105. [Google Scholar] [CrossRef]
  48. Zhang, C.; Li, H.; Long, S.; Yue, X.; Ouyang, H.; Zhu, H.; Li, S. MOEA/D-BDN: Multimodal multi-objective evolutionary algorithm based on bi-dynamic niche strategy and adaptive weight decomposition. Swarm Evol. Comput. 2025, 99, 102171. [Google Scholar] [CrossRef]
  49. Yang, X.; Zou, J.; Yang, S.; Zheng, J.; Liu, Y. A Fuzzy Decision Variables Framework for Large-Scale Multiobjective Optimization. IEEE Trans. Evol. Comput. 2023, 27, 445–459. [Google Scholar] [CrossRef]
  50. Wang, S.T.; Zheng, J.H.; Liu, Y.; Zou, J.; Yang, S.X. An extended fuzzy decision variables framework for solving large-scale multiobjective optimization problems. Inf. Sci. 2023, 643, 119221. [Google Scholar] [CrossRef]
  51. Liu, S.; Li, J.; Lin, Q.; Tian, Y.; Tan, K.C. Learning to Accelerate Evolutionary Search for Large-Scale Multiobjective Optimization. IEEE Trans. Evol. Comput. 2023, 27, 67–81. [Google Scholar] [CrossRef]
  52. Deb, K.; Agrawal, R.B. Simulated Binary Crossover for Continuous Search Space. Complex Syst. 1995, 9, 115–148. [Google Scholar]
  53. Deb, K.; Goyal, M. A combined genetic adaptive search (GeneAS) for engineering design. Comput. Sci. Inform. 1996, 26, 30–45. [Google Scholar]
  54. Zhou, A.; Zhang, Q.; Jin, Y. Approximating the Set of Pareto-Optimal Solutions in Both the Decision and Objective Spaces by an Estimation of Distribution Algorithm. IEEE Trans. Evol. Comput. 2009, 13, 1167–1189. [Google Scholar] [CrossRef]
  55. Bosman, P.; Thierens, D. The balance between proximity and diversity in multiobjective evolutionary algorithms. IEEE Trans. Evol. Comput. 2003, 7, 174–188. [Google Scholar] [CrossRef]
  56. Zitzler, E.; Thiele, L. Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 1999, 3, 257–271. [Google Scholar] [CrossRef]
  57. Lalou, M.; Tahraoui, M.A.; Kheddouci, H. The Critical Node Detection Problem in networks: A survey. Comput. Sci. Rev. 2018, 28, 92–117. [Google Scholar] [CrossRef]
  58. Verbiest, N.; Derrac, J.; Cornelis, C.; García, S.; Herrera, F. Evolutionary wrapper approaches for training set selection as preprocessing mechanism for support vector machines: Experimental evaluation and support vector analysis. Appl. Soft Comput. 2016, 38, 10–22. [Google Scholar] [CrossRef]
  59. Zhang, L.; Pan, H.; Su, Y.; Zhang, X.; Niu, Y. A Mixed Representation-Based Multiobjective Evolutionary Algorithm for Overlapping Community Detection. IEEE Trans. Cybern. 2017, 47, 2703–2716. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Illustration of a situation where the two solutions are identical or close to each other in the objective space but are far from each other in the solution space (a minimization problem): (a) Decision space, showing that solutions x a and x b are far apart; (b) Objective space, showing that the corresponding objective values of these two solutions are identical or close to each other.
Figure 1. Illustration of a situation where the two solutions are identical or close to each other in the objective space but are far from each other in the solution space (a minimization problem): (a) Decision space, showing that solutions x a and x b are far apart; (b) Objective space, showing that the corresponding objective values of these two solutions are identical or close to each other.
Electronics 15 00616 g001
Figure 2. The proposed method framework of MASR-MMEA.
Figure 2. The proposed method framework of MASR-MMEA.
Electronics 15 00616 g002
Figure 3. Illustrative example of the dynamic redistribution strategy: (a) Ranking of groups divided from the binary vector matrix, where groups are sorted by the number of non-zero vectors; (b) Schematic diagram of the dynamic redistribution process based on the calculated AdaptiveMask value. Note: The calculated value of AdaptiveMask is 3, which means the entire binary vector matrix is divided into 4 groups (the core numerical setting for the dynamic redistribution strategy).
Figure 3. Illustrative example of the dynamic redistribution strategy: (a) Ranking of groups divided from the binary vector matrix, where groups are sorted by the number of non-zero vectors; (b) Schematic diagram of the dynamic redistribution process based on the calculated AdaptiveMask value. Note: The calculated value of AdaptiveMask is 3, which means the entire binary vector matrix is divided into 4 groups (the core numerical setting for the dynamic redistribution strategy).
Electronics 15 00616 g003
Figure 4. Illustrative example of the affinity-based elite strategy. (a) Distance evaluation between the binary vector X b 1 and real vectors X r 1 , X r 2 , and X r 3 using the Mahalanobis distance. Although X r 1 and X r 2 exhibit identical Euclidean distances to X b 1 , the Mahalanobis distance accounts for the data distribution, resulting in a higher affinity between X b 1 and X r 1 . The colored circles denote different candidate solutions, where the binary vector and real vectors are distinguished by different colors. The arrows indicate the affinity-based selection relationship for offspring generation. (b) Subsequent evolutionary stage, where the real vector X r 3 , although closer to the Pareto front but not selected in (a), remains as an effective parent and is later combined with a more suitable binary vector X b 2 .
Figure 4. Illustrative example of the affinity-based elite strategy. (a) Distance evaluation between the binary vector X b 1 and real vectors X r 1 , X r 2 , and X r 3 using the Mahalanobis distance. Although X r 1 and X r 2 exhibit identical Euclidean distances to X b 1 , the Mahalanobis distance accounts for the data distribution, resulting in a higher affinity between X b 1 and X r 1 . The colored circles denote different candidate solutions, where the binary vector and real vectors are distinguished by different colors. The arrows indicate the affinity-based selection relationship for offspring generation. (b) Subsequent evolutionary stage, where the real vector X r 3 , although closer to the Pareto front but not selected in (a), remains as an effective parent and is later combined with a more suitable binary vector X b 2 .
Electronics 15 00616 g004
Figure 5. Results with the median IGDX values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, SparseEA, MSKEA, MP-MMEA, HHC-MMEA and MASR-MMEA on SMMOP1, SMMOP2, SMMOP5, and SMMOP6 with 100 decision variables and 4 equivalent PSs.
Figure 5. Results with the median IGDX values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, SparseEA, MSKEA, MP-MMEA, HHC-MMEA and MASR-MMEA on SMMOP1, SMMOP2, SMMOP5, and SMMOP6 with 100 decision variables and 4 equivalent PSs.
Electronics 15 00616 g005
Figure 6. Convergence profiles of the IGDX values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, SparseEA, MSKEA, MP-MMEA, HHC-MMEA and MASR-MMEA on SMMOP1–SMMOP8 with 100 decision variables and 4 equivalent PSs.
Figure 6. Convergence profiles of the IGDX values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, SparseEA, MSKEA, MP-MMEA, HHC-MMEA and MASR-MMEA on SMMOP1–SMMOP8 with 100 decision variables and 4 equivalent PSs.
Electronics 15 00616 g006
Figure 7. Results with the median IGDX values obtained by MASR-MMEA_1, MASR-MMEA_2, MASR-MMEA_3 and MASR-MMEA on SMMOP3, SMMOP6, and SMMOP8 with 100 decision variables and 4 equivalent PSs.
Figure 7. Results with the median IGDX values obtained by MASR-MMEA_1, MASR-MMEA_2, MASR-MMEA_3 and MASR-MMEA on SMMOP3, SMMOP6, and SMMOP8 with 100 decision variables and 4 equivalent PSs.
Electronics 15 00616 g007
Figure 8. Convergence profiles of the IGDX values obtained by MASR-MMEA_1, MASR-MMEA_2, MASR-MMEA_3 and MASR-MMEA on SMMOP3, SMMOP6, and SMMOP8 with 100 decision variables and 4 equivalent PSs.
Figure 8. Convergence profiles of the IGDX values obtained by MASR-MMEA_1, MASR-MMEA_2, MASR-MMEA_3 and MASR-MMEA on SMMOP3, SMMOP6, and SMMOP8 with 100 decision variables and 4 equivalent PSs.
Electronics 15 00616 g008
Table 1. IGDX values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, SparseEA, MP-MMEA, HHC-MMEA, LMMODE, CMMOGA_DLF and MASR-MMEA on SMMOP1-SMMOP8 with 4 equivalent PSs. The best result in each row is highlighted in bold.
Table 1. IGDX values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, SparseEA, MP-MMEA, HHC-MMEA, LMMODE, CMMOGA_DLF and MASR-MMEA on SMMOP1-SMMOP8 with 4 equivalent PSs. The best result in each row is highlighted in bold.
ProblemDMPMMEAHHCMMEAMO_Ring_PSO_SCDDNNSGAIISparseEAMSKEALMMODECMMOGA_
DLF
MASR-MMEA
SMMOP11003.4262e-1 (2.28e-1) -3.0936e-1 (2.88e-1) -5.8388e+0 (6.63e-2) -3.3931e+0 (1.49e-2) -3.3974e+0 (3.17e-2) -3.4412e+0 (8.45e-3) -3.5751e+0 (1.10e-1) -2.7494e+0 (3.59e-1) -4.2845e-2 (1.16e-2)
SMMOP21003.8698e-1 (6.20e-2) -2.1168e-1 (2.15e-1) -7.3142e+0 (2.09e-1) -6.3899e+0 (6.35e-1) -3.3842e+0 (4.30e-2) -3.4416e+0 (9.20e-3) -1.1859e+1 (6.61e-1) -4.4228e+0 (8.37e-1) -3.6793e-2 (2.83e-2)
SMMOP31006.9891e-1 (4.21e-1) -3.0060e-1 (3.09e-1) -7.1963e+0 (1.68e-1) -6.4346e+0 (7.43e-1) -3.4510e+0 (3.89e-2) -3.4675e+0 (2.71e-3) -1.1849e+1 (9.03e-1) -4.8078e+0 (7.94e-1) -1.3464e-1 (1.41e-1)
SMMOP41003.1727e-1 (1.01e-1) -2.8843e-1 (2.27e-1) -6.3233e+0 (8.93e-2) -3.3693e+0 (1.61e-2) -3.3744e+0 (3.44e-2) -3.4361e+0 (1.11e-2) -3.6438e+0 (1.19e-1) -3.0826e+0 (4.19e-1) -7.5132e-2 (4.85e-2)
SMMOP51003.2076e-1 (8.17e-2) -2.3373e-1 (1.36e-1) -6.0677e+0 (8.64e-2) -3.5518e+0 (4.39e-2) -3.3815e+0 (3.99e-2) -3.4315e+0 (1.22e-2) -3.6675e+0 (1.65e-1) -2.9943e+0 (2.75e-1) -8.0524e-2 (5.11e-2)
SMMOP61005.4760e-1 (4.78e-1) -3.5769e-1 (1.71e-1) -6.2384e+0 (1.11e-1) -3.4437e+0 (1.11e-2) -3.4306e+0 (1.25e-2) -3.4662e+0 (3.22e-3) -3.7680e+0 (1.09e-1) -3.1536e+0 (4.08e-1) -1.5823e-1 (1.46e-1)
SMMOP71006.4131e-1 (5.10e-1) -3.8089e-1 (4.02e-1) -5.6062e+0 (7.23e-2) -3.5394e+0 (3.31e-2) -3.4341e+0 (7.99e-3) -3.4689e+0 (1.71e-3) -3.8143e+0 (1.91e-1) -3.0328e+0 (2.50e-1) -2.1894e-1 (2.58e-1)
SMMOP81001.1029e+0 (6.06e-1) -5.5143e-1 (3.97e-1) -5.8528e+0 (8.10e-2) -3.5313e+0 (2.80e-2) -3.4458e+0 (3.74e-2) -3.4704e+0 (9.59e-4) -3.7400e+0 (1.21e-1) -3.2197e+0 (2.17e-1) -3.7537e-1 (3.41e-1)
SMMOP12007.7396e-1 (1.07e-1) -9.2270e-1 (2.52e-1) -9.2113e+0 (7.86e-2) -4.8336e+0 (2.19e-2) -4.8637e+0 (4.34e-2) -4.8829e+0 (6.55e-3) -5.6313e+0 (1.38e-1) -4.9087e+0 (2.94e-1) -1.2676e-1 (1.62e-1)
SMMOP22008.4577e-1 (1.79e-1) -9.2762e-1 (5.19e-1) -1.0940e+1 (1.92e-1) -1.1671e+1 (7.17e-1) -4.8673e+0 (7.17e-2) -4.8825e+0 (8.42e-3) -1.5126e+1 (1.01e+0) -8.2305e+0 (1.27e+0) -6.7412e-2 (7.19e-2)
SMMOP32001.4738e+0 (4.87e-1) -1.1286e+0 (5.13e-1) -1.0816e+1 (1.98e-1) -1.2181e+1 (7.71e-1) -4.9325e+0 (9.30e-2) -4.9136e+0 (3.55e-2) -1.5801e+1 (1.08e+0) -8.6252e+0 (1.10e+0) -6.5747e-1 (3.40e-1)
SMMOP42009.1303e-1 (1.78e-1) -9.5634e-1 (3.72e-1) -9.7588e+0 (9.21e-2) -4.8240e+0 (2.14e-2) -4.8088e+0 (3.96e-2) -4.8800e+0 (6.65e-3) -5.7135e+0 (1.74e-1) -5.8068e+0 (6.04e-1) -1.7992e-1 (1.20e-1)
SMMOP52008.8605e-1 (1.98e-1) -9.4876e-1 (2.38e-1) -9.4533e+0 (1.15e-1) -5.2524e+0 (5.40e-2) -4.8244e+0 (5.15e-2) -4.8775e+0 (1.00e-2) -5.6258e+0 (1.38e-1) -5.0778e+0 (5.05e-1) -2.2129e-1 (1.73e-1)
SMMOP62001.3005e+0 (4.81e-1) -1.3137e+0 (2.95e-1) -9.7308e+0 (1.29e-1) -4.8713e+0 (6.49e-3) -4.8883e+0 (6.08e-2) -4.8969e+0 (9.60e-3) -6.0442e+0 (1.57e-1) -5.2675e+0 (5.92e-1) -6.4268e-1 (2.39e-1)
SMMOP72002.1195e+0 (7.68e-1) -1.0386e+0 (2.47e-1) -8.9003e+0 (8.21e-2) -5.2633e+0 (3.99e-2) -4.8862e+0 (3.22e-2) -4.9204e+0 (4.30e-2) -5.9194e+0 (1.43e-1) -5.4506e+0 (4.62e-1) -6.9430e-1 (3.60e-1)
SMMOP82002.0776e+0 (6.18e-1) -1.7249e+0 (2.32e-1) -9.1180e+0 (1.03e-1) -5.2816e+0 (3.75e-2) -4.9187e+0 (6.39e-2) -4.9121e+0 (2.51e-2) -5.6880e+0 (1.86e-1) -5.5831e+0 (2.68e-1) -9.8649e-1 (4.67e-1)
SMMOP15002.9461e+0 (4.20e-1) -4.5552e+0 (4.20e-1) -1.6310e+1 (1.74e-1) -7.7364e+0 (5.11e-2) -7.8736e+0 (9.11e-2) -7.8164e+0 (8.47e-2) -9.9065e+0 (1.52e-1) -9.6053e+0 (5.47e-1) -6.9194e-1 (2.22e-1)
SMMOP25002.8773e+0 (5.30e-1) -3.8229e+0 (1.17e+0) -1.8101e+1 (3.40e-1) -2.3144e+1 (5.87e-1) -7.8818e+0 (8.43e-2) -7.7813e+0 (6.31e-2) -2.1269e+1 (1.22e+0) -1.5700e+1 (1.24e+0) -7.6296e-1 (4.71e-1)
SMMOP35003.7201e+0 (9.39e-1) -4.6911e+0 (8.10e-1) -1.7917e+1 (4.63e-1) -2.3862e+1 (6.13e-1) -8.0429e+0 (1.49e-1) -7.8448e+0 (7.75e-2) -2.0781e+1 (1.24e+0) -1.6089e+1 (1.45e+0) -2.0437e+0 (2.94e-1)
SMMOP45003.1952e+0 (2.10e-1) -4.5826e+0 (3.72e-1) -1.7150e+1 (1.83e-1) -7.8032e+0 (7.87e-2) -7.8387e+0 (7.12e-2) -7.7967e+0 (8.34e-2) -1.0370e+1 (2.64e-1) -1.0310e+1 (9.73e-1) -1.1166e+0 (5.99e-1)
SMMOP55003.2697e+0 (1.68e-1) -4.7510e+0 (4.57e-1) -1.6523e+1 (2.69e-1) -9.1196e+0 (1.09e-1) -7.8049e+0 (7.36e-2) -7.8057e+0 (8.48e-2) -1.0089e+1 (1.66e-1) -9.6278e+0 (2.31e-1) -1.1432e+0 (5.25e-1)
SMMOP65003.7158e+0 (8.42e-1) -5.1628e+0 (2.23e-1) -1.7065e+1 (3.29e-1) -7.8978e+0 (7.20e-2) -7.9518e+0 (1.16e-1) -7.8973e+0 (7.17e-2) -1.0958e+1 (2.02e-1) -1.0025e+1 (9.75e-1) -2.2132e+0 (4.33e-1)
SMMOP75004.5286e+0 (1.16e+0) -4.3086e+0 (3.64e-1) -1.6076e+1 (1.89e-1) -8.9965e+0 (6.87e-2) -8.0283e+0 (1.19e-1) -7.9004e+0 (8.35e-2) -1.0341e+1 (2.08e-1) -9.9188e+0 (2.11e-1) -2.1422e+0 (5.08e-1)
SMMOP85004.5375e+0 (6.88e-1) -5.5221e+0 (5.57e-1) -1.6087e+1 (1.65e-1) -9.0192e+0 (7.95e-2) -8.0035e+0 (8.74e-2) -7.8974e+0 (8.66e-2) -1.0228e+1 (1.73e-1) -1.0024e+1 (2.20e-1) -2.7611e+0 (5.83e-1)
+/−/= 0/24/00/24/00/24/00/24/00/24/00/24/00/24/00/24/0
Table 2. IGDX values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, SparseEA, MP-MMEA, HHC-MMEA, LMMODE, CMMOGA_DLF and MASR-MMEA on SMMOP1-SMMOP8 with 6 equivalent PSs. The best result in each row is highlighted in bold.
Table 2. IGDX values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, SparseEA, MP-MMEA, HHC-MMEA, LMMODE, CMMOGA_DLF and MASR-MMEA on SMMOP1-SMMOP8 with 6 equivalent PSs. The best result in each row is highlighted in bold.
ProblemDMPMMEAHHCMMEADNNSGAIIMO_Ring_
PSO_SCD
SparseEAMSKEALMMODECMMOGA_
DLF
MASR-MMEA
SMMOP11009.7345e-1 (4.48e-1) -2.7030e-1 (1.37e-1) -3.7502e+0 (1.32e-2) -5.9342e+0 (6.46e-2) -3.7372e+0 (4.82e-2) -3.8017e+0 (1.18e-2) -3.6026e+0 (1.39e-1) -2.8100e+0 (3.95e-1) -1.9128e-1 (1.48e-1)
SMMOP21001.0859e+0 (3.11e-1) -2.1617e-1 (1.04e-1) =5.2651e+0 (5.75e-1) -7.5018e+0 (1.81e-1) -3.7022e+0 (5.21e-2) -3.8027e+0 (1.36e-2) -1.1543e+1 (6.58e-1) -3.9713e+0 (5.07e-1) -2.0909e-1 (1.74e-1)
SMMOP31001.7703e+0 (5.04e-1) -3.6931e-1 (1.73e-1) +5.5116e+0 (6.12e-1) -7.3038e+0 (2.21e-1) -3.8009e+0 (2.48e-2) -3.8362e+0 (1.91e-3) -1.1837e+1 (6.39e-1) -4.1925e+0 (4.90e-1) -6.7422e-1 (4.27e-1)
SMMOP41007.1385e-1 (2.84e-1) -2.9138e-1 (1.52e-1) -3.7206e+0 (1.77e-2) -6.3386e+0 (8.97e-2) -3.7223e+0 (4.60e-2) -3.7989e+0 (9.73e-3) -3.8137e+0 (1.88e-1) -3.0370e+0 (3.17e-1) -2.4262e-1 (2.08e-1)
SMMOP51007.0377e-1 (2.44e-1) -3.5367e-1 (1.37e-1) -3.8341e+0 (3.44e-2) -6.0951e+0 (8.99e-2) -3.7066e+0 (4.67e-2) -3.7958e+0 (1.25e-2) -3.8193e+0 (1.74e-1) -3.7933e+0 (1.45e-1) -2.0381e-1 (1.46e-1)
SMMOP61001.0922e+0 (4.66e-1) -4.9778e-1 (1.89e-1) =3.8124e+0 (1.07e-2) -6.3841e+0 (7.58e-2) -3.8035e+0 (1.95e-2) -3.8336e+0 (2.91e-3) -4.1158e+0 (1.01e-1) -3.0806e+0 (3.71e-1) -7.1485e-1 (4.41e-1)
SMMOP71001.9863e+0 (5.51e-1) -4.0807e-1 (1.94e-1) +3.9332e+0 (3.10e-2) -5.8067e+0 (7.46e-2) -3.8045e+0 (7.63e-3) -3.8382e+0 (1.42e-3) -4.0134e+0 (1.54e-1) -3.8569e+0 (1.40e-1) -1.1490e+0 (4.99e-1)
SMMOP81002.0794e+0 (3.88e-1) -8.2172e-1 (2.18e-1) +3.9447e+0 (2.48e-2) -5.9415e+0 (8.22e-2) -3.8020e+0 (1.74e-2) -3.8393e+0 (1.28e-3) -3.7050e+0 (2.26e-1) -3.8124e+0 (1.45e-1) -1.1518e+0 (3.06e-1)
SMMOP12001.7560e+0 (5.83e-1) -1.0882e+0 (1.78e-1) -9.2870e+0 (6.90e-2) -5.3382e+0 (1.73e-2) -5.3079e+0 (4.71e-2) -5.3994e+0 (9.04e-3) -5.5189e+0 (1.15e-1) -5.0945e+0 (1.74e-1) -4.6280e-1 (2.59e-1)
SMMOP22001.7899e+0 (5.25e-1) -1.1839e+0 (4.60e-1) -1.0917e+1 (2.10e-1) -9.9155e+0 (7.59e-1) -5.3513e+0 (7.76e-2) -5.4024e+0 (6.75e-3) -1.5441e+1 (8.50e-1) -7.2784e+0 (8.58e-1) -5.0743e-1 (2.99e-1)
SMMOP32002.9591e+0 (5.50e-1) -1.5321e+0 (3.74e-1) =1.0860e+1 (2.69e-1) -1.0231e+1 (7.72e-1) -5.4056e+0 (6.70e-2) -5.4269e+0 (2.08e-3) -1.5590e+1 (9.41e-1) -7.6269e+0 (8.09e-1) -1.3753e+0 (5.46e-1)
SMMOP42001.2341e+0 (3.67e-1) -1.2724e+0 (2.87e-1) -9.8350e+0 (6.84e-2) -5.3218e+0 (2.23e-2) -5.3234e+0 (6.22e-2) -5.3912e+0 (1.27e-2) -5.8900e+0 (1.55e-1) -5.4175e+0 (4.66e-1) -4.2234e-1 (2.02e-1)
SMMOP52001.5868e+0 (5.55e-1) -1.2653e+0 (2.12e-1) -9.5371e+0 (5.90e-2) -5.6487e+0 (4.92e-2) -5.3343e+0 (5.05e-2) -5.3996e+0 (1.99e-2) -5.6803e+0 (1.40e-1) -5.8077e+0 (1.28e-1) -4.0819e-1 (2.58e-1)
SMMOP62002.2113e+0 (5.01e-1) -1.7450e+0 (1.86e-1) -9.8589e+0 (1.04e-1) -5.3928e+0 (8.82e-3) -5.3797e+0 (2.15e-2) -5.4239e+0 (2.67e-3) -6.2864e+0 (1.22e-1) -5.2485e+0 (5.05e-1) -1.2986e+0 (4.91e-1)
SMMOP72003.2023e+0 (6.43e-1) -1.3385e+0 (3.06e-1) +9.1777e+0 (1.04e-1) -5.7180e+0 (3.10e-2) -5.3907e+0 (1.49e-2) -5.4329e+0 (1.64e-2) -5.9643e+0 (1.36e-1) -5.8498e+0 (1.67e-1) -1.8395e+0 (5.50e-1)
SMMOP82003.3584e+0 (5.88e-1) -2.2428e+0 (2.62e-1) -9.2691e+0 (7.96e-2) -5.7159e+0 (3.68e-2) -5.3978e+0 (4.74e-2) -5.4297e+0 (1.10e-3) -5.5922e+0 (1.78e-1) -5.8187e+0 (1.90e-1) -1.7540e+0 (3.97e-1)
SMMOP15004.3189e+0 (6.26e-1) -5.2167e+0 (4.29e-1) -1.6374e+1 (2.38e-1) -8.4914e+0 (6.36e-2) -8.5727e+0 (7.76e-2) -8.5758e+0 (2.68e-2) -9.5916e+0 (1.59e-1) -8.8795e+0 (4.89e-1) -2.014e+0 (7.90e-1)
SMMOP25004.5447e+0 (7.82e-1) -4.4129e+0 (4.37e-1) -1.8031e+1 (4.71e-1) -2.0719e+1 (8.66e-1) -8.5969e+0 (1.12e-1) -8.5727e+0 (3.10e-2) -2.1420e+1 (1.16e+0) -1.2676e+1 (8.66e-1) -2.1136e+0 (5.80e-1)
SMMOP35005.5325e+0 (7.37e-1) -5.0509e+0 (5.51e-1) -1.8150e+1 (3.50e-1) -2.1238e+1 (8.05e-1) -8.7225e+0 (1.25e-1) -8.6100e+0 (3.90e-2) -2.1546e+1 (1.11e+0) -1.2862e+1 (7.11e-1) -3.5194e+0 (7.87e-1)
SMMOP45004.1007e+0 (5.47e-1) -5.3318e+0 (2.57e-1) -1.7139e+1 (2.56e-1) -8.5364e+0 (5.60e-2) -8.5709e+0 (9.21e-2) -8.5853e+0 (3.78e-2) -1.0421e+1 (2.33e-1) -9.1403e+0 (6.91e-1) -2.0131e+0 (6.07e-1)
SMMOP55004.2686e+0 (5.88e-1) -5.3089e+0 (2.22e-1) -1.6693e+1 (2.90e-1) -9.4947e+0 (1.06e-1) -8.5806e+0 (9.14e-2) -8.5807e+0 (3.69e-2) -9.8896e+0 (1.13e-1) -9.3563e+0 (2.31e-1) -2.0532e+0 (6.00e-1)
SMMOP65005.2012e+0 (7.38e-1) -5.4838e+0 (1.19e-1) -1.7121e+1 (2.77e-1) -8.5726e+0 (2.40e-2) -8.6528e+0 (1.15e-1) -8.6243e+0 (4.08e-2) -1.0924e+1 (2.31e-1) -9.4130e+0 (9.06e-1) -3.2368e+0 (4.60e-1)
SMMOP75006.0529e+0 (6.76e-1) -4.9813e+0 (3.42e-1) -1.6272e+1 (1.93e-1) -9.5697e+0 (7.23e-2) -8.7187e+0 (1.19e-1) -8.6404e+0 (4.10e-2) -1.0113e+1 (1.64e-1) -9.2413e+0 (2.55e-1) -3.7809e+0 (7.22e-1)
SMMOP85006.0207e+0 (5.43e-1) -5.7797e+0 (2.65e-1) -1.6232e+1 (2.32e-1) -9.5594e+0 (7.10e-2) -8.6553e+0 (1.04e-1) -8.6224e+0 (4.40e-2) -9.8571e+0 (1.57e-1) -9.5258e+0 (3.24e-1) -4.0224e+0 (6.43e-1)
+/−/=0/24/04/17/30/24/00/24/00/24/00/24/00/24/00/24/0
Table 3. IGDX values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, SparseEA, MP-MMEA, HHC-MMEA and MASR-MMEA on SMMOP1-SMMOP8 with 4, 6, 8 equivalent PSs. The best result in each row is highlighted in bold.
Table 3. IGDX values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, SparseEA, MP-MMEA, HHC-MMEA and MASR-MMEA on SMMOP1-SMMOP8 with 4, 6, 8 equivalent PSs. The best result in each row is highlighted in bold.
ProblemnpDMPMMEAHHCMMEAMO_Ring_
PSO_SCD
DNNSGAIISparseEAMSKEAMASR-MMEA
SMMOP1 5002.9461e+0 (4.20e-1) -4.5552e+0 (4.20e-1) -1.6310e+1 (1.74e-1) -7.7364e+0 (5.11e-2) -7.8736e+0 (9.11e-2) -7.8164e+0 (8.47e-2) -6.8094e-1 (2.22e-1)
SMMOP2 5002.8773e+0 (5.30e-1) -3.8229e+0 (1.17e+0) -1.8101e+1 (3.40e-1) -2.3144e+1 (5.87e-1) -7.8818e+0 (8.43e-2) -7.7813e+0 (6.31e-2) -7.5096e-1 (4.71e-1)
SMMOP3 5003.7201e+0 (9.39e-1) -4.6911e+0 (8.10e-1) -1.7917e+1 (4.63e-1) -2.3862e+1 (6.13e-1) -8.0429e+0 (1.49e-1) -7.8448e+0 (7.75e-2) -2.0137e+0 (2.94e-1)
SMMOP445003.1952e+0 (2.10e-1) -4.5826e+0 (3.72e-1) -1.7150e+1 (1.83e-1) -7.8032e+0 (7.87e-2) -7.8387e+0 (7.12e-2) -7.7967e+0 (8.34e-2) -1.0266e+0 (5.99e-1)
SMMOP5 5003.2697e+0 (1.68e-1) -4.7510e+0 (4.57e-1) -1.6523e+1 (2.69e-1) -9.1196e+0 (1.09e-1) -7.8049e+0 (7.36e-2) -7.8057e+0 (8.48e-2) -1.1532e+0 (5.25e-1)
SMMOP6 5003.7158e+0 (8.42e-1) -5.1628e+0 (2.23e-1) -1.7065e+1 (3.29e-1) -7.8978e+0 (7.20e-2) -7.9518e+0 (1.16e-1) -7.8973e+0 (7.17e-2) -2.1232e+0 (4.33e-1)
SMMOP7 5004.5286e+0 (1.16e+0) -4.3086e+0 (3.64e-1) -1.6076e+1 (1.89e-1) -8.9965e+0 (6.87e-2) -8.0283e+0 (1.19e-1) -7.9004e+0 (8.35e-2) -2.0522e+0 (5.08e-1)
SMMOP8 5004.5375e+0 (6.88e-1) -5.5221e+0 (5.57e-1) -1.6087e+1 (1.65e-1) -9.0192e+0 (7.95e-2) -8.0035e+0 (8.74e-2) -7.8974e+0 (8.66e-2) -2.6711e+0 (5.83e-1)
SMMOP1 5004.3189e+0 (6.26e-1) -5.2167e+0 (4.29e-1) -1.6374e+1 (2.38e-1) -8.4914e+0 (6.36e-2) -8.5727e+0 (7.76e-2) -8.5758e+0 (2.68e-2) -2.1114e+0 (7.90e-1)
SMMOP2 5004.5447e+0 (7.82e-1) -4.4129e+0 (4.37e-1) -1.8031e+1 (4.71e-1) -2.0719e+1 (8.66e-1) -8.5969e+0 (1.12e-1) -8.5727e+0 (3.10e-2) -2.2036e+0 (5.80e-1)
SMMOP3 5005.5325e+0 (7.37e-1) -5.0509e+0 (5.51e-1) -1.8150e+1 (3.50e-1) -2.1238e+1 (8.05e-1) -8.7225e+0 (1.25e-1) -8.6100e+0 (3.90e-2) -3.5194e+0 (7.87e-1)
SMMOP465004.1007e+0 (5.47e-1) -5.3318e+0 (2.57e-1) -1.7139e+1 (2.56e-1) -8.5364e+0 (5.60e-2) -8.5709e+0 (9.21e-2) -8.5853e+0 (3.78e-2) -2.0131e+0 (6.07e-1)
SMMOP5 5004.2686e+0 (5.88e-1) -5.3089e+0 (2.22e-1) -1.6693e+1 (2.90e-1) -9.4947e+0 (1.06e-1) -8.5806e+0 (9.14e-2) -8.5807e+0 (3.69e-2) -2.0532e+0 (6.00e-1)
SMMOP6 5005.2012e+0 (7.38e-1) -5.4838e+0 (1.19e-1) -1.7121e+1 (2.77e-1) -8.5726e+0 (2.40e-2) -8.6528e+0 (1.15e-1) -8.6243e+0 (4.08e-2) -3.2368e+0 (4.60e-1)
SMMOP7 5006.0529e+0 (6.76e-1) -4.9813e+0 (3.42e-1) -1.6272e+1 (1.93e-1) -9.5697e+0 (7.23e-2) -8.7187e+0 (1.19e-1) -8.6404e+0 (4.10e-2) -3.7809e+0 (7.22e-1)
SMMOP8 5006.0207e+0 (5.43e-1) -5.7797e+0 (2.65e-1) -1.6232e+1 (2.32e-1) -9.5594e+0 (7.10e-2) -8.6553e+0 (1.04e-1) -8.6224e+0 (4.40e-2) -4.0224e+0 (6.43e-1)
SMMOP1 5005.4735e+0 (7.95e-1) -5.5678e+0 (2.17e-1) -1.6393e+1 (2.29e-1) -8.8323e+0 (7.92e-2) -8.9960e+0 (9.01e-2) -8.9789e+0 (1.56e-2) -3.3729e+0 (5.64e-1)
SMMOP2 5005.5305e+0 (5.84e-1) -5.1211e+0 (7.04e-1) -1.8036e+1 (4.42e-1) -1.8420e+1 (7.10e-1) -8.9588e+0 (1.19e-1) -8.9790e+0 (1.25e-2) -3.2861e+0 (5.67e-1)
SMMOP3 5006.7319e+0 (6.50e-1) -5.3189e+0 (3.18e-1) -1.7989e+1 (5.32e-1) -1.9095e+1 (7.18e-1) -9.0542e+0 (8.19e-2) -9.0075e+0 (1.95e-2) -4.5030e+0 (6.43e-1)
SMMOP485004.9510e+0 (4.88e-1) -5.6880e+0 (1.79e-1) -1.7129e+1 (2.38e-1) -8.8994e+0 (6.70e-2) -8.9575e+0 (9.63e-2) -8.9729e+0 (1.16e-2) -3.3106e+0 (6.69e-1
SMMOP5 5002.5692e+0 (7.54e-1) -5.6244e+0 (1.90e-1) -1.6625e+1 (2.76e-1) -8.8543e+0 (8.34e-2) -7.8160e+0 (7.27e-2) -7.7468e+0 (3.68e-2) -1.2242e+0 (4.30e-1)
SMMOP6 5006.1969e+0 (8.43e-1) -5.8609e+0 (1.66e-1) -1.7184e+1 (2.63e-1) -8.9711e+0 (1.85e-2) -9.0124e+0 (1.23e-1) -8.9974e+0 (1.92e-2) -4.2785e+0 (5.68e-1)
SMMOP7 5007.0664e+0 (6.89e-1) -5.2678e+0 (2.51e-1) -1.6391e+1 (1.62e-1) -9.8715e+0 (7.42e-2) -9.0454e+0 (1.05e-1) -9.0116e+0 (2.08e-2) -5.0512e+0 (4.30e-1)
SMMOP8 5006.6190e+0 (7.00e-1) -6.0049e+0 (1.41e-1) -1.6305e+1 (1.76e-1) -9.8741e+0 (6.47e-2) -8.9144e+0 (1.18e-1) -9.0075e+0 (1.92e-2) -4.7933e+0 (4.88e-1)
+/−/= 0/24/00/24/00/24/00/24/00/24/00/24/0
Table 4. IGD values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, SparseEA, MSKEA, MP-MMEA, HHC-MMEA and MASR-MMEA on SMMOP1-SMMOP8 with 4 equivalent PSs. The best result in each row is highlighted in bold.
Table 4. IGD values obtained by MO_Ring_PSO_SCD, DN-NSGA-II, SparseEA, MSKEA, MP-MMEA, HHC-MMEA and MASR-MMEA on SMMOP1-SMMOP8 with 4 equivalent PSs. The best result in each row is highlighted in bold.
Problem | D | MP-MMEA | HHC-MMEA | MO_Ring_PSO_SCD | DN-NSGA-II | SparseEA | MSKEA | MASR-MMEA
SMMOP1 | 100 | 2.6535e-3 (4.27e-4) + | 1.7463e-3 (3.47e-4) + | 7.4965e-1 (1.08e-2) - | 2.6018e-3 (8.34e-5) + | 1.1972e-3 (9.51e-5) + | 9.2136e-4 (2.17e-6) + | 5.3183e-3 (1.07e-3)
SMMOP2 | 100 | 2.9874e-3 (3.67e-4) + | 1.6070e-3 (2.37e-4) + | 2.0301e+0 (2.29e-2) - | 1.0816e-1 (3.13e-2) - | 1.4263e-3 (2.11e-4) + | 9.2144e-4 (1.87e-6) + | 5.1625e-3 (1.29e-3)
SMMOP3 | 100 | 4.4374e-3 (1.17e-3) + | 1.7955e-3 (3.54e-4) + | 2.1004e+0 (2.37e-2) - | 1.1100e-1 (3.72e-2) - | 1.7380e-3 (8.83e-4) + | 9.2150e-4 (2.63e-6) + | 1.1246e-2 (5.05e-3)
SMMOP4 | 100 | 2.9300e-3 (4.12e-4) + | 2.0064e-3 (7.11e-4) + | 3.6976e-1 (3.89e-3) - | 3.0921e-3 (1.30e-4) + | 1.2029e-3 (3.63e-5) + | 1.0285e-3 (9.18e-6) + | 5.2911e-3 (4.00e-4)
SMMOP5 | 100 | 2.8852e-3 (3.97e-4) + | 1.9255e-3 (5.36e-4) + | 3.6174e-1 (5.48e-3) - | 6.5908e-3 (6.86e-4) - | 1.2203e-3 (3.17e-5) + | 1.0269e-3 (8.44e-6) + | 5.2092e-3 (4.08e-4)
SMMOP6 | 100 | 3.4504e-3 (6.45e-4) + | 2.8252e-3 (1.43e-3) + | 4.0424e-1 (5.47e-3) - | 2.9585e-3 (1.33e-4) + | 1.2136e-3 (4.85e-5) + | 1.0264e-3 (9.06e-6) + | 8.6154e-3 (1.89e-3)
SMMOP7 | 100 | 3.7606e-3 (7.92e-4) + | 2.1633e-3 (4.27e-4) + | 9.3392e-1 (1.51e-2) - | 8.6335e-3 (1.42e-3) + | 1.6861e-3 (1.90e-4) + | 1.0195e-3 (4.10e-6) + | 1.4345e-2 (6.26e-3)
SMMOP8 | 100 | 4.6457e-3 (1.95e-3) + | 2.9630e-3 (9.50e-4) + | 8.7713e-1 (1.10e-2) - | 8.6800e-3 (1.27e-3) + | 2.2438e-3 (1.06e-3) + | 1.0193e-3 (3.85e-6) + | 1.5657e-2 (8.50e-3)
SMMOP1 | 200 | 3.9339e-3 (4.77e-4) + | 3.3987e-3 (9.29e-4) + | 8.6003e-1 (9.59e-3) - | 2.7808e-3 (7.43e-5) + | 1.5171e-3 (6.07e-4) + | 9.2137e-4 (2.53e-6) + | 6.2139e-3 (8.93e-4)
SMMOP2 | 200 | 4.4837e-3 (6.34e-4) + | 2.6846e-3 (7.31e-4) + | 2.0948e+0 (1.34e-2) - | 2.0950e-1 (3.28e-2) - | 1.7347e-3 (3.43e-4) + | 9.2096e-4 (2.89e-6) + | 6.0765e-3 (1.19e-3)
SMMOP3 | 200 | 9.0102e-3 (2.42e-3) + | 4.2077e-3 (2.15e-3) + | 2.1756e+0 (1.14e-2) - | 2.3633e-1 (3.70e-2) - | 5.6382e-3 (2.31e-3) + | 1.4051e-3 (1.84e-3) + | 1.1487e-2 (5.33e-3)
SMMOP4 | 200 | 3.7343e-3 (4.34e-4) + | 2.9477e-3 (1.07e-3) + | 3.9627e-1 (3.12e-3) - | 3.2798e-3 (1.21e-4) + | 1.3327e-3 (3.93e-5) + | 1.0257e-3 (9.23e-6) + | 6.1050e-3 (2.89e-4)
SMMOP5 | 200 | 3.6821e-3 (4.89e-4) + | 3.3440e-3 (1.23e-3) + | 4.2140e-1 (6.19e-3) - | 8.7662e-3 (6.38e-4) - | 1.3675e-3 (4.87e-5) + | 1.0258e-3 (7.43e-6) + | 6.1994e-3 (3.60e-4)
SMMOP6 | 200 | 5.2639e-3 (1.36e-3) + | 7.4593e-3 (2.86e-3) = | 4.4210e-1 (5.10e-3) - | 3.1730e-3 (1.03e-4) + | 1.9956e-3 (1.21e-3) + | 1.0281e-3 (1.02e-5) + | 8.5517e-3 (2.89e-3)
SMMOP7 | 200 | 6.9965e-3 (3.75e-3) + | 7.2076e-3 (4.09e-3) + | 1.1013e+0 (1.48e-2) - | 1.7260e-2 (1.14e-3) = | 2.1939e-3 (8.24e-4) + | 2.0686e-3 (3.20e-3) + | 1.3756e-2 (6.81e-3)
SMMOP8 | 200 | 1.0094e-2 (4.66e-3) + | 1.0388e-2 (3.85e-3) + | 1.0130e+0 (1.24e-2) - | 1.8020e-2 (1.08e-3) = | 3.6776e-3 (2.01e-3) + | 1.1782e-3 (8.77e-4) + | 1.8444e-2 (9.38e-3)
SMMOP1 | 500 | 1.0399e-2 (2.40e-3) - | 2.7242e-2 (8.19e-3) - | 1.0258e+0 (1.15e-2) - | 4.2043e-3 (3.89e-4) + | 4.6775e-3 (1.89e-3) + | 1.6543e-3 (1.09e-3) + | 7.4726e-3 (4.14e-4)
SMMOP2 | 500 | 9.4700e-3 (2.20e-3) - | 9.6629e-3 (5.28e-3) = | 2.1125e+0 (8.28e-3) - | 3.6411e-1 (2.08e-2) - | 4.9703e-3 (1.54e-3) + | 1.1748e-3 (4.28e-4) + | 7.1695e-3 (7.30e-4)
SMMOP3 | 500 | 2.6246e-2 (6.65e-3) - | 4.2189e-2 (1.54e-2) - | 2.2000e+0 (1.08e-2) - | 3.9087e-1 (2.31e-2) - | 1.8315e-2 (4.15e-3) + | 2.8355e-3 (1.89e-3) + | 2.3844e-2 (4.07e-3)
SMMOP4 | 500 | 7.8683e-3 (8.12e-4) - | 1.5551e-2 (2.53e-3) - | 4.2728e-1 (2.31e-3) - | 5.0755e-3 (5.40e-4) + | 2.7394e-3 (5.78e-4) + | 1.2338e-3 (3.42e-4) + | 7.2379e-3 (8.79e-4)
SMMOP5 | 500 | 8.3333e-3 (1.11e-3) - | 1.6771e-2 (3.62e-3) - | 5.1897e-1 (7.03e-3) - | 1.7423e-2 (7.96e-4) - | 2.5422e-3 (6.70e-4) + | 1.2605e-3 (3.81e-4) + | 7.1165e-3 (7.16e-4)
SMMOP6 | 500 | 1.4926e-2 (3.54e-3) = | 3.8779e-2 (7.77e-3) - | 4.7865e-1 (3.06e-3) - | 5.7934e-3 (7.92e-4) + | 5.7592e-3 (1.74e-3) + | 2.8044e-3 (1.63e-3) + | 1.3862e-2 (3.74e-3)
SMMOP7 | 500 | 3.3410e-2 (7.67e-3) = | 6.9908e-2 (1.84e-2) - | 1.3719e+0 (1.80e-2) - | 3.4293e-2 (1.64e-3) - | 1.7044e-2 (6.47e-3) + | 6.9944e-3 (5.85e-3) + | 2.9276e-2 (9.06e-3)
SMMOP8 | 500 | 3.3146e-2 (5.21e-3) = | 5.7682e-2 (1.53e-2) - | 1.2550e+0 (1.26e-2) - | 3.5284e-2 (2.23e-3) = | 1.5129e-2 (3.58e-3) + | 3.5752e-3 (2.13e-3) + | 3.3033e-2 (9.66e-3)
+/−/= | | 16/5/3 | 15/7/2 | 0/24/0 | 11/10/3 | 24/0/0 | 24/0/0
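Table 4 reports IGD, which averages the distance from each point of a sampled reference Pareto front to its nearest obtained solution, so lower values are better. A minimal sketch of the standard computation (variable names are illustrative, not the paper's implementation):

```python
import math

def igd(reference, solutions):
    """Inverted Generational Distance: mean, over the reference
    points, of the Euclidean distance to the nearest obtained
    solution. Lower is better."""
    total = 0.0
    for r in reference:
        total += min(math.dist(r, s) for s in solutions)
    return total / len(reference)

# Toy 2-objective example: solutions sit exactly on the reference points.
ref = [(0.0, 1.0), (1.0, 0.0)]
sol = [(0.0, 1.0), (1.0, 0.0)]
print(igd(ref, sol))  # 0.0
```

The IGDX values in the neighboring tables apply the same computation in decision space, i.e., to decision vectors against a reference Pareto set, which is why it can separate algorithms that look similar under IGD.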
Table 5. IGDX values obtained by MASR-MMEA-1, MASR-MMEA-2, MASR-MMEA-3 and MASR-MMEA on SMMOP1-SMMOP8 with 4 equivalent PSs. The best result in each row is highlighted in bold.
Problem | D | MASR-MMEA-1 | MASR-MMEA-2 | MASR-MMEA-3 | MASR-MMEA
SMMOP1 | 100 | 2.6484e-1 (2.59e-2) - | 7.0521e-1 (3.76e-1) - | 8.1947e-2 (1.57e-2) - | 2.4986e-2 (1.82e-2)
SMMOP2 | 100 | 3.2261e-1 (2.91e-2) - | 6.9251e-1 (4.35e-1) - | 5.8416e-2 (2.06e-2) - | 2.5019e-2 (2.35e-2)
SMMOP3 | 100 | 7.8345e-1 (9.74e-2) - | 7.7917e-1 (3.90e-1) - | 1.0581e-1 (1.00e-1) = | 2.0734e-1 (3.30e-1)
SMMOP4 | 100 | 3.2745e-1 (2.71e-2) - | 7.2141e-1 (3.34e-1) - | 1.2868e-1 (2.58e-2) - | 7.3115e-2 (5.94e-2)
SMMOP5 | 100 | 3.4247e-1 (5.49e-2) - | 6.8853e-1 (2.74e-1) - | 1.3155e-1 (1.92e-2) - | 6.1045e-2 (3.18e-2)
SMMOP6 | 100 | 6.7943e-1 (9.72e-2) - | 7.9691e-1 (3.35e-1) - | 1.5392e-1 (9.35e-2) = | 1.3192e-1 (1.30e-1)
SMMOP7 | 100 | 6.5162e-1 (1.81e-1) - | 1.0972e+0 (5.86e-1) - | 1.7238e-1 (1.24e-1) = | 1.8609e-1 (2.28e-1)
SMMOP8 | 100 | 8.5197e-1 (4.49e-1) - | 9.9827e-1 (4.24e-1) - | 4.4734e-1 (4.17e-1) = | 3.7461e-1 (2.99e-1)
SMMOP1 | 200 | 5.4644e-1 (4.85e-2) - | 1.8383e+0 (3.85e-1) - | 2.3445e-1 (5.23e-2) - | 7.6110e-2 (7.70e-2)
SMMOP2 | 200 | 5.7056e-1 (8.00e-2) - | 1.8008e+0 (4.61e-1) - | 1.8459e-1 (3.58e-2) - | 5.0990e-2 (5.03e-2)
SMMOP3 | 200 | 1.4418e+0 (7.54e-2) - | 1.9236e+0 (5.31e-1) - | 4.8528e-1 (2.17e-1) = | 6.5696e-1 (5.40e-1)
SMMOP4 | 200 | 6.8855e-1 (5.23e-2) - | 1.7496e+0 (3.56e-1) - | 2.8703e-1 (7.15e-2) - | 1.9530e-1 (1.53e-1)
SMMOP5 | 200 | 7.0301e-1 (5.95e-2) - | 1.6722e+0 (3.28e-1) - | 3.0333e-1 (7.99e-2) - | 2.1807e-1 (1.72e-1)
SMMOP6 | 200 | 1.4372e+0 (6.44e-2) - | 2.0762e+0 (4.79e-1) - | 5.1656e-1 (1.85e-1) = | 5.4537e-1 (3.44e-1)
SMMOP7 | 200 | 1.3769e+0 (2.00e-1) - | 2.0887e+0 (5.38e-1) - | 4.8382e-1 (3.04e-1) + | 5.9888e-1 (3.27e-1)
SMMOP8 | 200 | 1.4868e+0 (2.08e-1) - | 2.2954e+0 (4.27e-1) - | 6.7907e-1 (3.80e-1) + | 1.0496e+0 (4.79e-1)
SMMOP1 | 500 | 1.4614e+0 (1.07e-1) - | 4.0057e+0 (5.51e-1) - | 7.9550e-1 (1.89e-1) = | 6.9094e-1 (2.22e-1)
SMMOP2 | 500 | 1.3062e+0 (8.22e-2) - | 4.1894e+0 (5.39e-1) - | 8.3999e-1 (3.14e-1) = | 7.6096e-1 (4.71e-1)
SMMOP3 | 500 | 2.6822e+0 (6.43e-2) - | 4.3460e+0 (6.41e-1) - | 2.0630e+0 (4.61e-1) = | 2.0137e+0 (2.94e-1)
SMMOP4 | 500 | 1.7673e+0 (1.32e-1) - | 4.0802e+0 (4.07e-1) - | 1.1278e+0 (3.86e-1) = | 1.1266e+0 (5.99e-1)
SMMOP5 | 500 | 1.9304e+0 (2.03e-1) - | 3.9643e+0 (4.56e-1) - | 1.0523e+0 (3.14e-1) = | 1.1532e+0 (5.25e-1)
SMMOP6 | 500 | 3.0225e+0 (2.48e-1) - | 4.2884e+0 (4.92e-1) - | 2.1156e+0 (2.75e-1) = | 2.2232e+0 (4.33e-1)
SMMOP7 | 500 | 2.7967e+0 (2.69e-1) - | 4.4019e+0 (5.62e-1) - | 2.5155e+0 (9.58e-1) = | 2.1322e+0 (5.08e-1)
SMMOP8 | 500 | 3.0905e+0 (1.42e-1) - | 4.9335e+0 (4.10e-1) - | 2.6766e+0 (8.57e-1) = | 2.7711e+0 (5.83e-1)
+/−/= | | 0/24/0 | 0/24/0 | 2/8/14
Table 6. Datasets of three sparse LMOPs in real-world applications.
Critical node detection problem | Type of variables | No. of variables | Dataset | No. of nodes | No. of edges
CN1 | Binary | 102 | Hollywood Film Music 4 | 102 | 192
CN2 | Binary | 234 | Graph Drawing Contests Data (A99) 4 | 234 | 154
CN3 | Binary | 311 | Graph Drawing Contests Data (A01) 4 | 311 | 640
CN4 | Binary | 452 | Graph Drawing Contests Data (C97) 4 | 452 | 460
Instance selection problem | Type of variables | No. of variables | Dataset | No. of samples | No. of features | No. of classes
IS1 | Binary | 862 | Fourclass 1 | 862 | | 3
IS2 | Binary | 4177 | Abalone 2 | 4177 | | 6
IS3 | Binary | 11,055 | phishing 1 | 11,055 | | 9
Community detection problem | Type of variables | No. of variables | Dataset | No. of nodes | No. of edges
CD1 | Binary | 105 | Polbook 3 | 105 | 441
CD2 | Binary | 115 | Football 3 | 115 | 614
CD3 | Binary | 1133 | Email 3 | 1133 | 5451
1 https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html (accessed on 27 January 2026); 2 https://archive.ics.uci.edu/ml (accessed on 27 January 2026); 3 http://deim.urv.cat/~alexandre.arenas/data/welcome.htm (accessed on 27 January 2026); 4 Sparse_CN test suite of Real-world MOPs in PlatEMO platform.
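In all three applications the decision variables are binary (see the "Type of variables" column): each bit marks whether a node or sample is selected, so one decision variable exists per node or sample, and solution sparsity is simply the fraction of nonzero bits. A toy illustration for an instance-selection mask (the values are made up, not from any dataset above):

```python
# One candidate solution for instance selection: bit i = 1 keeps sample i.
mask = [1, 0, 0, 1, 0, 0, 0, 0]

# Indices of the retained samples and the sparsity of the solution.
selected = [i for i, bit in enumerate(mask) if bit]
sparsity = sum(mask) / len(mask)
print(selected, sparsity)  # [0, 3] 0.25
```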
Table 7. HV values obtained by MP-MMEA, HHC-MMEA and MASR-MMEA on real-world applications with 4 equivalent PSs. The best result in each row is highlighted in bold.
Problem | D | HHC-MMEA | MP-MMEA | MASR-MMEA
Sparse_CN1 | 102 | 8.9540e-1 (1.28e-2) - | 8.9736e-1 (1.52e-2) - | 9.1859e-1 (2.78e-3)
Sparse_CN2 | 234 | 9.5357e-1 (8.03e-3) - | 9.0837e-1 (1.23e-2) - | 9.5215e-1 (4.58e-3)
Sparse_CN3 | 311 | 8.5266e-1 (2.02e-1) - | 8.7194e-1 (2.08e-2) - | 8.8248e-1 (1.32e-2)
Sparse_CN4 | 452 | 9.9382e-1 (3.69e-3) - | 9.6860e-1 (1.58e-2) - | 9.8596e-1 (1.47e-4)
Sparse_CD1 | 105 | 7.7298e-1 (8.80e-3) = | 7.6659e-1 (9.40e-3) - | 7.7362e-1 (1.17e-2)
Sparse_CD2 | 115 | 7.6239e-1 (2.29e-3) - | 7.5634e-1 (1.11e-1) - | 7.7053e-1 (2.22e-3)
Sparse_CD3 | 1133 | 6.7859e-1 (1.03e-2) = | 6.5817e-1 (1.49e-2) - | 6.8078e-1 (2.07e-2)
Sparse_IS1 | 862 | 8.5012e-1 (1.32e-2) - | 7.3625e-1 (2.95e-1) = | 8.6012e-1 (8.63e-3)
Sparse_IS2 | 4177 | 7.0843e-1 (8.36e-3) + | 6.9266e-1 (1.14e-1) - | 7.0238e-1 (5.23e-3)
Sparse_IS3 | 11,055 | 9.4989e-1 (1.83e-3) - | 7.4097e-1 (1.67e-3) - | 9.5075e-1 (2.74e-3)
+/−/= | | 1/7/2 | 0/9/1
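Table 7 reports HV (hypervolume), the volume of objective space dominated by the obtained nondominated set and bounded above by a reference point, so higher values are better. For two minimization objectives this reduces to a sweep that accumulates rectangles; a minimal sketch, assuming the input points are mutually nondominated (this is an illustration of the metric, not the paper's implementation):

```python
def hv_2d(points, ref):
    """Hypervolume of a set of 2-objective minimization points
    w.r.t. reference point `ref`: sweep points by ascending f1 and
    add the rectangle each one contributes below the previous f2."""
    area = 0.0
    prev_f2 = ref[1]
    for f1, f2 in sorted(points):
        if f1 >= ref[0] or f2 >= prev_f2:
            continue  # contributes nothing inside the remaining box
        area += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area

# Two nondominated points inside the unit box with reference (1, 1).
print(hv_2d([(0.25, 0.75), (0.75, 0.25)], (1.0, 1.0)))  # 0.3125
```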
Chen, B.; Sun, Y.; Hua, B. Large-Scale Sparse Multimodal Multiobjective Optimization via Multi-Stage Search and RL-Assisted Environmental Selection. Electronics 2026, 15, 616. https://doi.org/10.3390/electronics15030616
