1. Introduction
Many real-world problems in engineering and other research areas require multi-objective optimization, where it is necessary to find a set of solutions in the search space that are well-distributed along the Pareto-optimal front. Generally, in this type of problem the computation of possible solutions must consider the existence of two or more conflicting objective functions. A multi-objective optimization problem can be defined as follows:
min F(x) = (f_1(x), f_2(x), …, f_m(x)), subject to x ∈ Ω, (1)

where each f_i is a real-valued scalar function, F(x) is the set of objective or cost functions that produce an m-dimensional vector in the m-objective space when evaluated, x is an n-dimensional vector in the search space R^n, and Ω is the set of all feasible solutions of Equation (1).
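Under this formulation (assuming minimization), a solution dominates another when it is no worse in every objective and strictly better in at least one; the non-dominated members of a set form its Pareto approximation. A minimal sketch (the function names are ours, for illustration only):

```python
def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy (minimization)."""
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def non_dominated(front):
    """Filter a list of objective vectors down to its non-dominated subset."""
    return [f for f in front
            if not any(dominates(g, f) for g in front if g is not f)]
```

For example, in a two-objective problem, (1, 2) dominates (2, 2) but is incomparable with (2, 1); both survive the filter.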
In this type of problem, identifying the set of best possible solutions can in many cases be highly complicated or impossible. Rather than looking for globally optimal solutions to a multi-objective problem, it is possible to instead seek satisfactory solutions that can be obtained in adequate time, mainly by finding the Pareto-optimal set (PS) of solutions. The mapping from the PS to the objective space is the Pareto front (PF). Examples of classical optimization techniques adapted to multi-objective problems are the Weighted Sum Method [1], the ε-constraint method [2], goal programming [3], and lexicographic ordering [4], among others.
Multi-objective optimization evolutionary algorithms (MOEAs) are highly suitable for solving problems involving multiple objectives because they can generate a set of Pareto-approximate solutions in a single run [
5]. In recent decades, many multi-objective optimization algorithms have been derived from classical single-objective metaheuristics, showing efficiency and effectiveness on various complex problems. Single-objective metaheuristics are optimization techniques that focus on finding a single optimal solution. Common examples include genetic algorithms (GA) [
6], particle swarm optimization (PSO) [
7], ant colony optimization (ACO) [
8], and simulated annealing (SA) [
9].
Several techniques based on single-objective metaheuristics have been developed to address multi-objective problems. Examples of the most outstanding options include the Non-Dominated Sorting Genetic Algorithm (NSGA-II), which extends GAs to handle multiple objectives using a non-dominated sorting scheme and population diversity [
10]; the Multi-objective Evolutionary Algorithm Based on Decomposition (MOEA/D), which splits a multi-objective problem into several single-objective problems [
11]; SPEA2, which uses a list of non-dominated solutions and assigns strengths to each solution to guide the search process [
12]; Multi-objective PSO (MOPSO), which is an extension of PSO that allows it to handle multiple objectives while maintaining an archive of non-dominated solutions [
13]; Multi-Objective Simulated Annealing (MOSA), which extends simulated annealing to handle multiple objectives while maintaining a balance between exploration and exploitation [
14]; the Multi-objective Ant Lion Optimizer (MOALO), which expands the Ant Lion Optimizer by applying a repository to store non-dominated solutions in the Pareto set [
15]; the Multi-objective Multi-Verse Optimizer (MOMVO), which builds on the MVO to compare and optimize test and practical problems [
16]; and the Multi-objective ACO (MOACO), which adapts the ACO for multi-objective optimization using multiple populations to improve computational efficiency [
17,
18]. MOACO has also been employed to address multi-objective problems in airline crew turnover [
19] and to improve the supply chain configurations [
20].
Another work that is an extension of a recent metaheuristic algorithm is the Multi-objective Salp Swarm Algorithm (MSSA) [
21]. Inspired by the swarming behavior of sea salps, the MSSA splits the population into a leader and followers, using an external file to store the non-dominated solutions. This approach shows adequate convergence and coverage of the PS. The interplay of several PSO algorithms for simultaneous optimization of single objectives in a multi-objective problem (MPMO) is described in [
22] using multiple populations. In discrete problems, the MPMOGA is a new algorithm inspired by the GA which uses multiple populations to solve problems with multiple objectives. It was used in [
23] to address the job-shop scheduling problems, obtaining satisfactory results. Multi-Objective Heat Transfer Search (MOHTS) based on Heat Transfer Search was proposed in [
24] to optimize structural multi-objective problems, obtaining better results compared with other algorithms based on ant colonies and symbolic organisms. In related research, a multi-objective version of the symbolic organism algorithm was presented in [
25] for optimal reinforcement design.
Multi-Objective Teaching–Learning-based Optimization (MOTLBO) is an extension of the Teaching–Learning-based Optimization (TLBO) algorithm. This approach uses two main phases: the teacher phase, where solutions are improved based on the best individual, and the learner phase, where solutions are optimized through knowledge sharing between individuals [
26]. Multi-Objective Thermal Exchange Optimization (MOTEO), inspired by the principles of thermodynamics and heat exchange, seeks to solve optimization problems by considering multiple criteria simultaneously [
27]. Multi-objective Plasma Generation Optimization (MOPGO) is inspired by the generation and behavior of plasmas, where charged particles interact and move towards lower energy states [
28]. The Multi-Objective Crystal Structure Algorithm (MOCSA) is motivated by the formation and organization of crystalline structures; solutions resemble atoms that organize themselves into configurations that minimize the energy of the system [
29]. The Multi-Objective Forest Optimization Algorithm (MOFOA) follows the dynamics and ecology of forests; solutions resemble trees competing for resources, allowing the solutions (trees) to evolve and adapt in a competitive and cooperative environment [
30]. The Competitive Mechanism Integrated Multi-Objective Whale Optimization Algorithm with Differential Evolution (CMI-MOWOA-DE) combines whale social behavior and differential evolution; the solutions simulate whale movement and hunting strategy, while differential evolution introduces variation and diversification to achieve a balance between multiple conflicting objectives. The Multi-Objective Harris Hawks Optimizer (MOHHO) is an extension of the Harris Hawks Optimizer (HHO) algorithm, which evokes the cooperative hunting strategies of Harris Hawks [
31]. The Marine Predators Algorithm (MPA) emulates the behavior of species such as sharks and dolphins for multi-objective optimization, using search tactics and solution space exploitation to optimize multiple objectives [
32]. The Multi-Objective Sine–Cosine Algorithm (MOSCA) is a variant of the Sine–Cosine Algorithm (SCA) that uses sine and cosine functions to guide the exploration and exploitation of the search space, dynamically adjusting the positions of candidate solutions to maintain diversity and ensure convergence to the PF [
33]. The Multi-objective Atomic Orbital Search (MAOS) algorithm is based on the concept of atomic orbitals from quantum chemistry; the potential solutions are treated as electrons in different orbitals, and the search process resembles the movement of these electrons to reach lower energy configurations [
34]. In the branch-and-bound framework for continuous global multi-objective optimization, the search space is recursively divided into smaller subregions, then lower and upper bounds are computed for the objective functions in these subregions. Subregions that cannot contain optimal solutions are discarded, which reduces the overall search space. Multi-Objective Differential Evolution (MODE) uses a population of candidate solutions that evolve through mutation operators. The multi-objective optimization method based on adaptive parameter harmony search algorithm simulates the improvisation process of musicians searching for the best harmony, dynamically adapting its parameters using memory and tuning operators to explore new solutions and preserve the best ones [
35]. The Guided Population Archive Whale Optimization Algorithm (GPA-WOA) is a variant of the Whale Optimization Algorithm (WOA) that simulates the hunting behavior of humpback whales and uses guides or benchmark solutions to direct the search and improve convergence to the PF; a population file is dynamically updated to preserve diversity and ensure that solutions are optimal and well-distributed [
36]. The quantum-inspired Decomposition-based Quantum Salp Swarm Algorithm (DQSSA) combines quantum mechanical principles with the swarming behavior of salps to divide multi-objective problems into more tractable subproblems, allowing a set of well-distributed optimal solutions to be found on the PF [
37].
The above works are just a sample of the many single-objective algorithms that have recently been extended in various ways to deal with multi-objective problems. Single-objective optimization algorithms inspired by cellular automata are practical and have competitive results on these types of problems compared to more recent metaheuristics.
For instance, Cellular Particle Swarm Optimization (CPSO) is a variant of the classical PSO algorithm that organizes particles into a cellular lattice structure in which each particle only interacts with its nearest neighbors, thereby improving exploration and reducing the probability of premature convergence [
38]. Island Cellular Model Differential Evolution combines the principles of Differential Evolution (DE) with a distributed population structure; a cellular scheme divides the population into subpopulations, which promotes genetic diversity, reduces premature convergence, and enhances exploration capability [
39]. The Continuous-State Cellular Automata Algorithm (CCAA) is inspired by cellular automata but adapted to work with continuous rather than discrete variables. In this algorithm, individuals (or smart-cells) are organized in a spatial grid; each cell updates its state (candidate solution) based on the solutions of its local neighbors. Continuous states allow for finer exploration of the search space, while the restricted neighborhood structure favors a balance between local exploitation and global exploration [
40]. The Cellular Learning Automata and Reinforcement Learning (CLARL) approach combines the principles of learning automata and reinforcement learning. In this method, learning automata are organized in a cellular mesh, where each automaton represents a potential solution and adapts its behavior through local interactions and reward-based feedback. This scheme allows for learning strategies that improve the system’s dynamic adaptability [
41]. The Reversible Elementary Cellular Automata Algorithm (RECAA) uses reversible rules, meaning that the system can return to previous states without losing information. In this algorithm, each potential solution follows simple local rules to update its state but with the property of reversibility, enabling a more controlled and efficient search space exploration [
42].
However, only a few works have applied the concept of cellular automata for general multi-objective optimization. One of the most representative examples is the Cellular Ant Algorithm (CAA) for multi-objective optimization, which combines the ant colony structure with a cellular mesh in which ants only interact with their close neighbors. This mechanism simultaneously optimizes several objective functions, achieving balanced solutions to complex problems with multiple criteria [
43]. Multi-objective Cellular Automata Optimization is another approach that applies cellular automata. Potential solutions are cells in a network that evolve based on local rules and interaction with their neighbors. This approach seeks to reach a balance by facilitating the identification of solutions [
44]. Cellular Multi-objective Particle Swarm Optimization (CMPSO) is a variant of PSO in which particles are arranged in a cellular structure and only interact with their close neighbors. This promotes solution diversity by limiting global influences and encourages better exploration, and it is beneficial in applications that require simultaneous optimization of several criteria [
45]. Cellular Teaching–Learning-Based Optimization (CTLBO) is a teaching–learning-based approach to optimization. In this method, solutions are organized in a cellular structure, where each cell represents an individual who learns from its neighbors and a virtual teacher who guides the process. This approach enhances the algorithm’s ability to adapt to dynamic changes for multiple objectives that may vary over time [
46].
Following this trend, a recent single-objective optimization algorithm is the Majority–minority Cellular Automata Algorithm (MmCAA), which was tested on several test problems in multiple dimensions and on various engineering applications, obtaining satisfactory results against other well-recognized algorithms [
47].
This paper presents a multi-objective version of this algorithm called MOMmCAA. This algorithm is inspired by the local behavior of cellular automata, particularly the majority and minority rules, which are intermixed and able to generate complex behaviors in order to perform the tasks of exploration and exploitation in the search space.
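For intuition only, the classical binary majority and minority rules that inspire these operators can be sketched as follows (a toy illustration; the actual MmCAA updates act on real-valued smart-cells as defined in [47], and the function name is ours):

```python
def step(cells, rule="majority"):
    """One synchronous update of a binary ring CA with radius-1 neighborhoods.

    Under the majority rule, each cell adopts the most frequent state in its
    neighborhood; the minority rule adopts the least frequent state instead.
    """
    n = len(cells)
    out = []
    for i in range(n):
        ones = cells[i - 1] + cells[i] + cells[(i + 1) % n]  # neighborhood sum
        maj = 1 if ones >= 2 else 0          # state of the local majority
        out.append(maj if rule == "majority" else 1 - maj)
    return out
```

The majority rule drives configurations toward homogeneous consensus blocks (exploitation), while the minority rule perturbs them (exploration); intermixing the two is what produces the complex dynamics exploited by the algorithm.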
The problem addressed in this work is the optimization of multi-objective problems using a modification of the MmCAA to obtain an adequate approximation of the PS. Although multi-objective algorithms are continuously proposed in the specialized literature, they have yet to fully exploit the advantages offered by the different cellular automata rules, such as the diversity and richness of their dynamic behaviors and their easy implementation. Thus, this work aims to test and demonstrate the feasibility of modifying the MmCAA to deal with multi-objective problems in a manner comparable to current well-recognized algorithms for this task. The manuscript’s originality lies in the fact that it is the first to propose an algorithm for multi-objective optimization inspired by cellular automata using majority and minority rules, complemented by a repository that controls the density of solutions on the PF.
To test the performance of the MOMmCAA, we used the DTLZ benchmark, ten quadratic problems, and ten CEC2020 problems. The proposed algorithm was also tested on two practical engineering problems, obtaining satisfactory results. In these cases, five other algorithms were also considered for comparison: Multi-Objective Lightning Attachment Procedure Optimization (MOLAPO) [
48], Grid Search (GS) [
49], Multi-Objective Particle Swarm Optimization (MOPSO) [
13], the Non-dominated Sorting Genetic Algorithm (NSGA-II) [
10], and the Multi-objective Nelder–Mead Algorithm (MNMA) [
50].
Non-parametric Wilcoxon statistical tests were performed to show the statistical significance of the experiments. The results indicate that the proposed algorithm ranks among the best with respect to the other methods used in this work.
The rest of this article is organized as follows.
Section 2 presents the details of the Multi-Objective Majority–minority Cellular Automata Algorithm (MOMmCAA);
Section 3 presents the results of our experiments on various test benches (DTLZ benchmark, ten quadratic problems, and ten CEC2020 problems), providing a statistical comparison of the MOMmCAA through the Wilcoxon test that relates it to other multi-objective algorithms recognized for their performance;
Section 4 describes the application of the MOMmCAA to two practical engineering problems (design of a four-bar truss and a disk brake); finally,
Section 5 provides the paper’s conclusions.
3. Computational Experiments Comparing MOMmCAA to Other Algorithms
MOLAPO, GS, MOPSO, NSGA-II, MOMVO, and MNMA were compared to MOMmCAA in order to identify the best performance in calculating Pareto-optimal solutions. The initial parameters of all described algorithms are summarized in
Table 1. Each experiment employed 50 PS solutions and a maximum of 1000 iterations. The proposed algorithm was tested in 29 diverse case studies, including 27 unconstrained and constrained mathematical problems and two real-world engineering design problems.
The original Matlab implementations of these algorithms were taken directly from the web addresses indicated in the reference articles. The Matlab code of the MOMmCAA can be downloaded from Github using the link
https://github.com/juanseck/MOMmCAA (accessed on 2 September 2024). The MOMmCAA and the other algorithms were executed in Matlab 2015a on a PC with an Intel Xeon CPU and 64 GB of RAM running the macOS Sonoma operating system. Thirty independent runs were made for each algorithm on every benchmark function. A number of different metrics were used to compare the results of the algorithms, as described below.
Hypervolume (HV): The hypervolume metric was first introduced by Ulrich et al. to assess diversity in both the decision space and the objective space [55]. The HV of a set of solutions measures the size of the portion of the objective space dominated by those solutions as a group. In general, HV is favored because it captures both the closeness of the solutions to the optimal set and (to some extent) the distribution of solutions across the objective space in a single scalar. The HV value measures both convergence and diversity, and can be calculated using the equation

HV = volume( ∪_{s ∈ PF} v_s ),

where v_s refers to the hypercube bounded by a solution s in the obtained PF and a reference point. A larger HV value indicates a better approximation of the PF.
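For two objectives, the union of hypercubes reduces to a sum of disjoint rectangle areas against a reference point; a minimal sketch assuming minimization and a reference point dominated by no solution (the function name is ours):

```python
def hv_2d(front, ref):
    """Hypervolume of a 2-objective front (minimization) w.r.t. reference point ref.

    Sorting by the first objective lets each non-dominated point contribute a
    rectangle whose height runs from the previous point's f2 down to its own f2.
    """
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        if f2 < prev_f2:                      # dominated points are skipped
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For instance, the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4) dominates an area of 6.0. Exact HV computation in more than two objectives requires dedicated algorithms (e.g., the WFG algorithm), whose cost grows quickly with the number of objectives.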
Contribution (C): The Contribution metric counts the number of PS points that each algorithm contributes to the combined solution of all algorithms. This metric is an extension of the Purity metric [56]. For two approximation Pareto sets A and B, where A dominates B, the C metric assigns A a higher measure than B.

For k MOEAs applied to a problem, let S_i be the set of non-dominated solutions obtained by the i-th MOEA, i = 1, …, k. The union of all these sets is R = S_1 ∪ S_2 ∪ … ∪ S_k. The set N of non-dominated solutions is then calculated from R. Let n_i be the number of solutions in N obtained by the i-th MOEA; the C metric of the i-th MOEA is defined as

C_i = n_i / |N|.

The value lies between 0 and 1, with a value nearer to 1 indicating better performance.
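Under the definition above, the C metric can be computed as follows (a minimal sketch assuming minimization; the function and helper names are ours):

```python
def contribution(fronts):
    """C_i for each MOEA: the fraction of the combined non-dominated set N
    contributed by the i-th front (minimization assumed)."""
    def dominates(a, b):
        # Pareto dominance for distinct vectors: no worse everywhere, not equal
        return a != b and all(x <= y for x, y in zip(a, b))

    # R is the union of all fronts, tagged with the index of the contributing MOEA
    R = [(i, tuple(s)) for i, front in enumerate(fronts) for s in front]
    # N keeps only the solutions of R that no other solution in R dominates
    N = [(i, s) for i, s in R if not any(dominates(t, s) for _, t in R)]
    counts = [0] * len(fronts)
    for i, _ in N:
        counts[i] += 1
    return [c / len(N) for c in counts]
```

For example, if MOEA 0 returns {(1, 3), (2, 2)} and MOEA 1 returns {(3, 1), (2, 3)}, then (2, 3) is dominated by (2, 2), so N has three points and the C values are 2/3 and 1/3.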
Epsilon Indicator (EI):
The Epsilon Indicator was defined in [57]. It measures the minimum value of the scalar ε required to make the Pareto front (PF) dominated by the approximation set S after each objective value of S is relaxed by the factor ε. Epsilon values fall within the range [1, ∞); a value near 1 indicates that S is a close fit to the PF.
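The multiplicative form of this indicator, assuming minimization with strictly positive objective values, can be sketched as follows (the function name is ours):

```python
def epsilon_indicator(S, PF):
    """Multiplicative epsilon indicator (minimization, strictly positive
    objectives): the smallest factor eps such that every PF point z is
    covered by some s in S with s_k <= eps * z_k in all objectives k."""
    return max(min(max(s_k / z_k for s_k, z_k in zip(s, z)) for s in S)
               for z in PF)
```

A perfect approximation (S equal to the PF) yields exactly 1; an approximation whose every objective is twice the optimum yields 2.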
3.1. Benchmark Instances
A total of 27 benchmark instances with complicated characteristics were used to compare the performance of the proposed MOMmCAA: DTLZ1–DTLZ7, ten quadratic problems, and ten CEC2020 test instances. These problems exhibit various characteristics, such as convex, concave, mixed, disconnected, or degenerated PFs and multimodal, biased, deceptive, and nonlinear variable PSs.
For each instance, the compared algorithms are ranked according to the performance metrics, with the ranks shown in square brackets. The mean rank (MR) of each algorithm on each instance is also presented in the tables. As a result of the Wilcoxon rank-sum test at the chosen significance level, a result labeled + denotes that the compared algorithm outperforms the MOMmCAA; in contrast, − means that the MOMmCAA performs better than the compared algorithm, while ≈ means that there is no statistically significant difference between the MOMmCAA and the compared algorithm. The data in orange in every table show the best mean metric values yielded by the algorithms for each instance over 30 independent runs.
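The per-instance +/−/≈ labels come from a Wilcoxon rank-sum test over the 30 runs per algorithm; a minimal sketch of the test statistic (normal approximation, no tie handling — a simplification of what statistics packages implement; the function name is ours):

```python
import math

def rank_sum_z(x, y):
    """Z statistic of the Wilcoxon rank-sum test comparing samples x and y
    (normal approximation; assumes no tied values, for illustration only)."""
    pooled = sorted(x + y)
    rank = {v: r + 1 for r, v in enumerate(pooled)}
    W = sum(rank[v] for v in x)                  # rank sum of the first sample
    n, m = len(x), len(y)
    mu = n * (n + m + 1) / 2                     # mean of W under H0
    sigma = math.sqrt(n * m * (n + m + 1) / 12)  # std of W under H0
    return (W - mu) / sigma
```

Comparing |z| against 1.96 corresponds to a two-sided test at the 0.05 level; the sign of z (together with the mean metric values) determines whether the result is labeled +, −, or ≈.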
3.2. DTLZ Instances
As shown in
Table 2, the MOMmCAA obtains significantly better HV values than MOLAPO, GS, MOPSO, NSGA-II, and MNMA for four, four, one, five, and seven out of the seven instances, respectively. Regarding the overall mean rankings, the MOMmCAA obtains the second-best mean rank value, below MOPSO and followed by GS, NSGA-II, MOLAPO, and MNMA. The MOMmCAA performs poorly on the DTLZ1 and DTLZ3 test instances. In summary, the MOMmCAA is superior to four of the five compared MOEAs on this metric.
Table 3 shows that the MOMmCAA achieves seven, seven, six, six, and seven better
C metric values than MOLAPO, GS, MOPSO, NSGA-II, and MNMA, respectively. This indicates the quality of the solutions obtained by the MOMmCAA.
Table 4 summarizes the overall performance of six algorithms in terms of EI metric values. The MOMmCAA yields significantly better
EI values than MOLAPO, GS, MOPSO, NSGA-II, and MNMA for six, four, one, seven, and five out of the seven instances, respectively. Overall, the EI statistics are similar to those for HV.
Figure 4 plots the representative PFs obtained by the six comparison MOEAs. In summary, the MOMmCAA shows competitive performance on the DTLZ benchmark.
3.3. Quadratic Instances
The Quadratics test set is a randomly generated test set described in [
50]. The objective functions are all of the form f(x) = x^T A x + b^T x + c, where the components of A, b, and c are random numbers drawn from a fixed range. A is not a symmetric matrix, and the test set is non-convex.
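Such an instance can be generated as follows (a sketch; the uniform [-1, 1] sampling range is our assumption for illustration, since the exact range used in the test set is not reproduced here):

```python
import numpy as np

def random_quadratic(n, rng):
    """One quadratic objective f(x) = x^T A x + b^T x + c in n variables.

    The uniform [-1, 1] sampling range is an assumption; A is deliberately
    left non-symmetric, matching the test-set description.
    """
    A = rng.uniform(-1.0, 1.0, (n, n))
    b = rng.uniform(-1.0, 1.0, n)
    c = rng.uniform(-1.0, 1.0)
    return lambda x: float(x @ A @ x + b @ x + c)

# A two-objective instance in three variables
rng = np.random.default_rng(7)
f1, f2 = random_quadratic(3, rng), random_quadratic(3, rng)
```

Because A is non-symmetric and its entries can be negative, the resulting objectives are generally non-convex, which is what makes this test set challenging.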
Table 5,
Table 6 and
Table 7 expose the results of the metric values obtained by the algorithms over the ten quadratic problems.
Table 5 shows that the MOMmCAA obtains significantly better
HV values than MOLAPO, GS, MOPSO, NSGA-II, and MNMA for six, seven, two, eight, and seven out of the ten instances, respectively. Regarding the overall mean rankings, the MOMmCAA obtains the second-best mean rank value after MOPSO, followed by the other algorithms. The MOMmCAA demonstrates performance that is statistically equivalent to MOPSO on the other three instances.
Table 6 shows that the MOMmCAA achieves ten, ten, five, eight, and seven better
C metric values than MOLAPO, GS, MOPSO, NSGA-II, and MNMA, respectively. This indicates the quality of the solutions obtained by the MOMmCAA.
Table 7 depicts the overall performance of the six algorithms in terms of their EI metric values. The MOMmCAA yields significantly better
EI values than MOLAPO, GS, MOPSO, NSGA-II, and MNMA for nine, seven, six, eight, and nine out of the ten instances, respectively.
Figure 5 shows representative Pareto fronts (PFs) obtained by the six comparison MOEAs. In summary, the MOMmCAA shows competitive performance on the Quadratic benchmark.
3.4. CEC2020 Instances
Table 8,
Table 9 and
Table 10 present the results of the metric values obtained by the algorithms on ten benchmark CEC2020 problems.
Table 8 shows that the MOMmCAA obtains significantly better
HV values than MOLAPO, GS, MOPSO, NSGA-II, and MNMA for eight, nine, four, eight, and seven out of the ten instances, respectively. In the case of MOPSO, there are six results with no significant difference. Concerning the overall mean rankings, the MOMmCAA obtains the best mean rank value. The MOMmCAA demonstrates poor performance on the MMF-2 and MMF-7 test instances. In summary, the MOMmCAA is superior to all the other MOEAs in terms of this metric.
Table 9 shows that the MOMmCAA achieves ten, ten, seven, eight, and six better
C metric values than MOLAPO, GS, MOPSO, NSGA-II, and MNMA, respectively. In the case of MNMA, there are four results with the worst significant difference.
Table 10 summarizes the overall performance of the six algorithms in terms of their EI metric values. The MOMmCAA yields significantly better
EI values than MOLAPO, GS, MOPSO, NSGA-II, and MNMA for ten, ten, six, ten, and eight out of the ten instances, respectively.
Figure 6 depicts the representative Pareto fronts (PFs) obtained by the six comparison MOEAs. In summary, the MOMmCAA shows competitive behavior on the CEC2020 benchmark.
5. Conclusions and Further Work
This paper presents a new multi-objective optimization algorithm called the MOMmCAA, inspired by the neighborhood and local interaction rules of majority and minority cellular automata. The randomness, concurrency, and information exchange generated among the smart-cells by applying different rules produce an appropriate balance between exploration and exploitation actions.
Comparative computational testing was carried out on 27 test functions with various characteristics, including convex, concave, mixed, disconnected, and degenerated PFs. These test functions were used to challenge the MOMmCAA, and its performance was compared against five other algorithms recognized for their efficiency. The experiments showed satisfactory performance on the part of the MOMmCAA.
In addition, two multi-objective engineering problems from the recent literature were used to test the MOMmCAA against the results obtained by the other algorithms. The MOMmCAA again demonstrated its high quality in finding solutions to these problems, proving its competitiveness against other recent metaheuristics.
Compared to classical techniques, the MOMmCAA provides improved flexibility. It can explore large search spaces and adapt to problems with multiple objectives and complex constraints. These features make the MOMmCAA especially useful for solving multi-objective optimization problems, where traditional methods may be inefficient due to assumptions about the problem’s nature, the need for derivatives, or the complexity of the objective functions.
As further work, the MOMmCAA remains to be proven effective on real-world problems such as power grid design, vehicle routing optimization, industrial systems control, and feature selection in bioinformatics. Its ability to balance multiple conflicting criteria makes it suitable for such multi-objective situations.
However, the MOMmCAA has limitations in scalability for high-dimensional problems, where managing the repository of non-dominated solutions and correctly selecting the algorithm parameters are critical aspects affecting its performance. Its computational cost can also be high when dealing with complex problems, especially when requiring many iterations or accurate PF estimation.
These limitations provide opportunities for future algorithm refinement, including testing improvements with fewer parameters, dynamic parameter control, or other solution control mechanisms such as niching strategies, clustering, rank dominance, or PF maintenance methods. The richness of cellular automata behaviors also presents new opportunities for proposing new multi-objective optimization algorithms, such as the utilization of periodic, chaotic, universal, complex, or surjective and reversible cellular automata.