Article

Solving Multi-Objective Optimal Control Problems Using a Hybrid Method of Genetic Algorithm and Simple Cell Mapping

by
Saeed Mirzajani
1,*,
Gholam Hosein Askarirobati
2 and
Majid Roohi
3,*
1
Department of Basic Sciences, Technical and Vocational University (TVU), Tehran 14356-61137, Iran
2
Department of Mathematics, Payame Noor University, Tehran 19395-4697, Iran
3
Department of Mathematics, Aarhus University, 8000 Aarhus, Denmark
*
Authors to whom correspondence should be addressed.
AppliedMath 2025, 5(4), 165; https://doi.org/10.3390/appliedmath5040165
Submission received: 30 September 2025 / Revised: 3 November 2025 / Accepted: 12 November 2025 / Published: 1 December 2025

Abstract

The design of control systems becomes more complex as technology advances, which requires the development of optimization techniques. In particular, multi-objective optimal control (MOC) is a method for designing a control system that coordinates several design objectives that may conflict with each other. In this study, a new hybrid scheme is presented that combines the non-dominated sorting genetic algorithm II (NSGA-II) with the simple cell mapping (SCM) method. The combined method first performs a random global search using the genetic algorithm and then applies the SCM method as a neighborhood-based search and recovery procedure. The proposed method’s efficiency and performance were evaluated on two benchmark problems and two multi-objective optimal control problems. We used two performance indicators (generational distance (GD) and a diversity metric) to assess the convergence to the Pareto front and the diversity of the solution set, respectively. The results demonstrate that the proposed method not only achieved superior efficiency but also produced a more uniform distribution of solutions along the Pareto front compared with the SCM and NSGA-II algorithms.

1. Introduction

Optimal control theory addresses the challenge of designing control functions for dynamical systems, governed by differential equations, to optimize specific performance criteria. These problems are inherently more complex than conventional static optimization, as decision variables represent time-dependent trajectories rather than scalar values [1]. In real-world engineering applications, system performance must often balance multiple competing objectives, giving rise to multi-objective optimal control problems (MOOCPs), where solutions are characterized by Pareto-optimal sets representing trade-offs between conflicting goals [2].
Scalarization methods represent a classical approach for handling MOOCPs by transforming them into parameterized single-objective subproblems. Techniques such as Weighted Sum (WS), Normal Boundary Intersection (NBI), and Normalized Normal Constraint (NNC) have been widely applied in this context [3,4]. However, these methods face significant limitations: the WS method cannot generate solutions in non-convex regions of the Pareto front [5,6], while NBI and NNC, though capable of handling non-convex Pareto fronts, require numerous independent optimization problems to be solved, leading to substantial computational burdens [7]. Recent advancements incorporating Centroidal Voronoi Tessellation (CVT) algorithms have improved solution distribution uniformity but remain computationally intensive, particularly for high-dimensional problems and interior points of the Pareto front [8].
Evolutionary algorithms (EAs), particularly Multi-Objective Evolutionary Algorithms like NSGA-II and genetic algorithms (GAs), offer population-based solutions for MOOCPs [9]. While demonstrating global search capabilities, EAs face challenges including susceptibility to local optima, sensitivity to operator selection, and computational inefficiency [10,11]. The performance of standard GA is particularly constrained by its dependence on problem-specific operator tuning and limited convergence speed [10]. To address these limitations, researchers have developed sophisticated hybrid approaches including (1) multi-operator evolutionary algorithms combining different selection, crossover, and mutation strategies [12,13] and (2) integration with other metaheuristics including particle swarm optimization [14,15], descent methods [16], simulated annealing [17], and cuckoo search algorithms [18]. These hybridizations have demonstrated improved convergence properties and solution quality across various optimal control applications.
Set-oriented algorithms, such as the simple cell mapping (SCM) method, provide a global framework for analyzing and solving multi-objective optimal control problems. In SCM, the state space is discretized into a uniform grid of cells [19]. The system’s dynamics are then analyzed by mapping each cell to one or more image cells [20], enabling a comprehensive exploration of the entire domain, including the identification of global attractors and their basins [21,22]. A key advantage of set-oriented methods is their ability to naturally approximate the entire Pareto frontier, including non-convex and disconnected segments, in an integrated process. While the computational burden of the basic SCM method increases significantly with the dimensionality of the design space [23], its efficiency can be greatly enhanced by focusing the cell mapping search only on promising regions, such as the neighborhoods of solutions provided by a preliminary evolutionary search [24]. Compared to more complex cell mapping (CM) techniques, SCM is often preferred for deterministic optimization due to its computational simplicity and directness. Other set-oriented methods, like subdivision techniques, share this global perspective but may also face challenges with excessive computational load in complex, high-dimensional problems [25].
In this study, motivated by the weaknesses mentioned above, we present an effective numerical approach for solving multi-objective nonlinear optimal control problems based on a hybrid method (genetic algorithm and cell mapping). The proposed hybrid method uses NSGA-II for efficient global exploration to identify promising regions near the Pareto set and then uses SCM for precise local refinement within those regions. This approach not only dramatically reduces the computational burden associated with a pure SCM search over the entire design space but also significantly improves the accuracy and uniformity of the Pareto frontier approximation compared to pure NSGA-II or pure SCM. To validate the proposed algorithm’s performance, it was tested on a suite of four problems: two standard benchmarks and two real-world multi-objective optimal control problems. Its performance was quantitatively assessed based on the convergence to the true Pareto-optimal set and the diversity of the obtained solution set.
The remainder of this paper is organized as follows. Section 2 briefly introduces the mathematical formulation of the MOCP. Sections 3 and 4 review the NSGA-II algorithm and the SCM method, and the hybrid method is described in Section 5. Section 6 defines the performance metrics. Section 7 presents two benchmark MOPs and two practical MOCPs to demonstrate the efficiency of the proposed algorithm, Section 8 reports a comparative statistical analysis, and Section 9 concludes the paper.

2. Multi-Objective Optimal Control Problem

The multi-objective optimal control problem is generally formulated as follows:
$$\mathrm{Opt}\;\; J(x,u) = \big(J_1(x,u),\ J_2(x,u),\ \ldots,\ J_m(x,u)\big), \tag{1}$$
$$\text{s.t.:}\quad \dot{x}(t) = f(x(t), u(t), t), \tag{2}$$
$$b(x(0)) = 0, \tag{3}$$
$$b_f(x(t_f)) = 0, \tag{4}$$
$$c_p(x(t), u(t), t) \le 0, \tag{5}$$
$$c_f(x(t_f), u(t_f), t_f) \le 0. \tag{6}$$
where $m$ is the number of objectives and $n$ is the state space dimension. Also, $x(t) \in \mathbb{R}^n$ and $u(t) \in \mathbb{R}^m$ denote the state and control variables, respectively. The objective $J_i$ is defined as follows:
$$J_i = \varphi(x_f, t_f) + \int_{t_0}^{t_f} L_i(x, u, t)\, dt, \tag{7}$$
The functions $\varphi: \mathbb{R}^n \times \mathbb{R}^+ \to \mathbb{R}$ and $L_i: \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^+ \to \mathbb{R}$ are continuous, and $J$ is the cost function to be minimized. The final time $t_f$ may be fixed or free. The function $f: \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^+ \to \mathbb{R}^n$ represents the dynamic system equations, the vectors $b$ and $b_f$ specify the initial and terminal boundary conditions, and the path and final inequality constraints on the states and controls are given by $c_p$ and $c_f$, respectively. The set of all admissible pairs $(x, u)$ that satisfy Equations (2)–(6) is denoted by $S \subseteq \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}$ [26].
In multi-objective problems, due to the existence of multiple objectives (usually in conflict with each other), it is difficult to find an acceptable solution that optimizes all objectives simultaneously [27]. Therefore, it is necessary to explain the concept of Pareto optimality [28], which is defined as follows.
Definition 1.
A solution $(x^*, u^*) \in S$ is said to be non-dominated whenever there is no solution $(x, u) \in S$ that is no worse than $(x^*, u^*)$ in all objectives and strictly better in at least one objective [2].
Definition 2.
A solution $(x^*, u^*) \in S$ is a Pareto optimal solution of the MOCP if and only if it is not dominated by any other solution $(x, u) \in S$ [2].
Definition 3.
A set of non-dominated solutions is said to be a Pareto set, and we call their image in the objective space the Pareto frontier [2].
Definition 4.
Let $u, v \in \mathbb{R}^n$ be two vectors and $A, B \subseteq \mathbb{R}^n$ be two sets of vectors. The maximum norm distance $d$ and the semi-distance $\mathrm{dist}(\cdot,\cdot)$ are defined as
1. $d(u, v) = \max_{i=1,\ldots,n} |u_i - v_i|$
2. $\mathrm{dist}(u, A) = \inf_{v \in A} d(u, v)$
3. $\mathrm{dist}(B, A) = \sup_{u \in B} \mathrm{dist}(u, A)$
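To make Definition 4 concrete, the following minimal Python sketch (a reader's illustration, not part of the original algorithm) evaluates the maximum-norm distance and the two semi-distances for small finite sets of vectors.

```python
import numpy as np

def d(u, v):
    # Maximum (Chebyshev) norm distance between two vectors, Definition 4(1).
    return np.max(np.abs(np.asarray(u) - np.asarray(v)))

def dist_point_to_set(u, A):
    # Semi-distance dist(u, A) = inf over v in A of d(u, v), Definition 4(2).
    return min(d(u, v) for v in A)

def dist_set_to_set(B, A):
    # dist(B, A) = sup over u in B of dist(u, A), Definition 4(3).
    return max(dist_point_to_set(u, A) for u in B)

# Example: distance of the set B from the set A.
A = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
B = [np.array([0.5, 0.2]), np.array([2.0, 1.0])]
print(dist_set_to_set(B, A))  # 1.0
```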

3. Non-Dominated Sorting Genetic Algorithm II (NSGA-II)

The genetic algorithm (GA) is a meta-heuristic algorithm inspired by the laws of genetics and natural selection. The algorithm starts by creating a random population of candidate solutions in the design space, and the fitness of the population is then evaluated. The best candidate solutions are selected to produce the next generation, and crossover and mutation operations are applied to them. By repeating this process over subsequent iterations, the candidate solutions gradually converge to the true solution [29].
The GA for MOPs has many variations, which differ mainly in their fitness assignment, elitism, and diversification mechanisms. The non-dominated sorting genetic algorithm II (NSGA-II) [29] is one of these variations and is among the most effective multi-objective GAs. NSGA-II is a modified version of NSGA, which was first introduced by Srinivas and Deb [30]; the main differences are that NSGA-II uses fast non-dominated sorting, elitism, and a crowding-distance technique. The elitism technique ensures that the best solutions from the previous step are retained in the current step, which clearly increases the convergence speed of the algorithm.
First, the parent population is randomly generated as a set of candidate solutions. Then, by performing tournament selection, crossover, and mutation operations on the parents, a population of offspring is generated. Both populations are sorted using non-dominated sorting. In addition, the crowding distance is used as a complementary strategy: it measures how close a solution is to its neighbors, so the greater the distance, the better the diversity along the Pareto front. This process continues until one of the stopping conditions of the algorithm is met. For more details, the interested reader is referred to Deb et al. (2002) [29].
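As an illustration of the diversity mechanism described above, the following Python sketch computes the crowding distance of a front of objective vectors in the manner of Deb et al. [29]; the function name and array layout are ours, not taken from the paper.

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance for a front F of shape (N, M): N solutions, M objectives."""
    N, M = F.shape
    dist = np.zeros(N)
    for m in range(M):
        order = np.argsort(F[:, m])
        f_min, f_max = F[order[0], m], F[order[-1], m]
        dist[order[0]] = dist[order[-1]] = np.inf  # boundary solutions are always kept
        if f_max == f_min:
            continue
        # Interior solutions: normalized gap between the two nearest neighbors in objective m.
        for j in range(1, N - 1):
            dist[order[j]] += (F[order[j + 1], m] - F[order[j - 1], m]) / (f_max - f_min)
    return dist
```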
EAs perform better than other methods on certain problem classes but may perform worse on others. Because evolutionary algorithms are inherently stochastic, they can approximate the Pareto set well for most problems. These algorithms also have weaknesses: they only consider a subset of solutions close to the optimal solutions, and a significant portion of the solution set is discarded by the archiving technique [31]. Additionally, EAs can get stuck in local optima when relying on local search alone. However, hybridizing EAs with other search and optimization techniques can overcome this limitation and provide improved solutions.

4. Simple Cell Mapping (SCM)

Cell mapping techniques were first introduced in [19] for the global analysis of nonlinear dynamical systems. In SCM, the cell state space is an n-dimensional grid of cells of the same size. The basic idea of the SCM method is that each cell has a single image, which is usually determined using the Center Point Method, where a single trajectory from the center of the cell domain is examined. In other words, all states within a cell are mapped to a single cell.
To discretize the performance index $J(t_0)$, a finite set of mapping time intervals and admissible control inputs can be considered. We denote the set of mapping time steps and the set of admissible control vectors by $\Delta t = \{0 = t_0, t_1, \ldots, t_n = T\}$, with a step size $h = \frac{T-0}{n}$, and $U = \{u_i^0, u_i^1, \ldots, u_i^m\}$, respectively. Using Equation (7), we can obtain the point-to-point mapping
$$x(k) = F\big(x(k-1), u(k), \Delta t(k)\big), \tag{8}$$
where $F$ denotes the point-to-point mapping, and the state vector and control at the $k$-th mapping step are represented by $x(k) \in \mathbb{R}^n$ and $u(k) \in U$, respectively. Here, the dynamics of a whole cell are represented by $z$. Using the point-to-point mapping of Equation (8), the center of $z$ is mapped; the cell containing this image is called the image cell of $z$ [19]:
$$z(k) = C\big(z(k-1), u(k), \Delta t(k)\big). \tag{9}$$
Because the exact image of the center of cell $z$ is approximated by the center of its image cell, this operation introduces a significant error in long-term solutions of dynamical systems [19,32]. However, the SCM method provides an effective procedure for examining the global solution properties of the system. For further details, the reader is referred to Hsu [19].
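The following minimal sketch illustrates the SCM construction of Equation (9) for a single control value and a single mapping time step. The uniform grid over a box $[lb, ub]$, the RK4 integrator, and all names are illustrative assumptions of ours, not the authors' implementation.

```python
import numpy as np

def cell_index(x, lb, ub, divisions):
    # Index of the cell containing the point x in a uniform grid over [lb, ub].
    # lb, ub, divisions are numpy arrays of length n (divisions holds integers).
    frac = (x - lb) / (ub - lb)
    return tuple(np.clip((frac * divisions).astype(int), 0, divisions - 1))

def cell_center(idx, lb, ub, divisions):
    # Center point of the cell with integer index idx.
    return lb + (np.asarray(idx) + 0.5) * (ub - lb) / divisions

def rk4_step(f, x, u, t, h):
    # One classical Runge-Kutta step of the controlled dynamics x' = f(x, u, t).
    k1 = f(x, u, t)
    k2 = f(x + 0.5 * h * k1, u, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, u, t + 0.5 * h)
    k4 = f(x + h * k3, u, t + h)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def image_cell(z, f, u, t, h, lb, ub, divisions):
    # Simple cell mapping: map the center of cell z and return the cell containing the image.
    x0 = cell_center(z, lb, ub, divisions)
    x1 = rk4_step(f, x0, u, t, h)
    return cell_index(x1, lb, ub, divisions)
```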

5. The Hybrid Method

The SCM method first discretizes the design space and then creates cell-to-cell mappings for the MOP search algorithm [19]. The SCM method examines the entire search space to find the solution of the MOP. Because the Pareto set covers only a small part of the design space, the computational burden is reduced and the performance of the method is significantly improved if the SCM method is applied only around the Pareto set.
On the other hand, the NSGA-II algorithm is an evolutionary algorithm with random search [9]. The process starts with an initial population as candidate solutions and by evaluating their fitness, the best solutions are selected from them to produce the next generation. This process continues its evolutionary path until the stopping condition of the algorithm is met.
In this section, we first introduce a discretization of the control space based on an equidistant partition of $[0, T]$, $\Delta t = \{0 = t_0, t_1, \ldots, t_n = T\}$, with step size $h = \frac{T - 0}{n}$. The time interval is divided into $n$ subintervals $[t_i, t_{i+1}]$, $i = 0, \ldots, n-1$.
For the $i$-th component of the control vector function, the set of admissible control values is discretized into the equidistant nodes $(u_i^0, u_i^1, \ldots, u_i^m)$. Furthermore, we assume that all components of the control vector function are piecewise linear on each time subinterval. Using the characteristic function, the $i$-th component of the control function vector can be written as follows:
$$v_i(t) = \sum_{k=1}^{n} v_i^k(t)\, \chi_{[t_{k-1}, t_k]}(t),$$
where
$$\chi_{[t_{k-1}, t_k]}(t) = \begin{cases} 1, & t \in [t_{k-1}, t_k), \\ 0, & \text{otherwise}, \end{cases}$$
and $v_i^k(t)$ is the piece of the piecewise linear function $v_i(t)$ on the interval $[t_{k-1}, t_k]$ obtained by connecting the nodes $(t_{k-1}, u_i^{k-1})$ and $(t_k, u_i^k)$. For each control, its corresponding trajectory is required to evaluate the performance index, and this trajectory must be in a discretized form consistent with the control. Therefore, for each piecewise linear control function, the system of differential equations describing the controlled dynamics is solved numerically for $x(t)$, starting from the given initial conditions on the state variables, using any high-accuracy solver; the classical fourth-order Runge–Kutta method (RK4) is a natural choice, since it is a widely used, robust, and accurate method for solving ordinary differential equations (ODEs) and is well suited to simulating controlled system dynamics in optimal control problems. From the resulting vectors in $\mathbb{R}^{2m+2}$, $(x_i, u_i) = (x_i^0, x_i^1, \ldots, x_i^m, u_i^0, u_i^1, \ldots, u_i^m)$, the values of the performance indices $J_i(x, u)$, $i = 1, \ldots, n$, to be minimized are computed using any high-accuracy numerical integration formula.
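To make the discretization above concrete, the following Python sketch evaluates a candidate (piecewise-linear control, trajectory, objectives) triple. The routine names, the scalar-control assumption, and the use of a trapezoidal rule for the running costs are ours; any high-accuracy solver and quadrature could be substituted, as noted in the text.

```python
import numpy as np

def piecewise_linear_control(t, t_nodes, u_nodes):
    # v(t): linear interpolation between the control nodes (t_k, u_k).
    return np.interp(t, t_nodes, u_nodes)

def simulate_and_evaluate(f, L_list, x0, t_nodes, u_nodes, substeps=10):
    """Integrate x' = f(x, u, t) with RK4 under the piecewise-linear control and
    approximate each running cost J_i = integral of L_i(x, u, t) dt by the trapezoidal rule.
    Returns the objective values and the final state (to which a terminal cost could be added)."""
    x = np.array(x0, dtype=float)
    J = np.zeros(len(L_list))
    for k in range(len(t_nodes) - 1):
        h = (t_nodes[k + 1] - t_nodes[k]) / substeps
        t = t_nodes[k]
        for _ in range(substeps):
            u0 = piecewise_linear_control(t, t_nodes, u_nodes)
            u1 = piecewise_linear_control(t + h, t_nodes, u_nodes)
            um = 0.5 * (u0 + u1)                       # control at the substep midpoint
            # One RK4 step of the controlled dynamics.
            k1 = f(x, u0, t)
            k2 = f(x + 0.5 * h * k1, um, t + 0.5 * h)
            k3 = f(x + 0.5 * h * k2, um, t + 0.5 * h)
            k4 = f(x + h * k3, u1, t + h)
            x_new = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            # Trapezoidal contribution of each running cost L_i on [t, t + h].
            for i, L in enumerate(L_list):
                J[i] += 0.5 * h * (L(x, u0, t) + L(x_new, u1, t + h))
            x, t = x_new, t + h
    return J, x
```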
Now, we introduce a hybrid technique for solving the MOCP that combines the advantages of the NSGA-II and SCM methods. In the hybrid method, the NSGA-II algorithm is invoked first: an initial population of solutions is generated, and over a small number of generations a set of points close to or on the Pareto frontier is obtained. In the next step, this set of points is used as input to the SCM method, and the set of cells containing these points is identified. Next, a neighborhood search method and a recovery process in the cell space are used to find the Pareto points. To obtain better results, the cells can be further refined [33].
The set $B$ contains all the cells of interest and grows gradually during the process. The fitness of each cell $c_s$ in $B$ is compared with that of all its orthogonal neighbors $N(c_s)$. Among the non-dominated cells, the cell with the steepest descent is added to the set $B_l$. The current cell $c_s$ is part of the solution if it has no dominating neighbors; in this case, it is added to the set $B$. This process continues until no more non-dominated neighboring cells can be added to the current set. Next, cell refinement is performed on the cell set $B_0$. Finally, after the solution set is found, the dominance of all cells is checked. It is noteworthy that, instead of the Jacobian (gradient) matrix, the dominance relation is used in simple cell mapping.
The purpose of using the NSGA-II algorithm in the hybrid method is to reduce the computational burden of searching the entire design space. Since a small population size is used in NSGA-II, the entire Pareto set is not recovered at this stage; the SCM method is then applied, which completes the set by evaluating neighboring cells.
In the hybrid method, SCM is applied to the set of cells containing the initial NSGA-II solutions ( B 0 ) and their neighborhoods. This is in contrast to traditional SCM, which discretizes and evaluates the entire search space. This modification directly improves computational efficiency. Furthermore, the accuracy of the final solutions is enhanced through the “Subdivision” (refinement) process. Finally, the neighborhood search helps discover connected parts of the Pareto set that NSGA-II might have missed, leading to better convergence to the complete Pareto frontier. The flowchart of the hybrid method is shown in Figure 1.
The pseudo-code of the proposed algorithm is presented below. Let $z$ and $D$ denote the destination cell and the distance value $\mathrm{dist}(\cdot,\cdot)$, respectively.

Pseudo-Code for Hybrid Algorithm

Algorithm 1 summarizes the procedure:
Algorithm 1: Pseudo-Code for the Hybrid Algorithm
1. Discretize the control and state space
2. Run the NSGA-II algorithm to find the initial solution set $B_0$
3. For $l = 1$ to iter
3.1. Set $i = 0$, $B_l = \emptyset$, $S = B_l$
3.2. While $i < |S|$
3.2.1. Put $s_i$ in $c_s$, $c_s$ in $z$, and set $D = 0$
3.3. For each $s \in N(c_s)$
3.3.1. If $center(c_s)$ dominates $center(s)$ and $\|J(center(s)) - J(center(c_s))\|_2 > D$
3.3.1.1. Put $s$ in $z$
3.3.1.2. Set $D = \|J(center(s)) - J(center(c_s))\|_2$
3.4. If $z \notin S$
3.4.1. Put $z$ in $S$
3.5. If $z = c_s$
3.5.1. If $z \notin B_l$
3.5.1.1. Put $z$ in $B_l$
3.5.2. For each $s \in N(c_s)$
3.5.2.1. If $center(s)$ dominates $center(c_s)$ and there exists no $b \in B_l$ such that $s$ dominates $b$
3.5.2.2. Put $s$ in $S$
3.6. $i = i + 1$
3.7. Set $N = sub \times N$
4. Put the non-dominated members of $S$ in $B_l$
5. For $l = 1, 2, \ldots$
5.1. (Subdivision): construct $\hat{B}_l$ of cells $B(c_l, r_l)$ from $B_{l-1}$ such that $\bigcup_{B \in \hat{B}_l} B = \bigcup_{B \in B_{l-1}} B$, where $c_i = \frac{a_i + b_i}{2}$, $r_i = \frac{b_i - a_i}{2}$, and an $n$-dimensional cell is expressed as $B = B(c, r) = \{x \in \mathbb{R}^n \mid c_i - r_i \le x_i \le c_i + r_i,\ i = 1, \ldots, n\}$
6. (Selection): define the new collection $B_l = \{b \in \hat{B}_l \mid \nexists\, \hat{b} \in \hat{B}_l : \hat{b} \text{ dominates } b\}$
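For readers who prefer an executable form, the sketch below renders the core of Algorithm 1 — the dominance relation, the orthogonal neighborhood $N(c_s)$, and the descent/exploration loop — in Python. It omits the subdivision and selection steps (5 and 6), simplifies the bookkeeping of $B_l$, and relies on data structures of our choosing (integer cell indices and a user-supplied evaluator `J_of_cell` returning objective values at cell centers); it is not the authors' implementation.

```python
import itertools
import numpy as np

def dominates(Ja, Jb):
    # Pareto dominance: Ja is no worse in every objective and strictly better in at least one.
    Ja, Jb = np.asarray(Ja), np.asarray(Jb)
    return bool(np.all(Ja <= Jb) and np.any(Ja < Jb))

def orthogonal_neighbors(cell, divisions):
    # Cells differing by +/-1 in exactly one coordinate (the N(c_s) of Algorithm 1).
    for axis, step in itertools.product(range(len(cell)), (-1, 1)):
        nb = list(cell)
        nb[axis] += step
        if 0 <= nb[axis] < divisions[axis]:
            yield tuple(nb)

def neighborhood_search(B0, J_of_cell, divisions):
    """Grow the candidate set from the NSGA-II seed cells B0 by following dominating
    neighbors (steepest improvement); return the locally non-dominated cells."""
    S, archive = list(B0), set()
    i = 0
    while i < len(S):
        cs = S[i]
        best, best_gain = cs, 0.0
        for nb in orthogonal_neighbors(cs, divisions):
            if dominates(J_of_cell(nb), J_of_cell(cs)):
                gain = np.linalg.norm(np.asarray(J_of_cell(nb)) - np.asarray(J_of_cell(cs)))
                if gain > best_gain:
                    best, best_gain = nb, gain
        if best != cs:
            if best not in S:
                S.append(best)      # descend toward the Pareto set
        else:
            archive.add(cs)         # no dominating neighbor: keep as a candidate solution
            for nb in orthogonal_neighbors(cs, divisions):
                if nb not in S:
                    S.append(nb)    # recovery: explore the neighborhood of a retained cell
        i += 1
    # Final non-dominated filtering over the archived cells.
    return {c for c in archive
            if not any(dominates(J_of_cell(o), J_of_cell(c)) for o in archive if o != c)}
```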

6. Performance Metrics

In this section, we present the performance metrics proposed in the literature to evaluate the performance of NSGA-II, SCM, and hybrid algorithms. To evaluate the convergence of the solution to the true Pareto frontier, we use the generational distance metric ( γ ), and to measure the diversity of the solutions, we use the diversity metric Δ . These metrics are defined as follows, respectively [34],
$$\gamma = \frac{\left(\sum_{i=1}^{|S|} d_i^2\right)^{1/2}}{|S|},$$
where $S$ represents the solution set and $d_i$ is defined as follows, with $f_m^{(i)}$ denoting the $m$-th objective value of the $i$-th member of the solution set and $f_m^{*(k)}$ the corresponding value of the closest member of the true Pareto set:
$$d_i = \min_{k} \sqrt{\sum_{m=1}^{M} \big(f_m^{(i)} - f_m^{*(k)}\big)^2},$$
where $M$ represents the number of objectives.
$$\Delta = \frac{d_f + d_l + \sum_{i=1}^{|S|-1} |d_i - \bar{d}|}{d_f + d_l + (|S|-1)\,\bar{d}},$$
where $d_f$ and $d_l$ represent the Euclidean distances between the extreme points of the true Pareto-optimal frontier and the boundary solutions of the obtained Pareto set, and $\bar{d}$ is the average of all distances $d_i$ between consecutive solutions. The smaller the $\gamma$ and $\Delta$ values, the better the algorithm performs.
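A minimal Python rendering of the two metrics, assuming bi-objective fronts given as NumPy arrays of objective vectors and, for Δ, adjacency obtained by sorting along the first objective (an assumption of ours, following common practice after Deb [34]):

```python
import numpy as np

def generational_distance(S, P_true):
    # gamma: S (|S| x M) obtained solutions, P_true (|P*| x M) reference Pareto front.
    d = np.array([np.min(np.linalg.norm(P_true - s, axis=1)) for s in S])
    return np.sqrt(np.sum(d ** 2)) / len(S)

def diversity_metric(S, P_true):
    # Delta: spread of the obtained front; here d_i are gaps between consecutive solutions.
    S = S[np.argsort(S[:, 0])]
    d = np.linalg.norm(np.diff(S, axis=0), axis=1)
    d_bar = d.mean()
    P_sorted = P_true[np.argsort(P_true[:, 0])]
    d_f = np.linalg.norm(P_sorted[0] - S[0])     # distance to one extreme of the true front
    d_l = np.linalg.norm(P_sorted[-1] - S[-1])   # distance to the other extreme
    return (d_f + d_l + np.sum(np.abs(d - d_bar))) / (d_f + d_l + (len(S) - 1) * d_bar)
```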

7. Numerical Results

In this section, we first present two benchmark MOPs and then two multi-objective optimal control problems for nonlinear dynamical systems in order to compare the SCM, NSGA-II, and hybrid methods. The implementation details of the algorithms for each problem are presented in a table. For a fair evaluation and comparison, we considered approximately the same number of function evaluations for all three methods. For the NSGA-II algorithm, the number of evaluations is obtained as the product of the number of iterations and the population size.
It should be noted that determining the optimal combination of generation count and population size is an optimization problem in itself. Since this falls outside the scope of the current study and due to the stochastic nature of the genetic algorithm (GA), the computation was repeated 20 times to obtain a statistical average of the performance metrics. Furthermore, the Wilcoxon rank-sum test was employed to evaluate the null hypothesis at a 95 % confidence level. Because the MOCP problems in Section 7.3 and Section 7.4 do not have a known Pareto frontier, we ran each algorithm separately with 5 × 10 4 function cells and then combined the two solution sets into one. Among them, the non-dominated solutions were selected as the expected Pareto solutions.

7.1. Fonseca’s Problem

First, we consider Fonseca’s problem as [35]
$$\min_{x \in Q}\ \big[f_1(x),\ f_2(x)\big],$$
where
$$f_1(x) = 1 - \exp\!\left(-\sum_{i=1}^{n}\Big(x_i - \tfrac{1}{\sqrt{n}}\Big)^{2}\right), \qquad f_2(x) = 1 - \exp\!\left(-\sum_{i=1}^{n}\Big(x_i + \tfrac{1}{\sqrt{n}}\Big)^{2}\right), \qquad (n = 3),$$
$$Q = \big\{x \in \mathbb{R}^3 \ \big|\ -4 \le x_i \le 4,\ 1 \le i \le 3\big\}.$$
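As a quick reference, the two objectives can be evaluated with the short sketch below (an illustration of the formulas above, not part of the proposed algorithm):

```python
import numpy as np

def fonseca(x):
    # Fonseca's bi-objective test problem with n = 3 decision variables in [-4, 4].
    x = np.asarray(x, dtype=float)
    n = len(x)
    f1 = 1.0 - np.exp(-np.sum((x - 1.0 / np.sqrt(n)) ** 2))
    f2 = 1.0 - np.exp(-np.sum((x + 1.0 / np.sqrt(n)) ** 2))
    return np.array([f1, f2])

print(fonseca([0.0, 0.0, 0.0]))  # a point on the Pareto front: both objectives equal ~0.632
```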
This is a non-convex problem with three variables and two objectives. The trade-off between the objectives for all three algorithms is shown in Figure 2. The figure depicts the true solutions with green triangles and the solutions from NSGA-II, SCM, and the hybrid method with squares, blue triangles, and circles, respectively. As can be observed, the points generated by the hybrid method (circles) are more uniformly distributed along the Pareto frontier than those of the other two methods. Table 1 compares the methods for Fonseca's problem, reporting the average CPU time, convergence, and diversity values. The results demonstrate the superior performance of the hybrid method, which yields convergence ($\gamma$) and diversity ($\Delta$) values of 0.026 and 0.102, respectively; these values are lower than those obtained by the other methods and confirm the effectiveness and better efficiency of the proposed hybrid approach.

7.2. Deb’s Problem

This problem has two variables and two objectives defined as follows:
$$\min\ \big(f_1(x),\ f_2(x)\big),$$
where
$$f_1(x) = x_1, \qquad f_2(x) = (1 + 10 x_2)\left[1 - \left(\frac{x_1}{1 + 10 x_2}\right)^{\alpha} - \frac{x_1}{1 + 10 x_2}\,\sin\!\big(2\pi q\, x_1\big)\right],$$
$$x = [x_1, x_2]^{T}, \qquad 0 \le x_1, x_2 \le 1, \qquad \alpha = 2, \qquad q = 6.$$
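A corresponding sketch of Deb's objectives, as reconstructed above (with α = 2 and q = 6); the function name is ours:

```python
import numpy as np

def deb_discontinuous(x, alpha=2.0, q=6.0):
    # Deb's bi-objective test problem with a discontinuous Pareto frontier, 0 <= x1, x2 <= 1.
    x1, x2 = x
    g = 1.0 + 10.0 * x2
    f1 = x1
    f2 = g * (1.0 - (x1 / g) ** alpha - (x1 / g) * np.sin(2.0 * np.pi * q * x1))
    return np.array([f1, f2])

print(deb_discontinuous([0.25, 0.0]))
```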
This test problem was introduced by Deb [34]. The Pareto frontier consists of several discontinuous sections. Figure 3 shows the Pareto frontier generated by the hybrid, SCM, and NSGA-II algorithms, and the details of the solution for this problem are reported in Table 2. The zoom-in view in Figure 3 depicts the distribution of the Pareto solutions on the Pareto frontier obtained by the hybrid, NSGA-II, and SCM methods. As can be seen from the figure, the hybrid method generates more uniformly distributed solutions than the NSGA-II and SCM methods. It is noteworthy that the quality of the SCM solution does not change significantly as the number of function evaluations increases, because a large part of the solution is lost at the initial cell stage and can no longer be recovered.
Table 2 shows that, although the hybrid method does not perform as well as the other methods in terms of CPU time, it performs better in terms of the diversity and convergence of the solution. The proposed method takes more time to find the solutions due to the subdivision of the cells, but the accuracy of its solutions is higher, and it produces a Pareto frontier with uniformly distributed points.

7.3. Home Heating System

The following problem is a multi-objective optimal control problem introduced by Bianchi, 2006 [36]. This problem concerns a heating system with two objectives. The objectives are to minimize the energy input over an entire day and maximize the thermal comfort. The dynamic model and objectives are as follows:
$$J_1 = \int_{0}^{t_f = 24\,\mathrm{h}} \frac{u(t)}{P_{\max}}\, dt, \qquad J_2 = \int_{0}^{t_f = 24\,\mathrm{h}} \big(x_2 - x_{2,\mathrm{ref}}\big)^{2}\, dt,$$
$$\dot{x}_1 = -\frac{\kappa_{WR}}{\rho_W C_{P_W} V_H}\, x_1 + \frac{\kappa_{WR}}{\rho_W C_{P_W} V_H}\, x_2 + \frac{u}{\rho_W C_{P_W} V_H},$$
$$\dot{x}_2 = \frac{\kappa_{WR}}{\kappa_G \tau_G}\, x_1 - \frac{\kappa_{WR} + \kappa_G}{\kappa_G \tau_G}\, x_2 + \frac{d}{\tau_G}.$$
where the state variables $x_1$ [°C] and $x_2$ [°C] denote the water and room temperatures, respectively. The control variable $u$ [W] is the heat input of the pump, and $d$ [°C] denotes the outside temperature, modeled as the disturbance $d(t) = 2.5 + 7.5 \sin(2\pi t / t_f - \pi/2)$; the control $u$ is limited to the range 0 to 15,000 W.
The initial room and water temperatures are 19.5 °C. Table 3 shows the parameter values.
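For illustration, the right-hand side of the heating-system model can be coded as below and passed to an RK4 integrator such as the one sketched earlier. The parameters follow Table 3; the water density ρ_W, the specific heat capacity C_PW, and the use of seconds as the time base are assumed values of ours that are not listed in the table, and the function name is illustrative.

```python
import numpy as np

# Parameters from Table 3
V_H, kappa_WR, kappa_G, tau_G = 1.28, 1160.0, 260.0, 240.0
# Assumed values (not given in Table 3): water density [kg/m^3], specific heat [J/(kg K)]
rho_W, C_PW = 1000.0, 4186.0
t_f = 24.0 * 3600.0  # one day, in seconds (time-unit assumption)

def heating_dynamics(x, u, t):
    # x = [water temperature x1, room temperature x2] in deg C; u = pump heat input in W.
    x1, x2 = x
    d = 2.5 + 7.5 * np.sin(2.0 * np.pi * t / t_f - np.pi / 2.0)  # outside-temperature disturbance
    cw = rho_W * C_PW * V_H                                       # thermal capacity of the water volume
    dx1 = -(kappa_WR / cw) * x1 + (kappa_WR / cw) * x2 + u / cw
    dx2 = (kappa_WR / (kappa_G * tau_G)) * x1 \
          - ((kappa_WR + kappa_G) / (kappa_G * tau_G)) * x2 + d / tau_G
    return np.array([dx1, dx2])
```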
Figure 4 shows the Pareto frontier generated by all three algorithms; according to it, with increasing input energy, thermal comfort gradually decreases. The figure also shows that the Pareto points obtained from the SCM method are mostly located in the curved part of the Pareto frontier, while the Pareto points obtained from the NSGA-II and hybrid methods are uniformly distributed along the Pareto frontier. The zoom-in view in Figure 4 depicts the distribution of the Pareto solutions obtained by the hybrid, NSGA-II, and SCM methods; as can be seen, the hybrid method generates more uniformly distributed solutions than the NSGA-II and SCM methods. The results in Table 4 show that the values of $\gamma$ and $\Delta$ for the hybrid method are lower than those of the other methods, meaning that the hybrid method has better convergence and diversity of solutions than the SCM and NSGA-II methods.
Figure 5 shows the control trajectory corresponding to the utopia point, that is, the solution with the minimum Euclidean distance from the vector of minimum objective values. It indicates that more heating is used during the night, when it becomes colder, and less heating is used during the day as the temperature rises.

7.4. Fed-Batch Bioreactor

This is the second multi-objective optimal control problem we will consider. This problem was introduced by Ohno et al. [37] and is based on the fed-batch lysine fermentation process. The objectives are to obtain the maximum productivity and yield simultaneously. The equations and objectives of the problem are defined as follows:
$$J_1 = J_p = \frac{x_3(t_f)}{t_f}, \qquad J_2 = J_y = \frac{x_3(t_f)}{\big(x_4(t_f) - x_4(0)\big)\, C_{S,F}},$$
$$\frac{dx_1}{dt} = \mu x_1, \qquad \frac{dx_2}{dt} = -\sigma x_1 + u\, C_{S,F}, \qquad \frac{dx_3}{dt} = \pi x_1, \qquad \frac{dx_4}{dt} = u.$$
where the state variables $x_1$ [g], $x_2$ [g], $x_3$ [g], and $x_4$ [L] denote the biomass, substrate, product (lysine), and fermenter volume, respectively. The initial conditions are $[x_1(0), x_2(0), x_3(0), x_4(0)] = [0.1, 14, 0, 5]$. The control variable $u$ [L/h] denotes the volumetric rate of the feed stream, which contains a limiting substrate at concentration $C_{S,F} = 2.8$ g/L. The parameters of the problem are given in Table 5.
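As a structural sketch (not the authors' code), the bioreactor dynamics and the two objectives can be written as follows. The specific rates μ, σ, and π are passed in as arguments, since their dependence on the state is given by Table 5 and is not repeated here; the sign of the σ term follows the reconstruction above and is an assumption.

```python
import numpy as np

C_SF = 2.8  # feed substrate concentration [g/L]

def bioreactor_dynamics(x, u, mu, sigma, pi_rate):
    # x = [biomass x1 (g), substrate x2 (g), lysine x3 (g), volume x4 (L)]; u = feed rate [L/h].
    x1, x2, x3, x4 = x
    dx1 = mu * x1                  # biomass growth
    dx2 = -sigma * x1 + u * C_SF   # substrate consumption plus feed (sign of the sigma term assumed)
    dx3 = pi_rate * x1             # lysine production
    dx4 = u                        # volume increase due to feeding
    return np.array([dx1, dx2, dx3, dx4])

def objectives(x_tf, x4_0, t_f):
    # J1: productivity; J2: yield, as defined above.
    J1 = x_tf[2] / t_f
    J2 = x_tf[2] / ((x_tf[3] - x4_0) * C_SF)
    return np.array([J1, J2])
```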
Figure 6 shows the Pareto frontier obtained by the three strategies. Details of the computational time, convergence rate, and diversity of the solutions for all three methods are given in Table 6. The results in Table 6 demonstrate the superior performance of the hybrid method, which yields convergence ($\gamma$) and diversity ($\Delta$) values of 0.017 and 0.008, respectively; these values are lower than those obtained by the other methods and confirm the effectiveness and better efficiency of the proposed hybrid approach.
Figure 7 depicts the optimal feeding profile. The figure shows that maximum lysine production is achieved only when productivity alone is considered; when the goal is maximum yield instead, the peak of the feeding profile is lower but feeding is sustained for longer.

8. Results of Comparative Statistical Analysis

A comparative analysis using the Wilcoxon signed-rank test was conducted to determine the efficacy of the three methods—NSGA-II, SCM, and the proposed hybrid method. The results for each performance metric, detailed below, are evaluated at a 95 % confidence level ( α = 0.05 ). A detailed comparison of all three methods using the Wilcoxon signed-rank test is presented in Table 7.
Time Performance: The statistical test yielded p-values of 0.082 and 0.096 for the respective comparisons. Since these values exceed the significance level of 0.05, we fail to reject the null hypothesis. This leads to the conclusion that there is no statistically significant difference in the time performance of the three methods under investigation.
Convergence Performance: In contrast, the analysis of the convergence metric resulted in p-values of 0.018 and 0.011. With these values being below the 0.05 threshold, the null hypothesis is rejected. This provides statistical evidence for a significant difference in convergence performance. Subsequent pairwise comparisons revealed that the hybrid method consistently outperformed both the NSGA-II and SCM methods.
Dispersion Performance: Similarly, for the dispersion metric, the p-values of 0.024 and 0.006 are statistically significant ( p < 0.05 ). Therefore, we reject the null hypothesis, confirming a significant difference in the dispersion characteristics of the methods. The hybrid method was found to exhibit the lowest dispersion value, indicating its superior stability.
Overall, the hybrid method demonstrates a statistically significant advantage in terms of both solution quality (convergence) and reliability (diversity).
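The pairwise tests of Table 7 can, in principle, be reproduced with SciPy's paired Wilcoxon signed-rank test. The snippet below uses randomly generated placeholder values (loosely based on the averages reported in Table 1) purely to illustrate the call, since the per-run metric values are not reproduced here.

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder paired samples over 20 runs (illustrative only, not the paper's data).
rng = np.random.default_rng(0)
gd_hybrid = rng.normal(0.026, 0.004, 20)
gd_nsga2 = rng.normal(0.050, 0.003, 20)

stat, p_value = wilcoxon(gd_hybrid, gd_nsga2)  # paired Wilcoxon signed-rank test
print(f"W = {stat:.0f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0 at the 95% level: the convergence metrics differ significantly.")
```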

9. Conclusions

This study has introduced a novel hybrid methodology that effectively integrates the NSGA-II algorithm with the simple cell mapping (SCM) technique for solving multi-objective optimal control problems. The performance of the proposed approach was rigorously evaluated against established benchmarks and practical problems, with quantitative assessment based on two critical metrics: generational distance (GD), measuring convergence accuracy to the true Pareto front, and a diversity metric, evaluating the distribution uniformity of solution sets.
The results demonstrate that our hybrid method consistently outperforms conventional approaches across both benchmark and practical problems. The algorithm successfully generates solution sets with superior convergence characteristics while maintaining excellent diversity across the Pareto front. This balanced performance confirms the efficacy of combining evolutionary algorithms with set-oriented methods, leveraging NSGA-II’s global exploration capabilities with SCM’s precise local refinement. The proposed framework represents a significant advancement in multi-objective optimal control methodology, offering researchers and practitioners an effective tool for handling complex optimization scenarios where multiple competing objectives must be simultaneously considered. In addition, the Wilcoxon rank sum test also showed significant results for all examples presented in the objective space, which led to the rejection of the null hypothesis at the 95 % confidence level ( p < 0.05 ).
Future research will be directed along several promising avenues to advance the methodological framework. A primary focus will be the enhancement of computational efficiency through the implementation of localized adaptive cell division, which intensifies refinement predominantly in regions proximate to the Pareto front, coupled with an adaptive time-stepping scheme that employs larger steps in domains with smooth dynamics and finer resolutions in sensitive, high-gradient areas. The scope of the algorithm will be expanded to tackle high-dimensional multi-objective optimal control problems (MOCPs) by leveraging adaptive cell mapping and dimension reduction techniques. Further extensions will aim to adapt the framework for real-time optimal control applications and problems featuring time-varying parameters.
The proposed method also has some limitations. While the hybrid method reduces the SCM search space, the initial phases of NSGA-II and cell refinement still pose computational challenges for very-high-dimensional problems (e.g., state dimensions > 10 ). The accuracy of the solution is inherently dependent on the granularity of the discretization of the control and state space, which is a common limitation in cell mapping methods.

Author Contributions

Conceptualization, G.H.A. and M.R.; methodology, S.M. and G.H.A.; software, S.M.; formal analysis, S.M. and M.R.; investigation, M.R.; writing—original draft, S.M. and G.H.A.; writing—review and editing, M.R.; visualization, S.M.; supervision, M.R. All authors have read and agreed to the published version of the manuscript.

Funding

No external funding was received for this research.

Data Availability Statement

All original contributions of this study are included in the article, and additional inquiries can be addressed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yenkie, K.M.; Diwekar, U. Stochastic Optimal Control of Seeded Batch Crystallizer Applying the Ito Process. Ind. Eng. Chem. Res. 2013, 52, 108–122. [Google Scholar] [CrossRef]
  2. Askarirobati, G.H.; Borzabadi, A.H.; Heydari, A. Solving multi-objective optimal control problems of chemical processes using hybrid evolutionary algorithm. Iran. J. Math. Chem. 2019, 10, 103–126. [Google Scholar]
  3. Logist, F.; Houska, B.; Diehl, M.; Van Impe, J.F. Fast Pareto set generation for nonlinear optimal control problems with multiple objectives. Struct. Multidiscip. Optim. 2010, 42, 591–603. [Google Scholar] [CrossRef]
  4. Das, I.; Dennis, J. Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM J. Optim. 1998, 8, 631–657. [Google Scholar] [CrossRef]
  5. Miettinen, K. Nonlinear Multiobjective Optimization; Kluwer Academic Publishers: Boston, MA, USA, 1999. [Google Scholar]
  6. Askarirobati, G.H.; Borzabadi, A.H.; Heydari, A. Axial preferred Solutions for Multi-objective Optimal Control Problems: An application to chemical processes. Iran. J. Numer. Anal. Optim. 2020, 10, 19–32. [Google Scholar]
  7. Messac, A.; Mattson, C.A. Normal constraint method with guarantee of even representation of complete Pareto frontier. AIAA J. 2004, 42, 2101–2111. [Google Scholar] [CrossRef]
  8. Motta, R.d.S.; Afonso, S.M.; Lyra, P.R. A modified nbi and nc method for the solution of n-multiobjective optimization problems. Struct. Multidiscip. Optim. 2012, 46, 239–259. [Google Scholar] [CrossRef]
  9. Coello Coello, C.A.; Lamont, G.B.; Veldhuizen, D.A. Evolutionary Algorithms for Solving Multi-Objective Problems; Springer: New York, NY, USA, 2007. [Google Scholar]
  10. Arumugam, M.S.; Rao, M.V.C.; Palaniappan, R. New hybrid genetic operators for real coded genetic algorithm to compute optimal control of a class of hybrid systems. Appl. Soft Comput. 2005, 6, 38–52. [Google Scholar] [CrossRef]
  11. Shuai, X.; Zhou, X. A genetic algorithm based on combination operators. Procedia Environ. Sci. 2011, 11, 346–350. [Google Scholar] [CrossRef]
  12. Larrañaga, P.; Kuijpers, C.M.H.; Murga, R.H.; Inza, I.; Dizdarevic, S. Genetic algorithms for the travelling salesman problem: A review of representations and operators. Artif. Intell. Rev. 1999, 13, 129–170. [Google Scholar] [CrossRef]
  13. Elsayed, S.M.; Sarker, R.A.; Essam, D.L. Multi-operator based evolutionary algorithms for solving constrained optimization problems. Comput. Oper. Res. 2011, 38, 1877–1896. [Google Scholar] [CrossRef]
  14. Lan, T.; Huang, L.; Ma, R.; Wang, K.; Ruan, Z.; Wu, J.; Li, X.; Chen, L. A robust method of dual adaptive prediction for ship fuel consumption based on polymorphic particle swarm algorithm driven. Appl. Energy 2025, 379, 124911. [Google Scholar] [CrossRef]
  15. Zhang, S.; Yan, J.; Xie, P.; Zhai, P.; Tao, Y. Power System Loss Reduction Strategy Considering Security Constraints Based on Improved Particle Swarm Algorithm and Coordinated Dispatch of Source–Grid–Load–Storage. Processes 2025, 13, 831. [Google Scholar] [CrossRef]
  16. Shan, Y.; Zhao, S.; Li, Q.; Zhang, R. Gradient Descent Method for Group Particles Based on Improved Genetic Algorithm. Authorea 2024. [Google Scholar] [CrossRef]
  17. Cai, Z.; Wei, M.; Zou, P.; Bai, X.; Lin, Z.; Chen, J. Analytical Layer Assignment with Simulated Annealing Refinement. In Proceedings of the 2025 IEEE International Symposium on Circuits and Systems (ISCAS), London, UK, 25–28 May 2025; pp. 1–5. [Google Scholar]
  18. Kanagaraj, G.; Ponnambalam, S.G.; Jawahar, N. A hybrid cuckoo search and genetic algorithm for reliability-redundancy allocation problems. Comput. Ind. Eng. 2013, 66, 1115–1124. [Google Scholar]
  19. Hsu, C.S. A discrete method of optimal control based upon the cell state space concept. J. Optim. Theory Appl. 1985, 46, 547–569. [Google Scholar] [CrossRef]
  20. Hsu, C.S. Cell-to-Cell Mapping: A Method of Global Analysis for Nonlinear Systems; Applied Mathematical Sciences; Springer: Singapore, 1987; Volume 64. [Google Scholar]
  21. Hernández, C.; Naranjani, Y.; Sardahi, Y.; Liang, W.; Schütze, O.; Sun, J.Q. Simple cell mapping method for multi-objective optimal feedback control design. Int. J. Dyn. Control 2013, 1, 231–238. [Google Scholar] [CrossRef]
  22. Naranjani, Y.; Sardahi, Y.; Sun, J.Q.; Hernández, C.; Schütze, O. Fine structure of Pareto front of multi-objective optimal feedback control design. In Proceedings of the ASME 2013 Dynamic Systems and Control Conference, Palo Alto, CA, USA, 21–23 October 2013; p. V001T15A009. [Google Scholar]
  23. Naranjani, Y.; Hernández, C.; Xiong, F.R.; Schütze, O.; Sun, J.Q. A hybrid algorithm for the simple cell mapping method in multiobjective optimization. In EVOLVE—A Bridge Between Probability, Set Oriented Numerics, and Evolutionary Computation IV; Emmerich, M., Deutz, A., Schütze, O., Bäck, T., Tantar, E., Tantar, A.A., Moral, P.D., Legrand, P., Bouvry, P., Coello Coello, C.A., Eds.; Advances in Intelligent Systems and Computing; Springer: New York, NY, USA, 2013; Volume 227, pp. 207–223. [Google Scholar]
  24. Dellnitz, M.; Hohmann, A. A subdivision algorithm for the computation of unstable manifolds and global attractors. Numer. Math. 1997, 75, 293–317. [Google Scholar] [CrossRef]
  25. Qin, Z.Q.; Xiong, F.R.; Hernández, C.; Fernandez, J.; Ding, Q.; Schütze, O.; Sun, J.Q. Multi-objective optimal design of sliding mode control with parallel simple cell mapping method. J. Vib. Control 2017, 23, 46–54. [Google Scholar] [CrossRef]
  26. Askarirobati, G.H.; Borzabadi, A.H.; Heydari, A. Pareto-optimal Solutions for multiobjective optimal control problems using hybrid IWO/PSO algorithm. Glob. Anal. Discrete Math. 2019, 4, 41–60. [Google Scholar]
  27. Askarirobati, G.H.; Borzabadi, A.H.; Heydari, A. Solving multiobjective optimal control problems using an improved scalarization method. IMA J. Math. Control Inf. 2020, 37, 1524–1547. [Google Scholar] [CrossRef]
  28. Pareto, V. Manuale di Economia Politica; Societa Editrice Libraria: Milan, Italy, 1906. [Google Scholar]
  29. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  30. Srinivas, N.; Deb, K. Multi-objective function optimization using nondominated sorting genetic algorithms. Evol. Comput. 1995, 2, 221–248. [Google Scholar] [CrossRef]
  31. Schütze, O.; Vasile, M.; Coello Coello, C.A. Computing the set of epsilon-efficient solutions in multiobjective space mission design. J. Aerosp. Comput. Inf. Commun. 2011, 8, 53–70. [Google Scholar] [CrossRef]
  32. Bursal, F.H.; Hsu, C.S. Application of a cell-mapping method to optimal control problems. Int. J. Control 1989, 49, 1505–1522. [Google Scholar] [CrossRef]
  33. Dellnitz, M.; Schütze, O.; Hestermeyer, T. Covering Pareto sets by multilevel subdivision techniques. J. Optim. Theory Appl. 2005, 124, 113–136. [Google Scholar] [CrossRef]
  34. Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms; Wiley: New York, NY, USA, 2001. [Google Scholar]
  35. Fonseca, C.M.; Fleming, P.J. An overview of evolutionary algorithms in multiobjective optimization. Evol. Comput. 1995, 3, 1–16. [Google Scholar] [CrossRef]
  36. Bianchi, M. Adaptive Modellbasierte Prädiktive Regelung Einer Kleinwärmepumpen Anlage. Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2006. [Google Scholar]
  37. Ohno, H.; Nakanishi, E.; Takamatsu, T. Optimal control of a semi-batch fermentation. Biotechnol. Bioeng. 1976, 18, 847–864. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The flowchart of the hybrid method.
Figure 2. The Pareto frontier obtained by all three methods for Fonseca's problem.
Figure 3. The Pareto frontier obtained by all three methods for Deb's problem.
Figure 4. The Pareto frontier obtained by all three methods for the home heating system.
Figure 5. Optimal control profile for the home heating system.
Figure 6. The Pareto frontier obtained by all three methods for the fed-batch bioreactor.
Figure 7. Optimal control profile for the fed-batch bioreactor.
Table 1. The values of average CPU time, γ, and Δ measures for Fonseca's problem.

                        NSGA-II     SCM             Hybrid
  Iterations            20          1               20
  Divisions             N/A         15 × 15 × 15    10 × 10 × 10
  Sub-Div               N/A         3 × 3 × 3       3 × 3 × 3
  GA generation         40          N/A             10
  GA population         100         N/A             50
  Crossover             0.8         N/A             0.8
  Mutation              0.2         N/A             0.2
  Function evaluation   4000        4237            4161
  CPU time
    ave                 9.912       7.133           7.254
    std                 2.57        —               2.49
  Convergence rate γ
    ave                 0.050       0.048           0.026
    std                 0.003       —               0.004
  Diversity Δ
    ave                 0.143       0.194           0.102
    std                 0.04        —               0.02
Table 2. The values of average CPU time, γ, and Δ measures for Deb's problem.

                        NSGA-II     SCM             Hybrid
  Iterations            20          1               20
  Divisions             N/A         60 × 60         50 × 50
  Sub-Div               N/A         3 × 3           4 × 4
  GA generation         50          N/A             10
  GA population         100         N/A             50
  Crossover             0.8         N/A             0.8
  Mutation              0.2         N/A             0.2
  Function evaluation   5000        5242            5107
  CPU time (s)
    ave                 13.458      14.327          15.416
    std                 2.05        —               1.74
  Convergence rate γ
    ave                 0.064       0.053           0.026
    std                 0.005       —               0.004
  Diversity Δ
    ave                 0.039       0.094           0.023
    std                 0.002       —               0.001
Table 3. The value of parameters for the home heating system.

  Parameter   Value   Unit
  V_H         1.28    m³
  κ_WR        1160    W/K
  κ_G         260     W/K
  τ_G         240     s
Table 4. The values of average CPU time, γ, and Δ measures for the home heating system.

                        NSGA-II     SCM             Hybrid
  Iterations            20          1               20
  Divisions             N/A         40 × 40         30 × 30
  Sub-Div               N/A         3 × 3           4 × 4
  GA generation         40          N/A             10
  GA population         100         N/A             50
  Crossover             0.8         N/A             0.8
  Mutation              0.2         N/A             0.2
  Function evaluation   4000        4331            4109
  CPU time
    ave                 29.2        16.7            33.4
    std                 0.321       —               0.354
  Convergence rate γ
    ave                 0.038       0.042           0.024
    std                 0.003       —               0.003
  Diversity Δ
    ave                 0.021       0.072           0.017
    std                 0.004       —               1.2 × 10⁻⁴
Table 5. The value of parameters for the fed-batch bioreactor.

  Parameter   Value             Unit
  μ           0.125 C_{S,F}     1/h
  σ           μ/0.135           g/(g·h)
  π           138 μ² + 134 μ    g/(g·h)
Table 6. The values of average CPU time, γ, and Δ measures for the fed-batch bioreactor.

                        NSGA-II     SCM                 Hybrid
  Iterations            20          1                   20
  Divisions             N/A         20 × 20 × 20 × 20   15 × 15 × 15 × 15
  Sub-Div               N/A         3 × 3 × 3 × 3       3 × 3 × 3 × 3
  GA generation         40          N/A                 10
  GA population         100         N/A                 50
  Crossover             0.8         N/A                 0.8
  Mutation              0.2         N/A                 0.2
  Function evaluation   4000        4331                4109
  CPU time
    ave                 39.6        28.3                26.1
    std                 4.57        —                   3.96
  Convergence rate γ
    ave                 0.029       0.043               0.017
    std                 0.007       —                   0.009
  Diversity Δ
    ave                 0.011       0.062               0.008
    std                 0.001       —                   1.3 × 10⁻⁴
Table 7. Comparison of the hybrid, NSGA-II, and SCM methods using the Wilcoxon test.

  Metric        Methods               Statistic   p-Value   Result
  CPU time      Hybrid vs. NSGA-II    W = 4       0.082     Not significant
                Hybrid vs. SCM        W = 3       0.096     Not significant
  Convergence   Hybrid vs. NSGA-II    W = 0       0.018     Significant difference
                Hybrid vs. SCM        W = 0       0.011     Significant difference
  Diversity     Hybrid vs. NSGA-II    W = 0       0.024     Significant difference
                Hybrid vs. SCM        W = 0       0.006     Significant difference
