1. Introduction
Structures with viscous boundaries have been applied to diverse areas for vibration reduction [1], sound absorption [2], and boundary control [3]. One recent example is the railway bridge design for high-speed trains, where the soil interacting with the bridge has been modeled as mass–damper–spring terminations of the structure [4]. The best possible design of structures has always been a pursuit of engineers. An optimal structural design must usually accommodate multiple objectives, such as the settling time of vibrations, the response amplitude, and the shaping of the frequency response, leading to multi-objective optimization problems (MOPs). This paper presents a study of the multi-objective optimal design of a one-dimensional elastic rod with a mass–damper–spring termination.
The multi-objective nature of the optimization problem leads to a set of optimal solutions called the Pareto set, making set-oriented methods such as simple cell mapping (SCM) [5] suitable for solving such problems. The cell mapping method was initially developed by Hsu [6] for investigating the global behavior of nonlinear dynamical systems and was later extended by Sun and his co-workers [7,8,9] to MOPs. The method seeks optimal solutions by constructing cell mappings based on the local dominance relation of cells in the discretized design space until the optimal solutions are reached. Although the method is effective for low-dimensional problems, it suffers from the curse of dimensionality for high-dimensional problems because the search space grows exponentially with the number of dimensions.
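The exponential growth mentioned above is easy to quantify: with $n$ cells per design dimension, a full cell-space search must evaluate $n^d$ cells in $d$ dimensions. A minimal illustration (the per-dimension count of 20 is chosen here purely for illustration):

```python
# Growth of the cell space with dimension: n cells per axis gives n**d cells total.
n = 20  # cells per design dimension (illustrative value, not from the paper)
for d in range(1, 7):
    print(f"d = {d}: {n**d:,} cells")
```

Already at six design dimensions, the full cell space contains 64 million cells, which motivates restricting the SCM search to domains outlined by a GA.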
For MOPs of relatively high dimension, evolutionary algorithms such as the genetic algorithm (GA) [10], the immune algorithm [11], particle swarm optimization (PSO) [12], and ant colony optimization [13] are the mainstream methods. Evolutionary algorithms are stochastic methods that mimic the biological evolutionary process using evolution laws defined based on the Pareto dominance of fitness functions. Such methods can escape local optima and rapidly discover the domains containing the solutions. However, the results of evolutionary algorithms can be sensitive to the selection of the hyperparameters.
Recently, Sun and colleagues [5,14] proposed a hybrid method that combines NSGA-II and simple cell mapping (SCM). The method begins with NSGA-II, which runs for several generations to generate a rough set outlining the domains that contain the optimal solutions. Starting from the rough set, SCM performs a local recovery to complete the branches of the Pareto set through iterative refinement of the design space. With the power of NSGA-II, the search domain of the simple cell mapping method is substantially reduced, making it possible to apply SCM to high-dimensional problems. On the other hand, the SCM method complements the GA: obtaining outlined optimal domains with the GA is not very sensitive to the selection of the hyperparameters and is much easier than obtaining detailed Pareto optimal solutions with the GA, which reduces the burden of parameter tuning. This paper presents a new case study of MOPs solved by the hybrid GA-SCM method. For more discussion of the advantages of the GA-SCM method and a comparison with other methods, the reader is referred to [5] and the references therein.
To accelerate MOP algorithms for structural design, a fast and accurate solver that can predict the structural response under external loading is needed. Traditional methods such as the finite-element method for calculating structural response can incur a considerable computational load. However, obtaining such a solver for structures with viscous terminations is not an easy task, because viscous boundary conditions lead to non-self-adjoint boundary value problems that cannot be solved by the traditional method of eigenvalue expansion. To address this issue, several analytical methods have been developed. Hull et al. [15] presented a method that applies modal expansion in an augmented spatial interval where orthogonal eigenmodes exist. Jayachandran and Sun [16] transformed the problem into a self-adjoint boundary value problem in Hilbert space. Oliveto et al. [17] proposed a complex modal expansion method, which requires formulating new orthogonality conditions. Jovannovic [18] formulated the steady-state solution in the form of Fourier series in the state space by reconstructing the differential operator of the equations of motion. Recently, Xing and Sun [19] applied a particular solution method to study the impulsive response of a 1D elastic rod subject to a mass–damper–spring termination.
In this study, we continue the effort in [19] to optimize the viscous termination of a 1D elastic rod under impulsive loading using the GA-SCM method. The solution of this problem has many potential applications in structural and acoustic design. The dynamic response of the rod is predicted by the particular solution method. First, we define the multi-objective optimization problem, followed by the introduction of the GA-SCM hybrid method. Then, we formulate the impulse response of the structural problem using the particular solution method and introduce the multi-objective functions for the structural optimization problem. Finally, we demonstrate the effectiveness of the GA-SCM method through a case study.
2. Multi-Objective Optimization
A continuous multi-objective optimization problem (MOP) can be defined as

$$\min_{\mathbf{x}\in Q}\mathbf{F}\left(\mathbf{x}\right),\qquad(1)$$

where $\mathbf{x}$ is a variable of the design space and ${g}_{i}$ and ${h}_{j}$ are the design constraints. $\mathbf{F}$ is a map comprised of the objective functions ${f}_{i}$ ($i=1,2,\cdots ,k$), i.e.,

$$\mathbf{F}\left(\mathbf{x}\right)={\left[{f}_{1}\left(\mathbf{x}\right),{f}_{2}\left(\mathbf{x}\right),\cdots ,{f}_{k}\left(\mathbf{x}\right)\right]}^{T},$$

where ${f}_{i}:Q\to \mathcal{R}$. Herein, $Q$ is the feasible set represented by

$$Q=\left\{\mathbf{x}:{g}_{i}\left(\mathbf{x}\right)\le 0,\; {h}_{j}\left(\mathbf{x}\right)=0\right\}.$$

The optimal solution of the multi-objective problem is defined in the sense of Pareto optimality, which requires the introduction of the following definitions.
Definition 1 (Dominance relation [5]).

(a) A vector $\mathbf{y}\in Q$ is called strictly dominated (or simply dominated) by a vector $\mathbf{x}\in Q$ ($\mathbf{x}\prec \mathbf{y}$) if

$$\mathbf{F}\left(\mathbf{x}\right){\le}_{p}\mathbf{F}\left(\mathbf{y}\right)\ \text{and}\ \mathbf{F}\left(\mathbf{x}\right)\ne \mathbf{F}\left(\mathbf{y}\right),$$

where ${\le}_{p}$ is an elementwise less-than-or-equal-to relation.

(b) A vector $\mathbf{y}\in Q$ is called weakly dominated by a vector $\mathbf{x}\in Q$ ($\mathbf{x}\preceq \mathbf{y}$) if $\mathbf{F}\left(\mathbf{x}\right){\le}_{p}\mathbf{F}\left(\mathbf{y}\right)$.
The dominance relation defines a "good" solution in the sense of Pareto optimality. It is a strong relation that can nevertheless lead to many optimal solutions, because two solutions are considered equally "good" whenever neither dominates the other, i.e., when each satisfies the elementwise inequality relations only partially. To define the sets of optimal solutions and their objective function values, we introduce the Pareto set and the Pareto front.
Definition 2 (Pareto point, Pareto set, Pareto front [5]).

(a) A point $\mathbf{x}\in Q$ is called Pareto optimal or a Pareto point of (1) if there is no $\mathbf{y}\in Q$ that dominates $\mathbf{x}$.

(b) A point $\mathbf{x}\in Q$ is called locally (Pareto) optimal or a local Pareto point of (1) if there exists a neighborhood ${N}_{\mathbf{x}}$ of $\mathbf{x}$ such that there is no $\mathbf{y}\in Q\cap {N}_{\mathbf{x}}$ that dominates $\mathbf{x}$.

(c) A point $\mathbf{x}\in Q$ is called a weak Pareto point or weakly optimal if there exists no $\mathbf{y}\in Q$ such that $\mathbf{F}\left(\mathbf{y}\right){<}_{p}\mathbf{F}\left(\mathbf{x}\right)$.

(d) The set of all Pareto optimal solutions is called the Pareto set, i.e.,

$$\mathcal{P}:=\left\{\mathbf{x}\in Q:\text{there is no }\mathbf{y}\in Q\text{ such that }\mathbf{y}\prec \mathbf{x}\right\}.$$

(e) The image $\mathbf{F}\left(\mathcal{P}\right)$ of $\mathcal{P}$ is called the Pareto front.
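For a finite candidate set, Definitions 1 and 2 translate directly into a brute-force dominance check. A minimal Python sketch (the function names and the example objective $\mathbf{F}(x) = (x^2, (x-2)^2)$, whose Pareto set is the interval $[0, 2]$, are ours for illustration):

```python
def dominates(fx, fy):
    """x strictly dominates y: F(x) <=_p F(y) elementwise and F(x) != F(y)."""
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def pareto_set(points, F):
    """Return the points not strictly dominated by any other point (Definition 2a)."""
    values = [F(p) for p in points]
    return [p for p, fp in zip(points, values)
            if not any(dominates(fq, fp) for fq in values)]

# Example: minimize F(x) = (x^2, (x - 2)^2) over a few candidates.
F = lambda x: (x**2, (x - 2)**2)
xs = [-1.0, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0]
print(pareto_set(xs, F))  # -> [0.0, 0.5, 1.0, 1.5, 2.0]
```

Note that `dominates(fp, fp)` is always `False`, so a point never disqualifies itself.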
3. GA-SCM Hybrid Method
We apply a hybrid method combining a genetic algorithm (GA) and the cell mapping method [14] to solve an MOP with the multi-objective performance indices defined in Section 4. The hybrid method starts with a genetic algorithm (NSGA-II) that generates a rough Pareto set in the design space, which is then used by a cell-mapping-based recovery method to seek a complete branch of the Pareto set through iterative refinement of the cellular space of the design parameters, which will be defined in Section 5. The pseudocode of the GA-SCM method is listed in Algorithm 1, and the pseudocode for recovering the Pareto optimal solutions is listed in Algorithm 2.
As shown in Algorithm 2, the recovery process first discretizes the design space and then iterates through the elements of the rough Pareto set from the GA (or from the previous cell partition), performing a one-step simple cell mapping to search for local Pareto points. If a cell is mapped to itself (i.e., a local sink is found), the cell is pushed into the candidate set, and an operator then gathers nearby solutions into the set to be visited (${S}_{\mathrm{tovisit}}$) as long as they dominate some elements of the Pareto set ${\mathcal{P}}_{s}$. Otherwise, the destination cell of the cell mapping is pushed to ${S}_{\mathrm{tovisit}}$. The same iterative procedure is then performed on the set ${S}_{\mathrm{tovisit}}$ until no new cells can be brought into it. Finally, a dominance check is carried out to remove dominated points from the Pareto set. More details on the method can be found in [5].
Algorithm 1 GA-SCM algorithm.
Input: Design space $Q$, cell space partition $N$, refinement partition $sub$, GA population size $n$, objective functions $\mathbf{F}$, refinement number $k$
Output: Pareto set ${\mathcal{P}}_{s}$, Pareto front ${\mathcal{P}}_{f}$
1: Initialization: $i\leftarrow 1$; ${S}_{r}\leftarrow$ GA($n$, $Q$) {find a rough candidate set using the GA}
2: ${S}_{c}\leftarrow$ cell creation(${S}_{r}$, $\mathbf{F}$)
3: while $i\le k$ do {seek the Pareto set and front using SCM-based local recovery}
4: ${\mathcal{P}}_{s}$, ${\mathcal{P}}_{f}\leftarrow$ recover(${S}_{c}$, $\mathbf{F}$, $N$, $Q$)
5: ${S}_{c}\leftarrow$ refine(${\mathcal{P}}_{s}$, $N$, $sub$)
6: $N\leftarrow N\times sub$ {refine the cell space}
7: $i\leftarrow i+1$
8: end while
Algorithm 2 SCM-based recovery algorithm.
Input: Rough Pareto set ${\mathcal{P}}_{s}$, rough Pareto front ${\mathcal{P}}_{f}$, objective functions $\mathbf{F}$, cell space partition $N$, design space $Q$, max iteration $n$
Output: Pareto set ${\mathcal{P}}_{s}$, Pareto front ${\mathcal{P}}_{f}$ (under the cell space partition $N$)
1: Initialization: discretize the design space $Q$ based on the cell space partition $N$
2: ${S}_{\mathrm{visiting}}$, ${S}_{\mathrm{visited}}\leftarrow {\mathcal{P}}_{s}$; ${S}_{c}\leftarrow \varnothing$ {${S}_{c}$ stores candidate solutions}
3: while ${S}_{\mathrm{visiting}}\ne \varnothing$ do
4: ${S}_{\mathrm{tovisit}}\leftarrow \varnothing$
5: for $q\in {S}_{\mathrm{visiting}}$ do
6: ${C}_{d}\leftarrow$ simple cell mapping($q$, ${S}_{\mathrm{visited}}$)
7: if ${C}_{d}\ne q$ and ${C}_{d}\notin {\mathcal{P}}_{s}$ then
8: ${S}_{\mathrm{tovisit}}\leftarrow {S}_{\mathrm{tovisit}}\cup \left\{{C}_{d}\right\}$
9: else
10: ${S}_{c}\leftarrow {S}_{c}\cup \left\{{C}_{d}\right\}$
11: ${S}_{\mathrm{tovisit}}\leftarrow {S}_{\mathrm{tovisit}}\cup \left\{\mathbf{x}\mid \mathbf{x}\in neighbor\left(q\right)\ \text{and}\ \mathbf{x}\prec \mathbf{y}\ \text{for some}\ \mathbf{y}\in {\mathcal{P}}_{s}\right\}$ {collect neighbors that dominate some element(s) of ${\mathcal{P}}_{s}$}
12: end if
13: end for
14: ${S}_{\mathrm{visiting}}\leftarrow {S}_{\mathrm{tovisit}}$
15: ${\mathcal{P}}_{s}\leftarrow {\mathcal{P}}_{s}\cup {S}_{c}$; ${\mathcal{P}}_{f}\leftarrow {\mathcal{P}}_{f}\cup \mathbf{F}\left({S}_{c}\right)$
16: end while
17: ${\mathcal{P}}_{s},{\mathcal{P}}_{f}\leftarrow$ dominance check(${\mathcal{P}}_{s}$, ${\mathcal{P}}_{f}$)
The details of the one-step simple cell mapping algorithm are listed in Algorithm 3. The method finds a local optimal solution by checking the dominance relation between a cell and its neighbors. The destination cell is defined as the most distant neighboring cell that dominates the source cell.
Algorithm 3 Simple cell mapping algorithm.
Input: Objective functions $\mathbf{F}$, cell ${C}_{s}$, visited cell set ${S}_{\mathrm{visited}}$
Output: Destination cell ${C}_{d}$, visited cell set ${S}_{\mathrm{visited}}$
1: ${S}_{\mathrm{nbr}}\leftarrow neighbor\left({C}_{s}\right)$
2: for $N$ in ${S}_{\mathrm{nbr}}$ do
3: if $N\prec {C}_{s}$ and the constraints are satisfied then {$\mathbf{F}\left(N\right)$ can be fetched directly from the visited set if $N$ has been visited}
4: Store $N$
5: ${S}_{\mathrm{visited}}\leftarrow {S}_{\mathrm{visited}}\cup \left\{N\right\}$
6: end if
7: end for
8: ${C}_{d}\leftarrow \arg\max {\parallel {q}_{s}-{q}_{nbr}\parallel}_{2}$ {${q}_{s}$ and ${q}_{nbr}$ are the cell centers of ${C}_{s}$ and of the stored neighbors in ${S}_{\mathrm{nbr}}$}
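A one-step simple cell mapping in the spirit of Algorithm 3 can be sketched as follows. Under the dominance relation of Definition 1, a cell is mapped to the most distant neighboring cell that dominates it, or to itself when no neighbor does (a self-mapped cell is the local sink sought in Algorithm 2). All names and the example objective are ours:

```python
import itertools
import math

def dominates(fx, fy):
    """x strictly dominates y under the elementwise relation of Definition 1."""
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def neighbors(cell):
    """Immediate neighboring cells (Moore neighborhood) of an integer cell index."""
    d = len(cell)
    for step in itertools.product((-1, 0, 1), repeat=d):
        if any(step):
            yield tuple(c + s for c, s in zip(cell, step))

def scm_step(cell, F, in_space):
    """Map `cell` to its most distant dominating neighbor, or to itself (a sink)."""
    fc = F(cell)
    best, best_dist = cell, 0.0
    for nbr in neighbors(cell):
        if in_space(nbr) and dominates(F(nbr), fc):
            dist = math.dist(cell, nbr)  # Euclidean distance between cell indices
            if dist > best_dist:
                best, best_dist = nbr, dist
    return best

# Example: F((i, j)) = (i^2, j^2) on a nonnegative grid; cells map toward the origin.
F = lambda c: (c[0]**2, c[1]**2)
in_space = lambda c: all(0 <= v <= 5 for v in c)
print(scm_step((2, 2), F, in_space))  # -> (1, 1)
print(scm_step((0, 0), F, in_space))  # -> (0, 0), a local sink
```

Iterating `scm_step` until a cell maps to itself reproduces the sink-finding behavior used by the recovery loop.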
Since the numerical computation of the impulse response of the rod is the most time-consuming subroutine in this problem, we record all visited cells using a dictionary, whose key is the cell index and whose values are the multi-objective function values. In this way, the algorithm can look up values in the dictionary with time complexity $O\left(1\right)$, eliminating repeated computations for cells that have already been visited. In addition, dictionary keys are unique, so pushing a visited cell into the dictionary automatically replaces any repeated entry. Therefore, our implementation, unlike that in [14], does not require combining repeated cells in the visited set.
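The memoization described above can be sketched as a thin wrapper around the objective evaluation; the dictionary key is the (hashable) cell index, so repeated lookups cost $O(1)$ on average and re-inserting a visited cell simply overwrites the existing entry. The class and the toy objective below are ours, standing in for the rod solver:

```python
class MemoizedObjectives:
    """Cache multi-objective evaluations keyed by cell index."""
    def __init__(self, F):
        self.F = F        # expensive objective evaluation (e.g., the rod solver)
        self.cache = {}   # cell index (tuple) -> tuple of objective values
        self.calls = 0    # number of actual (non-cached) evaluations

    def __call__(self, cell):
        if cell not in self.cache:   # average O(1) membership test
            self.calls += 1
            self.cache[cell] = self.F(cell)
        return self.cache[cell]

# Usage: the second evaluation of the same cell is a pure dictionary lookup.
F = MemoizedObjectives(lambda c: (sum(c), max(c)))
F((1, 2, 3)); F((1, 2, 3))
print(F.calls)  # -> 1
```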
5. A Case Study
We considered an elastic rod with Young's modulus $E=10$, density $\rho =10$, length $L=2$, cross-sectional area $A=0.1$, and excitation force magnitude ${f}_{0}=1.0$. The design space was chosen as

subject to a constraint

where $\mathbf{x}$ is the tuple ($m,c,k$). We calculated the rod response during the first 15 s under the impact loading through the numerical integration of Equation (14), because the maximal displacement appears soon after impact, and the impact wave dominates the terminal response when it reaches the right end during this time period. Thirty elastic modes were adopted, which, based on our observation, are sufficient to approximate the values of the performance indices within the design space.
We first discovered a rough Pareto set using the NSGA-II algorithm with a population size of 1000, 10 generations, and a mutation rate of 0.05. Other configurations of NSGA-II are listed in Table 1. With the numerical predictor, the NSGA-II algorithm completed in 66 s on a desktop with an Intel Core i7 CPU, producing a rough Pareto set as the input to the SCM method. In the SCM method, the $m$–$c$–$k$ design space is discretized into a $10\times 20\times 20$ cellular grid, as shown in Table 2. The elements of the Pareto set are cells in the design space. The local search and recovery algorithm was performed twice, the first time with the initial grid and the second time with a refined grid that subdivides each cell of the initial grid by three along each dimension. We stopped the program after this refinement because the desired resolution of 0.06 × 0.08 × 0.166 in the parameter space was achieved. The computational time was 36 s with the initial grid and 2000 s with the refined grid.
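The refinement bookkeeping above is simple arithmetic: each refinement multiplies the per-axis partition by the subdivision factor, and the cell resolution along an axis is the axis range divided by the partition. A sketch (the axis ranges below are illustrative assumptions, not necessarily the ranges used in this paper):

```python
def refine(partition, sub):
    """One refinement step: multiply each axis partition by the subdivision factor."""
    return tuple(n * sub for n in partition)

def resolution(ranges, partition):
    """Cell edge lengths: axis range divided by the number of cells along that axis."""
    return tuple(r / n for r, n in zip(ranges, partition))

N = (10, 20, 20)           # initial m-c-k partition from the case study
N1 = refine(N, 3)          # (30, 60, 60) after one refinement
ranges = (1.8, 4.8, 10.0)  # illustrative axis ranges (assumed, for demonstration)
print(N1, resolution(ranges, N1))
```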
There are 5392 cells in the Pareto set. The Pareto set and front of the mass–damper–spring termination are presented in Figure 2. Generally, either larger stiffness or larger damping leads to a better design. The majority of the optimal designs are achieved with either moderate or small mass. The Pareto front can be divided into three regions, labeled in Figure 2b. Region 1 minimizes the displacement at the cost of a long settling time and moderate damping performance. Region 2 balances the performance of the three objective functions. Region 3 achieves premium damping performance at the expense of a large displacement and a moderate settling time.
The optimal designs for each performance index are presented in Figure 3, Figure 4 and Figure 5. The corresponding design parameters, as well as the performance indices, are listed in Table 3.
5.1. Optimal Design: Minimal Settling Time
Figure 3 shows the optimal design for the settling time. The settling time of the total response is approximately 1200 s. While the performance index of the settling time is significantly smaller than this number, it still correctly reflects the trend of the settling time in comparison with other designs, such as those in Figure 4 and Figure 5. The large mass in this design increases the portion of the energy transmitted to the mass after impact, which can then be dissipated more effectively through the heavily damped boundary condition.
5.2. Optimal Design: Maximal Decay of Impact Wave
The time response of the optimal design maximizing the decay of the impact wave is presented in Figure 4. The impact wave reaches the right end when $t=2,6,10,\dots$. The suppression of the impact wave is evident. However, this comes at the cost of an at least five-times-longer settling time and a slight increase in the maximal displacement. Compared with the other two designs, this design considerably reduces the damping coefficient. This could be attributed to the velocity change of the mounted mass in response to the impact wave hitting the terminal. Such a change immediately alters the viscous force produced by the damper, which in turn can lead to a higher strain at the terminal. A small damping coefficient can reduce the magnitude of the reflected impact wave.
5.3. Optimal Design: Minimal Peak Displacement at Termination
The optimal design for the terminal peak displacement in Figure 5 has the same stiffness as the design in Figure 4, but a much smaller mass and larger damping. This makes sense because the terminal displacement is identical to the displacement of the mounted mass. Using a small inertia together with large stiffness and damping, one can effectively reduce the maximal terminal displacement. However, a smaller inertia also means that less energy is distributed to the mass. Because the energy can only be dissipated through the damper attached to the mass, this choice can also significantly increase the settling time.
6. Conclusions
In this paper, a multi-objective optimization problem for the terminal response of an elastic rod with a viscous boundary condition was formulated. The terminal response of the rod was predicted by a computationally efficient and accurate particular solution method. The Pareto set and front of the MOP were obtained with the GA-SCM hybrid method. The proposed objective functions effectively capture the dynamic response of the structure. The optimal design strategies were presented and analyzed. The amount of energy distributed to the terminal mass after impact was found to be a significant factor in the optimization of the terminal design.
The computational load of this work was due to the repeated computations of the impulse response with different parameter sets. Although the solver adopted in this paper can be computationally more efficient and accurate than finite-element methods, it still requires a sufficient number of modes to capture the non-smooth impulsive response when highly accurate results are desired. The computational load can be further reduced using a surrogate model (metamodel) [20]. One future direction is to use neural operators such as DeepONet [21] to approximate the impulsive response, with the neural operator trained on data from the adopted solver.