Article

Design Improvement for Complex Systems with Uncertainty

1 School of Statistics, Capital University of Economics and Business, Beijing 100070, China
2 Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100864, China
3 School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
4 School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100811, China
* Authors to whom correspondence should be addressed.
Mathematics 2021, 9(11), 1173; https://doi.org/10.3390/math9111173
Submission received: 29 April 2021 / Revised: 12 May 2021 / Accepted: 20 May 2021 / Published: 22 May 2021
(This article belongs to the Section Engineering Mathematics)

Abstract

The uncertainty of an engineering system increases with its complexity; tolerance to this uncertainty therefore becomes important. Even under large variations of the design parameters, the system performance should achieve the design goal in the design phase. Engineers are thus interested in how to turn a bad design into a good one with the least effort in the presence of uncertainty. To improve a bad design, we classify design parameters into key parameters and non-key parameters based on engineering knowledge, and then seek the maximum solution hyper-box that already includes the non-key parameters of the bad design. The solution hyper-box, on which all design points are good (i.e., they achieve the design goal), provides a target interval for each parameter. The bad design can then be turned into a good one by moving only its key parameters into their target intervals. In this paper, the PSO-Divide-Best method is proposed to seek the maximum solution hyper-box that complies with the constraints. The proposed approach has a high probability of finding the globally maximum solution hyper-box satisfying the constraints, and it can be used in complex systems with black-box performance functions. Finally, case studies show that the proposed approach outperforms the EPCP and IA-CES methods in the literature.

1. Introduction

With the rapid development of technology in recent years, engineering systems have become increasingly complex. This growing complexity is a new source of uncertainty: some design parameters may change over the course of development, or they cannot yet be specified exactly [1,2,3]. The uncertainty appears as a deviation between the desired and realized parameter settings, which may lead to design failure; it is therefore essential to improve designs under uncertainty for complex systems.
Design points that achieve their design goals are called good; otherwise, they are called bad. Given an existing bad design point, engineers are interested in how to turn it into a good design with the least effort, which can be achieved by modifying as few parameters of the existing bad design as possible.
Traditional optimization [4], sensitivity analysis [5,6], robust design optimization [7,8,9], and reliability-based design optimization [10,11,12] are classical approaches to improving designs. Traditional optimization seeks an optimum design point in the design space and does not take uncertainty into account. Sensitivity analysis computes the effect of the variability of the input design parameters on the variability of the output performance; unfortunately, it does not indicate how the parameters of a bad design need to be changed in order to turn it into a good design. Robust design optimization seeks a design point whose deterministic neighborhood exhibits small output variation. However, it only deals with the case where the variability of the design parameters is given; if that variability is not completely known, it has to be estimated, which is not always possible. Reliability-based design optimization determines optimal designs characterized by a low probability of failure, but it is established under the assumption that complete information on the design uncertainty is available.
Evolutionary algorithms are often used to solve design optimization problems in the presence of uncertainties [13,14,15,16]. The non-dominated sorting genetic algorithm II is combined with reliability-based optimization for handling uncertainty in design parameters in [13]. Six-sigma robust design optimization is solved by the many-objective decomposition-based evolutionary algorithm in [14]. Evolutionary algorithms have some specific advantages, which include the ability to self-adapt the search for optimum solutions on the fly, as well as the flexibility of the procedures.
In addition, swarm intelligence-based algorithms are frequently used in design optimization problems [17,18,19,20]. In [17], the uncertainty is directly incorporated into particle swarm optimization, and a deterministic Pareto front of optimal designs is found. In [18], ant colony optimization is combined with reliability-based design optimization to achieve the globally optimal design of a crane metallic structure under parametric uncertainty. Against the uncertainty of cost and traffic, a mathematical model is established in [19] using the robust optimization method, and the corresponding optimal design is obtained by an ant colony algorithm. Swarm intelligence-based algorithms often find an optimal solution once appropriate parameters are chosen and sufficient convergence is achieved.
The solution hyper-box on which all design points are good is expressed by independent target intervals for each design parameter. The widths of target intervals quantify the robustness to uncertainty. Furthermore, the solution hyper-box can help to improve a bad design. More precisely, a bad design can be turned into a good one by only moving its design parameters into their target intervals. To make it easy to reach the target intervals, the solution hyper-box should be as large as possible.
There are two categories of approaches in the literature that seek a solution hyper-box with the maximum volume. The first category is based on machine learning and data mining techniques [21,22,23]. A stochastic method combining online learning and query is presented in [21]. It probes a candidate solution hyper-box by stochastic sampling, readjusts its boundaries to remove bad designs, and explores design space that has not been probed before. Because the locations of good and bad designs are estimated by Monte Carlo sampling, the obtained solution hyper-box may not have the maximum size. The second category is based on analytical techniques [24,25,26]. The algorithm presented in [24] applies interval arithmetic within an iterative optimization scheme to check whether the candidate hyper-box is a solution hyper-box. However, interval arithmetic limits the applicability of the algorithm, as its accuracy depends on the problem; moreover, interval arithmetic cannot handle black-box performance functions. The algorithm proposed in [26] combines the DIRECT algorithm [27] with evolutionary algorithms, where the former is a checking technique to guarantee that the obtained hyper-box is indeed a solution hyper-box and the latter seeks the hyper-box of maximum size. Unfortunately, the DIRECT algorithm requires very high computational effort.
This paper aims to turn a bad design into a good one with comparatively little effort in the presence of uncertainty. To this end, instead of focusing on an optimal design point, an approach based on the optimization of the solution hyper-box is proposed. Specifically, the design parameters are first classified into key parameters and non-key ones based on engineering knowledge. Second, by adding proper constraints on the non-key parameters, we seek the maximum solution hyper-box that already includes the non-key parameters of the bad design; this hyper-box provides the target intervals for each design parameter. Finally, the bad design is turned into a good one by moving only its key parameters into their target intervals. Obviously, the wider the target interval, the stronger the robustness against unintended variations.
Therefore, we focus on seeking the maximum solution hyper-box that complies with the constraints. Fender et al. [28] extended the algorithm in [21] to account for constraints. However, the algorithm in [28] produces a hyper-box that may not have the largest size and may contain some bad designs, because the locations of good and bad designs are estimated by Monte Carlo sampling. In this paper, the proposed PSO-Divide-Best method combines particle swarm optimization (PSO) [29] with the Divide-the-Best algorithm in [30], where the former evolves the solution hyper-box towards larger volume and the latter guarantees that the obtained solution hyper-box includes only good designs. The PSO-Divide-Best method is established for black-box performance functions, so it is appropriate for most engineering problems.
The paper is organized as follows. The next section explains the motivation of this paper and gives a mathematical problem statement for optimizing the solution hyper-box with constraints. Section 3 introduces the Divide-the-Best algorithm and particle swarm optimization. Section 4 presents the proposed approach in detail. In Section 5, we apply the proposed approach to complex systems. In Section 6, we provide some concluding remarks.

2. Motivation and Problem Statement

2.1. Motivation

A voltage divider, shown in Figure 1, is considered. The terminal voltage is $V_{ab} = 10R/(R + R_a)$ and the input resistance is $R_{in} = R + R_a$.
Good designs fulfilling the design goals should satisfy:
$$4.5\,\mathrm{V} \le V_{ab} \le 5.5\,\mathrm{V}, \tag{1a}$$
$$80\,\Omega \le R_{in} \le 120\,\Omega. \tag{1b}$$
The complete solution space defined by Expression (1) is shown in Figure 2 as a grey region.
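The goodness condition can be checked directly in code. The following is a minimal sketch, assuming a 10 V source (consistent with $V_{ab} = 10R/(R+R_a)$); the function name is ours, not from the paper:

```python
def is_good_design(R, Ra):
    """Check the voltage-divider design goals in Expression (1)."""
    Vab = 10.0 * R / (R + Ra)   # terminal voltage for a 10 V source
    Rin = R + Ra                # input resistance
    return 4.5 <= Vab <= 5.5 and 80.0 <= Rin <= 120.0

print(is_good_design(50, 50))  # True: Vab = 5 V, Rin = 100 Ohm
print(is_good_design(43, 60))  # False: Vab ~ 4.17 V < 4.5 V
```

The second call is exactly the bad design discussed next: its input resistance is admissible, but the terminal voltage falls below the lower limit of (1a).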
Now, a design with $R = 43\,\Omega$ and $R_a = 60\,\Omega$ is considered. It violates Inequality (1a) and is therefore a bad design. To improve this bad design with the least effort, three scenarios are considered:
(a)
Figure 2a shows a solution hyper-box with the maximum volume. Both R and R a are changed so as to achieve the design goals.
(b)
Figure 2b shows a solution hyper-box that includes $R_a = 60\,\Omega$. To achieve the design goals, only R needs to be modified. The solution hyper-box is obtained by requiring a minimum safety margin of $\pm 2\,\Omega$ for $R_a$. This is essential because $R_a$ is subject to uncertainty and cannot be controlled exactly. The realized lower safety margin is larger than the required minimum safety margin, so more tolerance to uncertainty is provided.
(c)
Figure 2c shows a solution hyper-box that includes R = 43 Ω . To realize the design goals, only R a needs to be modified. The same minimum safety margin as in Scenario (b) is required.
The solution hyper-box of Scenario (a) is much larger than those of Scenarios (b) and (c); in this sense, it is the most robust one. However, Scenario (a) requires redesigning two design parameters, while Scenarios (b) and (c) each require redesigning only one. If there is already a bad design, Scenarios (b) and (c) are recommended, because they change only one parameter; otherwise, Scenario (a) is more appropriate, because it provides more tolerance to uncertainty. To choose between Scenarios (b) and (c), two different strategies are possible. If there is no information on which design parameter is easier to redesign, the scenario with the larger solution hyper-box is chosen, because it provides a larger target region that is easier to reach; this would be Scenario (b), with R as the key parameter. Alternatively, engineers can take advantage of their engineering knowledge and choose Scenario (c), where $R_a$ is regarded as the key parameter, if they know that $R_a$ is easier to redesign than R.
This procedure can be described as seeking the maximum solution hyper-box under the constraint that certain parameters must be included with a specified safety margin.

2.2. Problem Statement

Let $\mathbf{x} = (x_1, x_2, \ldots, x_m)$ represent the design point (or design), which is composed of m design parameters. The output performance at $\mathbf{x}$ is given by
$$y = f(\mathbf{x}), \tag{2}$$
where $f(\mathbf{x})$ is the performance function. The performance is sufficient if it satisfies the performance criterion
$$f(\mathbf{x}) \ge f_c, \tag{3}$$
where $f_c$ is a predefined threshold value.
Designs or design points are called good if their performance satisfies (3); otherwise, they are called bad. The lower and upper bounds of the design parameters, $\mathbf{x}^l$ and $\mathbf{x}^u$, define the continuous design space. The region consisting of all good design points is the complete solution space.
To obtain interval boundaries for each parameter that are independent of the other parameters, hyper-boxes are considered. The hyper-box $D = D(\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}})$ is expressed by
$$D(\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}}) \overset{\mathrm{def}}{=} [\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}}] = \{\mathbf{x} : x_i^{\mathrm{low}} \le x_i \le x_i^{\mathrm{up}},\ i = 1, \ldots, m\}, \tag{4}$$
where $\mathbf{x}^{\mathrm{up}} = (x_1^{\mathrm{up}}, \ldots, x_m^{\mathrm{up}})$ and $\mathbf{x}^{\mathrm{low}} = (x_1^{\mathrm{low}}, \ldots, x_m^{\mathrm{low}})$ are the upper and lower bounds of the hyper-box, respectively. Furthermore, let $\mathbf{x}^{\mathrm{bound}} = (x_1^{\mathrm{bound}}, \ldots, x_{2m}^{\mathrm{bound}})$ denote the bounds of the hyper-box, that is,
$$\mathbf{x}^{\mathrm{bound}} \overset{\mathrm{def}}{=} (\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}}) = (x_1^{\mathrm{low}}, \ldots, x_m^{\mathrm{low}}, x_1^{\mathrm{up}}, \ldots, x_m^{\mathrm{up}}). \tag{5}$$
We see that the first m components of $\mathbf{x}^{\mathrm{bound}}$ are the lower bound $\mathbf{x}^{\mathrm{low}}$ and the last m are the upper bound $\mathbf{x}^{\mathrm{up}}$. The volume of the hyper-box $D(\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}})$ is thus given by
$$u(\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}}) \overset{\mathrm{def}}{=} \prod_{i=1}^m \left(x_i^{\mathrm{up}} - x_i^{\mathrm{low}}\right). \tag{6}$$
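Definitions (5) and (6) translate directly into code. The sketch below packs the bounds into a single vector as in (5) and computes the volume as in (6); the function names are illustrative:

```python
def pack_bounds(x_low, x_up):
    """Bound vector x_bound = (x_low, x_up), as in Eq. (5)."""
    return list(x_low) + list(x_up)

def volume(x_low, x_up):
    """Hyper-box volume, Eq. (6): the product of edge lengths."""
    v = 1.0
    for lo, up in zip(x_low, x_up):
        v *= (up - lo)
    return v

print(pack_bounds([0, 1], [2, 3]))  # [0, 1, 2, 3]
print(volume([0, 1], [2, 3]))       # (2-0)*(3-1) = 4.0
```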
A hyper-box which only includes good design points is called a solution hyper-box.
A design parameter that can be modified easily, at low modification cost, is called a key parameter; a design parameter that is hard to modify is called a non-key parameter.
As mentioned in Section 2.1, to turn a bad design into a good one with the least effort, a solution hyper-box which already includes non-key parameters of the bad design is sought. Formally, the problem we focus on is formulated as the following constrained optimization problem:
$$\begin{aligned} \max_{\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}}} \quad & u(\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}}) \\ \text{subject to} \quad & \min_{\mathbf{x} \in D(\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}})} f(\mathbf{x}) \ge f_c, \\ & x_k^{\mathrm{low}} \le x_{ck}^{\mathrm{low}}, \quad x_k^{\mathrm{up}} \ge x_{ck}^{\mathrm{up}}, \quad k \in K, \\ & \mathbf{x}^l \le \mathbf{x}^{\mathrm{low}} \le \mathbf{x}^{\mathrm{up}} \le \mathbf{x}^u, \end{aligned} \tag{7}$$
where k indexes the dimension of a non-key parameter, K is the set of indices of the non-key parameters, and $x_{ck}^{\mathrm{low}}$ and $x_{ck}^{\mathrm{up}}$ are the required lower and upper limit values, respectively. The constraints $x_k^{\mathrm{low}} \le x_{ck}^{\mathrm{low}}$ and $x_k^{\mathrm{up}} \ge x_{ck}^{\mathrm{up}}$ ensure that each non-key parameter is included in the obtained hyper-box with a specified safety margin.
It can be seen from (7) that the aim of this paper is to seek the largest solution hyper-box satisfying the constraints. The proposed problem is fundamentally different from robust design optimization in that it seeks the intervals of a permissible design range, and the interval boundaries, rather than the design parameters themselves, are used as degrees of freedom.
If the explicit expressions of $f(\mathbf{x})$ and $\min_{\mathbf{x} \in D(\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}})} f(\mathbf{x})$ are analytically known, problem (7) can be solved by classical optimization methods. However, the performance function $f(\mathbf{x})$ is not analytically known in most engineering problems, so $\min_{\mathbf{x} \in D(\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}})} f(\mathbf{x})$ cannot be expressed explicitly in terms of $\mathbf{x}^{\mathrm{low}}$ and $\mathbf{x}^{\mathrm{up}}$. It is therefore necessary to propose an approach that targets black-box performance functions.

3. Preliminaries

3.1. Divide-the-Best Algorithm

The problem (7) in Section 2.2 involves the following box-constrained global optimization sub-problem:
$$\min_{\mathbf{x} \in D(\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}})} f(\mathbf{x}), \tag{8}$$
where $D(\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}}) = \{\mathbf{x} = (x_1, \ldots, x_m) : x_i^{\mathrm{low}} \le x_i \le x_i^{\mathrm{up}},\ i = 1, \ldots, m\}$ is an m-dimensional hyper-box (hyper-rectangle) and $f(\mathbf{x})$ is the performance function.
Numerous algorithms have been proposed (see, e.g., [27,30,31,32,33,34,35,36,37,38,39]) for solving problem (8) under the assumption that the performance function $f(\mathbf{x})$ satisfies the Lipschitz condition over the hyper-box with an unknown Lipschitz constant. The Divide-the-Best approach is the most well-known partitioning-based one. In particular, the Divide-the-Best method in [30] has very promising performance; it uses multiple estimates of the Lipschitz constant and an efficient diagonal partition. Therefore, the Divide-the-Best method in [30] is adopted in this paper to solve problem (8). Figure 3 shows the flow chart of this algorithm. Given a vector $q^k$ of method parameters, at each iteration k the admissible region $D(\mathbf{x}^{\mathrm{low}}, \mathbf{x}^{\mathrm{up}})$ is adaptively partitioned into a collection $\{D_i^k\}$ of finitely many robust subsets $D_i^k$.
More precisely, in Step 1, several possible Lipschitz constants, denoted by $\hat{L}$, are chosen from a set of values varying from zero to infinity. Then, the "merit" (called the characteristic) $C_i$ of each hyper-box $D_i^k(\mathbf{x}_i^{\mathrm{low}}, \mathbf{x}_i^{\mathrm{up}})$ is calculated by the formula $C_i = C_i(\hat{L}) = \frac{1}{2}\left(f(\mathbf{x}_i^{\mathrm{low}}) + f(\mathbf{x}_i^{\mathrm{up}}) - \hat{L}\,\|\mathbf{x}_i^{\mathrm{up}} - \mathbf{x}_i^{\mathrm{low}}\|\right)$ (see Step 2 in Figure 3). The hyper-box whose characteristic is the minimum is called the "promising hyper-box" $D_t^k$: the global minimizer is more likely to lie within it (see Step 3). Subsequently, new sample points are obtained from the old ones by adding and subtracting two-thirds of the length of the longest edge of the promising hyper-box, and the performance function is evaluated at the new sample points. Finally, in Step 4, the promising hyper-box is subdivided by an efficient diagonal partition strategy for the next iteration (see [30] for more details). Naturally, more than one promising hyper-box can be partitioned at every iteration. The stopping criterion is that the number of iterations reaches a pre-defined maximum. The evaluation of the performance function at a point is referred to as a trial.
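A stripped-down one-dimensional version of this scheme illustrates the mechanics: intervals are ranked by their characteristic, the most promising one is subdivided, and new endpoint evaluations refine the result. This sketch simplifies the method in [30] in two labeled ways: it fixes a single Lipschitz estimate (the paper uses several) and bisects at the midpoint rather than using the two-thirds diagonal scheme.

```python
def divide_the_best_1d(f, a, b, L_hat, iterations=50):
    """Simplified 1-D Divide-the-Best sketch: minimize f on [a, b] using a
    single Lipschitz estimate L_hat and midpoint bisection."""
    fvals = {a: f(a), b: f(b)}                # cached trials
    intervals = [(a, b)]

    def characteristic(iv):
        lo, hi = iv                           # lower bound on f over [lo, hi]
        return 0.5 * (fvals[lo] + fvals[hi] - L_hat * (hi - lo))

    for _ in range(iterations):
        best = min(intervals, key=characteristic)   # the "promising" interval
        intervals.remove(best)
        lo, hi = best
        mid = 0.5 * (lo + hi)
        fvals.setdefault(mid, f(mid))               # new trial point
        intervals += [(lo, mid), (mid, hi)]
    return min(fvals.values())                      # best trial value found

f_min = divide_the_best_1d(lambda x: (x - 0.3) ** 2, 0.0, 1.0, L_hat=2.0)
print(f_min)  # near 0, the global minimum at x = 0.3
```

The characteristic is exactly the Lipschitz lower bound on f over an interval, so the interval with the smallest characteristic is the one that could still hide the deepest minimum.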
To better demonstrate how the Divide-the-Best algorithm performs, its first three iterations on a two-dimensional function in [30] are shown in Figure 4.
The Divide-the-Best algorithm in [30] balances global and local search in a more sophisticated way and provides a faster convergence to the global minimizers of difficult multi-extremal black-box performance functions.

3.2. Particle Swarm Optimization (PSO)

Particle swarm optimization (PSO) is a population-based meta-heuristic technique [29,40]. The population consists of potential solutions, called particles, which are a metaphor for birds in a flock. The particles are randomly initialized and then fly freely across the multidimensional search space. In a social science context, a PSO system combines a social-only model and a cognition-only model [41]. The social-only component suggests that particles ignore their own experience and adjust their behavior according to the successful beliefs of particles in the neighborhood; the cognition-only component treats individuals as isolated beings. One advantage of PSO is that it often locates near-optimal solutions significantly more quickly than evolutionary optimization [42].
Each particle is equivalent to a candidate solution of problem (7). A particle moves according to an adjusted velocity, which is based on that particle's own experience and the experience of its companions. The velocity of the ith particle is modified according to the following equation:
$$\mathbf{v}_i^{t+1} = \omega^t \mathbf{v}_i^t + c_1 r_1 \left(\mathbf{pbest}_i^t - \mathbf{x}_i^{\mathrm{bound},t}\right) + c_2 r_2 \left(\mathbf{gbest}^t - \mathbf{x}_i^{\mathrm{bound},t}\right), \tag{9}$$
where t is the current iteration number, $\mathbf{v}_i^t$ is the velocity of the ith particle at iteration t, $\mathbf{x}_i^{\mathrm{bound},t}$ is the position of the ith particle at iteration t, $\mathbf{pbest}_i^t$ is the best position of particle i up to iteration t, $\mathbf{gbest}^t$ is the best position among all particles up to iteration t, $\omega^t = \omega_{\max} - \frac{\omega_{\max} - \omega_{\min}}{\mathrm{Iter}_{\max}}\, t$ is the inertia weight at iteration t, $\mathrm{Iter}_{\max}$ is the allowed maximum number of iterations, $c_1$ and $c_2$ are weight factors, and $r_1$ and $r_2$ are random numbers between 0 and 1. Naturally, according to Equation (5), $\mathbf{x}_i^{\mathrm{bound},t}$ represents the bounds of the ith hyper-box at iteration t.
It is worth mentioning that the second term of Equation (9) represents the cognitive part of PSO where the particle changes its velocity based on its own thinking and memory. The third term represents the social part of PSO where the particle changes its velocity based on the social-psychological adaptation of knowledge.
Each particle moves from its current position to the next one with the velocity modified by (9), using the following equation:
$$\mathbf{x}_i^{\mathrm{bound},t+1} = \mathbf{x}_i^{\mathrm{bound},t} + \mathbf{v}_i^{t+1}. \tag{10}$$
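In code, one PSO step per particle is just Equations (9) and (10) applied coordinate-wise. A minimal sketch in plain Python, with the default weight factors and inertia schedule used later in the case studies (function names are ours):

```python
import random

def inertia(t, iter_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight w^t from Eq. (9)."""
    return w_max - (w_max - w_min) * t / iter_max

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0):
    """One velocity/position update per Eqs. (9) and (10)."""
    r1, r2 = random.random(), random.random()
    v_new = [w * vd + c1 * r1 * (pb - xd) + c2 * r2 * (gb - xd)
             for xd, vd, pb, gb in zip(x, v, pbest, gbest)]
    x_new = [xd + vd for xd, vd in zip(x, v_new)]
    return x_new, v_new

# a particle at the origin pulled toward pbest = (1, 1) and gbest = (2, 2)
x, v = pso_step([0.0, 0.0], [0.1, -0.1], [1.0, 1.0], [2.0, 2.0],
                w=inertia(0, 2000))
```

Note that a particle sitting exactly at both its personal and the global best with zero velocity stays put, which is what makes converged swarms stable.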

4. The Proposed Particle Swarm Optimization Divide-the-Best Algorithm

The particle swarm optimization algorithm does not rely on mathematical properties of the problem, while the Divide-the-Best algorithm in [30] is an efficient algorithm for calculating the global minimum of a black-box performance function over a hyper-box. Therefore, an innovative approach combining the two is proposed to solve problem (7), referred to as the particle swarm optimization divide-the-best algorithm (PSO-Divide-Best).
Specifically, particle swarm optimization drives the evolution toward increasing the volume of hyper-box. The Divide-the-Best algorithm solves the optimization sub-problem (8), and thus ensures that the obtained hyper-box only includes good designs.
The PSO-Divide-Best algorithm is illustrated in Algorithm 1. Initially, it randomly generates a group of particles satisfying the constraints. During optimization, at iteration t, the velocity $\mathbf{v}_i^{t+1}$ and position $\mathbf{x}_i^{\mathrm{bound},t+1}$ of the ith particle are first updated according to Equations (9) and (10), respectively. Then, if the conditions $x_{ik}^{\mathrm{low}} \le x_{ck}^{\mathrm{low}}$ and $x_{ik}^{\mathrm{up}} \ge x_{ck}^{\mathrm{up}}$, $k \in K$, are not satisfied, the particle is re-updated. Otherwise, the value of $\min_{\mathbf{x} \in D(\mathbf{x}_i^{\mathrm{low},t+1}, \mathbf{x}_i^{\mathrm{up},t+1})} f(\mathbf{x})$ is evaluated by the Divide-the-Best algorithm in Figure 3 and denoted by $f_{\min,i}^{t+1}$. If the constraint $f_{\min,i}^{t+1} \ge f_c$ is met, the corresponding volume is calculated according to Equation (6); otherwise, the particle is re-updated. Next, $\mathbf{pbest}_i^{t+1}$ and $\mathbf{gbest}^{t+1}$ are updated.
Algorithm 1 The particle swarm optimization divide-the-best algorithm (PSO-Divide-Best)
Input:
The allowed maximum iteration number, Iter max ; The weight factors, c 1 and c 2 ; The initial and final weight parameters, ω max and ω min ; The number of particles, N; The current iteration number, t = 1 ;
Output:
The lower and upper bounds of the maximum solution hyper-box, x low and x up ;
1:
Initialization:
Initialize N particles randomly while satisfying constraints.
2:
Optimization:
2.1
Velocity and position updates while satisfying constraints:
for all  i = 1 , , N  do
2.1.1
The ith particle velocity v i t + 1 is updated according to Equation (9). If a particle violates the velocity limits, set its velocity equal to the limit.
2.1.2
The position of each particle x i bound , t + 1 is modified by Equation (10). If a particle violates its position limits in any dimension, set its position equal to the limit.
2.1.3
Set x i k low , t + 1 = min ( x i k bound , t + 1 , x i , k + m bound , t + 1 ) , x i k up , t + 1 = max ( x i k bound , t + 1 , x i , k + m bound , t + 1 ) , k = 1 , , m ; Set x i low , t + 1 = ( x i 1 low , t + 1 , , x i m low , t + 1 ) , x i up , t + 1 = ( x i 1 up , t + 1 , , x i m up , t + 1 ) .
2.1.4
If the constraints $x_{ik}^{\mathrm{low},t+1} \le x_{ck}^{\mathrm{low}}$, $x_{ik}^{\mathrm{up},t+1} \ge x_{ck}^{\mathrm{up}}$, $k \in K$, are not satisfied, go to Step 2.1.1.
2.1.5
Calculate the value of min x D ( x i low , t + 1 , x i up , t + 1 ) f ( x ) by algorithm in Figure 3 with parameter setting K max = 100 ln 2 m , denoted by f min , i t + 1 .
2.1.6
If the constraint $f_{\min,i}^{t+1} \ge f_c$ is met, then calculate the corresponding volume by Equation (6), namely $u_i^{t+1} = u(\mathbf{x}_i^{\mathrm{low},t+1}, \mathbf{x}_i^{\mathrm{up},t+1})$. Otherwise go to Step 2.1.1.
end for
2.2
Update pbest i t + 1 and gbest t + 1 :
2.2.1
For all $i = 1, \ldots, N$, if $u_i^{t+1} > \mathrm{ubest}_i$, set $\mathbf{pbest}_i^{t+1} = \mathbf{x}_i^{\mathrm{bound},t+1}$ and $\mathrm{ubest}_i = u_i^{t+1}$.
2.2.2
Update gbest t + 1 as the one with the maximum volume among all pbest i t + 1 , i = 1 , , N .
2.3
Stopping criteria:
If t < Iter max , set t = t + 1 and go to Step 2.1.
3:
Verification:
3.1
Set x k low = min ( gbest k t + 1 , gbest k + m t + 1 ) , x k up = max ( gbest k t + 1 , gbest k + m t + 1 ) , k = 1 , , m ; Set x low = ( x 1 low , , x m low ) , x up = ( x 1 up , , x m up ) .
3.2
Calculate the value of min x D ( x low , x up ) f ( x ) by algorithm in Figure 3 with parameter setting K max = 1000 ln 2 m , denoted by f min * .
3.3
If $f_{\min}^* \ge f_c$, output $\mathbf{x}^{\mathrm{low}}$ and $\mathbf{x}^{\mathrm{up}}$; otherwise go to Step 2.
Finally, to verify whether the best particle satisfies the constraints, the Divide-the-Best algorithm is performed with more iterations. If the constraints are satisfied, PSO-Divide-Best outputs the best particle; otherwise, it goes back to Step 2.
PSO is a derivative-free global search method [43]. The Divide-the-Best algorithm in [30] can converge to the global minimum to any degree of accuracy, provided there are enough iterations. Therefore, the proposed PSO-Divide-Best method has the following advantages:
(1)
PSO-Divide-Best has a high probability of reaching the globally maximum solution hyper-box satisfying the constraints.
(2)
Due to the discrete nature of the trial points in the Divide-the-Best algorithm, the PSO-Divide-Best method can be applied to both analytically known and black-box performance functions.
(3)
PSO-Divide-Best guarantees that any point selected within the obtained hyper-box is a good design provided that the performance function is continuous.
In most engineering problems, the performance function is continuous whether it is analytically known or black-box. Therefore, the PSO-Divide-Best approach has strong applicability in engineering problems.
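To make the interplay between the outer PSO loop and the inner minimum check concrete, the following toy sketch optimizes a box for a two-dimensional problem. Everything here is illustrative rather than taken from the paper: the performance function, the threshold, and the use of dense grid sampling as a stand-in for the Divide-the-Best inner solver.

```python
import random

F_C = -1.0                     # performance threshold: require f(x) >= F_C
LO, HI = -2.0, 2.0             # design-space bounds x^l, x^u

def f(x1, x2):
    """Toy performance function; good designs form the unit disk."""
    return -(x1 * x1 + x2 * x2)

def box_min_f(b, n=9):
    """Grid-sampling stand-in for the Divide-the-Best inner solver:
    approximate min of f over the box b = (x1lo, x2lo, x1up, x2up)."""
    x1lo, x2lo, x1up, x2up = b
    g1 = [x1lo + (x1up - x1lo) * i / (n - 1) for i in range(n)]
    g2 = [x2lo + (x2up - x2lo) * i / (n - 1) for i in range(n)]
    return min(f(a, c) for a in g1 for c in g2)

def fitness(b):
    """Box volume if b is a feasible solution hyper-box, else -1."""
    x1lo, x2lo, x1up, x2up = b
    if not (LO <= x1lo <= x1up <= HI and LO <= x2lo <= x2up <= HI):
        return -1.0                        # not a valid box in the design space
    if box_min_f(b) < F_C:                 # box would contain bad designs
        return -1.0
    return (x1up - x1lo) * (x2up - x2lo)   # volume, Eq. (6)

random.seed(0)
N, ITERS = 30, 200
parts = [[random.uniform(-0.5, 0.5) for _ in range(4)] for _ in range(N)]
parts[0] = [-0.2, -0.2, 0.2, 0.2]          # seed one feasible box
vels = [[0.0] * 4 for _ in range(N)]
pbest = [p[:] for p in parts]
gbest = max(pbest, key=fitness)
for t in range(ITERS):
    w = 0.9 - 0.5 * t / ITERS              # decreasing inertia weight
    for i, p in enumerate(parts):
        for d in range(4):
            r1, r2 = random.random(), random.random()
            vels[i][d] = (w * vels[i][d]
                          + 2.0 * r1 * (pbest[i][d] - p[d])
                          + 2.0 * r2 * (gbest[d] - p[d]))
            p[d] += vels[i][d]
        if fitness(p) > fitness(pbest[i]):
            pbest[i] = p[:]
    gbest = max(pbest, key=fitness)
print(fitness(gbest))  # volume of the best feasible box found (optimum here: 2.0)
```

Because personal bests are replaced only by strictly better feasible boxes, the best volume found never decreases; in the actual method, the grid check is replaced by the rigorous Divide-the-Best minimization, which is what justifies the guarantee in advantage (3).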

5. Case Studies

A stochastic approach based on Monte Carlo sampling is discussed in [28]. It consists of two phases: an exploration phase and a consolidation phase. The purpose of the exploration phase is to identify a solution box that is as large as possible; the consolidation phase includes an algorithm that shrinks the hyper-box so that it contains only good designs. We therefore denote this method by "EPCP" hereafter (an abbreviation of "exploration phase and consolidation phase"). In addition, the method in [24], which combines interval arithmetic with cellular evolutionary strategies, is denoted by "IA-CES" hereafter.
To compare the proposed PSO-Divide-Best method with the EPCP and IA-CES ones, two cases are considered. The first case is the vehicle structure design problem which has been studied by Fender et al. in [28]. The second case is the power-shift steering transmission control system (PSSTCS) with a price of approximately 500,000 USD.
In the proposed PSO-Divide-Best method, the weight factors c 1 and c 2 are set to 2, the maximum number of iterations is 2000, ω max = 0.9 , ω min = 0.4 , and the number of particles N is set to 160.
All experiments were performed in MATLAB R2016b on a Windows platform with an Intel Core i7-4790 CPU at 3.60 GHz and 16 GB RAM.

5.1. Vehicle Structure Design Problem

The vehicle structure design problem in [28] consists of two structural components. The design parameters are the two deformation forces $x_1$ and $x_2$. The performance functions are as follows:
$$f_1(\mathbf{x}) = x_1 - x_2, \tag{11a}$$
$$f_2(\mathbf{x}) = \frac{x_2/m - a_c}{a_c}, \tag{11b}$$
$$f_3(\mathbf{x}) = \frac{1}{2} m v_0^2 - \left(x_1 \mu_{1c} + x_2 \mu_{2c}\right), \tag{11c}$$
where $\mathbf{x} = (x_1, x_2)$, $m = 2000\,\mathrm{kg}$ is the mass, $a_c = 32g$ is a critical threshold level, $v_0 = 15.6\,\mathrm{m/s}$ is the impact speed, and $\mu_{1c} = \mu_{2c} = 0.3\,\mathrm{m}$ are the limits of the deformation measures. The design goals are achieved if $f_i(\mathbf{x}) \le 0$ for all $i = 1, 2, 3$.
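The three criteria can be evaluated directly for any candidate design. In the sketch below, the constants are from the text, while the signs of the ordering and energy conditions are our reconstruction of the garbled source ($f_i \le 0$ means the goal is met); forces are in newtons.

```python
m = 2000.0             # vehicle mass [kg]
a_c = 32.0 * 9.81      # critical deceleration level, 32 g [m/s^2]
v0 = 15.6              # impact speed [m/s]
mu1c = mu2c = 0.3      # deformation limits [m]

def performance(x1, x2):
    """x1, x2: deformation forces [N]; signs are our reconstruction."""
    f1 = x1 - x2                                      # deformation-order condition
    f2 = x2 / (m * a_c) - 1.0                         # deceleration condition
    f3 = 0.5 * m * v0 ** 2 - (x1 * mu1c + x2 * mu2c)  # energy-absorption condition
    return f1, f2, f3

f1, f2, f3 = performance(275e3, 450e3)   # the bad design from [28]
print(f3 > 0)  # True: the kinetic energy (243.36 kJ) exceeds the absorbed 217.5 kJ
```

Under this reading, the bad design satisfies the ordering and deceleration conditions but fails the energy condition by about 25.9 kJ, which is why increasing $x_1$ (Scenario (b)) repairs it.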
A design with $x_1 = 275\,\mathrm{kN}$ and $x_2 = 450\,\mathrm{kN}$ is considered in [28]. It violates the energy condition (11c) and is therefore a bad design. To improve this bad design with the least effort, three scenarios are performed. In Scenario (a), both $x_1$ and $x_2$ are key parameters, that is, both need to be modified. In Scenario (b), only $x_1$ is the key parameter; more precisely, $x_2$ is included in the solution hyper-box, which is obtained by requiring a minimum safety margin of $\pm 25\,\mathrm{kN}$. In Scenario (c), only $x_2$ is the key parameter.
Since each performance function in Equation (11) is monotone, the exact solution can be obtained by the penalty method [44], as shown in Table 1. The results obtained by the EPCP method in [28] are also listed in Table 1. Due to the stochastic nature of PSO, as in [24], we ran the PSO-Divide-Best algorithm 20 times for each scenario, and the best solutions among the 20 runs are shown in Table 1, together with the results obtained by the IA-CES method. The deviations between the numerically computed lower boundaries $x_i^{\mathrm{low}}$ and the exact solutions $x_{0,i}^{\mathrm{low}}$ are given by $\Delta x_i^{\mathrm{low}} = |x_i^{\mathrm{low}} - x_{0,i}^{\mathrm{low}}|$, and the relative error by $\varphi_i^{\mathrm{low}} = \Delta x_i^{\mathrm{low}} / x_{0,i}^{\mathrm{low}}$; the relative errors for the upper boundaries are defined similarly. The row entitled "volume" gives the volume of the hyper-box, and the row entitled "error" gives the relative error between the numerically computed volume and the exact one.
Table 1 shows that the hyper-boxes obtained by the PSO-Divide-Best method are nearly identical to the exact ones, while the hyper-boxes obtained by the EPCP and IA-CES approaches have a high deviation from the exact ones. Besides, the hyper-boxes obtained by the PSO-Divide-Best method are significantly larger than those obtained by the EPCP and IA-CES methods.
Furthermore, a visualization of the results of the EPCP, IA-CES, and PSO-Divide-Best methods in Table 1 is shown in Figure 5, Figure 6 and Figure 7, respectively, where the grey region is the complete solution space. Figure 5, Figure 6 and Figure 7 also give some examples of how to turn the bad design into a good one.
In Scenarios (a) and (b), the blue hyper-boxes exceed the grey region. This implies that the hyper-boxes obtained by the EPCP method contain some bad designs. Based on the hyper-box obtained by the EPCP method, a bad design may fail to be turned into a good one (see Figure 5a). Consequently, the EPCP method should be used with great caution.
The yellow hyper-boxes obtained by the IA-CES method all lie within the grey region; however, they are not the largest ones. This implies that target intervals for each parameter determined by the IA-CES method are relatively shorter. Therefore, the good design point which is determined by the IA-CES method has relatively poor robustness against unintended variations.
The green hyper-boxes obtained by the PSO-Divide-Best method all lie within the grey region; therefore, they include only good designs. Indeed, as long as the key parameters of the bad design are moved into the target intervals defined by our method, the bad design is turned into a good one. Taking Scenario (b) as an example, according to the solution hyper-box obtained by the PSO-Divide-Best method shown in Table 1, the bad design is turned into a good one by changing $x_1$ to any value within [386.20, 425.00].

5.2. The Power-Shift Steering Transmission Control System (PSSTCS)

Figure 8 and Figure 9 show the structure principle drawing and the function constitutes of PSSTCS, respectively.
The PSSTCS is a non-monotonic coherent system consisting of 86 components. The design goal is that the system reliability should be higher than 0.9900. However, the system reliability function is not analytically known, so it is evaluated by the goal-oriented (GO) reliability assessment method in [45]. Now, consider the design shown in Table A1 in Appendix A. Its system reliability is 0.9856, which is insufficient. To improve this bad design with the least effort, the engineers first classify the design parameters into key and non-key parameters according to their degree of modification difficulty; the key parameters (components) are marked in red in Table A1. Second, we search for the maximum solution hyper-box that already includes the non-key parameters of this bad design. More precisely, in this PSSTCS case, the optimization problem (7) is formulated as follows:
$$
\begin{aligned}
\max_{R^{\mathrm{up}},\,R^{\mathrm{low}}}\quad & \sum_{i=1}^{86} \log\bigl(R_i^{\mathrm{up}} - R_i^{\mathrm{low}}\bigr)\\
\text{subject to}\quad & \min_{R \in D(R^{\mathrm{low}},\,R^{\mathrm{up}})} f(R) \ge f_c,\\
& R_k^{\mathrm{low}} \le R_{c,k}^{\mathrm{low}},\quad R_k^{\mathrm{up}} \ge R_{c,k}^{\mathrm{up}},\quad k \in K,\\
& R_l \le R^{\mathrm{low}} \le R^{\mathrm{up}} \le R_u,
\end{aligned}
$$
where $R^{\mathrm{low}} = (R_1^{\mathrm{low}}, \ldots, R_{86}^{\mathrm{low}})$ and $R^{\mathrm{up}} = (R_1^{\mathrm{up}}, \ldots, R_{86}^{\mathrm{up}})$, with $R_i^{\mathrm{low}}$ and $R_i^{\mathrm{up}}$ the lower and upper reliability bounds for the $i$th component; $R = (R_1, \ldots, R_{86})$, where $R_i$ is the reliability of the $i$th component; $R_l = (0.9990, \ldots, 0.9990)$ and $R_u = (1.0000, \ldots, 1.0000)$; $R_{c,k}^{\mathrm{low}} = 0.9995$ and $R_{c,k}^{\mathrm{up}} = 0.9997$; $K = \{2, 3, 4, 5, 10, 11, 12, 13, 15, 16, 17, 18, 22, 23, 26, 28, 30, 31, 32, 35, 36, 37, 38, 39, 42, 43, 45, 51, 55, 56, 57, 59, 63, 65, 66, 68, 69, 71, 72, 73, 82, 83\}$; $f_c = 0.9900$ is the reliability threshold; and $f(R)$ is the system reliability function, evaluated by the GO method in [45]. Note that the volume is replaced by the log-volume (the logarithm of the volume) for computational convenience.
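The objective and constraints of this formulation can be sketched in code as follows. This is a simplified illustration under stated assumptions: all helper names are hypothetical, `f` stands in for the black-box GO reliability evaluation, and the worst-case constraint $\min_{R} f(R) \ge f_c$ over the box is only spot-checked by random sampling here, whereas the paper certifies it deterministically with the Divide-the-Best algorithm.

```python
import numpy as np

def log_volume(R_low, R_up):
    # Log-volume of the hyper-box: sum_i log(R_i^up - R_i^low).
    return float(np.sum(np.log(np.asarray(R_up) - np.asarray(R_low))))

def is_feasible(R_low, R_up, f, f_c, K, Rc_low, Rc_up, R_l, R_u,
                n_check=1000, seed=None):
    """Check the constraints of the optimization problem above.

    The performance constraint min_{R in box} f(R) >= f_c is only
    approximated by uniform sampling; this is a stand-in for the
    deterministic certification used in the paper."""
    R_low, R_up = np.asarray(R_low), np.asarray(R_up)
    # Box-ordering constraint: R_l <= R_low <= R_up <= R_u.
    if not (np.all(R_l <= R_low) and np.all(R_low <= R_up)
            and np.all(R_up <= R_u)):
        return False
    # Inclusion constraint for the constrained parameters k in K.
    if not (np.all(R_low[K] <= Rc_low) and np.all(R_up[K] >= Rc_up)):
        return False
    # Spot-check the performance constraint inside the box.
    rng = np.random.default_rng(seed)
    samples = rng.uniform(R_low, R_up, size=(n_check, len(R_low)))
    return bool(all(f(R) >= f_c for R in samples))
```

For instance, with a toy reliability function `f(R) = mean(R)`, a three-dimensional box $[0.9994, 0.9998]^3$ is feasible for $f_c = 0.999$.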
The IA-CES method requires an analytically known performance function; therefore, it is not applicable to this complex system with a black-box reliability function. The hyper-boxes obtained for problem (12) by the EPCP and PSO-Divide-Best methods are listed in Table A2 in Appendix A. The lower bounds obtained by the PSO-Divide-Best method are almost always smaller than those obtained by the EPCP method, and its upper bounds are almost always larger. This implies that the PSO-Divide-Best method provides much wider target intervals for most design parameters, and hence stronger robustness against uncertainty.
In addition, the log-volumes of the obtained hyper-boxes are listed in Table 2. The log-volume of the hyper-box obtained by the PSO-Divide-Best method is much larger than that of the EPCP method, which further confirms that the PSO-Divide-Best method is more robust against variations.
The hyper-box provides a target interval for each design parameter; therefore, the bad design can be turned into a good one by moving only its key parameters into their target intervals. In particular, the good design whose key parameters are located at the midpoints of their target intervals is the most representative one: it provides maximum robustness when a design parameter varies equally on both sides of its nominal value. These representative good designs obtained by the EPCP and PSO-Divide-Best methods are listed in Table A1. To verify that they satisfy the design goal, their system reliabilities were evaluated by the GO method, as also shown in Table A1. Both good designs indeed achieve the design goal. Therefore, the system reliability of the PSSTCS can be changed from insufficient to sufficient by modifying only the reliabilities of the key components according to Table A1.
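The midpoint construction of the representative design can be sketched as follows; this is a toy three-parameter example with hypothetical interval values (parameter indices are 0-based):

```python
def representative_design(current, low, up, key_idx):
    """Place each key parameter at the midpoint of its target
    interval; non-key parameters keep their current values."""
    design = list(current)
    for k in key_idx:
        design[k] = 0.5 * (low[k] + up[k])
    return design

# Toy example: three parameters, parameter 0 is the only key one.
current = [0.9996, 0.9996, 0.9996]
low     = [0.9990, 0.9994, 0.9994]
up      = [1.0000, 0.9998, 0.9998]
good = representative_design(current, low, up, key_idx=[0])
# Parameter 0 moves to the midpoint 0.9995; the others are unchanged.
```

The midpoint choice maximizes the distance to both interval endpoints, which is why it is the most robust choice under symmetric variation.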
To illustrate whether the obtained hyper-boxes meet the design goal, we use Latin hypercube sampling to choose n designs from each hyper-box and then calculate the rate of good designs as follows:
$$
\mathrm{rate} = \frac{1}{n} \sum_{i=1}^{n} \#\bigl(f(R_i) \ge f_c\bigr),
$$
where $\#(\cdot)$ is the indicator function, i.e., $\#(f(R_i) \ge f_c) = 1$ if $f(R_i) \ge f_c$ and $\#(f(R_i) \ge f_c) = 0$ otherwise.
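The sampling check can be sketched with a hand-rolled Latin hypercube sampler; this is a simplified stand-in for whatever LHS implementation the authors used, and all names are illustrative:

```python
import numpy as np

def latin_hypercube(n, low, up, seed=None):
    """Draw n Latin hypercube samples inside the box [low, up]:
    each coordinate is stratified into n equal bins, with exactly
    one sample per bin per dimension."""
    rng = np.random.default_rng(seed)
    low, up = np.asarray(low, float), np.asarray(up, float)
    d = len(low)
    # One independent permutation of the n strata per dimension.
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    u = (strata + rng.uniform(size=(n, d))) / n   # in [0, 1)
    return low + u * (up - low)

def good_design_rate(f, f_c, low, up, n=1000, seed=None):
    """rate = (1/n) * sum_i #(f(R_i) >= f_c) over LHS samples R_i."""
    samples = latin_hypercube(n, low, up, seed)
    return float(np.mean([f(R) >= f_c for R in samples]))
```

A hyper-box that contains only good designs yields a rate of exactly 1, as observed for the PSO-Divide-Best boxes in Figure 10.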
Figure 10 shows the rates of good designs under different sample sizes. The rates for the PSO-Divide-Best method are all 1, while those for the EPCP method are all below 1. This implies that the hyper-box obtained by the PSO-Divide-Best method includes only good designs, whereas the hyper-box obtained by the EPCP method includes some bad designs. Therefore, the EPCP method should be used cautiously, because it may fail to turn a bad design into a good one; the PSO-Divide-Best method, in contrast, guarantees that any design within the obtained hyper-box is good.

6. Conclusions

To improve a bad design with the least effort in the presence of uncertainty, this paper modifies only the key parameters of the bad design rather than all of its design parameters. To this end, the maximum solution hyper-box that already includes the non-key parameters of the current bad design is sought. The solution hyper-box provides a target interval for each design parameter, and the bad design can be turned into a good one by moving only its key parameters into their target intervals. The volume of the solution hyper-box should be as large as possible to provide stronger robustness against unintended variations.
The PSO-Divide-Best algorithm combines the PSO and Divide-the-Best algorithms [30] to seek a solution hyper-box that has the maximum volume and satisfies all the constraints. The case studies show that the solution hyper-boxes obtained by the PSO-Divide-Best method include only good designs and are much larger than those obtained by the EPCP and IA-CES methods. This implies that a good design determined by our method has stronger robustness against uncertainty. Therefore, our method outperforms the EPCP and IA-CES methods.
Since the Divide-the-Best algorithm only evaluates the performance function at trial points, the PSO-Divide-Best method is widely applicable and can be used for complex systems with black-box performance functions. As long as the performance function is continuous, our method provides a solution hyper-box that is guaranteed to include only good designs; a bad design can thus be turned into a good one by moving its key parameters into the target intervals obtained by our method. Our method converges to the globally maximum solution hyper-box with high probability. However, its convergence slows as the number of design parameters increases, so its scalability is limited.

Author Contributions

Conceptualization, Y.C., J.S., and X.-J.Y.; methodology, Y.C.; and supervision, J.S. and X.-J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 71801196, National Major Project from the Shanghai Nuclear Engineering Research and Design Institute under Grant 2017ZX06002006, and the Science Research Foundation of Capital University of Economics and Business under Grant XRZ2021043.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PSO: Particle Swarm Optimization
PSSTCS: Power-Shift Steering Transmission Control System
PSO-Divide-Best: Particle Swarm Optimization combined with the Divide-the-Best algorithm
EPCP: Approach in [28] consisting of an exploration phase and a consolidation phase
IA-CES: Method in [24] combining interval arithmetic with cellular evolutionary strategies

Appendix A

Table A1. The bad designs and the representative good designs.
Component | Bad Design | Good (EPCP) | Good (PSO-Divide-Best) | Component | Bad Design | Good (EPCP) | Good (PSO-Divide-Best)
R1 | 0.9996 | 0.9998 | 0.9999 | R2 | 0.9996 | 0.9996 | 0.9996
R3 | 0.9996 | 0.9996 | 0.9996 | R4 | 0.9996 | 0.9996 | 0.9996
R5 | 0.9996 | 0.9996 | 0.9996 | R6 | 0.9996 | 0.9997 | 0.9999
R7 | 0.9996 | 0.9996 | 0.9998 | R8 | 0.9996 | 0.9996 | 0.9996
R9 | 0.9996 | 0.9997 | 0.9997 | R10 | 0.9996 | 0.9996 | 0.9996
R11 | 0.9996 | 0.9996 | 0.9996 | R12 | 0.9996 | 0.9996 | 0.9996
R13 | 0.9996 | 0.9996 | 0.9996 | R14 | 0.9996 | 0.9998 | 0.9996
R15 | 0.9996 | 0.9996 | 0.9996 | R16 | 0.9996 | 0.9996 | 0.9996
R17 | 0.9996 | 0.9996 | 0.9996 | R18 | 0.9996 | 0.9996 | 0.9996
R19 | 0.9996 | 0.9997 | 0.9998 | R20 | 0.9996 | 0.9998 | 0.9999
R21 | 0.9996 | 0.9998 | 0.9998 | R22 | 0.9996 | 0.9996 | 0.9996
R23 | 0.9996 | 0.9996 | 0.9996 | R24 | 0.9996 | 0.9997 | 0.9999
R25 | 0.9996 | 0.9997 | 0.9998 | R26 | 0.9996 | 0.9996 | 0.9996
R27 | 0.9996 | 0.9997 | 0.9999 | R28 | 0.9996 | 0.9996 | 0.9996
R29 | 0.9996 | 0.9997 | 0.9998 | R30 | 0.9996 | 0.9996 | 0.9996
R31 | 0.9996 | 0.9996 | 0.9996 | R32 | 0.9996 | 0.9996 | 0.9996
R33 | 0.9996 | 0.9997 | 0.9999 | R34 | 0.9996 | 0.9998 | 0.9999
R35 | 0.9996 | 0.9996 | 0.9996 | R36 | 0.9996 | 0.9996 | 0.9996
R37 | 0.9996 | 0.9996 | 0.9996 | R38 | 0.9996 | 0.9996 | 0.9996
R39 | 0.9996 | 0.9996 | 0.9996 | R40 | 0.9996 | 0.9998 | 0.9998
R41 | 0.9996 | 0.9998 | 0.9998 | R42 | 0.9996 | 0.9996 | 0.9996
R43 | 0.9996 | 0.9996 | 0.9996 | R44 | 0.9996 | 0.9998 | 0.9999
R45 | 0.9996 | 0.9996 | 0.9996 | R46 | 0.9996 | 0.9998 | 0.9998
R47 | 0.9996 | 0.9997 | 0.9999 | R48 | 0.9996 | 0.9998 | 0.9998
R49 | 0.9996 | 0.9998 | 0.9999 | R50 | 0.9996 | 0.9997 | 0.9999
R51 | 0.9996 | 0.9996 | 0.9996 | R52 | 0.9996 | 0.9997 | 0.9998
R53 | 0.9996 | 0.9998 | 0.9997 | R54 | 0.9996 | 0.9998 | 0.9999
R55 | 0.9996 | 0.9996 | 0.9996 | R56 | 0.9996 | 0.9996 | 0.9996
R57 | 0.9996 | 0.9996 | 0.9996 | R58 | 0.9996 | 0.9996 | 0.9997
R59 | 0.9996 | 0.9996 | 0.9996 | R60 | 0.9996 | 0.9996 | 0.9997
R61 | 0.9996 | 0.9997 | 0.9999 | R62 | 0.9996 | 0.9997 | 0.9998
R63 | 0.9996 | 0.9997 | 0.9995 | R64 | 0.9996 | 0.9997 | 0.9998
R65 | 0.9996 | 0.9996 | 0.9996 | R66 | 0.9996 | 0.9996 | 0.9996
R67 | 0.9996 | 0.9997 | 0.9997 | R68 | 0.9996 | 0.9996 | 0.9996
R69 | 0.9996 | 0.9996 | 0.9996 | R70 | 0.9996 | 0.9996 | 0.9995
R71 | 0.9996 | 0.9996 | 0.9996 | R72 | 0.9996 | 0.9996 | 0.9996
R73 | 0.9996 | 0.9996 | 0.9996 | R74 | 0.9996 | 0.9997 | 0.9996
R75 | 0.9996 | 0.9997 | 0.9997 | R76 | 0.9996 | 0.9997 | 0.9998
R77 | 0.9996 | 0.9997 | 0.9997 | R78 | 0.9996 | 0.9996 | 0.9998
R79 | 0.9996 | 0.9997 | 0.9998 | R80 | 0.9996 | 0.9997 | 0.9995
R81 | 0.9996 | 0.9997 | 0.9997 | R82 | 0.9996 | 0.9996 | 0.9996
R83 | 0.9996 | 0.9996 | 0.9996 | R84 | 0.9996 | 0.9997 | 0.9996
R85 | 0.9996 | 0.9996 | 0.9995 | R86 | 0.9996 | 0.9998 | 0.9998
f(R) | 0.9856 | 0.9912 | 0.9940 | | | |
Table A2. The lower and upper boundaries of hyper-boxes.
Lower Bound | EPCP | PSO-Divide-Best | Upper Bound | EPCP | PSO-Divide-Best
R1^low | 0.9997 | 0.9997 | R1^up | 0.9999 | 1.0000
R2^low | 0.9995 | 0.9994 | R2^up | 0.9998 | 1.0000
R3^low | 0.9996 | 0.9991 | R3^up | 0.9998 | 1.0000
R4^low | 0.9995 | 0.9993 | R4^up | 0.9998 | 1.0000
R5^low | 0.9995 | 0.9995 | R5^up | 0.9998 | 1.0000
R6^low | 0.9997 | 0.9997 | R6^up | 0.9998 | 1.0000
R7^low | 0.9995 | 0.9995 | R7^up | 0.9997 | 1.0000
R8^low | 0.9995 | 0.9992 | R8^up | 0.9998 | 1.0000
R9^low | 0.9996 | 0.9994 | R9^up | 0.9998 | 1.0000
R10^low | 0.9995 | 0.9999 | R10^up | 0.9999 | 1.0000
R11^low | 0.9990 | 0.9990 | R11^up | 1.0000 | 1.0000
R12^low | 0.9990 | 0.9990 | R12^up | 1.0000 | 1.0000
R13^low | 0.9991 | 0.9990 | R13^up | 1.0000 | 1.0000
R14^low | 0.9997 | 0.9998 | R14^up | 0.9999 | 1.0000
R15^low | 0.9995 | 0.9992 | R15^up | 0.9998 | 1.0000
R16^low | 0.9996 | 0.9994 | R16^up | 0.9998 | 1.0000
R17^low | 0.9997 | 0.9994 | R17^up | 0.9999 | 1.0000
R18^low | 0.9995 | 0.9995 | R18^up | 0.9999 | 1.0000
R19^low | 0.9995 | 0.9996 | R19^up | 0.9999 | 1.0000
R20^low | 0.9997 | 0.9998 | R20^up | 0.9999 | 1.0000
R21^low | 0.9997 | 0.9997 | R21^up | 0.9999 | 1.0000
R22^low | 0.9997 | 0.9998 | R22^up | 0.9999 | 1.0000
R23^low | 0.9990 | 0.9990 | R23^up | 1.0000 | 1.0000
R24^low | 0.9995 | 0.9999 | R24^up | 0.9998 | 1.0000
R25^low | 0.9995 | 0.9995 | R25^up | 0.9999 | 1.0000
R26^low | 0.9997 | 0.9998 | R26^up | 0.9999 | 1.0000
R27^low | 0.9995 | 0.9998 | R27^up | 0.9999 | 1.0000
R28^low | 0.9990 | 0.9990 | R28^up | 1.0000 | 1.0000
R29^low | 0.9995 | 0.9995 | R29^up | 0.9999 | 1.0000
R30^low | 0.9997 | 0.9995 | R30^up | 0.9999 | 1.0000
R31^low | 0.9990 | 0.9990 | R31^up | 1.0000 | 1.0000
R32^low | 0.9990 | 0.9990 | R32^up | 1.0000 | 1.0000
R33^low | 0.9995 | 0.9997 | R33^up | 0.9999 | 1.0000
R34^low | 0.9997 | 0.9999 | R34^up | 0.9999 | 1.0000
R35^low | 0.9990 | 0.9990 | R35^up | 1.0000 | 1.0000
R36^low | 0.9991 | 0.9990 | R36^up | 1.0000 | 1.0000
R37^low | 0.9991 | 0.9990 | R37^up | 1.0000 | 1.0000
R38^low | 0.9992 | 0.9990 | R38^up | 1.0000 | 1.0000
R39^low | 0.9990 | 0.9990 | R39^up | 1.0000 | 1.0000
R40^low | 0.9997 | 0.9995 | R40^up | 0.9999 | 1.0000
R41^low | 0.9997 | 0.9997 | R41^up | 0.9999 | 1.0000
R42^low | 0.9995 | 0.9995 | R42^up | 0.9999 | 1.0000
R43^low | 0.9997 | 0.9997 | R43^up | 0.9999 | 1.0000
R44^low | 0.9997 | 0.9999 | R44^up | 0.9999 | 1.0000
R45^low | 0.9998 | 0.9998 | R45^up | 0.9999 | 1.0000
R46^low | 0.9997 | 0.9995 | R46^up | 0.9999 | 1.0000
R47^low | 0.9995 | 0.9998 | R47^up | 0.9999 | 1.0000
R48^low | 0.9997 | 0.9995 | R48^up | 0.9999 | 1.0000
R49^low | 0.9997 | 0.9999 | R49^up | 0.9999 | 1.0000
R50^low | 0.9995 | 0.9998 | R50^up | 0.9999 | 1.0000
R51^low | 0.9996 | 0.9995 | R51^up | 0.9998 | 1.0000
R52^low | 0.9995 | 0.9995 | R52^up | 0.9998 | 1.0000
R53^low | 0.9997 | 0.9995 | R53^up | 0.9999 | 1.0000
R54^low | 0.9997 | 0.9998 | R54^up | 0.9999 | 1.0000
R55^low | 0.9997 | 0.9995 | R55^up | 0.9999 | 1.0000
R56^low | 0.9997 | 0.9995 | R56^up | 0.9999 | 1.0000
R57^low | 0.9996 | 0.9992 | R57^up | 0.9998 | 1.0000
R58^low | 0.9995 | 0.9994 | R58^up | 0.9998 | 1.0000
R59^low | 0.9996 | 0.9994 | R59^up | 0.9998 | 1.0000
R60^low | 0.9995 | 0.9994 | R60^up | 0.9998 | 1.0000
R61^low | 0.9996 | 0.9997 | R61^up | 0.9998 | 1.0000
R62^low | 0.9996 | 0.9996 | R62^up | 0.9998 | 1.0000
R63^low | 0.9995 | 0.9990 | R63^up | 0.9998 | 1.0000
R64^low | 0.9996 | 0.9996 | R64^up | 0.9998 | 1.0000
R65^low | 0.9996 | 0.9995 | R65^up | 0.9998 | 1.0000
R66^low | 0.9995 | 0.9995 | R66^up | 0.9998 | 1.0000
R67^low | 0.9995 | 0.9994 | R67^up | 0.9998 | 1.0000
R68^low | 0.9996 | 0.9994 | R68^up | 0.9998 | 1.0000
R69^low | 0.9996 | 0.9990 | R69^up | 0.9998 | 1.0000
R70^low | 0.9995 | 0.9990 | R70^up | 0.9998 | 1.0000
R71^low | 0.9995 | 0.9990 | R71^up | 0.9998 | 1.0000
R72^low | 0.9995 | 0.9995 | R72^up | 0.9998 | 1.0000
R73^low | 0.9996 | 0.9992 | R73^up | 0.9998 | 1.0000
R74^low | 0.9995 | 0.9991 | R74^up | 0.9998 | 1.0000
R75^low | 0.9995 | 0.9994 | R75^up | 0.9998 | 1.0000
R76^low | 0.9996 | 0.9995 | R76^up | 0.9998 | 1.0000
R77^low | 0.9995 | 0.9993 | R77^up | 0.9998 | 1.0000
R78^low | 0.9995 | 0.9995 | R78^up | 0.9998 | 1.0000
R79^low | 0.9996 | 0.9995 | R79^up | 0.9998 | 1.0000
R80^low | 0.9996 | 0.9990 | R80^up | 0.9998 | 1.0000
R81^low | 0.9996 | 0.9994 | R81^up | 0.9998 | 1.0000
R82^low | 0.9996 | 0.9993 | R82^up | 0.9998 | 1.0000
R83^low | 0.9996 | 0.9996 | R83^up | 0.9998 | 1.0000
R84^low | 0.9996 | 0.9992 | R84^up | 0.9998 | 1.0000
R85^low | 0.9995 | 0.9990 | R85^up | 0.9998 | 1.0000
R86^low | 0.9997 | 0.9995 | R86^up | 0.9999 | 1.0000

References

1. Oberkampf, W.L. Uncertainty Quantification Using Evidence Theory; Stanford University: Stanford, CA, USA, 2005.
2. Lim, D.; Ong, Y.S.; Sendhoff, B.; Lee, B.S. Inverse multi-objective robust evolutionary design optimization. Genet. Program. Evolvable Mach. 2007, 7, 383–404.
3. Li, M.Y.; Zhang, W.D.; Hu, Q.P.; Guo, H.R.; Liu, J. Design and Risk Evaluation of Reliability Demonstration Test for Hierarchical Systems With Multilevel Information Aggregation. IEEE Trans. Reliab. 2017, 66, 135–147.
4. Zhao, Y.G.; Ono, T. A general procedure for first/second-order reliability method (FORM/SORM). Struct. Saf. 1999, 21, 95–112.
5. Saltelli, A.; Scott, M. Guest editorial: The role of sensitivity analysis in the corroboration of models and its link to model structural and parametric uncertainty. Reliab. Eng. Syst. Saf. 1997, 57, 1–4.
6. Pannier, S.; Graf, W. Sensitivity measures for fuzzy numbers based on artificial neural networks. In Applications of Statistics and Probability in Civil Engineering; CRC Press: London, UK, 2011; pp. 497–505.
7. Taguchi, G.; Elsayed, E.; Hsiang, T. Quality Engineering in Production Systems; McGraw-Hill: New York, NY, USA, 1989.
8. Hara, K.; Inoue, M. Gain-Preserving Data-Driven Approximation of the Koopman Operator and Its Application in Robust Controller Design. Mathematics 2021, 9, 949.
9. Lee, M.; Jeong, H.; Lee, D. Design Optimization of 3-DOF Redundant Planar Parallel Kinematic Mechanism Based Finishing Cut Stage for Improving Surface Roughness of FDM 3D Printed Sculptures. Mathematics 2021, 9, 961.
10. Gunawan, S.; Papalambros, P.Y. A Bayesian Approach to Reliability-Based Optimization With Incomplete Information. J. Mech. Des. 2006, 128, 909–918.
11. Stocki, R. A method to improve design reliability using optimal Latin hypercube sampling. Comput. Assist. Mech. Eng. Sci. 2005, 12, 393–411.
12. Lehar, M.; Zimmermann, M. An inexpensive estimate of failure probability for high-dimensional systems with uncertainty. Struct. Saf. 2012, 36–37, 32–38.
13. Daum, D.A.; Deb, K.; Branke, J. Reliability-based optimization for multiple constraints with evolutionary algorithms. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007.
14. Asafuddoula, M.; Singh, H.K.; Ray, T. Six-Sigma Robust Design Optimization Using a Many-Objective Decomposition-Based Evolutionary Algorithm. IEEE Trans. Evol. Comput. 2015, 19, 490–507.
15. Jin, Y.; Branke, J. Evolutionary optimization in uncertain environments—A survey. IEEE Trans. Evol. Comput. 2005, 9, 303–317.
16. Nguyen, T.T.; Yang, S.; Branke, J. Evolutionary dynamic optimization: A survey of the state of the art. Swarm Evol. Comput. 2012, 6, 1–24.
17. Knight, J.T.; Zahradka, F.T.; Singer, D.J.; Collette, M. Multiobjective Particle Swarm Optimization of a Planing Craft with Uncertainty. J. Ship Prod. Des. 2018, 30, 194–200.
18. Fan, X.N.; Yue, X.Y.; Li, Y.Z. LHS applied to reliability-based design optimization of a crane metallic structure. J. Chem. Pharm. Res. 2015, 7, 2400–2406.
19. Qin, Y.; Zhu, J.F.; Wei, Z. The robust optimization of the capacitated hub and spoke airline network design base on ant colony algorithms. In Proceedings of the 2010 International Conference on Intelligent Computing and Integrated Systems, Guilin, China, 22–24 October 2010; pp. 601–604.
20. Chen, T.H.; Yeh, M.F.; Lee, W.Y.; Hsieh, W.H. Particle swarm optimization based networked control system design with uncertainty. MATEC Web Conf. 2017, 119, 01051.
21. Zimmermann, M.; von Hoessle, J.E. Computing solution spaces for robust design. Int. J. Numer. Methods Eng. 2013, 94, 290–307.
22. Zimmermann, M. Vehicle Front Crash Design Accounting for Uncertainties. In Proceedings of the FISITA 2012 World Automotive Congress; Springer: Berlin/Heidelberg, Germany, 2013; Volume 197, pp. 83–89.
23. Graff, L.; Harbrecht, H.; Zimmermann, M. On the computation of solution spaces in high dimensions. Struct. Multidiscip. Optim. 2016, 54, 811–829.
24. Rocco, C.M.; Moreno, J.A.; Carrasquero, N. Robust design using a hybrid cellular-evolutionary and interval-arithmetic approach: A reliability application. Reliab. Eng. Syst. Saf. 2003, 79, 149–159.
25. Chen, Y.; Shi, J.; Yi, X.J. A New Reliable Operating Region Design Method. Math. Probl. Eng. 2020, 2020, 1–12.
26. Chen, Y.; Shi, J.; Yi, X.J. A globally optimal robust design method for complex systems. Complexity 2020, 2020, 1–25.
27. Jones, D.R.; Perttunen, C.D.; Stuckman, B.E. Lipschitzian optimization without the Lipschitz constant. J. Optim. Theory Appl. 1993, 79, 157–181.
28. Fender, J.; Graff, L.; Harbrecht, H.; Zimmermann, M. Identifying Key Parameters for Design Improvement in High-Dimensional Systems With Uncertainty. J. Mech. Des. 2014, 136, 041007.
29. Kennedy, J. The particle swarm: Social adaptation of knowledge. In Proceedings of the 1997 IEEE International Conference on Evolutionary Computation, Indianapolis, IN, USA, 13–16 April 1997.
30. Sergeyev, Y.D.; Kvasov, D.E. Global search based on diagonal partitions and a set of Lipschitz constants. SIAM J. Optim. 2006, 16, 910–937.
31. Kvasov, D.E.; Sergeyev, Y.D. Multidimensional Global Optimization Algorithm Based on Adaptive Diagonal Curves. Comput. Math. Math. Phys. 2003, 43, 42–59.
32. Strongin, R.G.; Sergeyev, Y.D. Global Optimization with Non-Convex Constraints. Sequential and Parallel Algorithms; Springer: Berlin/Heidelberg, Germany, 2000.
33. Sergeyev, Y.D. An Information Global Optimization Algorithm with Local Tuning. SIAM J. Optim. 1995, 5, 858–870.
34. Kvasov, D.E.; Sergeyev, Y.D. Deterministic approaches for solving practical black-box global optimization problems. Adv. Eng. Softw. 2015, 80, 58–66.
35. Sergeyev, Y.D.; Kvasov, D.E. A deterministic global optimization using smooth diagonal auxiliary functions. Commun. Nonlinear Sci. Numer. Simul. 2015, 21, 99–111.
36. Gablonsky, J.M. Modifications of the Direct Algorithm. Ph.D. Thesis, North Carolina State University, Raleigh, NC, USA, 2001.
37. Paulavičius, R.; Chiter, L.; Žilinskas, J. Global optimization based on bisection of rectangles, function values at diagonals, and a set of Lipschitz constants. J. Glob. Optim. 2018, 71, 5–20.
38. Paulavičius, R.; Žilinskas, J.; Grothey, A. Investigation of selection strategies in branch and bound algorithm with simplicial partitions and combination of Lipschitz bounds. Optim. Lett. 2010, 4, 173–183.
39. Stripinis, L.; Paulavičius, R.; Žilinskas, J. Improved scheme for selection of potentially optimal hyper-rectangles in DIRECT. Optim. Lett. 2018, 12, 1699–1712.
40. Yan, J.; Hu, T.; Huang, C.C.; Wu, X. An improved particle swarm optimization algorithm. Adv. Mater. Res. 2009, 195–196, 1060–1065.
41. Barrera, J.; Coello, C. A Review of Particle Swarm Optimization Methods Used for Multimodal Optimization. In Innovations in Swarm Intelligence; Springer: Berlin/Heidelberg, Germany, 2009.
42. Gandomi, A.H.; Yun, G.J.; Yang, X.S.; Talatahari, S. Chaos-enhanced accelerated particle swarm optimization. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 327–340.
43. Rao, S.S. Engineering Optimization: Theory and Practice; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2009.
44. Kuhn, H.W.; Tucker, A.W. Nonlinear programming. In Proceedings of the 2nd Berkeley Symposium; University of California Press: Berkeley, CA, USA, 1951; pp. 481–492.
45. Yi, X.J.; Shi, J.; Dhillon, B.S.; Hou, P.; Dong, H.P. A new reliability analysis method for repairable systems with closed-loop feedback links. Qual. Reliab. Eng. Int. 2018, 34, 298–332.
Figure 1. Voltage divider example.
Figure 2. Changes necessary to achieve the design goals: (a) R and R a ; (b) only R; and (c) only R a .
Figure 3. Flow chart of Divide-the-Best algorithm in [30].
Figure 4. An example of subdivisions by an efficient partition strategy in [30]. The top, middle, and bottom rows are the first, second, and third iterations, respectively. The grey hyper-boxes are the promising ones. They are selected in Step 3 of the current iteration and divided in Step 4. The trial points where the performance function is evaluated are red points.
Figure 5. The hyper-boxes (blue) obtained by the EPCP method: (a) x 1 and x 2 are key parameters; (b) only x 1 ; and (c) only x 2 .
Figure 6. The hyper-boxes (yellow) obtained by the IA-CES method: (a) x 1 and x 2 are key parameters; (b) only x 1 ; and (c) only x 2 .
Figure 7. The hyper-boxes (green) obtained by the PSO-Divide-Best method: (a) x 1 and x 2 are key parameters; (b) only x 1 ; and (c) only x 2 .
Figure 8. Structure principle drawing of PSSTCS.
Figure 9. Functional composition of PSSTCS.
Figure 10. The rate of good designs of the obtained hyper-boxes.
Table 1. The hyper-boxes obtained by the exact, EPCP, IA-CES and PSO-Divide-Best methods in Scenarios (a), (b), and (c) for vehicle structure design problem.
(a) Both x1 and x2 are key parameters:
 | exact | EPCP | IA-CES | PSO-Divide-Best
x1^low (kN) | 294.80 | 290.74 | 284.08 | 296.15
x1^up (kN) | 516.40 | 516.31 | 527.30 | 515.06
x2^low (kN) | 516.40 | 515.81 | 535.46 | 515.06
x2^up (kN) | 627.20 | 622.59 | 626.95 | 627.20
volume (10^4 kN^4) | 2.4553 | 2.4086 | 2.2253 | 2.4548
φ1^low (%) | - | 1.38 | 3.64 | 0.46
φ1^up (%) | - | 0.02 | 2.11 | 0.26
φ2^low (%) | - | 0.11 | 3.69 | 0.26
φ2^up (%) | - | 0.74 | 0.04 | 0.00
error (%) | - | 1.90 | 9.29 | 0.02

(b) Only x1 is a key parameter (constraints: x2^low ≤ x_c2^low = 425 kN, x2^up ≥ x_c2^up = 475 kN):
 | exact | EPCP | IA-CES | PSO-Divide-Best
x1^low (kN) | 386.20 | 384.30 | 392.53 | 386.20
x1^up (kN) | 425.00 | 422.47 | 422.78 | 425.00
x2^low (kN) | 425.00 | 422.06 | 422.78 | 425.00
x2^up (kN) | 627.20 | 623.69 | 619.36 | 627.19
volume (10^4 kN^4) | 0.7845 | 0.7696 | 0.5756 | 0.7844
φ1^low (%) | - | 0.49 | 1.64 | 0.00
φ1^up (%) | - | 0.60 | 0.75 | 0.00
φ2^low (%) | - | 0.69 | 1.25 | 0.00
φ2^up (%) | - | 0.56 | 0.52 | 0.01
error (%) | - | 1.90 | 26.63 | 0.01

(c) Only x2 is a key parameter (constraints: x1^low ≤ x_c1^low = 250 kN, x1^up ≥ x_c1^up = 300 kN):
 | exact | EPCP | IA-CES | PSO-Divide-Best
x1^low (kN) | 250.00 | 249.14 | 245.05 | 250.00
x1^up (kN) | 561.20 | 556.32 | 552.28 | 561.47
x2^low (kN) | 561.20 | 563.67 | 575.21 | 561.47
x2^up (kN) | 627.20 | 627.48 | 626.38 | 627.19
volume (10^4 kN^4) | 2.0539 | 1.9601 | 1.5721 | 2.0537
φ1^low (%) | - | 0.34 | 1.59 | 0.00
φ1^up (%) | - | 0.87 | 1.98 | 0.05
φ2^low (%) | - | 0.44 | 2.50 | 0.05
φ2^up (%) | - | 0.04 | 0.13 | 0.00
error (%) | - | 4.57 | 23.46 | 0.01
Table 2. The log-volume of the EPCP and PSO-Divide-Best hyper-boxes.
Method | EPCP | PSO-Divide-Best
Log-volume | −698.6824 | −656.9503
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Chen, Y.; Shi, J.; Yi, X.-J. Design Improvement for Complex Systems with Uncertainty. Mathematics 2021, 9, 1173. https://doi.org/10.3390/math9111173
