A Progressive Trigonometric Mixed Response Surface Method for Double-Loop Interval Optimization

Abstract: For some highly nonlinear problems, the general second-order response surface method (RSM) cannot satisfy the accuracy requirement, and to improve accuracy the highest order number normally has to be determined in advance. Thus, a progressive trigonometric mixed response surface method (PTMRSM) is proposed to enhance the approximation accuracy and determine the highest order number automatically, rather than fixing it in advance. A double-loop interval optimization process can then be constructed using this PTMRSM to save time while maintaining accuracy when compared with other experimental or computational methods. However, the traditional double-loop interval optimization method has issues with the possibility degree used for the reliable constraints, so modified reliable constraints are introduced for the construction of this double-loop interval optimization process. A more reliable and effective double-loop interval optimization for practical engineering problems is thus obtained by combining the efficient PTMRSM approximation with the amended reliable constraints. Two numerical test functions and a composite submersible hull are used to verify the accuracy and effectiveness of the PTMRSM and the double-loop interval optimization.


Introduction
In engineering design optimization, it is usual to assess structural behavior using deterministic parameters. In practice, however, the uncertainty associated with actual parameter values is ubiquitous, ranging from simple models to complex systems: geometric dimensions, material properties, loads, boundary conditions, manufacturing tolerances, and so on are all inherently uncertain in real-world problems [1][2][3]. Recently, probabilistic methods based on a probability density function, or distribution function, have become widely used for dealing with uncertainties. Unfortunately, precise probability density functions or a wealth of probabilistic data are rarely available for complicated engineering problems. Furthermore, even a small, inaccurate deviation may lead to a large error in structural problems [4,5].
To resolve this problem, a method named interval analysis was proposed by Moore [6] in the 1960s. In his book, interval analysis is described as dispensing with the premise of a probability density function or large amounts of probabilistic data. Since then, interval analysis has been used as an efficient method to cope with a large number of structural problems. Elishakoff and Ben-Haim [7,8] may have been the first to apply interval analysis as a bounded, uncertain approach for structural engineering. Qiu [5,9] employed the anti-optimization technique to solve linear interval equations for the interval bounds of static displacements of structures.
Nonetheless, all of these studies investigated only linear programming problems, which is an incomplete analysis since most engineering problems are nonlinear. Therefore, a nonlinear interval number programming (NINP) method was introduced by Jiang [10][11][12] to transform the uncertain optimization into a deterministic multi-objective optimization using the penalty function method and the first-order Taylor expansion method. It is worth noting that a large perturbation deviation of the variables may cause inaccurate results in highly nonlinear systems when the first-order Taylor expansion method is used for interval analysis (see Figure 1). Figure 1 shows that the variation (y^L or y^U) of the first-order Taylor expansion method is obviously larger than the variation (y^L or y^U) of the actual objective function. To obtain more reliable results, Li et al. [3,13] formulated a nested-loop optimization procedure for engineering design optimization. Zhao [14] proposed a double-loop optimization that rebuilds the objective and constraints at each iteration step. However, the computational cost of the nested-loop optimization algorithm is always prohibitive, whether the finite element method (FEM) or the experimental method is used; this may cause several weeks or months to be spent on only one optimization design [15]. To overcome this limitation of expensive computing, a surrogate model that can replace FEM or an experimental method can be employed. Wu [1,16] developed a novel interval method for nonlinear systems based on Chebyshev polynomial series to achieve tighter and sharper bounds. Cheng [17] and Li [13] integrated the Kriging approximate model and the nested genetic algorithm to improve computational efficiency. Zhao [14] and Cheng [15] formed radial basis functions (RBF) to solve the nested interval optimization for mechanical performance.
All these surrogate-based optimization methods can reduce the computational time to a few hours for one optimization design [18][19][20].
Although these surrogate models can achieve a good simulation effect, more sample points must be added to obtain more accurate simulation results. Basudhar [21] built a precise approximated explicit decision function based on an adaptive sampling process to update this decision function. Havinga [22] implemented a new sequential improvement criterion to obtain the robust optimization solution and improve the fit of the RBF simultaneously. Cheng [15,23] employed an adaptive Kriging model for more effective computing results by adding more sample points. Wang [24] constructed an adaptive response surface method (ARSM) that required only a reduced number of design experiments for high-dimensional engineering design problems. Chakraborty [25] performed an adaptive RSM in an iterative finite element model updating procedure that proved efficient. It is important to note that more sample points mean not only a more accurate response but also a higher computational cost. For signal classification problems, S. Xu et al. [26] proposed a weighted fuzzy process neural network model whose accuracy is higher than that of other existing automatic identification methods.
Many methods for obtaining more precise approximations with fewer sample points have been developed and implemented. In order to construct a more efficient and accurate approximating model, Kim [27] and Youn [28] combined sensitivity information with the moving least squares method (MLSM). Kaymaz [29] and Taflanidis [30] implemented a response surface model using the MLSM to reduce the computational burden. Li [31] proposed a doubly weighted moving least squares method, combining the normal weight factor of the MLSM with the distance to the most probable failure point. S. Lee [32] proposed a new approach to reliability-based design optimization using the response surface augmented moment method (RSMM) for the progressive update of the response surface. Even though the aforementioned scholars did their best to establish an accurate surrogate model with a small number of sample points, sensitivity data or other information is still needed, which can increase computing costs.
Consequently, the trigonometric mixed response surface method (TMRSM) was proposed as an updated response surface method (RSM) for obtaining more accurate approximate responses from the same sample points [33]. Specifically, most surrogate models are constructed with second-order polynomials [24,[27][28][29]31], but a quadratic RSM model alone may be inadequate for engineering performance in some highly nonlinear cases. The conventional approach is to select the order of the terms artificially in advance and then establish the RSM function. Few papers have conducted more extensive research on the determination of highly nonlinear programming for RSM. Gavin [34] introduced an algorithm that can iteratively decide the fittest polynomial order. Along this path, this paper improves the trigonometric mixed response surface method (TMRSM) into the progressive trigonometric mixed response surface method (PTMRSM), which can automatically determine the order number of the surrogate model using the t-test method. Following that, a double-loop interval optimization can be built using the PTMRSM and reliable constraints. Although the possibility degree may have been first introduced by C. Jiang [10] in the treatment of interval inequality constraints, this paper fixes a minor problem with respect to this possibility degree and proposes reliable constraints in accordance with the amended possibility degree for the double-loop interval optimization. This paper is organized as follows. In Section 2, a novel progressive response surface method (PTMRSM) is introduced to replace the more time-consuming finite element method (FEM) and determine the order number automatically. The accuracy of the PTMRSM is verified by two general numerical test functions and one practical submersible example in Section 3. Then, a new double-loop interval optimization is proposed to improve the accuracy of the interval boundaries using reliable constraints in Section 4.
Finally, in Section 5, a practical submersible composite hull design is carried out to verify the efficiency of the double-loop interval optimization. The conclusion is in Section 6.

Identification of Response Surface Method
The response surface method (RSM) was originally proposed as a simple mathematical expression of the relationship between the input and output of an experiment. This effective method may have been first introduced by Box and Hunter [35] and has since been extended to other fields, especially engineering problems. The RSM function can be presented as:

y(x) = ŷ(x) + ε

where ŷ(x) is the approximate response and ε is the random error. In general, approximate polynomial functions can be modeled as linear, second-order, or higher-order polynomials. Then, the common formula of the RSM model can be indicated as:

ŷ(x) = p^T(x) β

where p(x) is the vector of basis functions and β is the actual vector of the coefficients.

Moving Least Squares Method (MLSM)
In the traditional least squares method (LSM), the response surface function is formed as:

ŷ(x) = p^T(x) β = Σ_{i=1}^{l} p_i(x) β_i

where p(x) = [p_1(x), p_2(x), …, p_l(x)]^T is the vector of basis functions, β = [β_1, β_2, …, β_l]^T is the coefficient vector, and l is the number of terms. To construct this function, m sample points of the variables in the design space must be chosen based on optimal Latin hypercube design (OLHD). The output values can be calculated at these OLHD sample points using computer simulation, such as the finite element method (FEM). Once the required sample data are obtained, the polynomial constructed for all sample points can be written as:

y = P β + ε

where P can be defined as the m × l matrix whose I-th row is p^T(x_I), I = 1, 2, …, m. (5) After that, the experimental residual error between the output values and the response surface values can be expressed as:

L(x) = Σ_{I=1}^{m} w(x − x_I) [p^T(x_I) β − y_I]² = (P β − y)^T W(x) (P β − y)   (6)

where the weight matrix W(x) can be defined as:

W(x) = diag[w(x − x_1), w(x − x_2), …, w(x − x_m)]   (7)

In Equation (6), w(x − x_I) is the weight function, whose common exponential and Gaussian forms are:

w(x − x_I) = exp(−θ ‖x − x_I‖ / D_I)   (exponential),   w(x − x_I) = exp(−θ ‖x − x_I‖² / D_I²)   (Gaussian)   (8)

where ‖x − x_I‖ is the Euclidean distance between the prediction point and the sample point, and D_I is the influence range, whose appropriate size should be selected to contain a sufficient number of neighboring sample points to avoid singularity in the solution. It is useful to remember that D_I should contain at least l sample points, and the weight function should vanish outside the influence range D_I. And θ is the coefficient of the exponential and the Gaussian weight function. Among all the weight functions, the Gaussian function is considered an effective weight by most scholars for the MLSM [30,31]. Figure 2 shows the shape of different weight functions using Equation (8). From Figure 2, the exponential weight function and the Gaussian weight function may have a better weight effect both in the domain range close to the sample point and away from the sample point. Nevertheless, with the exponential weight function, the weight value is not zero near the boundary of the domain area. In this regard, the Gaussian weight function may have a more suitable performance for the MLSM compared to the exponential weight function. The minimum of the experimental residual error L(x) can be obtained by setting the partial derivatives with respect to the coefficients to zero:

∂L(x)/∂β = 2 P^T W(x) (P β − y) = 0   (9)

Then, the evaluated coefficients can be achieved from Equation (9):

β̂(x) = [P^T W(x) P]^{-1} P^T W(x) y   (10)

Therefore, the contribution of a sample point decreases as the distance between the prediction point x and x_I increases. This implies the MLSM can provide both a local and a global approximation over the entire region.
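To make the MLSM construction concrete, the following is a minimal Python sketch under assumed names (`mlsm_predict`, `basis` are illustrative, not from the paper): it builds the design matrix P, weights the samples with the Gaussian function of Equation (8), and solves Equation (10) for the local coefficients.

```python
import numpy as np

def mlsm_predict(x, X, y, basis, D, theta=1.0):
    """Moving least squares prediction at point x.

    X : (m, n) sample points, y : (m,) responses,
    basis : callable mapping a point to its l basis values,
    D : influence range, theta : Gaussian weight coefficient.
    """
    P = np.array([basis(xi) for xi in X])          # (m, l) design matrix
    d = np.linalg.norm(X - x, axis=1)              # distances to the sample points
    # Gaussian weight of Equation (8), vanishing outside the influence range D
    w = np.where(d <= D, np.exp(-theta * (d / D) ** 2), 0.0)
    W = np.diag(w)
    # beta(x) = [P^T W(x) P]^{-1} P^T W(x) y  -- Equation (10)
    beta = np.linalg.solve(P.T @ W @ P, P.T @ W @ y)
    return basis(x) @ beta

# quadratic basis in one variable: p(x) = [1, x, x^2]
basis = lambda p: np.array([1.0, p[0], p[0] ** 2])
X = np.linspace(0, 1, 9).reshape(-1, 1)
y = (X[:, 0] - 0.3) ** 2                           # exactly quadratic test response
print(mlsm_predict(np.array([0.5]), X, y, basis, D=1.0))
```

Because the test response is itself quadratic, the weighted fit reproduces it exactly and the prediction at x = 0.5 equals (0.5 − 0.3)² = 0.04 regardless of the weights.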

Trigonometric Mixed Response Surface Method
Normally, the base function of the response surface method is polynomial. However, if the underlying relationship has considerable curvature and is extremely complicated, even high-order polynomial programming may fail to meet the accuracy requirement. For this condition, Lee and Lin [36,37] proposed a novel type of RSM to solve these complex composite problems using trigonometric functions as the basis functions. In this method, the trigonometric functions (sin x and cos x) are chosen as the basic elements to replace the traditional polynomial terms. Then, the vector of the trigonometric base functions (here up to third order, for n variables) can be presented as:

p(x) = [1, sin x_1, …, sin x_n, cos x_1, …, cos x_n, sin² x_1, …, sin² x_n, cos² x_1, …, cos² x_n, sin x_1 sin x_2, …, cos x_{n−1} cos x_n, sin³ x_1, …, sin³ x_n, cos³ x_1, …, cos³ x_n, …]^T
Therefore, a trigonometric mixed response surface method (TMRSM) is proposed to integrate the trigonometric functions and the MLSM based on OLHD [33].
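The trigonometric basis can be enumerated programmatically. The sketch below is one plausible construction under our own assumptions (the exact term set used in [33] may differ, e.g., in how mixed sin/cos products of the same variable are handled): it forms all products of up to `order` factors drawn from sin x_i and cos x_i.

```python
import numpy as np
from itertools import combinations_with_replacement

def trig_basis(x, order):
    """Trigonometric mixed basis: products of up to `order` factors,
    each factor being sin(x_i) or cos(x_i) for some variable x_i."""
    elems = [np.sin(v) for v in x] + [np.cos(v) for v in x]  # basic elements
    terms = [1.0]                                            # constant term
    for k in range(1, order + 1):
        for combo in combinations_with_replacement(elems, k):
            terms.append(np.prod(combo))                     # k-th order product
    return np.array(terms)

# two variables, second order: 1 constant + 4 linear + 10 quadratic = 15 terms
print(len(trig_basis(np.array([0.1, 0.2]), 2)))
```

The number of terms grows combinatorially with the order, which is one reason the progressive scheme below stops raising the order as soon as the accuracy criteria are met.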

Progressive Trigonometric Mixed Response Surface Method
Usually, most RSM models in the literature take second-order polynomials as their basic form. TMRSM, though, has successfully improved the accuracy of the surrogate model, and the determination of the higher-order polynomial should be tested in advance for the given sample data. To circumvent this deficiency, a progressive trigonometric mixed response surface method (PTMRSM) is proposed in this paper to approximate these sample points and test the necessity of the higher-order terms using a t-test.
According to Equations (4) and (10), the experimental residual error can be written as:

ε = y − P β̂(x)

Hence, the mathematical expectation and the variance can be expressed as E(ε_I) = 0 and Var(ε_I) = σ², where σ² is the variance of the experimental error. Furthermore, the expectation of the sum of the squared residual errors is:

E(Σ_{I=1}^{m} ε_I²) = (m − l) σ²   (18)

Hence, this squared residual error obeys the Chi-square distribution:

Σ_{I=1}^{m} ε_I² / σ² ~ χ²(s)   (19)

According to Equations (18) and (19), the summation of the squared residual errors should abide by:

s S_d² / σ² ~ χ²(s)   (20)

where s = m − l is the number of degrees of freedom and S_d² = Σ_{I=1}^{m} ε_I² / s is the unbiased estimate of σ². From Equation (10), the mathematical expectation of the evaluated coefficient can be expressed as E(β̂) = β. So, the variance of the evaluated coefficient can be indicated as Var(β̂_j) = c_jj σ², where c_jj is the j-th diagonal element of [P^T W(x) P]^{-1}. And a standard normal distribution can be achieved:

(β̂_j − β_j) / (σ √c_jj) ~ N(0, 1)   (23)

Combining Equations (20) and (23), the t distribution can be represented as:

t_j = (β̂_j − β_j) / (S_d √c_jj) ~ t(s)   (24)

where t(s) is a t-distribution with s degrees of freedom.
Determining the significance of the higher-order terms means measuring the importance of these evaluated coefficients: insignificant coefficients stand for unimportant higher-order terms. According to Equation (10), the null hypothesis H_0: β_j = 0 can be tested with the statistic

t_j = |β̂_j| / (S_d √c_jj)

where t(s) indicates the t-distribution with s degrees of freedom (see Equation (24)); S_d is the standard deviation of the coefficients; and j is the index of the terms of the model, running from the first to the last of the highest-order terms.
For this t-test, a two-sided test with 90% confidence intervals is chosen in this paper [34]. Then, the boundary value t_0.9 can be calculated from the t-distribution function. If t_j > t_0.9, the null hypothesis H_0 can be rejected. This implies the coefficient β̂_j ≠ 0 and the relevant high-order terms may have a significant influence. Otherwise, the null hypothesis H_0 cannot be dismissed; that is to say, the coefficient β̂_j = 0 and these high-order terms may not have a substantial effect.
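The boundary value t_0.9 for the two-sided test can be obtained from the quantile function of the t-distribution, for example with SciPy (a sketch; the helper name `t_boundary` is ours):

```python
from scipy import stats

def t_boundary(m, l, confidence=0.90):
    """Two-sided t-test boundary t_0.9 with s = m - l degrees of freedom:
    reject H0 (beta_j = 0) when |t_j| exceeds this value."""
    s = m - l                                   # degrees of freedom
    return stats.t.ppf(1 - (1 - confidence) / 2, s)

# e.g. 40 samples and 15 model terms give s = 25 degrees of freedom
t09 = t_boundary(40, 15)
print(round(t09, 3))   # 1.708
```

As the number of degrees of freedom grows, the boundary shrinks toward the normal-distribution quantile, so larger sample sets make it easier to detect significant higher-order terms.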
In addition to the t-test criterion, other criteria are needed to cover relationships in which the odd- or even-order terms have a significant effect on the response results. Only testing whether the highest order has an important effect may easily lead to inaccurate approximations of these special functional relationships. To avoid this deficiency, the coefficient of determination R² and the mean relative error MRE should be introduced for the m sample points, respectively.
The coefficient of determination is defined as R² = 1 − SSE/SST (28), where SSE represents the sum of squares for error, caused by experimental error, and SST expresses the total sum of squares, representing the total variation of the m sample observations. These two values can be shown as:

SSE = Σ_{I=1}^{m} (y_I − ŷ_I)²,   SST = Σ_{I=1}^{m} (y_I − ȳ)²

where ȳ is the mean value of the output response y, which can be expressed as ȳ = (1/m) Σ_{I=1}^{m} y_I. The coefficient of determination is a statistical indicator that reflects how reliably the response surface model accounts for the variation of the dependent variable. The closer this value is to 1, the better the approximation will be. If the coefficient of determination satisfies R² ≥ 0.99 for the m sample points, the approximation degree is considered sufficient [38].
The mean relative error, MRE = (1/m) Σ_{I=1}^{m} |y_I − ŷ_I| / |y_I| (29), is another indicator that illustrates the relative error between the actual response and the approximate response. The smaller the MRE value is, the smaller the error. For most practical engineering problems, MRE ≤ 0.02 can be viewed as an appropriate limit [38]. Finally, another stopping criterion is the highest-order limit. Given the possibility of over-fitting the approximate functions, this model takes the sixth order as a stopping criterion, since the highest terms rarely exceed the fifth order for most engineering problems [34].
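Both stopping criteria are straightforward to compute from the sample responses; a short sketch follows (the function name `fit_metrics` is illustrative):

```python
import numpy as np

def fit_metrics(y, y_hat):
    """Coefficient of determination R^2 and mean relative error MRE
    over the m sample points (Equations (28)-(29), as reconstructed)."""
    sse = np.sum((y - y_hat) ** 2)           # sum of squares for error
    sst = np.sum((y - np.mean(y)) ** 2)      # total sum of squares
    r2 = 1.0 - sse / sst
    mre = np.mean(np.abs(y - y_hat) / np.abs(y))
    return r2, mre

y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.02, 1.98, 3.03, 3.99])   # a close surrogate prediction
r2, mre = fit_metrics(y, y_hat)
print(r2 > 0.99, mre < 0.02)                 # both stopping criteria satisfied
```

Note that MRE as defined divides by |y_I|, so responses near zero should be shifted or scaled before applying this criterion.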
The proposed PTMRSM starts with order k = 3, and the program determines whether the t-test values of the third-order terms are sufficient at the 90% confidence level. If not, the coefficient of determination and the mean relative error are used to check the reliability of the fitting performance and the relative error. If these criteria are satisfied, the process stops and outputs the response. If not, the process continues until the highest order reaches the sixth order; the response of the PTMRSM is then output together with the message "Warning: the highest order has reached the maximum value (6th order)". Figure 4 shows the step-by-step procedure of this algorithm:
1. Perform the OLHD sampling, and obtain the responses at these sample points based on FEM or specific functions;
2. Transform all the sample points and input parameters x into the trigonometric functions sin x and cos x. Then, choose order k = 3 as the beginning, and construct the third-order TMRSM;
3. Calculate the values of the t-test criterion t_j. Meanwhile, the coefficient of determination R² and the mean relative error MRE can also be calculated from Equations (28) and (29);
4. Check the t-test criterion and decide whether the process should stop. If t_j < t_0.9, the highest-order terms are insignificant; stop the process, and construct the model with the (k − 1)-th order TMRSM;
5. If the t-test criterion is not satisfied, check the criteria R² ≥ 0.99 and MRE ≤ 0.02. If these criteria are fulfilled, the fitting performance and the accuracy of the approximation are sufficient; stop this operation and build the model with the k-th order TMRSM;
6. Otherwise, select the (k + 1)-th order for the next step, and check whether the highest order k = 6 has been reached. If not, repeat steps 3 to 6 until one of these criteria is satisfied;
7. When the highest order k = 6 is reached, establish the model with the sixth-order TMRSM and output "Warning: the highest order number has reached the maximum value (6th order)". The pseudocode for the PTMRSM process is listed in Algorithm 1.

Algorithm 1: The pseudocode for the PTMRSM process

    Read the input variable matrix and the output response matrix: load Sample.mat
    Set the confidence level of the t-test criterion, the coefficient of determination criterion,
        and the mean relative error criterion
    Transform the input variables and the sample matrix into trigonometric functions sin θ and cos θ
    for sn = 1 : number of response columns
        for k = 3 : 6
            Construct the P matrix with MLSM using order k
            Calculate the errors between the actual response values and the predicted values
            Calculate the t-test criterion t_j, the coefficient of determination R²,
                and the mean relative error MRE
            if t_j < t_0.9                        % highest-order terms insignificant
                k = k − 1; break
            elseif R² ≥ 0.99 and MRE ≤ 0.02       % approximation accuracy sufficient
                break
            end
        end
        Adopt the new order and reconstruct the TMRSM model using the updated order
        Output the response value: h_order(sn) = k
    end

Numerical Rastrigin Function
The first numerical function is the well-known Rastrigin function, which can be stated as:

f(x_1, x_2) = 20 + x_1² + x_2² − 10 (cos 2πx_1 + cos 2πx_2)

For this Rastrigin function, 40 sample points were designed using optimal LHD, and 100 random test points were adopted for the verification of each surrogate model. Figure 5 shows the shapes of the responses of the original function, the second-order RSM with MLSM, the second-order TMRSM, and the PTMRSM under 40 sample points. It is obvious from Figure 5a,d that the PTMRSM model has exactly the same shape as the original model; namely, the PTMRSM model has the best approximation performance in comparison with the other surrogate models. Table 1 presents the performance and accuracy measures of each surrogate model under 100 random test points. From Table 1, it is further confirmed that the PTMRSM model has a better fitting performance and a smaller mean relative error than the other surrogate models. For the PTMRSM model, Figure 6 shows the order numbers decided by the step-by-step procedure for the 100 random test points. It can be seen that the orders of the PTMRSM models are third and fourth order, which implies that the highest order number is not a fixed value for the PTMRSM.
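For reference, the reconstructed two-variable Rastrigin function and an ordinary Latin hypercube sample (a plain LHD stand-in for the optimal LHD used in the paper) can be set up as follows:

```python
import numpy as np
from scipy.stats import qmc

def rastrigin2(x1, x2):
    """Two-variable Rastrigin function:
    f(x1, x2) = 20 + x1^2 + x2^2 - 10 (cos 2*pi*x1 + cos 2*pi*x2)."""
    return 20 + x1 ** 2 + x2 ** 2 \
        - 10 * (np.cos(2 * np.pi * x1) + np.cos(2 * np.pi * x2))

# plain Latin hypercube sampling over the usual Rastrigin domain [-5.12, 5.12]^2
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(40), [-5.12, -5.12], [5.12, 5.12])  # 40 samples
y = rastrigin2(X[:, 0], X[:, 1])

print(rastrigin2(0.0, 0.0))   # global minimum value: 0.0
```

The many cosine ripples of this function are exactly what makes a fixed second-order polynomial surrogate inadequate and a trigonometric basis attractive.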

Numerical Example Function
The next numerical function is an example function that has been utilized in the literature [34].
In this example function, 40 sample points were also designed based on optimal LHD, and 100 random test points were taken to verify each surrogate model. Figure 7 shows the approximation results, and Figure 8 represents the order numbers of the PTMRSM model through the 100 random test points. As seen, for these test points, all the highest order numbers of the constructed PTMRSM models are third order, which means the higher-order terms can improve the performance and accuracy.

Composite Submersible Hull Example
In this paper, a composite submersible hull of the well-known Myring type is put forward as a practical example to withstand underwater pressure [39]. For this composite structure optimization, the stacking sequence is one of the non-trivial factors for buckling performance. The conventional stacking sequence can be designed in a symmetrical form. This paper takes a symmetric stacking sequence with a total of 10 layers, and the critical buckling pressure P_cr is considered the pivotal design objective. Another design constraint is the maximum buckling displacement d_max under the underwater pressure. Consequently, the orientation angles of the layers define the design vector θ, and the thickness of the layers is considered an uncertain factor t because of the manufacturing tolerance. Figure 9 shows the diagram of the composite submersible hull. From the accuracy comparison table, it is obvious that the PTMRSM model has more precise results and better fitting responses than the other RSM models for both the critical buckling pressure and the maximum buckling displacement. Figures 10 and 11 display the highest-order distributions under 100 testing points for the critical buckling pressure and the maximum buckling displacement, respectively. For this practical submersible hull, the third order is adequate in terms of both the buckling performance and the deformation performance.

Interval Optimization Method
Based on the interval analysis method [11,40,41], the real numbers are bounded by intervals, and the interval vector can be expressed in the following form:

α^I = [α^L, α^U] = {α | α^L ≤ α ≤ α^U},   α^C = (α^L + α^U)/2,   α^W = (α^U − α^L)/2   (35)

where α^L and α^U denote the lower and upper bounds, respectively, and α^C and α^W denote the center-point vector and the radius vector, respectively. Considering the interval vectors, the objective and constraint functions are bounded by lower and upper intervals, respectively:

f(x, α) ∈ [f^L(x), f^U(x)],   g_i(x, α) ∈ [g_i^L(x), g_i^U(x)]

where x is the vector of design variables and α is the vector of uncertain interval parameters.
C. Jiang performed the first-order Taylor expansion method, which is a simple approximation for the evaluation under relatively small uncertainty [10,11,42,43]:

f^L(x) = f(x, α^C) − Σ_i |∂f(x, α^C)/∂α_i| α_i^W,   f^U(x) = f(x, α^C) + Σ_i |∂f(x, α^C)/∂α_i| α_i^W   (39)

Although this Taylor expansion method is feasible and convenient, inaccurate bounds can be obtained for highly nonlinear complex structural problems under large perturbation deviations. Thus, a double-loop interval optimization method can be proposed using sequential quadratic programming (SQP) by J. Wu and F. Li [2,13,44].
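The following sketch illustrates why the first-order Taylor bounds of Equation (39) degrade under a large perturbation radius, using a hypothetical cubic response (the helper `taylor_bounds` approximates the gradient by central differences):

```python
import numpy as np

def taylor_bounds(f, alpha_c, alpha_w, h=1e-6):
    """First-order Taylor interval bounds (Equation (39), as reconstructed):
    f^{L,U} = f(alpha_c) -/+ sum_i |df/dalpha_i| * alpha_w_i."""
    grad = np.array([(f(alpha_c + h * e) - f(alpha_c - h * e)) / (2 * h)
                     for e in np.eye(len(alpha_c))])   # central differences
    spread = np.sum(np.abs(grad) * alpha_w)
    fc = f(alpha_c)
    return fc - spread, fc + spread

# a nonlinear response with a large uncertainty radius: alpha in [0.2, 1.8]
f = lambda a: a[0] ** 3
lo_t, hi_t = taylor_bounds(f, np.array([1.0]), np.array([0.8]))
# a^3 is monotone on [0.2, 1.8], so the true bounds sit at the endpoints
lo, hi = f([0.2]), f([1.8])
print(lo_t, hi_t)   # Taylor bounds: approximately (-1.4, 3.4)
print(lo, hi)       # actual bounds: (0.008, 5.832)
```

The Taylor upper bound (3.4) badly underestimates the true maximum (5.832), while the lower bound even goes negative, which is the large-deviation inaccuracy that motivates the double-loop approach.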
With reference to Equation (35), the interval objective and constraint functions can be described by their center points and radii, respectively:

f^C(x) = (f^L(x) + f^U(x))/2,   f^W(x) = (f^U(x) − f^L(x))/2

Therefore, the above interval optimization problem can be transformed into the deterministic two-objective problem below:

min_x { f^C(x), f^W(x) }   subject to   P(g_i^I(x) ≥ b_i^I) ≥ λ_i,   i = 1, 2, …, k   (46)

where b_i^I is the allowable interval of the i-th constraint and λ_i is the preset possibility degree level.

Reliable Constraints
To compare two interval numbers, a measure named the possibility degree is introduced, which converts the relation between two interval numbers into a deterministic one. This possibility degree may have been first introduced by C. Jiang to express whether one interval value is larger or smaller than another [28,45,46]. Figure 12 shows the six positional cases for the two interval values A and B. To describe this deterministic relationship, the possibility degrees of the cases in Figure 12 can be written down for ranking the two interval numbers A and B. However, when the event H_3 occurs, three possible events (A < B, A = B, and A > B) may be encountered. An assumption that all these events have an equal chance was made by P. Sevastianov [47]; that is, when the event H_3 occurs, the possibility of each event is 1/3. Then, the conditional possibilities can be expressed accordingly, and the possibility of the event A ≤ B can be obtained. In Case 3 of Figure 12, there are also three possible events, A < B, A = B, and A > B (see Figure 13). Defining these three events in terms of the random parameters a and b in the given subintervals when Case 3 appears, the probability degree of the event A ≤ B can be attained. This value differs from those of Cases 2 and 3 in Equation (45), which implies that C. Jiang only made the assumption for the events B < A and B = A, rather than for the event B > A. This assumption, however, is at odds with reality. Similarly, the events A ≤ B in Cases 4 and 5 can be treated, respectively. So the amended possibility degree can be improved by integrating Equations (47), (51), and (55)-(57). On the other hand, the possibility degree under the opposite ranking of the two interval numbers A and B can be derived in the same way.

When considering the event A = B, the possibility degrees of Cases 1 and 6 are zero; this possibility degree equals one only when the two intervals coincide. For the conditions of Cases 2 and 4, the possibility degrees can be displayed, respectively, and analogously for Cases 3 and 5. Thus, the whole possibility degree under the ranking of the event A = B can be expressed by combining these cases.

It should be mentioned that P(A ≤ B) = λ means the possibility degree level is λ when A is smaller than B: λ = 1 means that A is always smaller than B, and λ = 0 represents that A is always larger than B. Under this amended possibility degree method, the reliable constraints should fulfill a preset confidence degree level, and these reliable constraints are transformed into simple deterministic constraints for the double-loop interval optimization. Once the premise of the possibility degree has been ensured, the simple deterministic constraints can be expressed as:

P(g_i^I(x) ≥ b_i^I) ≥ λ_i,   i = 1, 2, …, k

(see Equations (42) and (43)).
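As a numerical sanity check of the probabilistic reading of the possibility degree (uniform a in A and b in B, in the spirit of Sevastianov's equal-chance assumption), one can estimate P(A ≤ B) by Monte Carlo. Note this checks only the probabilistic interpretation, not the paper's closed-form amended expressions:

```python
import numpy as np

def possibility_degree(A, B, n=200_000, seed=0):
    """Monte Carlo estimate of P(a <= b) with a ~ U(A), b ~ U(B),
    for intervals A = (aL, aU) and B = (bL, bU).  The tie a = b has
    measure zero for non-degenerate intervals."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(A[0], A[1], n)
    b = rng.uniform(B[0], B[1], n)
    return np.mean(a <= b)

print(possibility_degree((0.0, 1.0), (2.0, 3.0)))  # disjoint, A below B: 1.0
print(possibility_degree((0.0, 2.0), (1.0, 3.0)))  # overlapping: about 0.875
```

For the overlapping case the exact value is 1 − 1/8 = 0.875, since a > b requires both samples to land in the overlap in the right order; the estimate converges to this as n grows.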

Double-Loop Interval Optimization Method
For the sake of dealing with the two-objective problem (see Equation (46)), a linear combination of the two objectives is introduced, which can be generally defined as

f_d(x) = w f^C(x)/φ − (1 − w) f^W(x)/φ

where φ is the normalization factor that brings f^C(x) and f^W(x) to the same magnitude and w is the weighting coefficient. It must be emphasized that the interval optimization problem may be either a minimization or a maximization problem; in this paper, the objective is a maximization problem. Consequently, the linear combination of the objectives together with the reliable constraints yields a single deterministic optimization problem. In order to perform this double-loop interval optimization process, the bounds of the interval objectives and constraints should be evaluated in advance. Therefore, an inner-loop optimization should be presented to solve this min-max problem. Note that sequential quadratic programming (SQP) [3,48,49] and particle swarm optimization (PSO) [49,50] are selected as the inner and outer optimization solvers, respectively. Figure 14 shows the flowchart of the double-loop interval optimization method.
It is clear that this flowchart describes a nested double-loop interval optimization problem. The design and uncertain spaces are sampled using OLHD, and the responses at these sample points are obtained by ANSYS simulation. For the sake of saving computational cost, the aforementioned PTMRSM surrogate model is used for the objective and constraint functions instead of the more time-consuming ANSYS simulation. In the inner-loop interval optimization process, SQP is executed to compute the bounds of the inner-loop objective and constraint with respect to the uncertain interval vector α. Then, during the outer-loop interval optimization process, PSO is carried out to optimize the linear combination objective under the reliable constraints for the optimal design vector x.
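A compact sketch of the nested loops follows, with SciPy's SLSQP standing in for SQP and differential evolution standing in for PSO; the response function f and the bounds are hypothetical placeholders, not the paper's surrogate:

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

# hypothetical smooth surrogate standing in for the PTMRSM model
def f(x, alpha):
    return np.sin(x[0]) + x[0] * alpha[0] + 0.5 * alpha[0] ** 2

ALPHA_BOUNDS = [(0.8, 1.2)]      # uncertain interval parameter alpha

def inner_bounds(x):
    """Inner loop: an SQP-type solver (SLSQP) finds min and max of f over alpha."""
    a0 = np.mean(ALPHA_BOUNDS, axis=1)
    lo = minimize(lambda a: f(x, a), a0, bounds=ALPHA_BOUNDS, method="SLSQP").fun
    hi = -minimize(lambda a: -f(x, a), a0, bounds=ALPHA_BOUNDS, method="SLSQP").fun
    return lo, hi

def outer_objective(x):
    """Outer loop objective: maximize the interval center while penalizing the
    radius (a stand-in for the paper's normalized linear combination)."""
    lo, hi = inner_bounds(x)
    fc, fw = (lo + hi) / 2, (hi - lo) / 2
    return -(fc - fw)            # minimize the negative -> maximize

# differential evolution as a stand-in for PSO in the outer loop
res = differential_evolution(outer_objective, bounds=[(0.0, 3.0)], seed=1, tol=1e-8)
print(res.x, -res.fun)
```

Every outer-loop candidate x triggers two inner optimizations over alpha, which is exactly why replacing the ANSYS response with an inexpensive surrogate is essential for this scheme.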

The Objective Function of the Submersible Hull Example
As mentioned above, the critical buckling pressure P_cr can be regarded as the inner-loop objective function; meanwhile, the maximum buckling displacement d_max is another significant response that can be considered as the inner-loop constraint. In this submersible optimization design, the larger the buckling pressure is, the better the optimal solution. Thus, the center point of the critical buckling pressure is maximized while its radius is minimized. The λ_i are the preset possibility degree levels, which can be selected between 0 and 1; the larger λ is, the stricter the constraints will be, which may lead to a narrower feasible region [10]. In this paper, common preset possibility levels (slight, neutral, strong, and absolute) are defined. Table 4 shows the optimal design results of the double-loop interval optimization for the 3rd-order RSM with MLSM and for the PTMRSM, and of the nonlinear interval number programming (NINP) method for the PTMRSM model under the neutral-level possibility degree. The NINP method was proposed by C. Jiang [11,42] and is treated here as a comparative item. From this table, it can be seen that the double-loop interval optimization for the PTMRSM has the smallest radius of the critical buckling pressure, which means the deviation of the optimal design solution is least affected by the uncertain variables. In other words, the interval optimization for the PTMRSM can obtain a more robust and reliable optimal design solution. Furthermore, the optimal design solution of the interval optimization for the PTMRSM is more similar to that of the NINP method than to that of the interval optimization for the 3rd-order RSM with MLSM. Figure 15 shows the lower and upper bounds of the critical buckling pressure for one variable orientation angle with the other four orientation angles fixed, under the neutral possibility degree.
Figure 15a indicates the lower and upper bounds of the critical buckling pressure under one variable (θ_1) and four invariants (θ_2-θ_5). From these figures, it should be noted that the deviation of the bounds of the interval optimization for the PTMRSM is smaller than the deviations of the bounds of the other two optimization methods.

Results and Analysis
Tables 5-7 give the optimal design solutions under the slight-level, strong-level, and absolute-level possibility degrees. From Tables 4-7, it is obvious that the center point of the critical buckling pressure becomes smaller with the increase of the possibility degree. This phenomenon may be due to the fact that an increasing possibility degree shrinks the space of feasible optimal solutions. Among all the optimal design solutions, it can be seen that the interval optimization for the PTMRSM always has the smallest radius of the critical buckling pressure, which implies that it can attain robust and efficient optimal design solutions. Figures 16-18 display the lower and upper bounds of the critical buckling pressure under the slight-level, strong-level, and absolute-level possibility degrees. From these figures, it is clear that the results of the interval optimization for the PTMRSM also have the smallest deviations of the bounds compared to the other two optimization methods. This means that the double-loop interval optimization method with the PTMRSM is sufficiently efficient and robust in contrast to the same interval optimization method with the 3rd-order RSM with MLSM and the NINP optimization method.

Conclusions
In this paper, a progressive trigonometric mixed response surface method (PTMRSM) was introduced to increase the approximation accuracy for some highly nonlinear engineering problems. The results of two numerical examples and a composite submersible hull were utilized to verify its efficiency and accuracy. For these highly nonlinear complex problems, the PTMRSM may have a better fitting performance and avoids the prior determination of the highest order number of the polynomial terms.
Then, the PTMRSM is utilized as the approximation step of the novel double-loop interval optimization method. With the help of the PTMRSM and the amended reliable constraints, the novel double-loop interval optimization process can be constructed, and a composite submersible hull example is used to verify this new interval optimization method in comparison with the NINP method and the traditional RSM with the MLSM model. From the comparison results of the composite submersible hull, it can be seen that the double-loop interval optimization for the PTMRSM can obtain a robust and efficient optimal design solution.
Furthermore, the center point of the critical buckling pressure becomes smaller with the increase in the possibility degree. This phenomenon may be due to the fact that the increasing possibility degree limits the space of the optimal solutions.
Among the whole optimal design solutions, it can be seen that the interval optimization for PTMRSM always has the smallest radius and deviations of the critical buckling pressure. This implies the double-loop interval optimization for PTMRSM can attain a robust and efficient optimal design solution.