1. Introduction
With the global economic boom driving steady growth in oil consumption, countries are increasingly turning to deep-sea oil development. As key offshore operation facilities for drilling and oil extraction, the number of semi-submersible platforms is rising alongside the offshore oil demand, leading to a higher risk of ship collisions [1,2]. Such collisions generate massive impact forces, causing severe structural damage to these large-scale facilities. Moreover, the harsh working environment of semi-submersible platforms makes component repair more difficult and costly. Thus, improving the platform’s anti-collision capacity during the design stage has become a crucial research focus. China’s “North Dragon” semi-submersible drilling platform, designed to meet Norwegian Maritime Directorate and offshore industry standards, operates in the North Sea and Barents Sea with a maximum water depth of 1200 m, a maximum drilling depth of 8000 m, and a service temperature of −25 °C. It can withstand once-in-a-century storms, features an advanced automated drilling system (boosting efficiency by 15% via parallel automatic pipe arranging), and is uniquely capable of Arctic operations, making structural optimization to enhance its anti-collision and working performance practically significant.
Figure 1 displays the “North Dragon” semi-submersible drilling platform.
To study the dynamic response of large-scale structures to ship collisions, the nonlinear finite element method is widely used. It accounts for material nonlinearity, structural large deformation, local damage, and overall failure to obtain parameters such as stress, strain, and indentation in the collision area. Combined with internal and external mechanism analysis, it can accurately simulate collision processes, partially replacing ship model and real-ship tests. Zhou et al. [3] analyzed ship collision impacts on a 1018 m cable-stayed bridge via numerical simulation to validate Eurocode’s collision force accuracy; Storheim and Amdahl [4] performed nonlinear finite element modeling of a 7500-ton supply vessel to study the ship’s collision damage to floating semi-submersibles and fixed jackets; Sha et al. [5] optimized the design of a steel bridge girder through finite element analysis of ship–bridge collision structural responses; Rudan et al. [6] calibrated ship collision simulation parameters via finite element analysis of LPG ship–ferry collisions; and Fernandez et al. [7] compared supply vessel bow modeling methods through finite element analysis of supply vessel–FPSO collisions.
Despite their high accuracy and adaptability to complex scenarios, numerical analysis methods (e.g., finite element analysis) have high computational complexity and long cycles, especially in structural optimization, which requires numerous simulations and may even make optimization unfeasible. In simulation-based optimization, approximate models (e.g., polynomial regression surface (PRS) [8], Kriging [9,10], radial basis function (RBF) [11], multivariate adaptive regression splines (MARS) [12], and inductive learning models [13]) construct input–output relationships via limited simulations to replace time-consuming black-box models.
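To illustrate the general idea of replacing an expensive black-box simulation with an approximate model, the following minimal Python sketch fits an RBF surrogate to a small Latin hypercube sample; the function expensive_simulation is a hypothetical placeholder, not a model from this study.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def expensive_simulation(x):
    # Hypothetical stand-in for a time-consuming finite element analysis
    return np.sum((x - 0.3) ** 2, axis=1)

# Small Latin hypercube design in a 6-dimensional unit box
sampler = qmc.LatinHypercube(d=6, seed=0)
X = sampler.random(n=65)               # limited "expensive" evaluations
y = expensive_simulation(X)

# Fit an RBF surrogate that replaces the black-box model
surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

# Cheap predictions at many new candidate points
X_new = sampler.random(n=1000)
y_pred = surrogate(X_new)
print(y_pred.min())
```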
Traditional surrogate-based optimization first builds surrogates for time-consuming objective/constraint functions, then optimizes them with meta-heuristic algorithms (e.g., genetic algorithms) [14]. Static surrogates need high accuracy across the entire design space (requiring many sampling points), while surrogate-assisted meta-heuristic algorithms [15] dynamically update surrogates but still rely on population evolution, leading to insignificant reductions in simulation time. In contrast, surrogate-based optimization algorithms (e.g., the adaptive response surface method (ARSM) [16], efficient global optimization (EGO) [17], and mode-pursuing sampling (MPS) [18]) explore the original problem by adding renewal points, integrating global/local search, and fully utilizing surrogate information, requiring fewer objective function calculations. Among these, EGO (based on Kriging) is widely studied due to its simple framework and high efficiency. Surrogate-based optimization approaches, such as Kriging-assisted EGO, have shown promise in reducing computational burden while maintaining accuracy. These methods are particularly valuable in contexts where high-fidelity simulations are required, such as in the vibration analysis of nanocomposite-enhanced shell structures [19,20].
The classic EGO algorithm [21] works as follows: it builds an initial Kriging model with sample points; then, it iteratively selects the point with the maximum expected improvement (EI) as the renewal point, calculates its true target value, and updates the model until optimization is achieved. In engineering, multi-objective optimization often relies on mature meta-heuristic algorithms (e.g., NSGA-II [22]). However, these require thousands of objective function calculations, rendering them unsuitable for time-consuming tasks. EGO’s efficiency in single-objective optimization has led to its application in multi-objective scenarios, mainly by extending single-objective EI to multi-objective EI: defining a multi-objective improvement function (e.g., Euler distance, maximum/minimum distance, hypervolume (HV)) to measure improvement on the non-dominated front, and then integrating it in the non-dominated area. Relevant research includes that by Keane [23] (Euler distance EI), Bautista [24] and Svenson [25] (maximum/minimum distance EI), and HV-based EI (initially a screening criterion [26], later used as an EGO update criterion [27]). Other up-to-date optimization schemes include the Adam optimizer [28,29]. Adam (adaptive moment estimation) is a stochastic gradient descent method that combines momentum with per-parameter learning-rate scaling for the efficient and robust training of deep learning models in engineering.
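For reference, the classic single-objective EI criterion has the closed form EI(x) = (y_min − μ(x))Φ(z) + σ(x)φ(z), with z = (y_min − μ(x))/σ(x), where μ and σ are the Kriging prediction and its standard deviation and y_min is the best value observed so far. The sketch below illustrates this criterion, using scikit-learn's Gaussian process model as a stand-in Kriging model; the toy objective and sample sizes are purely illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(X_cand, gp, y_min):
    """Standard EI criterion for minimization, as used in classic EGO."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
    z = (y_min - mu) / sigma
    return (y_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Usage sketch: fit a Kriging-type model and pick the point with maximum EI
rng = np.random.default_rng(0)
X = rng.random((20, 2))
y = np.sum((X - 0.5) ** 2, axis=1)            # cheap stand-in objective
gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

X_cand = rng.random((500, 2))
ei = expected_improvement(X_cand, gp, y.min())
x_next = X_cand[np.argmax(ei)]                # renewal point with maximum EI
print(x_next)
```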
The standard EGO is serial (with one renewal point per iteration), which makes it incompatible with prevalent parallel computing hardware. Parallel EGO selects multiple renewal points per iteration for parallel computing (on multiple computers or cores), accelerating convergence and reducing the total optimization time. Although it requires more true target calculations, computing them simultaneously shortens the wall-clock time; in other words, it trades computing resources for time, which is significant in engineering, where computing resources are abundant and time is valuable.
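As a simple illustration of this trade-off (not the implementation used in this work), the sketch below evaluates three renewal points simultaneously with Python's multiprocessing module; run_simulation is a hypothetical placeholder for one expensive finite element run.

```python
from multiprocessing import Pool
import numpy as np

def run_simulation(x):
    # Hypothetical stand-in for one expensive finite element collision run
    return float(np.sum((np.asarray(x) - 0.3) ** 2))

if __name__ == "__main__":
    # Three renewal points chosen in one parallel-EGO iteration
    renewal_points = [[0.1] * 6, [0.4] * 6, [0.8] * 6]

    # Evaluate them simultaneously on three worker processes; the wall-clock
    # time is then roughly that of a single run
    with Pool(processes=3) as pool:
        new_values = pool.map(run_simulation, renewal_points)

    print(new_values)  # true objective values used to update the Kriging model
```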
In summary, two significant challenges arise when extending EGO to practical engineering design: (1) Engineering problems inherently involve multiple, often competing, objectives. While several multi-objective EI criteria have been proposed, each criterion has inherent biases. (2) The standard EGO algorithm is inherently serial. This serial nature fails to leverage the now-ubiquitous parallel computing resources, becoming a critical bottleneck for reducing the total optimization time. To overcome these limitations, this paper puts forward a parallel EGO multi-objective algorithm based on the Kriging approximate model with hybrid criteria. Our primary motivation is to develop a more robust and efficient optimization scheme for expensive black-box functions by synergistically combining the strengths of multiple EI criteria to guide parallel sampling. Building on the classic EGO algorithm, the proposed method introduces a novel hybrid criteria-based parallel sampling scheme that combines the merits of the Euler distance, maximum/minimum distance, and HV improvement criteria. During each iteration, the genetic algorithm (GA) is run in parallel to perform hybrid criteria-based optimal sampling, so that multiple renewal points are determined simultaneously and the surrogate model is gradually updated. The algorithm supports parallel computing on multiple machines or cores and avoids the tendency of serial single-point sampling to become trapped in local optima; global optima can thus be reached more readily, improving computational efficiency.
This paper is structured as follows: The numerical simulation analysis of collisions between supply vessels and semi-submersible platforms is described in Section 2. Section 3 presents the research on the Kriging approximate model and the hybrid criteria-based parallel EGO algorithm. The validation of the newly proposed method, using typical multi-objective optimization test functions, is delineated in Section 4. Section 5 presents the application of the method to the optimization design of anti-collision structures for the semi-submersible platform. Finally, Section 6 presents the conclusions of this work.
4. Analysis of Test Cases
A set of typical test functions was used to assess the accuracy and efficiency of the algorithm proposed in this paper, and the corresponding results were compared with those obtained with the multi-objective EGO algorithms based on single EI criteria. The six test functions selected, all from the ZDT and DTLZ series, cover different types of Pareto fronts: ZDT1 has a convex Pareto front, ZDT2 and DTLZ2 have non-convex fronts, ZDT3 and DTLZ7 have discrete (discontinuous) fronts, and DTLZ5 has a curve-shaped front [23].
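For reference, and assuming the standard definition from the literature (not restated in the original text), ZDT1 is given by f1(x) = x1 and f2(x) = g(x)(1 − sqrt(f1/g)), with g(x) = 1 + 9·Σ x_i/(n − 1) over i = 2, …, n and x ∈ [0, 1]^n; a minimal implementation for illustration is given below.

```python
import numpy as np

def zdt1(x):
    """Standard ZDT1 test function (convex Pareto front), x in [0, 1]^n."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])

# Example evaluation with n = 6 design variables, as used in this study
print(zdt1(np.full(6, 0.5)))
```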
The number of design variables was set to n = 6 for all test cases, and the number of initial design sample points was 11n − 1. The initial sample points were selected by the Latin hypercube sampling method, and the Kriging approximate model was created with the DACE (Design and Analysis of Computer Experiments) toolbox, using regpoly0 as the regression function and corrgauss as the correlation function. The GA used to optimize the EI function had a population size of 150 and a maximum of 200 generations, which also served as the stopping criterion.
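A rough Python analogue of this setup is sketched below (the study itself uses the MATLAB DACE toolbox): the Latin hypercube design is generated with scipy, and a Kriging-type model is fitted with scikit-learn, where a constant trend plays the role of regpoly0 and a Gaussian (RBF) kernel that of corrgauss; the objective used here is a cheap placeholder.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

n = 6                                    # number of design variables
n_init = 11 * n - 1                      # 65 initial sample points

# Latin hypercube design of experiments in the unit hypercube
sampler = qmc.LatinHypercube(d=n, seed=0)
X = sampler.random(n=n_init)
y = np.sum((X - 0.5) ** 2, axis=1)       # cheap placeholder objective

# Kriging stand-in: constant trend (~regpoly0) with a Gaussian
# correlation function (~corrgauss), one length scale per variable
kriging = GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(n)),
                                   normalize_y=True).fit(X, y)
mu, sigma = kriging.predict(sampler.random(n=5), return_std=True)
print(mu, sigma)
```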
For the calculation of the test cases, the algorithm proposed in this paper (the EIp criterion) and the multi-objective EGO algorithms based on a single EI criterion [27] were applied separately, so as to compare their accuracy and efficiency under the same number of sample points. In each algorithm, 215 samplings (50 iterations for the multi-objective EGO algorithm based on the EIp criterion, and 150 iterations for the other multi-objective EGO algorithms based on a single EI criterion), that is, 215 “expensive calculations”, were conducted in the same way. All experiments were carried out 30 times with different initial schemes, so as to exclude the effect of the initial sample points on the experimental results. The algorithms were compared using the hypervolume (HV) of the Pareto front and the number of Pareto solutions obtained. The hypervolume captures both the closeness of the obtained solutions to the Pareto front and the spread of the solutions. The reference points used for calculating the hypervolume values are given in Table 3 for each test problem.
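Because the HV is the main comparison metric, the following generic sketch (not the specific implementation used in this study) shows how the hypervolume of a two-objective minimization front can be computed against a given reference point.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume dominated by a mutually non-dominated 2D front
    (minimization) with respect to a reference point ref."""
    front = np.asarray(front, dtype=float)
    # keep only points that dominate the reference point
    front = front[np.all(front <= ref, axis=1)]
    # sort by the first objective; the second decreases along the front
    front = front[np.argsort(front[:, 0])]
    hv, prev_f1 = 0.0, ref[0]
    # sweep from right to left, accumulating rectangular slabs
    for f1, f2 in front[::-1]:
        hv += (prev_f1 - f1) * (ref[1] - f2)
        prev_f1 = f1
    return hv

# Usage: three non-dominated points against the reference point (11, 11)
front = [[1.0, 9.0], [5.0, 5.0], [9.0, 1.0]]
print(hypervolume_2d(front, ref=np.array([11.0, 11.0])))   # 52.0
```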
The HV of the approximate Pareto front and the number of Pareto front approximations (NP) obtained by the different algorithms for the test functions are shown in Table 4. A larger HV of the approximate Pareto front and a larger NP indicate higher quality.
The results revealed that, under the same number of sample evaluations, the HV of the approximate Pareto front obtained by the optimization algorithm proposed in this paper was superior to that obtained by the single-criterion optimization algorithms on all test functions. In addition, the NP obtained by the proposed algorithm on all test functions was also significantly larger than that obtained by the single-criterion algorithms. In practical engineering applications, obtaining more Pareto front approximations gives engineers more choices, making it easier to obtain optimization results in line with engineering practice. Therefore, the algorithm proposed in this paper is more advantageous in engineering applications.
Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14 are boxplots of the HV of the approximate Pareto front and the NP for all test functions, obtained by separately running the multi-objective EGO algorithms based on the EIp criterion and on the single EI criteria 30 times with different initial schemes.
Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 list the corresponding standard deviations (SDs) and 95% confidence intervals (CIs).
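For clarity, statistics of this kind can be computed from the 30 repeated runs as shown below; the HV values are synthetic placeholders, and the CI is obtained with a standard t-distribution approximation, since the paper does not state the exact CI formula used.

```python
import numpy as np
from scipy import stats

# Hypothetical HV values from 30 repeated runs with different initial designs
rng = np.random.default_rng(1)
hv_runs = rng.normal(loc=120.6, scale=0.0074, size=30)

mean = hv_runs.mean()
sd = hv_runs.std(ddof=1)                 # sample standard deviation
sem = sd / np.sqrt(len(hv_runs))         # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(hv_runs) - 1, loc=mean, scale=sem)

print(f"mean HV = {mean:.4f}, SD = {sd:.4f}, 95% CI = [{ci_low:.4f}, {ci_high:.4f}]")
```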
As seen in Figure 9, the boxplots for ZDT1 indicate that the EIp criterion algorithm achieves a marginally higher median HV and a significantly higher median NP compared to the single-criterion algorithms (EIe, EIh, EIm). The interquartile range (IQR) for the HV appears similar across all algorithms, suggesting comparable variability in solution quality for this function. However, the spread of NP values is notably larger for the single-criterion methods. The data in Table 5 quantitatively support the observations from Figure 9. The EIp criterion yields the highest mean HV (120.64) and mean NP (32). Crucially, it also demonstrates the lowest standard deviation (SD) of the HV (0.0074), indicating superior consistency and stability in performance across the 30 runs compared to the other criteria, which have higher SDs of the HV. The SD of the NP for EIp (5.19) is also the lowest, confirming more reliable convergence in terms of the number of solutions found.
As seen in Figure 10, for the ZDT2 test problem, the EIp criterion algorithm clearly outperforms the single-criterion ones. It demonstrates a noticeably higher median HV and a substantially higher median NP. The box for the NP using EIp is positioned much higher and shows less variability than those of the other algorithms, which cluster near lower values with similar spreads. Table 6 confirms the superiority of the EIp criterion on ZDT2. It achieves the highest mean HV (120.30) and mean NP (25). While the SDs of the HV are identical across all algorithms (0.0296), the dramatic difference in mean NP values (25 for EIp vs. 10, 5, and 15 for the others) highlights the significant advantage of the hybrid approach in finding a more extensive set of Pareto solutions for this problem.
As seen in Figure 11, the performance on ZDT3 shows a distinct advantage for the EIp criterion. Its median HV is significantly higher than those of the single-criterion algorithms. The NP values also show a positive trend, with EIp having a higher median and a wider distribution towards larger numbers of solutions compared to the others. The numerical results in Table 7 solidify the findings from Figure 11. The EIp criterion achieves the highest mean HV (128) by a considerable margin (compared to 127 and 128 ± 0.27 for EIm, for which the upper bound of the CI is just 128.27) and the highest mean NP (30). Although the SDs of the NP are the same, the mean values for the single-criterion algorithms (15, 15, 20) are substantially lower than that of EIp, demonstrating its effectiveness on this problem with a discontinuous Pareto front.
As seen in Figure 12, for the three-objective DTLZ2 problem, the EIp criterion maintains a slight edge, showing a marginally higher median HV compared to the other algorithms. The median NP for EIp is also visibly higher, suggesting a better ability to approximate the entire Pareto front in 3D space. The data in Table 8 provide precise metrics for DTLZ2. The EIp criterion achieves the highest mean HV (15.030) and the highest mean NP (25). The SDs of the HV are identical across all methods (0.0074), indicating similar run-to-run consistency in solution quality for this problem. The key differentiator is again the number of solutions found, where EIp finds significantly more Pareto points on average.
As seen in Figure 13, on the DTLZ5 problem, the EIp criterion demonstrates competitive performance. The median HV appears slightly higher than or equal to that of the other methods. The most notable difference is again in the NP, where EIp shows a significantly higher median and a larger spread, indicating its strength in discovering more solutions along the curved Pareto front. Table 9 shows that the EIp criterion obtains the highest mean HV (13.12) and the highest mean NP (30) for DTLZ5. The HV values are very close, but the hybrid approach consistently finds a more numerous set of Pareto solutions, as evidenced by the higher NP. The SD of the NP for EIp is larger (14.81), indicating more variability in the count of solutions found between runs, but the average count remains the highest.
As seen in Figure 14, for the complex DTLZ7 problem, the advantage of the EIp criterion is pronounced, particularly regarding the NP. The median NP for EIp is drastically higher than that for any single-criterion algorithm. The HV median for EIp also appears competitive, situated within the higher range of values. The results in Table 10 for DTLZ7 are striking. The EIp criterion achieves a significantly higher mean HV (59,000) compared to the other methods (58,500, 58,000, 58,500) and a vastly higher mean NP (40 vs. 20, 15, 15). This demonstrates the hybrid algorithm’s superior capability in handling problems with disconnected Pareto fronts, both in terms of the quality of the approximation (HV) and the diversity and number of solutions found (NP), despite the higher absolute SD of the HV.
Through comparison, it can be seen that the optimization algorithm proposed in this paper is better than the multi-objective EGO algorithm based on a single EI criterion under the same number of sample points.
Figure 15, Figure 16 and Figure 17 display the final approximate Pareto fronts, in the case of the smallest HV among the 30 repeated experiments, of the ZDT1, ZDT2 and ZDT3 functions, respectively, obtained after 50 iterations of calculation using the EIp criterion. The Pareto front of the ZDT1 function was convex, and a large number of Pareto front points with a relatively uniform distribution were found by the algorithm proposed in this paper. According to Figure 17, discrete Pareto fronts were found for the ZDT3 function; a total of 17 approximations were discovered, distributed on five discrete fronts. The Pareto front of the DTLZ2 function was relatively regular. As shown in Figure 18, a large number of relatively uniformly distributed Pareto front points were found by the proposed algorithm. As for the DTLZ5 function, the Pareto front was curvilinear, and the three-objective Pareto front presented a spatial curve in three-dimensional space; hence, the DTLZ5 function can be employed to test the ability of a multi-objective optimization algorithm to search for a curvilinear Pareto front. It can be seen from Figure 19 that good results were also acquired by the proposed algorithm. The DTLZ7 function has discrete Pareto fronts, as shown in Figure 20, so it can be used to test the capacity of a multi-objective optimization algorithm to optimize discrete Pareto fronts. The approximate Pareto front found by the proposed algorithm is close to the real Pareto front, and more points were found on the four discrete Pareto fronts.
Moreover, the computing time required to complete the same sample estimation using the optimization algorithm based on the EIp criterion was compared with that of the optimization algorithms based on a single criterion. Since the computing time required by the optimization algorithms based on the EIh, EIe, and EIm criteria is essentially the same, the EIh criterion-based optimization algorithm was adopted for this comparison. Both algorithms were run on a computer with the same configuration, with single-core computing for the algorithm based on the EIh criterion and parallel three-core computing for the algorithm based on the EIp criterion. To exclude the effect of the initial sample points on the experimental results, all experiments were run 30 times with different initial schemes. The explicit core/thread counts and system specifications are listed in Table 11. The average computing times were taken and compared, as shown in Figure 21. It can be seen that the computing time required by the algorithm based on the EIh criterion was about three times that required by the algorithm based on the EIp criterion. This suggests that, for all test functions, the algorithm proposed in this paper requires less computing time while obtaining relatively superior results, confirming its superior computational efficiency.