Article

An Error-Pursuing Adaptive Uncertainty Analysis Method Based on Bayesian Support Vector Regression

1 State Key Laboratory of Performance Monitoring and Protecting of Rail Transit Infrastructure, East China Jiaotong University, Nanchang 330013, China
2 Key Laboratory of Conveyance and Equipment of Ministry of Education, East China Jiaotong University, Nanchang 330013, China
3 School of Mechatronics & Vehicle Engineering, East China Jiaotong University, Nanchang 330013, China
* Author to whom correspondence should be addressed.
Machines 2023, 11(2), 228; https://doi.org/10.3390/machines11020228
Submission received: 3 December 2022 / Revised: 16 January 2023 / Accepted: 2 February 2023 / Published: 4 February 2023
(This article belongs to the Section Machine Design and Theory)

Abstract

The Bayesian support vector regression (BSVR) metamodel is widely used in various engineering fields to analyze the uncertainty arising from uncertain parameters. However, the accuracy of a BSVR metamodel built with the traditional one-shot sampling method fails to meet the requirements of the uncertainty analysis of complex systems. To this end, an error-pursuing adaptive uncertainty analysis method based on the BSVR metamodel is presented by combining it with a new adaptive sampling scheme. This sampling scheme is driven by a new error-pursuing active learning function, named herein the adjusted mean square error (AMSE), which guides the adaptive enrichment of the BSVR metamodel's design of experiments (DoE). During the sampling process, AMSE combines the mean square error and the leave-one-out cross-validation error to estimate the prediction error of the metamodel over the entire design space. Stepwise refinement of the metamodel is achieved by placing new samples at locations with large prediction errors. Six benchmark analytical functions featuring different dimensions were used to validate the proposed method. The effectiveness of the method is then further illustrated by a more realistic application to an overhung rotor system.

1. Introduction

In practical problems, the presence of uncertainty factors seriously affects the performance of engineering systems. Therefore, quantifying these uncertainties and fully considering their impact on aspects such as system security assessment, optimal design, and usage maintenance is essential [1]. Uncertainty analysis aims to quantify the uncertainties in engineering system outputs that are propagated from uncertain inputs [2], which usually consists of the following three aspects: (a) the calculation of the statistical moments of the stochastic response [3,4], (b) the probability density function of the response [5,6], and (c) the probability of exceeding an acceptable threshold (reliability analysis) [7,8,9].
Recently, uncertainty analysis methods based on Bayesian support vector regression (BSVR) have gained widespread attention [10,11]. BSVR is a special support vector regression (SVR) model that is derived from the Bayesian inference framework. Law and Kwok [12] applied MacKay’s Bayesian framework to SVR in the weight space. Gao et al. [13] derived an evidence and error bar approximation for SVR based on the approach proposed by Sollich [14]. Chu et al. [15] proposed the soft insensitive loss function (SILF) to address the inaccuracies in evidence evaluation and inference that were brought on by the unsmoothed loss function, which further enriches the diversity of BSVR models. Compared with other metamodel methods, the BSVR model based on the structural risk minimization principle features better generalization ability and shows superior performance in dealing with non-linear problems and avoiding overfitting.
An accurate BSVR metamodel is the basis for uncertainty analysis, while the accuracy of the metamodel highly depends on the number and quality of training samples. For uncertainty analysis tasks in real engineering, numerical computational models are usually complex, which means that training sample points are costly to obtain. Therefore, the only way to obtain a high-precision BSVR metamodel for uncertainty analysis is to improve the quality of the training samples. The traditional one-shot sampling method determines the sample size and number of points in one stage [16], which fails to meet the requirement of constructing a high-precision metamodel using as few sample points as possible. The challenge of appropriate training sample selection has led to the development of various adaptive sampling schemes. These schemes start with an initial design of experiments (DoE) with a minimal sample set, and then the most informative or important sample points are sequentially added to the DoE by the active learning function. The steps for improving the metamodel by utilizing the adaptive sampling schemes are illustrated in detail in Figure 1.
From Figure 1, it can be observed that the most crucial factor during the adaptive sampling process is the active learning function. In general, active learning functions require a point-wise local prediction error or confidence interval to guide the operation [17,18], i.e., active learning functions all feature error-pursuit properties. The active learning function focuses on determining the prediction error information of the unsampled points. Note that the actual prediction error is unknown a priori. Therefore, the actual unsampled prediction error is generally estimated by the mean square error (also known as the prediction variance), the cross-validation error, and the local gradient information. Only the mean square error and cross-validation error are discussed here.
Most of the mean square error-based active learning functions are deeply combined with the Gaussian process (GP) model. Thanks to the assumption of a Gaussian process, BSVR can provide information about the mean square error at the unsampled points. The active learning function based on mean square error considers that regions with a significant mean square error of the metamodel may have larger prediction errors. Shewry and Wynn [19] proposed the maximum entropy (ME) criterion that selects new points by maximizing the determinant of the correlation matrix in the Bayesian framework. In the maximizing the mean square error (MMSE) method proposed by Jin et al. [20], new sample points were selected by maximizing the mean square error. Interestingly, Jin et al. [20] pointed out that the MMSE criterion is equivalent to the one-by-one ME criterion. It is worth noting that the above sampling method utilizes the property that the GP model follows the stationary assumption, which causes the value of the covariance matrix (mean square error) to depend only on the relative distance between the sample points. Lin et al. [21] pointed out that the stationary assumption is inappropriate in adaptive experimental designs, because the correlation between two points does not depend only on the relative distance.
Active learning functions based on cross-validation errors usually focus excessively on local exploration and, thus, often require the introduction of distance constraints [22]. However, without sufficient a priori knowledge, the choice of distance constraint usually carries a large uncertainty. In the SFCVT learning function proposed by Aute [23], the Euclidean distance between the sampled and unsampled points was used. The CV-Voronoi learning function proposed by Xu [24] selects the unsampled point in the Voronoi cell that is farthest from the sampled point. Jiang [25] used the average minimum distance between sampled and unsampled points as the distance constraint. Jiang [26] and Roy [27], on the other hand, used the maximin distance criterion, i.e., the maximum value of the minimum distance between sampled and unsampled points was chosen as the distance constraint.
In general, the above active learning functions have the following drawbacks:
(1) Local error information is not considered;
(2) Clustering problems exist.
To address the above problems, in this paper, a new error-pursuing active learning function, named herein the adjusted mean square error (AMSE), was designed to improve the adaptive sampling process. AMSE combines the mean square error and the LOO cross-validation error to evaluate the prediction error information of unsampled points. It introduces local error information through the LOO cross-validation error, which gives AMSE a significant advantage when dealing with non-linear problems. Moreover, the global exploration property of the mean square error can effectively avoid the clustering problem.
Finally, to implement uncertainty analysis, the error-pursuing adaptive uncertainty analysis method based on BSVR was presented by integrating the adaptive sampling scheme with the new error-pursuing AMSE function. Additionally, six benchmark analytical functions featuring different dimensions, as well as a more realistic application to an overhung rotor system, were selected to validate the proposed algorithm.
The rest of the paper is organized as follows. Section 2 briefly reviews Bayesian support vector regression based on Bayesian inference. Then, the proposed adaptive sampling scheme based on AMSE and the new error-pursuing adaptive uncertainty analysis method are introduced in Section 3. In Section 4, the performance and properties of the proposed method are explored using several analytic mathematical examples and are compared with other methods. In Section 5, the proposed method is applied to rotor system uncertainty analysis to validate the advantages of the proposed method further. Finally, conclusions are given.

2. The Bayesian Support Vector Regression Method

In this section, the basic theories of the Bayesian support vector regression (BSVR) proposed by Chu [15] and Cheng [16] are briefly introduced.

2.1. Bayesian Inference Framework

For a regression problem with a given training sample $D = \{X, Y\}$, the mapping relationship between the inputs and outputs can be expressed as:

$$y_i = \hat{g}(x_i) + \delta_i, \quad i = 1, 2, \ldots, N$$

where $\hat{g}(x)$ is the regression model, and $\delta_i$ is additive noise with an independent identical distribution.
The regression aims to use the training sample $D$ to obtain a specific formula for $\hat{g}(x)$. In the framework of Bayesian inference, this problem can be transformed by solving for the maximum posterior of $\hat{g}(x)$:

$$P(G \mid D) = \frac{P(D \mid G)\, P(G)}{P(D)}$$

where $G = [\hat{g}(x_1), \hat{g}(x_2), \ldots, \hat{g}(x_N)]^T$, $P(D \mid G)$ is the likelihood function, $P(G)$ is the prior probability, and $P(D)$ is the marginal likelihood, which can usually be assumed to be a constant.

2.2. Prior Probability

Unlike the traditional SVR problem, where $\hat{g}(x)$ is considered as a series of hyperplane equations, BSVR assumes $\hat{g}(x)$ to be a stationary Gaussian process with a zero mean function and a covariance function $k(X, X)$, i.e., $\hat{g}(x) \sim GP(0, k(X, X))$. The Gaussian covariance function is used in this study and is given as follows:

$$k(x_i, x_j) = \exp\left(-\sum_{l=1}^{d} \theta_l \left(x_i^l - x_j^l\right)^2\right), \quad i, j = 1, 2, \ldots, N$$

where $\theta = [\theta_1, \ldots, \theta_d]^T$ is the parameter vector of the covariance function to be determined, and $x^l$ denotes the $l$-th element of the input vector $x$.
With these assumptions, the expression of the prior probability can be derived as follows:

$$P(G) = \frac{1}{z_G} \exp\left(-\frac{1}{2} G^T K^{-1} G\right)$$

where $z_G = (2\pi)^{N/2} |K|^{1/2}$, and $K$ is the covariance matrix of the training samples with entries $K_{ij} = k(x_i, x_j)$.
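As a concrete illustration, the Gaussian covariance of Equation (3) can be assembled for all pairs of points in a few lines. The following sketch uses Python with NumPy; the function name `gaussian_cov` and the example values are ours, not from the paper:

```python
import numpy as np

def gaussian_cov(X1, X2, theta):
    """Gaussian covariance of Eq. (3):
    k(x_i, x_j) = exp(-sum_l theta_l * (x_i^l - x_j^l)^2)."""
    X1 = np.atleast_2d(X1)  # shape (n1, d)
    X2 = np.atleast_2d(X2)  # shape (n2, d)
    # Theta-weighted squared distances between every pair of points.
    diff = X1[:, None, :] - X2[None, :, :]           # (n1, n2, d)
    sqdist = np.einsum('ijl,l->ij', diff**2, theta)  # (n1, n2)
    return np.exp(-sqdist)

# Example: covariance matrix K of a small 1-D training set.
X = np.array([[0.0], [0.5], [1.0]])
K = gaussian_cov(X, X, theta=np.array([1.0]))
```

The resulting matrix is symmetric with unit diagonal, as expected for a stationary correlation-type kernel.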

2.3. Likelihood Function

The likelihood function $P(D \mid G)$ is essentially a model of the noise [15]. As described above, the noise $\delta_i$ is assumed to obey an independent identical distribution, and then $P(D \mid G)$ can be expressed as:

$$P(D \mid G) = \prod_{i=1}^{N} P\left(y_i - \hat{g}(x_i)\right) = \prod_{i=1}^{N} P(\delta_i) \propto \exp\left(-C \sum_{i=1}^{N} \ell\left(y_i - \hat{g}(x_i)\right)\right)$$

2.4. Posterior Probability

By substituting Equations (4) and (5) into Equation (2), the expression of the posterior probability can be obtained as follows:

$$P(G \mid D) = \frac{1}{z} \exp\left(-h(G)\right)$$

where $z = \int \exp(-h(G)) \, dG$, and $h(G) = C \sum_{i=1}^{N} \ell\left(y_i - \hat{g}(x_i)\right) + \frac{1}{2} G^T K^{-1} G$.
Thus, the maximum a posteriori probability of $\hat{g}(x)$ can be expressed as:

$$\min_{G} \; C \sum_{i=1}^{N} \ell\left(y_i - \hat{g}(x_i)\right) + \frac{1}{2} G^T K^{-1} G$$

where $C$ is the regularization coefficient, and $\ell(\cdot)$ denotes the loss function. The $\varepsilon$-insensitive squared loss function is used here, while any other type of loss function, such as those introduced in [16], can also be used:

$$\ell(\delta) = \begin{cases} 0, & \text{if } |\delta| \le \varepsilon \\ \frac{1}{2}\left(|\delta| - \varepsilon\right)^2, & \text{otherwise} \end{cases}$$
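The $\varepsilon$-insensitive squared loss of Equation (8) is simple to implement; a minimal sketch (the function name is our own, not from the paper):

```python
import numpy as np

def eps_insensitive_sq_loss(delta, eps):
    """Epsilon-insensitive squared loss of Eq. (8):
    zero inside the |delta| <= eps tube, 0.5*(|delta| - eps)^2 outside."""
    delta = np.asarray(delta, dtype=float)
    excess = np.maximum(np.abs(delta) - eps, 0.0)  # distance outside the tube
    return 0.5 * excess**2
```

Residuals inside the tube contribute nothing to the objective in Equation (7), which is what produces the sparse set of support vectors used later in Equation (9).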

2.5. Bayesian Support Vector Regression

It can be observed from Equations (3), (7), and (8) that the hyperparameters of the BSVR model mainly come from the covariance function, the regularization coefficient, and the loss function, i.e., $\eta = [C, \varepsilon, \theta]$. Similarly to solving for the regression model, the optimal hyperparameters can be obtained using Bayes' theorem; this is not repeated here, and the specific derivation process is described in [16].
After obtaining the model hyperparameters, the BSVR model can be obtained by solving Equation (7). It is worth noting that Equation (7) is a convex quadratic optimization problem, which can be conveniently solved by using the Lagrangian algorithm.
Unlike the traditional SVR model, the BSVR model built on the Gaussian process provides, alongside the predicted response, the mean squared error of the prediction at the sample points, which is the quantity of interest in this paper. The predicted response and the mean squared error of the BSVR model can be expressed as:

$$\hat{\mu}(x) = k(x, X)^T (\alpha - \alpha^*) = \sum_{j=1}^{m} (\alpha_j - \alpha_j^*) \, k(x, x_j)$$

$$\hat{\Sigma}^2(x) = k(x, x) - k_m(x, x_m) \left(K_m + I_m / C\right)^{-1} k_m(x_m, x)$$

where $k(x, X) = [k(x_1, x), \ldots, k(x_N, x)]^T$ is the covariance vector between the prediction point and the training samples, which can be calculated according to Equation (3), and $\alpha_i, \alpha_i^*$ are the Lagrange multipliers. The training samples with $\alpha_i - \alpha_i^* \ne 0, \; i = 1, 2, \ldots, m$ are the support vectors, and $K_m$, $k_m$ are the corresponding subsets of $K$ and $k$, respectively.
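Equations (9) and (10) can be sketched as follows, assuming an isotropic Gaussian kernel and taking the multipliers $(\alpha_j - \alpha_j^*)$ as given (in practice they come from solving Equation (7)); all function names and the illustrative values below are our own:

```python
import numpy as np

def gauss_kernel(X1, X2, theta=1.0):
    """Gaussian covariance of Eq. (3) with a single isotropic theta."""
    X1, X2 = np.atleast_2d(X1), np.atleast_2d(X2)
    d = X1[:, None, :] - X2[None, :, :]
    return np.exp(-theta * np.sum(d**2, axis=-1))

def bsvr_predict(x_new, X_sv, alpha_diff, C, theta=1.0):
    """Predicted mean (Eq. 9) and variance (Eq. 10) at a point x_new,
    given support vectors X_sv and multipliers (alpha_j - alpha_j*)."""
    K_m = gauss_kernel(X_sv, X_sv, theta)             # Gram matrix of the SVs
    k_vec = gauss_kernel(x_new, X_sv, theta).ravel()  # k(x, x_m)
    mean = k_vec @ alpha_diff                         # Eq. (9)
    A = K_m + np.eye(len(X_sv)) / C                   # K_m + I_m / C
    var = gauss_kernel(x_new, x_new, theta).item() - k_vec @ np.linalg.solve(A, k_vec)
    return mean, var

# Example: two support vectors on a 1-D problem (illustrative values).
X_sv = np.array([[0.0], [1.0]])
mean, var = bsvr_predict(np.array([[0.0]]), X_sv, np.array([1.0, -1.0]), C=10.0)
```

The variance shrinks toward zero near the support vectors and grows away from them, which is exactly the behavior the error-pursuing learning functions of Section 3 exploit.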

3. A New Error-Pursuing Adaptive Uncertainty Analysis Method

In this section, after reviewing some often-used error-pursuing active learning functions, the new function named AMSE is designed to improve the adaptive sampling process. Finally, the algorithm steps and flowchart of the proposed error-pursuing adaptive uncertainty analysis method, based on BSVR and the AMSE-based adaptive sampling scheme, are presented.

3.1. Some Frequently Used Error-Pursuing Active Learning Functions

Based on the above mean squared error information, various error-pursuing active learning functions have been proposed in the past decades, among which the properties of some frequently used active learning functions, such as MMSE, MEI, and CV-Voronoi, are compared and described as follows.
To facilitate the formulation, the following assumptions are introduced:
  • All sample points in the adaptive sampling process are divided into an initial experimental design $S_I$ and a candidate sample set $S_C$;
  • The initial DoE is assumed to be $S_I = \{X_I, Y_I\} = \{(x_I^1, y_I^1), \ldots, (x_I^N, y_I^N)\}$, where $X_I$ is obtained by sampling in the design space of the input variable $x$ according to a certain distribution law, $Y_I$ is the response of $X_I$ on the real model $y(\cdot)$, and $(x_I^N, y_I^N)$ is the $N$-th sample pair in the initial experimental design $S_I$;
  • The candidate sample set is assumed to be $S_C = \{X_C\} = \{x_C^1, \ldots, x_C^P\}$. Similarly to $X_I$, $X_C$ is obtained by sampling in the design space of the input variable $x$. It is worth noting that the candidate sample set $S_C$ does not contain model response values.
The maximizing the mean square error (MMSE) learning function proposed by Jin et al. [20] selects new sample points by maximizing the mean square error. Its employment in BSVR can be expressed as:

$$MMSE(x_{new}) = \arg\max_{x_C \in S_C} \hat{\Sigma}^2(x_C)$$
As Equation (11) shows, the mean square error depends only on the relative distance between sample points and fails to characterize the prediction error information of the candidate sample points, which can lead to sampling in inappropriate regions.
To address this problem, Lam [28] introduced the error information of the candidate sample points and proposed a modified expected improvement (referred to here as MEI) learning function with the following expression:
$$MEI(x_{new}) = \arg\max_{x_C \in S_C} \left\{ \left(\hat{\mu}(x_C) - y_I^*\right)^2 + \hat{\Sigma}^2(x_C) \right\}$$
where y I * is the true model response of the nearest initial sample point x I * to the candidate sample x C , and μ ^ ( x C ) is the predicted response of the candidate sample. By comparing Equations (11) and (12), it can be found that the MEI assumes the true response of the candidate sample as the response of the closest initial sample and introduces the prediction error information of the candidate sample by solving for the deviation between the predicted response and the true response. However, this assumption is sensitive to the number of initial samples and the degree of nonlinearity of the true model. A case for a highly non-linear situation is illustrated in Figure 2. It is clear that the distance between the initial sample points and the candidate sample points is small, but the gap between the corresponding true models’ responses is huge. In this case, the error information of the calculated candidate samples is not reliable.
It is noteworthy that the error measurement provided by the BSVR model, i.e., the mean square error, is used in both MMSE and MEI.
As another manifestation of model error, the cross-validation error is also frequently used in active learning functions. For example, the cross-validation-Voronoi (CV-Voronoi) method proposed by Xu [24] uses a Voronoi diagram to divide the candidate samples into a set of Voronoi cells according to the initial DoE and assumes that the sample points in the same Voronoi cell have the same error behavior. The error information of each Voronoi cell is obtained by the leave-one-out (LOO) cross-validation approach. The specific expression of CV-Voronoi is as follows:
$$\text{Find } x_{new} = \arg\max \left\{ C_{sensitive} \right\} \quad \text{s.t. } \left\| x_{new} - x_I \right\|_2 = \max \left\{ \left\| x_C^{sensitive} - x_I^{sensitive} \right\| \right\}$$
Equation (13) shows that CV-Voronoi adds a distance constraint to prevent the clustering of new sample points based on local exploration; however, this distance constraint may lead to the selection of sample points located at the junction of different Voronoi cells, where the error description will be biased, as is shown in Figure 3. Figure 3 shows the Voronoi cells divided using the initial DoE; the black dots represent the error information of the different Voronoi cells (the larger the dot, the larger the error), and the red triangles indicate the new points selected using the distance constraint.

3.2. The Proposed New Error-Pursuing Active Learning Function

To address the above-mentioned issues, a new error-pursuing active learning function called adjusted mean square error (AMSE) was designed. The AMSE active learning function can be expressed by the following formula:
$$x_{new} = \arg\max_{x \in S_C} \left( \hat{\Sigma}^2(x) + e^{LOO}(x) \right)$$

where $\hat{\Sigma}^2(x)$ is the predictive variance of the Bayesian support vector regression, which can be calculated from Equation (10), and $e^{LOO}(x)$ is the leave-one-out (LOO) cross-validation error.
For the initial training sample points, the LOO cross-validation method was used to calculate $e^{LOO}$, which is easy to implement and can estimate metamodel errors [21]:

$$e_i^{LOO} = \left( y(x_i) - \hat{y}_{D/x_i}(x_i) \right)^2, \quad x_i \in S_I$$

where $S_I$ is the initial set of training sample points, $y(x_i)$ denotes the true response at point $x_i$, and $\hat{y}_{D/x_i}(x_i)$ represents the predicted response at $x_i$ of the metamodel constructed with all initial training sample points except $x_i$.
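Equation (15) amounts to one refit of the metamodel per held-out point. A generic sketch, where `fit` and `predict` are placeholder callables for any metamodel (the names are ours; in the paper the metamodel is BSVR, while the usage note below exercises the loop with a trivial least-squares model):

```python
import numpy as np

def loo_errors(X, y, fit, predict):
    """Squared leave-one-out errors of Eq. (15): refit the metamodel with
    point i held out and record (y(x_i) - yhat_{D/x_i}(x_i))^2."""
    n = len(X)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        model = fit(X[mask], y[mask])                 # train without point i
        yhat = np.ravel(predict(model, X[i:i+1]))[0]  # predict at the held-out point
        errs[i] = (y[i] - yhat) ** 2
    return errs
```

For example, with an exact linear model the LOO errors vanish, which is a quick sanity check of the loop.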
For the error information $e^{LOO}$ of a candidate sample point, since the true response of the point is unknown, its error information cannot be calculated directly using Equation (15). Currently, there are two main types of methods to solve this problem. One is to use the LOO error information of the initial sample points to build an error prediction metamodel, through which the LOO error information of the candidate sample points is predicted [23]. Being limited by the number of initial sample points, this method often cannot achieve a good error prediction, which further affects the efficacy of subsequent model adaptations. The other method assumes that the error information of points in adjacent regions is similar [29]; it is influenced by the initial sample points to a lesser extent than the first assumption, and, with the addition of sample points, the assumed error information gradually approximates the true error information. Thus, the prediction error information at the candidate sample points can be expressed as:
$$e^{LOO}(x_j) = e^{LOO}(x_i) = \left( y(x_i) - \hat{y}_{D/x_i}(x_i) \right)^2, \quad x_j \in S_C, \; x_i \in S_I$$
where $x_i$ is the initial sample point nearest to the candidate sample point $x_j$. Equation (14) can therefore be rewritten as:
$$x_{new} = \arg\max_{x \in S_C} \left( \frac{\hat{\Sigma}^2(x)}{\hat{\Sigma}^2_{\max}} + \frac{e^{LOO}(x)}{e^{LOO}_{\max}} \right)$$
where $\hat{\Sigma}^2_{\max} = \max_{x \in S_C} \hat{\Sigma}^2(x)$ and $e^{LOO}_{\max} = \max_{x \in S_C} e^{LOO}(x)$. From Equation (17), it can be seen that this active learning function consists of two parts: the first term is responsible for global exploration, and the second term for local exploration. The local exploration aims to develop regions with large nonlinearity or variability, while the global exploration tends to select points with a large prediction variance; their combination effectively characterizes the prediction error information of the metamodel during the construction process. The implementation steps of the proposed AMSE active learning function are summarized as follows:
Step 1: Calculate the LOO cross-validation error for the training samples.
Step 2: Calculate the mean square error for the candidate samples.
Step 3: Calculate the leave-one-out cross-validation error for the candidate samples.
Step 4: Select new sample points by maximizing the adjusted mean square error.
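Steps 1–4 can be sketched compactly. The inputs are the mean square errors of the candidates (step 2), the LOO errors of the training samples (step 1), and the two sample sets; each candidate inherits the LOO error of its nearest initial sample per Equation (16), and the normalized sum follows Equation (17). The function name and the nearest-neighbor assignment via `argmin` are our implementation choices:

```python
import numpy as np

def amse_select(mse_cand, loo_init, X_init, X_cand):
    """AMSE point selection of Eq. (17): each candidate inherits the LOO
    error of its nearest initial sample (Eq. 16); both terms are scaled by
    their maxima and the index of the best-scoring candidate is returned."""
    # Nearest initial sample for every candidate (Eq. 16).
    d = np.linalg.norm(X_cand[:, None, :] - X_init[None, :, :], axis=-1)
    loo_cand = loo_init[np.argmin(d, axis=1)]
    # Normalized global (MSE) term plus normalized local (LOO) term.
    score = mse_cand / mse_cand.max() + loo_cand / loo_cand.max()
    return int(np.argmax(score))
```

Note that the normalization assumes at least one nonzero value in each error vector; degenerate all-zero cases would need a guard in a production implementation.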

3.3. Example Illustration

The advantage of the proposed method for prediction error characterization is illustrated using a one-dimensional test function with the following expression:
$$y = 3(1 - x)^2 e^{-x^2} - \left(2x - 10x^3\right) e^{-x^2} - \frac{1}{3} e^{-(x+1)^2}, \quad x \sim U[-8, 8]$$
The initial DoE is composed of five evenly spaced sample points over the range of x values, together with the corresponding response values. The BSVR metamodel is then built based on this initial DoE. The prediction error of the metamodel is characterized by the absolute error between the metamodel and the real model, and the corresponding error bars are derived, as is shown in Figure 4.
As can be seen from Figure 4, the true model has multiple local optima on the interval [−4, 4], while the other regions tend to be flat. The metamodel built by the initial DoE has a large deviation from the true model, and the prediction errors are mainly concentrated in the non-linear region.
In this test example, 15 samples were selected from the same candidate sample set using MEI, CV-Voronoi, MMSE, and AMSE methods; then, the BSVR metamodel was constructed based on these samples, as is shown in Figure 5.
As can be seen in Figure 5, CV-Voronoi and MEI tended to oversample regions with large errors and ignore other regions, which leads to a good approximation in the non-linear region and shows a large deviation in the rest of the region. More importantly, CV-Voronoi suffered from clustering problems in the second non-linear region; MMSE selected sample points that were evenly distributed throughout the sampling area, which indicates that MMSE does not guide the sample points to be placed in regions with large errors. It is worth noting that the proposed AMSE can determine the regions with large model prediction errors well and reasonably avoid the clustering problem by using global exploration. It can also be seen that the metamodel constructed based on AMSE has the best approximation.

3.4. Algorithm Summary

The flowchart of the proposed new adaptive uncertainty analysis method is depicted in Figure 6, with seven steps summarized below:
Step 1: Initialization of the algorithm. In this step, the initial DoE sample size is set as 10N [30], where N is the number of input variables; the candidate sample size is fixed at 10,000; and the stopping criterion is set as the maximum number of sample points to be added.
Step 2: Generation of the initial sample set SI and the candidate sample set SC. In this study, the Latin hypercube sampling method in UQLab [31] is used to select a certain number of SI and SC that satisfy the corresponding distribution law.
Step 3: Construction of the BSVR model. The BSVR model is constructed using the initial DoE from step 2, and the initial values of the hyperparameters $\eta = [C, \varepsilon, \theta]$ are set to $10^{-5}$, $10^{-5}$, and 1, respectively. To obtain the best hyperparameters, a wide optimization range is set, namely, $[10^{-5}, 10^{10}]$ for $C$, $[10^{-7}, 10^{-2}]$ for $\varepsilon$, and $[10^{-5}, 10^{2}]$ for $\theta$.
Step 4: Selection of samples using AMSE to enrich the DoE. New sample points are selected from the candidate sample set SC in each iteration with the active learning function AMSE.
Step 5: Update the DoE. Call the true model to calculate the response value $y_{new}$ at the new point $x_{new}$, and update the DoE: let $S_I = S_I \cup \{x_{new}\}$, $Y_I = Y_I \cup \{y_{new}\}$.
Step 6: End of iterative process. If the stopping criterion is satisfied, the final BSVR model is exported, and the uncertainty analysis is based on the final BSVR metamodel; otherwise, return to step 3.
Step 7: Uncertainty analysis. Once the final BSVR metamodel is obtained, uncertainty analysis can be performed by combining the BSVR metamodel with Monte Carlo simulation (MCS) to obtain the uncertainty analysis results of interest.
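The seven steps can be condensed into a skeleton loop. Here `fit` and `select` are placeholders for the BSVR training and the AMSE learning function (the trivial callables in the usage note merely exercise the control flow); all names are ours:

```python
import numpy as np

def adaptive_loop(true_model, sample_inputs, fit, select, n_init, n_add):
    """Skeleton of the seven-step procedure: build an initial DoE (steps 1-2),
    then repeatedly refit the metamodel (step 3), pick a new point with the
    learning function (step 4), evaluate the true model and enrich the DoE
    (step 5), until the added-point budget is exhausted (step 6)."""
    X = sample_inputs(n_init)                  # initial DoE
    y = np.array([true_model(x) for x in X])
    for _ in range(n_add):
        model = fit(X, y)                      # (re)fit the metamodel
        X_cand = sample_inputs(10_000)         # candidate pool
        i = select(model, X, y, X_cand)        # active learning function
        x_new = X_cand[i]
        y_new = true_model(x_new)              # call the true model
        X = np.vstack([X, x_new[None, :]])
        y = np.append(y, y_new)
    return fit(X, y), X, y                     # final metamodel and DoE

# Minimal usage with placeholder callables (illustrative only).
rng = np.random.default_rng(0)
sample_inputs = lambda n: rng.uniform(-1.0, 1.0, size=(n, 1))
model, X, y = adaptive_loop(lambda x: float(x[0] ** 2), sample_inputs,
                            lambda X, y: (X, y),          # dummy "fit"
                            lambda m, X, y, Xc: 0,        # dummy "select"
                            n_init=5, n_add=3)
```

Step 7 (uncertainty analysis) then propagates MCS samples through the returned metamodel instead of the costly true model.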

4. Numerical Examples

In this section, the proposed new error-pursuing adaptive uncertainty analysis method based on BSVR, AMSE, and MCS was applied to the uncertainty analysis of six numerical cases to investigate the effect of various stochastic parameters on the model's response. These six numerical examples were specially selected to exhibit different behaviors, such as multiple response modes, high dimensionality, and high nonlinearity. The specific functional behaviors and expressions are shown in Table 1. For simplicity, these cases are denoted as P1–P6.
To simulate the different distribution laws of the random parameters in the uncertainty analysis problem, three cases were designed in which the random parameters obeyed uniform, normal, and log-normal distributions, and here, only the random variables were considered to be independent of each other. Table 2 provides the distribution laws corresponding to the random parameters obeyed by these examples.
In this study, the proposed AMSE method was compared with the MMSE [20], MEI [28], and CV-Voronoi [24] methods. The statistical moment information of the response is a crucial indicator in uncertainty analysis, which reflects the probabilistic information of the response and the statistical laws of the change in the system's characteristics. Therefore, the calculation of the first four statistical moments of the uncertainty response was regarded here as the main content of the uncertainty analysis.
To assess the accuracy performance of the different adaptive sampling methods in the uncertainty analysis, the relative error index was used, and the reference value for each order of statistical moments was the result of $10^5$ Monte Carlo simulations. The relative error of the calculated results can be expressed as:

$$\epsilon_{\hat{\beta}_i} = \frac{\left| \hat{\beta}_i - \hat{\beta}_i^{MCS} \right|}{\hat{\beta}_i^{MCS}} \times 100\%, \quad i = 1, \ldots, 4$$
where β ^ i M C S denotes the statistical moments of each order calculated by the MCS, and β ^ i denotes the statistical moments of each order calculated by the adaptive algorithm.
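Given metamodel and MCS output samples, the first four moments and the relative error of Equation (19) can be computed directly. The skewness and kurtosis conventions below (biased, population estimators on standardized data) are our assumption, as the paper does not state which convention is used:

```python
import numpy as np

def first_four_moments(y):
    """Mean, standard deviation, skewness, and kurtosis of a sample
    (population estimators on standardized data)."""
    y = np.asarray(y, dtype=float)
    mu, sd = y.mean(), y.std()
    z = (y - mu) / sd
    return np.array([mu, sd, (z**3).mean(), (z**4).mean()])

def relative_errors(y_meta, y_mcs):
    """Relative error of Eq. (19) for each of the four moments, in percent."""
    b = first_four_moments(y_meta)      # moments from the metamodel samples
    b_ref = first_four_moments(y_mcs)   # MCS reference moments
    return np.abs(b - b_ref) / np.abs(b_ref) * 100.0
```

Note that Equation (19) is undefined when a reference moment is exactly zero (e.g., the skewness of a symmetric response), so a small-denominator guard may be needed in practice.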
All adaptive sampling algorithms started from the same initial DoE size (10N). The maximum number of additional sample points was used as the stopping criterion, which was kept the same for problems of the same dimension. The sampling configurations of the six test functions are shown in Table 3. In each iteration, each adaptive sampling method selected a new point from the same pool of candidate samples and used the same test samples to evaluate the approximation accuracy. It is worth noting that the initial, candidate, and test samples of the same test function obeyed the same probability density function, and the sample generation and the MCS calculation were conducted using UQLab [31].

Test Results and Discussion

In this section, a general comparison between the AMSE and the other three adaptive sampling methods is presented. Table 4, Table 5, Table 6 and Table 7 show the results of the first four orders of statistical moments computed by different adaptive sampling methods, and the data in the tables show the relative errors between the computed results of different adaptive sampling algorithms and the MCS simulation results. Note that the data in bold indicates the smallest relative error.
From Table 4 and Table 5, it can be seen that all adaptive sampling algorithms showed good accuracy in calculating the lower-order statistical moments, and the relative errors of all adaptive sampling algorithms were kept below 5%, except for the test function P1, which fully demonstrates the advantages of the adaptive metamodel in the field of uncertainty analysis. It is worth noting that the relative error of the AMSE method was also lower than 5% when dealing with case P1.
As can be seen from Table 6 and Table 7, the proposed AMSE method showed a far smaller error in the calculation of higher-order statistical moments than the other methods, especially in examples P6 and P1. In example P5, the MMSE method gave the best results, but both the AMSE and CV-Voronoi showed comparable accuracy. Significantly, CV-Voronoi performed much worse than the other methods in calculating the skewness values, for example in P3, which indicates that the CV-Voronoi method picked inappropriate sample points in the model adaptation process.
When dealing with high-dimensional, non-linear problems (i.e., test function P6), the skewness and kurtosis calculations of the MMSE were much worse than the other methods. Following the previous discussion, this is mainly because the MMSE method considers only mean square error information when selecting new sample points, making the distribution of the selected sample points in the design space have a tendency to be “space-filling”.
To further explore the characteristics of the proposed method, the convergence curves of different methods were plotted.
Figure 7 shows the convergence curves of the first four statistical moments of the P1 test function obtained by the different adaptive sampling algorithms. It can be seen that the MMSE method, which did not consider the local errors at the sampling points, performed the worst in all cases. The error of the CV-Voronoi method showed a steadily decreasing trend when calculating the lower-order statistical moments, but it had a significant error jump in the latter part of the adaptive process when calculating the higher-order statistical moments. The MEI and the proposed AMSE method had comparable convergence rates, but the final accuracy of the AMSE was better than that of the MEI. It is worth noting that all adaptive sampling methods showed good convergence trends in computing the low-order statistical moments, and it is clear from combining Table 4, Table 5, Table 6 and Table 7 that all methods performed much better at computing low-order statistical moments than high-order statistical moments in terms of accuracy. Therefore, only the convergence curves of the higher-order statistics are plotted in the subsequent examples.
As can be seen from Figure 8, the MMSE method converged fastest when calculating the skewness of the uncertain response, significantly reducing the relative error once 20 sample points had been added. However, its final accuracy was worse than that of the other methods because it does not consider the local error information of the sample points. When calculating kurtosis, all methods had comparable convergence speeds.
Figure 9 shows the convergence curves for test function P3. The convergence speed and final accuracy of CV-Voronoi were the worst, and its relative error rose abnormally after the 85th sample point was added, which indicates that the CV-Voronoi method selected unsuitable sample points when dealing with this function.
It can be observed from Figure 10 that all of the methods showed large error fluctuations on test function P4, with the MMSE exhibiting the most drastic changes in relative error, whereas the AMSE settled into a good convergence trend after 70 sample points had been added.
As can be seen from Figure 11, the AMSE and CV-Voronoi outperformed the other methods on this problem of medium dimensionality. Both the MEI and the MMSE showed a sudden increase in error when the 70th sample was added and a sudden decrease when the 150th sample was added. Such abrupt changes in model performance indicate that a single new sample significantly increased or decreased the amount of information in the DoE, reflecting that MEI and MMSE did not judge the information content of the candidate sample points well when dealing with this problem.
As shown in Figure 12, for the high-dimensional, highly non-linear problem (P6), the relative errors of the AMSE and CV-Voronoi methods gradually decreased as sample points were added. CV-Voronoi exhibited large error fluctuations at the beginning of model adaptation, whereas AMSE behaved more smoothly and achieved the best final accuracy. It is worth noting that MMSE and MEI were more accurate when only a few sample points had been added, but their accuracy did not improve steadily thereafter; instead, it deteriorated amid continual fluctuations, showing that MMSE and MEI do not handle highly non-linear problems well.
Altogether, the proposed AMSE method showed the best accuracy and convergence rate for the uncertainty analysis of the high-dimensional test function; for the low- and medium-dimensional test functions, it provided superior accuracy and comparable convergence rates.
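The comparisons above all reduce to the same bookkeeping: estimate the first four statistical moments (mean, standard deviation, skewness, kurtosis) from a large sample of metamodel predictions and report their relative errors against an MCS reference. The following is a minimal sketch of that computation; the `moment_errors` helper and the noisy "surrogate" sample are illustrative assumptions, not the paper's code:

```python
import numpy as np
from scipy import stats

def moment_errors(y_meta, y_ref):
    """Relative errors of the first four statistical moments between a
    metamodel's predicted response sample and an MCS reference sample."""
    def first_four(y):
        return np.array([np.mean(y),
                         np.std(y, ddof=1),
                         stats.skew(y),
                         stats.kurtosis(y, fisher=False)])
    m_meta, m_ref = first_four(y_meta), first_four(y_ref)
    return np.abs((m_meta - m_ref) / m_ref)

rng = np.random.default_rng(0)
x = rng.normal(0.5, 0.25, size=50_000)          # a normal input variable
y_ref = np.exp(x)                               # "true" model response (MCS reference)
y_meta = y_ref + rng.normal(0.0, 1e-3, x.size)  # near-perfect surrogate predictions
errs = moment_errors(y_meta, y_ref)             # four small relative errors
```

Here `stats.kurtosis(..., fisher=False)` returns the raw (Pearson) kurtosis, matching the convention in which a normal response has kurtosis 3.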

5. The Overhung Rotor System as an Engineering Example

To further verify the effectiveness of the proposed method, it was applied to the uncertainty analysis of an overhung rotor system, calculating the first four statistical moments of the system's peak response as it passes through the first-order critical speed under the action of random parameters.
The overhung rotor system, shown in Figure 13, has flexible supports at both ends; the flexible rotor shaft is 1.5 m long and 0.05 m in diameter. The thickness and diameter of the rigid disc are denoted as T and D, respectively. Bearings 1 and 2 are reduced to two anisotropic linear spring-damper elements with identical characteristics, located at the left end and at two-thirds of the span of the shaft, respectively. In this study, only the main bearing characteristics were considered, and the coefficients $K_{mn}$ and $C_{mn}$ (where $m = 1, 2$ is the bearing number and $n = x, y$ is the direction of the main bearing characteristic) represent the stiffness and damping coefficients, respectively. In addition, the material parameters of the rotor shaft and the rigid disc, namely the modulus of elasticity, Poisson's ratio, and density, are denoted by $E$, $\mu$, and $\rho$, respectively. The rotor's transient startup stochastic dynamics equation can be expressed as [32]:
$$\mathbf{M}(\mathbf{X})\ddot{\boldsymbol{\delta}}(\mathbf{X},t) + \left[\mathbf{C}(\mathbf{X}) + \dot{\theta}(t)\mathbf{G}(\mathbf{X})\right]\dot{\boldsymbol{\delta}}(\mathbf{X},t) + \mathbf{K}(\mathbf{X})\boldsymbol{\delta}(\mathbf{X},t) = \mathbf{F}(\mathbf{X},t) \tag{20}$$
$$\theta(t) = at^2 + bt + c \tag{21}$$
where $\mathbf{M}$, $\mathbf{C}$, $\mathbf{G}$, and $\mathbf{K}$ are the mass, damping, gyroscopic, and stiffness matrices of the rotor system, respectively; $\dot{\theta}(t)$ and $\theta(t)$ are the rotor angular velocity and angular displacement, respectively; $a$ is the rotor starting acceleration; $b$ and $c$ are the rotor angular velocity and angular displacement at $t = 0$, respectively; $\mathbf{F}$ is the synchronous unbalance force vector caused by the rigid-disc unbalance; $\boldsymbol{\delta}$ is the unknown global displacement vector; and $\mathbf{X} = [X_1, X_2, \ldots, X_N]^{\mathrm{T}}$ is a random vector consisting of $N$ random input variables. Solving Equations (20) and (21) yields an expression for the corresponding random response quantity:
$$\boldsymbol{\delta}(\mathbf{X},t) = \left\{ \boldsymbol{\delta} \;\middle|\; \mathbf{M}\ddot{\boldsymbol{\delta}} + \left(\mathbf{C} + \dot{\theta}\mathbf{G}\right)\dot{\boldsymbol{\delta}} = \mathbf{F} - \mathbf{K}\boldsymbol{\delta},\; \mathbf{X} \sim f_{\mathbf{X}}(\mathbf{x}) \right\}$$
where $f_{\mathbf{X}}(\mathbf{x})$ is the joint probability density function of the random vector $\mathbf{X}$. Here, the random variables are assumed to be mutually independent, with each following a normal distribution; their specific parameters are listed in Table 8.
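As a rough illustration of the startup response analyzed here, the sketch below integrates a drastically simplified single-degree-of-freedom analogue of Equations (20) and (21): a damped oscillator driven by an unbalance force with $\theta(t) = at^2 + bt$. All parameter values (`m`, `c`, `k`, `me`, `a`, `b`) are illustrative assumptions, not the rotor data of Table 8:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 1-DOF analogue of the transient startup problem:
#   m*x'' + c*x' + k*x = me * w(t)^2 * cos(theta(t)),  theta(t) = a*t^2 + b*t
m, c, k = 10.0, 40.0, 1.0e5   # mass (kg), damping (N*s/m), stiffness (N/m)
me = 1.0e-3                   # unbalance (kg*m), illustrative value
a, b = 20.0, 0.0              # angular acceleration (rad/s^2), initial speed

def rhs(t, s):
    x, v = s
    w = 2.0 * a * t + b                        # theta_dot(t)
    f = me * w**2 * np.cos(a * t**2 + b * t)   # synchronous unbalance force
    return [v, (f - c * v - k * x) / m]

# Integrate long enough to sweep past w_n = sqrt(k/m) = 100 rad/s (t = 2.5 s)
sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], max_step=1e-3)
peak = np.max(np.abs(sol.y[0]))   # peak response near the critical speed
```

The peak displacement occurs shortly after the sweep crosses the natural frequency, which is the quantity whose first four statistical moments are tracked in this example.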
The number of initial experimental designs was set to 100 (10N), with the stopping criterion set to the maximum number of added sample points (400); the MCS (sample size of $10^5$) was used as the reference result for calculating the statistical moments of each order. The relative errors of the different adaptive sampling algorithms in calculating the first four statistical moments are given in Table 9. Note that the data in bold indicate the smallest relative error.
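The enrichment protocol just described (a 10N-point initial DoE, a candidate pool, and a fixed budget of added points) can be skeletonized as follows. The analytic `response` function and the distance-based score are placeholders standing in for the FE rotor solver and the AMSE criterion, and the budget is reduced from 400 for brevity:

```python
import numpy as np

# Skeleton of the one-at-a-time enrichment protocol. `response` is a cheap
# analytic stand-in for the FE rotor solver, and the distance-to-nearest-DoE
# score is only a space-filling placeholder for the AMSE acquisition.
rng = np.random.default_rng(1)
N = 10                  # number of random input variables
n_init = 10 * N         # initial DoE size (10N)
budget = 40             # added points (400 in the study; reduced here)

def response(X):        # placeholder for the rotor model
    return np.sum(X ** 2, axis=1)

X_doe = rng.normal(size=(n_init, N))
y_doe = response(X_doe)

for _ in range(budget):
    cand = rng.normal(size=(200, N))                       # candidate pool
    d = np.linalg.norm(cand[:, None, :] - X_doe[None, :, :], axis=2)
    score = d.min(axis=1)          # distance to the nearest DoE point
    x_new = cand[np.argmax(score)]
    X_doe = np.vstack([X_doe, x_new])
    y_doe = np.append(y_doe, response(x_new[None, :]))
```

In the actual method, the metamodel is refit after each addition and the score is the error-pursuing acquisition value rather than a geometric distance.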
From Table 9, it can be seen that, after the stopping criterion was reached, the best prediction was achieved by the AMSE method, which had the smallest relative error for all of the first four moments. The CV-Voronoi and MMSE methods showed comparable accuracy for the low-order moments but performed poorly for the high-order moments, while the MEI method performed the worst across all statistical moments. It is worth noting that the relative errors of the AMSE method in the skewness and kurtosis indicators were much smaller than those of the other methods, demonstrating good predictive ability for higher-order statistical moments.
As can be seen in Figure 14, the MEI, CV-Voronoi, and MMSE methods all showed a good decreasing error trend for the skewness and kurtosis predictions until 150 sample points had been added; during subsequent sampling, however, their errors suddenly increased. This abrupt change in model performance indicates that adding a new sample significantly reduced the amount of information obtained from the DoE, suggesting that these three methods did not discriminate the error information of the sample points well for this high-dimensional, strongly non-linear rotor system. The proposed AMSE method, by contrast, showed a good convergence trend, with the relative errors of both the skewness and kurtosis predictions decreasing steadily as sample points were added.
Overall, the proposed AMSE method offered a clear advantage in estimating the statistical moments of the random response, achieving the best accuracy and a comparable convergence speed relative to the other adaptive sampling algorithms.

6. Conclusions

In this paper, an error-pursuing adaptive method based on BSVR was proposed for efficient and accurate uncertainty analysis. This adaptive method was improved by a new error-pursuing active learning function named AMSE, which guided the adaptive sampling of the BSVR metamodel’s DoE. To address the deficiency of the mean square error in characterizing the prediction error information of sample points, the AMSE active learning function adjusts the mean square error using the leave-one-out (LOO) cross-validation error, and the adjusted prediction variance information depends not only on the relative distance between sample points, but also on their errors.
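One plausible reading of this adjustment is sketched below in 1-D, with a simple RBF interpolator standing in for the BSVR metamodel: the predictive variance at each candidate is scaled by the leave-one-out error of its nearest DoE point. The kernel, its length scale, and the nearest-neighbour weighting are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# A 1-D sketch of an AMSE-style acquisition: a GP-like predictive variance
# (exploration) is scaled by a local leave-one-out error (exploitation of
# regions where the current model is demonstrably wrong).
def kern(a, b, ell=0.3):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def fit_predict(X, y, Xs, nugget=1e-6):
    K = kern(X, X) + nugget * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    Ks = kern(Xs, X)
    mean = Ks @ alpha
    var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
    return mean, np.maximum(var, 0.0)

def loo_errors(X, y):
    """Leave-one-out prediction error at each DoE point."""
    e = np.empty(len(X))
    for i in range(len(X)):
        Xi, yi = np.delete(X, i), np.delete(y, i)
        pred, _ = fit_predict(Xi, yi, X[i:i + 1])
        e[i] = abs(pred[0] - y[i])
    return e

f = lambda x: x * np.sin(4 * np.pi * x)      # toy target function
X = np.linspace(0.0, 1.0, 8)
y = f(X)
cand = np.linspace(0.0, 1.0, 500)            # candidate pool
_, var = fit_predict(X, y, cand)
e = loo_errors(X, y)
nearest = np.abs(cand[:, None] - X[None, :]).argmin(axis=1)
amse = var * e[nearest]                      # adjusted mean square error
x_new = cand[np.argmax(amse)]                # next point to evaluate
```

Because the variance term favours sparsely sampled regions while the LOO term favours regions where the current model errs, their product concentrates new points where both conditions hold, which is the behaviour the AMSE function is designed to produce.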
To verify the effectiveness of this uncertainty analysis method, six numerical examples with different dimensions were investigated, and the results were compared in detail with other frequently used error-pursuing active learning functions, namely the MMSE, MEI, and CV-Voronoi. The results showed that the proposed method had better accuracy and comparable convergence speed, in addition to better prediction of the higher-order statistical moments. Furthermore, a single-disc overhung rotor system subject to 10 uncertain parameters was studied to demonstrate the computational performance of the method; with the same number of added sample points, the method showed a significant accuracy advantage.
Overall, the proposed AMSE error-pursuing active learning function outperformed the others in terms of accuracy and convergence speed. Moreover, an adaptive metamodel constructed with this method shows promising applications in the field of uncertainty analysis.

Author Contributions

Conceptualization, S.-T.Z. and J.J.; methodology, S.-T.Z.; software, J.J.; validation, S.-T.Z., J.J. and J.-M.Z.; formal analysis, P.-H.C.; investigation, Q.X.; resources, S.-T.Z.; data curation, J.J.; writing—original draft preparation, J.J.; writing—review and editing, J.J.; visualization, J.J. and P.-H.C.; supervision, S.-T.Z., J.-M.Z. and Q.X.; funding acquisition, S.-T.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 52065022), the Natural Science Foundation of Jiangxi, China (No. 20202BABL204036), the project of the Jiangxi Provincial Department of Education (GJJ210638), and the open project of the Key Laboratory of Conveyance and Equipment of Ministry of Education at East China Jiaotong University (No. KLCE2021-04, KLCEZ2022-07).

Data Availability Statement

Not applicable here.

Acknowledgments

This program is jointly supported by projects of the National Natural Science Foundation of China, the State Key Laboratory of Performance Monitoring and Protecting of Rail Transit Infrastructure, and the Key Laboratory of Conveyance and Equipment at East China Jiaotong University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Roy, C.J.; Oberkampf, W.L. A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Comput. Methods Appl. Mech. Eng. 2011, 200, 2131–2144.
  2. Soize, C. Uncertainty Quantification; Springer: Cham, Switzerland, 2017.
  3. Zhou, T.; Peng, Y. Adaptive Bayesian quadrature based statistical moments estimation for structural reliability analysis. Reliab. Eng. Syst. Saf. 2020, 198, 106902.
  4. Xu, J.; Dang, C. A new bivariate dimension reduction method for efficient structural reliability analysis. Mech. Syst. Signal Process. 2019, 115, 281–300.
  5. Zhang, X.; Low, Y.M.; Koh, C.G. Maximum entropy distribution with fractional moments for reliability analysis. Struct. Saf. 2020, 83, 101904.
  6. Dang, C.; Xu, J. Novel algorithm for reconstruction of a distribution by fitting its first-four statistical moments. Appl. Math. Model. 2019, 71, 505–524.
  7. Zhao, H.; Li, S.; Ru, Z. Adaptive reliability analysis based on a support vector machine and its application to rock engineering. Appl. Math. Model. 2017, 44, 508–522.
  8. Pan, Q.; Dias, D. An efficient reliability method combining adaptive support vector machine and Monte Carlo simulation. Struct. Saf. 2017, 67, 85–95.
  9. Cheng, K.; Lu, Z. Active learning polynomial chaos expansion for reliability analysis by maximizing expected indicator function prediction error. Int. J. Numer. Methods Eng. 2020, 121, 3159–3177.
  10. Wang, J.; Li, C.; Xu, G.; Li, Y.; Kareem, A. Efficient structural reliability analysis based on adaptive Bayesian support vector regression. Comput. Methods Appl. Mech. Eng. 2021, 387, 114172.
  11. Cheng, K.; Lu, Z. Adaptive Bayesian support vector regression model for structural reliability analysis. Reliab. Eng. Syst. Saf. 2021, 206, 107286.
  12. Law, M.H.; Kwok, J.T.Y. Bayesian support vector regression. In Proceedings of the International Workshop on Artificial Intelligence and Statistics, Key West, FL, USA, 4–7 January 2001.
  13. Gao, J.B.; Gunn, S.R.; Harris, C.J.; Brown, M. A probabilistic framework for SVM regression and error bar estimation. Mach. Learn. 2002, 46, 71–89.
  14. Sollich, P. Bayesian methods for support vector machines: Evidence and predictive class probabilities. Mach. Learn. 2002, 46, 21–52.
  15. Chu, W.; Keerthi, S.S.; Ong, C.J. Bayesian support vector regression using a unified loss function. IEEE Trans. Neural Netw. 2004, 15, 29–44.
  16. Cheng, K.; Lu, Z. Active learning Bayesian support vector regression model for global approximation. Inf. Sci. 2021, 544, 549–563.
  17. Echard, B.; Gayton, N.; Lemaire, M. AK-MCS: An active learning reliability method combining Kriging and Monte Carlo simulation. Struct. Saf. 2011, 33, 145–154.
  18. Liu, H.; Ong, Y.S.; Cai, J. A survey of adaptive sampling for global metamodeling in support of simulation-based complex engineering design. Struct. Multidiscip. Optim. 2018, 57, 393–416.
  19. Shewry, M.C.; Wynn, H.P. Maximum entropy sampling. J. Appl. Stat. 1987, 14, 165–170.
  20. Jin, R.; Chen, W.; Sudjianto, A. On sequential sampling for global metamodeling in engineering design. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Montreal, QC, Canada, 29 September–2 October 2002.
  21. Lin, Y.; Mistree, F.; Allen, J.K.; Tsui, K.L.; Chen, V.C. A sequential exploratory experimental design method: Development of appropriate empirical models in design. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Salt Lake City, UT, USA, 28 September–2 October 2004.
  22. Fuhg, J.N.; Fau, A.; Nackenhorst, U. State-of-the-art and comparative review of adaptive sampling methods for kriging. Arch. Comput. Methods Eng. 2021, 28, 2689–2747.
  23. Aute, V.; Saleh, K.; Abdelaziz, O.; Azarm, S.; Radermacher, R. Cross-validation based single response adaptive design of experiments for Kriging metamodeling of deterministic computer simulations. Struct. Multidiscip. Optim. 2013, 48, 581–605.
  24. Xu, S.; Liu, H.; Wang, X.; Jiang, X. A robust error-pursuing sequential sampling approach for global metamodeling based on Voronoi diagram and cross validation. J. Mech. Des. 2014, 136, 071009.
  25. Jiang, P.; Shu, L.; Zhou, Q. A novel sequential exploration-exploitation sampling strategy for global metamodeling. IFAC-PapersOnLine 2015, 48, 532–537.
  26. Jiang, C.; Cai, X.; Qiu, H. A two-stage support vector regression assisted sequential sampling approach for global metamodeling. Struct. Multidiscip. Optim. 2018, 58, 1657–1672.
  27. Roy, A.; Chakraborty, S. Support vector regression based metamodel by sequential adaptive sampling for reliability analysis of structures. Reliab. Eng. Syst. Saf. 2020, 200, 10694.
  28. Lam, C.Q. Sequential Adaptive Designs in Computer Experiments for Response Surface Model Fit. Doctoral Dissertation, The Ohio State University, Columbus, OH, USA, 2008.
  29. Liu, H.; Xu, S.; Ma, Y.; Chen, X.; Wang, X. An adaptive Bayesian sequential sampling approach for global metamodeling. J. Mech. Des. 2016, 138, 011404.
  30. Loeppky, J.L.; Sacks, J.; Welch, W.J. Choosing the sample size of a computer experiment: A practical guide. Technometrics 2009, 51, 366–376.
  31. Marelli, S.; Sudret, B. UQLab: A framework for uncertainty quantification in Matlab. In Proceedings of the 2nd International Conference on Vulnerability and Risk Analysis and Management, Liverpool, UK, 13–16 July 2014.
  32. Zhou, S.; Zhang, P.; Xiao, Q. Global sensitivity analysis for peak response of a cantilevered rotor with single disc during startup. J. Vib. Shock 2021, 40, 17–25. (In Chinese)
Figure 1. Flowchart of adaptive sampling schemes for metamodeling.
Figure 2. Illustration of a highly non-linear situation. Note that the black triangle represents the initial sample points, while the blue diamond indicates the candidate sample points.
Figure 3. Illustration of the distance constraint of the CV-Voronoi active learning function.
Figure 4. Prediction error of the metamodel through five initial samples.
Figure 5. The plot of the metamodel and its training sample: (a) samples generated by EI, CV-Voronoi, MMSE, and AMSE for the test case; (b) BSVR metamodel built by different methods.
Figure 6. Flowchart of the proposed error-pursuing adaptive uncertainty analysis method.
Figure 7. Convergence curve of example P1: (a) mean; (b) standard deviation; (c) skewness; (d) kurtosis.
Figure 8. Convergence curve of example P2: (a) skewness; (b) kurtosis.
Figure 9. Convergence curve of example P3: (a) skewness; (b) kurtosis.
Figure 10. Convergence curve of example P4: (a) skewness; (b) kurtosis.
Figure 11. Convergence curve of example P5: (a) skewness; (b) kurtosis.
Figure 12. Convergence curve of example P6: (a) skewness; (b) kurtosis.
Figure 13. The overhung rotor system.
Figure 14. Convergence history for the overhung rotor system: (a) skewness; (b) kurtosis.
Table 1. Six benchmark test functions.

| Test Function | Expression | Performance |
| P1 | $y = 3(1-x_1)^2 e^{a} - 10\left(\frac{x_1}{5} - x_1^3 - x_2^5\right) e^{b} - \frac{1}{3} e^{c}$, with $a = -x_1^2 - (x_2+1)^2$, $b = -x_1^2 - x_2^2$, $c = -(x_1+1)^2 - x_2^2$ | low-dimensional, non-linear, multimodal |
| P2 | $y = \cos\left((x_1-3)^2 + (x_2-2)^2\right)$ | low-dimensional, non-linear, multimodal |
| P3 | $y = 0.5 + \sin\left(\dfrac{(x_1^2+x_2^2)^2}{1 + 0.1(x_1^2+x_2^2)^2}\right)$ | low-dimensional, non-linear, multimodal |
| P4 | $y = (30 + x_1\sin x_1)\left(4 + e^{-x_2^2}\right)$ | low-dimensional, non-linear, multimodal |
| P5 | $y = \dfrac{1}{1 + x_1^2 + \cdots + x_4^2}$ | medium-dimensional, non-linear |
| P6 | $y = -20\exp\left(-0.2\sqrt{0.1\textstyle\sum_{i=1}^{8} x_i^2}\right) - \exp\left(0.1\textstyle\sum_{i=1}^{8}\cos(2\pi x_i)\right) + 20$ | high-dimensional, non-linear |
Table 2. Statistical information of the random variables.

| Test Function | Number of Variables | Distribution | Parameter 1 | Parameter 2 |
| P1 | 2 | Uniform | −4 | 4 |
| P2 | 2 | Uniform | −5 | 5 |
| P3 | 2 | Lognormal | 0.5 | 0.25 |
| P4 | 2 | Normal | 5 | 2 |
| P5 | 4 | Normal | 0.5 | 0.25 |
| P6 | 8 | Normal | 0.5 | 0.25 |
Table 3. The sampling configurations for the six test functions.

| Test Function | Initial Samples | Enriched Samples | Candidate Samples | Test Samples |
| P1 | 20 | 100 | 10,000 | 50,000 |
| P2 | 20 | 100 | 10,000 | 50,000 |
| P3 | 20 | 100 | 10,000 | 50,000 |
| P4 | 20 | 100 | 10,000 | 50,000 |
| P5 | 40 | 200 | 10,000 | 50,000 |
| P6 | 80 | 300 | 10,000 | 50,000 |
Table 4. Results of mean.

| Test Function | MEI | CV-Voronoi | MMSE | AMSE |
| P1 | 5.2933 | 7.0123 | 8.4442 | 0.2401 |
| P2 | 1.1367 | 0.4463 | 0.3191 | 0.7125 |
| P3 | 0.1077 | 3.1987 | 0.0873 | 0.4698 |
| P4 | 0.0088 | 0.0202 | 0.0718 | 0.0936 |
| P5 | 0.0843 | 0.0539 | 0.0232 | 0.0099 |
| P6 | 0.1704 | 0.1654 | 0.2119 | 0.0935 |
Table 5. Results of standard deviation.

| Test Function | MEI | CV-Voronoi | MMSE | AMSE |
| P1 | 0.0385 | 0.3849 | 5.0166 | 0.0989 |
| P2 | 0.0733 | 0.0302 | 0.0070 | 0.0076 |
| P3 | 0.0662 | 2.7603 | 0.1313 | 0.4936 |
| P4 | 0.4897 | 0.0362 | 0.7148 | 0.5214 |
| P5 | 0.6369 | 1.3097 | 0.0032 | 0.0647 |
| P6 | 2.6664 | 3.7448 | 3.4027 | 3.0182 |
Table 6. Results of skewness.

| Test Function | MEI | CV-Voronoi | MMSE | AMSE |
| P1 | 6.63228 | 11.54111 | 32.05191 | 1.17126 |
| P2 | 0.35682 | 2.39839 | 1.10245 | 0.07698 |
| P3 | 2.50817 | 69.03483 | 1.46564 | 0.44338 |
| P4 | 1.47453 | 0.74755 | 12.66128 | 0.47580 |
| P5 | 2.13077 | 0.37482 | 0.10402 | 0.27997 |
| P6 | 14.84989 | 12.55071 | 22.71276 | 0.02850 |
Table 7. Results of kurtosis.

| Test Function | MEI | CV-Voronoi | MMSE | AMSE |
| P1 | 2.84158 | 14.56858 | 30.11551 | 0.64745 |
| P2 | 0.07045 | 0.10046 | 0.00992 | 0.0129 |
| P3 | 0.25142 | 6.32713 | 1.11854 | 0.04331 |
| P4 | 9.6449 | 4.61977 | 5.7637 | 0.10787 |
| P5 | 2.56813 | 1.65662 | 0.42069 | 0.33926 |
| P6 | 12.11533 | 0.13212 | 26.65362 | 0.14522 |
Table 8. Statistical information on the random variables.

| Variable | Mean | Standard Deviation |
| Elastic modulus $E$ (Pa) | 2.11 × 10¹¹ | 2.11 × 10¹⁰ |
| Poisson's ratio $\mu$ | 0.30 | 0.03 |
| Density $\rho$ (kg·m⁻³) | 7810 | 781 |
| Disc diameter $D$ (m) | 0.250 | 0.025 |
| Disc thickness $T$ (m) | 0.040 | 0.004 |
| x-direction stiffness of bearing 1, $k_{1x}$ (N·m⁻¹) | 1 × 10⁷ | 1 × 10⁶ |
| x-direction stiffness of bearing 2, $k_{2x}$ (N·m⁻¹) | 1 × 10⁷ | 1 × 10⁶ |
| x-direction damping of bearing 1, $c_{1x}$ (N·s·m⁻¹) | 4 × 10⁵ | 4 × 10⁶ |
| x-direction damping of bearing 2, $c_{2x}$ (N·s·m⁻¹) | 4 × 10⁵ | 4 × 10⁶ |
| Imbalance $f$ (kg·m) | 1 × 10⁻³ | 1 × 10⁻⁴ |
Table 9. Relative error of different methods.

| Moment | MEI | CV-Voronoi | MMSE | AMSE |
| Mean | 0.8403 | 0.1896 | 0.1571 | 0.141 |
| Standard deviation | 1.9916 | 1.1159 | 1.1782 | 0.1922 |
| Skewness | 9.2566 | 3.2339 | 5.0594 | 0.5524 |
| Kurtosis | 13.9278 | 12.4339 | 11.49696 | 1.4485 |

Zhou, S.-T.; Jiang, J.; Zhou, J.-M.; Chen, P.-H.; Xiao, Q. An Error-Pursuing Adaptive Uncertainty Analysis Method Based on Bayesian Support Vector Regression. Machines 2023, 11, 228. https://doi.org/10.3390/machines11020228
