Article

Study on an Assembly Prediction Method of RV Reducer Based on IGWO Algorithm and SVR Model

College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(1), 366; https://doi.org/10.3390/s23010366
Submission received: 4 November 2022 / Revised: 15 December 2022 / Accepted: 26 December 2022 / Published: 29 December 2022
(This article belongs to the Section Industrial Sensors)

Abstract

Existing research methods for the rotation error are time-consuming and have poor calculation accuracy, so they cannot meet enterprises' production takt and product quality requirements. This paper therefore proposes a new method for predicting the rotation error based on improved grey wolf–optimized support vector regression (IGWO-SVR). First, the grey wolf algorithm is improved through optimal Latin hypercube sampling initialization, a nonlinear convergence factor, and dynamic weights to improve its accuracy in optimizing the parameters of the support vector regression (SVR) model. Then, the IGWO-SVR prediction model relating the manufacturing errors of critical parts to the rotation error is established, with the RV-40E reducer as a case study. The results show that the improved grey wolf algorithm has better parameter optimization performance and that the IGWO-SVR method predicts better than the existing back propagation (BP) neural network and sparrow-search-algorithm-optimized BP neural network rotation error prediction methods, as well as the SVR models optimized by the particle swarm and grey wolf algorithms. The mean squared error of the IGWO-SVR model is 0.026, the running time is 7.843 s, and the maximum relative error is 13.5%, which meets the production takt and product quality requirements. Therefore, the IGWO-SVR method can be applied to the rotate vector (RV) reducer parts-matching model to improve product quality and reduce rework rate and cost.

1. Introduction

With the continuous progress of digitalization, artificial intelligence, and smart production, rotate vector (RV) reducers, which are critical constituents of industrial robots, have been mass-produced and widely used. However, limited by equipment precision and manufacturing capability, many RV reducer companies seek to improve assembly precision and reduce scrap through parts matching. Thus, many scholars are devoted to researching fast and accurate RV reducer assembly prediction methods that can meet enterprises' production takt and product quality requirements. The key indicators for evaluating the quality of an RV reducer are transmission power, torsional moment, transmission accuracy, service life, etc. [1]. Among these, the indicator with the greatest impact on the transmission accuracy of the RV-40E reducer is the rotation error, whose size is determined by the quality of each component in the RV-40E reducer drive chain and the quality of the assembly work [2].
Regarding the calculation of the rotation error, many studies have been conducted by domestic and foreign scholars. For example, Blanche [3] proposed a purely geometric method to determine the single-cycloid transmission error of the planetary reducer. Takeshi Ishida [4] proposed a spring equivalence model and used numerical analysis to qualitatively analyze the RV rotation error and establish accurate modeling of it. Yinghui Zhang [5] carried out simulation calculations of the dynamic rotation error by establishing a virtual prototype model of the RV reducer. Xiaotao Tong [6] proposed a back propagation (BP) neural network-based dynamic rotation error prediction method for RV reducers that incorporates five critical manufacturing-error factors of the second-stage cycloid pin-wheel transmission mechanism. Houzhen Sun [7] established a BP neural network transmission-error prediction model based on the sparrow search algorithm (SSA-BP) and selected key dimensional parameters of assembled parts as influencing factors to predict key quality characteristics before assembly. These research results can calculate the RV reducer rotation error to a certain extent, but defects such as poor calculation accuracy and long operation times arise when they are used in actual assembly. For example, the calculation process of the pure geometric method is relatively cumbersome, and it does not take into account actual part machining errors and assembly errors. The mathematical analysis of the equivalent model leaves a large gap between the modeling results and the actual situation owing to the massive simplifications made during modeling. Further, the model construction of the virtual prototype analysis method is complicated and time-consuming.
Although the BP neural network prediction method can achieve fast prediction of rotation errors, it needs massive sample data to obtain good prediction accuracy because of its complex internal structure. The existing research methods for the rotation error are thus difficult to use in practical assembly because they are time-consuming and have poor calculation accuracy. Therefore, to address these disadvantages, this paper establishes a new prediction method for the rotation error to achieve fast and accurate predictions.

1.1. Literature Review

The rotation error prediction problem is characterized by multifactor coupling, high-dimensional nonlinearity, and a limited number of samples, which gives support vector regression (SVR) certain application advantages over the existing BP and SSA-BP neural network rotation error prediction methods. The SVR method in machine learning is characterized by strong generalizability and a simple structure when solving small-sample, high-dimensional nonlinear problems [8], and it is used in many fields. For example, Balogun [9] solved the spatial prediction problem of landslide susceptibility by using the SVR model combined with the grey wolf optimization (GWO) algorithm, the bat algorithm, and the cuckoo optimization algorithm. Zichen Zhang [10] solved the power load forecasting problem by using the SVR model combined with the GWO algorithm. Peng [11] solved the pipeline corrosion rate prediction problem by combining the SVR model with principal component analysis and the chaotic particle swarm optimization algorithm. Therefore, given the shortcomings of the existing rotation error research methods and the performance of the SVR method on high-dimensional, nonlinear, small-sample problems, this paper uses the SVR method to establish a prediction model for the rotation error.
Although the modeling process is simple, the values of the penalty factor and the kernel function parameter in the SVR model have a significant impact on the model's prediction performance [12,13]. Therefore, to use the SVR model to achieve better predictions of rotation errors, the parameter optimization problem of the SVR model must first be solved. Currently, population intelligence algorithms and their improved versions are becoming increasingly popular for solving different optimization problems and are widely used in many fields [14]. For example, for dynamic economic emission dispatch problems, Zhifeng Liu [15] proposed a novel solving approach based on the enhanced moth-flame optimization algorithm, which effectively reduced fuel costs as well as the pollutant emissions of power generation systems; Lingling Li [16] proposed an improved tunicate swarm algorithm for optimizing fuel costs and pollutant emissions, which obtained the optimal dynamic scheduling scheme. For feature-selection problems, Tubishat [17] proposed a dynamic salp swarm algorithm (DSSA) whose improvements significantly enhanced its performance in solving feature-selection problems; Hongliang Zhang [18] used chaotic initialization and differential evolution to improve the sparrow search algorithm (SSA) and increase its convergence speed, achieving good optimization results on feature-selection problems. In the field of mechanical engineering, population intelligence algorithms solve a wide variety of optimization problems [19]. Abderazek [20] proposed a queuing search algorithm to optimize the main parameters of the spur gear. Yaliang Wang [21] proposed a cellular differential evolutionary algorithm with double-stage external population leading to solve the optimal design of cycloid reducer parameters.
Because of the powerful optimization capability of population intelligence algorithms, domestic and foreign scholars usually also use intelligent algorithms such as particle swarm optimization (PSO) and GWO algorithms to optimize the parameters in the SVR model [22,23].
The GWO algorithm is a novel heuristic algorithm proposed by Mirjalili et al. in 2014; it has the advantages of fast convergence and high solution accuracy compared with other intelligent algorithms [24]. However, the algorithm itself suffers from poor initial population diversity, slow convergence in the later stage, and a tendency to fall into local optima. To address these drawbacks, various improvement methods have been proposed and successfully applied by domestic and foreign scholars [25,26]. Although the GWO algorithm has stronger convergence performance than other traditional intelligent algorithms, it does not have a great advantage in parameter optimization problems. Therefore, to further improve its parameter optimization performance, this paper proposes an improved grey wolf optimization (IGWO) algorithm based on initialization by the optimal Latin hypercube sampling (OLHS) method, a nonlinear convergence factor, and dynamic weights. In this study, the optimization capability of the proposed IGWO algorithm is compared with those of the GWO, PSO, and SSA algorithms on six test functions, and the performance of the IGWO algorithm in solving the SVR model parameter optimization problem is verified with an actual case.

1.2. Major Contributions

The current research methods for the rotation error are difficult to apply to the actual assembly process of the RV reducer because they are time-consuming and have poor calculation accuracy. To solve this problem, and given the high-dimensional, nonlinear, small-sample characteristics of the rotation error prediction problem, this paper proposes a rotation error prediction method based on improved grey wolf–optimized support vector regression (IGWO-SVR). In addition, to improve the prediction accuracy of the SVR model, this paper improves the population initialization, the convergence factor, and the head wolves' weights of the GWO algorithm to improve its accuracy for SVR model parameter optimization, and the improved algorithm's optimization performance is verified on test functions. Finally, the parameter optimization ability of the IGWO algorithm and the prediction performance of the IGWO-SVR method are verified using a practical case. The main contributions of this study are as follows:
  • An improved grey wolf optimization algorithm is proposed, with three improvements:
    • The initial population is generated by optimal Latin hypercube sampling to increase its diversity.
    • The convergence factor is improved with a cosine nonlinear function, which strengthens the global search ability in the early stage and the convergence speed in the later stage of the algorithm.
    • A dynamic weighting strategy accelerates the algorithm's convergence to the optimal solution.
  • A new rotation error prediction method based on the IGWO algorithm and the SVR model is established to achieve fast and accurate predictions of rotation errors.
  • As verified by the RV reducer example, the IGWO-SVR method shows better prediction performance than other rotation error prediction methods, and the IGWO algorithm also shows good parameter optimization performance.

2. Structural Principle and Rotation Error of RV Reducer

2.1. Structural Principle Analysis of RV Reducer

There are various models of RV reducers; this article focuses on the RV-40E reducer, which consists primarily of a planetary wheel, a crankshaft, a cycloid wheel, a flange, a needle gear, etc. Figure 1a shows a schematic diagram of its structure. The rotation error, the critical quality characteristic of the RV reducer, is defined as the difference between the theoretical and actual output angles. It is affected by the processing error of each component of the transmission chain, the amount of contact gear tooth profile modification, assembly clearance, elastic-plastic deformation, etc. [27].
The transmission of the RV reducer is divided into two stages: the first-stage involute planetary gear transmission and the second-stage cycloid transmission. Figure 1b shows the transmission principle. First, because the first-stage transmission system is far from the output end and the transmission ratio of the second stage is 3~25 times that of the first stage, the impact of the first-stage involute planetary transmission system on the overall rotation error of the assembly is greatly reduced. Second, because the influence of the second-stage cycloid pin-wheel transmission on the transmission error is reflected directly on the output shaft, the transmission accuracy of the RV reducer depends mainly on the second-stage rotation error [28]. Therefore, this paper considers only the second-stage cycloid pin-wheel transmission system when establishing the rotation error prediction model and ignores the first-stage transmission.

2.2. Analysis of Influencing Factors of Rotation Error

There are many manufacturing errors that affect the rotation error in the second-stage transmission system, and it is difficult to study the manufacturing errors of the parts one by one. Therefore, this paper uses the Taylor series expansion principle to conduct a sensitivity analysis of the manufacturing errors and identify the primary influencing factors among the manufacturing errors in the second-stage cycloid pin-wheel transmission. The parameters of the influencing factors are denoted as $\theta = [\theta_1, \theta_2, \ldots, \theta_n]$. According to the principle of sensitivity analysis, the rotation error model is defined in Equation (1):

$$\varphi = \varphi(\theta_1, \theta_2, \ldots, \theta_n) \tag{1}$$
Using the Taylor series expansion principle, the sensitivity index of each error can be defined as in Equation (2):

$$S_i = \frac{\partial \varphi / \partial \theta_i}{\partial \varphi / \partial \theta_0}, \quad i = 1, 2, 3, \ldots, n \tag{2}$$

where $\partial \varphi / \partial \theta_0$ is the sensitivity of the reference factor; the crank-bearing clearance is chosen as the reference in this paper.
Table 1 shows the sensitivity analysis results for eight key manufacturing errors in the second-stage cycloid pin-wheel transmission. According to Table 1, this paper selects five factors, namely the cycloid gear isometric modification error ($\theta_1$), the radius error of the needle tooth center circle ($\theta_2$), the cycloid gear shift modification error ($\theta_3$), the needle tooth radius error ($\theta_4$), and the crankshaft eccentricity error ($\theta_5$), to establish the rotation error prediction model. Although the sensitivity index of the crankshaft eccentricity error is very small, the crank-bearing clearance error that it can cause has a great influence on the rotation error; therefore, the crankshaft eccentricity error is also considered in the prediction model.
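As an illustration of Equation (2), the sensitivity indices can be approximated by central finite differences. The stand-in model `phi` and its coefficients below are hypothetical, not the paper's rotation error model:

```python
import numpy as np

def sensitivity_indices(phi, theta, ref_index=0, h=1e-6):
    """Central-difference approximation of the sensitivity index of Eq. (2):
    S_i = (dphi/dtheta_i) / (dphi/dtheta_ref)."""
    theta = np.asarray(theta, dtype=float)
    grads = np.empty_like(theta)
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = h
        grads[i] = (phi(theta + step) - phi(theta - step)) / (2 * h)
    return np.abs(grads / grads[ref_index])

# Hypothetical stand-in for the rotation error model phi(theta): a weighted
# sum of three manufacturing-error factors (illustration only).
phi = lambda t: 1.0 * t[0] + 0.5 * t[1] + 0.2 * t[2]
S = sensitivity_indices(phi, [0.1, 0.1, 0.1])  # -> approximately [1.0, 0.5, 0.2]
```

For a linear stand-in model the central difference is exact, so each index reduces to the ratio of the coefficients.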

3. The Improvement of the GWO Algorithm

3.1. GWO Algorithm

The GWO algorithm was proposed in 2014 by Mirjalili [24] as a novel heuristic swarm intelligence method based on simulating the hierarchical mechanism and hunting mode of the grey wolf pack. The pack is generally divided into four classes: the three fittest wolves are designated wolfα, wolfβ, and wolfγ, and the rest are ω wolves. In the hunting process, wolfα, wolfβ, and wolfγ lead the ω wolves to continuously search for, surround, and attack prey, and wolfα, wolfβ, and wolfγ are updated at each iteration.
Step 1: surrounding the prey. The ω wolves are constantly updated according to the prey's position and their distance from the prey, thus gradually approaching the prey. The mathematical formulas are as follows:

$$D = |C \cdot X_p(t) - X(t)| \tag{3}$$

$$X(t+1) = X_p(t) - A \cdot D \tag{4}$$

where $D$ denotes the distance between an ω wolf and the prey, $t$ denotes the current iteration, $X_p(t)$ denotes the position of the prey at the $t$th iteration, $X(t)$ denotes the position vector of the grey wolf at the $t$th iteration, and $X(t+1)$ denotes the updated position of the ω wolf at the $(t+1)$th iteration. Finally, $C$ and $A$ are random coefficients, formulated as follows:

$$A = 2\alpha \cdot r_1 - \alpha \tag{5}$$

$$C = 2 r_2 \tag{6}$$

$$\alpha = 2 - \frac{2t}{t_{max}} \tag{7}$$

where $r_1$ and $r_2$ denote random vectors in the interval [0, 1], $t$ denotes the current iteration, $t_{max}$ denotes the maximum number of iterations, and $\alpha$ denotes the convergence factor.
Step 2: hunting. The hunting behavior is completed under the leadership of wolfα, wolfβ, and wolfγ. Their leading of the ω wolves depends mainly on the coefficient $A$ and the distance $D$ between the ω wolf and the prey. Figure 2 shows a schematic diagram of an ω wolf position update, which can be formulated as follows:

$$\begin{cases} D_\alpha = |C \cdot X_\alpha - X(t)| \\ D_\beta = |C \cdot X_\beta - X(t)| \\ D_\gamma = |C \cdot X_\gamma - X(t)| \end{cases} \tag{8}$$

$$\begin{cases} X_1 = X_\alpha - A_1 \cdot D_\alpha \\ X_2 = X_\beta - A_2 \cdot D_\beta \\ X_3 = X_\gamma - A_3 \cdot D_\gamma \end{cases} \tag{9}$$

$$X(t+1) = \frac{X_1 + X_2 + X_3}{3} \tag{10}$$

where $D_\alpha$, $D_\beta$, and $D_\gamma$ indicate the distances between the grey wolf and wolfα, wolfβ, and wolfγ, respectively; $X_\alpha$, $X_\beta$, and $X_\gamma$ indicate the positions of wolfα, wolfβ, and wolfγ, respectively; and $X(t+1)$ indicates the updated position of the ω wolf at the $(t+1)$th iteration. Iterations proceed by the above method, and the optimal solution is obtained when the iteration count satisfies the termination condition.
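The hunting loop above can be sketched as follows. The sphere test function, bounds, and population settings are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gwo(fitness, dim, n_wolves=30, t_max=100, lb=-10.0, ub=10.0, seed=0):
    """Minimal grey wolf optimizer: the three fittest wolves (alpha, beta,
    gamma) lead, and each omega wolf moves to the mean of the three
    candidate positions they induce (Eq. (10))."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))              # random initial pack
    for t in range(t_max):
        a = 2 - 2 * t / t_max                             # linear convergence factor
        order = np.argsort([fitness(x) for x in X])
        alpha, beta, gamma = X[order[:3]]                 # leaders of this round
        for i in range(n_wolves):
            cand = []
            for leader in (alpha, beta, gamma):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2             # random coefficients
                D = np.abs(C * leader - X[i])             # distance to leader
                cand.append(leader - A * D)               # candidate position
            X[i] = np.clip(np.mean(cand, axis=0), lb, ub) # plain average update
    best = min(X, key=lambda x: fitness(x))
    return best, fitness(best)

# Sphere function as an assumed toy objective; the minimum is 0 at the origin.
best, f_best = gwo(lambda x: float(np.sum(x ** 2)), dim=2)
```

On this 2-D sphere function the pack collapses toward the origin well before the iteration budget is exhausted, which is the behavior the convergence factor is designed to produce.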

3.2. Improved GWO Algorithm

The traditional GWO algorithm has some disadvantages: poor initial population diversity, slow convergence in the later stage, and a tendency to fall into local optima [25]. In addition, inaccurate parameter optimization results from the GWO algorithm may give the SVR model poor forecast precision for the rotation error, thus failing to meet the requirements of the RV reducer assembly line. Hence, this article proposes an IGWO algorithm based on initialization by the OLHS method, a cosine nonlinear convergence factor, and dynamic weights to improve the accuracy of the algorithm's parameter search.

3.2.1. Wolf Pack Initialization by the OLHS Method

In intelligent optimization algorithms, the distribution of the initial population influences the global search speed and the convergence accuracy, and a uniform and diverse population distribution facilitates the optimization properties of the algorithm [29]. The traditional GWO algorithm generates the initial population randomly, so the population lacks diversity. In addition, an uneven distribution of the initial population may cause the algorithm to run into a local optimum; that is, if the initial range does not cover the location of the global optimum, the algorithm cannot find it.
The OLHS method is an improvement on the Latin hypercube sampling (LHS) method. The traditional LHS rule can only ensure that the projections onto each coordinate axis are uniformly distributed; the distribution in space is not necessarily uniform. To improve the space-filling property, Johnson [30] proposed the minimax rule in 1990, which is one of the most widely used methods to evaluate sampling uniformity; however, its calculation scale is very large. Morris [31] therefore proposed the OLHS method in 1995 on the basis of a scalar discriminant function, which guarantees the space-filling characteristic while reducing the calculation scale. The scalar discriminant function is as follows:

$$\varphi_q(X) = \left( \sum_{i=1}^{m} J_i d_i^{-q} \right)^{1/q} \tag{11}$$

where $d_i$, $i = 1, 2, 3, \ldots, m$, indicates the distances between all possible pairs of points in the sampling matrix $X$; $J_i$ indicates the number of pairs of points at distance $d_i$ in the sampling matrix $X$; and $q$, a positive integer, indicates the norm of the space.
In this paper, the OLHS method with good space-filling property is used to initialize the distribution position of grey wolves. Figure 3 shows the different distributions of the initializations of the random method and the OLHS method when generating 100 grey wolf individuals between [0, 1] in two dimensions. According to Figure 3, it is obvious that the initialization of the OLHS method can result in a more uniform distribution of the initial population in the sample space, and the more uniform initial population can provide more information on the global optimal solution, which can improve the algorithm’s global search capability and convergence speed.
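A minimal sketch of LHS initialization plus the scalar discriminant function above. Keeping the most space-filling of several random LHS designs is a crude stand-in for the full OLHS optimization; the sizes and seeds are illustrative:

```python
import numpy as np
from scipy.spatial.distance import pdist

def latin_hypercube(n, dim, seed=0):
    """Stratified LHS in [0, 1]^dim: one sample per equal-width stratum
    on every axis, with an independent random permutation per axis."""
    rng = np.random.default_rng(seed)
    u = rng.random((n, dim))
    out = np.empty((n, dim))
    for j in range(dim):
        out[:, j] = (rng.permutation(n) + u[:, j]) / n
    return out

def phi_q(X, q=10):
    """Morris-Mitchell scalar discriminant function; smaller is better."""
    d = pdist(X)
    return float(np.sum(d ** (-q)) ** (1.0 / q))

# Crude stand-in for OLHS: keep the most space-filling of several LHS designs.
designs = [latin_hypercube(100, 2, seed=s) for s in range(20)]
best_design = min(designs, key=phi_q)
```

Each column of an LHS design hits every one of the $n$ equal-probability strata exactly once, which is the property that gives the initial wolf pack its axis-wise uniformity.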

3.2.2. Nonlinear Convergence Factor

The key problem of the traditional GWO algorithm is the trade-off between global and local optimization ability: the global optimization capacity affects the stability and accuracy of the optimization, and the local optimization capacity affects the rate of convergence. The traditional GWO algorithm balances global and local search through the change in $A$, which is controlled by the linear variation of α. When $|A| \le 1$, the algorithm tends toward local search; when $|A| > 1$, the algorithm tends toward global search.

The convergence factor α in Equation (7) decreases linearly from 2 to 0. However, the convergence of the algorithm is often a nonlinear process, so the linear decrease of α is ill-suited to controlling the convergence process, causing the algorithm to easily run into a local optimum [32]. The research of Yuxiang Hou [33] shows that a nonlinear transformation of α can enhance the optimization capacity of the algorithm and avoid local optima. Therefore, this paper uses the cosine nonlinear convergence factor α of Equation (12) to control the convergence process of the algorithm:

$$\alpha = \cos\left( \frac{t}{t_{max}} \pi \right) + 1 \tag{12}$$

where $t$ indicates the current iteration, $t_{max}$ indicates the maximum number of iterations, and $\alpha$ indicates the convergence factor. The convergence processes of α for the IGWO and traditional GWO algorithms are shown in Figure 4.
According to Figure 4, the α of the GWO algorithm decreases linearly, which does not match the nonlinear convergence process of the algorithm. The α of the IGWO algorithm decreases nonlinearly: during the first half of the iterations, it decreases more slowly than in the traditional GWO algorithm, improving the global search ability and preventing the algorithm from falling into a local optimum in the early stage; during the second half, its decrease accelerates, improving the convergence speed and accuracy of the local search. In summary, the cosine nonlinear improvement of the convergence factor better balances the global and local optimization capabilities of the algorithm.
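The two schedules can be compared directly; this is a straightforward transcription of the linear and cosine formulas for α, with the sample iteration counts chosen only for illustration:

```python
import math

def alpha_linear(t, t_max):
    """Linear decrease of the GWO convergence factor."""
    return 2 - 2 * t / t_max

def alpha_cosine(t, t_max):
    """Cosine decrease used by the IGWO algorithm, Eq. (12)."""
    return math.cos(math.pi * t / t_max) + 1

# Both start at 2 and end at 0, but the cosine schedule stays higher in the
# first half (stronger global search) and drops faster in the second half.
early = (alpha_linear(25, 100), alpha_cosine(25, 100))
late = (alpha_linear(75, 100), alpha_cosine(75, 100))
```

At one quarter of the run the cosine value is about 1.707 versus 1.5 for the linear schedule; at three quarters it is about 0.293 versus 0.5, which is exactly the slow-then-fast behavior described above.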

3.2.3. Weight-Based Grey Wolf Position Update

In the GWO algorithm, the leading weights of wolfα, wolfβ, and wolfγ are the same, which leads to slow convergence. Many scholars have verified that a better new grey wolf pack can be generated through weighted fitness or weighted distance, thus accelerating convergence. In this paper, the weights are adjusted dynamically according to fitness and distance, and the position update formula in Equation (10) is transformed into Equation (14). The mathematical expressions of the weights and the position update formula are as follows:
$$\begin{cases} W_{\alpha 1} = \dfrac{f_\alpha + f_\beta + f_\gamma}{f_\alpha}, \quad W_{\beta 1} = \dfrac{f_\alpha + f_\beta + f_\gamma}{f_\beta}, \quad W_{\gamma 1} = \dfrac{f_\alpha + f_\beta + f_\gamma}{f_\gamma} \\[4pt] W_{\alpha 2} = \dfrac{|X_1|}{|X_1 + X_2 + X_3|}, \quad W_{\beta 2} = \dfrac{|X_2|}{|X_1 + X_2 + X_3|}, \quad W_{\gamma 2} = \dfrac{|X_3|}{|X_1 + X_2 + X_3|} \\[4pt] W_1 = \dfrac{W_{\alpha 1} W_{\alpha 2}}{W_{\alpha 1} W_{\alpha 2} + W_{\beta 1} W_{\beta 2} + W_{\gamma 1} W_{\gamma 2}}, \quad W_2 = \dfrac{W_{\beta 1} W_{\beta 2}}{W_{\alpha 1} W_{\alpha 2} + W_{\beta 1} W_{\beta 2} + W_{\gamma 1} W_{\gamma 2}}, \quad W_3 = \dfrac{W_{\gamma 1} W_{\gamma 2}}{W_{\alpha 1} W_{\alpha 2} + W_{\beta 1} W_{\beta 2} + W_{\gamma 1} W_{\gamma 2}} \end{cases} \tag{13}$$

$$X(t+1) = W_1 X_1 + W_2 X_2 + W_3 X_3 \tag{14}$$

where $t$ indicates the current iteration; $f_\alpha$, $f_\beta$, and $f_\gamma$ are the current fitness values of wolfα, wolfβ, and wolfγ, respectively; and $X(t+1)$ represents the position of the ω wolf at the $(t+1)$th iteration.
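A sketch of the weight computation for the scalar (one-dimensional) case, assuming the $|\cdot|$ in the distance weights denotes absolute value; for vector positions a norm would be substituted. All numeric inputs are illustrative:

```python
def leader_weights(f_alpha, f_beta, f_gamma, x1, x2, x3):
    """Dynamic leader weights for the scalar case: fitness-based weights
    (smaller fitness -> larger weight, for minimization) combined with
    distance-based weights, then normalized to sum to 1."""
    s = f_alpha + f_beta + f_gamma
    w1 = (s / f_alpha, s / f_beta, s / f_gamma)               # fitness weights
    denom = abs(x1 + x2 + x3)
    w2 = (abs(x1) / denom, abs(x2) / denom, abs(x3) / denom)  # distance weights
    prod = [a * b for a, b in zip(w1, w2)]
    total = sum(prod)
    return [p / total for p in prod]                          # W1, W2, W3

# Equal fitness and equal candidate positions give equal weights of 1/3,
# recovering the plain average of the traditional GWO update.
W = leader_weights(1.0, 1.0, 1.0, 2.0, 2.0, 2.0)
x_new = sum(w * x for w, x in zip(W, (2.0, 2.0, 2.0)))        # weighted update
```

The fitness weights grow as a leader's fitness value shrinks, so the best wolf pulls the pack hardest, which is the intended acceleration over the uniform average.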

3.2.4. Validation of IGWO Algorithm

Following the literature [34], six functions are chosen to test the performance of the IGWO algorithm: $f_1$ and $f_2$ are single-peaked functions, $f_3$ and $f_4$ are multipeaked functions, and $f_5$ and $f_6$ are fixed-dimension multipeaked functions, as shown in Table 2 and Figure 5. The IGWO algorithm is compared with the PSO algorithm, the SSA algorithm, and the traditional GWO algorithm to better demonstrate its optimization performance. To ensure fairness and effectiveness, the maximum iteration number and the initial population size were uniformly set to 500 and 30, with $c_1 = 2$, $c_2 = 2$, $\omega = 0.75$ for the PSO algorithm; $q = 10$ for the IGWO algorithm; and $R_2 = 0.8$, $ST = 0.8$, $percent = 0.2$ for the SSA algorithm.
The average and standard deviation of 20 optimization runs are used as the criteria to reflect the optimization performance of the algorithms. Table 3 and Figure 6 show the optimization results and convergence curves. The results show that for the single-peaked functions $f_1$ and $f_2$, the mean of the IGWO algorithm is closer to the true optimal solution and the algorithm converges faster than the PSO, GWO, and SSA algorithms. For the multipeaked functions $f_3$ and $f_4$, the IGWO algorithm shows good optimization performance, with no error between its results and the true optima, whereas all the other algorithms show some error. For the fixed-dimension multipeaked functions $f_5$ and $f_6$, the convergence results of all four algorithms are accurate. Therefore, the IGWO algorithm has better function optimization performance, convergence speed, and accuracy than the PSO, GWO, and SSA algorithms, which indicates that the enhancements in this article are effective.

4. Rotation Error Prediction Model Based on IGWO-SVR

4.1. SVR Model

Support vector regression is generalized from support vector machines and is applied to regression prediction problems [35]. For the problem of predicting the rotation error of the RV reducer, the cycloid gear isometric modification error ($\theta_1$), the radius error of the needle tooth center circle ($\theta_2$), the cycloid gear shift modification error ($\theta_3$), the needle tooth radius error ($\theta_4$), and the crankshaft eccentricity error ($\theta_5$) constitute the input variables, and the rotation error is the output variable. Thus, the samples can be formulated as follows:

$$x_i = [\theta_1, \theta_2, \theta_3, \theta_4, \theta_5] \tag{15}$$

$$X = [x_1, x_2, x_3, x_4, \ldots, x_n]^T \tag{16}$$

$$Y = [Y_1, Y_2, Y_3, Y_4, \ldots, Y_n]^T \tag{17}$$

where $n$ denotes the number of samples, $x_i$ represents the input variables of one sample, $X$ is the array of input factor variables, and $Y$ is the array of actual rotation error values in the sample data.
It is assumed, based on the sample data, that a functional relationship exists between the rotation error and the five key manufacturing errors above. The SVR model is shown in Equation (18):
$$f(x) = \omega^T \varphi(x) + b \tag{18}$$

where $x$ is the input vector, $f(x)$ denotes the predicted value of the rotation error, $\varphi(x)$ is the mapping function, and $b$ and $\omega$ are the deviation term and the weight vector, respectively.
By introducing an insensitive loss function and minimizing the structural risk, Equation (18) can be transformed as follows:

$$\min \; \frac{1}{2} \|\omega\|^2 + C \sum_{i=1}^{n} (\xi_i^- + \xi_i^+) \tag{19}$$

$$\mathrm{s.t.} \begin{cases} Y_i - (\omega^T \varphi(x_i) + b) \le \varepsilon + \xi_i^+ \\ (\omega^T \varphi(x_i) + b) - Y_i \le \varepsilon + \xi_i^- \\ \xi_i^-, \xi_i^+ \ge 0, \quad i = 1, 2, \ldots, n \end{cases} \tag{20}$$

where $\|\omega\|^2$ is the regularization term, $\varepsilon$ is the parameter of the insensitive loss function, $\xi_i^-$ and $\xi_i^+$ are slack variables, $x_i$ is the input variable, $Y_i$ is the actual value of the rotation error, and $C$ is the penalty factor.
By introducing the Lagrange multipliers $\alpha_i^+, \alpha_i^-$ and a kernel function, Equations (19) and (20) can be transformed into their dual problem, as shown in Equations (21) and (22):

$$\max \left[ -\frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} (\alpha_i^+ - \alpha_i^-)(\alpha_j^+ - \alpha_j^-) K(x_i, x_j) - \varepsilon \sum_{i=1}^{n} (\alpha_i^+ + \alpha_i^-) + \sum_{i=1}^{n} (\alpha_i^+ - \alpha_i^-) Y_i \right] \tag{21}$$

$$\mathrm{s.t.} \begin{cases} \sum_{i=1}^{n} (\alpha_i^+ - \alpha_i^-) = 0 \\ 0 \le \alpha_i^+, \alpha_i^- \le C \end{cases} \tag{22}$$
where $\alpha_i^+$ and $\alpha_i^-$ are the Lagrange multipliers and $K(x, x_i)$ is the Gaussian radial basis kernel function, given by Equation (23):

$$K(x, x_i) = \exp\left( -\frac{\|x - x_i\|^2}{2\sigma^2} \right) = \exp(-g \|x - x_i\|^2) \tag{23}$$

where $\sigma$ denotes the width of the function and $g = 1/(2\sigma^2)$ is the kernel parameter.
Finally, through the Karush–Kuhn–Tucker conditions, the mathematical expression of the final SVR model is given in Equation (24):

$$f(x) = \sum_{i=1}^{n} (\alpha_i^+ - \alpha_i^-) K(x, x_i) + b \tag{24}$$
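For illustration, an RBF-kernel SVR of the form in Equations (18)–(24) can be fitted with scikit-learn (an assumed tooling choice; the paper states only that Python was used). The synthetic data and coefficients below are not from the paper:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in data: five manufacturing-error features -> rotation error.
# (The linear coefficients are arbitrary; they are illustration only.)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = X @ np.array([0.8, -0.5, 0.3, 0.2, 0.6]) + 0.01 * rng.normal(size=60)

# RBF-kernel SVR: C is the penalty factor of Eq. (19), gamma is the kernel
# parameter g of Eq. (23), and epsilon sets the width of the epsilon-tube.
model = SVR(kernel="rbf", C=10.0, gamma=0.1, epsilon=0.01)
model.fit(X[:50], y[:50])
pred = model.predict(X[50:])
```

The fitted predictor is exactly the kernel expansion of Equation (24), with the support vectors and dual coefficients found by the solver.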

4.2. Process of Building Rotation Error Prediction Model

The prediction of the rotation error is a typical high-dimensional nonlinear problem. In the SVR method, the kernel function maps the sample data from the low-dimensional space to a higher-dimensional space to solve the problem.
The prediction accuracy of the SVR-based rotation error model is significantly affected by $C$ and $g$, as shown in Equations (19) and (23). First, the penalty factor $C$ sets the penalty on sample points lying beyond the ±ε tube, and its magnitude affects the stability as well as the complexity of the SVR prediction model. Second, the kernel parameter $g$ reflects the correlation between the sample points beyond the ±ε tube. The larger the value of $g$, the stronger the correlation between these points, but the accuracy of the rotation error prediction model becomes difficult to guarantee. Conversely, the smaller the value of $g$, the looser the correlation between these points, resulting in a more complex SVR model with poorer generalization capacity. Therefore, choosing appropriate values of $C$ and $g$ can enhance the precision of the SVR model for predicting rotation errors. To this end, this article uses the abovementioned IGWO algorithm to find suitable values of $C$ and $g$. Figure 7 shows the prediction process of rotation errors based on IGWO-SVR, and the specific steps are as follows:
Step 1: Import five key manufacturing error factors X and rotation error Y into the prediction model as input and output data, perform normalization, and separate sample data into training and test sets.
Step 2: Set an optimization range of parameters about the SVR model, establish the rotation error prediction model based on SVR, and train the model using the training set data.
Step 3: Initialize the IGWO algorithm parameters, and generate the initial grey wolf population within the parameter range by following the OLHS method.
Step 4: Calculate the fitness of each grey wolf (the fitness value is the mean squared error) using the SVR model, and save the three wolves with the best fitness as wolfα, wolfβ, and wolfγ.
Step 5: Update the IGWO algorithm parameters and ω wolves, and then calculate the fitness.
Step 6: Compare the fitness of each ω wolf with the corresponding fitness values of wolfα, wolfβ, and wolfγ in turn, and if it is better, replace the corresponding leading wolf’s position with the ω wolf’s position.
Step 7: Judge whether the current iteration count has reached the maximum iteration count. If yes, output bestC and bestg; otherwise, return to Step 5.
Step 8: Replace C and g with bestC and bestg to establish the IGWO-SVR model of rotation errors.
Step 9: Test the rotation error prediction model with the test set data.
Step 10: Judge whether the rotation error prediction model’s forecast accuracy meets the assembly line requirement. If it meets the accuracy requirement, output the prediction results and save the prediction model; otherwise, go back to Step 3.
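The optimization loop of Steps 3–8 can be sketched as follows. This is a minimal *standard* GWO loop (it omits the paper's OLHS initialization, nonlinear convergence factor, and dynamic weights), and the SVR-based fitness is replaced by a stand-in function with a known minimum, so the sketch only illustrates the control flow, not the paper's exact algorithm:

```python
import random

def gwo_optimize(fitness, bounds, n_wolves=20, max_iter=100, seed=0):
    """Minimal standard GWO minimization loop over a box-constrained search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Step 3 (simplified): random initialization instead of the paper's OLHS
    wolves = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_wolves)]
    for t in range(max_iter):
        # Step 4: rank wolves by fitness; best three lead the pack
        wolves.sort(key=fitness)
        alpha, beta, gamma = (w[:] for w in wolves[:3])  # copy the three leaders
        a = 2 - 2 * t / max_iter  # linear convergence factor of the original GWO
        # Steps 5-6: move every wolf toward the three leaders
        for w in wolves:
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, gamma):
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a
                    C = 2 * r2
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                lo, hi = bounds[d]
                w[d] = min(max(x / 3.0, lo), hi)  # average of the three guides, clipped
    # Steps 7-8: after the last iteration, the best wolf holds (bestC, bestg)
    return min(wolves, key=fitness)

# Stand-in fitness: in the paper this would be the SVR model's MSE for (C, g);
# here a simple bowl with a made-up minimum at C = 10, g = 0.2 is used instead.
best = gwo_optimize(lambda p: (p[0] - 10) ** 2 + (p[1] - 0.2) ** 2,
                    bounds=[(0.01, 100), (0.01, 100)])
print(best)
```

Replacing the stand-in fitness with a function that trains an SVR on the training set and returns its MSE recovers the structure of the IGWO-SVR procedure described above.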

5. Result and Discussion

The prediction model in this paper was implemented in Python 3.8 on a Windows 10 system. The mean squared error (MSE), mean relative error (MRE), and mean absolute error (MAE) are selected as evaluation metrics to assess the prediction performance of the IGWO-SVR model.
MSE = \frac{1}{n}\sum_{i=1}^{n}\left(Y_i - Y_i^*\right)^2

MAE = \frac{1}{n}\sum_{i=1}^{n}\left|Y_i - Y_i^*\right|

MRE = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|Y_i - Y_i^*\right|}{Y_i}
where Y i denotes a real value of rotation error, Y i * denotes a predicted value of rotation error, and n is the number of test samples.
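The three metrics above translate directly into code; a minimal sketch, using made-up values and assuming every real value $Y_i$ is nonzero (as the MRE requires):

```python
def mse(y_true, y_pred):
    """Mean squared error of the predictions."""
    return sum((y - p) ** 2 for y, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error of the predictions."""
    return sum(abs(y - p) for y, p in zip(y_true, y_pred)) / len(y_true)

def mre(y_true, y_pred):
    """Mean relative error; assumes all true values are nonzero."""
    return sum(abs(y - p) / y for y, p in zip(y_true, y_pred)) / len(y_true)

# Toy check with made-up values (not data from the paper)
y, y_hat = [1.0, 2.0, 4.0], [1.1, 1.8, 4.4]
print(mse(y, y_hat), mae(y, y_hat), mre(y, y_hat))
```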

5.1. Preprocessing of Data

In this paper, 100 sets of data from the actual production process of an RV reducer manufacturer are selected as samples. Because the factors have different magnitudes, the sample data are normalized to eliminate the influence of magnitude differences among factors. The sample data are then divided into training and test samples in a ratio of 8:2. The normalization is given by Equation (28):
x_{new}^{i} = \frac{x_i - x_{min}}{x_{max} - x_{min}}
where $x_{new}^{i}$ denotes the normalized value, $x_i$ denotes the raw value, and $x_{max}$ and $x_{min}$ denote the maximum and minimum values, respectively. Some of the normalized data are shown in Table 4.
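Equation (28) corresponds to per-column min-max scaling; a minimal sketch:

```python
def min_max_normalize(values):
    """Min-max normalization of Equation (28): scales a list of raw values to [0, 1]."""
    x_min, x_max = min(values), max(values)
    return [(x - x_min) / (x_max - x_min) for x in values]

print(min_max_normalize([2.0, 4.0, 6.0]))  # -> [0.0, 0.5, 1.0]
```

In the paper's setting, this scaling would be applied column by column to each of the five error factors and to the rotation error; note it divides by zero if a column is constant, a case a production implementation would need to guard against.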

5.2. Optimization Results of Parameters

The parameters C and g of the SVR model need to be optimized, so the dimension of the search space in the IGWO algorithm is set to 2, and the range of both hyperparameters C and g is set to [0.01, 100]. Table 5 lists the parameter values of the algorithm.
The MSE is chosen as the fitness function, the fitness of each individual is computed during the iterative process, and the optimal parameters bestC and bestg are obtained. The IGWO algorithm yields bestC = 10.897 and bestg = 0.1918, with a corresponding optimal fitness value of 0.0258. The optimal and average fitness curves over the iterations are shown in Figure 8; the two curves essentially coincide after 20 iterations.

5.3. Analysis of Predictive Effect of the IGWO-SVR Model

In this paper, MSE, MAE, MRE, and calculation time are used as criteria to evaluate the prediction performance of the IGWO-SVR model; the closer the error metrics are to 0, the better the model's predictions. Figure 9 compares the predicted values with the actual values, and Figure 10 shows the relative error at each sample point.
Figure 9 shows that, on the 20 test samples, the MSE, MRE, MAE, and calculation time of the IGWO-SVR model are 0.026, 0.0784, 0.1195, and 7.843 s, respectively, indicating that the trained model achieves high prediction accuracy and a calculation time that meets the production beat (takt) requirement of the enterprise line. Figure 10 shows that the maximum relative error of the prediction model is 13.5%, within the enterprise's requirement that the error stay below 20%. In summary, although the model has some error, the overall prediction performance is good: it accurately expresses the nonlinear relationship between the rotation error and the dimensional errors of key parts, and its accuracy and calculation time meet the requirements of RV reducer enterprises.

5.4. Performance Evaluation of Model

In order to better evaluate the performance of the IGWO-SVR model, it is compared with the SVR models optimized by the PSO algorithm and the GWO algorithm, the existing BP neural network rotation error prediction method [6], and the SSA-BP neural network rotation error prediction method [7]. The parameters of different rotation error prediction models are shown in Table 6.
Figure 11 and Table 7 show the prediction results of the different rotation error prediction methods. From Table 7, the MSE of IGWO-SVR is 0.026, which is 27.37% lower than that of the PSO-SVR method, 28.57% lower than GWO-SVR, 78.53% lower than BP, and 28.37% lower than SSA-BP; the MRE of IGWO-SVR is 0.0784, which is 11.21%, 13.94%, 59.06%, and 37.68% lower than PSO-SVR, GWO-SVR, BP, and SSA-BP, respectively; and the MAE of IGWO-SVR is 0.1195, which is 10.75%, 12.65%, 57.46%, and 32.52% lower, respectively. The IGWO-SVR method is therefore better than the other methods in prediction accuracy. In terms of computation time, IGWO-SVR outperforms PSO-SVR and significantly outperforms BP and SSA-BP; although it is slightly slower than GWO-SVR, it still meets the beat requirement of the RV reducer production line. Considering both prediction accuracy and computational efficiency, the IGWO-SVR method has the best overall prediction performance.
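The percentage reductions quoted above follow directly from the MSE values in Table 7 and can be checked with a few lines of arithmetic:

```python
def reduction(baseline, value):
    """Relative reduction of `value` with respect to `baseline`, in percent."""
    return (baseline - value) / baseline * 100

# MSE values taken from Table 7
igwo_mse = 0.0260
for name, base in [("PSO-SVR", 0.0358), ("GWO-SVR", 0.0364),
                   ("BP", 0.1211), ("SSA-BP", 0.0363)]:
    print(f"{name}: {reduction(base, igwo_mse):.2f}% lower")
```

Running this reproduces the 27.37%, 28.57%, 78.53%, and 28.37% figures reported in the text; the MRE and MAE percentages follow from the same formula applied to the other Table 7 columns.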
Figure 12 and Figure 13 show the relative error for each sample point and maximum relative error of different prediction methods. According to Figure 12, the relative error curve of the IGWO-SVR method is less volatile and more stable than those of the other methods. In addition, according to Figure 13, the maximum relative error of IGWO-SVR is 13.5%, which is not only within the 20% range required by companies but also lower than other forecasting methods. The maximum relative errors of GWO-SVR and PSO-SVR are 19.7% and 19.5%, respectively, and they are only slightly below 20%, so they are not as reliable. Additionally, the maximum relative errors of the BP neural network and the SSA-BP neural network are 28.1% and 24%, respectively, which can no longer meet the requirements of the forecasting accuracy of enterprises. Therefore, the prediction results of the IGWO-SVR method for rotation errors are more reliable than the other methods.
In summary, the IGWO-SVR method not only has good performance in prediction accuracy and prediction efficiency but also has low volatility and good stability. Thus, it can more accurately express the nonlinear relationship between rotation errors and the dimensional errors of key parts. Additionally, it provides a more accurate quality prediction input model for the part-matching model of the RV reducer, which will be more conducive to improving assembly quality.

6. Conclusions

This study addresses the problem that traditional rotation error research methods cannot be applied to the parts-matching process because they are time-consuming and have poor calculation accuracy. This paper therefore proposes an SVR method combined with an improved grey wolf optimization algorithm to predict the rotation error with high accuracy and speed, and takes the RV-40E reducer as a case to verify the performance of the method. The main contents include the following parts:
(1)
The traditional GWO algorithm is enhanced with the OLHS method, a cosine nonlinear convergence factor, and a dynamic weight strategy. Verification shows that the resulting IGWO algorithm has good optimization performance.
(2)
The prediction model for the rotation error of the RV reducer based on IGWO-SVR is established by optimizing the C and g of the SVR with the IGWO algorithm. Its MSE is 0.026, running time is 7.843 s, and maximum relative error is 13.5%, which meets the enterprise's production beat and product quality requirements.
(3)
A comparison of the IGWO-SVR method with other methods shows that it provides better prediction performance and that the IGWO algorithm offers better parameter optimization performance.
The innovative contributions of this paper are threefold: (1) the optimization performance of the traditional grey wolf optimization algorithm is improved; (2) a new rotation error research method based on the IGWO-SVR model is proposed to overcome the disadvantages of existing methods; and (3) the method can guide the assembly of RV reducers through the parts-matching process, thus improving assembly quality and efficiency.
Although the IGWO-SVR method has good prediction performance for rotation errors, it also has some limitations. First, the rotation error is affected by many factors, but the method considers only five key factors in the second-stage cycloidal pin-wheel transmission. Second, the method is developed only for the RV-40E model, and its applicability to other models remains unverified. These issues will be addressed in future work to obtain a more general prediction model for rotation errors.

Author Contributions

Methodology, S.J.; software, M.C.; validation, S.J.; writing—original draft preparation, M.C.; writing—review and editing, M.C., Q.Q. and G.Z.; supervision, Y.W.; project administration, S.J.; data curation, Q.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National High-tech R&D Program of China (Grant No. 2015AA043002) and the Natural Science Foundation of Zhejiang Province (Grant No. LQ22E050017).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available, because they involve corporate data privacy.

Acknowledgments

The authors thank the Zhejiang Shuanghuan Transmission Machinery Co., Ltd. for providing the RV reducer sample and some data.

Conflicts of Interest

The authors declare no conflict of interest in the publication of this article.

References

1. Yin, Y. Research on On-line Monitoring and Rating of Transmission Accuracy of Industrial Robot RV Reducer. Master's Thesis, China University of Mining and Technology, Xuzhou, China, 2021.
2. Zhao, L.; Zhang, F.; Li, P.; Zhu, P.; Yang, X.; Jiang, W.; Xavior, A.; Cai, J.; You, L. Analysis on Dynamic Transmission Accuracy for RV Reducer. In MATEC Web of Conferences; EDP Sciences: Les Ulis, France, 2017; Volume 100.
3. Blanche, J.G.; Yang, D.C.H. Cycloid drives with machining tolerances. J. Mech. Transm. Autom. Des. 1989, 111, 337–344.
4. Ishida, T.; Wang, H.; Hidaka, T. A study on the rotational transmission error of K-H-V planetary gear devices using cycloid gears (2nd report: effects of various machining and assembly errors on rotational transmission error). Trans. Jpn. Soc. Mech. Eng. 1994, 60, 278–285.
5. Zhang, Y.H.; Chen, Z.; He, W.D. Virtual Prototype Simulation and Transmission Error Analysis for RV Reducer. Appl. Mech. Mater. 2015, 789–790, 226–230.
6. Tong, X.T. Research on Dynamic Transmission Error of RV Reducer Based on Virtual Prototype Technology. Master's Thesis, Zhejiang University of Technology, Hangzhou, China, 2019.
7. Sun, H.Z.; Yuan, H.B.; Yu, B. The RV reducer transmission error prediction based on SSA-BP. J. Mech. Transm. 2022, 46, 149–154.
8. Dai, W.; Zhang, C.Y.; Meng, L.L.; Li, J.H.; Xiao, P.F. A support vector machine milling cutter wear prediction model based on deep learning and feature post-processing. Comput. Integr. Manuf. Syst. 2022, 26, 2331–2343.
9. Balogun, A.L.; Rezaie, F.; Pham, Q.B.; Gigović, L.; Drobnjak, S.; Aina, Y.A.; Lee, S. Spatial prediction of landslide susceptibility in western Serbia using hybrid support vector regression (SVR) with GWO, BAT and COA algorithms. Geosci. Front. 2021, 12, 101104.
10. Zhang, Z.; Hong, W.C. Application of variational mode decomposition and chaotic grey wolf optimizer with support vector regression for forecasting electric loads. Knowl. Based Syst. 2021, 228, 107297.
11. Peng, S.; Zhang, Z.; Liu, E.; Liu, W.; Qiao, W. A new hybrid algorithm model for prediction of internal corrosion rate of multiphase pipeline. J. Nat. Gas Sci. Eng. 2021, 85, 103716.
12. Nguyen, H.; Choi, Y.; Bui, X.N.; Nguyen-Thoi, T. Predicting Blast-Induced Ground Vibration in Open-Pit Mines Using Vibration Sensors and Support Vector Regression-Based Optimization Algorithms. Sensors 2019, 20, 132.
13. Zhang, B.; Li, K.; Hu, Y.; Ji, K.; Han, B. Prediction of Backfill Strength Based on Support Vector Regression Improved by Grey Wolf Optimization. J. Shanghai Jiaotong Univ. (Sci.) 2022, 1–9. Available online: https://link.springer.com/article/10.1007/s12204-022-2408-7 (accessed on 3 November 2022).
14. Tang, J.; Liu, G.; Pan, Q. A review on representative swarm intelligence algorithms for solving optimization problems: Applications and trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643.
15. Liu, Z.F.; Li, L.L.; Liu, Y.W.; Liu, J.Q.; Li, H.Y.; Shen, Q. Dynamic economic emission dispatch considering renewable energy generation: A novel multi-objective optimization approach. Energy 2021, 235, 121407.
16. Li, L.L.; Liu, Z.F.; Tseng, M.L.; Zheng, S.J.; Lim, M.K. Improved tunicate swarm algorithm: Solving the dynamic economic emission dispatch problems. Appl. Soft Comput. 2021, 108, 107504.
17. Tubishat, M.; Ja'afar, S.; Alswaitti, M.; Mirjalili, S.; Idris, N.; Ismail, M.A.; Omar, M.S. Dynamic salp swarm algorithm for feature selection. Expert Syst. Appl. 2021, 164, 113873.
18. Zhang, H.; Liu, T.; Ye, X.; Heidari, A.A.; Liang, G.; Chen, H.; Pan, Z. Differential evolution-assisted salp swarm algorithm with chaotic structure for real-world problems. Eng. Comput. 2022, 1–35.
19. Yildiz, A.R.; Abderazek, H.; Mirjalili, S. A comparative study of recent non-traditional methods for mechanical design optimization. Arch. Comput. Method E. 2020, 27, 1031–1048.
20. Abderazek, H.; Hamza, F.; Yildiz, A.R.; Gao, L.; Sait, S.M. A comparative analysis of the queuing search algorithm, the sine-cosine algorithm, the ant lion algorithm to determine the optimal weight design problem of a spur gear drive system. Mater. Test. 2021, 63, 442–447.
21. Wang, Y.; Ni, C.; Fan, X.; Qian, Q.; Jin, S. Cellular differential evolutionary algorithm with double-stage external population-leading and its application. Eng. Comput. 2022, 38, 2101–2120.
22. Kamarzarrin, M.; Refan, M.H. Intelligent Sliding Mode Adaptive Controller Design for Wind Turbine Pitch Control System Using PSO-SVM in Presence of Disturbance. J. Control Autom. Electr. Syst. 2020, 31, 912–925.
23. Yang, Z.; Wang, Y.; Kong, C. Remaining Useful Life Prediction of Lithium-Ion Batteries Based on a Mixture of Ensemble Empirical Mode Decomposition and GWO-SVR Model. IEEE Trans. Instrum. Meas. 2021, 70, 1–11.
24. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
25. Zhang, X.F.; Wang, X.Y. A Survey of Grey Wolf Optimization Algorithms. Comput. Sci. 2022, 46, 30–38.
26. Zhao, X.; Ren, S.; Quan, H.; Gao, Q. Routing Protocol for Heterogeneous Wireless Sensor Networks Based on a Modified Grey Wolf Optimizer. Sensors 2020, 20, 820.
27. Jin, S.S.; Tong, X.T.; Wang, Y.L. Influencing Factors on Rotate Vector Reducer Dynamic Transmission Error. Int. J. Autom. Technol. 2019, 13, 545–556.
28. Liu, T.D.; Lu, M.; Shao, G.F.; Wang, R.Y. Transmission error modeling and optimization of robot reducer. Control Theory Appl. 2022, 37, 215–221.
29. Lei, B.; Jin, Y.T.; Liu, H.L. Job Scheduling for Cross-layer Shuttle Vehicle Storage System with FJSP Problem. Comput. Integr. Manuf. Syst. 2022, 1, 14.
30. Johnson, M.E.; Leslie, M.M.; Donald, Y. Minimax and maximin distance designs. J. Stat. Plan. Infer. 1990, 26, 131–148.
31. Morris, M.D.; Toby, J.M. Exploratory designs for computational experiments. J. Stat. Plan. Infer. 1995, 43, 381–402.
32. Lin, L.; Chen, F.J.; Xie, J.L.; Li, F. Online public opinion prediction based on improved grey wolf optimized support vector regression. Syst. Eng. Theory Pract. 2022, 42, 487–498.
33. Hou, Y.; Gao, H.; Wang, Z.; Du, C. Improved Grey Wolf Optimization Algorithm and Application. Sensors 2022, 22, 3810.
34. Zhu, A.J.; Xu, C.P.; Li, Z.; Wu, J.; Liu, Z.B. Hybridizing grey wolf optimization with differential evolution for global optimization and test scheduling for 3D stacked SoC. J. Syst. Eng. Electr. 2015, 26, 317–328.
35. Yan, J.W.; Zhong, X.H.; Fan, Y.; Guo, S.M. Residual life prediction of high power semiconductor lasers based on cluster sampling and support vector regression model. China Mech. Eng. 2022, 32, 1523–1529.
Figure 1. Structural schematic diagram of RV reducer. 1. Input shaft; 2. Planetary wheel; 3. Crankshaft; 4. Cycloid wheel; 5. Needle tooth shell; 6. Flange; 7. Planet carrier; 8. Needle gear.
Figure 2. Schematic diagram of grey wolf location update.
Figure 3. Comparison between generating point of OLHS and random generating point.
Figure 4. The decreasing process of the convergence factor.
Figure 5. Three-dimensional graphs of functions.
Figure 6. The convergence curves of algorithms on test functions. (a) f1 function; (b) f2 function; (c) f3 function; (d) f4 function; (e) f5 function; (f) f6 function.
Figure 7. Flowchart of the IGWO-SVR method.
Figure 8. Fitness curves of different algorithm models.
Figure 9. Prediction results of the IGWO-SVR model for rotation errors.
Figure 10. Relative error of the IGWO-SVR model for each individual sample point.
Figure 11. Prediction results of rotation error for different models.
Figure 12. Relative errors of different models for rotation error prediction.
Figure 13. Maximum relative error of different prediction methods.
Table 1. Sensitivity index of manufacturing errors of key components.

Manufacturing Errors of Key Components | Index of Sensitivity | Weight %
Cycloid gear isometric modification error (θ1) | 1.6131 | 23.040
Radius error of needle tooth center circle (θ2) | 1.102 | 15.746
Cycloid gear shift modification error (θ3) | −1.1024 | 15.746
Needle tooth radius error (θ4) | −0.8065 | 11.519
Crankshaft eccentricity error (θ5) | 0.00007 | 0.001
Accumulated pitch error of cycloidal gear (θ6) | −0.589 | 8.410
Needle hole circumferential position error (θ7) | 0.587 | 8.341
Cycloid ring gear radial runout error (θ8) | 0.201 | 2.871
Crank-bearing clearance (θ9) | 1.000 | 14.283
Table 2. Test functions.

Test Function | Dimension | Range | Min
f_1 = \sum_{i=1}^{n} x_i^2 | 30 | [−100, 100] | 0
f_2 = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i| | 30 | [−10, 10] | 0
f_3 = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right] | 30 | [−5.12, 5.12] | 0
f_4 = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1 | 30 | [−600, 600] | 0
f_5 = 4x_1^2 - 2.1x_1^4 + \frac{x_1^6}{3} + x_1 x_2 - 4x_2^2 + 4x_2^4 | 2 | [−5, 5] | −1.0316
f_6 = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10 | 2 | [−5, 5] | 0.3979
Table 3. Algorithm optimization results.

Function | Algorithm | Average | St. dev
f1 | PSO | 3.73 × 10⁻¹² | 5.45 × 10⁻¹²
f1 | GWO | 3.88 × 10⁻⁴⁸ | 6.79 × 10⁻⁴⁸
f1 | SSA | 2.76 × 10⁻⁷ | 6.27 × 10⁻⁷
f1 | IGWO | 1.69 × 10⁻⁷⁷ | 1.97 × 10⁻⁷⁸
f2 | PSO | 1.59 × 10⁻³ | 1.84 × 10⁻²
f2 | GWO | 8.65 × 10⁻⁴⁵ | 5.89 × 10⁻⁴⁴
f2 | SSA | 5.54 × 10⁻⁶ | 1.59 × 10⁻⁵
f2 | IGWO | 4.07 × 10⁻⁵⁶ | 1.43 × 10⁻⁵⁸
f3 | PSO | 3.67 × 10⁻² | 5.32 × 10⁻²
f3 | GWO | 5.44 × 10⁻¹⁵ | 1.09 × 10⁻¹⁶
f3 | SSA | 7.98 × 10⁻⁶ | 2.06 × 10⁻⁵
f3 | IGWO | 0 | 0
f4 | PSO | 0.0098 | 0.0105
f4 | GWO | 0.0025 | 0.0189
f4 | SSA | 3.75 × 10⁻⁸ | 9.42 × 10⁻⁸
f4 | IGWO | 0 | 0
f5 | PSO | −1.0316 | 4.66 × 10⁻⁸
f5 | GWO | −1.0316 | 7.77 × 10⁻⁸
f5 | SSA | −1.0316 | 7.54 × 10⁻⁵
f5 | IGWO | −1.0316 | 3.57 × 10⁻⁸
f6 | PSO | 0.3979 | 5.29 × 10⁻⁷
f6 | GWO | 0.3979 | 1.82 × 10⁻⁸
f6 | SSA | 0.3979 | 1.97 × 10⁻⁴
f6 | IGWO | 0.3979 | 1.83 × 10⁻⁸
Table 4. Partially normalized sample data.

Sample | θ1 | θ2 | θ3 | θ4 | θ5 | Rotation Error (Y)
1 | 0.156 | 0.500 | 0.903 | 0.850 | 0.800 | 1.133
2 | 0.250 | 0.350 | 0.288 | 0.350 | 0.400 | 1.231
3 | 0.750 | 0.650 | 0.711 | 0.350 | 0.600 | 1.938
4 | 0.750 | 0.500 | 0.288 | 0.500 | 0.400 | 1.452
5 | 0.500 | 0.650 | 0.288 | 0.650 | 0.400 | 1.564
6 | 0.843 | 0.150 | 0.903 | 0.850 | 0.500 | 1.272
7 | 0.500 | 0.150 | 0.903 | 0.500 | 0.800 | 1.464
8 | 0.843 | 0.850 | 0.500 | 0.150 | 0.800 | 1.473
9 | 0.312 | 0.500 | 0.807 | 0.750 | 0.700 | 1.190
10 | 0.500 | 0.250 | 0.500 | 0.500 | 0.700 | 1.240
Table 5. Parameter values of the IGWO algorithm.

Number of Optimized Parameters | Optimization Range | Number of Wolves | Maximum Iterations | Mode Norm of Space
2 | [0.01, 100] | 20 | 100 | 10
Table 6. Parameter values for different rotation error prediction models.

Model | Parameter | Value
BP neural network | Learning rate | 0.01
BP neural network | Optimizer | Stochastic gradient descent
SSA-BP neural network | Learning rate | 0.01
SSA-BP neural network | Optimizer | Stochastic gradient descent
IGWO-SVR | bestC | 10.897
IGWO-SVR | bestg | 0.1918
GWO-SVR | bestC | 1.275
GWO-SVR | bestg | 6.183
PSO-SVR | bestC | 1.059
PSO-SVR | bestg | 7.532
Table 7. Prediction results of rotation error for different models.

Prediction Model | MSE | MRE | MAE | Time Duration/s
IGWO-SVR | 0.0260 | 0.0784 | 0.1195 | 7.843
PSO-SVR | 0.0358 | 0.0883 | 0.1339 | 8.926
GWO-SVR | 0.0364 | 0.0911 | 0.1368 | 6.542
BP neural network | 0.1211 | 0.1915 | 0.2809 | 10.508
SSA-BP neural network | 0.0363 | 0.1258 | 0.1771 | 11.851
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Jin, S.; Cao, M.; Qian, Q.; Zhang, G.; Wang, Y. Study on an Assembly Prediction Method of RV Reducer Based on IGWO Algorithm and SVR Model. Sensors 2023, 23, 366. https://doi.org/10.3390/s23010366
