Article

High Precision Compensation for a Total Reflection Prism Laser Gyro Bias in Consideration of High Frequency Oscillator Voltage

1 School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
2 Shanghai Aerospace Control Technology Institute, Shanghai 201109, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(13), 2986; https://doi.org/10.3390/s19132986
Submission received: 26 April 2019 / Revised: 3 July 2019 / Accepted: 4 July 2019 / Published: 6 July 2019
(This article belongs to the Section Physical Sensors)

Abstract

Traditional compensation methods based on temperature-related parameters are not effective for the complex bias variation of the total reflection prism laser gyro (TRPLG). Because the high frequency oscillator voltage (UHFO) fundamentally affects the TRPLG bias and correlates with it more strongly than temperature does, introducing the UHFO into TRPLG bias compensation is worth evaluating. Considering the limitations of least squares (LS) regression and multivariate stepwise regression, we propose a TRPLG bias compensation method based on the iterative re-weighted least squares support vector machine (IR-LSSVM) and compare it with LS regression, stepwise regression, and the LSSVM algorithm in large temperature cycling experiments. When temperature, slope of temperature variation, and UHFO were selected as inputs, the IR-LSSVM based on the myriad weight function improved the TRPLG bias stability by up to 61.19% and eliminated the TRPLG bias drift. In addition, the UHFO proved to be the most important parameter in TRPLG bias compensation; accordingly, it can alleviate the shortcomings of traditional compensation based on temperature-related parameters and can greatly improve the TRPLG bias stability.

1. Introduction

Total reflection prism laser gyros (TRPLGs) are applied in several systems [1]. For example, the I42-1-C strapdown navigation system of the civilian aircraft IL-96-300 and TU-204 is based on the TRPLG. The development of total reflection prism cavities with high Q-factor, small polarization non-reciprocity, and low back-scattering increases TRPLG accuracy and opens new applications such as precise measuring systems for railways as well as oil and gas pipelines [2].
There are three major errors in conventional two-mode active laser gyros: mode locking, scale factor error, and null shift [3,4]. Mode locking is caused by a weak coupling mechanism between two otherwise independent traveling waves that causes backscattering from one wave into the other, which occurs mostly at the mirrors due to surface imperfections [5,6,7]. Some compensation methods have been developed to rectify this error [8,9,10,11]. Scale factor error refers to variations in the scale factor as a function of the rotation rate [12,13]. Null shift occurs when the frequency difference is non-zero for zero input rate and is the most significant error pertaining to accuracy. Because the bias of a ring laser gyro (RLG) is sensitive to temperature variations [14,15], it is important to establish an accurate temperature model of the RLG bias to reduce this effect [16].
At present, the traditional temperature compensation method for RLG bias is least squares (LS) regression [17], which is subject to certain restrictions because of the complex nonlinear relationships involved. A second-order multivariate stepwise regression model based on temperature, slope of temperature variation, and their cross terms can partially improve the compensation effect [18]. Neural network technology, a parallel distributed approach that overcomes the defects of traditional logic-symbol-based artificial intelligence in dealing with intuitive and unstructured information, has seen rapid advances in recent years. It has the characteristics of self-adaptation, self-organization, and real-time learning [19]. However, because it relies on empirical risk minimization, it easily converges to local rather than global optima. Therefore, Vapnik et al. developed the support vector machine (SVM), which is based on structural risk minimization [20]. Training an SVM is a quadratic optimization problem with a unique solution, making it optimal on a global scale. Moreover, the SVM has better generalization ability than neural networks. On account of these advantages, the SVM has been widely used in classification, regression, pattern recognition, and image processing [21]. However, the SVM algorithm is difficult to apply to large-scale training samples because storing and manipulating the kernel matrix requires considerable memory and computation time. Therefore, Suykens et al. proposed the least squares support vector machine (LSSVM), which starts from the loss function and converts the inequality constraints into equality constraints, thus transforming the optimization problem into the problem of solving a set of linear equations [22]. The computational complexity of the SVM is thereby avoided while high modeling accuracy is preserved. The LSSVM has been widely applied to temperature compensation of the RLG bias [23]. However, a single temperature compensation method has limited effect on complex RLG bias changes, so other output signals for RLG bias compensation are being studied [24]. The LSSVM also has two main disadvantages: it is sensitive to outliers and lacks sparseness, which limits the method when training large-scale problems. To overcome the former disadvantage, Suykens et al. proposed the weighted LSSVM (W-LSSVM) algorithm [25] and Bao et al. proposed the iterative re-weighted LSSVM (IR-LSSVM) algorithm [26,27]. In addition, some techniques utilize non-convex loss functions to improve robustness [28,29,30,31], known as robust LSSVM (R-LSSVM). To overcome the latter disadvantage, Chen et al. proposed a sparse R-LSSVM (SR-LSSVM) that achieves a sparse solution of the primal R-LSSVM after obtaining a low-rank approximation of the kernel matrix [32]. To date, these improved LSSVM-based algorithms have not been applied to RLG bias compensation.
The TRPLG differs from the traditional RLG in the structure and control principle of the resonant cavity [33,34]. Therefore, the factors causing the gyro bias are not exactly the same. In view of the particular characteristics of the TRPLG output, this paper does not adopt the latest regression algorithms but rather the more suitable IR-LSSVM algorithm, and the UHFO is also taken into account. This paper examines high precision compensation for the TRPLG bias in consideration of the UHFO. In Section 2, the TRPLG bias compensation model and the LSSVM and IR-LSSVM algorithms are described. The experimental configuration is presented in Section 3. In Section 4, large temperature cycling experiments verify the importance of the UHFO and the effectiveness of the proposed method. Finally, Section 5 contains the conclusions of the study.

2. Model and Algorithm of TRPLG Bias Compensation

2.1. TRPLG Parameters Used for Bias Compensation

A basic block diagram of the TRPLG is shown in Figure 1. The core component was a resonant microcrystalline glass cavity with fused silica glass prisms [2]. The X1 channel was filled with a helium-neon gas mixture in a certain proportion with a working wavelength of 0.6328 µm. The laser was generated under the action of the ignition transformer and the high frequency oscillator. X2 and X4 were vacuum channels, and the X3 channel was filled with dry air; the refractive index of the air was changed by switching on the heater to stabilize the resonant frequency of the cavity. The parts marked I, II, III, and IV were prisms, which were sealed by a protective cover to keep their surfaces clean. The control circuit consisted of a field programmable gate array (FPGA) and other peripheral circuits, which mainly included photoelectric detection and amplification, an analog to digital converter (ADC), a digital to analog converter (DAC), the high frequency oscillator, the ignition transformer, and temperature acquisition. The major functions completed by the FPGA included ignition control, light intensity control, frequency stabilization control, temperature reading control, pulse counting, and serial port control. The ignition control system provided the ignition pulse signal for the ignition transformer, and the light intensity control system generated the UHFO according to the DC light intensity to maintain its stability. The frequency stabilization control system generated the heater voltage based on the AC light intensity to maintain the stability of the resonator frequency. The temperature reading control read out the temperature, and the pulse counting system performed phase detection, high-speed sampling, and low-pass filtering. Finally, the useful information was read out through the serial port control system.
The main factors that affect the TRPLG bias are temperature, slope of temperature variation, and UHFO. Temperature variation changes the refractive index of the prisms, which changes the optical path and its length and eventually alters the TRPLG bias. The temperature acquisition circuit is shown in Figure 2. A PT100 sensor converted the temperature into a resistance, which the circuit converted into a voltage signal that the FPGA then acquired through the ADC. The temperature acquisition circuit consists of a bridge circuit, an operational amplifier, and a low-pass filter. The high frequency oscillator circuit is depicted in Figure 3. The UHFO served as the input to the circuit, while the output was a low-voltage 150 MHz AC signal acting on the upper and lower electrodes of the X1 channel. The UHFO therefore determined the amplitude of this output voltage and thereby affected the He-Ne gas excitation and the TRPLG bias.
The hardware structure of the TRPLG bias compensation is shown in Figure 4. Temperature, slope of temperature variation, UHFO, and TRPLG output were fed to the microprocessor through the serial port, and the microprocessor compensated for the TRPLG bias using the TRPLG bias compensation algorithm. In this study, the IR-LSSVM was used as the high precision compensation algorithm and was compared with LS regression, stepwise regression, and the LSSVM. In addition, the influence of the UHFO on TRPLG bias compensation was the focus of the study.
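The compensation step itself is simple once a bias model has been trained. The following minimal Python sketch, written only for illustration (the function and variable names are placeholders, not the authors' firmware), shows how a fitted bias model would be applied to each output sample:

```python
import numpy as np

def compensate_output(gyro_output, temperature, temp_slope, u_hfo, bias_model):
    """Subtract the bias predicted from the current parameters from the raw TRPLG output.

    bias_model is any fitted regressor (LS, stepwise, LSSVM, or IR-LSSVM)
    exposing a predict() method; all names here are illustrative placeholders.
    """
    features = np.array([[temperature, temp_slope, u_hfo]])
    predicted_bias = bias_model.predict(features)[0]
    return gyro_output - predicted_bias
```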

2.2. LSSVM for Nonlinear Function Regression

In the original weight space, the model is calculated as
$$y(x) = \omega^{T}\varphi(x) + b \tag{1}$$
with given training data $\{x_k, y_k\}_{k=1}^{N}$ and $\varphi(\cdot):\mathbb{R}^{n}\rightarrow\mathbb{R}^{n_h}$ a mapping into a high-dimensional feature space, which can be of infinite dimension but is only defined implicitly. Here $\omega$ and $b$ are the two unknown variables; in this nonlinear case, the vector $\omega$ can also become infinite dimensional. For the standard SVM, the optimization problem in the original weight space becomes [21]:
$$\min_{\omega,b,\xi,\xi^{*}} J_P(\omega,\xi,\xi^{*}) = \frac{1}{2}\omega^{T}\omega + c\sum_{k=1}^{N}(\xi_k + \xi_k^{*}) \quad \text{subject to} \quad \begin{cases} y_k - \omega^{T}\varphi(x_k) - b \le \varepsilon + \xi_k, & k = 1,\dots,N \\ \omega^{T}\varphi(x_k) + b - y_k \le \varepsilon + \xi_k^{*}, & k = 1,\dots,N \\ \xi_k, \xi_k^{*} \ge 0, & k = 1,\dots,N \end{cases} \tag{2}$$
where $\xi_k$ ($\xi_k^{*}$) represents the upper (lower) training error at data point $(x_k, y_k)$, $c$ is the penalty factor, and $\varepsilon$ is the coefficient of regression estimation accuracy.
Applying the Lagrangian and the optimality conditions yields the following dual problem:
$$\max_{\alpha,\alpha^{*}} J_D(\alpha,\alpha^{*}) = -\frac{1}{2}\sum_{k,l=1}^{N}(\alpha_k - \alpha_k^{*})(\alpha_l - \alpha_l^{*})K(x_k,x_l) - \varepsilon\sum_{k=1}^{N}(\alpha_k + \alpha_k^{*}) + \sum_{k=1}^{N}y_k(\alpha_k - \alpha_k^{*}) \quad \text{subject to} \quad \begin{cases} \sum_{k=1}^{N}(\alpha_k - \alpha_k^{*}) = 0 \\ \alpha_k, \alpha_k^{*} \in [0, c] \end{cases} \tag{3}$$
The kernel trick has been applied here: $K(x_k, x_l) = \varphi(x_k)^{T}\varphi(x_l)$ for $k, l = 1,\dots,N$. The dual representation of the model becomes:
$$y(x) = \sum_{k=1}^{N}(\alpha_k - \alpha_k^{*})K(x, x_k) + b \tag{4}$$
where $\alpha_k, \alpha_k^{*}$ are the solution of the quadratic programming (QP) problem. The bias term $b$ follows from the complementary Karush-Kuhn-Tucker (KKT) conditions. The solution to the QP problem is global and unique provided that the chosen kernel function is positive definite.
With the standard SVM as above, the number of training samples has a significant influence on the scale of the matrices in the QP problem, resulting in an excessive amount of calculation and a low calculation speed. Therefore, the extended LSSVM algorithm is studied, in which equality constraints are used instead of the inequality constraints. The optimization problem described in Equation (2) then becomes:
$$\min_{\omega,b,\xi} J_P(\omega,\xi) = \frac{1}{2}\omega^{T}\omega + c\,\frac{1}{2}\sum_{k=1}^{N}\xi_k^{2} \quad \text{subject to} \quad y_k = \omega^{T}\varphi(x_k) + b + \xi_k, \quad k = 1,\dots,N \tag{5}$$
This is simply a ridge regression cost function formulated in the feature space. However, when $\omega$ becomes infinite dimensional, the primal problem cannot be solved directly. Therefore, we again construct the Lagrangian and derive the dual problem.
One constructs the Lagrangian as:
$$L(\omega,b,\xi;\alpha) = J_P(\omega,\xi) - \sum_{k=1}^{N}\alpha_k\{\omega^{T}\varphi(x_k) + b + \xi_k - y_k\} \tag{6}$$
where $\alpha_k$ are the Lagrange multipliers. The conditions for optimality are given by:
$$\begin{cases} \dfrac{\partial L}{\partial \omega} = 0 \;\Rightarrow\; \omega = \sum_{k=1}^{N}\alpha_k\varphi(x_k) \\ \dfrac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{k=1}^{N}\alpha_k = 0 \\ \dfrac{\partial L}{\partial \xi_k} = 0 \;\Rightarrow\; \alpha_k = c\,\xi_k, \quad k = 1,\dots,N \\ \dfrac{\partial L}{\partial \alpha_k} = 0 \;\Rightarrow\; \omega^{T}\varphi(x_k) + b + \xi_k - y_k = 0, \quad k = 1,\dots,N \end{cases} \tag{7}$$
After elimination of the variables $\omega$ and $\xi$ one obtains the following solution:
$$\begin{bmatrix} 0 & 1 & \cdots & 1 \\ 1 & K(x_1,x_1)+1/c & \cdots & K(x_1,x_N) \\ \vdots & \vdots & \ddots & \vdots \\ 1 & K(x_N,x_1) & \cdots & K(x_N,x_N)+1/c \end{bmatrix} \begin{bmatrix} b \\ \alpha_1 \\ \vdots \\ \alpha_N \end{bmatrix} = \begin{bmatrix} 0 \\ y_1 \\ \vdots \\ y_N \end{bmatrix} \tag{8}$$
The kernel trick is applied here as follows:
$$K(x_k, x_l) = \varphi(x_k)^{T}\varphi(x_l), \quad k, l = 1,\dots,N \tag{9}$$
The resulting LSSVM model for function estimation then becomes [21]
$$y(x) = \sum_{k=1}^{N}\alpha_k K(x, x_k) + b \tag{10}$$
In accordance with Mercer's condition, many kinds of kernel functions $K(x, x_k)$ satisfy $K(x_k, x_l) = \varphi(x_k)^{T}\varphi(x_l)$. In this study, we used a Gaussian kernel:
$$K(x, x_k) = \exp\left(-\left\|x - x_k\right\|^{2} / 2\sigma^{2}\right) \tag{11}$$
where $\sigma$ is the kernel width. The regression problem thus has only two additional tuning parameters $(c, \sigma^{2})$ in the LSSVM. Because the quadratic optimization is transformed into the solution of a set of linear equations, the computational efficiency is greatly improved.
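To illustrate how compact this training step is, the following Python sketch builds the Gaussian kernel matrix, solves the linear system of Equation (8), and evaluates Equation (10). It is a minimal reference implementation written for this text under the assumptions above, not the authors' code.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma2):
    # K(x, x_k) = exp(-||x - x_k||^2 / (2 * sigma^2)), Equation (11)
    d2 = (np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-d2 / (2.0 * sigma2))

def lssvm_train(X, y, c, sigma2):
    """Solve the LSSVM linear system of Equation (8) for b and alpha."""
    N = len(y)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = gaussian_kernel(X, X, sigma2) + np.eye(N) / c
    solution = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return solution[0], solution[1:]            # b, alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma2):
    # y(x) = sum_k alpha_k * K(x, x_k) + b, Equation (10)
    return gaussian_kernel(X_new, X_train, sigma2) @ alpha + b
```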

2.3. Regression by IR-LSSVM

To obtain a robust regression based on the LSSVM solution, one can weight the error variables $\xi_k = \alpha_k / c$ by weighting factors $v_k$. This leads to the optimization problem [25]:
$$\min_{\omega^{*},b^{*},\xi^{*}} J_P(\omega^{*},\xi^{*}) = \frac{1}{2}{\omega^{*}}^{T}\omega^{*} + c\,\frac{1}{2}\sum_{k=1}^{N}v_k\,{\xi_k^{*}}^{2} \quad \text{subject to} \quad y_k = {\omega^{*}}^{T}\varphi(x_k) + b^{*} + \xi_k^{*}, \quad k = 1,\dots,N \tag{12}$$
The Lagrangian becomes:
$$L(\omega^{*},b^{*},\xi^{*};\alpha^{*}) = J_P(\omega^{*},\xi^{*}) - \sum_{k=1}^{N}\alpha_k^{*}\{{\omega^{*}}^{T}\varphi(x_k) + b^{*} + \xi_k^{*} - y_k\} \tag{13}$$
The unknown variables of this weighted LSSVM problem are denoted by the $*$ symbol. From the conditions for optimality and the elimination of $\omega^{*}$, $\xi^{*}$ one obtains the KKT system
$$\begin{bmatrix} 0 & 1 & \cdots & 1 \\ 1 & K(x_1,x_1)+1/(c v_1) & \cdots & K(x_1,x_N) \\ \vdots & \vdots & \ddots & \vdots \\ 1 & K(x_N,x_1) & \cdots & K(x_N,x_N)+1/(c v_N) \end{bmatrix} \begin{bmatrix} b^{*} \\ \alpha_1^{*} \\ \vdots \\ \alpha_N^{*} \end{bmatrix} = \begin{bmatrix} 0 \\ y_1 \\ \vdots \\ y_N \end{bmatrix} \tag{14}$$
The weights $v_k$ are chosen based on the error variables $\xi_k = \alpha_k / c$ from the (unweighted) LSSVM case (8). Robust regression is obtained with the Huber, Hampel, Logistic, and Myriad weight functions.
Huber weight function is
$$v_k = \begin{cases} 1 & \text{if } |\xi_k / \hat{s}| < \beta \\ \dfrac{\beta}{|\xi_k / \hat{s}|} & \text{if } |\xi_k / \hat{s}| \ge \beta \end{cases} \tag{15}$$
Hampel weight function is
$$v_k = \begin{cases} 1 & \text{if } |\xi_k / \hat{s}| < b_1 \\ \dfrac{b_2 - |\xi_k / \hat{s}|}{b_2 - b_1} & \text{if } b_1 \le |\xi_k / \hat{s}| \le b_2 \\ 0 & \text{if } |\xi_k / \hat{s}| > b_2 \end{cases} \tag{16}$$
Logistic weight function is
$$v_k = \frac{\tanh(\xi_k / \hat{s})}{\xi_k / \hat{s}} \tag{17}$$
Myriad weight function is
$$v_k = \frac{\delta^{2}}{\delta^{2} + (\xi_k / \hat{s})^{2}} \tag{18}$$
where $\hat{s}$ is a robust estimate of the standard deviation of the LSSVM error variables $\xi_k$:
$$\hat{s} = \frac{IQR}{2 \times 0.6745} \tag{19}$$
where $IQR$ is the interquartile range, that is, the difference between the values at the 75th and 25th percentiles of the sorted errors. An alternative robust estimate is
$$\hat{s} = 1.483 \times \mathrm{MAD}(x_i) \tag{20}$$
where $\mathrm{MAD}(x_i)$ stands for the median absolute deviation. We use the robust version of cross-validation; three of the four weight functions contain parameters that have to be tuned. Cross-validation automatically tunes the parameters of the Huber and Myriad weight functions according to the best performance for these two weight functions. The two parameters of the Hampel weight function are set to $b_1 = 2.5$ and $b_2 = 3$.
The IR-LSSVM first obtains the distribution information of the errors from the training of the unweighted LSSVM, then sets the weights $v_k$ ($k = 1,\dots,N$) so as to minimize $F(x)$, and finally trains the weighted LSSVM again [26].
The IR-LSSVM algorithm is as follows (a minimal sketch of the procedure is given after this list):
  • Given training data $\{x_k, y_k\}_{k=1}^{N}$, find an optimal $(c, \sigma^{2})$ combination (by ten-fold cross-validation or generalization bounds) by solving system (8).
  • For the optimal $(c, \sigma^{2})$ combination, compute $\xi_k = \alpha_k / c$ from (8).
  • Compute $\hat{s}$ from the $\xi_k$ distribution.
  • Determine the weights $v_k$ based on $\xi_k$ and $\hat{s}$, selecting a suitable weight function from (15)–(18).
  • Solve the weighted LSSVM (14), giving the model $y(x) = \sum_{k=1}^{N}\alpha_k^{*} K(x, x_k) + b^{*}$.
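The Python sketch below, which reuses gaussian_kernel from the LSSVM sketch in Section 2.2, illustrates this iterative re-weighting with the Myriad weight function of Equation (18). The values of delta and the iteration count are illustrative placeholders; in this study they were tuned by robust cross-validation rather than fixed.

```python
import numpy as np

def irlssvm_train(X, y, c, sigma2, delta=1.0, n_iter=5):
    """Iteratively re-weighted LSSVM sketch using the Myriad weights of Equation (18)."""
    N = len(y)
    v = np.ones(N)                               # first pass is the unweighted LSSVM
    for _ in range(n_iter):
        # weighted KKT system (14): the diagonal regularization term becomes 1/(c * v_k)
        A = np.zeros((N + 1, N + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = gaussian_kernel(X, X, sigma2) + np.diag(1.0 / (c * v))
        solution = np.linalg.solve(A, np.concatenate(([0.0], y)))
        b, alpha = solution[0], solution[1:]
        xi = alpha / (c * v)                     # error variables of the current fit
        q75, q25 = np.percentile(xi, [75, 25])
        s_hat = (q75 - q25) / (2 * 0.6745)       # robust scale estimate, Equation (19)
        v = delta**2 / (delta**2 + (xi / s_hat)**2)   # Myriad weights, Equation (18)
    return b, alpha
```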

3. Experimental Configuration

The gyro used in the temperature experiment was the type 70 TRPLG produced by Xi’an North Jierui Optoelectronics Technology Ltd. (Xi’an, China), and is shown in Figure 5.
The large temperature cycle was set up as follows: (1) the TRPLG was fixed with bolts in the temperature chamber and tested for 2 h at 25 °C; (2) the temperature was then decreased at a rate of 1 °C/min to −40 °C and the gyro was tested for 6 h; (3) the temperature was increased at a rate of 1 °C/min to 70 °C and the gyro was tested for 10 h; (4) the temperature was decreased at a rate of 1 °C/min to 25 °C and the gyro was tested for 4 h. The entire cycle was performed twice, once for training and once for compensation.
The test software recorded the temperature, the slope of temperature variation (computed as the temperature difference per unit time), the UHFO, and the TRPLG output at intervals of 100 s. The relationship between the TRPLG output and the different parameters in the large temperature cycling experiments is shown in Figure 6. The TRPLG output exhibited a complex nonlinear relationship with the slope of temperature variation, which can greatly affect the performance of the TRPLG, and a good correlation with the temperature. Overall, the UHFO not only correlated well with the TRPLG output but also captured its local characteristics better. The correlation coefficients of these three parameters with the TRPLG output are shown in Table 1. The correlation between the UHFO and the TRPLG output was the strongest, indicating that TRPLG bias compensation using the UHFO should achieve a better result.
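The coefficients in Table 1 are ordinary (Pearson) correlation coefficients; a one-line numpy check of the kind that could reproduce them is sketched below, where the array names are placeholders for the logged test data rather than variables from the authors' software.

```python
import numpy as np

def correlation_with_output(parameter, gyro_output):
    # Pearson correlation coefficient between one logged parameter and the TRPLG output
    return np.corrcoef(parameter, gyro_output)[0, 1]

# e.g. correlation_with_output(temperature, trplg_output)  -> about 0.71 per Table 1
```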

4. Analysis and Discussion of Results

4.1. Bias Compensation Using the LS (Least Squares) Model

For further comparison, the traditional LS model of TRPLG bias is given by [16]
$$B = k_0 + k_1 X + k_2 X^{2} + k_3 X^{3} \tag{21}$$
where $B$ is the TRPLG bias, $X$ is a TRPLG parameter, and $k_i$ ($i = 0, 1, 2, 3$) are the LS fitting coefficients.
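For reference, such a cubic fit can be obtained with a few lines of numpy; the sketch below is an illustrative implementation of Equation (21) written for this text, not the authors' code.

```python
import numpy as np

def fit_ls_bias_model(X, B):
    """Fit B = k0 + k1*X + k2*X^2 + k3*X^3 (Equation (21)) by least squares."""
    k3, k2, k1, k0 = np.polyfit(X, B, deg=3)     # numpy returns the highest degree first
    return k0, k1, k2, k3

def predict_ls_bias(X, k0, k1, k2, k3):
    return k0 + k1 * X + k2 * X**2 + k3 * X**3
```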
The TRPLG bias can be obtained by subtracting the projection of the earth's rotation rate at the test latitude from the TRPLG output. The compensation results of the TRPLG bias using the LS model with different parameters are shown in Figure 7. TRPLG bias compensation using the slope of temperature variation improved little over the raw data when compared with the models using the other two parameters, and compensation using the UHFO showed a slightly better result than the model using temperature. Because the LS model is subject to certain restrictions due to the complex nonlinear relationships, the compensation result was not satisfactory. Table 2 lists the TRPLG bias stability for the original and compensated data based on the LS model using different parameters. The UHFO improved the TRPLG bias stability by 40.54%, the largest improvement, which was 7.87% higher than traditional temperature compensation.

4.2. Bias Compensation Using Stepwise Regression Model

To improve the regression effect, we introduced two multivariate stepwise regression models. Model 1 is a traditional second-order model based on temperature, slope of temperature variation, and their cross terms. Its expression is
$$B = k_0 + k_1 T + k_2 T^{2} + k_3 \frac{dT}{dt} + k_4 \left(\frac{dT}{dt}\right)^{2} + k_5\, T\frac{dT}{dt} \tag{22}$$
where $T$ is the temperature, $dT/dt$ is the slope of temperature variation, and $k_i$ ($i = 0, 1,\dots,5$) are regression coefficients.
This study used interactive stepwise regression analysis to find the optimal regression equation. First, the three terms included in the initial model were specified as $T$, $T^{2}$, and $dT/dt$. Then, the upper limit of the significance probability for entering the model was set to 0.05, so a variable whose significance-test p-value was less than 0.05 could be introduced into the model; the lower limit of the significance probability for removing a variable was set to 0.1, so a variable whose p-value was greater than 0.1 could be excluded from the model. Finally, the stepwise regression model is
$$B = 1.427 \times 10^{-3} + 4.906 \times 10^{-4}\, T - 4.616 \times 10^{-6}\, T^{2} - 0.917 \frac{dT}{dt} - 57.349 \left(\frac{dT}{dt}\right)^{2} - 1.926 \times 10^{-2}\, T\frac{dT}{dt} \tag{23}$$
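The interactive selection procedure just described (enter a term if its p-value is below 0.05, remove it if its p-value rises above 0.1) can be sketched as follows. This is an illustrative Python/statsmodels implementation written for this text; the term names and helper structure are placeholders rather than the software actually used in the study.

```python
import numpy as np
import statsmodels.api as sm

def stepwise_fit(candidates, B, initial, p_enter=0.05, p_remove=0.10):
    """Forward/backward stepwise selection over a dictionary of candidate terms.

    candidates maps term names (e.g. 'T', 'T^2', 'dT/dt*U') to 1-D arrays;
    initial lists the terms forced into the starting model.
    """
    selected = list(initial)
    changed = True
    while changed:
        changed = False
        # forward step: add the first remaining term that is significant
        for name in [n for n in candidates if n not in selected]:
            X = sm.add_constant(np.column_stack([candidates[n] for n in selected + [name]]))
            if sm.OLS(B, X).fit().pvalues[-1] < p_enter:
                selected.append(name)
                changed = True
                break
        # backward step: drop any term that has become insignificant
        X = sm.add_constant(np.column_stack([candidates[n] for n in selected]))
        pvalues = sm.OLS(B, X).fit().pvalues[1:]
        for name, p in zip(list(selected), pvalues):
            if p > p_remove:
                selected.remove(name)
                changed = True
    return selected
```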
The compensation result of Model 1 is shown in Figure 8a. The compensation effect was better than that of the LS model, and the fluctuation of the middle portion was reduced.
Given that the TRPLG output correlates most strongly with the UHFO (Table 1) and that the UHFO gave the best LS compensation result (Table 2), we established a second-order stepwise regression model based on temperature, slope of temperature variation, UHFO, and their cross terms. The expression of Model 2 is
$$B = k_0 + k_1 T + k_2 T^{2} + k_3 \frac{dT}{dt} + k_4 \left(\frac{dT}{dt}\right)^{2} + k_5 U + k_6 U^{2} + k_7\, T\frac{dT}{dt} + k_8\, T U + k_9 \frac{dT}{dt} U \tag{24}$$
where $U$ is the UHFO and $k_i$ ($i = 0, 1,\dots,9$) are regression coefficients.
The variables in Model 2 were introduced and removed using the same interactive stepwise regression analysis as in Model 1. Ultimately, the $(dT/dt)^{2}$ term was eliminated, and the resulting regression equation is
$$B = 1.565 + 8.292 \times 10^{-3}\, T + 5.529 \times 10^{-6}\, T^{2} + 26.263 \frac{dT}{dt} + 0.646\, U - 6.494 \times 10^{-2}\, U^{2} - 4.055 \times 10^{-2}\, T\frac{dT}{dt} - 1.562 \times 10^{-3}\, T U - 4.666 \frac{dT}{dt} U \tag{25}$$
The compensation result of Model 2 is shown in Figure 8b. Model 2 reduced the overall fluctuations when compared with Model 1 and improved the TRPLG bias stability by 54.49% (Table 3); that is, introducing the UHFO improved the TRPLG bias stability by 10.62% relative to the traditional stepwise regression model (Model 1).

4.3. Bias Compensation Using IR-LSSVM Model

The IR-LSSVM algorithm is described in Section 2.3. We used the four weight functions and took each single parameter in turn as the input, with the TRPLG bias as the output, to establish an IR-LSSVM model. The regression and compensation results are shown in Figure 9: the IR-LSSVM model using the UHFO compensated the TRPLG bias better than the models using either of the other two parameters. In comparison with the LS model, the IR-LSSVM model achieved better compensation results.
Table 4 lists the TRPLG bias stability for the compensated data based on the IR-LSSVM model using different single parameters. The IR-LSSVM based on the Huber weight function improved the TRPLG bias stability by 45.46%, the largest improvement; moreover, the improvement percentages using the UHFO as the input parameter were higher overall than those of the other parameters. The importance of the UHFO parameter was therefore verified again.
The IR-LSSVM model of TRPLG bias compensation using a single parameter cannot completely eliminate the TRPLG bias drift. To achieve high precision compensation, a bias composite compensation method based on the IR-LSSVM model was used. We selected different composite parameters as inputs and the TRPLG bias as the output; the regression and compensation results are shown in Figure 10. When two parameters were selected as inputs, most of the TRPLG bias drift was eliminated, with only local drift at the end of the data. When all parameters were selected as inputs, the TRPLG bias drift was essentially eliminated.
Table 5 lists the TRPLG bias stability for the compensated data based on the IR-LSSVM model using different composite parameters. When all parameters were selected as inputs, the Myriad weight function improved the TRPLG bias stability by 61.19%, the largest improvement, and the compensation effect was better than that of the stepwise regression method.
To clearly illustrate the significance of the UHFO, we compared Table 4 and Table 5. When temperature and UHFO were used as composite parameters, the Myriad weight function improved the TRPLG bias stability by 59.84%, which was 31.77% better than using temperature as the sole input. When slope of temperature variation and UHFO were used as composite parameters, the Huber weight function improved the TRPLG bias stability by 55.78%, which was 39.68% better than using only the slope of temperature variation as the input. In addition, when all parameters were selected as inputs, the TRPLG bias stability was 4.52% higher than when temperature and slope of temperature variation were used as inputs.
In summary, the UHFO is the most important parameter in the process of TRPLG bias compensation. It can make up for the shortcomings of traditional temperature compensation and can greatly improve the TRPLG bias stability.
We compared the compensation results of the IR-LSSVM model and the LSSVM model. The LSSVM compensation model is expressed by Equation (10). To obtain an LSSVM model with the RBF kernel, two extra tuning parameters are needed: the regularization parameter $c$, which determines the trade-off between training error minimization and smoothness of the estimated function, and the kernel parameter $\sigma^{2}$. In this study, we used the leave-one-out method to determine the tuning parameters, trained on the first set of sampled data, and produced the LSSVM model for the TRPLG bias compensation.
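A brute-force sketch of such leave-one-out tuning, reusing lssvm_train and lssvm_predict from the sketch in Section 2.2, is shown below; the grid of candidate values is an illustrative assumption, not the grid actually searched in this study.

```python
import numpy as np

def loo_error(X, y, c, sigma2):
    """Leave-one-out mean squared error of the LSSVM for one (c, sigma^2) pair."""
    errors = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        b, alpha = lssvm_train(X[mask], y[mask], c, sigma2)
        y_hat = lssvm_predict(X[mask], b, alpha, X[i:i + 1], sigma2)[0]
        errors.append((y[i] - y_hat) ** 2)
    return float(np.mean(errors))

def tune_lssvm(X, y, c_grid, sigma2_grid):
    """Return the (c, sigma^2) pair with the smallest leave-one-out error."""
    scores = {(c, s2): loo_error(X, y, c, s2) for c in c_grid for s2 in sigma2_grid}
    return min(scores, key=scores.get)
```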
The comparison of compensation results between the IR-LSSVM model and the LSSVM model is shown in Figure 11. With the LSSVM model, the TRPLG bias stability was improved by 59.03%, which was smaller than the compensation effect (61.19%) of the IR-LSSVM model. Therefore, the IR-LSSVM model that considered the UHFO parameter realized high precision compensation for TRPLG bias.

5. Conclusions

To eliminate the complex TRPLG bias variation and improve the TRPLG bias stability, we theoretically analyzed the factors affecting the TRPLG bias. The UHFO was introduced into TRPLG bias compensation alongside temperature and slope of temperature variation, and a comparative analysis was performed among LS regression, stepwise regression, and IR-LSSVM. The following conclusions can be drawn from the large temperature cycling experiments.
First, in comparison with LS regression, multivariate stepwise regression can reduce partial TRPLG bias drift.
Second, the IR-LSSVM based on composite parameters can overcome the poor nonlinear fitting ability of LS regression and stepwise regression.
Third, the UHFO proved to be the most important parameter for TRPLG bias compensation; it can make up for the shortcoming of traditional compensation based on temperature-related parameters and can greatly improve the TRPLG bias stability.
Fourth, when temperature, slope of temperature variation, and UHFO are used as inputs, the IR-LSSVM based on the Myriad weight function can completely eliminate the TRPLG bias drift and achieve high precision compensation.
In conclusion, the IR-LSSVM model proves to be accurate and reliable and has significant practical value in engineering.

Author Contributions

Conceptualization, Y.T. and S.L.; methodology, Y.T. and Q.F.; software, Y.T.; validation, Y.T., S.L. and Q.F.; formal analysis, Y.T. and F.W.; investigation, S.L. and Q.F.; resources, S.L.; data curation, J.Z.; writing—original draft preparation, Y.T.; writing—review and editing, J.Z. and F.W.; visualization, Y.T.; supervision, S.L.; project administration, Q.F.; funding acquisition, S.L.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities (grant number 3102019ZX041).

Acknowledgments

TRPLG and experimental equipment were supported by Xi’an North Jierui Optoelectronics Technology Ltd.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jiang, Q.; Wu, W.; Jiang, M.; Li, Y. A new filtering and smoothing algorithm for railway track surveying based on landmark and IMU/Odometer. Sensors 2017, 17, 1438. [Google Scholar] [CrossRef] [PubMed]
  2. Bakin, Y.V.; Ziouzev, G.N.; Lioudomirski, M.B. Laser Gyros with Total Reflection Prisms; Moscow Bauman State Technical University: Moscow, Russia, 2003; pp. 1–34. [Google Scholar]
  3. Chow, W.W. The ring laser gyro. Rev. Mod. Phys. 1985, 57, 61–104. [Google Scholar] [CrossRef]
  4. Menegozzi, L.N.; Lamb, W.E., Jr. Theory of a ring laser. Phys. Rev. A 1973, 8, 2103–2125. [Google Scholar]
  5. Andrews, D.A.; Roden, S.; King, T.A. A model for lock-in growth in ring laser gyroscopes. IEEE J. Quantum Electron. 1995, 31, 1709–1715. [Google Scholar] [CrossRef]
  6. Fan, Z.; Luo, H.; Lu, G.; Hu, S. Online effective backscattering estimation for ring laser gyro. Chin. Opt. Lett. 2012, 10, 13. [Google Scholar]
  7. Hurst, R.B.; Rabeendran, N.; Schreiber, K.U.; Wells, J.R. Correction of backscatter-induced systematic errors in ring laser gyroscopes. Appl. Opt. 2014, 53, 7610–7618. [Google Scholar] [CrossRef] [PubMed]
  8. Yu, H.; Wu, W.; Wu, M.; Feng, G.; Hao, M. Systematic angle random walk estimation of the constant rate biased ring laser gyro. Sensors 2013, 13, 2750–2762. [Google Scholar] [CrossRef]
  9. Fan, Z.; Luo, H.; Lu, G.; Hu, S. Dynamic lock-in compensation for mechanically dithered ring laser gyros. Chin. Opt. Lett. 2012, 10, 26. [Google Scholar]
  10. Song, S.W.; Lee, J.C.; Hong, S.K.; Chwa, D. New random walk reduction algorithm in ring laser gyroscopes. J. Opt. 2010, 12, 115501. [Google Scholar] [CrossRef]
  11. Fan, Z.; Luo, H.; Hu, S. Instantaneous phase method for readout signal processing of body dithered ring laser gyro. Appl. Opt. 2011, 50, 3455–3460. [Google Scholar] [CrossRef]
  12. Aronowitz, F.; Killpatrick, J.E.; Gallaghan, S.P. Power-dependent correction to the scale factor in the laser gyro. IEEE J. Quantum Electron. 1974, 10, 201–208. [Google Scholar] [CrossRef]
  13. Hurst, R.B.; Mayerbacher, M.; Gebauer, A.; Schreiber, K.U.; Wells, J.R. High-accuracy absolute rotation rate measurements with a large ring laser gyro: Establishing the scale factor. Appl. Opt. 2017, 56, 1124–1130. [Google Scholar] [CrossRef] [PubMed]
  14. Zhang, P.; Wang, Y.; Yu, X.; Wei, G.; Tang, J. Effect of temperature characteristic of light path on RLG’s bias. Infrared Laser Eng. 2011, 40, 2393–2397. [Google Scholar]
  15. Beghi, A.; Belfi, J.; Beverini, N.; Bouhadef, B.; Cuccato, D.; Virgilio, A.D.; Ortolan, A. Compensation of the laser parameter fluctuations in large ring-laser gyros: A Kalman filter approach. Appl. Opt. 2012, 51, 7518–7528. [Google Scholar] [CrossRef] [PubMed]
  16. Buchler, R.J.; Moeller, R.; Fann, S.W.; Tazartes, D.A.; Mark, J.G. Temperature Compensation Method for Strapdown Inertial Navigation System. U.S. Patent 6,175,807 B1, 16 January 2001. [Google Scholar]
  17. Guo, C.; Xu, Y.; Zhao, X. Investigation on the temperature compensation model for ring laser gyroscope. Chin. Opt. Lett. 2006, 4, 100576. [Google Scholar]
  18. Yu, X.; Wang, Y.; Wei, G.; Zhang, P.; Long, X. Novel temperature modeling and compensation method for bias of ring laser gyroscope based on least-squares support vector machine. Chin. Opt. Lett. 2011, 9, 051201. [Google Scholar]
  19. Ding, J.; Zhang, J.; Huang, W.; Chen, S. Laser gyro temperature compensation using modified RBFNN. Sensors 2014, 14, 18711–18727. [Google Scholar] [CrossRef] [PubMed]
  20. Vapnik, V.N. An overview of statistical learning theory. IEEE Trans. Neural Netw. 1999, 10, 988–999. [Google Scholar] [CrossRef]
  21. Vapnik, V.N. The Nature of Statistical Learning Theory; Spring: New York, NY, USA, 1999; pp. 988–999. [Google Scholar]
  22. Suykens, J.A.K.; Gestel, T.V.; Brabanter, J.D.; Moor, B.D.; Vandewalle, J. Least Squares Support Vector Machines; World Scientific: Singapore, 2002; pp. 29–116. [Google Scholar]
  23. Wei, G.; Li, G.; W, Y.; Long, X. Application of least squares-support vector machine in system-level temperature compensation of ring laser gyroscope. Measurement 2011, 44, 1898–1903. [Google Scholar] [CrossRef]
  24. Li, G.; Wang, F.; Xiao, G.; Wei, G.; Zhang, P.; Long, X. Temperature compensation method using readout signals of ring laser gyroscope. Opt. Express 2015, 23, 13320–13332. [Google Scholar] [CrossRef]
  25. Suykens, J.A.K.; Brabanter, J.D.; Lukas, L.; Vandewalle, J. Weighted least squares support vector machines: Robustness and sparse approximation. Neurocomputing 2002, 48, 85–105. [Google Scholar] [CrossRef]
  26. Bao, X.; Dai, L. Robust iterative algorithm of weighted least squares support vector machine and its application in spectral analysis. Acta Chim. Sin. 2009, 67, 1081–1086. [Google Scholar]
  27. Chen, C.; Yan, C.; Li, Y. A robust weighted least squares support vector regression based on least trimmed squares. Neurocomputing 2015, 168, 941–946. [Google Scholar] [CrossRef]
  28. Yang, X.; Tan, L.; He, L. A robust least squares support vector machine for regression and classification with noise. Neurocomputing 2014, 140, 41–52. [Google Scholar] [CrossRef]
  29. Wang, K.; Zhong, P. Robust non-convex least squares loss function for regression with outliers. Knowl. Based Syst. 2014, 71, 290–302. [Google Scholar] [CrossRef]
  30. Atashrouz, S.; Mirshekar, H.; Mohaddespour, A. A robust modeling approach to predict the surface tension of ionic liquids. J. Mol. Liq. 2017, 236, 344–357. [Google Scholar] [CrossRef]
  31. Wang, Y.; Yang, L.; Yuan, C. A robust outlier control framework for classification designed with family of homotopy loss function. Neural Netw. 2019, 112, 41–53. [Google Scholar] [CrossRef]
  32. Chen, L.; Zhou, S. Sparse algorithm for robust LSSVM in primal space. Neurocomputing 2018, 275, 2880–2891. [Google Scholar] [CrossRef]
  33. Yao, C.; Zeng, X.; Cao, C. Intensity properties of output light in prism laser gyro with mechanical dither bias. Acta Phys. Sin. 2012, 61, 094216. [Google Scholar]
  34. Yao, C.; Zeng, X.; Cao, C. Polarization properties in a prism laser gyro with mechanical dither bias. Chin. Phys. B 2012, 21, 124206. [Google Scholar] [CrossRef]
Figure 1. Basic block diagram of TRPLG (total reflection prism laser gyro).
Figure 2. Temperature acquisition circuit.
Figure 3. High frequency oscillator circuit.
Figure 4. The hardware structure of TRPLG bias compensation.
Figure 5. TRPLG used in the temperature experiment.
Figure 6. The relationship between the TRPLG output and different parameters: (a) Temperature; (b) Slope of temperature variation; (c) UHFO.
Figure 7. TRPLG bias compensation using different parameters based on LS model: (a) Results of regression; (b) Results after compensation.
Figure 8. TRPLG bias compensation based on stepwise regression model: (a) Results of Model 1; (b) Results of Model 2.
Figure 9. TRPLG bias compensation using different weight functions based on IR-LSSVM model: (a) Regression based on temperature; (b) Compensation based on temperature; (c) Regression based on slope of temperature variation; (d) Compensation based on slope of temperature variation; (e) Regression based on UHFO; (f) Compensation based on UHFO.
Figure 10. TRPLG bias compensation using different weight functions and composite parameters based on IR-LSSVM model: (a) Regression based on temperature and slope of temperature variation; (b) Compensation based on temperature and slope of temperature variation; (c) Regression based on temperature and UHFO; (d) Compensation based on temperature and UHFO; (e) Regression based on slope of temperature variation and UHFO; (f) Compensation based on slope of temperature variation and UHFO; (g) Regression based on all parameters; (h) Compensation based on all parameters.
Figure 11. TRPLG bias compensation based on LSSVM model and IR-LSSVM model: (a) Regression based on all parameters; (b) Compensation based on all parameters.
Table 1. Correlation coefficient between TRPLG output and parameters.

                                          | Temperature | Slope of Temperature Variation | UHFO
Correlation coefficient with TRPLG output | 0.71        | −0.43                          | −0.82
Table 2. TRPLG bias stability for compensated data based on the LS model using different parameters.

TRPLG bias stability (°/h) | Temperature | Slope of Temperature Variation | UHFO
Before compensation        | 0.01850     | 0.01850                        | 0.01850
After compensation         | 0.01194     | 0.01702                        | 0.01100
Improvement                | 35.46%      | 8.00%                          | 40.54%
Table 3. TRPLG bias stability for compensated data based on the two stepwise regression models.

TRPLG bias stability (°/h) | Model 1 | Model 2
Before compensation        | 0.01850 | 0.01850
After compensation         | 0.00942 | 0.00842
Improvement                | 49.08%  | 54.49%
Table 4. TRPLG bias stability for compensated data based on the IR-LSSVM model using different single parameters.

No. | Parameters | Weight Function | c          | σ²     | F(x) (×10⁻³) | Bias Stability (°/h) | Improvement
1   | T          | Huber           | 1.5958×10² | 0.0356 | 8.5410       | 0.01105              | 40.27%
    |            | Hampel          | 78.6103    | 0.4086 | 8.5280       | 0.01123              | 39.30%
    |            | Logistic        | 9.3665×10⁸ | 0.2138 | 8.4047       | 0.01089              | 41.14%
    |            | Myriad          | 1.9285×10⁴ | 0.0742 | 8.4990       | 0.01090              | 41.08%
2   | dT/dt      | Huber           | 8.2961×10³ | 7.2794 | 13.1121      | 0.01712              | 7.46%
    |            | Hampel          | 1.0207     | 0.0003 | 12.6074      | 0.01357              | 26.65%
    |            | Logistic        | 1.4421     | 0.0003 | 12.7182      | 0.01356              | 26.70%
    |            | Myriad          | 1.0537     | 3.8071 | 13.1177      | 0.01714              | 7.35%
3   | U          | Huber           | 0.2718     | 0.0044 | 8.1178       | 0.01009              | 45.46%
    |            | Hampel          | 0.1643     | 0.0065 | 8.1610       | 0.01024              | 44.65%
    |            | Logistic        | 0.2691     | 0.0070 | 8.1394       | 0.01021              | 44.81%
    |            | Myriad          | 0.1955     | 0.0060 | 8.1437       | 0.01015              | 45.14%
Table 5. TRPLG bias stability for compensated data based on the IR-LSSVM model using different composite parameters.

No. | Parameters   | Weight Function | c          | σ²     | F(x) (×10⁻³) | Bias Stability (°/h) | Improvement
1   | T, dT/dt     | Huber           | 1.2538×10³ | 0.2065 | 6.5456       | 0.00755              | 59.19%
    |              | Hampel          | 1.1991     | 0.0930 | 6.6019       | 0.00794              | 57.08%
    |              | Logistic        | 3.5476×10² | 0.1399 | 6.5949       | 0.00757              | 59.08%
    |              | Myriad          | 1.8752×10³ | 0.2041 | 6.5080       | 0.00752              | 59.35%
2   | T, U         | Huber           | 4.3846×10³ | 0.0390 | 7.0111       | 0.00789              | 57.35%
    |              | Hampel          | 79.8662    | 0.0297 | 7.1130       | 0.00823              | 55.51%
    |              | Logistic        | 6.2443×10² | 0.0304 | 7.0946       | 0.00807              | 56.38%
    |              | Myriad          | 8.4177     | 0.0045 | 7.1402       | 0.00743              | 59.84%
3   | dT/dt, U     | Huber           | 2.4444×10⁴ | 1.6025 | 6.7729       | 0.00818              | 55.78%
    |              | Hampel          | 0.7546     | 0.2386 | 6.8129       | 0.00837              | 54.76%
    |              | Logistic        | 1.9995     | 0.2144 | 6.8975       | 0.00832              | 55.03%
    |              | Myriad          | 1.4267     | 0.3799 | 6.9093       | 0.00838              | 54.70%
4   | T, dT/dt, U  | Huber           | 4.0580×10⁵ | 2.2799 | 6.4217       | 0.00740              | 60.00%
    |              | Hampel          | 26.3212    | 0.1263 | 6.4479       | 0.00733              | 60.38%
    |              | Logistic        | 3.4780×10⁴ | 1.0623 | 6.4218       | 0.00740              | 60.00%
    |              | Myriad          | 76.0796    | 0.1458 | 6.3711       | 0.00718              | 61.19%
    |              | Unweighted      | 1.3624×10³ | 0.9771 | 6.6791       | 0.00758              | 59.03%
