Article

A Vision Aided Initial Alignment Method of Strapdown Inertial Navigation Systems in Polar Regions

School of Marine Science and Technology, Northwestern Polytechnical University, 127 West Youyi Road, Xi’an 710072, China
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(13), 4691; https://doi.org/10.3390/s22134691
Submission received: 28 March 2022 / Revised: 11 June 2022 / Accepted: 13 June 2022 / Published: 21 June 2022
(This article belongs to the Section Navigation and Positioning)

Abstract

The failure of the traditional initial alignment algorithm for the strapdown inertial navigation system (SINS) at high latitudes is a significant challenge caused by the rapid convergence of the meridians toward the poles. This paper presents a novel vision-aided initial alignment method for the SINS of autonomous underwater vehicles (AUV) in polar regions. We redesign the initial alignment model by combining the inertial navigation mechanization equations in a transverse coordinate system (TCS) with visual measurement information obtained from a camera fixed on the vehicle. The observability of the proposed method is analyzed under different swing modes, and the extended Kalman filter is chosen as the information fusion algorithm. Simulation results show that the proposed method improves the accuracy of the initial alignment for SINS in polar regions, and that the deviation angle has similar estimation accuracy in the uniaxial, biaxial, and triaxial swing modes, which is consistent with the results of the observability analysis.

1. Introduction

Autonomous underwater vehicles (AUV) have always been an important tool for undertaking marine military tasks and developing marine resources, especially in polar resource exploration, and have attracted more and more attention from researchers around the world [1,2]. Due to the special geographical environment and geomagnetic characteristics of polar regions, common satellite navigation, radio navigation, and geomagnetic navigation cannot work effectively there for long periods [3,4,5]. Ionospheric scintillation often occurs at high latitudes, and the strong phase changes and amplitude fluctuations of the signal may interfere with the operation of global positioning system (GPS) receivers, which greatly affects the accuracy, reliability, and availability of GPS [6]. The inertial navigation system (INS) has high autonomy and is not affected by external factors such as climate and position, so it can continuously provide velocity and attitude information. Therefore, as one of the main components of the AUV navigation system, the INS has become the key for an AUV to complete its polar resource exploration mission [7].
However, accurate initial alignment of the navigation equipment must be completed before navigation starts; otherwise, navigation accuracy will suffer [8]. As latitude increases, the angle between the Earth's rotation angular velocity vector and the gravitational acceleration vector decreases until the two overlap. The strapdown inertial navigation system therefore cannot achieve self-alignment in the polar region [9,10,11], and the initial alignment must be completed by other means. To this end, the wander-azimuth navigation system [12], the grid navigation system [13,14,15], and the transverse navigation system [16,17] have been designed and developed, and initial alignment has been carried out on this basis. However, none of these navigation systems has global navigation capability. Transfer alignment based on a main inertial navigation system (MINS) is the principal method for SINS initial alignment in the polar region [18,19]. To realize polar initial alignment, a fast transfer alignment scheme with velocity and attitude matching is proposed in Ref. [20], which requires the aircraft to perform simple maneuvers. In Ref. [21], aiming at the initial alignment problem of ship inertial navigation in the polar region, an MINS transfer alignment model based on an inverse coordinate system is established. Based on the grid coordinate system, Ref. [22] estimates and corrects the inertial navigation errors of airborne weapons by matching velocity and attitude against the MINS information in a polar transfer alignment method. However, all these algorithms rely on the information of a main inertial navigation system and cannot be used to perform initial alignment on an AUV that carries no high-precision MINS. Their application is therefore greatly limited.
In recent years, vision-aided inertial navigation has attracted extensive attention. The low cost, low weight, and low power consumption of cameras make them an ideal aid to the inertial navigation system, with applications in indoor navigation [23], UAV navigation [24], and intelligent vehicle navigation [25]. After feature extraction, feature matching, tracking, and motion estimation on the image sequence, the attitude and orientation information is updated. The visual feature points and the constraints between two images are obtained by matching, so as to determine the motion of the device [26,27]. On this basis, vision and inertial navigation technology are gradually being integrated and have become a new research focus and development direction in the navigation field.
Aiming at the problem that the SINS of an AUV cannot self-align in the polar region, this paper proposes a vision-aided initial alignment method that combines vision with inertial navigation and takes the visually measured position and attitude as the observations. In practice, the SINS on an AUV can use known feature points for alignment during the day and stars at night. Because the method does not rely on other high-precision navigation equipment, it is suitable for complex situations.
The main work of this paper is as follows:
(1)
To solve the problem that the SINS cannot use the traditional mechanization in high-latitude areas, a transverse mechanization based on the transverse coordinate system (TCS) is established, which better meets the requirements of AUV initial alignment in polar areas.
(2)
An initial alignment method of SINS aided by visual measurement information is designed: SINS is combined with visual measurement, the state equation and measurement equation are updated by using the observations of the AUV's motion as measurement information, and an extended Kalman filter (EKF) is designed, yielding the vision-aided initial alignment method for SINS.
(3)
The observability of the proposed initial alignment method is analyzed, and an initial alignment simulation experiment on a swaying base is designed. The same alignment accuracy is obtained under different swinging modes.
The rest of this article is organized as follows: Section 2 describes the establishment of the TCS and the transverse mechanization. In Section 3, SINS is combined with visual measurement, and an EKF alignment method based on the visual model is proposed. Section 4 constructs the observability matrix and analyzes the observability of the algorithm. In Section 5, numerical simulations of the SINS in the polar region are carried out to verify the performance of the method. In Section 6, the pitch uniaxial swing scheme is used for experimental verification. Finally, conclusions and future work are summarized in Section 7.

2. Polar Initial Alignment Algorithm Based on the Transverse Coordinate System

2.1. Transversal Earth Coordinate System (TEF) and Transversal Geographic Coordinate System (TGF) Definition and Parameter Conversion

The transverse longitude and latitude of a sphere are defined by analogy with the geographic longitude and latitude, as shown in Figure 1. The traditional meridians are turned onto the equator, giving the pseudo-longitude and pseudo-latitude. The transverse Earth coordinate system, namely the $e_t$ frame, uses the 90° E/90° W meridian circle as the pseudo-equator, the 0°/180° meridian circle as the pseudo-prime meridian, and the intersections of the equator with the pseudo-prime meridian as the pseudo-poles. The east-north-up geographic coordinate system defined in the transverse Earth coordinate system is called the transverse geographic coordinate system, denoted the $g_t$ frame. Taking point P as an example, the direction tangent at P to the pseudo-meridian and pointing to the pseudo-North Pole is defined as pseudo-north ($oy_{g_t}$), the direction at P perpendicular to the local horizontal plane is defined as pseudo-up ($oz_{g_t}$), and the direction tangent at P to the pseudo-parallel circle is pseudo-east ($ox_{g_t}$). $ox_{g_t}$, $oy_{g_t}$, and $oz_{g_t}$ constitute a right-handed Cartesian coordinate system. The $ox_{g_t}y_{g_t}z_{g_t}$ coordinate system is the transverse geographic coordinate system defined by the transverse Earth coordinate system.
According to the definition of the TCS, the e-frame can be turned into the $e_t$-frame by rotating the e-frame around the axis $oy_e$ by 90°. Based on rotation theory, the conversion between the e-frame and the $e_t$-frame can be expressed as follows:

$$C_e^{e_t} = \begin{bmatrix} \cos 90^{\circ} & 0 & -\sin 90^{\circ} \\ 0 & 1 & 0 \\ \sin 90^{\circ} & 0 & \cos 90^{\circ} \end{bmatrix} = \begin{bmatrix} 0 & 0 & -1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}$$
Relying on the spherical right-triangle model, the transfer relationship of latitude and longitude between the g-frame and the $g_t$-frame can be defined as:

$$L_t = \arcsin\left(\cos L \cos\lambda\right)$$

$$\lambda_t = \arctan\left(\frac{\cos L \sin\lambda}{\sin L}\right)$$
In addition, the conversion between the $g_t$-frame and the g-frame can be expressed as follows:

$$C_g^{g_t} = C_{e_t}^{g_t}\, C_e^{e_t}\, C_g^{e}$$

where the transformation matrices from the e-frame to the g-frame and from the $e_t$-frame to the $g_t$-frame are:

$$C_e^{g} = \begin{bmatrix} -\sin\lambda & \cos\lambda & 0 \\ -\sin L \cos\lambda & -\sin L \sin\lambda & \cos L \\ \cos L \cos\lambda & \cos L \sin\lambda & \sin L \end{bmatrix}$$

$$C_{e_t}^{g_t} = \begin{bmatrix} -\sin\lambda_t & \cos\lambda_t & 0 \\ -\sin L_t \cos\lambda_t & -\sin L_t \sin\lambda_t & \cos L_t \\ \cos L_t \cos\lambda_t & \cos L_t \sin\lambda_t & \sin L_t \end{bmatrix}$$
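As a concrete illustration of the parameter conversion above, the following NumPy sketch (our own illustrative code, not from the paper; arctan2 is substituted for arctan to keep the correct quadrant) computes the transverse latitude/longitude and the transformation matrix $C_g^{g_t}$:

```python
import numpy as np

def geo_to_transverse(L, lam):
    """Geographic latitude/longitude (rad) -> transverse latitude/longitude (rad)."""
    Lt = np.arcsin(np.cos(L) * np.cos(lam))
    lam_t = np.arctan2(np.cos(L) * np.sin(lam), np.sin(L))
    return Lt, lam_t

def dcm_e_to_g(L, lam):
    """Direction cosine matrix C_e^g for an east-north-up geographic frame."""
    sL, cL = np.sin(L), np.cos(L)
    sl, cl = np.sin(lam), np.cos(lam)
    return np.array([[-sl,       cl,       0.0],
                     [-sL * cl, -sL * sl,  cL],
                     [ cL * cl,  cL * sl,  sL]])

# C_g^{g_t} = C_{e_t}^{g_t} C_e^{e_t} (C_e^g)^T
C_e_et = np.array([[0.0, 0.0, -1.0],
                   [0.0, 1.0,  0.0],
                   [1.0, 0.0,  0.0]])

L, lam = np.deg2rad(89.0), np.deg2rad(108.0)   # polar test point from Section 5
Lt, lam_t = geo_to_transverse(L, lam)
C_g_gt = dcm_e_to_g(Lt, lam_t) @ C_e_et @ dcm_e_to_g(L, lam).T
```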

2.2. The Mechanization Equations of Latitude and Longitude in the $e_t$-Frame

The transverse geographic frame ($g_t$-frame) is defined as the navigation coordinate system (the transverse $n_t$-frame), and its mechanization equations are similar to those of a north-pointing INS.
(1)
Attitude angle differential equation:
$$\dot{C}_b^{g_t} = C_b^{g_t}\, \Omega_{g_t b}^{b}$$

where $\Omega_{g_t b}^{b}$ is the skew-symmetric (cross-product) matrix of $\omega_{g_t b}^{b}$, and

$$\omega_{g_t b}^{b} = \omega_{ib}^{b} - C_{g_t}^{b}\, \omega_{i g_t}^{g_t}$$
(2)
Velocity differential equation:
The velocity equation of the vehicle in the transverse navigation frame is as follows:

$$\dot{\upsilon}^{g_t} = f^{g_t} - \left(2\omega_{ie}^{g_t} + \omega_{e g_t}^{g_t}\right) \times \upsilon^{g_t} + g^{g_t}$$

where $f^{g_t} = C_b^{g_t} f^{b}$, $g^{g_t} = C_g^{g_t} g^{g}$, and $g^{g} = \begin{bmatrix} 0 & 0 & -g \end{bmatrix}^T$.
(3)
Position differential equation:
The differential equations of the transverse latitude $L_t$, transverse longitude $\lambda_t$, and height h can be expressed as follows (a numerical sketch of this mechanization is given after this list):

$$\dot{L}_t = \frac{\upsilon_N^{t}}{R_{o_t} + h}, \qquad \dot{\lambda}_t = \frac{\upsilon_E^{t}}{\left(R_{o_t} + h\right)\cos L_t}, \qquad \dot{h} = \upsilon_U^{t}$$
(4)
Error equation of attitude:
In the case where the scale factor error and installation error of the SINS gyroscopes have been compensated, the vector form of the attitude error equation can be expressed as follows:

$$\dot{\phi}^{n_t} = \phi^{n_t} \times \omega_{i n_t}^{n_t} + \delta\omega_{i n_t}^{n_t} - C_b^{n_t}\, \delta\omega_{ib}^{b}$$
where the error $\delta\omega_{i n_t}^{n_t}$ of $\omega_{i n_t}^{n_t}$ with respect to the pseudo velocities $\upsilon_E^{t}, \upsilon_N^{t}, \upsilon_U^{t}$ and the transverse position $L_t, \lambda_t, h$ is:

$$\delta\omega_{i n_t}^{n_t} = \delta\omega_{i e_t}^{n_t} + \delta\omega_{e_t n_t}^{n_t}$$

$$\delta\omega_{i e_t}^{n_t} = \omega_{ie} \begin{bmatrix} 0 & \cos\lambda_t & 0 \\ \cos L_t \cos\lambda_t & -\sin L_t \sin\lambda_t & 0 \\ \sin L_t \cos\lambda_t & \cos L_t \sin\lambda_t & 0 \end{bmatrix} \begin{bmatrix} \delta L_t \\ \delta\lambda_t \\ \delta h \end{bmatrix}$$

$$\delta\omega_{e_t n_t}^{n_t} = \begin{bmatrix} 0 & -\dfrac{1}{R_{o_t}+h} & 0 \\ \dfrac{1}{R_{o_t}+h} & 0 & 0 \\ \dfrac{\tan L_t}{R_{o_t}+h} & 0 & 0 \end{bmatrix} \begin{bmatrix} \delta\upsilon_E^{t} \\ \delta\upsilon_N^{t} \\ \delta\upsilon_U^{t} \end{bmatrix} + \begin{bmatrix} 0 & 0 & \dfrac{\upsilon_N^{t}}{\left(R_{o_t}+h\right)^2} \\ 0 & 0 & -\dfrac{\upsilon_E^{t}}{\left(R_{o_t}+h\right)^2} \\ \dfrac{\upsilon_E^{t}\sec^2 L_t}{R_{o_t}+h} & 0 & -\dfrac{\upsilon_E^{t}\tan L_t}{\left(R_{o_t}+h\right)^2} \end{bmatrix} \begin{bmatrix} \delta L_t \\ \delta\lambda_t \\ \delta h \end{bmatrix}$$
The gyro error $\delta\omega_{ib}^{b}$ is composed of a random constant $\varepsilon^{b}$ and Gaussian white noise $\varepsilon_w$:

$$\delta\omega_{ib}^{b} = \varepsilon^{b} + \varepsilon_w, \qquad \dot{\varepsilon}^{b} = 0$$
(5)
Velocity error equation
In the case where the accelerometer scale factor error and installation error have been compensated, the vector form of the velocity error equation can be expressed as follows:

$$\delta\dot{\upsilon}^{n_t} = \phi^{n_t} \times C_b^{n_t} f^{b} + \delta\upsilon^{n_t} \times \left(2\omega_{i e_t}^{n_t} + \omega_{e_t n_t}^{n_t}\right) + \upsilon^{n_t} \times \left(2\delta\omega_{i e_t}^{n_t} + \delta\omega_{e_t n_t}^{n_t}\right) + C_b^{n_t}\, \delta f^{b}$$

Assume that the accelerometer error $\delta f^{b}$ consists of a random constant $\nabla^{b}$ and Gaussian white noise $\nabla_w$, i.e., $\delta f^{b} = \nabla^{b} + \nabla_w$, $\dot{\nabla}^{b} = 0$.
(6)
Position error equation
The component form of the position error equation is:

$$\begin{bmatrix} \delta\dot{L}_t \\ \delta\dot{\lambda}_t \\ \delta\dot{h} \end{bmatrix} = \begin{bmatrix} 0 & \dfrac{1}{R_{o_t}+h} & 0 \\ \dfrac{\sec L_t}{R_{o_t}+h} & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \delta\upsilon_E^{t} \\ \delta\upsilon_N^{t} \\ \delta\upsilon_U^{t} \end{bmatrix} + \begin{bmatrix} 0 & 0 & -\dfrac{\upsilon_N^{t}}{\left(R_{o_t}+h\right)^2} \\ \dfrac{\upsilon_E^{t}\tan L_t \sec L_t}{R_{o_t}+h} & 0 & -\dfrac{\upsilon_E^{t}\sec L_t}{\left(R_{o_t}+h\right)^2} \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \delta L_t \\ \delta\lambda_t \\ \delta h \end{bmatrix}$$
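The numerical sketch promised above shows one propagation step of the mechanization equations in items (1)-(3); it is a minimal illustration using simple Euler integration and hypothetical variable names, whereas a real implementation would use quaternion or rotation-vector attitude updates and periodic re-orthonormalization:

```python
import numpy as np

def skew(v):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def sins_step(C_b_gt, v_gt, Lt, lam_t, h,
              w_ib_b, f_b, w_igt_gt, w_ie_gt, w_egt_gt, g_gt, R_ot, dt):
    """One Euler step of the attitude, velocity, and position equations.
    Angular rates in rad/s, specific force in m/s^2."""
    # attitude: C_dot = C * skew(w_gtb_b), with w_gtb_b = w_ib_b - C^T w_igt_gt
    w_gtb_b = w_ib_b - C_b_gt.T @ w_igt_gt
    C_b_gt = C_b_gt + C_b_gt @ skew(w_gtb_b) * dt
    # velocity: v_dot = f - (2 w_ie + w_egt) x v + g
    f_gt = C_b_gt @ f_b
    v_gt = v_gt + (f_gt - np.cross(2.0 * w_ie_gt + w_egt_gt, v_gt) + g_gt) * dt
    # position: transverse latitude, longitude, and height
    Lt = Lt + v_gt[1] / (R_ot + h) * dt
    lam_t = lam_t + v_gt[0] / ((R_ot + h) * np.cos(Lt)) * dt
    h = h + v_gt[2] * dt
    return C_b_gt, v_gt, Lt, lam_t, h
```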

3. Visually Assisted SINS Initial Alignment Algorithm in the Polar Region

3.1. The Theoretical Basis of Visual Positioning

The system state variables of the filter can be expressed as follows:

$$X = \begin{bmatrix} \phi^{n_t} & \delta V^{n_t} & \delta T^{n_t} & \varepsilon^{b} & \nabla^{b} & \theta & \delta l_{bc}^{b} \end{bmatrix}^T$$

where $\phi^{n_t}$ is the misalignment angle of the SINS, $\delta V^{n_t}$ is the velocity error, $\delta T^{n_t}$ is the position error, $\varepsilon^{b}$ is the constant drift of the gyroscope, $\nabla^{b}$ is the constant bias of the accelerometer, $\theta$ is the mounting-angle error between the INS and the camera, and $\delta l_{bc}^{b}$ is the error caused by the lever-arm effect between the camera and the INS.
The state equation of the system can be expressed as follows:

$$\begin{cases} \dot{\phi}^{n_t} = \phi^{n_t} \times \omega_{i n_t}^{n_t} + \delta\omega_{i n_t}^{n_t} - C_b^{n_t}\, \delta\omega_{ib}^{b} \\ \delta\dot{\upsilon}^{n_t} = \phi^{n_t} \times C_b^{n_t} f^{b} + \delta\upsilon^{n_t} \times \left(2\omega_{i e_t}^{n_t} + \omega_{e_t n_t}^{n_t}\right) + \upsilon^{n_t} \times \left(2\delta\omega_{i e_t}^{n_t} + \delta\omega_{e_t n_t}^{n_t}\right) + C_b^{n_t}\, \delta f^{b} \\ \delta\dot{T}^{n_t} = \delta V^{n_t} \\ \dot{\varepsilon}^{b} = 0 \\ \dot{\nabla}^{b} = 0 \\ \dot{\theta} = 0 \\ \delta\dot{l}_{bc}^{b} = 0 \end{cases}$$

Here the transverse geographic frame $g_t$ is taken as the navigation frame $n_t$.
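To make the filter structure concrete, the following minimal sketch (our own layout assumption, not taken from the paper; the index names are hypothetical) lays out the 21-dimensional error state in code:

```python
import numpy as np

# 21-dimensional error state, in the order of the state equation above:
# phi (3), dV (3), dT (3), gyro drift (3), accel bias (3),
# camera mounting-angle error theta (3), lever-arm error dl (3).
PHI, DV, DT = slice(0, 3), slice(3, 6), slice(6, 9)
EPS, NABLA, THETA, DL = slice(9, 12), slice(12, 15), slice(15, 18), slice(18, 21)

x = np.zeros(21)      # initial state X_0 = 0, as set in Section 5
theta_err = x[THETA]  # e.g., access the mounting-angle error block
```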

3.2. The Establishment of the Measurement Equation of Filter

The transformation between the AUV body coordinate system and the camera coordinate system (c-frame) is represented by a 3 × 3 rotation matrix C and a translation vector T:

$$\begin{bmatrix} X_f \\ Y_f \\ Z_f \end{bmatrix} = C \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} + T$$

The above formula is expressed in homogeneous coordinate form:

$$\begin{bmatrix} X_f \\ Y_f \\ Z_f \\ 1 \end{bmatrix} = \begin{bmatrix} C & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = M \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12} & C_{13} & T_{14} \\ C_{21} & C_{22} & C_{23} & T_{24} \\ C_{31} & C_{32} & C_{33} & T_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix}$$

where M is defined as the relative pose transfer matrix, with:

$$T_{14} = -C_{11} X_f - C_{12} Y_f - C_{13} Z_f, \quad T_{24} = -C_{21} X_f - C_{22} Y_f - C_{23} Z_f, \quad T_{34} = -C_{31} X_f - C_{32} Y_f - C_{33} Z_f$$

where $C_{11} \ldots C_{33}$ are the components of the rotation matrix C between the AUV body coordinate system and the camera coordinate system.
According to the pinhole imaging model, the conversion between the camera coordinate system and the image plane coordinate system is as follows:

$$Z_C \begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix}$$

where f is the focal length of the camera. Substituting Equation (20) into Equation (23) yields:

$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} C & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = M_i M_e \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix}$$

In this formula, the intrinsic parameter matrix $M_i$ contains the parameters $f, d_x, d_y, u_0, v_0$ that reflect the internal optical and geometric characteristics of the camera, and the extrinsic parameter matrix $M_e$ contains the attitude transfer matrix C and position transfer vector T that reflect the spatial relationship between the camera coordinate system and the three-dimensional reference coordinate system.
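As a sanity check on the camera model, the following sketch projects a 3-D point through the extrinsic and intrinsic chain above; the function name and arguments are our own illustration, under the assumption of an undistorted pinhole camera:

```python
import numpy as np

def project_point(X_w, C, T, f, dx, dy, u0, v0):
    """Project a 3-D point into pixel coordinates with the pinhole model.
    C, T: extrinsic rotation/translation; f/dx, f/dy: equivalent focal lengths."""
    Xc = C @ X_w + T                     # reference-frame point into the camera frame
    u = (f / dx) * Xc[0] / Xc[2] + u0    # perspective division plus principal point
    v = (f / dy) * Xc[1] / Xc[2] + v0
    return np.array([u, v]), Xc[2]       # pixel coordinates and depth Z_c
```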
(1)
Visual measurement equation
The position coordinates of the feature points in the image plane coordinate system (p-frame) can be obtained from the image; the subscript i indicates the i-th feature point.
The feature points satisfy the collinearity equation:

$$Z_i^{c} \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = \begin{bmatrix} f/d_x & 0 & u_0 \\ 0 & f/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} C_{n_t}^{c} \left( T_i^{n_t} - T_{oc}^{n_t} \right)$$

$T_i^{n_t}$ is the position of feature point i in the $n_t$-frame. $T_{oc}^{n_t}$ is the position of the camera optical center in the $n_t$-frame. $C_{n_t}^{c}$ is the attitude transformation matrix from the $n_t$-frame to the c-frame. $\begin{bmatrix} u_i & v_i \end{bmatrix}^T$ is the position of the image of feature point i in the p-frame. $Z_i^{c}$ is the projection on the optical axis of the distance from feature point i to the camera optical center. $f/d_x$ and $f/d_y$ are the camera's equivalent focal lengths. $u_0$ and $v_0$ are the coordinates of the intersection of the camera optical axis with the image plane in the p-frame. For a calibrated camera, $f, d_x, d_y, u_0, v_0$ are known quantities.
At time t, the position coordinates of the images of n (n > 6) feature points in the p-frame and the positions of those feature points in the $n_t$-frame can be obtained. By linearizing the relationship between the feature point coordinates in the p-frame and the $n_t$-frame, the transformation matrix $C_{n_t}^{c}(t)$ from the $n_t$-frame to the c-frame and the position $T^{n_t}(t)$ of the camera in the navigation frame at that moment can be obtained using least squares (a pose-recovery sketch is given at the end of this section).
The relationship between $C_{n_t}^{c}(t)$ and $C_b^{n_t}(t)$ can be expressed as follows:

$$C_b^{n_t}(t) = \left[ \left(C_b^{c}\right)^{-1} C_{n_t}^{c}(t) \right]^T$$

The relationship between $T^{n_t}(t)$ and $T_b^{n_t}(t)$ can be expressed as follows:

$$T_b^{n_t}(t) = T^{n_t}(t) - C_b^{n_t}(t)\, l_{bc}^{b}$$

where $C_b^{n_t}(t)$ is the transformation matrix from the b-frame to the $n_t$-frame and $T_b^{n_t}(t)$ is the position of the vehicle in the $n_t$-frame.
After the camera is calibrated and fixed to the vehicle, the transformation matrix $C_b^{c}$ and the translation vector $l_{bc}^{b}$ from the b-frame to the c-frame can be obtained. The attitude transformation matrix $C_b^{n_t}(t)$ and the position $T_b^{n_t}(t)$ of the vehicle in the $n_t$-frame can then be calculated from $C_{n_t}^{c}(t)$ and $T^{n_t}(t)$.
(2)
Establishment of the measurement equation
The measurement equation of the system can be expressed as follows:

$$z = \begin{bmatrix} \mathrm{Mat2Ang}\left\{ \tilde{C}_c^{n_t}\, \tilde{C}_b^{c}\, \tilde{C}_{n_t}^{b} \right\} \\ T_b^{n_t}(t) + C_b^{n_t}(t)\, l_{bc}^{b} - T_c^{n_t}(t) \end{bmatrix} = \begin{bmatrix} \phi^{n_t} + C_b^{n_t} \left(C_b^{c}\right)^T \theta \\ \delta T^{n_t}(t) + C_b^{n_t}\, \delta l_{bc}^{b} \end{bmatrix} + \omega_c$$

where $\mathrm{Mat2Ang}\{\cdot\}$ extracts the attitude angles corresponding to an attitude transformation matrix, $\tilde{C}_c^{n_t}$ is the attitude matrix measured by the camera, $\tilde{C}_b^{c}$ is the transformation matrix from the INS to the camera, $\tilde{C}_{n_t}^{b}$ is the attitude matrix measured by the INS, $T_b^{n_t}(t)$ is the position measured by the INS, $T_c^{n_t}(t)$ is the position measured by the camera, $\omega_c$ is the visual measurement noise, and $C_b^{c}$ is the transformation matrix from the b-frame to the c-frame. The schematic diagram of the vision-aided INS alignment platform is shown in Figure 2.
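The pose-recovery step promised above can be sketched with OpenCV's solvePnP, which stands in here for the linearized least-squares solution from n > 6 point correspondences described in item (1); the wrapper below is an illustration under our own naming, not the authors' code, and then applies the two relations between camera pose and vehicle pose given above:

```python
import cv2
import numpy as np

def vehicle_pose_from_features(pts_nt, pts_px, K, C_b_c, l_bc_b):
    """Recover C_b^{n_t}(t) and T_b^{n_t}(t) from feature correspondences.
    pts_nt: (N,3) feature positions in the n_t-frame; pts_px: (N,2) pixels;
    K: 3x3 intrinsic matrix; C_b_c, l_bc_b: camera/IMU calibration results."""
    ok, rvec, tvec = cv2.solvePnP(pts_nt.astype(np.float64),
                                  pts_px.astype(np.float64), K, None)
    assert ok, "PnP failed"
    C_nt_c, _ = cv2.Rodrigues(rvec)             # rotation n_t -> camera
    T_cam_nt = (-C_nt_c.T @ tvec).ravel()       # camera optical center in n_t
    C_b_nt = (np.linalg.inv(C_b_c) @ C_nt_c).T  # vehicle attitude
    T_b_nt = T_cam_nt - C_b_nt @ l_bc_b         # lever-arm compensation
    return C_b_nt, T_b_nt
```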
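The fusion itself is a standard EKF measurement update, where z stacks the attitude-angle and position residuals defined above. A generic sketch follows (the Joseph-form covariance update is a common numerical-stability choice of ours, not specified in the paper):

```python
import numpy as np

def ekf_update(x, P, z, H, R):
    """Standard Kalman/EKF measurement update for the 21-state filter."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)               # state correction
    I_KH = np.eye(len(x)) - K @ H
    P = I_KH @ P @ I_KH.T + K @ R @ K.T   # Joseph-form covariance update
    return x, P
```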

4. Observability Analysis of SINS Polar Initial Alignment Algorithm with Visual Assistance

The observability of the system is different under different swaying modes. Observability analysis theory is used to analyze the observability of the system under different maneuvers and to find the maneuvering method that is most suitable for AUV initial alignment in the polar region.
In this paper, we analyze the observability of the system by the analytical method.
When swaying around the inertial measurement unit (IMU), since the position of the IMU does not change and the true velocity is zero, the error equations can be simplified to make the analysis intuitive and simple. Then we obtain:

$$\dot{\phi}^{n_t} = -\omega_{i n_t}^{n_t} \times \phi^{n_t} + \xi^{n}$$

$$\delta\dot{\upsilon}^{n_t} = \phi^{n_t} \times C_b^{n_t} f^{b} + \nabla^{n}$$

$$\delta\dot{T}^{n_t} = \delta V^{n_t}$$
The state-space model is:

$$\dot{X} = AX + GW, \qquad Z = HX + V$$

where, in 3 × 3 blocks ordered as the state vector X:

$$A = \begin{bmatrix} -\left(\omega_{ie}^{n}\times\right) & 0_{3\times3} & 0_{3\times3} & -C_b^{n} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ \left(g^{n}\times\right) & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & C_b^{n} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{12\times3} & 0_{12\times3} & 0_{12\times3} & 0_{12\times3} & 0_{12\times3} & 0_{12\times3} & 0_{12\times3} \end{bmatrix}$$

$$H = \begin{bmatrix} I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & C_b^{n} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & C_b^{n} \end{bmatrix}$$
$Q = \begin{bmatrix} H^T & (HA)^T & \cdots & \left(HA^{20}\right)^T \end{bmatrix}^T$ is the observability matrix, and $Z = \begin{bmatrix} y^T & \dot{y}^T & \cdots & \left(y^{(20)}\right)^T \end{bmatrix}^T$ is the vector consisting of the observables and their derivatives. We use the first 18 rows for the analysis:

$$\begin{cases} \tilde{\phi}_E = \phi_E + C_{11}\theta_{right} + C_{12}\theta_{front} + C_{13}\theta_{up} \\ \tilde{\phi}_N = \phi_N + C_{21}\theta_{right} + C_{22}\theta_{front} + C_{23}\theta_{up} \\ \tilde{\phi}_U = \phi_U + C_{31}\theta_{right} + C_{32}\theta_{front} + C_{33}\theta_{up} \\ \tilde{T}_E = T_E + C_{11} l_{right} + C_{12} l_{front} + C_{13} l_{up} \\ \tilde{T}_N = T_N + C_{21} l_{right} + C_{22} l_{front} + C_{23} l_{up} \\ \tilde{T}_U = T_U + C_{31} l_{right} + C_{32} l_{front} + C_{33} l_{up} \\ \dot{\tilde{\phi}}_E = w_U \phi_N - w_N \phi_U + \xi_E^{n} \\ \dot{\tilde{\phi}}_N = -w_U \phi_E + w_E \phi_U + \xi_N^{n} \\ \dot{\tilde{\phi}}_U = w_N \phi_E - w_E \phi_N + \xi_U^{n} \\ \dot{\tilde{T}}_E = \delta V_E \\ \dot{\tilde{T}}_N = \delta V_N \\ \dot{\tilde{T}}_U = \delta V_U \\ \ddot{\tilde{\phi}}_E = -\left(w_U^2 + w_N^2\right)\phi_E + w_N w_E \phi_N + w_U w_E \phi_U + w_U \xi_N^{n} - w_N \xi_U^{n} \\ \ddot{\tilde{\phi}}_N = w_N w_E \phi_E - \left(w_U^2 + w_E^2\right)\phi_N + w_U w_N \phi_U - w_U \xi_E^{n} + w_E \xi_U^{n} \\ \ddot{\tilde{\phi}}_U = w_U w_E \phi_E + w_U w_N \phi_N - \left(w_N^2 + w_E^2\right)\phi_U + w_N \xi_E^{n} - w_E \xi_N^{n} \\ \ddot{\tilde{T}}_E = g\phi_N + \nabla_E^{n} \\ \ddot{\tilde{T}}_N = -g\phi_E + \nabla_N^{n} \\ \ddot{\tilde{T}}_U = \nabla_U^{n} \end{cases}$$
where:

$$C_b^{n} = \begin{bmatrix} \cos\varphi\cos\gamma - \sin\varphi\sin\eta\sin\gamma & -\sin\varphi\cos\eta & \cos\varphi\sin\gamma + \sin\varphi\sin\eta\cos\gamma \\ \sin\varphi\cos\gamma + \cos\varphi\sin\eta\sin\gamma & \cos\varphi\cos\eta & \sin\varphi\sin\gamma - \cos\varphi\sin\eta\cos\gamma \\ -\cos\eta\sin\gamma & \sin\eta & \cos\eta\cos\gamma \end{bmatrix}$$

which can be abbreviated as:

$$C_b^{n} = \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix}, \qquad \left(\omega_{ie}^{n}\times\right) = \begin{bmatrix} 0 & -w_U & w_N \\ w_U & 0 & -w_E \\ -w_N & w_E & 0 \end{bmatrix}$$

$$\xi_E^{n} = C_{11}\xi_{right} + C_{12}\xi_{front} + C_{13}\xi_{up}, \quad \xi_N^{n} = C_{21}\xi_{right} + C_{22}\xi_{front} + C_{23}\xi_{up}, \quad \xi_U^{n} = C_{31}\xi_{right} + C_{32}\xi_{front} + C_{33}\xi_{up}$$

$$\nabla_E^{n} = C_{11}\nabla_{right} + C_{12}\nabla_{front} + C_{13}\nabla_{up}, \quad \nabla_N^{n} = C_{21}\nabla_{right} + C_{22}\nabla_{front} + C_{23}\nabla_{up}, \quad \nabla_U^{n} = C_{31}\nabla_{right} + C_{32}\nabla_{front} + C_{33}\nabla_{up}$$
In these formulas, $\varphi$ is the yaw, $\gamma$ is the roll, and $\eta$ is the pitch. From the order of the derivatives of the observations, $\phi_E, \phi_N, \phi_U$, $\theta_{right}, \theta_{front}, \theta_{up}$, $T_E, T_N, T_U$, and $l_{right}, l_{front}, l_{up}$ correspond to the zero-order derivatives of the observations, so they have the highest degree of observability. $\delta V_E, \delta V_N, \delta V_U$ and $\xi_{right}, \xi_{front}, \xi_{up}$ correspond to the first-order derivatives and have the second-highest degree of observability. $\nabla_{right}, \nabla_{front}, \nabla_{up}$ correspond to the second-order derivatives and have the lowest observability.
It can be seen from the first three rows that $C_b^{n}$ is the coefficient of $\theta_{right}, \theta_{front}, \theta_{up}$. $C_b^{n}$ is constant in the static state, so $\theta_{right}, \theta_{front}, \theta_{up}$ cannot be accurately estimated from these three equations alone. Due to the coupling between $\phi_E, \phi_N, \phi_U$ and $\theta_{right}, \theta_{front}, \theta_{up}$, the inaccuracy of $\theta_{right}, \theta_{front}, \theta_{up}$ degrades the estimation accuracy of $\phi_E, \phi_N, \phi_U$. Under the three-axis swing, $C_b^{n}$ keeps changing and so do the coefficients of $\theta_{right}, \theta_{front}, \theta_{up}$; many sets of equations can then be obtained through measurements at different times, and the six values $\phi_E, \phi_N, \phi_U, \theta_{right}, \theta_{front}, \theta_{up}$ can be accurately estimated. In the same way, the estimates of $T_E, T_N, T_U, l_{right}, l_{front}, l_{up}$ are more accurate in the triaxial swing state than in the stationary state. In addition, $\phi_N = \left(\ddot{\tilde{T}}_E - \nabla_E^{n}\right)/g$ and $\phi_E = -\left(\ddot{\tilde{T}}_N - \nabla_N^{n}\right)/g$ can be derived from the 16th and 17th rows: $\phi_E$ and $\phi_N$ can be estimated through the second derivative of the position error, with an estimation accuracy limited to $\nabla/g$. Therefore, $\phi_E$ and $\phi_N$ will in theory converge faster and more accurately than $\phi_U$. Taking into account the coupling between $\phi_E, \phi_N, \phi_U$ and $\theta_{right}, \theta_{front}, \theta_{up}$, this also benefits the estimation of $\theta_{right}, \theta_{front}, \theta_{up}$.
However, in engineering it is difficult to perform a triaxial sway, so the observability under uniaxial or biaxial sway is analyzed below.
For uniaxial sway, a change in pitch affects all nine entries of $C_b^{n}$, while a change in roll or yaw affects six entries. Therefore, the estimation accuracy under uniaxial sway ranks as pitch > roll = yaw. However, the estimation accuracy under uniaxial sway is affected by the initial attitude, and one degree of freedom can be lost at certain initial attitudes. For example, when the initial pitch and roll are both zero, a change in yaw only changes four entries of $C_b^{n}$. With biaxial sway, the initial attitude does not affect the estimation accuracy and at least eight entries of $C_b^{n}$ change. Thus, the effects of biaxial sway and triaxial sway are essentially the same in theory.
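The analytical ranking above can be cross-checked numerically: stack H, HA, ..., HA^20 into the observability matrix and evaluate its rank for a given maneuver. A sketch (our own, under the assumption of the 21-state model above) follows; since $C_b^{n}$ varies during the swing, in practice the rank would be evaluated segment by segment and the segment matrices stacked, as in piecewise-constant system analysis:

```python
import numpy as np

def observability_rank(A, H, n_derivatives=20):
    """Rank of Q = [H; HA; ...; HA^n]. A full rank of 21 means every
    error state is observable for the given (piecewise-constant) maneuver."""
    blocks, M = [], H.copy()
    for _ in range(n_derivatives + 1):
        blocks.append(M)
        M = M @ A
    Q_obs = np.vstack(blocks)
    return np.linalg.matrix_rank(Q_obs)
```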

5. Simulation

Assume that, in the simulation experiment, the initial position of the AUV in the g-frame is (89° N, 108° E, 0 m), i.e., (0.30° N, 0.95° E, 0 m) in the $g_t$-frame. The initial yaw is 45°, the initial pitch and roll are 10°, and the speed is 0 m/s. The AUV sways around the yaw axis, pitch axis, and roll axis with a swing amplitude of ±15°.
The filter is initialized as follows. The initial system state is $X_0 = 0$. The initial misalignment angles are $\phi_E = \phi_N = 1^{\circ}$ and $\phi_U = 5^{\circ}$. The velocity error is (0.1 m/s, 0.1 m/s, 0.1 m/s), and the position error is (1 m, 1 m, 1 m). The gyro constant drift is 0.05°/h with random noise of 0.01°/h. The accelerometer constant bias is 100 μg with random noise of 50 μg/√Hz. The attitude-angle difference between the camera and the IMU is (2°, 5°, 2°), and the residual misalignment angle after calibration is (0.5°, 0.5°, 0.5°). The lever arm between the camera and the IMU is (0.4 m, 1 m, 0.6 m), and the error caused by the lever arm is (0.1 m, 0.1 m, 0.1 m). The accuracy of the visual measurement is related to the distance between the camera and the marker and to the spacing of the feature points on the marker: the larger the spacing between feature points and the closer the camera is to the marker, the higher the measurement accuracy. When the feature point spacing is 30 mm and the camera-marker distance is 2000 mm, the distance error is below 10 mm and the attitude error is below 0.2°, both modeled as white noise. Thus, in this simulation, the attitude measurement noise is set to (0.2°, 0.2°, 0.2°), the position measurement noise to (0.01 m, 0.01 m, 0.01 m), the update period of the inertial navigation data is 20 ms, the update period of the camera data is 100 ms, and the filtering period is 1 s. The initial covariance matrix P(0), the system noise matrix Q, and the measurement noise matrix R are then:
$$P(0) = \mathrm{diag}\left\{ (1^{\circ})^2, (1^{\circ})^2, (5^{\circ})^2, (0.1\,\mathrm{m/s})^2, (0.1\,\mathrm{m/s})^2, (0.1\,\mathrm{m/s})^2, (1\,\mathrm{m})^2, (1\,\mathrm{m})^2, (1\,\mathrm{m})^2, \right.$$
$$\left. (0.05^{\circ}/\mathrm{h})^2, (0.05^{\circ}/\mathrm{h})^2, (0.05^{\circ}/\mathrm{h})^2, (100\,\mu g)^2, (100\,\mu g)^2, (100\,\mu g)^2, (0.5^{\circ})^2, (0.5^{\circ})^2, (0.5^{\circ})^2, (0.1\,\mathrm{m})^2, (0.1\,\mathrm{m})^2, (0.1\,\mathrm{m})^2 \right\}$$

$$Q = \mathrm{diag}\left\{ (0.01^{\circ}/\mathrm{h})^2, (0.01^{\circ}/\mathrm{h})^2, (0.01^{\circ}/\mathrm{h})^2, (50\,\mu g/\sqrt{\mathrm{Hz}})^2, (50\,\mu g/\sqrt{\mathrm{Hz}})^2, (50\,\mu g/\sqrt{\mathrm{Hz}})^2, \underbrace{0, \ldots, 0}_{15} \right\}$$

$$R = \mathrm{diag}\left\{ (0.2^{\circ})^2, (0.2^{\circ})^2, (0.2^{\circ})^2, (0.01\,\mathrm{m})^2, (0.01\,\mathrm{m})^2, (0.01\,\mathrm{m})^2 \right\}$$
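For reproducibility, these matrices can be assembled as follows; this is a sketch, and the unit conversions (degrees and °/h to radians, μg to m/s², and the noise densities folded directly into variances) are our own assumptions:

```python
import numpy as np

deg = np.pi / 180.0                 # degrees to radians
deg_h = deg / 3600.0                # deg/h to rad/s
ug = 9.8e-6                         # 1 micro-g in m/s^2

P0 = np.diag(np.concatenate([
    (np.array([1, 1, 5]) * deg) ** 2,      # misalignment angles
    np.full(3, 0.1 ** 2),                  # velocity error (m/s)^2
    np.full(3, 1.0 ** 2),                  # position error m^2
    np.full(3, (0.05 * deg_h) ** 2),       # gyro constant drift
    np.full(3, (100 * ug) ** 2),           # accelerometer bias
    np.full(3, (0.5 * deg) ** 2),          # camera/IMU mounting angle
    np.full(3, 0.1 ** 2)]))                # lever-arm error m^2

Q = np.diag(np.concatenate([
    np.full(3, (0.01 * deg_h) ** 2),       # gyro random noise
    np.full(3, (50 * ug) ** 2),            # accel noise (per sqrt(Hz))
    np.zeros(15)]))                        # constant states: no process noise

R = np.diag(np.concatenate([
    np.full(3, (0.2 * deg) ** 2),          # visual attitude noise
    np.full(3, 0.01 ** 2)]))               # visual position noise (m)
```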
According to the above simulation conditions, the Kalman filter is used to carry out the moving-base alignment simulation with a simulation time of 200 s.
The differences between the true values and the estimated values are shown in Figure 3, Figure 4 and Figure 5. The residual mounting misalignment angle between the camera and the IMU after calibration is shown in Figure 6, and the residual lever-arm error between the camera and the IMU after calibration is shown in Figure 7.
The figures show the state estimation under the condition of triaxial sway. Because the initial attitude error is large, in order to obtain a better estimation effect, the navigation solution is corrected once with the current estimates at the 30th second to reduce the nonlinearity of the model.
The estimates for the triaxial swing are shown in Table 1.
As can be seen from the table, there is little difference in the estimation accuracy of the east misalignment angle, north misalignment angle, velocity error, and position error, whether swinging or stationary. However, there is a significant difference in the estimation accuracy of the up (celestial) misalignment angle under different swinging modes: it is significantly higher for triaxial, biaxial, and uniaxial pitch swings than for uniaxial roll, uniaxial yaw, and the stationary case. Meanwhile, the residual mounting misalignment angle between the camera and the INS can be accurately estimated, which is consistent with the theoretical analysis. When the gyroscope bias is 0.05°/h and the accelerometer bias is 100 μg, the triaxial alignment accuracy of the SINS is 0.11°, 0.17°, and 0.30°. For the estimation of the mounting misalignment angle, the uniaxial, biaxial, and triaxial swings have the same effect after 200 s of alignment, which is consistent with the observability analysis. In addition, the residual lever-arm error between the camera and the INS can be accurately estimated only under triaxial and biaxial swing; it cannot be accurately estimated under uniaxial swing or in the stationary case. At the same time, the accelerometer bias is unobservable in all cases, and the gyro bias estimate is inaccurate, with a large deviation.
In engineering, the lever arm between the camera and the INS can be obtained through calibration with millimeter-level accuracy. Therefore, the AUV only needs to carry out a uniaxial pitch swing for initial alignment in the polar region, which greatly reduces the engineering difficulty compared with a triaxial swing.

6. Experimental Verification

In Section 5, the initial alignment schemes of the SINS under different maneuvering schemes were analyzed and simulated. The results show that the uniaxial pitch swing scheme performs essentially the same as the triaxial swing scheme. In practical engineering applications, a triaxial swing is difficult to realize, while the uniaxial pitch swing is simple and easy to operate. Therefore, the uniaxial pitch swing scheme was adopted in this experiment to verify the vision-aided polar initial alignment algorithm for the SINS. Considering the feasibility and operability in practical engineering, a hand-pushed turntable is used to simulate the swaying motion of the vehicle.
The STIM300 is selected as the inertial measurement unit, and the DYSMT205A industrial camera as the vision sensor. Using a fixed bracket, the camera and the STIM300 are mounted on the turntable so that the relative pose of the IMU and the camera remains unchanged, as shown in Figure 8. The camera uses an AR marker to obtain pose information. The AR marker is placed directly in front of the camera and does not move or change during the whole experiment. Before the hand-pushed swing motion, the position and attitude of the AR marker were measured and recorded with high-precision equipment. The lever-arm error of the system is very small and can be ignored.
A total of three groups of data were recorded in the experiment. Because of the high accuracy of the pitch and roll angles, they are not given here. Using the vision-aided initial alignment algorithm, the yaw angle converges to 29.4°-29.8° after the turntable comes to rest. The errors with respect to the yaw angle values obtained from the turntable are recorded in Table 2.
It can be seen from Table 2 that the yaw angle error is limited to about 0.5478°, which includes the small axis-misalignment angle between the camera and the turntable as well as that between the STIM300 and the turntable. The results show that the yaw angle error is within an acceptable range and can meet practical needs, which verifies the vision-aided initial alignment algorithm for the SINS.

7. Conclusions

This paper addresses the problem that the SINS of an AUV cannot self-align in the polar region. Combining inertial navigation with vision, a vision-aided initial alignment method of SINS based on the transverse coordinate system is proposed. When the AUV is swinging, its position and attitude are calculated from the motion of known feature points relative to the camera. The SINS is combined with visual measurement, and the observations of the AUV's motion are used as measurement information to update the state and measurement equations, completing a high-precision vision-aided initial alignment of the SINS. Simulation results show that the algorithm can estimate the misalignment angles effectively; the error between the estimated and true values is close to zero, does not diverge with time, and converges well. The experimental results show that the yaw angle error meets practical needs, giving the method strong practical applicability. At the same time, the AUV only needs a uniaxial pitch swing to meet the needs of polar alignment, which greatly reduces the engineering difficulty of achieving initial alignment of an AUV in the polar region. Therefore, the algorithm meets the requirements of AUV initial alignment in the polar region well; compared with traditional polar alignment methods, it is simpler and more widely applicable. At present, only simulation verification and laboratory analysis have been carried out; field investigations and tests will be actively carried out in the later stage.

Author Contributions

Conceptualization, F.Z.; methodology, F.Z.; software, X.G. and W.S.; validation, X.G. and W.S.; investigation, W.S.; resources, F.Z.; data curation, X.G.; writing—original draft preparation, F.Z.; writing—review and editing, W.S.; visualization, X.G.; supervision, F.Z.; project administration, F.Z.; funding acquisition, F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SINS	Strapdown inertial navigation system
AUV	Autonomous underwater vehicles
TCS	Transverse coordinate system
EKF	Extended Kalman filter
INS	Inertial navigation system
GPS	Global positioning system
MINS	Main inertial navigation system
TEF	Transversal Earth coordinate system
TGF	Transversal geographic coordinate system
IMU	Inertial measurement unit

References

  1. González-García, J.; Gómez-Espinosa, A.; García-Valdovinos, L.G.; Salgado-Jiménez, T.; Cuan-Urquizo, E.; Escobedo Cabello, J.A. Experimental Validation of a Model-Free High-Order Sliding Mode Controller with Finite-Time Convergence for Trajectory Tracking of Autonomous Underwater Vehicles. Sensors 2022, 22, 488. [Google Scholar] [CrossRef] [PubMed]
  2. Yan, Z.; Wang, L.; Wang, T.; Zhang, H.; Zhang, X.; Liu, X. A Polar Initial Alignment Algorithm for Unmanned Underwater Vehicles. Sensors 2017, 17, 2709. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Babich, O.A. Extension of the Basic Strapdown INS Algorithms to Solve Polar Navigation Problems. Gyroscopy Navig. 2019, 10, 330–338. [Google Scholar] [CrossRef]
  4. Ngwira, C.M.; Mckinnell, L.A.; Cilliers, P.J. GPS phase scintillation observed over a high-latitude Antarctic station during solar minimum. J. Atmos. Solar-Terr. Phys. 2010, 72, 718–725. [Google Scholar] [CrossRef]
  5. Andalsvik, Y.L.; Jacobsen, K.S. Observed high-latitude GNSS disturbances during a less-than-minor geomagnetic storm. Radio Sci. 2014, 49, 1277–1288. [Google Scholar] [CrossRef]
  6. Meziane, K.; Kashcheyev, A.; Jayachandran, P.T.; Hamza, A.M. On the latitude-dependence of the GPS phase variation index in the polar region. In Proceedings of the 2020 IEEE International Conference on Wireless for Space and Extreme Environments (WiSEE), Vicenza, Italy, 12–14 October 2020; pp. 72–77. [Google Scholar] [CrossRef]
  7. Wu, R.; Wu, Q.; Han, F.; Zhang, R.; Hu, P.; Li, H. Hybrid Transverse Polar Navigation for High-Precision and Long-Term INSs. Sensors 2018, 18, 1538. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Liu, M.; Gao, Y.; Li, G.; Guang, X.; Li, S. An Improved Alignment Method for the Strapdown Inertial Navigation System (SINS). Sensors 2016, 16, 621. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Liu, W.C.; Tan, Z.Y.; Bian, H.W. Application of wander azimuth INS/GPS integrated navigation in polar region. Fire Control Command Control 2013, 38, 69–71. (In Chinese) [Google Scholar]
  10. Cui, W.; Ben, Y.; Zhang, H. A Review of Polar Marine Navigation Schemes. In Proceedings of the 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), Portland, OR, USA, 20–23 April 2020; pp. 851–855. [Google Scholar] [CrossRef]
  11. Li, Q.; Ben, Y.; Yu, F. System reset of transversal strapdown INS for ship in polar region. Measurement 2015, 60, 247–257. [Google Scholar] [CrossRef]
  12. Salychev, O.S. Applied Inertial Navigation Problems and Solutions; The BMSTU Press: Moscow, Russia, 2004. [Google Scholar]
  13. Ge, H.R.; Xu, X.; Huang, L.; Zhao, H. SINS/GNSS Polar Region Integrated Navigation Method based on Virtual Spherical Model. Navig. Position. Timing 2021, 8, 81–87. [Google Scholar] [CrossRef]
  14. Yan, Z.; Wang, L.; Zhang, W.; Zhou, J.; Wang, M. Polar Grid Navigation Algorithm for Unmanned Underwater Vehicles. Sensors 2017, 17, 1599. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Di, Y.R.; Huang, H.; Ruan, W. Underwater Vehicle Pole-aligned Network navigation Technology. Ship Sci. Technol. 2021, 43, 77–82. [Google Scholar]
  16. Yan, Z.; Wang, L.; Wang, T.; Yang, Z.; Li, J. Transversal navigation algorithm for Unmanned Underwater Vehicles in the polar region. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 4546–4551. [Google Scholar] [CrossRef]
  17. Yan, Z.; Wang, L.; Wang, T.; Zhang, H.; Yang, Z. Polar Transversal Initial Alignment Algorithm for UUV with a Large Misalignment Angle. Sensors 2018, 18, 3231. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Lin, X.; Bian, H.; Wang, R. Analysis of Navigation Performance of Initial Error Based on INS Transverse Coordinate Method in Polar Region. In Proceedings of the 2018 IEEE CSAA Guidance, Navigation and Control Conference (CGNCC), Xiamen, China, 10–12 August 2018; pp. 1–6. [Google Scholar] [CrossRef]
  19. Cheng, J.H.; Liu, J.X.; Zhao, L. Review on the Development of Polar Navigation and Positioning Support Technology. Chin. Ship Res. 2021, 16, 16–29. [Google Scholar] [CrossRef]
  20. Kain, J.E.; Cloutier, J.R. Rapid Transfer Alignment for Tactical Weapon Applications. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Boston, MA, USA, 14–19 August 1989; pp. 1290–1300. [Google Scholar]
  21. Sun, F.; Yang, X.L.; Ben, Y.Y. Research on Polar Transfer Alignment Technology Based on Inverse coordinate System. J. Proj. Arrows Guid. 2014, 34, 179–182. [Google Scholar]
  22. Wu, F.; Qin, Y.Y.; Zhou, Q. Polar Transfer Alignment Algorithm for Airborne Weapon. J. Chin. Inert. Technol. 2013, 21, 141–146. [Google Scholar]
  23. Rantanen, J.; Makela, M.; Ruotsalainen, L.; Kirkko-Jaakkola, M. Motion Context Adaptive Fusion of Inertial and Visual Pedestrian Navigation. In Proceedings of the 2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Nantes, France, 24–27 September 2018; pp. 206–212. [Google Scholar] [CrossRef]
  24. Shao, W.; Dou, L.; Zhao, H.; Wang, B.; Xi, H.; Yao, W. A Visual/Inertial Relative Navigation Method for UAV Formation. In Proceedings of the 2020 Chinese Control and Decision Conference (CCDC), Hefei, China, 22–24 August 2020; pp. 1831–1836. [Google Scholar] [CrossRef]
  25. Wang, H.; Zhang, Y.; Yuan, Q.; Wu, H. Intelligent Vehicle Visual Navigation System Design. In Proceedings of the 2010 International Conference on Machine Vision and Human-machine Interface, Kaifeng, China, 24–25 April 2010; pp. 522–525. [Google Scholar] [CrossRef]
  26. Jiayun, L.; Fubin, Z. A vision assisted navigation method suitable for polar regions used by autonomous underwater vehicle. In Proceedings of the OCEANS 2018 MTS/IEEE Charleston, Charleston, SC, USA, 22–25 October 2018; pp. 1–4. [Google Scholar] [CrossRef]
  27. Chen, M.; Xu, J.H.; Yu, P. Research on the Integrated Navigation Technology of Inertial-Aided Visual Odometry. In Proceedings of the 2018 IEEE CSAA Guidance, Navigation and Control Conference (CGNCC), Xiamen, China, 10–12 August 2018; pp. 1–5. [Google Scholar] [CrossRef]
Figure 1. Transversal coordinate system.
Figure 2. Schematic diagram of the alignment platform of the vision-aided INS.
Figure 3. The difference between the true value and the estimated value of the misalignment angle.
Figure 4. The difference between the true value and the estimated value of the velocity error.
Figure 5. The difference between the true value and the estimated value of the position error.
Figure 6. Residual mounting misalignment angle between the camera and the IMU after calibration.
Figure 7. Residual lever-arm error between the camera and the IMU after calibration.
Figure 8. Overall connection diagram of the experiment.
Table 1. Estimates under triaxial sway.

| Rotation Axis | $x_b, y_b, z_b$ | $x_b, z_b$ | $x_b, y_b$ | $y_b, z_b$ | $z_b$ | $x_b$ | $y_b$ | Still |
|---|---|---|---|---|---|---|---|---|
| $\phi_E$ (°) | 0.11 | 0.26 | 0.61 | 0.41 | 0.50 | 0.40 | 0.36 | 0.51 |
| $\phi_N$ (°) | −0.17 | −0.12 | 0.31 | 0.70 | −0.11 | −0.21 | 0.37 | −0.18 |
| $\phi_U$ (°) | −0.30 | −0.75 | 0.90 | 0.76 | 0.86 | −3.20 | 9.21 | 6.38 |
| $\delta v_E$ (m/s) | 6.98 × 10⁻³ | −1.20 × 10⁻³ | −3.0 × 10⁻³ | −1.20 × 10⁻³ | 1.4 × 10⁻³ | −1.2 × 10⁻³ | −6.64 × 10⁻⁴ | −8.67 × 10⁻⁴ |
| $\delta v_N$ (m/s) | 8.71 × 10⁻³ | 1.1 × 10⁻³ | −2.10 × 10⁻³ | −1.24 × 10⁻³ | 5.4 × 10⁻⁴ | 1.0 × 10⁻³ | −5.57 × 10⁻⁴ | −9.41 × 10⁻⁴ |
| $\delta v_U$ (m/s) | 3.13 × 10⁻⁴ | 1.1 × 10⁻³ | −2.0 × 10⁻³ | −1.80 × 10⁻³ | 1.3 × 10⁻⁴ | 1.6 × 10⁻³ | −1.0 × 10⁻⁴ | 2.68 × 10⁻⁴ |
| $\delta L$ (m) | −0.007 | 0.022 | 0.035 | −0.0025 | −0.017 | 0.351 | 0.001 | 0.333 |
| $\delta\lambda$ (m) | −0.013 | 0.028 | 0.023 | 0.034 | 0.304 | −0.012 | 0.001 | 0.323 |
| $\delta H$ (m) | 0.017 | 0.015 | 0.021 | 0.023 | −0.002 | 0.013 | 0.317 | 0.315 |
| $\varepsilon_{right}$ (°/h) | −0.069 | 0.064 | 0.054 | 0.049 | 0.058 | 0.073 | 0.071 | 0.062 |
| $\varepsilon_{front}$ (°/h) | 0.042 | 0.035 | 0.050 | 0.058 | 0.043 | 0.078 | 0.021 | 0.043 |
| $\varepsilon_{up}$ (°/h) | 0.023 | 0.021 | 0.032 | 0.012 | 0.034 | 0.018 | −0.029 | 0.005 |
| $\nabla_{right}$ (μg) | x | x | x | x | x | x | x | x |
| $\nabla_{front}$ (μg) | x | x | x | x | x | x | x | x |
| $\nabla_{up}$ (μg) | x | x | x | x | x | x | x | x |
| $\delta A_{right}$ (′) | 29.40 | 29.20 | 29.01 | 28.75 | 28.82 | 29.43 | 31.04 | 29.23 |
| $\delta A_{front}$ (′) | 31.20 | 31.21 | 31.20 | 30.60 | 31.01 | 31.60 | 28.29 | 29.02 |
| $\delta A_{up}$ (′) | 29.50 | 31.19 | 31.80 | 29.53 | 31.40 | 33.40 | 31.30 | 24.12 |
| $\delta l_{right}$ (m) | 0.0113 | 0.0109 | 0.0107 | 0.0113 | −0.202 | 0.011 | 0.103 | −0.213 |
| $\delta l_{front}$ (m) | 0.0124 | 0.0115 | 0.0123 | 0.0121 | 0.0120 | −0.247 | 0.108 | −0.230 |
| $\delta l_{up}$ (m) | 0.0106 | 0.0115 | 0.0114 | 0.0114 | 0.0107 | 0.096 | −0.227 | −0.227 |

x: unobservable.
Table 2. The difference between the yaw angle obtained by the vision-aided initial alignment algorithm and the corresponding turntable value.

| Experimental Group Number | 1 | 2 | 3 | Average Value |
|---|---|---|---|---|
| Difference in yaw (°) | 0.4438 | 0.5890 | 0.6106 | 0.5478 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

