Article

INS/CNS Deeply Integrated Navigation Method of Near Space Vehicles

School of Aerospace Science and Technology, Harbin Institute of Technology, Harbin 150001, China
*
Author to whom correspondence should be addressed.
Sensors 2020, 20(20), 5885; https://doi.org/10.3390/s20205885
Submission received: 25 September 2020 / Revised: 14 October 2020 / Accepted: 15 October 2020 / Published: 17 October 2020
(This article belongs to the Section Remote Sensors)

Abstract

Celestial navigation is required to improve the long-term accuracy preservation capability of near space vehicles. However, it takes a long time for traditional celestial navigation methods to identify the star map, which limits the improvement of the dynamic response ability. Meanwhile, the aero-optical effects caused by the near space environment can lead to the colorization of measurement noise, which affects the accuracy of the integrated navigation filter. In this paper, an INS/CNS deeply integrated navigation method, which includes a deeply integrated model and a second-order state augmented H-infinity filter, is proposed to solve these problems. The INS/CNS deeply integrated navigation model optimizes the attitude based on the gray image error function, which can estimate the attitude without star identification. The second-order state augmented H-infinity filter uses the state augmentation algorithm to whiten the measurement noise caused by the aero-optical effect, which can effectively improve the estimation accuracy of the H-infinity filter in the near space environment. Simulation results show that the proposed INS/CNS deeply integrated navigation method can reduce the computational cost by 50%, while the attitude accuracy is kept within 10” (3σ). The attitude root-mean-square error of the second-order state augmented H-infinity filter does not exceed 5”, even when the parameter error increases to 50%, in the near space environment. Therefore, the INS/CNS deeply integrated navigation method can effectively improve the rapid response ability of the navigation system and the filtering accuracy in the near space environment, providing a reference for the future design of near space vehicle navigation systems.

1. Introduction

Near space refers to the airspace 20 to 100 km above the surface [1]. Near space vehicles can cruise at a high speed in both the atmosphere and space. Compared with traditional aircraft, near space vehicles have been widely applied in space transportation, remote penetration, and so on, due to their advantages relating to launch costs and re-usability, among others [2,3]. Therefore, near space vehicles have drawn a great amount of attention, and many countries, such as the United States, Russia, China, France, and Germany, have conducted research on near space vehicles, such as the X-43A, X51A, X-37B, SHEFEX-1, SHEFEX-2, Avangard, and so on [4,5].
Near space vehicles fly so fast that attitude errors can greatly affect their position accuracy [6,7]. As the most accurate navigation method [8,9,10], celestial navigation is helpful for improving the long-term accuracy preservation capability of near space vehicles [11]. Celestial navigation calculates the attitude of an aircraft by measuring stars, which are firmly fixed in inertial space, such that the navigation error does not accumulate with time. Star sensors are widely used in modern celestial navigation systems; they capture images of stars and calculate the attitude according to the star point locations [12]. However, star sensors are vulnerable to environmental impacts. To improve the reliability of the navigation system, an inertial sensor is usually added to produce an integrated navigation system [13,14]. The INS/CNS integrated navigation system can be divided into two modes: Loose integration and tight integration [15]. In a loosely integrated navigation system, the inertial and star sensors work independently, and the measurement of the integrated navigation system is subject to the navigation errors of both [16]. In a tightly integrated navigation system, the star sensor is no longer used for independent navigation calculation but, instead, is used as a sensor to measure the star vector; the system then sets up the measurement equation based on the star vector error [17,18]. The tightly integrated mode can save computational time at the sensor level, but it needs more time to update the measurements in the navigation filter, so its computational efficiency over the whole navigation loop is close to that of the loosely integrated mode. Regardless of whether a loosely or tightly integrated navigation system is employed, star identification is necessary to identify navigation stars, which is the most time-consuming step [9,19]. The computational cost of the star sensor causes an attitude data delay [20].
When the vehicle speed is low and the attitude changes slowly, the attitude delay does not affect the navigation accuracy. However, the speed of near space vehicles is so high, and their mobility is so strong, that data delay has a significant impact on their navigation accuracy. Therefore, one solution is to simplify the celestial navigation algorithm and improve the computational efficiency of the star sensor [5,20].
Furthermore, the complex flow structures around the optical window of the star sensor produce target image offsets (called aero-optical effects), which are an important factor affecting the navigation accuracy of the near space vehicle [21,22]. The main problem of celestial navigation in the near space environment is the colored noise caused by aero-optical effects, which can cause the accuracy of the integrated navigation filter to decrease or even diverge [23,24].
At present, the Kalman filter is the most widely used technology in integrated navigation systems [25]. However, the Kalman filter makes some ideal assumptions in the filtering process, such as knowledge of the system model and system disturbances. The Kalman filtering algorithm assumes that the noise is white noise with zero mean and known covariance [26]; obviously, this assumption is no longer tenable in the near space environment. To deal with the uncertainty of noisy models, the H-infinity filter has been proposed. The H-infinity filter does not require the statistical characteristics of the noise model and has better robustness in the uncertainty system model [27]. Therefore, the H-infinity filter has great advantages over other filters in the near space environment.
Based on the above analyses, there are two problems that need to be solved: First, due to the strong mobility and high speed of near space vehicles, attitude data delay has a great impact on the navigation accuracy [1,5,20]. Therefore, the computational efficiency of the navigation algorithm needs to be improved, in order to adapt to the dynamic characteristics of near space vehicles. Second, the aero-optical effects affect the observation of the star sensor, leading to colorized measurement noise [21,23]. Therefore, the filter of the integrated navigation system needs to be improved using a colored noise model, in order to adapt to the near space environment.
To solve the above problems, an INS/CNS deeply integrated navigation model is proposed in this paper. In the loosely and tightly integrated modes, the IMU and the star sensor work independently; in the deeply integrated mode, the IMU is involved in the calculation process of the star sensor. IMU data are used to predict the navigation stars and to ensure that the algorithm converges. In the deeply integrated navigation system, the star sensor only outputs a gray image (star identification is unnecessary), and the attitude is obtained by optimizing the gray image error function, which is constructed with the help of the IMU. Therefore, compared with the loosely and tightly integrated modes, the integration is deeper in the deeply integrated mode. Meanwhile, a state augmentation algorithm is introduced to whiten the measurement noise, which can effectively reduce the uncertainty of the measurement noise model. Then, a second-order state augmented H-infinity filter is proposed by using prior information of the colored noise, which can effectively improve the estimation accuracy of the near space vehicle navigation system.
This paper is organized as follows: In Section 2, the INS/CNS deeply integrated navigation model is proposed, to improve the computational efficiency of the star sensor. The second-order state augmented H-infinity filter is proposed in Section 3, to improve the navigation accuracy under colored noise conditions in the near space environment. Simulations are presented in Section 4, to demonstrate the performance of the proposed model. Finally, our conclusions are drawn in Section 5.

2. INS/CNS Deeply Integrated Model

The main idea of INS/CNS integrated navigation is to use the error between the measurement information of the star sensor and the prediction information of the INS to estimate the misalignment angle. The difference between the modes lies in the level at which navigation information is processed; for example, a loosely integrated model deals with navigation information at the attitude angle level [28], while a tightly integrated model deals with navigation information at the star vector level [29]. Furthermore, the deeply integrated model deals with navigation information at the gray image level. The deeper the navigation information level, the fewer processing steps the star sensor requires. Therefore, in the tightly integrated mode, the star sensor does not need to calculate the attitude; in the deeply integrated mode, star identification is only used to provide initial values at the beginning, and the star sensor does not need to carry out star identification subsequently.

2.1. Gray Image Error Function

The gray image error function is the gray error between the measurement star image of the star sensor and the predicted star image by the inertial navigation system.
As shown in Figure 1, $g_1$ is the measurement star image of the star sensor; $g_2$ is the prediction star image of inertial navigation; $O_sx_sy_sz_s$ is the star sensor co-ordinate system; $O_{s'}x_{s'}y_{s'}z_{s'}$ is the virtual star sensor co-ordinate system based on INS prediction; $C_s^{s'}$ is the transformation matrix between the star sensor and virtual star sensor co-ordinate systems; and $\phi_s$ is the misalignment angle of $C_s^{s'}$, which satisfies $C_s^{s'} = \exp(\phi_s\times)$. Suppose $p_1$ is a pixel in $g_1$ and $p_s$ is the projection vector of $p_1$ in the star sensor co-ordinate system, which satisfy [30]:
$$p_1 = \begin{bmatrix}u_x\\u_y\\1\end{bmatrix} = \begin{bmatrix}f_x&0&c_x\\0&f_y&c_y\\0&0&1\end{bmatrix}\begin{bmatrix}X/Z\\Y/Z\\1\end{bmatrix} = A\bar{p}_s,\tag{1}$$
where $u_x$ and $u_y$ represent the horizontal and vertical pixel co-ordinates, respectively; $A$ is the internal parameter matrix of the star sensor; the parameters $f_x$, $f_y$, $c_x$, and $c_y$ are fixed after delivery; $X$, $Y$, and $Z$ are the co-ordinates of $p_s$; and $\bar{p}_s$ is the normalized vector of $p_s$.
Due to the INS misalignment angle, $g_2$ is based on the virtual star sensor co-ordinate system; $p_s$ and $p_{s'}$ are projection vectors of the same point in the star sensor and virtual star sensor co-ordinate systems; and $p_2$ represents the pixel co-ordinates of $p_{s'}$ in $g_2$, which satisfy
$$p_2 = A\bar{p}_{s'} = AC_s^{s'}\bar{p}_s = A\exp(\phi_s\times)\bar{p}_s.\tag{2}$$
$I_1(p_1)$ is the gray value of point $p_1$ in $g_1$ and $I_2(p_2)$ is the gray value of point $p_2$ in $g_2$; $p_1$ and $p_2$ can be regarded as projections of the same vector into different images. Let $\hat{\phi}_s$ be the estimate of $\phi_s$ and let the subscript $i$ denote the pixel index in the image. Then $e_i(\hat{\phi}_s)$, the gray error function of a single point, is calculated as follows:
$$e_i(\hat{\phi}_s) = I_1(p_1) - I_2(p_2) = I_1(A\bar{p}_s) - I_2(A\exp(\hat{\phi}_s\times)\bar{p}_s).\tag{3}$$
According to the gray scale invariant, image transformation does not change the gray value of the same star point [31]. When ϕ ^ s is close to the true value ϕ s , e i ( ϕ ^ s ) will be close to zero. Considering all pixels in the image, the star sensor attitude estimation problem can be transformed into a non-linear optimization problem, as follows:
$$\min_{\hat{\phi}_s}J(\hat{\phi}_s) = \|e(\hat{\phi}_s)\|^2,\quad e(\hat{\phi}_s) = \begin{bmatrix}e_1(\hat{\phi}_s)\\\vdots\\e_N(\hat{\phi}_s)\end{bmatrix},\tag{4}$$
where $N$ is the number of pixels to be optimized in the image and $e(\hat{\phi}_s)$ is the gray image error function. The optimal estimate of $\phi_s$ is obtained when $J(\hat{\phi}_s)$ reaches its minimum.
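As a concrete illustration, the error function of Equation (4) can be sketched in a few lines of NumPy. This is our own minimal reconstruction, not the authors' code; the helper names (`skew`, `so3_exp`, `gray_error`) and the nearest-pixel sampling are simplifying assumptions.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v x]."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(phi):
    """Rodrigues formula for exp(phi x)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3) + skew(phi)
    a = phi / theta
    K = skew(a)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def gray_error(I1, I2, A, p_bars, phi_hat):
    """Gray image error e(phi_hat): per-pixel residual I1(p1) - I2(p2),
    with p1 = A*p_bar and p2 = A*exp(phi_hat x)*p_bar (Equations (1)-(4)).
    Nearest-pixel sampling is used for simplicity."""
    R = so3_exp(phi_hat)
    e = []
    for p_bar in p_bars:
        p1 = A @ p_bar
        p1 = p1 / p1[2]
        p2 = A @ (R @ p_bar)
        p2 = p2 / p2[2]
        u1, v1 = int(round(p1[0])), int(round(p1[1]))
        u2, v2 = int(round(p2[0])), int(round(p2[1]))
        e.append(float(I1[v1, u1]) - float(I2[v2, u2]))
    return np.array(e)
```

With identical images and a zero misalignment angle, every residual is zero, which is the condition the optimizer drives toward.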

2.2. Attitude Optimization Algorithm Based on the Damped Newton Method

According to Equation (4), the principle of the deeply integrated model is to minimize the mean square error between two images by adjusting ϕ ^ s . To explain the principle of the deeply integrated model intuitively, suppose that there is only one star in the image. The physical meanings of g 1 , g 2 , and the gray image error function are shown in Figure 2.
At the beginning, star identification is needed to correct the INS and ensure that $\hat{\phi}_s$ is small enough, so that the star in $g_1$ and the star in $g_2$ appear in the same star window, as shown in Figure 2. There is then one obvious peak A and one obvious trough B in the upper $\Delta g$, and $\hat{\phi}_s$ is adjusted in the direction from B to A. A and B come closer and closer with each iteration and finally coincide, as in the lower $\Delta g$.
According to the gray scale invariant,
$$I_1(A\bar{p}_s) = I_1'(A\exp(\hat{\phi}_s\times)\bar{p}_s).\tag{5}$$
Equation (3) can be further written as
$$e_i(\hat{\phi}_s) = I_1(p_1) - I_2(p_2) = I_1'(A\exp(\hat{\phi}_s\times)\bar{p}_s) - I_2(A\exp(\hat{\phi}_s\times)\bar{p}_s) = \Delta I(p_2).\tag{6}$$
$\Delta I(p_2)$ is the gray value of point $p_2$ in $\Delta g$ (the difference between the transformed image $g_1'$ and $g_2$), so the physical meaning of $e_i(\hat{\phi}_s)$ can be understood as the value at pixel co-ordinate $p_2$ in $\Delta g$. The purpose of global optimization is to minimize the mean square error of all points in $\Delta g$.
As shown in Figure 3, when ϕ ^ s deviates from ϕ s , there will be an obvious peak and an obvious valley in Δ g . The purpose of the deeply integrated model is to adjust the rotation relationship ϕ ^ s between g 1 and g 2 until the peak and valley overlap, such that the gray error becomes close to zero. Obviously, the fastest adjustment direction is the red line in Figure 3, which is actually the gradient direction of e i ( ϕ ^ s ) . The gradient can be described as follows:
$$\frac{\partial e_i(\hat{\phi}_s)}{\partial\hat{\phi}_s} = \frac{\partial[I_1(p_1)-I_2(p_2)]}{\partial\hat{\phi}_s} = \frac{\partial[I_1(A\bar{p}_s)-I_2(A\exp(\hat{\phi}_s\times)\bar{p}_s)]}{\partial\hat{\phi}_s} = -\frac{\partial I_2(A\exp(\hat{\phi}_s\times)\bar{p}_s)}{\partial\hat{\phi}_s}.\tag{7}$$
It is difficult to calculate $\partial I_2/\partial\hat{\phi}_s$ directly, so intermediate variables are introduced to decompose Equation (7). Suppose $\rho = \exp(\hat{\phi}_s\times)\bar{p}_s$ and $\vartheta = A\rho$; then, Equation (7) can be decomposed into:
$$\frac{\partial e_i(\hat{\phi}_s)}{\partial\hat{\phi}_s} = -\frac{\partial I_2}{\partial\vartheta}\frac{\partial\vartheta}{\partial\rho}\frac{\partial\rho}{\partial\hat{\phi}_s}.\tag{8}$$
Next, we calculate the three partial derivatives. $\partial I_2/\partial\vartheta$ is the pixel gradient of $g_2$ at $\vartheta$. If one supposes that the pixel co-ordinate is $\vartheta = [u_a\ u_b]^T$, then the pixel gradient is
$$\frac{\partial I_2}{\partial\vartheta} = \frac{1}{2}\begin{bmatrix}I_2(u_a+1,u_b)-I_2(u_a-1,u_b)\\ I_2(u_a,u_b+1)-I_2(u_a,u_b-1)\end{bmatrix}^T.\tag{9}$$
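Equation (9) is the standard central-difference image gradient. A minimal sketch (our own, assuming the image array is indexed as `I2[u_b, u_a]`, i.e. row = vertical co-ordinate):

```python
import numpy as np

def pixel_gradient(I2, ua, ub):
    """Central-difference pixel gradient of image I2 at integer pixel
    (ua, ub): 0.5*[I(ua+1,ub)-I(ua-1,ub), I(ua,ub+1)-I(ua,ub-1)].
    Assumes I2 is indexed as I2[ub, ua] (row = vertical co-ordinate)."""
    gx = 0.5 * (float(I2[ub, ua + 1]) - float(I2[ub, ua - 1]))
    gy = 0.5 * (float(I2[ub + 1, ua]) - float(I2[ub - 1, ua]))
    return np.array([gx, gy])
```

On a linear intensity ramp the central difference recovers the slope exactly, which is why it is adequate for the nearly Gaussian star spots considered here.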
Supposing that $\rho = [X_\rho\ Y_\rho\ Z_\rho]^T$, $\partial\vartheta/\partial\rho$ can be calculated as follows:
$$\frac{\partial\vartheta}{\partial\rho} = \begin{bmatrix}\dfrac{f_x}{Z_\rho}&0&-\dfrac{f_xX_\rho}{Z_\rho^2}\\[2mm]0&\dfrac{f_y}{Z_\rho}&-\dfrac{f_yY_\rho}{Z_\rho^2}\end{bmatrix}.\tag{10}$$
It is difficult to calculate ρ / ϕ ^ s directly, so the Baker–Campbell–Hausdorff formula is used to approximate ρ / ϕ ^ s , which satisfies [32]:
$$\frac{\partial\rho}{\partial\hat{\phi}_s} = \frac{\partial\exp(\hat{\phi}_s\times)\bar{p}_s}{\partial\hat{\phi}_s} = \lim_{\delta\hat{\phi}_s\to0}\frac{\exp[(\hat{\phi}_s+\delta\hat{\phi}_s)\times]\bar{p}_s - \exp(\hat{\phi}_s\times)\bar{p}_s}{\delta\hat{\phi}_s} = \lim_{\delta\hat{\phi}_s\to0}\frac{\exp[(\kappa\,\delta\hat{\phi}_s)\times]\exp(\hat{\phi}_s\times)\bar{p}_s - \exp(\hat{\phi}_s\times)\bar{p}_s}{\delta\hat{\phi}_s} \approx \lim_{\delta\hat{\phi}_s\to0}\frac{(\kappa\,\delta\hat{\phi}_s)\times\exp(\hat{\phi}_s\times)\bar{p}_s}{\delta\hat{\phi}_s} = -(\rho\times)\kappa.\tag{11}$$
According to the Baker–Campbell–Hausdorff formula [32],
$$\begin{cases}\kappa = \dfrac{\sin\chi}{\chi}I_{3\times3} + \left(1-\dfrac{\sin\chi}{\chi}\right)\gamma\gamma^T + \dfrac{1-\cos\chi}{\chi}(\gamma\times)\\[2mm] \chi = |\hat{\phi}_s|,\quad \gamma = \dfrac{\hat{\phi}_s}{\chi}.\end{cases}\tag{12}$$
The Jacobian matrix, J i , can be obtained by substituting Equations (9)–(11) into Equation (8), which satisfies:
$$J_i = \frac{\partial e_i(\hat{\phi}_s)}{\partial\hat{\phi}_s} = -\frac{\partial I_2}{\partial\vartheta}\frac{\partial\vartheta}{\partial\rho}\frac{\partial\rho}{\partial\hat{\phi}_s} = \frac{1}{2}\begin{bmatrix}I_2(u_a+1,u_b)-I_2(u_a-1,u_b)\\ I_2(u_a,u_b+1)-I_2(u_a,u_b-1)\end{bmatrix}^T\begin{bmatrix}\dfrac{f_x}{Z_\rho}&0&-\dfrac{f_xX_\rho}{Z_\rho^2}\\[2mm]0&\dfrac{f_y}{Z_\rho}&-\dfrac{f_yY_\rho}{Z_\rho^2}\end{bmatrix}(\rho\times)\kappa.\tag{13}$$
When there are $N$ pixels to be optimized, the $J_i$ are stacked into a global Jacobian matrix $J(\hat{\phi}_s) = [J_1^T\ \cdots\ J_N^T]^T$, where each $J_i$ is a $1\times3$ matrix and $J(\hat{\phi}_s)$ is an $N\times3$ matrix.
The damped Newton method is used to update ϕ ^ s , and Δ ϕ ^ s is calculated as follows:
$$\begin{cases}H_J\,\Delta\hat{\phi}_s = -g_J\\ H_J = J^T(\hat{\phi}_s)J(\hat{\phi}_s)+\eta I\\ g_J = J^T(\hat{\phi}_s)\,e(\hat{\phi}_s),\end{cases}\tag{14}$$
where $\eta$ is the damping coefficient, which avoids singularities and makes the iterative process more stable. Then, $\hat{\phi}_s \leftarrow \hat{\phi}_s + \Delta\hat{\phi}_s$ is used to modify $\hat{\phi}_s$ until it converges. It should be noted that the method does not always converge: in simulation, it diverged when the initial value of $\hat{\phi}_s$ was larger than 2′. Therefore, it is suggested that the initial value of $\hat{\phi}_s$ be kept within 2′ to ensure convergence. Generally, the method needs only 3–4 iterations to converge.
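The damped Newton update of Equation (14) can be sketched as follows. This is our own illustration, not the paper's code; `damped_newton_step` is a hypothetical helper, demonstrated on a linear least-squares toy problem rather than the actual gray image error.

```python
import numpy as np

def damped_newton_step(J, e, eta=1e-3):
    """One damped Newton (Levenberg-style) update for min ||e(phi)||^2:
    solve (J^T J + eta*I) dphi = -J^T e, cf. Equation (14)."""
    H = J.T @ J + eta * np.eye(J.shape[1])
    g = J.T @ e
    return np.linalg.solve(H, -g)
```

For a linear residual e(φ) = Jφ − b, repeated steps converge to the least-squares solution; in the deeply integrated model, J and e are recomputed from the images at each iteration instead.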
Following this, $C_{n'}^n$ can be calculated. $p_s$ and $p_{s'}$ are the true and predicted values of the same vector, which satisfy
$$\begin{cases}p_s = C_b^sC_n^bp_n\\ p_{s'} = C_b^sC_{n'}^bp_n\\ p_{s'} = C_s^{s'}p_s.\end{cases}\tag{15}$$
$C_{n'}^n$ is calculated as follows:
$$C_{n'}^n = (C_b^sC_n^b)^{-1}\exp(\hat{\phi}_s\times)\,C_b^sC_n^b.\tag{16}$$
$C_{n'}^n$ is used to modify $C_{n'}^b$ to the CNS attitude result $C_n^b$, where $q_{cns}$ is the quaternion of $C_n^b$. As the CNS attitude $q_{cns}$ has been obtained by the deeply integrated model, the next section describes the filtering algorithm of the INS/CNS integrated navigation method.

3. Second-Order State Augmented H-Infinity Filter

There is a strong interaction between the aircraft and the surrounding airflow when the near space vehicle re-enters the atmosphere. This effect is called the aero-optical effect, which causes colorization of the attitude noise [23]. This section solves this problem by using a filtering approach.

3.1. Star Sensor Pixel Offset Model in the Near Space Environment

When light travels through a rapidly varying flow field, the imaging position on the focal plane may be biased. The image distortion due to aero-optical effects can be described approximately as follows:
$$I_1(i_1,j_1) = I_0(i_0+\Delta i,\ j_0+\Delta j),\tag{17}$$
where $I_0(i_0+\Delta i, j_0+\Delta j)$ is the gray value of the reference image without aero-optical effects; $I_1(i_1,j_1)$ is the gray value of the distorted image affected by aero-optical effects; $(i_0,j_0)$ is the reference image co-ordinate; $(i_1,j_1)$ is the distorted image co-ordinate; and $\Delta i$ and $\Delta j$ are the pixel offsets in the X-axis and Y-axis, respectively, caused by aero-optical effects. The pixel offset effect is shown in Figure 4, where the hollow points represent the original pixel positions and the solid points represent the pixel positions after offset.
It has been pointed out, in [24], that the pixel offset caused by aero-optical effects can be modeled approximately as follows:
$$\begin{cases}\Delta i(t) = a_1\sin(2\pi f_1t+\theta_1)\\ \Delta j(t) = a_2\sin(2\pi f_2t+\theta_2).\end{cases}\tag{18}$$
However, sinusoidal mathematical models are difficult to directly apply in the navigation systems of near space vehicles. This is because, in the actual flight environment, it is difficult to obtain the real-time phases θ 1 and θ 2 . Therefore, in this paper, a recursive model is used to describe the pixel offset caused by aero-optical effects. The discrete sine sequence can be written as follows:
$$\sin(\omega n) = 2\cos(\omega)\sin(\omega(n-1)) - \sin(\omega(n-2)),\tag{19}$$
where ω is the digital angular frequency.
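The discrete sine recurrence of Equation (19) follows from the sum-to-product identity and can be checked numerically (a quick sanity check of our own, with an arbitrary example frequency):

```python
import numpy as np

# Check sin(w*n) = 2*cos(w)*sin(w*(n-1)) - sin(w*(n-2)) for all n >= 2.
w = 0.37  # arbitrary digital angular frequency (example value)
n = np.arange(2, 200)
lhs = np.sin(w * n)
rhs = 2.0 * np.cos(w) * np.sin(w * (n - 1)) - np.sin(w * (n - 2))
assert np.allclose(lhs, rhs)
```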
Therefore, the discrete recursive model of pixel offset can be obtained as follows:
$$\begin{cases}\Delta i(t_k) = 2\cos\!\left(\dfrac{2\pi f_1}{f_s}\right)\Delta i(t_{k-1}) - \Delta i(t_{k-2}) + \omega_i(t_k)\\[2mm] \Delta j(t_k) = 2\cos\!\left(\dfrac{2\pi f_2}{f_s}\right)\Delta j(t_{k-1}) - \Delta j(t_{k-2}) + \omega_j(t_k),\end{cases}\tag{20}$$
where f s is the sampling frequency, and ω i ( t k ) and ω j ( t k ) are zero-mean white noise sequences. This modeling method only needs to know the frequency, which improves the feasibility of engineering applications.
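The recursive model of Equation (20) is straightforward to simulate. The function below is our own sketch; the start-up values `d0`, `d1` and the noise level `sigma` are illustrative parameters, not values from the paper.

```python
import numpy as np

def simulate_pixel_offset(f, fs, n_steps, sigma, rng, d0=0.0, d1=0.0):
    """Recursive pixel-offset model (Equation (20)):
    d[k] = 2*cos(2*pi*f/fs)*d[k-1] - d[k-2] + white noise(sigma)."""
    c = 2.0 * np.cos(2.0 * np.pi * f / fs)
    d = np.zeros(n_steps)
    d[0], d[1] = d0, d1
    for k in range(2, n_steps):
        d[k] = c * d[k - 1] - d[k - 2] + sigma * rng.standard_normal()
    return d
```

With zero driving noise and sinusoidal start-up values, the recursion reproduces the pure sine of Equation (18) exactly, which is the property the whitened model relies on.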

3.2. Second-Order State Augmented H-Infinity Filtering Model

The H-infinity filter can obtain the optimal estimation of state variables under the condition of noise with unknown statistics [33,34]. According to the characteristics of the near space environment, the H-infinity filter can be improved based on the prior information of V k . Specific improvements are presented in the following sections.

3.2.1. Measurement Noise Whitening

Suppose the filter model is
$$\begin{cases}X_k = \Phi_{k/k-1}X_{k-1} + \Gamma_{k-1}W_{k-1}\\ Z_k = H_kX_k + V_k,\end{cases}\tag{21}$$
where $V_k$ is the observation noise of the star sensor in the near space environment, which consists of two parts, $V_k = V_{c,k} + V_{w,k}$: $V_{c,k}$ is the colored noise part and $V_{w,k}$ is the white noise part ($E[V_{w,k}] = 0$, $E[V_{w,k}V_{w,j}^T] = R_{w,k}\delta_{kj}$). The mechanisms of $V_{c,k}$ and $V_{w,k}$ are different, so they are uncorrelated. $V_{c,k}$ can be approximated as a second-order Markov process, described as $V_{c,k} = \alpha_1V_{c,k-1} + \alpha_2V_{c,k-2} + \xi_k$, where $\xi_k$ is the driving white noise ($E[\xi_k] = 0$, $E[\xi_k\xi_j^T] = R_{c,k}\delta_{kj}$). $X$ and $\Phi$ can be described as follows:
$$X = [\phi^T\ (\delta v^n)^T\ (\delta p)^T\ (\varepsilon^b)^T\ (\nabla^b)^T]^T,\quad \Phi_{k/k-1} \approx I + F\Delta t,\tag{22}$$
where
$$F = \begin{bmatrix}-[\omega_{in}^n\times] & M_v & M_1+M_2 & -C_b^n & 0_{3\times3}\\ [f_{ib}^n\times] & -[(2\omega_{ie}^n+\omega_{en}^n)\times]+[v\times]M_v & M_g+[v\times](2M_1+M_2) & 0_{3\times3} & C_b^n\\ 0_{3\times3} & M_{pv} & M_{pp} & 0_{3\times3} & 0_{3\times3}\\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3}\\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3}\end{bmatrix}.\tag{23}$$
$$M_1 = \begin{bmatrix}0&0&0\\-\omega_{ie}\sin L&0&0\\ \omega_{ie}\cos L&0&0\end{bmatrix},\quad M_2 = \begin{bmatrix}0&0&\dfrac{v_N}{(R_M+h)^2}\\0&0&-\dfrac{v_E}{(R_N+h)^2}\\ \dfrac{v_E\sec^2L}{R_N+h}&0&-\dfrac{v_E\tan L}{(R_N+h)^2}\end{bmatrix},\quad M_v = \begin{bmatrix}0&-\dfrac{1}{R_M+h}&0\\ \dfrac{1}{R_N+h}&0&0\\ \dfrac{\tan L}{R_N+h}&0&0\end{bmatrix}.\tag{24}$$
$$M_{pv} = \begin{bmatrix}0&\dfrac{1}{R_M+h}&0\\ \dfrac{\sec L}{R_N+h}&0&0\\0&0&1\end{bmatrix},\quad M_{pp} = \begin{bmatrix}0&0&-\dfrac{v_N}{(R_M+h)^2}\\ \dfrac{v_E\sec L\tan L}{R_N+h}&0&-\dfrac{v_E\sec L}{(R_N+h)^2}\\0&0&0\end{bmatrix}.\tag{25}$$
The first- and second-order terms of the noise sequence V c , k can be expressed as follows:
$$\begin{bmatrix}V_{c,k}\\V_{c,k-1}\end{bmatrix} = \begin{bmatrix}\alpha_1I&\alpha_2I\\ I&0\end{bmatrix}\begin{bmatrix}V_{c,k-1}\\V_{c,k-2}\end{bmatrix} + \begin{bmatrix}\xi_k\\0\end{bmatrix}.\tag{26}$$
V c , k 1 and V c , k 2 can be extended into the state parameters to form the second-order augmented state parameters and, so the measurement equation can be expressed as follows:
$$Z_k = H_kX_k + V_{w,k} + \alpha_1V_{c,k-1} + \alpha_2V_{c,k-2} + \xi_k = \begin{bmatrix}H_k&\alpha_1I&\alpha_2I\end{bmatrix}\begin{bmatrix}X_k\\V_{c,k-1}\\V_{c,k-2}\end{bmatrix} + (V_{w,k}+\xi_k).\tag{27}$$
The state equation can be expressed as follows:
$$\begin{bmatrix}X_k\\V_{c,k-1}\\V_{c,k-2}\end{bmatrix} = \begin{bmatrix}\Phi_{k/k-1}&0&0\\0&\alpha_1I&\alpha_2I\\0&I&0\end{bmatrix}\begin{bmatrix}X_{k-1}\\V_{c,k-2}\\V_{c,k-3}\end{bmatrix} + \begin{bmatrix}\Gamma_{k-1}&0&0\\0&I&0\\0&0&I\end{bmatrix}\begin{bmatrix}W_{k-1}\\\xi_{k-1}\\0\end{bmatrix}.\tag{28}$$
Suppose
$$\begin{cases}X_k^a = \begin{bmatrix}X_k\\V_{c,k-1}\\V_{c,k-2}\end{bmatrix},\quad \Phi_{k/k-1}^a = \begin{bmatrix}\Phi_{k/k-1}&0&0\\0&\alpha_1I&\alpha_2I\\0&I&0\end{bmatrix},\quad \Gamma_{k-1}^a = \begin{bmatrix}\Gamma_{k-1}&0&0\\0&I&0\\0&0&I\end{bmatrix}\\[4mm] W_{k-1}^a = \begin{bmatrix}W_{k-1}\\\xi_{k-1}\\0\end{bmatrix},\quad H_k^a = \begin{bmatrix}H_k&\alpha_1I&\alpha_2I\end{bmatrix},\quad V_k^a = V_{w,k}+\xi_k.\end{cases}\tag{29}$$
Then, the second-order state augmented model (after whitening) can be written as follows:
$$\begin{cases}X_k^a = \Phi_{k/k-1}^aX_{k-1}^a + \Gamma_{k-1}^aW_{k-1}^a\\ Z_k = H_k^aX_k^a + V_k^a.\end{cases}\tag{30}$$
The noise characteristics of W k a and V k a can be analyzed, which satisfy
$$\begin{cases}E[W_k^a] = 0,\quad E[V_k^a] = 0\\[1mm] E[W_k^a(W_j^a)^T] = E\!\left[\begin{bmatrix}W_k\\\xi_k\\0\end{bmatrix}\begin{bmatrix}W_k^T&\xi_k^T&0\end{bmatrix}\right]\delta_{kj} = \begin{bmatrix}Q_k&0&0\\0&R_{c,k}&0\\0&0&0\end{bmatrix}\delta_{kj}\\[3mm] E[V_k^a(V_j^a)^T] = E[V_{w,k}V_{w,j}^T+\xi_k\xi_j^T] = (R_{w,k}+R_{c,k})\delta_{kj}\\[1mm] E[W_k^a(V_j^a)^T] = E\!\left[\begin{bmatrix}W_k\\\xi_k\\0\end{bmatrix}(V_{w,j}^T+\xi_j^T)\right] = \begin{bmatrix}0\\R_{c,k}\\0\end{bmatrix}\delta_{kj}.\end{cases}\tag{31}$$
Although $V_k^a$ has been converted to white noise, the cost is that the system noise $W_k^a$ and the measurement noise $V_k^a$ become correlated ($E[W_k^a(V_j^a)^T]\neq0$). This is because the prior model of the colored noise is added to the state vector while the whitened measurement noise $V_k^a$ still contains the driving white noise $\xi_k$, which leads to correlation between $W_k^a$ and $V_k^a$.
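For illustration, the augmented matrices of Equations (27)–(29) can be assembled with NumPy block operations. This is our own sketch; `augment_model` is a hypothetical helper name, and the dimensions are generic.

```python
import numpy as np

def augment_model(Phi, Gamma, H, alpha1, alpha2):
    """Build the second-order state augmented matrices of Equations (27)-(29).
    Phi: n x n, Gamma: n x q, H: m x n; the colored measurement noise obeys
    V_c[k] = alpha1*V_c[k-1] + alpha2*V_c[k-2] + xi[k] (m-dimensional)."""
    n, q, m = Phi.shape[0], Gamma.shape[1], H.shape[0]
    I = np.eye(m)
    Phi_a = np.block([
        [Phi,              np.zeros((n, m)), np.zeros((n, m))],
        [np.zeros((m, n)), alpha1 * I,       alpha2 * I],
        [np.zeros((m, n)), I,                np.zeros((m, m))],
    ])
    Gamma_a = np.block([
        [Gamma,            np.zeros((n, m)), np.zeros((n, m))],
        [np.zeros((m, q)), I,                np.zeros((m, m))],
        [np.zeros((m, q)), np.zeros((m, m)), I],
    ])
    H_a = np.hstack([H, alpha1 * I, alpha2 * I])
    return Phi_a, Gamma_a, H_a
```

The augmented state simply appends $V_{c,k-1}$ and $V_{c,k-2}$ below the navigation error state, so the filter dimension grows by twice the measurement dimension.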

3.2.2. Decorrelation of System Noise and Measurement Noise in the Second-Order Augmented Model

Equation (31) can be abbreviated as follows:
$$\begin{cases}E[W_k^a(W_j^a)^T] = Q_k^a\delta_{kj}\\ E[V_k^a(V_j^a)^T] = R_k^a\delta_{kj}\\ E[W_k^a(V_j^a)^T] = S_k\delta_{kj}.\end{cases}\tag{32}$$
According to Equation (30), $0 = J_{k-1}(Z_{k-1} - H_{k-1}^aX_{k-1}^a - V_{k-1}^a)$, where $J_{k-1}$ is an arbitrary matrix. The state equation can thus be reorganized as
$$\begin{aligned}X_k^a &= \Phi_{k/k-1}^aX_{k-1}^a + \Gamma_{k-1}^aW_{k-1}^a + J_{k-1}(Z_{k-1} - H_{k-1}^aX_{k-1}^a - V_{k-1}^a)\\ &= (\Phi_{k/k-1}^a - J_{k-1}H_{k-1}^a)X_{k-1}^a + J_{k-1}Z_{k-1} + (\Gamma_{k-1}^aW_{k-1}^a - J_{k-1}V_{k-1}^a)\\ &= \Phi_{k/k-1}'X_{k-1}^a + J_{k-1}Z_{k-1} + W_{k-1}',\end{aligned}\tag{33}$$
where
$$\begin{cases}\Phi_{k/k-1}' = \Phi_{k/k-1}^a - J_{k-1}H_{k-1}^a\\ W_{k-1}' = \Gamma_{k-1}^aW_{k-1}^a - J_{k-1}V_{k-1}^a.\end{cases}\tag{34}$$
The covariance matrix of system noise and measurement noise is as follows:
$$E[W_k'(V_j^a)^T] = E[\Gamma_k^aW_k^a(V_j^a)^T - J_kV_k^a(V_j^a)^T] = (\Gamma_k^aS_k - J_kR_k^a)\delta_{kj}.\tag{35}$$
Obviously, if $J_k$ satisfies $\Gamma_k^aS_k - J_kR_k^a = 0$, then $W_k'$ and $V_k^a$ are no longer correlated; so, $J_k$ should satisfy
$$J_k = \Gamma_k^aS_k(R_k^a)^{-1}.\tag{36}$$
The variance matrix of $W_k'$ is
$$\begin{aligned}E[W_k'(W_j')^T] &= \left(\Gamma_k^aQ_k^a(\Gamma_k^a)^T + J_kR_k^aJ_k^T - \Gamma_k^aS_kJ_k^T - J_kS_k^T(\Gamma_k^a)^T\right)\delta_{kj}\\ &= \left(\Gamma_k^aQ_k^a(\Gamma_k^a)^T - J_kS_k^T(\Gamma_k^a)^T\right)\delta_{kj}\\ &= \left(\Gamma_k^aQ_k^a(\Gamma_k^a)^T - J_kR_k^aJ_k^T\right)\delta_{kj}.\end{aligned}\tag{37}$$
If $Q_k' = \Gamma_k^aQ_k^a(\Gamma_k^a)^T - J_kR_k^aJ_k^T$, the second-order state augmented model (after decorrelation) can be expressed as follows:
$$\begin{cases}X_k^a = \Phi_{k/k-1}'X_{k-1}^a + J_{k-1}Z_{k-1} + W_{k-1}'\\ Z_k = H_k^aX_k^a + V_k^a,\end{cases}\tag{38}$$
where
$$\begin{cases}E[W_k'] = 0,\quad E[V_k^a] = 0,\quad E[W_k'(V_j^a)^T] = 0\\ E[W_k'(W_j')^T] = Q_k'\delta_{kj},\quad E[V_k^a(V_j^a)^T] = R_k^a\delta_{kj}.\end{cases}\tag{39}$$
Suppose the estimator of the H-infinity filter, after state augmentation, is
$$y_k = T_kX_k = \begin{bmatrix}T_k&0&0\end{bmatrix}\begin{bmatrix}X_k\\V_{c,k-1}\\V_{c,k-2}\end{bmatrix} = T_k^aX_k^a.\tag{40}$$
A second-order state augmented H-infinity filter can be obtained by substituting Equation (38) and Equation (40) into the standard H-infinity formula, thus satisfying
$$\begin{cases}\tilde{\Omega}_k^a = (T_k^a)^T\Omega_kT_k^a\\[1mm] \hat{X}_k^a = \Phi_{k/k-1}'\hat{X}_{k-1}^a + J_{k-1}Z_{k-1} + \Phi_{k/k-1}'K_{k-1}(Z_{k-1} - H_{k-1}^a\hat{X}_{k-1}^a)\\[1mm] K_{k-1} = P_{k-1}^a\left[I - \theta\tilde{\Omega}_k^aP_{k-1}^a + (H_{k-1}^a)^T(R_{k-1}^a)^{-1}H_{k-1}^aP_{k-1}^a\right]^{-1}(H_{k-1}^a)^T(R_{k-1}^a)^{-1}\\[1mm] P_k^a = \Phi_{k/k-1}'P_{k-1}^a\left[I - \theta\tilde{\Omega}_k^aP_{k-1}^a + (H_{k-1}^a)^T(R_{k-1}^a)^{-1}H_{k-1}^aP_{k-1}^a\right]^{-1}(\Phi_{k/k-1}')^T + Q_{k-1}'.\end{cases}\tag{41}$$
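A single recursion of Equation (41) can be sketched as follows. This is our own minimal implementation, not the authors' code; argument names such as `Phi_p` and `Q_p` stand for the decorrelated Φ′ and Q′, and no INS-specific structure is assumed.

```python
import numpy as np

def sosa_hinf_step(x_a, P_a, z_prev, Phi_p, J_prev, H_a, R_a, Q_p, T_a, Omega, theta):
    """One recursion of the second-order state augmented H-infinity filter
    (Equation (41)). Phi_p and Q_p are the decorrelated Phi' and Q'."""
    Omega_t = T_a.T @ Omega @ T_a
    n = P_a.shape[0]
    Rinv = np.linalg.inv(R_a)
    # Common inverse shared by the gain and covariance updates.
    M = np.linalg.inv(np.eye(n) - theta * Omega_t @ P_a + H_a.T @ Rinv @ H_a @ P_a)
    K = P_a @ M @ H_a.T @ Rinv
    x_new = Phi_p @ x_a + J_prev @ z_prev + Phi_p @ K @ (z_prev - H_a @ x_a)
    P_new = Phi_p @ P_a @ M @ Phi_p.T + Q_p
    return x_new, P_new
```

Setting θ = 0 removes the performance-bound term, and the recursion collapses to a Kalman-like measurement update, which is a useful sanity check when tuning θ.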

4. Results and Discussion

4.1. Simulation Verification of the INS/CNS Deeply Integrated Model

4.1.1. Algorithm Robustness Verification

The optimization object of the deeply integrated model is the global gray error of all navigation stars, where the optimization result is the optimal solution (in the least-squares sense). Theoretically, the mismatching of individual stars does not affect the final result; this was verified by simulation.
The Smithsonian Astrophysical Observatory (SAO) catalog was used for simulation validation, where precession and nutation were compensated for by the IAU1980 model, aberration only considered first-order corrections, and polar shift correction was provided by the International Earth Rotation Service (IERS). As shown in Figure 5, the simulation was validated under three working conditions: Low, medium, and high uncertainty.
The attitude calculation results obtained under the three conditions are shown in Figure 6.
From Figure 6, it can be seen that, even under high uncertainty (where the INS only correctly predicted 30% of the navigation stars), the deeply integrated algorithm still converged, and the accuracies under the three conditions were only slightly different.
To explain this phenomenon, suppose $e_a(\hat{\phi}_s)$ ($1\le a\le N_s$) represents the gray error functions of correctly matched pixels and $e_b(\hat{\phi}_s)$ ($N_s+1\le b\le N$) represents the gray error functions of incorrectly matched pixels. After a star point $p_1$ in $g_1$ is transformed into $g_1'$ by $\exp(\hat{\phi}_s\times)$, there will be no star point at the corresponding position $p_2$ in $g_2$ if the star points between $g_1$ and $g_2$ do not match. $e_b(\hat{\phi}_s)$ will then be equal to $I_1(p_1)$, which can be recorded as $C_b$. Obviously, $C_b$ is a constant that does not change with $\hat{\phi}_s$. The optimization objective is equivalent to
$$\min_{\hat{\phi}_s}J(\hat{\phi}_s) = \|e(\hat{\phi}_s)\|^2 = \sum_{a=1}^{N_s}e_a^2(\hat{\phi}_s) + \sum_{b=N_s+1}^{N}e_b^2(\hat{\phi}_s) = \sum_{a=1}^{N_s}e_a^2(\hat{\phi}_s) + \sum_{b=N_s+1}^{N}C_b^2 \;\Leftrightarrow\; \min_{\hat{\phi}_s}\sum_{a=1}^{N_s}e_a^2(\hat{\phi}_s).\tag{42}$$
Equation (42) shows that the constant term does not affect the final optimization result: optimizing over all stars is equivalent to optimizing over only the correctly matched stars, which means that unmatched stars do not affect the optimization result.
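This invariance is easy to confirm with a toy one-dimensional cost (our own illustration, not the paper's simulation):

```python
import numpy as np

# Toy check of Equation (42): adding a constant term (sum of C_b^2 from
# mismatched stars) shifts the cost but not the minimizer.
phi_grid = np.linspace(-1.0, 1.0, 2001)
matched = (phi_grid - 0.3) ** 2   # stand-in for the matched-star cost sum
mismatched = 7.5                  # stand-in for the constant sum of C_b^2
i1 = np.argmin(matched)
i2 = np.argmin(matched + mismatched)
assert i1 == i2  # same arg-min with or without the constant term
```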

4.1.2. Comparison of Different Integrated Models

Star identification is not necessary in the deeply integrated mode, and the navigation system can still work when the number of navigation stars is less than three. Therefore, the simulation should cover three conditions: The number of navigation stars is more than three, equal to three, and less than three. The star map of the star sensor in the simulation is shown in Figure 7.
The damped Newton method was used to optimize the misalignment angle iteratively. Taking the time t = 10 s as an example, the iterative optimization results are shown in Figure 8.
As shown in Figure 8, the attitude could reach convergence after only 3–4 iterations. The gray image errors of the first and fourth iterations are compared in Figure 9.
As shown in Figure 9, $\hat{\phi}_s$ did not converge in the first iteration, and there was an obvious peak and trough in the gray error image; in contrast, $\hat{\phi}_s$ converged in the fourth iteration, and the position deviation was close to zero. However, many small peaks and troughs remain because of image noise; the residual fluctuation in the image is caused only by image noise.
The accuracies and calculation times of loosely, tightly, and deeply integrated modes are presented in Figure 10.
The simulation results show that the attitude accuracies of the loosely, tightly, and deeply integrated navigation modes were of the same magnitude when the number of navigation stars was sufficient: the non-optical axis attitude accuracy was 10” (3σ), and the optical axis attitude accuracy was 50” (3σ). The loosely and tightly integrated navigation modes could not identify the navigation stars when the number of navigation stars was insufficient, such that the accuracy became divergent. In contrast, the deeply integrated navigation mode could still be used for the navigation solution, where the non-optical axis attitude accuracy was about 50” (3σ) and the optical axis attitude accuracy was about 100” (3σ).
Each star occupies about 9 pixels in the star image, and the deeply integrated mode needs 4 iterations; on average, each star therefore needs 36 iterations. The tightly and loosely integrated modes require star identification: there are 196 navigation stars in a sub-catalogue on average, the star-pair search space is 196 × 196, and the average number of searches per star is 196 × 196/2 = 19,208. Obviously, the computation time of star identification is much larger than the time needed to update the measurements in the deeply integrated navigation filter. The simulation results in Figure 10b show that the computational cost of the deeply integrated navigation mode was 50% lower than that of the tightly integrated navigation mode and 60% lower than that of the loosely integrated navigation mode.

4.1.3. Comparative Simulation of Single- and Double-Star Sensor Configurations

The attitude accuracy of the star sensor along the optical axis is lower than that of the other two axes, so celestial navigation systems of near space vehicles should be configured with double star sensors. The star sensor configuration schemes are shown in Figure 11.
An accuracy comparison of the single- and double-star sensor configurations is shown in Figure 12.
As shown in Figure 12, the attitude accuracy of the yaw under the single-star sensor configuration was 50” (3σ), which was much lower than that of the other two axes. The attitude accuracy of all three axes was kept within 10” (3σ) under the double-star sensor configuration, which effectively reduced the yaw angle error. For the near space vehicle, the accuracy of the yaw angle has a great influence on the position and, so, it is recommended that the double-star sensor configuration be adopted in celestial navigation systems.

4.2. Simulation Verification of the Second-Order State Augmented H-Infinity Filter

4.2.1. Comparison of the Second-Order State Augmented H-Infinity Filter and Standard H-Infinity Filter

The vehicle attitude profile, the star sensor and gyro specifications, and the initial attitude estimation error are shown in Figure 13 and Table 1.
The second-order state augmented H-infinity filter and the standard H-infinity filter were compared in a near space environment. As can be seen from Figure 13, the angular velocity of the vehicle is largest between 500 s and 600 s, so this 100 s segment of data was used for the simulation shown in Figure 14.
At the beginning of filtering, the SOSA filter exhibits a large error at around 3 s. This is because the initial values of V_c,k−1 and V_c,k−2 are unknown and must be set to zero. Since V_c,k = α_1 V_c,k−1 + α_2 V_c,k−2 + ξ_k, if V_c,k−1 = 0_3×1 and V_c,k−2 = 0_3×1, then V_c,k will also be close to zero. At this stage, the state-augmented model cannot estimate the colored noise effectively. Moreover, because the estimated value of V_c,k is close to zero, the filter assumes the colored noise to be very small (although it is actually not) and allocates the error caused by V_c,k to the other navigation parameters, which produces the large error at around 3 s. As filtering proceeds, however, the colored noise V_c,k is gradually estimated, the state-augmentation model begins to take effect, and the navigation accuracy improves. The filtering results should therefore only be used at least 10 s after the start of filtering. The simulation also shows that the standard H-infinity filter did not filter out the colored noise, whereas the second-order state augmented H-infinity filter proposed in this paper effectively improved the filtering accuracy under colored noise. It is thus better suited to the near space environment than the standard H-infinity filter.
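The second-order colored-noise model V_c,k = α_1 V_c,k−1 + α_2 V_c,k−2 + ξ_k can be appended to the filter state so that the driving noise ξ_k seen by the filter is white. A sketch of how such an augmentation can be built (the coefficients, state dimension, and matrix layout are illustrative, not the paper's exact values):

```python
import numpy as np

# Second-order state augmentation for AR(2) colored measurement noise:
#   V_k = a1*V_{k-1} + a2*V_{k-2} + xi_k,  with xi_k white.
# Appending [V_k, V_{k-1}] to an n-state filter makes the colored component
# part of the state, driven only by the white noise xi_k.

def augment(F, H, a1, a2, m):
    """Build augmented transition/measurement matrices.
    F: (n,n) state transition, H: (m,n) measurement matrix,
    a1, a2: AR(2) coefficients, m: measurement dimension."""
    n = F.shape[0]
    Fa = np.zeros((n + 2 * m, n + 2 * m))
    Fa[:n, :n] = F
    Fa[n:n + m, n:n + m] = a1 * np.eye(m)   # V_k depends on V_{k-1}
    Fa[n:n + m, n + m:] = a2 * np.eye(m)    # ... and on V_{k-2}
    Fa[n + m:, n:n + m] = np.eye(m)         # delay line: V_{k-1} <- V_k
    # Measurement z = H x + V_k + white noise: V_k enters the model directly.
    Ha = np.hstack([H, np.eye(m), np.zeros((m, m))])
    return Fa, Ha

# Tiny example: 2 attitude states, 1 measurement, illustrative coefficients.
F = np.eye(2)
H = np.array([[1.0, 0.0]])
Fa, Ha = augment(F, H, a1=1.6, a2=-0.8, m=1)
print(Fa.shape, Ha.shape)  # (4, 4) (1, 4)
```

The zero initialization of the two augmented sub-vectors in a setup like this is exactly what causes the transient described above: until a few measurement updates have passed, the estimated V_c,k stays near zero and the colored noise is misattributed to the core states.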

4.2.2. The Influence of Colored Noise Model Error on the Filtering Effect

The core of the second-order state augmented H-infinity filter is the colored noise model, whose key parameter is the digital angular frequency, ω , of the aero-optical effects; its influence is further illustrated in Figure 15. The attitude root mean square error of the second-order state augmented H-infinity filter was about 3” when there was no parameter error. The attitude accuracy changed little when the parameter error was 10%, indicating that a 10% parameter error does not have a significant impact on the filtering accuracy. When the parameter error increased to 30%, the attitude root mean square error rose to about 4”. Even when the parameter error increased to 50%, the attitude root mean square error did not exceed 5”; thus, the parameter error of the digital angular frequency ω does not significantly affect the second-order state augmented H-infinity filter.

5. Conclusions

In this paper, an INS/CNS deeply integrated navigation method was presented for near space vehicles. This method does not require star identification and can significantly reduce the computational cost. Meanwhile, the proposed second-order state augmented H-infinity filter can weaken the influence of aero-optical effects on the measurement noise and effectively improve the filtering accuracy in near space environments. The simulation results show that the attitude accuracy of the INS/CNS deeply integrated navigation method is kept within 10” (3 σ ), while the computational cost is reduced by 50%. The INS/CNS deeply integrated navigation method can therefore improve the navigation accuracy of near space vehicles while reducing the computational cost of the associated navigation systems, providing a theoretical reference for the future design of near space vehicle navigation systems.

Author Contributions

Data curation, R.M.; Methodology, H.S.; Project administration, N.C.; Software, R.M.; Validation, Y.L.; Writing—original draft, H.S.; Writing—review & editing, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Construction process of the gray error function.
Figure 2. Fundamental principles of the deeply integrated model.
Figure 3. Optimization direction of gray error in a star window.
Figure 4. Sketch map of the pixel offset model.
Figure 5. Initial star map for each working condition: (a) Star map observation by the star sensor; (b) prediction star map obtained under low uncertainty conditions; (c) prediction star map obtained under medium uncertainty conditions; and (d) prediction star map obtained under high uncertainty conditions (all star maps had enhanced contrast for ease of viewing).
Figure 6. Attitude error curve for each working condition: (a) Attitude error curve under low uncertainty conditions; (b) attitude error curve under medium uncertainty conditions; and (c) attitude error curve under high uncertainty conditions.
Figure 7. Simulation design.
Figure 8. Iterative optimization results of the deeply integrated mode.
Figure 9. Iterative results of image gray error in a star window: (a) Image gray error of the first iteration; and (b) image gray error of the fourth iteration.
Figure 10. Comparison of different integrated models: (a) Accuracy comparison of loosely, tightly, and deeply integrated navigation modes (the optical axis attitude is the Yaw and the non-optical axis attitude is Pitch and Roll); and (b) computational cost comparison of loosely, tightly, and deeply integrated navigation modes.
Figure 11. Configuration schemes of star sensors.
Figure 12. Simulation results of INS/CNS deeply integrated navigation: (a) Single-star sensor configuration; and (b) double-star sensor configuration.
Figure 13. The trajectory of near space vehicles: (a) Nominal trajectory; (b) the vehicle attitude profile; and (c) the vehicle position profile.
Figure 14. Comparison of the filtering accuracy: (a) Measurement error of the star sensor; and (b) accuracy comparison of the second-order state augmented H-infinity filter and standard H-infinity filter.
Figure 15. Statistical results of filtering accuracy under various working conditions.
Table 1. The star sensor and gyro specifications.

Parameter | Value
Field of view of star sensor | 10°
Resolution of star sensor | 256 × 256
Measurement noise of star sensor | 20” (3 σ )
Gyro bias | 0.1 °/h
Gyro noise | 0.5 °/h (3 σ )
Initial attitude estimation error | 20” (3 σ ) 1

1 After the initial star identification.

Share and Cite

Mu, R.; Sun, H.; Li, Y.; Cui, N. INS/CNS Deeply Integrated Navigation Method of Near Space Vehicles. Sensors 2020, 20, 5885. https://doi.org/10.3390/s20205885