INS/CNS Deeply Integrated Navigation Method of Near Space Vehicles

Celestial navigation is required to improve the long-term accuracy preservation capability of near space vehicles. However, it takes a long time for traditional celestial navigation methods to identify the star map, which limits the improvement of the dynamic response ability. Meanwhile, the aero-optical effects caused by the near space environment can lead to the colorization of measurement noise, which affects the accuracy of the integrated navigation filter. In this paper, an INS/CNS deeply integrated navigation method, which includes a deeply integrated model and a second-order state augmented H-infinity filter, is proposed to solve these problems. The INS/CNS deeply integrated navigation model optimizes the attitude based on the gray image error function, which can estimate the attitude without star identification. The second-order state augmented H-infinity filter uses the state augmentation algorithm to whiten the measurement noise caused by the aero-optical effect, which can effectively improve the estimation accuracy of the H-infinity filter in the near space environment. Simulation results show that the proposed INS/CNS deeply integrated navigation method can reduce the computational cost by 50%, while the attitude accuracy is kept within 10” (3 σ). The attitude root mean square of the second-order state augmented H-infinity filter does not exceed 5”, even when the parameter error increases to 50%, in the near space environment. Therefore, the INS/CNS deeply integrated navigation method can effectively improve the rapid response ability of the navigation system and the filtering accuracy in the near space environment, providing a reference for the future design of near space vehicle navigation systems.


Introduction
Near space refers to the airspace 20 to 100 km above the Earth's surface [1]. Near space vehicles can cruise at high speed in both the atmosphere and space. Compared with traditional aircraft, near space vehicles offer advantages in launch cost and re-usability, among others, and have therefore been widely applied in space transportation, remote penetration, and so on [2,3]. Consequently, near space vehicles have drawn a great amount of attention, and many countries, such as the United States, Russia, China, France, and Germany, have conducted research on vehicles such as the X-43A, X-51A, X-37B, SHEFEX-1, SHEFEX-2, Avangard, and so on [4,5].
Near space vehicles fly so fast that attitude errors can greatly affect their position accuracy [6,7]. As the most accurate navigation method [8][9][10], celestial navigation is helpful for improving the long-term accuracy preservation capability of near space vehicles [11]. Celestial navigation calculates the attitude of an aircraft by measuring stars, which are firmly fixed in inertial space, such that the navigation error does not accumulate with time. Star sensors are widely used in modern celestial navigation systems; they capture images of stars and calculate the attitude according to the star point locations [12]. However, star sensors are vulnerable to environmental impacts.
This paper is organized as follows: In Section 2, the INS/CNS deeply integrated navigation model is proposed, to improve the computational efficiency of the star sensor. The second-order state augmented H-infinity filter is proposed in Section 3, to improve the navigation accuracy under colored noise conditions in the near space environment. Simulations are presented in Section 4, to demonstrate the performance of the proposed model. Finally, our conclusions are drawn in Section 5.

INS/CNS Deeply Integrated Model
The main idea of INS/CNS integrated navigation is to use the error between the measurement information of the star sensor and the prediction information of the INS to estimate the misalignment angle. The difference between integration modes lies in the navigation information processing level; for example, a loosely integrated model deals with navigation information at the attitude angle level [28], while a tightly integrated model deals with navigation information at the star vector level [29]. Furthermore, the deeply integrated model deals with navigation information at the gray image level. The deeper the navigation information level, the fewer processing steps are required of the star sensor. Therefore, in the tightly integrated mode, the star sensor does not need to calculate the attitude. In the deeply integrated mode, star identification is only used to provide initial values at the beginning; subsequently, the star sensor does not need to carry out star identification.

Gray Image Error Function
The gray image error function is the gray error between the measurement star image of the star sensor and the star image predicted by the inertial navigation system.
As shown in Figure 1, g_1 is the measurement star image of the star sensor; g_2 is the prediction star image of inertial navigation; O_s x_s y_s z_s is the star sensor co-ordinate system; O_{s'} x_{s'} y_{s'} z_{s'} is the virtual star sensor co-ordinate system based on the INS prediction; C_{s'}^s is the transformation matrix between the star sensor and virtual star sensor co-ordinate systems; and φ_s is the misalignment angle of C_{s'}^s, which satisfies C_{s'}^s = exp(φ_s ×). Suppose p_1 is a pixel in g_1 and p_s is the projection vector of p_1 in the star sensor co-ordinate system, which satisfy [30]:

$$Z \begin{bmatrix} u_x \\ u_y \\ 1 \end{bmatrix} = A\, p_s, \quad A = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}, \quad \bar{p}_s = \frac{p_s}{\lVert p_s \rVert}$$

where u_x and u_y represent the horizontal and vertical pixel co-ordinates, respectively; A is the internal parameter matrix of the star sensor; the parameters f_x, f_y, c_x, and c_y are fixed after delivery; X, Y, and Z are the co-ordinates of p_s; and p̄_s is the normalized vector of p_s.
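As a concrete illustration of the pinhole projection relation above, the sketch below back-projects a pixel through an internal parameter matrix to recover the normalized direction vector p̄_s. The intrinsic values (f_x = f_y = 1200, c_x = c_y = 512) are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical star sensor intrinsics (f_x, f_y, c_x, c_y are fixed after delivery).
A = np.array([[1200.0,    0.0, 512.0],
              [   0.0, 1200.0, 512.0],
              [   0.0,    0.0,   1.0]])

def back_project(u_x, u_y):
    """Invert the pinhole model: map pixel (u_x, u_y) to the normalized
    projection vector p_bar_s in the star sensor co-ordinate system."""
    p_s = np.linalg.inv(A) @ np.array([u_x, u_y, 1.0])  # direction, up to scale
    return p_s / np.linalg.norm(p_s)

# A pixel at the principal point maps to the boresight direction [0, 0, 1].
print(back_project(512.0, 512.0))
```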
Due to the INS misalignment angle, g_2 is based on the virtual star sensor co-ordinate system. p_s and p_{s'} are projection vectors of the same point in the star sensor and virtual star sensor co-ordinate systems, and p_2 represents the pixel co-ordinates of p_{s'} in g_2, which satisfy:

$$p_2 = \pi\!\left(A \exp(\hat{\varphi}_s \times)\, \bar{p}_s\right), \quad \pi\!\left([x, y, z]^T\right) = [x/z,\; y/z]^T$$

I_1(p_1) is the gray value of point p_1 in g_1, and I_2(p_2) is the gray value of point p_2 in g_2; p_1 and p_2 can be regarded as projections of the same vector into different images. φ̂_s is the estimate of φ_s, the subscript i represents the pixel index in the image, and e_i(φ̂_s) is the gray error function of a single point, which is calculated as follows:

$$e_i(\hat{\varphi}_s) = I_1(p_{1,i}) - I_2(p_{2,i}) \tag{3}$$

According to the gray scale invariant, image transformation does not change the gray value of the same star point [31]. When φ̂_s is close to the true value φ_s, e_i(φ̂_s) will be close to zero. Considering all pixels in the image, the star sensor attitude estimation problem can be transformed into a non-linear optimization problem, as follows:

$$J(\hat{\varphi}_s) = \sum_{i=1}^{N} e_i(\hat{\varphi}_s)^2 \tag{4}$$

where N is the number of pixels to be optimized in the image and e(φ̂_s) is the gray image error function. The optimal estimate of φ_s is obtained when J(φ̂_s) reaches its minimum.
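A minimal numeric sketch of the per-pixel gray error and its summed least-squares objective follows; the toy image contents and pixel pairings are invented purely for illustration.

```python
import numpy as np

def gray_errors(I1, I2, pairs):
    """Per-pixel gray error e_i = I1(p_1,i) - I2(p_2,i) for matched pixel
    pairs, and the least-squares objective J = sum(e_i^2)."""
    e = np.array([I1[p1] - I2[p2] for p1, p2 in pairs], dtype=float)
    return e, float(np.sum(e ** 2))

# Toy 3x3 images: one "star" of gray value 10, shifted by one column in I2.
I1 = np.zeros((3, 3)); I1[1, 1] = 10.0
I2 = np.zeros((3, 3)); I2[1, 2] = 10.0

# A wrong pairing leaves a large residual; the correct pairing drives J to zero.
_, J_wrong = gray_errors(I1, I2, [((1, 1), (1, 1))])
_, J_right = gray_errors(I1, I2, [((1, 1), (1, 2))])
print(J_wrong, J_right)
```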

Attitude Optimization Algorithm Based on the Damped Newton Method
According to Equation (4), the principle of the deeply integrated model is to minimize the mean square error between two images by adjusting φ̂_s. To explain the principle of the deeply integrated model intuitively, suppose that there is only one star in the image. The physical meanings of g_1, g_2, and the gray image error function are shown in Figure 2.

At the beginning, star identification is needed to correct the INS, to make sure that φ̂_s is small enough. Then, the star in g_1 and the star in g_2 will appear in the same star window, as shown in Figure 2. There will be one obvious peak A and one obvious trough B in the upper ∆g; adjusting φ̂_s in the direction of vector BA brings A and B closer with each iteration, until they finally coincide, as in the lower ∆g.
According to the gray scale invariant, Equation (3) can be further written as:

$$e_i(\hat{\varphi}_s) = \Delta I(p_2)$$

where ∆I(p_2) is the gray value of point p_2 in ∆g. The physical meaning of e_i(φ̂_s) can thus be understood as the value at pixel co-ordinate p_2 in ∆g. The purpose of global optimization is to minimize the mean square error of all points in ∆g.
As shown in Figure 3, when φ̂_s deviates from φ_s, there will be an obvious peak and an obvious valley in ∆g. The purpose of the deeply integrated model is to adjust the rotation relationship φ̂_s between g_1 and g_2 until the peak and valley overlap, such that the gray error becomes close to zero. Obviously, the fastest adjustment direction is the red line in Figure 3, which is actually the gradient direction of e_i(φ̂_s). The gradient can be described as follows:

$$\frac{\partial e_i(\hat{\varphi}_s)}{\partial \hat{\varphi}_s} = -\frac{\partial I_2(p_2)}{\partial \hat{\varphi}_s} \tag{7}$$

It is difficult to calculate ∂I_2/∂φ̂_s directly, so an intermediate variable is introduced to decompose Equation (7). Suppose ρ = exp(φ̂_s ×) p̄_s and ϑ = Aρ; then, Equation (7) can be decomposed into:

$$\frac{\partial I_2}{\partial \hat{\varphi}_s} = \frac{\partial I_2}{\partial \vartheta}\, \frac{\partial \vartheta}{\partial \rho}\, \frac{\partial \rho}{\partial \hat{\varphi}_s} \tag{8}$$

Next, we calculate the three partial derivatives. ∂I_2/∂ϑ is the pixel gradient of g_2 at ϑ.
If one supposes that the pixel co-ordinate is ϑ = [u_a, u_b]^T, then the pixel gradient is:

$$\frac{\partial I_2}{\partial \vartheta} = \begin{bmatrix} \dfrac{\partial I_2}{\partial u_a} & \dfrac{\partial I_2}{\partial u_b} \end{bmatrix} \tag{9}$$

Supposing that ρ = [X_ρ, Y_ρ, Z_ρ]^T, ∂ϑ/∂ρ can be calculated as follows:

$$\frac{\partial \vartheta}{\partial \rho} = \begin{bmatrix} \dfrac{f_x}{Z_\rho} & 0 & -\dfrac{f_x X_\rho}{Z_\rho^2} \\ 0 & \dfrac{f_y}{Z_\rho} & -\dfrac{f_y Y_\rho}{Z_\rho^2} \end{bmatrix} \tag{10}$$

It is difficult to calculate ∂ρ/∂φ̂_s directly, so the Baker-Campbell-Hausdorff formula is used to approximate it, which satisfies [32]:

$$\frac{\partial \rho}{\partial \hat{\varphi}_s} \approx -\left(\exp(\hat{\varphi}_s \times)\, \bar{p}_s\right) \times = -\rho \times \tag{11}$$

The Jacobian matrix, J_i, can be obtained by substituting Equations (9)-(11) into Equation (8), which satisfies:

$$J_i = \frac{\partial I_2}{\partial \vartheta}\, \frac{\partial \vartheta}{\partial \rho}\, (\rho \times)$$

When there are N pixels to be optimized, each J_i is stacked into a global Jacobian matrix J = [J_1^T, J_2^T, …, J_N^T]^T. The damped Newton method is used to update φ̂_s, and ∆φ̂_s is calculated as follows:

$$\Delta \hat{\varphi}_s = -\left(J^T J + \eta I\right)^{-1} J^T e(\hat{\varphi}_s)$$

where η is the damping coefficient, which can avoid singularities and make the iterative process more stable. Then, φ̂_s = φ̂_s + ∆φ̂_s is used to modify φ̂_s until it converges. It should be noted that the method does not always converge: when the initial value of φ̂_s is larger than 2′, the method diverges in simulation. Therefore, it is suggested that the initial value of φ̂_s be set within 2′ to ensure the convergence of the algorithm. Generally, the method only needs 3-4 iterations to achieve convergence.
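The damped Newton update can be sketched on a toy least-squares residual. The problem data below are invented, and the update formula ∆φ = −(JᵀJ + ηI)⁻¹Jᵀe is this sketch's reading of the damping scheme, not the paper's exact equation.

```python
import numpy as np

def damped_newton_step(J, e, eta):
    """Damped (Gauss-)Newton step: solve (J^T J + eta*I) d = -J^T e.
    The damping coefficient eta keeps J^T J invertible and the iteration stable."""
    H = J.T @ J + eta * np.eye(J.shape[1])
    return np.linalg.solve(H, -J.T @ e)

# Toy linear residual e(phi) = J @ phi - b, so the Jacobian is constant.
J = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
b = np.array([1.0, 2.0, 3.0, 0.6])

phi = np.zeros(3)
for _ in range(10):                 # a few iterations suffice in practice
    e = J @ phi - b
    phi = phi + damped_newton_step(J, e, eta=1e-3)

print(phi)
```

With small damping the iteration converges to the ordinary least-squares solution in only a few steps, mirroring the 3-4 iterations reported above.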
Following this, C_{n'}^n can be calculated. p_s and p_{s'} are the true and predicted values of the same vector, which satisfy p_s = C_{s'}^s p_{s'}, from which C_{n'}^n is calculated. C_{n'}^n is used to modify C_b^{n'} to the CNS attitude result C_b^n, where q_cns is the quaternion of C_b^n. As the CNS attitude, q_cns, has been obtained by the deeply integrated model, the next section describes the filtering algorithm of the INS/CNS integrated navigation method.

Second-Order State Augmented H-Infinity Filter
There is a strong interaction between the aircraft and the surrounding airflow when the near space vehicle re-enters the atmosphere. This effect is called the aero-optical effect, which causes colorization of the attitude noise [23]. This section solves this problem by using a filtering approach.

Star Sensor Pixel Offset Model in the Near Space Environment
When light travels through a rapidly varying flow field, the imaging position on the focal plane may be biased. The image distortion due to aero-optical effects can be described approximately as follows:

$$I_1(i_1, j_1) = I_0(i_0 + \Delta i,\; j_0 + \Delta j)$$

where I_0(i_0 + ∆i, j_0 + ∆j) is the gray value of the reference image without aero-optical effects, I_1(i_1, j_1) is the gray value of the distorted image affected by aero-optical effects, (i_0, j_0) is the reference image co-ordinate, (i_1, j_1) is the distorted image co-ordinate, and ∆i and ∆j are the pixel offsets in the X-axis and Y-axis, respectively, caused by aero-optical effects. The pixel offset effect is shown in Figure 4, where the hollow point represents the original pixel position and the solid point represents the pixel position after offset.
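The distortion relation can be illustrated with a toy warp; the integer offsets and the single bright "star" pixel below are invented for illustration only.

```python
import numpy as np

def distort(I0, di, dj):
    """Sample the reference image at the offset location: the gray value seen
    at (i, j) in the distorted image is I0(i + di, j + dj) (toy integer version)."""
    rows, cols = I0.shape
    I1 = np.zeros_like(I0)
    for i in range(rows):
        for j in range(cols):
            si, sj = i + di, j + dj
            if 0 <= si < rows and 0 <= sj < cols:
                I1[i, j] = I0[si, sj]
    return I1

# A star imaged at (2, 2) without aero-optical effects...
I0 = np.zeros((5, 5)); I0[2, 2] = 1.0
# ...appears shifted by one row in the distorted image.
I1 = distort(I0, di=1, dj=0)
print(np.argwhere(I1 == 1.0))
```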
n n C is calculated as follows: where cns q is the quaternion of b n C . As the CNS attitude, cns q , has been obtained by the deeply integrated model, the next section will describe the filtering algorithm of the INS/CNS integrated navigation method.

Second-Order State Augmented H-Infinity Filter
There is a strong interaction between the aircraft and the surrounding airflow when the near space vehicle re-enters the atmosphere. This effect is called the aero-optical effect, which causes colorization of the attitude noise [23]. This section solves this problem by using a filtering approach.

Star Sensor Pixel Offset Model in the Near Space Environment
It has been pointed out in [24] that the pixel offset caused by aero-optical effects can be modeled approximately as a sinusoid. However, sinusoidal mathematical models are difficult to apply directly in the navigation systems of near space vehicles, because it is difficult to obtain the real-time phases θ_1 and θ_2 in the actual flight environment. Therefore, in this paper, a recursive model is used to describe the pixel offset caused by aero-optical effects. The discrete sine sequence can be written as follows:

$$x(t_k) = 2\cos(\omega)\, x(t_{k-1}) - x(t_{k-2})$$

where ω is the digital angular frequency. Therefore, the discrete recursive model of pixel offset can be obtained as follows:

$$\begin{aligned} \Delta i(t_k) &= 2\cos(\omega)\, \Delta i(t_{k-1}) - \Delta i(t_{k-2}) + \omega_i(t_k) \\ \Delta j(t_k) &= 2\cos(\omega)\, \Delta j(t_{k-1}) - \Delta j(t_{k-2}) + \omega_j(t_k) \end{aligned}$$

where ω = 2πf/f_s, f_s is the sampling frequency, and ω_i(t_k) and ω_j(t_k) are zero-mean white noise sequences. This modeling method only needs knowledge of the frequency, which improves the feasibility of engineering applications.
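The recursion can be checked numerically: in the noise-free case it reproduces a sampled sinusoid exactly. The frequency values below are assumptions chosen for illustration, not the paper's flow-field parameters.

```python
import numpy as np

f, fs = 50.0, 1000.0              # assumed offset frequency and sampling rate, Hz
omega = 2 * np.pi * f / fs        # digital angular frequency

# Noise-free recursion d_k = 2*cos(omega)*d_{k-1} - d_{k-2},
# seeded with the first two samples of sin(omega * k).
d = np.zeros(200)
d[0], d[1] = 0.0, np.sin(omega)
for k in range(2, d.size):
    d[k] = 2 * np.cos(omega) * d[k - 1] - d[k - 2]

# The recursion reproduces sin(omega * k) exactly (up to round-off).
print(np.max(np.abs(d - np.sin(omega * np.arange(d.size)))))
```

Adding the zero-mean driving terms ω_i(t_k), ω_j(t_k) turns this deterministic oscillation into the stochastic pixel-offset model used by the filter below.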

Second-Order State Augmented H-Infinity Filtering Model
The H-infinity filter can obtain the optimal estimation of state variables under the condition of noise with unknown statistics [33,34]. According to the characteristics of the near space environment, the H-infinity filter can be improved based on the prior information of V_k. Specific improvements are presented in the following sections.

Measurement Noise Whitening
Suppose the filter model is:

$$\begin{aligned} X_k &= \Phi_{k,k-1} X_{k-1} + W_{k-1} \\ Z_k &= H_k X_k + V_k \end{aligned}$$

where V_k is the observation noise of the star sensor in the near space environment, which includes two parts:

$$V_k = V_{c,k} + V_{w,k}$$

The mechanisms of V_{c,k} and V_{w,k} are different, so they are not correlated. V_{c,k} can be approximated as a second-order Markov process, which can be described as:

$$V_{c,k} = \alpha_1 V_{c,k-1} + \alpha_2 V_{c,k-2} + \xi_k$$

where, according to the recursive pixel offset model, α_1 = 2cos(ω), α_2 = −1, and ξ_k is zero-mean driving white noise. The first- and second-order terms of the noise sequence, V_{c,k−1} and V_{c,k−2}, can be extended into the state parameters to form the second-order augmented state:

$$X_k^a = \begin{bmatrix} X_k \\ V_{c,k-1} \\ V_{c,k-2} \end{bmatrix}$$

so the measurement equation can be expressed as follows:

$$Z_k = \begin{bmatrix} H_k & \alpha_1 I & \alpha_2 I \end{bmatrix} X_k^a + (\xi_k + V_{w,k})$$

The state equation can be expressed as follows:

$$X_k^a = \begin{bmatrix} \Phi_{k,k-1} & 0 & 0 \\ 0 & \alpha_1 I & \alpha_2 I \\ 0 & I & 0 \end{bmatrix} X_{k-1}^a + \begin{bmatrix} W_{k-1} \\ \xi_{k-1} \\ 0 \end{bmatrix}$$

Suppose H_k^a = [H_k, α_1 I, α_2 I], V_k^a = ξ_k + V_{w,k}, Φ_{k,k−1}^a denotes the augmented transition matrix, and W_k^a denotes the augmented system noise. Then, the second-order state augmented model (after whitening) can be written as follows:

$$\begin{aligned} X_k^a &= \Phi_{k,k-1}^a X_{k-1}^a + W_{k-1}^a \\ Z_k &= H_k^a X_k^a + V_k^a \end{aligned}$$

Although V_k^a has been converted to white noise, the cost is that the system noise W_k^a and the measurement noise V_k^a become correlated (E[W_k^a (V_j^a)^T] ≠ 0). This is because the prior model of the colored noise is added to the state quantity and the colored noise is driven by the white noise ξ_k, which leads to correlation between W_k^a and V_k^a.
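A sketch of the augmentation follows, assuming α_1 = 2cos ω and α_2 = −1 from the pixel offset recursion; the matrix layout follows this section's block structure, and all dimensions and values are illustrative.

```python
import numpy as np

def augment(Phi, H, a1, a2):
    """Build the second-order augmented matrices: the colored measurement noise
    V_c,k = a1*V_c,k-1 + a2*V_c,k-2 + xi_k is moved into the state, so only the
    white part (xi_k + V_w,k) remains in the measurement equation."""
    n, m = Phi.shape[0], H.shape[0]
    I, Z = np.eye(m), np.zeros((m, m))
    Phi_a = np.block([[Phi,              np.zeros((n, m)), np.zeros((n, m))],
                      [np.zeros((m, n)), a1 * I,           a2 * I],
                      [np.zeros((m, n)), I,                Z]])
    H_a = np.hstack([H, a1 * I, a2 * I])
    return Phi_a, H_a

omega = 0.3                          # illustrative digital angular frequency
a1, a2 = 2 * np.cos(omega), -1.0
Phi_a, H_a = augment(np.eye(2), np.ones((1, 2)), a1, a2)

# Propagating [X; V_c,k-1; V_c,k-2] advances the colored noise recursion:
x_a = np.array([0.0, 0.0, 0.5, 0.2])     # X = 0, V_c,k-1 = 0.5, V_c,k-2 = 0.2
x_next = Phi_a @ x_a
print(x_next)    # new colored-noise entry is a1*0.5 + a2*0.2; 0.5 shifts down
```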

Decorrelation of System Noise and Measurement Noise in the Second-Order Augmented Model
Equation (31) can be abbreviated as follows:

$$\begin{aligned} X_k^a &= \Phi_{k,k-1}^a X_{k-1}^a + \Gamma_{k-1}^a W_{k-1}^a \\ Z_k &= H_k^a X_k^a + V_k^a \end{aligned}$$

According to Equation (30), Z_k − H_k^a X_k^a − V_k^a = 0; this zero term, multiplied by an arbitrary matrix J_k, can therefore be added to the state equation. The state equation can be organized as:

$$X_{k+1}^a = \left(\Phi_{k+1,k}^a - J_k H_k^a\right) X_k^a + J_k Z_k + W_k^*$$

where

$$W_k^* = \Gamma_k^a W_k^a - J_k V_k^a$$

The covariance matrix of the system noise and the measurement noise is as follows:

$$E\!\left[W_k^* (V_k^a)^T\right] = \Gamma_k^a S_k - J_k R_k^a$$

where S_k = E[W_k^a (V_k^a)^T] and R_k^a is the covariance of V_k^a. Obviously, if J_k satisfies Γ_k^a S_k − J_k R_k^a = 0, W_k^* and V_k^a are no longer correlated; so, J_k should satisfy:

$$J_k = \Gamma_k^a S_k \left(R_k^a\right)^{-1}$$

The variance matrix of W_k^* is:

$$Q_k^* = \Gamma_k^a Q_k^a \left(\Gamma_k^a\right)^T - J_k R_k^a J_k^T$$

With Q_k^* so defined, the second-order state augmented model (after decorrelation) can be expressed as follows:

$$X_{k+1}^a = \left(\Phi_{k+1,k}^a - J_k H_k^a\right) X_k^a + J_k Z_k + W_k^*, \quad Z_k = H_k^a X_k^a + V_k^a$$
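The decorrelation step reduces to two matrix formulas, sketched below with small invented covariances; only the formulas J_k = Γ_k^a S_k (R_k^a)⁻¹ and Q_k^* = Γ_k^a Q_k^a (Γ_k^a)ᵀ − J_k R_k^a J_kᵀ come from this section.

```python
import numpy as np

def decorrelate(Gamma_a, Q_a, R_a, S):
    """Pick J_k so that W*_k = Gamma_a W_a - J_k V_a is uncorrelated with V_a:
    J = Gamma_a S R_a^{-1}, with covariance Q* = Gamma_a Q_a Gamma_a^T - J R_a J^T."""
    J = Gamma_a @ S @ np.linalg.inv(R_a)
    Q_star = Gamma_a @ Q_a @ Gamma_a.T - J @ R_a @ J.T
    return J, Q_star

# Small illustrative covariance matrices.
Gamma_a = np.eye(2)
Q_a = np.diag([1.0, 2.0])           # system noise covariance
R_a = np.diag([0.5, 0.5])           # whitened measurement noise covariance
S = np.array([[0.1, 0.0],
              [0.0, 0.2]])          # cross-covariance E[W_a V_a^T]

J, Q_star = decorrelate(Gamma_a, Q_a, R_a, S)
# The decorrelation condition Gamma_a S - J R_a = 0 holds by construction.
print(np.abs(Gamma_a @ S - J @ R_a).max())
```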

Suppose the estimator of the H-infinity filter, after expanding its dimension, is denoted X̂_k^a (Equation (40)). A second-order state augmented H-infinity filter can then be obtained by substituting Equations (38) and (40) into the standard H-infinity formula, thus satisfying Equation (41).

Algorithm Robustness Verification
The optimization object of the deeply integrated model is the global gray error of all navigation stars, where the optimization result is the optimal solution (in the least-squares sense). Theoretically, the mismatching of individual stars does not affect the final result; this was verified by simulation.
The Smithsonian Astrophysical Observatory (SAO) catalog was used for simulation validation, where precession and nutation were compensated for by the IAU1980 model, aberration only considered first-order corrections, and polar shift correction was provided by the International Earth Rotation Service (IERS). As shown in Figure 5, the simulation was validated under three working conditions: low, medium, and high uncertainty. The attitude calculation results obtained under the three conditions are shown in Figure 6.
From Figure 6, it can be seen that, even under high uncertainty (where the INS only correctly predicted 30% of the navigation stars), the deeply integrated algorithm still converged, and the accuracies under the three conditions were only slightly different.
To explain this phenomenon, suppose that e_a(φ̂_s) (1 ≤ a ≤ N_s) represents the gray error function of correctly matched pixels and e_b(φ̂_s) (N_s + 1 ≤ b ≤ N) represents the gray error function of incorrectly matched pixels. After the star point p_1 in g_1 is transformed by exp(φ̂_s ×), there will be no star point at the corresponding position p_2 in g_2 if the star points between g_1 and g_2 do not match. e_b(φ̂_s) will then be equal to I_1(p_1), which can be recorded as C_b. Obviously, C_b is a constant that does not change with φ̂_s. The optimal objective is equivalent to:

$$J(\hat{\varphi}_s) = \sum_{a=1}^{N_s} e_a(\hat{\varphi}_s)^2 + \sum_{b=N_s+1}^{N} C_b^2 \tag{37}$$

Equation (37) shows that the constant term does not affect the final optimization result, such that optimizing over all stars is equivalent to optimizing over only the correctly matched stars; in other words, unmatched stars do not affect the optimization result.
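The argument can be checked with a one-dimensional toy objective: adding a constant penalty for mismatched stars shifts the objective value but not its minimizer. All numbers below are illustrative.

```python
import numpy as np

phis = np.linspace(-1.0, 1.0, 2001)       # candidate misalignment estimates
matched = (phis - 0.3) ** 2               # sum of e_a^2: depends on the estimate
C_b_sq = 7.5                              # sum of C_b^2: constant for mismatches

J_all = matched + C_b_sq                  # optimize over all stars
J_matched = matched                       # optimize over matched stars only

# Both objectives are minimized at the same estimate.
print(phis[np.argmin(J_all)], phis[np.argmin(J_matched)])
```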

Comparison of Different Integrated Models
Star identification is not necessary in the deeply integrated mode, and the navigation system can still work when the number of navigation stars is less than three. Therefore, the simulation should cover three conditions: the number of navigation stars is more than three, equal to three, and less than three. The star map of the star sensor in the simulation is shown in Figure 7. The damped Newton method was used to optimize the misalignment angle iteratively. Taking the time t = 10 s as an example, the iterative optimization results are shown in Figure 8. As shown in Figure 8, the attitude could reach convergence after only 3-4 iterations. Comparing the first and fourth iteration optimization results, the gray image error is shown in Figure 9. As shown in Figure 9, φ̂_s did not converge in the first iteration, and there was an obvious peak and trough in the gray error image; in contrast, φ̂_s converged in the fourth iteration, and the position deviation was close to zero.
However, many small peaks and troughs remain; this residual fluctuation is caused only by image noise.
The accuracies and calculation times of the loosely, tightly, and deeply integrated modes are presented in Figure 10. The simulation results show that the attitude accuracies of the loosely, tightly, and deeply integrated navigation modes were of the same magnitude when the number of navigation stars was sufficient: the non-optical axis attitude accuracy was 10" (3σ), and the optical axis attitude accuracy was 50" (3σ). The loosely and tightly integrated navigation modes could not identify the navigation stars when the number of navigation stars was insufficient, such that the accuracy diverged. In contrast, the deeply integrated navigation mode could still be used for the navigation solution, with a non-optical axis attitude accuracy of about 50" (3σ) and an optical axis attitude accuracy of about 100" (3σ).
Each star occupies about 9 pixels in the star image, and the deeply integrated mode needs 4 iterations; on average, each star therefore needs 36 pixel updates. The tightly and loosely integrated modes require star identification: there are 196 navigation stars in a sub-catalogue on average, the search space is 196 × 196, and the number of searches per star is 196 × 196/2 = 19,208. Obviously, the computation time of star identification is much larger than the time needed to update the measurements in the deeply integrated navigation filter. The simulation results in Figure 10b show that the computational cost of the deeply integrated navigation mode was 50% lower than that of the tightly integrated navigation mode and 60% lower than that of the loosely integrated navigation mode.
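The operation counts quoted above can be reproduced with simple arithmetic; the figures (9 pixels per star, 4 iterations, a 196-star sub-catalogue) are taken directly from the text.

```python
# Deeply integrated mode: per-pixel gray-error updates.
pixels_per_star = 9
iterations = 4
deep_updates_per_star = pixels_per_star * iterations

# Loosely/tightly integrated modes: star identification searches.
stars_per_subcatalogue = 196
id_searches_per_star = stars_per_subcatalogue ** 2 // 2

print(deep_updates_per_star, id_searches_per_star)  # 36 19208
```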

Comparative Simulation of Single- and Double-Star Sensor Configurations
The attitude accuracy of the star sensor along the optical axis was lower than along the other two axes, so the celestial navigation systems of near space vehicles should be configured with double star sensors. The star sensor configuration schemes are shown in Figure 11. An accuracy comparison of the single- and double-star sensor configurations is shown in Figure 12. As shown in Figure 12, the attitude accuracy of the yaw under the single-star sensor configuration was 50" (3σ), which was much lower than that of the other two axes. The attitude accuracy of all three axes was kept within 10" (3σ) under the double-star sensor configuration, which effectively reduced the yaw angle error.
For the near space vehicle, the accuracy of the yaw angle had a great influence on the position; thus, it is recommended that the double-star sensor configuration be adopted in celestial navigation systems. The vehicle attitude profile, star sensor and gyro specifications, and initial attitude estimation error are shown in Figure 13 and Table 1. The second-order state augmented H-infinity filter and the standard H-infinity filter were compared in a near space environment. As can be seen from Figure 13, the angular velocity of the vehicle is largest between 500 s and 600 s, and this 100 s of data was used for the simulation in Figure 14. At the beginning of filtering, the SOSA filter exhibits a large error at around 3 s. This is because the initial values of V_{c,k−1} and V_{c,k−2} were unknown and had to be set to zero at the beginning. Due to V_{c,k} = α_1 V_{c,k−1} + α_2 V_{c,k−2} + ξ_k, if V_{c,k−1} = 0_{3×1} and V_{c,k−2} = 0_{3×1}, V_{c,k} will also be close to zero. At this time, the state augmented model cannot estimate the colored noise effectively. In addition, because the estimated value of V_{c,k} is close to zero, the filter will assume the colored noise to be very small (although it is actually not); the filter will then allocate the error caused by V_{c,k} to other navigation parameters, so the filter exhibits a large error at around 3 s.
However, as filtering proceeds, the colored noise V_c,k is gradually estimated, the state augmentation model gradually takes effect, and the navigation accuracy is finally improved. Therefore, the filtering results should only be used at least 10 s after the start of filtering. It can be seen from the simulation that the standard H-infinity filter did not filter out the colored noise, whereas the second-order state augmented H-infinity filter proposed in this paper effectively improved the filtering accuracy under colored noise. Compared with the standard H-infinity filter, it is thus more suitable for the near space environment.
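The transient described above can be reproduced with a minimal sketch of the second-order colored-noise model and the corresponding state augmentation. The AR(2) recursion follows the paper's V_c,k = α1·V_c,k−1 + α2·V_c,k−2 + ξ_k; the coefficient values, noise level, and toy navigation-state size below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

# Second-order AR model of the aero-optically colored measurement noise
# (scalar sketch of the paper's V_c,k = a1*V_c,k-1 + a2*V_c,k-2 + xi_k).
a1, a2 = 1.6, -0.8       # assumed AR(2) coefficients (a stable pair)
sigma_xi = 0.5           # assumed driving white-noise std

rng = np.random.default_rng(0)
N = 1000
V = np.zeros(N)
# The filter must start from V[0] = V[1] = 0 because the true initial
# colored noise is unknown -- the source of the transient near 3 s.
for k in range(2, N):
    V[k] = a1 * V[k-1] + a2 * V[k-2] + sigma_xi * rng.standard_normal()

# State augmentation: append V_c,k-1 and V_c,k-2 to an n-dimensional
# navigation state so the filter estimates the colored noise itself.
n = 3                                  # toy navigation-state size
F_aug = np.zeros((n + 2, n + 2))
F_aug[:n, :n] = np.eye(n)              # placeholder navigation dynamics
F_aug[n, n], F_aug[n, n+1] = a1, a2    # V_c,k from V_c,k-1 and V_c,k-2
F_aug[n+1, n] = 1.0                    # shift: V_c,k-1 becomes V_c,k-2

print(F_aug.shape)   # (5, 5)
```

The augmented transition matrix `F_aug` is what lets an H-infinity (or Kalman-type) filter carry the colored-noise states forward and gradually learn them, which is why the error settles after the initial transient.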

The Influence of Colored Noise Model Error on the Filtering Effect
The core of the second-order state augmented H-infinity filter is the colored noise model, whose core parameter is the digital angular frequency, ω, of the aero-optical effects; its influence is further illustrated in Figure 15. The attitude root mean square of the second-order state augmented H-infinity filter was about 3" when there was no parameter error. There was little change in the attitude accuracy when the parameter error was 10%, indicating that a 10% parameter error did not have a significant impact on the filtering accuracy. When the parameter error increased to 30%, the attitude root mean square rose to about 4". Even when the parameter error increased to 50%, the attitude root mean square did not exceed 5"; thus, the parameter error of the digital angular frequency ω does not significantly affect the second-order state augmented H-infinity filter.
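One way to see why a moderate error in ω is tolerable is to map ω to the AR(2) coefficients through a damped second-order resonance, α1 = 2r·cos(ω), α2 = −r². This mapping and the values of r and ω below are assumptions for illustration, not taken from the paper. Under it, α2 does not depend on ω at all, and α1 varies only through the slowly changing cosine, so even a 50% error in ω perturbs the noise model mildly.

```python
import numpy as np

# Assumed mapping from digital angular frequency w (rad/sample) and a
# damping radius r to AR(2) colored-noise coefficients, as in a damped
# second-order resonance: a1 = 2r*cos(w), a2 = -r^2.
def ar2_coeffs(w, r=0.9):
    return 2.0 * r * np.cos(w), -r * r

w_true = 0.3   # assumed nominal digital angular frequency (rad/sample)
for err in (0.0, 0.1, 0.3, 0.5):
    a1, a2 = ar2_coeffs(w_true * (1.0 + err))
    # a2 stays fixed at -r^2; only a1 drifts, and only slowly via cos(w).
    print(f"w error {err:>4.0%}: a1={a1:.3f}, a2={a2:.3f}")
```

Under this assumed parameterization, the filter's colored-noise model degrades gracefully as ω is mis-specified, consistent with the small growth of the attitude RMS from 3" to below 5" reported in Figure 15.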

Conclusions
In this paper, an INS/CNS deeply integrated navigation method was presented for near space vehicles. This method does not need star identification and can significantly reduce the required computational cost. Meanwhile, the proposed second-order state augmented H-infinity filter can weaken the influence of aero-optical effects on the measurement noise and effectively improve the filtering accuracy in near space environments. The simulation results show that the attitude accuracy of the INS/CNS deeply integrated navigation method is kept within 10" (3 σ), while the computational cost can be reduced by 50%. The INS/CNS deeply integrated navigation method can therefore help improve the navigation accuracy of near space vehicles while reducing the computational cost of the associated navigation systems, providing a theoretical reference for the future design of near space vehicle navigation systems.

Funding: This research received no external funding.

Conflicts of Interest:
The authors declare no conflict of interest.