Article

Correction Method for Thermal Deformation Line-of-Sight Errors of Low-Orbit Optical Payloads Under Unstable Illumination Conditions

1 Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
2 Key Laboratory of Intelligent Infrared Perception, Chinese Academy of Sciences, Shanghai 200083, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(5), 762; https://doi.org/10.3390/rs17050762
Submission received: 13 January 2025 / Revised: 20 February 2025 / Accepted: 21 February 2025 / Published: 22 February 2025
(This article belongs to the Special Issue LEO-Augmented PNT Service)

Abstract: Accurate optical axis pointing of optical payloads in low orbits is essential for sustained indication and high-precision positioning of motion targets. Owing to the short orbital period in low orbits and the influence of the sun, the incident light on the optical payloads and the space thermal environment undergo drastic and irregular changes over a short period. These changes cause optical distortions within the camera and variations in the installation matrix referenced for the satellite. Ultimately, these changes affect the imaging process of the camera and the line-of-sight (LOS) accuracy, severely hindering the high-precision pointing and positioning of space targets. In this paper, a correction method based on stellar observation data is proposed to address the LOS deviation issue of low-orbit optical payloads caused by space thermal deformation (STD). The proposed method innovatively utilizes the angle relationship between the solar vector, the satellite position vector, and the camera LOS vector as the correction parameters to characterize the thermal environment in which the payload operates. This method overcomes the irregularity and frequent correction requirements of LOS errors in low-orbit payloads. Experimental results showed that the mean absolute error of the camera LOS after the correction was 0.001096 rad, representing an 80.28% improvement over previous measurements, even reaching 99% improvement in the final mission. At a 95% confidence level, the correction errors for the final mission were consistently below 10⁻⁴ rad (2σ) in the right ascension and declination directions.

1. Introduction

In recent years, cameras have been widely applied in low and medium Earth orbits in the fields of remote sensing, Earth observation, astronomical observation, and space surveillance [1]. Optical payloads in low or medium orbits provide new capabilities not covered by geostationary observation missions, such as the ability to continuously monitor moving targets and conduct wide-area detection, making them an important means of early warning and monitoring of space targets. The target monitoring and positioning accuracy of space optical payloads depends on the pointing accuracy of the camera [2]. There are typically two types of errors related to the line of sight (LOS) [2,3]: random errors and systematic errors. Random errors primarily consist of measurement errors in the satellite position, attitude, and pointing angles, and typically exhibit zero mean and a deterministic variance. Systematic errors include thermal deformation and camera installation errors, which can result in fixed or slowly varying offsets in the corresponding measurements. The primary cause of changes in the LOS determination model of low- or medium-orbit cameras is space thermal deformation (STD). The main causes of STD are as follows [4]:
  • Variation in the angles of solar light incidence;
  • Environmental temperature changes resulting from different distances between the camera and the sun;
  • Thermal deformation caused by satellite launch processes.
Therefore, optical payloads typically require frequent recalibration on orbit [5,6,7].
Compared with satellites in low or medium orbits, geostationary orbit satellites experience more regular fluctuations in the working environment temperature, leading to more predictable variations in the LOS errors. Currently, numerous scholars are working on correcting the LOS determination errors caused by thermal deformation in geostationary orbit cameras.
Algorithms have been proposed for determining and calibrating the LOS of the space-based infrared system (SBIRS) payloads in geostationary orbit missions [8,9]. These correction methods are based on the stellar observation data or ground control points (GCPs) and use a multi-state Extended Kalman Filter (EKF) algorithm to correct the payload LOS deviations, including attitude errors, thermal deformation errors, and assembly errors.
To ensure a stable thermal environment inside the satellite, the Chinese GaoFen-4 (GF-4) remote sensing satellite adopts a corrective strategy of shutting down the optical payload during periods of intense solar radiation, thus reducing positioning errors caused by thermal deformation [4]. Additionally, in-orbit geometric calibration with minimal GCPs adequately compensates for the internal distortion of the GF-4 camera; the positioning accuracy of the panchromatic/near-infrared sensor and the mid-infrared sensor was better than 1.0 pixel [10,11]. In addition, in response to the systematic LOS errors of remote sensing cameras in geostationary orbit, a correction method based on stellar observations has been proposed [4]. This method uses Fourier series fitting to model the 24 h error pattern and uses in-orbit error data from the previous two days to correct the camera's geometric positioning error on the third day; the corrected positioning error remains within 1.9 pixels.
In contrast to geostationary orbits, satellites in low or medium orbits have shorter orbital periods and lower orbital altitudes. The amplitude of the operating-temperature variation during a mission is therefore smaller than that of geostationary orbit satellites, but the variation is far less regular, particularly in low orbits. Correcting the LOS errors caused by STD for optical payloads of low-orbit satellites is thus more challenging, and research on correcting LOS errors for cameras in low orbits is relatively limited. In 1996, a six-state EKF algorithm based on stellar observation data was proposed to correct spacecraft attitude errors [12]. In addition, a correction method for a push-broom camera in low orbit was proposed based on modeling with the Satellite Pour l'Observation de la Terre (SPOT) 4-HRV1 sensor [13]. This method primarily addresses the mechanical strain caused by satellite launch rather than the effects of STD. An integrated algorithm for target tracking and positioning has been proposed for narrow-field tracking sensors of space-based electro-optical (EO) systems [14]. While performing target tracking, background stellar observations are utilized for real-time determination and correction of the LOS. This method effectively eliminates the dynamic effects of the LOS errors in real time, improving the overall system's timeliness as well as the target tracking and positioning accuracy. In addition, research on the internal and external orientation parameters of SPOT-5 revealed that the internal orientation parameters have the more significant effect on geometric calibration accuracy [15]. The geometric pointing accuracy of the ZiYuan-3 (ZY-3) satellite with the bundle block adjustment method reached 3.5 pixels in plane measurement and 1.8 pixels in elevation without GCPs, thus addressing the camera distortion issues caused by the jitter and attitude fluctuation of the low-orbit satellite platform; with high-precision GCPs, the plane measurement and elevation accuracies reach 1.5 pixels and 0.8 pixels, respectively [16,17,18,19]. In addition, there are studies on the geometric calibration of the camera based on stellar observation data from the Jilin-1 07 video satellite, using a method that corrects the angular variations between the camera and the star tracker [20,21].
The exterior orientation parameter variation real-time monitoring system (EOPV-RTMS) [22] effectively corrects errors in the exterior orientation parameters of remote sensing cameras caused by vibrations and temperature fluctuations. This system uses lasers to establish a full-link active optical monitoring path; by receiving star and laser signals with the star tracker, it enables real-time monitoring of the exterior orientation parameters. A novel on-orbit attitude planning method for Earth observation imaging applied to the Wuhan-1 satellite divides geometric calibration into two stages, pre-calibration and post-calibration, which enhances the geometric positioning accuracy [23]. The linear array camera on the Jilin-1 satellite adopts a novel geometric calibration method, changing the camera from observing the ground to observing space [24]. By using stars as control points, this method eliminates the influence of astronomical and attitude errors, reducing the high maintenance costs associated with ground calibration; the uncontrolled geometric positioning accuracy, verified with ground images of different regions, is 30 m. Furthermore, reference [25] proposes a correction method for low-frequency errors of the star sensor caused by the space thermal environment, which can effectively identify and compensate for interior orientation parameters. The Gaofen-5B (GF-5B) satellite also employs a correction method for attitude low-frequency errors (ALFEs) caused by thermal environment fluctuations [26]. This method calibrates and compensates for low-frequency errors (LFEs) based on their spatial characteristics in different regions and their temporal drift, improving the geometric positioning accuracy from 3.045 to 1.438 pixels.
However, some correction methods for camera geometric positioning errors, which are premised on the existence of certain regularities in error variations, are not well suited to solving the problem of camera pointing offset caused by STD in low orbit. Additionally, for LOS errors caused by STD, most studies correct only certain parameters affected by temperature variations. In the actual space imaging environment, the impact of the thermal environment on the various LOS parameters is complex.
Accordingly, a new correction method based on stellar observations is proposed to reduce the LOS errors of low-orbit optical payloads caused by STD. The contributions of this study are as follows.
  • The angle relationship between the solar vector, satellite position vector, and camera LOS vector was innovatively utilized to characterize the thermal environment in which the payload operates. This provides the possibility of quantitatively analyzing the complex factors of the space thermal environment, overcoming the irregularity and frequent correction requirements of the LOS errors in low-orbit payloads.
  • A LOS determination model for conversion from the pixel coordinates to celestial coordinates was established for low-orbit optical payloads, and potential errors introduced during the imaging process were analyzed.
  • Neural networks were innovatively utilized in the correction of camera LOS issues, and the backpropagation neural network was used to solve the mapping relationship between the space thermal environment and camera LOS offset, which significantly enhanced the accuracy of the camera LOS correction.
Experiments indicate that the proposed algorithm performs exceptionally well in correcting the LOS errors of low-orbit payloads caused by STD. The mean absolute error of the camera LOS after correction was 0.001096 rad, representing an 80.28% improvement over previous measurements. The correction error for the final mission could be controlled within 10⁻⁴ rad, with a mean error of 6.0408 × 10⁻⁵ rad, resulting in a nearly 99% improvement in the correction effect. At a 95% confidence level, the correction errors of the camera LOS for the final mission in the right ascension and declination directions were both below 10⁻⁴ rad (2σ). In addition, the effectiveness of LOS correction for similar spatial thermal environments is rarely affected by the time interval.
The remainder of this paper is organized as follows. Section 2 describes the camera LOS determination model based on the stellar observations. Section 3 introduces the proposed correction method for low-orbit camera LOS errors. Section 4 validates the effectiveness of the proposed method using on-orbit data. Finally, Section 5 presents a summary of the entire document and a forward-looking perspective on future work.

2. Stellar-Based Determination Model of LOS

The stellar-based LOS determination model for the camera in low orbit mainly consists of the interior orientation model (IOM) and the exterior orientation model (EOM) [27].

2.1. Interior Orientation Model

The IOM was used to convert the pixel coordinates to camera coordinates and accurately determine the LOS of the sensor in the camera coordinate system.
As shown in Figure 1, the pixel coordinate system O_1-ij corresponds directly to the output image data of the camera. Its origin is at the top-left corner of the image, with the j-axis pointing downward along the image column direction and the i-axis pointing rightward along the image row direction. The pixel coordinate system is expressed in pixels, with the coordinate origin at (1, 1). The focal plane coordinate system O_2-uv is established based on the pixel coordinate system and is used to calculate the physical distance of each pixel relative to the center of the focal plane. The u- and v-axes of the focal plane coordinate system are parallel to the i- and j-axes of the pixel coordinate system, respectively, and the origin is located at the physical position corresponding to the center of the pixel coordinate system.
Optical distortion is inevitable in the design, manufacturing, and assembly of optical instruments. Deviations introduced during the manufacturing and installation of optical lenses result in image distortion, which has a significant effect on the LOS accuracy of the camera. Therefore, a virtual coordinate system for distortion correction, namely the ideal focal plane coordinate system O_3-u′v′, was introduced. Assuming that the camera distortion model is D(u, v), according to distortion correction theory [28,29], the correspondence between the coordinate systems O_3-u′v′ and O_2-uv is as follows:
$$u' = u - D_u(u, v), \qquad v' = v - D_v(u, v), \tag{1}$$
where (u, v) and (u′, v′) represent the point coordinates in the focal plane coordinate system and the corresponding point coordinates in the ideal focal plane coordinate system, respectively, as illustrated in Figure 2.
Assume that the coordinates of the origin O_3 in the coordinate system O_2-uv are (u_0, v_0). This represents the ideal image plane center of the camera, that is, the principal point position of the camera. The distortion correction parameters of a given pixel are denoted (Δu(u, v), Δv(u, v)), and the distortion correction model is represented as follows.
$$D_u(u, v) = u_0 + \Delta u(u, v), \qquad D_v(u, v) = v_0 + \Delta v(u, v) \tag{2}$$
To address the distortion caused by optical aberration, the calibration method involves building a distortion model that characterizes the internal geometric model and optical characteristics of the camera, and then using direct or iterative methods to estimate the camera’s internal parameters [30]. The general lens distortion model includes parameters for the inner orientation, lens distortion coefficients, and orthogonality and scaling factors of the image coordinate axis [28,31,32]. Calibration methods mostly utilize the least squares (LS) method to solve the parameters of the linear distortion model and iterative algorithms to resolve nonlinear distortion model issues [30,32].
Although these methods can effectively approximate the distortion of the focal plane and exhibit surprisingly high correction accuracy, adjusting the estimated parameters using LS or iterative methods is not a robust approach: incorrect image observations may lead to completely erroneous correction results and may even cause the parameter estimation process to fail to converge [33]. Considering the wide applicability of camera calibration in engineering, the feasibility and computational speed of the correction method must also be considered. The two-dimensional (2-D) Lagrange interpolation method is a distortion correction method commonly used in engineering that shows impressive accuracy [31,34,35]. Based on this interpolation idea, and considering that the camera aperture and focal plane array are relatively small, an improved 2-D nearest neighbor interpolation method for optical distortion correction is proposed. Using the correspondence between the actual and ideal coordinates of the sampled points in the focal plane as the basis, the camera's projection center and the distortion correction parameters for each pixel were fitted to mitigate the effect of optical distortion.
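As an illustration of this idea, the following Python sketch fits dense per-pixel correction maps D_u and D_v from sparse calibration samples, consistent with Eqs. (1) and (2). The function name, the zero-based pixel indexing, and the use of SciPy's nearest-neighbor griddata are illustrative assumptions; the paper does not further specify its improved 2-D nearest-neighbor interpolation:

```python
import numpy as np
from scipy.interpolate import griddata

def fit_distortion_maps(u_meas, v_meas, u_ideal, v_ideal,
                        n_rows, n_cols, dx, dy):
    """Fit dense per-pixel correction maps D_u, D_v from calibration samples.

    (u_meas, v_meas):   measured focal-plane coordinates of sampled points
    (u_ideal, v_ideal): their ideal (distortion-free) counterparts
    """
    # Eq. (1): u' = u - D_u(u, v)  =>  D = measured - ideal at each sample
    d_u = np.asarray(u_meas) - np.asarray(u_ideal)
    d_v = np.asarray(v_meas) - np.asarray(v_ideal)

    # Physical (u, v) position of every pixel center (zero-based indices)
    jj, ii = np.meshgrid(np.arange(n_cols), np.arange(n_rows))
    grid_u, grid_v = ii * dx, jj * dy

    # Spread the sparse correction samples onto the full pixel grid
    pts = np.column_stack([u_meas, v_meas])
    D_u = griddata(pts, d_u, (grid_u, grid_v), method='nearest')
    D_v = griddata(pts, d_v, (grid_u, grid_v), method='nearest')
    return D_u, D_v
```

The resulting maps D_u and D_v can then be looked up per pixel in the interior orientation model below.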
The ideal spatial coordinate system O_4-x′y′z′ is a three-dimensional coordinate system with its origin O_4 located at the projection center of the camera. It is used to lift the two-dimensional coordinates (u′, v′) in the ideal focal plane to the three-dimensional coordinates (u′, v′, f), where f represents the camera's principal distance, obtained from the distortion correction parameter fitting in the previous step. The x′- and y′-axes are parallel to the u′- and v′-axes, respectively.
The basic prism coordinate system O_5-X_cp Y_cp Z_cp has its origin at the geometric center of the basic prism. The transformation matrix R_IS2C from the ideal spatial coordinate system to the camera basic prism coordinate system is related to the reflection matrix of the camera's internal pointing mirror as well as its azimuth and elevation angles. These parameters must be calibrated using a theodolite.
The transformation process of the IOM is expressed as follows.
$$LOS_{cam} = R_{IS2C} \cdot \begin{bmatrix} d_x & 0 & -D_u(u_{ij}, v_{ij}) \\ 0 & d_y & -D_v(u_{ij}, v_{ij}) \\ 0 & 0 & f \end{bmatrix} \cdot \begin{bmatrix} i \\ j \\ 1 \end{bmatrix} \tag{3}$$
Here, LOS_cam represents the camera exit vector corresponding to the image point (i, j) in the camera coordinate system; D_u(u_ij, v_ij) and D_v(u_ij, v_ij) represent the components of the distortion model corresponding to the image point (i, j) along the u- and v-axes, respectively; and d_x and d_y are the pixel dimensions.
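A compact numpy sketch of Eq. (3); the minus signs on the distortion terms follow the subtraction convention of Eqs. (1) and (2), and all calibration quantities (R_IS2C, D_u, D_v, d_x, d_y, f) are assumed given:

```python
import numpy as np

def los_camera(i, j, dx, dy, f, D_u, D_v, R_IS2C):
    """Exit vector of pixel (i, j) in the basic prism frame, Eq. (3)."""
    K = np.array([[dx, 0.0, -D_u[i, j]],   # u' = dx*i - D_u(u_ij, v_ij)
                  [0.0, dy, -D_v[i, j]],   # v' = dy*j - D_v(u_ij, v_ij)
                  [0.0, 0.0, f]])          # third component: principal distance
    los = R_IS2C @ K @ np.array([i, j, 1.0])
    return los / np.linalg.norm(los)       # normalized unit LOS vector
```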

2.2. Exterior Orientation Model

The EOM determines the transformation relationship from the camera coordinate system to the celestial coordinate system [11,36,37], as shown in Figure 3.
The camera is installed on the satellite platform. There are three installation angles α, β, and γ between the coordinate axes of the camera coordinate system O_6-X_c Y_c Z_c and the satellite body coordinate system O_7-X_s Y_s Z_s; thus, the transformation matrix from the camera coordinate system to the satellite body coordinate system is represented as R_c2s(α, β, γ). The transformation from the satellite body coordinate system to the orbital coordinate system O_8-X_o Y_o Z_o requires knowledge of the satellite's three-axis attitude (pitch, roll, and yaw), and its coordinate transformation matrix is represented as R_s2o(pitch, roll, yaw). The X-axis of the orbital coordinate system points along the tangential direction of the satellite's motion, the Z-axis points toward the center of the Earth, and the remaining axis follows the right-hand rule. The transformation between the orbital coordinate system and the celestial coordinate system O_9-X_J Y_J Z_J (J2000.0) depends on the orbital parameters of the satellite, that is, the instantaneous position vector and velocity vector of the satellite; its transformation matrix is expressed as R_o2J [4].
Based on the above description, the exit vector of a pixel in the sensor array is expressed in the celestial coordinate system as follows.
$$LOS_J = R_{o2J} \cdot R_{s2o}(pitch, roll, yaw) \cdot R_{c2s}(\alpha, \beta, \gamma) \cdot LOS_{cam} \tag{4}$$
Here, LOS_J represents the exit vector of the camera corresponding to the image point (i, j) in the celestial coordinate system.
$$LOS_J = \begin{bmatrix} X_J \\ Y_J \\ Z_J \end{bmatrix} = \begin{bmatrix} \cos\sigma \cos\delta \\ \cos\sigma \sin\delta \\ \sin\sigma \end{bmatrix} \tag{5}$$
By using the above formulas, the right ascension δ and declination σ corresponding to the image point ( i , j ) in the celestial coordinate system can be calculated.
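A minimal numpy sketch of Eqs. (4) and (5), assuming the three rotation matrices are available from telemetry and calibration; the function name is illustrative:

```python
import numpy as np

def los_to_radec(los_cam, R_c2s, R_s2o, R_o2J):
    """Transform a camera-frame LOS into J2000 and read off (delta, sigma)."""
    los_J = R_o2J @ R_s2o @ R_c2s @ los_cam          # Eq. (4)
    X, Y, Z = los_J / np.linalg.norm(los_J)
    delta = np.arctan2(Y, X) % (2 * np.pi)           # right ascension, Eq. (5)
    sigma = np.arcsin(np.clip(Z, -1.0, 1.0))         # declination
    return delta, sigma
```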

2.3. Stellar-Based Imaging Model

First, the target centroid extraction algorithm is used to obtain the position (i, j) of the star in the pixel coordinate system. Subsequently, the information required for the coordinate transformations, such as the internal parameters of the camera and the mounting matrix, is obtained using a measuring device. Finally, the camera exit vector LOS_J^cal for the pixel point (i, j) in the celestial coordinate system is constructed according to (1)–(5).
$$LOS_J^{cal} = R_{o2J} \cdot R_{s2o}(pitch, roll, yaw) \cdot R_{c2s}(\alpha, \beta, \gamma) \cdot R_{IS2C} \cdot \begin{bmatrix} d_x & 0 & -D_u(u_{ij}, v_{ij}) \\ 0 & d_y & -D_v(u_{ij}, v_{ij}) \\ 0 & 0 & f \end{bmatrix} \cdot \begin{bmatrix} i \\ j \\ 1 \end{bmatrix} \tag{6}$$
Rearranging (5) yields the calculated values of the right ascension and declination (δ′, σ′) of the star.
From the star catalog, the theoretical values of the right ascension and declination (δ, σ) of the star are obtained. The absolute error of the camera LOS in the right ascension and declination directions can be obtained using the following formula:
$$Error_{RA} = \delta' - \delta, \qquad Error_{DE} = \sigma' - \sigma. \tag{7}$$
The vector LOS_J^star corresponding to the theoretical values (δ, σ) of the star in the celestial coordinate system is calculated by (5). The total absolute error of the camera LOS, measured in degrees or arcseconds, is expressed as follows:
$$Error_{Total} = \cos^{-1}\left(\frac{LOS_J^{cal} \cdot LOS_J^{star}}{\left|LOS_J^{cal}\right| \cdot \left|LOS_J^{star}\right|}\right). \tag{8}$$
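In code, Eqs. (7) and (8) reduce to a few lines; a minimal numpy sketch (the clip call guards against round-off pushing the normalized dot product outside [−1, 1]):

```python
import numpy as np

def per_axis_errors(delta_calc, sigma_calc, delta_cat, sigma_cat):
    """Eq. (7): calculated minus catalog right ascension / declination."""
    return delta_calc - delta_cat, sigma_calc - sigma_cat

def total_error(los_cal, los_star):
    """Eq. (8): angle between calculated and catalog LOS vectors (rad)."""
    c = np.dot(los_cal, los_star) / (np.linalg.norm(los_cal) * np.linalg.norm(los_star))
    return np.arccos(np.clip(c, -1.0, 1.0))
```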
The LOS vectors of the star in the celestial coordinate system calculated by (6) account only for the influence of the camera's optical distortion. After on-orbit operation, the following errors in the optical payload must also be considered:
  • Measurement errors introduced during the measurement of R_o2J, R_s2o, and R_IS2C;
  • Errors caused by STD and satellite platform vibrations;
  • Changes in the distortion model of the camera after on-orbit operation.
These factors influence the accuracy of the payload LOS and, consequently, affect target positioning accuracy.

3. Correction Method for LOS

3.1. Analysis of Causes for LOS Error

Based on the camera imaging process, LOS errors can be classified as internal and external orientation parameter errors. Internal orientation parameter errors refer to the calibration errors of parameters such as the camera principal point, principal distance, and pixel distortion, which are caused by optical system distortion after on-orbit operation [11]. Additionally, internal orientation parameter errors include extraction errors of the image centroid position and measurement errors in the internal pointing mirror parameters of the camera. External orientation parameter errors primarily include camera installation errors and measurement errors in orbital parameters and satellite attitudes [37]. Furthermore, errors are caused by variations in the space thermal environment, satellite platform vibrations, and time asynchrony [38,39].
Measurement errors in the LOS determination process, such as satellite attitude measurement errors and orbital parameter measurement errors, are classified as random errors. Random errors arise from various unpredictable factors, including instrument noise and external disturbances. These errors typically exhibit a zero mean and a deterministic variance, remaining largely consistent over time. While it is challenging to eliminate random errors through deterministic models, they can be mitigated using statistical methods such as filtering or smoothing. One effective approach to reducing their impact on LOS accuracy is to employ more precise measurement instruments.
On the other hand, structural deformation errors of the camera, optical system errors, and camera installation errors fall under systematic errors. Systematic errors refer to errors that consistently occur in repeated measurements under the same conditions. These errors typically exhibit fixed patterns, causing the corresponding measurements to vary within a certain range. Unlike random errors, systematic errors can often be effectively compensated for through modeling and correction techniques.
It is important to note that variations in the thermal environment introduce numerous errors. Under the influence of STD, changes occur in the optical distortion model, the camera installation matrix, the camera structure, and even the camera attitude measurements, all of which have a significant impact on LOS accuracy [20,25,38]. Specifically, changes in the environmental temperature can cause thermal deformation of the optical system, invalidating the principal point, focal length, and pixel distortion parameters obtained during ground tests. They can also lead to expansion or contraction of the camera structure, affecting the relative positions of optical components and resulting in optical axis deviation and coordinate system misalignment. In addition, thermal deformation of the camera mounting surface introduces significant errors into the camera mounting matrix. All of these factors can substantially degrade LOS accuracy.
Satellites in low orbits are characterized by a short orbital period, high angular velocity of movement, and continual changes in the orbit. Sunlight is the main cause of payload thermal deformation. It strongly affects the payload’s temperature environment and the optical distortion model. The parameters directly reflecting the influence of illumination are the satellite position, satellite attitude, and the corresponding camera attitude. However, the satellite orbit is constantly changing, and the orbit positions in each operational cycle are not the same. Thus, it is nearly impossible to complete a full-orbit illumination analysis. Furthermore, the orbits of different satellites are also distinct. Therefore, the use of satellite position parameters as the analysis object lacks generality for low-orbit optical payloads. Analyzing satellite attitude and the corresponding camera attitude is more complex, which requires an analysis of the camera’s attitude based on the installation matrix and other information in situations where the satellite attitude varies significantly. The essence of analyzing these parameters is to explore the impact of thermal deformation caused by sunlight on the camera’s LOS accuracy.
To make the proposed methods applicable to low-orbit optical payloads and simplify the calculation and correction processes, we propose two parameters that reflect the influence of sunlight on the payload for analysis.
  • θ_SunSat: the angle between the solar vector and the satellite position vector in the celestial coordinate system;
  • θ_SunLOS: the angle between the solar vector and the camera's LOS vector in the celestial coordinate system.
The former reflects the effect of changes in the environmental temperature caused by sunlight on the thermal deformation of the payload. The latter characterizes the effect of sunlight on the optical distortion model. The schematic is presented in Figure 4.

3.1.1. Angle Between Solar Vector and Satellite Position Vector

The first step is to calculate the solar position vector in the celestial coordinate system, that is, the vector pointing from the center of the Earth to the center of the sun. This requires converting the current time t to Julian centuries T counted from the epoch J2000.0 [40]. Subsequently, based on the Julian epoch, the geometric mean longitude of the sun (L_sun) and the mean anomaly of the sun (M_sun) can be computed [40], with all angles measured in degrees. The sun's equation of center C_sun is expressed as follows.
$$C_{sun} = (1.914600 - 0.004817\,T - 0.000014\,T^2) \sin M_{sun} + (0.019993 - 0.000101\,T) \sin 2M_{sun} + 0.000290 \sin 3M_{sun} \tag{9}$$
$$\Theta_{sun} = L_{sun} + C_{sun} \tag{10}$$
$$V_{sun} = M_{sun} + C_{sun} \tag{11}$$
Here, Θ_sun and V_sun represent the sun's true longitude and true anomaly, respectively. After correcting Θ_sun for nutation and aberration, the apparent longitude of the sun λ_sun in the ecliptic coordinate system is obtained [40].
$$\lambda_{sun} = \Theta_{sun} - 0.00569 - 0.00478 \sin(125.04 - 1934.136\,T) \tag{12}$$
Next, the true obliquity of the ecliptic ε_sun is computed [40].
$$\varepsilon_{sun} = \varepsilon_0 + \Delta\varepsilon, \tag{13}$$
where
$$\varepsilon_0 = 23 + \frac{26}{60} + \frac{21.448}{3600} - \frac{46.815}{3600}\,T - \frac{0.00059}{3600}\,T^2 + \frac{0.001813}{3600}\,T^3 \tag{14}$$
$$\Delta\varepsilon = 0.00256 \cos(125.04 - 1934.136\,T). \tag{15}$$
Here, ε_0 is the mean obliquity of the ecliptic, and Δε is the nutation in obliquity.
Finally, the apparent position (right ascension and declination) of the sun (δ_sun, σ_sun) at time t can be determined [40].
$$\tan\delta_{sun} = \cos(\varepsilon_{sun}) \sin(\lambda_{sun}) / \cos(\lambda_{sun}), \qquad \sin\sigma_{sun} = \sin(\varepsilon_{sun}) \sin(\lambda_{sun}) \tag{16}$$
By combining (5) and (16), the three-dimensional vector (X_sun, Y_sun, Z_sun) of the sun in the celestial coordinate system can be calculated.
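A Python sketch of Eqs. (9)–(16) is given below. The expressions for L_sun and M_sun are the standard Meeus-style polynomials assumed here, since the paper defers them to [40]:

```python
import numpy as np

def sun_vector_j2000(T):
    """Sun's apparent unit vector in the celestial frame at Julian centuries T."""
    d = np.pi / 180.0                                  # degrees -> radians
    L = 280.46646 + 36000.76983*T + 0.0003032*T**2     # mean longitude (deg)
    M = 357.52911 + 35999.05029*T - 0.0001537*T**2     # mean anomaly (deg)
    # Equation of center, Eq. (9)
    C = ((1.914600 - 0.004817*T - 0.000014*T**2)*np.sin(M*d)
         + (0.019993 - 0.000101*T)*np.sin(2*M*d)
         + 0.000290*np.sin(3*M*d))
    theta = L + C                                      # true longitude, Eq. (10)
    omega = 125.04 - 1934.136*T                        # ascending lunar node (deg)
    lam = theta - 0.00569 - 0.00478*np.sin(omega*d)    # apparent longitude, Eq. (12)
    eps0 = (23 + 26/60 + 21.448/3600
            - (46.815*T + 0.00059*T**2 - 0.001813*T**3)/3600)  # mean obliquity, Eq. (14)
    eps = eps0 + 0.00256*np.cos(omega*d)               # true obliquity, Eqs. (13), (15)
    # Apparent right ascension / declination, Eq. (16)
    delta_sun = np.arctan2(np.cos(eps*d)*np.sin(lam*d), np.cos(lam*d))
    sigma_sun = np.arcsin(np.sin(eps*d)*np.sin(lam*d))
    # Unit vector via Eq. (5)
    return np.array([np.cos(sigma_sun)*np.cos(delta_sun),
                     np.cos(sigma_sun)*np.sin(delta_sun),
                     np.sin(sigma_sun)])
```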
Utilizing the instantaneous satellite position vector (X_sat, Y_sat, Z_sat) at the corresponding time, the angle θ_SunSat between the solar vector and the satellite position vector can be calculated using (8) from Section 2.
It should be noted that the angle calculated using (8) falls within the range (0°, 180°). However, during the actual satellite orbit, θ_SunSat ranges over (−180°, +180°). For an assumed angle θ_SunSat = 60°, two scenarios yield the same value:
  • The satellite is flying from the shadow area to the sunlit area;
  • The satellite is in the sunlit area but moving toward the shadow area.
In the second scenario, the payload has been exposed to sunlight for a longer period, so the effects of thermal deformation differ from those in the first scenario. Therefore, the two scenarios must be analyzed separately. To address this issue, the satellite orbit was segmented into a sunlit region and an Earth-shadow region by using a cylindrical Earth-shadow model to determine the instantaneous position of the satellite [41,42].
As illustrated in Figure 5, we define the angle range when the satellite moves from the sunlit region toward the shadow region as positive, denoted (0°, 180°); the angle range when the satellite exits the shadow region and moves toward the sunlit region is defined as negative, written as (−180°, 0°).
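The sketch below combines Eq. (8) with a cylindrical Earth-shadow test to produce the signed θ_SunSat. The sign rule via the along-sun velocity component (positive while flying toward the shadow, negative after leaving it) is an assumed reading of the convention above, valid for a near-circular orbit; the function names and Earth-radius constant are illustrative:

```python
import numpy as np

R_EARTH = 6378.137e3   # equatorial radius, m

def in_cylindrical_shadow(r_sat, s_hat):
    """True if the satellite is inside the cylindrical Earth shadow."""
    along = np.dot(r_sat, s_hat)                    # projection on the sun axis
    perp = np.linalg.norm(r_sat - along * s_hat)    # distance from the axis
    return along < 0.0 and perp < R_EARTH

def theta_sun_sat_signed(r_sat, v_sat, s_hat):
    """Angle between solar vector and satellite position vector, Eq. (8),
    mapped onto (-180, +180] deg using the paper's direction convention."""
    c = np.dot(r_sat, s_hat) / np.linalg.norm(r_sat)
    theta = np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
    # Flying toward the shadow (theta increasing) -> positive branch;
    # flying back toward the sunlit side -> negative branch.
    toward_shadow = np.dot(v_sat, s_hat) < 0.0
    return theta if toward_shadow else -theta
```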

3.1.2. Angle Between Solar Vector and Camera’s LOS Vector

The angle between the solar position vector (X_sun, Y_sun, Z_sun) and the camera's LOS vector LOS_J^cal can also be calculated using (8); its range is (0°, 180°). This parameter, θ_SunLOS, primarily characterizes the effect of illumination on the imaging system itself.

3.2. Thermal Deformation Error Model

In the coordinate transformation process, errors caused by small angles (<1000 μrad) can generally be approximated by the following relationship [9].
$$R(\eta + \Delta\eta) \cdot R(\xi + \Delta\xi) = R(\eta) R(\Delta\eta) R(\xi) R(\Delta\xi) \approx R(\eta) R(\xi) R(\Delta\eta) R(\Delta\xi) \approx R(\eta + \xi) R(\Delta\eta) R(\Delta\xi) \tag{17}$$
Based on the above description, the deviation in the camera LOS caused by one or more parameter errors can be represented equivalently as an error-matrix problem. Similarly, the problem of inaccurate LOS caused by STD can be equated to a thermal deformation error matrix R_STD, and correcting the STD-induced LOS errors of a low-orbit payload is then transformed into estimating R_STD for a specific environment. At a certain moment in the mission, assuming θ_SunSat = ϕ and θ_SunLOS = ω, the thermal deformation error matrix is denoted as R_STD(ϕ, ω). Then, we have the following expression.
$$LOS_J^{star} = R_{STD}(\phi, \omega) \cdot LOS_J^{cal}, \tag{18}$$
where the thermal deformation error matrix R_STD(ϕ, ω) is considered a function of the parameters θ_SunSat and θ_SunLOS. In theory, the thermal deformation errors can be effectively corrected by establishing the relationship between the two.
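The paper does not prescribe how R_STD is obtained from a mission's star observations, so the following Python sketch uses one standard choice: with several paired unit vectors per mission, the best-fit rotation solving LOS_J^star ≈ R_STD · LOS_J^cal is the orthogonal-Procrustes (Kabsch) solution. The function name is illustrative:

```python
import numpy as np

def estimate_R_STD(los_cal, los_star):
    """Best-fit rotation for Eq. (18) from (n, 3) arrays of unit LOS vectors."""
    H = los_star.T @ los_cal                    # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt      # proper rotation matrix
```

Each mission then contributes one training pair ((θ_SunSat, θ_SunLOS), R_STD) for the network described in Section 3.3.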

3.3. Correction Method

In contrast to the STD-induced LOS errors in geostationary orbit, those of low-orbit satellites do not exhibit significant regular variation over long periods. Traditional data fitting and prediction methods (such as Fourier fitting and polynomial fitting) are therefore largely ineffective for correcting thermal deformation errors in low orbit: they can only fit the errors of a single mission and correct errors in similar imaging environments, and they cannot achieve effective correction for missions with large variations in the payload's space environment. Additionally, the scenario described in this paper falls within the domain of multiple-input, multiple-output regression prediction. Hence, we opted for the backpropagation (BP) neural network, which is well established in this field. This study proposes an improved BP neural network to correct thermal deformation errors in low orbit.
In the proposed method, the two parameters calculated earlier (θ_SunSat and θ_SunLOS) are used as feature parameters and input into the network, and the corresponding R_STD(ϕ, ω) at the specified time is used as the training output. These data form a complete dataset. Subsequently, a BP neural network is constructed to learn the relationship between the feature parameters and the thermal deformation error matrix R_STD(ϕ, ω). The network structure is shown in Figure 6. Finally, the feature parameters of new mission data are input into the trained network to predict the thermal deformation error matrix. Thus, the LOS errors of the payload induced by thermal deformation in low orbit are corrected.
Notably, the training effect of the BP neural network is closely related to the learning rate, minimum error of the training goal, number of hidden layer nodes, and network parameter settings. Hence, this paper proposes the use of the Newton–Raphson-Based Optimizer (NRBO) [43] to adjust the learning rate, convergence error, and number of hidden layer nodes. NRBO is a new metaheuristic algorithm that utilizes the Newton–Raphson Search Rule (NRSR) and the Trap Avoidance Operator (TAO) to complete the entire search process and explore the best results. The NRSR method is employed to enhance the exploration ability of the NRBO and increase the convergence rate to improve the ability to search space positions. The TAO assists the NRBO in avoiding local optimal traps. This method, proposed by Sowmya et al. [43] in 2024, is characterized by a high exploration and exploitation balance, high convergence speed, and effective avoidance of the local optima. In this study, we used this method to optimize the learning rate, convergence error, and number of hidden layer nodes in the BP neural network to improve its training effect.
Additionally, the RIME optimization algorithm [44] was used to optimize the initial network parameters (weights and biases) of the BP neural network. The RIME optimization algorithm is an efficient optimization method based on the physical phenomenon of freezing. This algorithm constructs a soft-rime search strategy and hard-rime puncture mechanism by simulating the soft-rime and hard-rime growth processes of rime ice, achieving exploration and exploitation behaviors in the optimization method. It also utilizes a proactive greedy selection mechanism and updates the population at the stage of selecting the optimal solution to prevent the algorithm from falling into local optima as much as possible. In this study, the RIME algorithm was used to optimize the weights from the input layer to the hidden layers, biases of each hidden layer, and weights between the hidden layers, resulting in improved training effectiveness of the BP neural network and better prediction results.
Finally, the loss function and activation function used in the BP neural network are introduced. The loss function employs the Mean Squared Error (MSE). The prediction of thermal deformation error matrices by the BP neural network is essentially a multi-parameter prediction problem: the smaller the error between the predicted and true thermal deformation error matrices, the greater the similarity between the matrices, and hence the better the performance of the network. Additionally, the Leaky Rectified Linear Unit (Leaky ReLU) is utilized as the activation function. Since the BP neural network employs gradient-based optimization, the Leaky ReLU activation effectively addresses the issue of vanishing gradients while also preventing the occurrence of the "neuron death" problem. The expression for the Leaky ReLU activation function is as follows.
$$LeakyReLU(x) = \begin{cases} x, & x > 0 \\ 0.001x, & x \le 0 \end{cases} \tag{19}$$
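As a minimal sketch of the BP core (before the NRBO/RIME meta-optimization described below), the following PyTorch snippet maps the two thermal-environment features to the nine entries of R_STD with four hidden layers and LeakyReLU(0.001), trained under MSE. The layer widths and the Adam optimizer are illustrative assumptions; in the paper, the learning rate, convergence error, and node counts are tuned by the NRBO, and the initial weights and biases by RIME:

```python
import torch
import torch.nn as nn

def build_bp_net(hidden=(16, 16, 16, 16)):
    """Four-hidden-layer MLP: (theta_SunSat, theta_SunLOS) -> flattened R_STD."""
    layers, n_in = [], 2                     # two feature-parameter inputs
    for h in hidden:
        layers += [nn.Linear(n_in, h), nn.LeakyReLU(negative_slope=0.001)]
        n_in = h
    layers.append(nn.Linear(n_in, 9))        # output: 3x3 error matrix, flattened
    return nn.Sequential(*layers)

net = build_bp_net()
loss_fn = nn.MSELoss()                        # the paper's loss function
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def train_step(features, R_flat):
    """features: (n, 2) tensor; R_flat: (n, 9) tensor of target matrices."""
    opt.zero_grad()
    loss = loss_fn(net(features), R_flat)
    loss.backward()
    opt.step()
    return loss.item()
```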
Based on the above description, this improved network is referred to as the NRBO-RIME-BP neural network. The specific implementation of this algorithm is as follows.
It is assumed that the current number of completed stellar imaging missions is N (here, N = 50). Initially, a BP neural network is constructed and trained using the data from the first N missions. The number of input layer nodes is set equal to the number of feature parameters, and the number of output layer nodes corresponds to the number of parameters in the error matrix R_STD(ϕ, ω). A grid search is used to traverse and optimize the number of hidden layers; despite its heavy computational demands, it can determine the optimal value within the specified parameter range. In the proposed algorithm, the BP neural network comprises four hidden layers.
Subsequently, the ranges of the values for the learning rate, convergence error, and number of hidden layer nodes are set and optimized using the NRBO, completing the initial setup of the network parameters. The RIME optimization algorithm is then applied to optimize the network weights and biases of the BP neural network. Following this, the BP neural network is trained to establish the relationship between the input variables ( θ S u n S a t and θ S u n L O S ) and error matrix R S T D .
Finally, the two independent variables of the (N+1)th mission are input into the trained network to predict the thermal deformation error matrix R_STD(ϕ, ω)_{N+1}. This yields the corrected camera exit vector, LOS_J^correct.
$$LOS_J^{correct} = R_{STD}(\phi, \omega)_{N+1} \cdot LOS_J^{cal} \tag{20}$$
As the number of samples N increases, the set of imaging environments characterized by the angles θ_SunSat and θ_SunLOS becomes more comprehensive, improving the completeness of the network training. This, in turn, enhances the precision of predicting the thermal deformation error matrix.
The corrected error using the above method can be expressed as follows.
$$Error_{Total\_correct} = \cos^{-1}\left(\frac{LOS_J^{correct} \cdot LOS_J^{star}}{\left|LOS_J^{correct}\right| \cdot \left|LOS_J^{star}\right|}\right) \tag{21}$$
$$Error_{RA\_c} = \delta_c - \delta, \qquad Error_{DE\_c} = \sigma_c - \sigma, \tag{22}$$
where Error_Total_correct denotes the total error of the corrected camera LOS; Error_RA_c and Error_DE_c represent the errors in the right ascension and declination directions of the LOS, respectively; and (δ_c, σ_c) denotes the calculated right ascension and declination of the star after correction. Algorithm 1 presents more detailed pseudocode for this method. Finally, a probabilistic statistical approach is adopted to study the probability distribution of the corrected errors and evaluate the accuracy of this method.
Algorithm 1 NRBO-RIME-BP neural network
Input: The thermal deformation error matrices R_STD(ϕ, ω); the angles θ_SunSat between the solar vector and the satellite position vector; the angles θ_SunLOS between the solar vector and the camera's LOS vector; the calculated camera LOS LOS_J^cal; the range of the learning rate L; the range of the convergence error λ; the range of the number of hidden layer nodes H; and the ranges of the network weights W and biases b;
Output: The corrected camera LOS LOS_J^correct;
1: Set: {R_STD(ϕ, ω)_1, …, R_STD(ϕ, ω)_N, R_STD(ϕ, ω)_{N+1}, …, R_STD(ϕ, ω)_{N+M}} ← the thermal deformation error matrices for the N + M stellar observation missions (N > 0, M > 0);
2: Set: {θ_SunSat_1, …, θ_SunSat_N, θ_SunSat_{N+1}, …, θ_SunSat_{N+M}} ← the angles between the solar vector and the satellite position vector for the N + M missions;
3: Set: {θ_SunLOS_1, …, θ_SunLOS_N, θ_SunLOS_{N+1}, …, θ_SunLOS_{N+M}} ← the angles between the solar vector and the camera's LOS vector for the N + M missions;
4: Set: {LOS_J^cal_1, …, LOS_J^cal_N, LOS_J^cal_{N+1}, …, LOS_J^cal_{N+M}} ← the calculated camera LOS vectors for the N + M missions;
5: Set: L ∈ [10⁻⁴, 10⁻²] ← the range of the learning rate;
6: Set: λ ∈ [10⁻⁸, 10⁻⁶] ← the range of the convergence error;
7: Set: H_k ∈ [1, 20] ← the range of the number of hidden layer nodes, where k is the hidden layer index (H_k is an integer, k = 1, 2, 3, 4);
8: Set: W, b ∈ [−1, 1] ← the ranges of the network weights and biases;
9: [L_best, λ_best, H_k_best] = NRBO(L, λ, H_k, {θ_SunSat}_N, {θ_SunLOS}_N, {R_STD(ϕ, ω)}_N); (the Newton–Raphson-Based Optimizer, used to initialize the network parameters)
10: [W_init, b_init] = RIME(W, b, {θ_SunSat}_N, {θ_SunLOS}_N, {R_STD(ϕ, ω)}_N); (the RIME optimization algorithm, used to initialize the network weights and biases)
11: for i = 1 to N do (train the network using the data from the first N missions)
12:   net_N = BP(θ_SunSat_i, θ_SunLOS_i, R_STD(ϕ, ω)_i, L_best, λ_best, H_k_best, W_init, b_init);
13:   (parameters of the trained network net_N: L_best^N, λ_best^N, H_k_best^N, W_best^N, b_best^N)
14: end for
15: for j = N + 1 to N + M do
16:   R_Sim(ϕ, ω)_j = sim(net_{j−1}, (θ_SunSat_j, θ_SunLOS_j));
17:   LOS_J^correct_j = R_Sim(ϕ, ω)_j · LOS_J^cal_j; (obtain the corrected camera LOS)
18:   [L_best^j, λ_best^j, H_k_best^j] = NRBO(L, λ, H_k, θ_SunSat_j, θ_SunLOS_j, R_STD(ϕ, ω)_j); (update the network parameters)
19:   [W_best^j, b_best^j] = RIME(W, b, θ_SunSat_j, θ_SunLOS_j, R_STD(ϕ, ω)_j); (update the network weights and biases)
20:   net_j = BP(θ_SunSat_j, θ_SunLOS_j, R_STD(ϕ, ω)_j, L_best^j, λ_best^j, H_k_best^j, W_best^j, b_best^j);
21: end for

4. Experimental Results

The proposed method was experimentally verified using actual stellar observation data obtained from a long-wave infrared detection camera mounted on an experimental low-orbit satellite. The experiments were conducted in MATLAB R2021b on a PC with an i7-13650HX processor, 16 GB RAM, and an NVIDIA RTX 4060 GPU (HP laptop, Shanghai, China). Detailed information about the experimental satellite is presented in Table 1. The observation data were collected from 17 June 2023 to 21 November 2023, comprising a total of 96 observation missions. The imaging duration of each mission was approximately 6 min, and the missions were not performed continuously during the collection period. A stellar observation image is shown in Figure 7.

4.1. Variation Tendency Analysis of Camera LOS Error

Using the above data, the total absolute error Error_Total of the camera LOS was calculated with the stellar-based camera LOS determination model described in Section 2. Because the ground-based calibration parameters of the interior and exterior orientation models, apart from the camera distortion correction data, were not available, the calculated camera LOS errors are relatively large. However, this does not affect the validation of the proposed algorithm's performance in correcting thermal deformation errors. The trends in Error_Total with the number of missions and with variations in θ_SunSat and θ_SunLOS are shown in Figure 8 and Figure 9, respectively.
It is evident from Figure 8 that the LOS errors within a single mission are relatively stable; however, no significant regularity is observed in the mean LOS errors across missions. Figure 9 shows that the LOS errors exhibit relatively clear patterns as they vary with the feature parameters θ_SunSat and θ_SunLOS: they generally tend to decrease as θ_SunSat changes from 0° to ±180°, and likewise as θ_SunLOS changes from 0° to 180°. Before correction, the mean LOS error of the camera was within approximately 0.012 rad.
The STD has different effects on the decomposition of camera LOS errors in different directions. Therefore, using (7), the total absolute error E r r o r T o t a l of the camera LOS was decomposed into E r r o r R A and E r r o r D E in the right ascension and declination directions. The trends in the mean errors during each mission are illustrated in Figure 10. It can be found that the distribution of the LOS errors is discrete in both the right ascension and declination directions.

4.2. Correction Results of the Camera LOS

The initial set of data from the first 50 missions was used for training. First, a BP neural network was created. The ranges of the learning rate, convergence error, and number of hidden layer nodes were set to [10⁻⁴, 10⁻²], [10⁻⁸, 10⁻⁶], and [1, 20], respectively. The maximum number of iterations and the population size were set to 100 and 30, respectively. The NRBO was used to determine the optimal values of the initial network parameters within the given ranges. Subsequently, the ranges for the weights and biases were set to [−10, 10]. The RIME algorithm was employed to optimize the weights from the input layer to the hidden layers and within the hidden layers, as well as the biases of each hidden layer, to determine the best values for network structure optimization. Both the NRBO and RIME algorithms use the minimum LOS root mean square error (RMSE) as the objective function. After training, the current network was employed to predict the thermal deformation error matrix for the 51st mission, completing the correction of the camera LOS. The network was then retrained using data from the 51st mission. This process was repeated to predict the error matrices for subsequent missions.
The correction results for missions 51–96 are shown in Figure 11. We compiled the statistics of the total absolute error of the camera’s LOS, as well as its decomposition in the right ascension and declination directions, and averaged them for each mission.
It can be observed from the error correction process that as the size of the training dataset increases, incorporating a greater variety of spatial thermal environments enhances the completeness of the network, gradually improving the correction of the camera LOS error. Furthermore, the error distribution in the right ascension and declination directions shows a trend of convergence as the network training is expanded. Notably, fluctuations occurred during the correction process owing to the emergence of new spatial thermal environments that differed significantly from the training data, which reduced the predictive effectiveness of the network; nevertheless, a certain level of correction effectiveness was maintained. The graph shows a steady decrease in the correction error of the camera LOS. Taking the 96th mission as an example, the total absolute error Error_Total of the camera LOS was controlled within 10⁻⁴ rad, with an average of 6.0408 × 10⁻⁵ rad, an improvement of 99.00% compared with the uncorrected error of 6.0390 × 10⁻³ rad. It is important to note that the error curve of the proposed algorithm decreases continuously without converging. This is due to the limited size of the training dataset, which cannot represent all spatial thermal environments; expanding the dataset can effectively address this issue.
To verify the effectiveness of the proposed algorithm, we compared it with the CA algorithm [4], third-order Fourier Series Model (FSM), third-order Gaussian Model (GM), third-order Rational Fractional Function Model (RFFM), fixed memory least squares with improved square root cubature Kalman filter (FMLS-ISRCKF) [25], and transformer model [45]. Among them, the CA algorithm is a fitting method used to correct thermal deformation errors in geostationary orbit. FSM, GM, and RFFM are traditional fitting methods for LOS errors. FMLS-ISRCKF is a low-frequency error correction method based on temperature changes in low orbit. The comparison results are shown in Figure 12 and Table 2. The table presents the mean camera LOS error after correction for missions 51–96, as well as the average speed of the algorithm.
Compared to existing algorithms, the proposed algorithm showed obvious advantages after training up to mission 51, and the accuracy of correction continued to improve steadily, far outperforming the results of the other algorithms. This indicates that as the number of missions increases, the training dataset becomes more comprehensive in covering thermal environment characteristics, thereby enhancing the correction performance of the camera’s LOS. Furthermore, effective correction of onboard camera LOS deviations can be achieved without the need for frequent in-orbit calibration. Not surprisingly, the CA algorithm, which performs well in correcting the thermal deformation error of geostationary orbit, is not suitable for LOS correction of low-orbit cameras. This is because the orbital period of GEO exhibits a well-defined periodic characteristic, with thermal environment variations occurring in a 24 h cycle. The CA algorithm is highly effective when using fitting parameters from the previous N-1 cycles to correct the thermal deformation errors in the Nth cycle. However, this approach proves inadequate for correcting the thermal deformation errors in LEO, where no apparent periodic variation exists. Traditional algorithms such as FSM, GM, and RFFM each have advantages in fitting nonlinear input–output relationships. These algorithms offer high flexibility and broad applicability, excelling in periodic systems, multi-peak distribution systems, and complex nonlinear systems. However, experimental results show that they are not suitable for approximating the mapping relationship between the thermal environmental characterization parameters and the error matrix, as proposed in this study. Consequently, they exhibit almost no effectiveness in correcting thermal deformation errors, which may be attributed to the fact that the error matrix is a multi-dimensional feature rather than a simple numerical value. In contrast, the FMLS-ISRCKF algorithm, designed to compensate for low-frequency errors in star sensors induced by the space thermal environment, achieves better correction performance. Using this method for correction, the thermal deformation error is reduced by 33.92%. This is because the algorithm considers variations in solar irradiation angle over an orbital cycle and their impact on the interior orientation elements of the star sensor. Moreover, the transformer model, which also belongs to the deep learning method, shows a superior correction effect on camera LOS errors caused by STD. After correction, the average error is reduced from 0.005559 rad to 0.003124 rad. It is evident that the two proposed feature parameters, θ S u n S a t and θ S u n L O S , exhibit outstanding performance in characterizing the spatial thermal environment. The correction effect of deep learning methods on thermal deformation far surpasses that of traditional algorithms in this context.
Figure 12b illustrates the processing time of different algorithms for individual missions. This not only depends on the processing speed of the algorithms but is also related to the data volume. As shown in Figure 12b, the processing times of most traditional algorithms are significantly longer than those of neural network-based methods. The CA algorithm exhibits the lowest efficiency, followed by the RFFM fitting algorithm. While the FMLS-ISRCKF algorithm and the transformer model achieve similar correction performance, the former offers superior computational efficiency. The proposed algorithm achieves the shortest processing time, with the time for a single mission controlled within 0.01 s.
As shown in Table 2, the proposed algorithm is affected by imperfections in the initial training data of the network, which slightly increases the mean error after correction. However, the effect still improved by 80.28% when compared with the value before the correction. In general, the camera’s LOS accuracy requires control of the error within 100 μrad, which is a comprehensive value that includes corrections for thermal deformation errors, random errors, and all other sources of error. Correcting only thermal deformation errors cannot fully meet this requirement. However, the proposed correction method can reduce over 80% of the error, which significantly exceeds the proportion of errors caused by STD in the overall error budget [38], thus demonstrating the superior performance of the algorithm. Furthermore, in terms of processing speed, the neural network demonstrates significantly higher efficiency compared to traditional algorithms. The proposed algorithm achieves the highest efficiency, with a processing time of less than 0.00002 s per data point. In summary, the proposed method achieves the best correction performance and the highest computational efficiency among all evaluated algorithms, making it the optimal choice in terms of overall performance.
Figure 13 shows the distribution of the camera LOS error for the last mission (96th mission). The errors are centrally distributed in a certain area and gradually dispersed around it, which seems to follow a normal distribution. Therefore, the method based on normal distribution was considered to explore the probability distribution of the camera LOS error after correction. The results are shown in Figure 14 and Table 3.
At a 95% confidence level, the corrected error Error_RA lies within (−8.7747, 2.2165) × 10⁻⁵ rad with a credibility of 95.44% (2σ). Similarly, the probability of Error_DE falling within the interval (−10.0052, 1.1232) × 10⁻⁵ rad reaches 95.44% (2σ).
The above experimental results show that the camera LOS error caused by the STD was effectively corrected using the proposed algorithm, and a significant improvement was observed when compared with the prior state.

4.3. Results of the Ablation Experiment

To verify the effectiveness of the improved algorithm, an ablation experiment was conducted; the results for missions 51–96 are shown in Figure 15. The learning rate of the BP neural network and the RIME-BP neural network was set to 0.001, and the convergence error was set to 10⁻⁷.
According to the experimental results, the LOS errors of the other methods exhibited a fluctuating downward trend relative to the proposed method, demonstrating unstable correction. This is because some of their network parameters are fixed and therefore not well suited to all of the data. By comparison, the proposed algorithm reduced the mean LOS error by 69.75% relative to the initial BP neural network, and by 32.61% and 39.08% relative to the RIME-BP and NRBO-BP neural networks, respectively. These results confirm that the optimizations of both the network structure and the network parameters are significant.
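For reference, a minimal single-hidden-layer BP regressor under the stated training settings (learning rate 0.001, convergence error 10⁻⁷) might look as follows; the hidden-layer width, initialization, and data shapes are illustrative assumptions, not the network configuration actually used in this study:

    import numpy as np

    def train_bp(X, Y, hidden=16, lr=1e-3, tol=1e-7, max_epochs=100000, seed=0):
        """X: (n, 2) feature angles; Y: (n, k) LOS error targets."""
        rng = np.random.default_rng(seed)
        W1 = rng.normal(0.0, 0.1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
        W2 = rng.normal(0.0, 0.1, (hidden, Y.shape[1])); b2 = np.zeros(Y.shape[1])
        prev_loss = np.inf
        for _ in range(max_epochs):
            H = np.tanh(X @ W1 + b1)            # forward pass
            P = H @ W2 + b2
            E = P - Y
            loss = 0.5 * np.mean(E ** 2)
            if abs(prev_loss - loss) < tol:     # convergence criterion (1e-7)
                break
            prev_loss = loss
            dP = E / X.shape[0]                 # gradient of the batch loss
            dW2, db2 = H.T @ dP, dP.sum(axis=0)
            dH = (dP @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
            dW1, db1 = X.T @ dH, dH.sum(axis=0)
            W1 -= lr * dW1; b1 -= lr * db1      # gradient-descent update
            W2 -= lr * dW2; b2 -= lr * db2
        return W1, b1, W2, b2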

4.4. Validation of the Extrapolation Ability of the Algorithm

To verify the algorithm's extrapolation performance and applicability, the mission data sequence was shuffled and the data of 50 missions were used for training. Data from missions excluded from training were then selected for validation: nine sets in total, each containing one mission. These nine datasets were chosen because their imaging environments are similar to those of the training datasets, i.e., their feature parameters θ_Sun-Sat and θ_Sun-LOS are similar. However, the time intervals between missions with similar thermal environments are relatively long (at least two days between two missions). Two of these missions are described in detail as examples, as shown in Table 4.
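As a sketch of how this feature-parameter similarity can be screened, the two angles follow from simple dot products of the vectors involved; the function names and the 5° tolerance below are hypothetical illustrations, not values taken from this paper:

    import numpy as np

    def angle_deg(u, v):
        """Angle between two 3-D vectors, in degrees."""
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

    def thermal_features(sun_vec, sat_pos, los_vec):
        """Return (theta_Sun-Sat, theta_Sun-LOS) for one observation epoch."""
        return angle_deg(sun_vec, sat_pos), angle_deg(sun_vec, los_vec)

    def is_similar(test_feat, train_feat, tol_deg=5.0):
        """Hypothetical similarity screen on the two feature angles."""
        return all(abs(a - b) <= tol_deg for a, b in zip(test_feat, train_feat))

    # Missions 49 and 89 in Table 4 differ by ~4.9 deg and ~3.1 deg in the
    # two features, so they would pass this 5 deg screen.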
The experimental results are displayed in Figure 16.
The experimental results indicate that the proposed algorithm exhibits a certain degree of extrapolation capability: thermal deformation errors are corrected effectively in imaging environments with similar feature parameters. Even when the interval between missions is as long as four months, the LOS correction remains effective. The quality of the correction, however, depends on the similarity of the thermal environments, with better correction achieved in more similar environments.
The purpose of this experiment is to verify that the effectiveness of the proposed algorithm under similar space thermal environments is largely unaffected by the time interval between missions. This helps avoid frequent on-orbit LOS correction, which is crucial for on-orbit detection, tracking, and positioning missions.

5. Conclusions

The space thermal environment in which low-orbit optical payloads operate undergoes significant changes, leading to irregular shifts in the camera LOS that hinder the accurate tracking and positioning of space targets. To address this issue, a novel correction method based on an improved BP neural network is proposed. First, an LOS determination model converting pixel coordinates to celestial coordinates was established, completing the transformation from star coordinates in the image plane to right ascension and declination. Two key parameters characterizing the space thermal environment were then introduced: (1) the angle θ_Sun-Sat between the solar vector and the satellite position vector; and (2) the angle θ_Sun-LOS between the solar vector and the camera LOS vector. The uncertain space thermal environment was quantitatively analyzed to identify the relationship between STD and LOS offsets. Finally, based on the improved BP neural network, data from up to N previous missions were used to correct the LOS offset of the (N+1)th mission. The experimental results showed that this method effectively corrects the camera LOS errors caused by STD, enabling long-term continuous observation, tracking, and precise positioning by low-orbit cameras. Moreover, owing to the similarity in the imaging principles of optical payloads, the proposed algorithm may be applicable to other medium- or low-orbit imaging payloads, giving it a certain level of universality.
However, a limitation of this algorithm is that its correction accuracy depends heavily on the size and diversity of the training dataset, a common drawback of algorithms that use neural networks for parameter fitting. We recognize that data from a single satellite may not fully encompass all possible sensor types and imaging environments. In future research, we will therefore expand the training dataset with data from more satellites and different orbital environments to validate the applicability of the proposed method in broader scenarios. We are also considering techniques such as transfer learning: by pre-training a model on data from existing satellites and then fine-tuning it for a specific satellite, we aim to reduce the reliance on large amounts of new data. We hope this approach will enable accurate correction of LOS errors caused by thermal deformation even when the dataset is limited.

Author Contributions

Conceptualization, Y.L., X.C. and P.R.; methodology, Y.L.; software, Y.L.; validation, Y.L.; formal analysis, Y.L.; investigation, Y.L.; resources, G.L.; data curation, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, X.C. and P.R.; visualization, Y.L.; supervision, X.C. and P.R.; project administration, X.C.; funding acquisition, P.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62175251.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LOS     Line of sight
STD     Space thermal deformation
EKF     Extended Kalman Filter
IOM     Interior orientation model
EOM     Exterior orientation model
NRBO    Newton–Raphson-Based Optimizer
TAO     Trap Avoidance Operator

References

  1. Jia, J.; Wang, Y.; Zhuang, X.; Yao, Y.; Wang, S.; Zhao, D.; Shu, R.; Wang, J. High spatial resolution shortwave infrared imaging technology based on time delay and digital accumulation method. Infrared Phys. Technol. 2017, 81, 305–312.
  2. Clemons, T.M., III; Chang, K.C. Effect of sensor bias on space-based bearing-only tracker. In Proceedings of the Signal Processing, Sensor Fusion, and Target Recognition XVII, Orlando, FL, USA, 17 April 2008; pp. 135–143.
  3. Clemons, T.M.; Chang, K.C. Bias correction using background stars for space-based IR tracking. In Proceedings of the 2009 12th International Conference on Information Fusion, Seattle, WA, USA, 6–9 July 2009; pp. 2028–2035.
  4. Li, X.; Yang, L.; Su, X.; Hu, Z.; Chen, F. A correction method for thermal deformation positioning error of geostationary optical payloads. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7986–7994.
  5. Li, J.; Tian, S.-F. An efficient method for measuring the internal parameters of optical cameras based on optical fibres. Sci. Rep. 2017, 7, 12479.
  6. De Lussy, F.; Greslou, D.; Dechoz, C.; Amberg, V.; Delvit, J.M.; Lebegue, L.; Blanchet, G.; Fourest, S. Pleiades HR in flight geometrical calibration: Location and mapping of the focal plane. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 519–523.
  7. Radhadevi, P.V.; Müller, R.; d'Angelo, P.; Reinartz, P. In-flight geometric calibration and orientation of ALOS/PRISM imagery with a generic sensor model. Photogramm. Eng. Remote Sens. 2011, 77, 531–538.
  8. Wu, A. SBIRS high payload LOS attitude determination and calibration. In Proceedings of the 1998 IEEE Aerospace Conference (Cat. No.98TH8339), Snowmass, CO, USA, 28 March 1998; Volume 245, pp. 243–253.
  9. Chen, J.; An, W.; Deng, X.; Yang, J.; Sha, Z. Space based optical staring sensor LOS determination and calibration using GCPs observation. In Proceedings of the Electro-Optical and Infrared Systems: Technology and Applications XIII, Edinburgh, UK, 21 October 2016; pp. 327–334.
  10. Wang, M.; Cheng, Y.; Chang, X.; Jin, S.; Zhu, Y. On-orbit geometric calibration and geometric quality assessment for the high-resolution geostationary optical satellite GaoFen4. ISPRS J. Photogramm. Remote Sens. 2017, 125, 63–77.
  11. Wang, M.; Cheng, Y.; Tian, Y.; He, L.; Wang, Y. A new on-orbit geometric self-calibration approach for the high-resolution geostationary optical satellite GaoFen4. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1670–1683.
  12. Wu, A. Precision attitude determination for LEO spacecraft. In Proceedings of the Guidance, Navigation, and Control Conference, San Diego, CA, USA, 29–31 July 1996; p. 3753.
  13. Leprince, S.; Musé, P.; Avouac, J.-P. In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2675–2683.
  14. Clemons, T.M.; Chang, K.-C. Sensor calibration using in-situ celestial observations to estimate bias in space-based missile tracking. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 1403–1427.
  15. Topan, H.; Maktav, D. Efficiency of orientation parameters on georeferencing accuracy of SPOT-5 HRG level-1A stereoimages. IEEE Trans. Geosci. Remote Sens. 2013, 52, 3683–3694.
  16. Wang, M.; Zhu, Y.; Jin, S.; Pan, J.; Zhu, Q. Correction of ZY-3 image distortion caused by satellite jitter via virtual steady reimaging using attitude data. ISPRS J. Photogramm. Remote Sens. 2016, 119, 108–123.
  17. Wang, T.; Zhang, G.; Li, D.; Tang, X.; Jiang, Y.; Pan, H.; Zhu, X.; Fang, C. Geometric accuracy validation for ZY-3 satellite imagery. ISPRS J. Photogramm. Remote Sens. 2013, 11, 1168–1171.
  18. Zhang, Y.; Zheng, M.; Xiong, J.; Lu, Y.; Xiong, X. On-orbit geometric calibration of ZY-3 three-line array imagery with multistrip data sets. IEEE Trans. Geosci. Remote Sens. 2013, 52, 224–234.
  19. Zhang, Y.; Zheng, M.; Xiong, X.; Xiong, J. Multistrip bundle block adjustment of ZY-3 satellite imagery by rigorous sensor model without ground control point. IEEE Geosci. Remote Sens. Lett. 2014, 12, 865–869.
  20. Guan, Z.; Zhang, G.; Jiang, Y.; Shen, X. Low-frequency attitude error compensation for the Jilin-1 satellite based on star observation. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–17.
  21. Chen, X.; Xing, F.; You, Z.; Zhong, X.; Qi, K. On-orbit high-accuracy geometric calibration for remote sensing camera based on star sources observation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–11.
  22. Liu, H.; Liu, C.; Xie, P.; Liu, S. Design of Exterior Orientation Parameters Variation Real-Time Monitoring System in Remote Sensing Cameras. Remote Sens. 2024, 16, 3936.
  23. Huang, D.; Wang, Z.; Gong, J.; Ji, H. On-orbit attitude planning for Earth observation imaging with star-based geometric calibration. Sci. China Technol. Sci. 2025, 68, 1220602.
  24. Guan, Z.; Jiang, Y.; Zhang, G.; Zhong, X. Geometric Calibration for the Linear Array Camera Based on Star Observation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 18, 368–384.
  25. Sun, B.; Zhang, L.; Liu, H.; Xiao, Y.; Fan, G. Star sensor low-frequency error correction method based on identification and compensation of interior orientation elements. Measurement 2025, 246, 116746.
  26. Wang, Y.; Dong, Z.; Wang, M. Attitude Low-Frequency Error Spatiotemporal Compensation Method for VIMS Imagery of GaoFen-5B Satellite. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5.
  27. Wang, M.; Yang, B.; Hu, F.; Zang, X. On-orbit geometric calibration model and its applications for high-resolution optical satellite imagery. Remote Sens. 2014, 6, 4391–4408.
  28. Weng, J.; Cohen, P.; Herniou, M. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980.
  29. Wang, J.; Shi, F.; Zhang, J.; Liu, Y. A new calibration model of camera lens distortion. Pattern Recognit. 2008, 41, 607–615.
  30. Salvi, J.; Armangué, X.; Batlle, J. A comparative review of camera calibrating methods with accuracy evaluation. Pattern Recognit. 2002, 35, 1617–1635.
  31. Li, X.; Su, X.; Hu, Z.; Yang, L.; Zhang, L.; Chen, F. Improved distortion correction method and applications for large aperture infrared tracking cameras. Infrared Phys. Technol. 2019, 98, 82–88.
  32. Zhang, Y.-J. Camera calibration. In 3-D Computer Vision: Principles, Algorithms and Applications; Springer: Berlin/Heidelberg, Germany, 2023; pp. 37–65.
  33. Yılmaztürk, F. Full-automatic self-calibration of color digital cameras using color targets. Opt. Express 2011, 19, 18164–18174.
  34. Wang, Y.; Chen, F. Calibration of inner orientation elements and distort correction for large diameter space mapping camera. Opt. Precis. Eng. 2016, 24, 675–681.
  35. Zhao, G.; Liu, H.; Pei, Y. Distort Correction for Large Aperture Off-Axis Optical System. Chin. J. Lasers 2010, 37, 157.
  36. Wang, M.; Cheng, Y.; Yang, B.; Jin, S.; Su, H. On-orbit calibration approach for optical navigation camera in deep space exploration. Opt. Express 2016, 24, 5536–5554.
  37. Jiang, L.; Li, X.; Li, L.; Yang, L.; Yang, L.; Hu, Z.; Chen, F. On-orbit geometric calibration from the relative motion of stars for geostationary cameras. Sensors 2021, 21, 6668.
  38. Jiang, F.; Wang, L.; Deng, H.; Zhu, L.; Kong, D.; Guan, H.; Liu, J.; Wang, Z. Thermal Deformation Analysis of a Star Camera to Ensure Its High Attitude Measurement Accuracy in Orbit. Remote Sens. 2024, 16, 4567.
  39. Xia, H.; Tang, X.; Mo, F.; Xie, J.; Li, X. Geographically-Informed Modeling and Analysis of Platform Attitude Jitter in GF-7 Sub-Meter Stereo Mapping Satellite. ISPRS Int. J. Geo-Inf. 2024, 13, 413.
  40. Meeus, J.H. Astronomical Algorithms; Willmann-Bell: Richmond, VA, USA, 1991.
  41. Li, C.; He, J.; Li, M.; Lai, P. Overview of research on satellite shadow model and calculation methods. Sci. Technol. Innov. 2021, 11, 5–8.
  42. Neta, B.; Vallado, D. On satellite umbra/penumbra entry and exit positions. J. Astronaut. Sci. 1998, 46, 91–103.
  43. Sowmya, R.; Premkumar, M.; Jangir, P. Newton-Raphson-based optimizer: A new population-based metaheuristic algorithm for continuous optimization problems. Eng. Appl. Artif. Intell. 2024, 128, 107532.
  44. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214.
  45. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Volume 30.
Figure 1. Schematic of the imaging system. O₁-ij is the pixel coordinate system, O₂-uv is the focal plane coordinate system, O₄-xyz is the ideal spatial coordinate system, and O₅-XcpYcpZcp is the camera basic prism coordinate system.
Figure 2. Schematic of the focal plane coordinate system O₂-uv and the ideal focal plane coordinate system O₃-uv.
Figure 3. Schematic of the EOM. O₆-XcYcZc is the camera coordinate system, O₇-XsYsZs is the satellite body coordinate system, O₈-XoYoZo is the orbital coordinate system, and O₉-XJYJZJ is the celestial coordinate system.
Figure 4. Diagram illustrating the angle θ_Sun-Sat between the solar vector and the satellite position vector, and the angle θ_Sun-LOS between the solar vector and the camera's LOS vector.
Figure 5. Diagram illustrating the direction division of the angle θ_Sun-Sat between the solar vector and the satellite's position vector.
Figure 6. Structure diagram of the BP neural network.
Figure 7. A stellar observation image.
Figure 8. Variation tendency of camera LOS errors. (a) Trend in the mean camera LOS errors for 96 observation missions; (b) variation in error for a single mission during the mission period highlighted in red in (a).
Figure 9. Trend in the mean camera LOS error with variations in the angles θ_Sun-Sat and θ_Sun-LOS.
Figure 10. Variation in mean LOS errors in the right ascension and declination directions.
Figure 11. Correction results. (a) The mean absolute error of the corrected camera LOS in missions 51–96. (b) The mean absolute error of the camera LOS decomposed in the right ascension and declination directions.
Figure 12. Comparison of correction results for different algorithms. (a) Errors of the camera LOS. (b) Algorithm running time.
Figure 13. Distribution of camera LOS error for the 96th mission. (a) The absolute error of camera LOS. (b) Absolute error decomposed in the right ascension and declination directions.
Figure 14. Probability distribution of the camera LOS error. (a) Probability distribution of Error_RA. (b) Probability distribution of Error_DE.
Figure 15. Ablation experiment results.
Figure 16. Extrapolation experiment results.
Table 1. Parameters of the experimental satellite.

Item                      Detailed Parameter
Orbit altitude (H)        7.19 × 10⁵ m
Orbital period (T)        100 min
Camera type               Space observation camera
Pixel size (dx & dy)      30 μm
Detector size (S)         512 × 512 pixels
Focal distance (f)        1430 mm
Field of view (F)         1.1° × 1.1°
Table 2. Comparison between different algorithms.

Method               Mean Error (rad)   Algorithm Speed (s/Data Point)
Proposed algorithm   0.001096           0.000016
FSM                  0.006704           0.000201
GM                   0.004937           0.000981
RFFM                 0.005415           0.001014
CA                   0.006078           0.010598
FMLS-ISRCKF          0.003673           0.000045
Transformer          0.003124           0.000082
Original errors      0.005559           /
Table 3. Analysis of the probability distribution of camera LOS error.

                      Error_RA              Error_DE
CL *                  95%                   95%
μ̂ (×10⁻⁵)            −3.2791               −4.4410
CI of μ̂ (×10⁻⁵)      (−3.6729, −2.8853)    (−4.8348, −4.0472)
σ̂ (×10⁻⁵)            2.7478                2.7821
CI of σ̂ (×10⁻⁵)      (2.4947, 3.0585)      (2.5102, 3.0776)
* CL is the confidence level. μ̂ and σ̂ are the estimated values of the mean and the standard deviation, respectively. CI is the confidence interval.
Table 4. Detailed information about the missions.

                               Test Set Data               Corresponding Training Set Data
Mission number                 49                          89
Time of mission                16 July 2023 16:33–16:38    21 November 2023 10:25–10:30
Mean value of θ_Sun-Sat (°)    106.69                      101.75
Mean value of θ_Sun-LOS (°)    89.34                       92.44
Mission number                 10                          3
Time of mission                21 June 2023 4:57–5:02      18 June 2023 4:46–4:50
Mean value of θ_Sun-Sat (°)    152.67                      154.26
Mean value of θ_Sun-LOS (°)    160.09                      162.91