Article

Attitude Estimation of Spinning Space Targets Utilizing Multistatic ISAR Joint Observation

Graduate School, Space Engineering University, Beijing 101416, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(13), 2263; https://doi.org/10.3390/rs17132263
Submission received: 25 April 2025 / Revised: 15 June 2025 / Accepted: 29 June 2025 / Published: 1 July 2025
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

When a space target malfunctions and is no longer controlled by its attitude control system, it usually tumbles in orbit and exhibits a slow spinning state. Accurately estimating the on-orbit attitude of spinning space targets is vital for safeguarding space assets and plays a significant role in tasks such as reentry observation and collision avoidance. Most existing methods estimate the attitude of space targets through long-term observation with a single inverse synthetic aperture radar (ISAR). However, this approach not only requires a long observation time but also fails to estimate the attitude of spinning targets. To address these limitations, this paper proposes a novel approach for estimating the attitude of spinning space targets that utilizes joint observations from multiple ISAR stations. Specifically, the proposed method fully exploits the projection principle of ISAR imaging and uses an ISAR high-resolution network (ISAR-HRNet) to automatically extract the projection features of typical components of the target. Then, the analytical expressions for the target’s instantaneous attitude and spin vector under the multi-station observation imaging projection model are derived. Based on the extracted features of the typical components, the lengths, orientations, and spin vector of the space target are determined. Importantly, the proposed method achieves attitude estimation of spinning space targets within a single observation period, without manual intervention or prior information about the target’s three-dimensional (3D) model. Additionally, the analytical method for solving the spin vector offers high efficiency and accuracy. Finally, the effectiveness of the proposed attitude estimation algorithm is verified by experiments on simulated data, and the performance of the ISAR-HRNet is also tested in key point extraction experiments using measured data.

1. Introduction

With the continuous innovation of aerospace technology and the deepening development of space exploration activities, an increasing number of space objects are being deployed into orbit. The on-orbit attitude of space objects reflects their operational status, which is crucial for safeguarding the security of space assets. Normally functioning space targets maintain three-axis stable operation in orbit under the action of the attitude control system. Nevertheless, when a space target malfunctions and gets out of control, the attitude control system fails, and it tumbles in space and exhibits a slow spinning state [1,2]. Accurately estimating the instantaneous attitude and spin motion parameters of such spinning space targets not only enables the prediction of the reentry trajectory of out-of-control targets, but also provides an important basis for the formulation of collision warning and avoidance strategy for space targets.
When a space target malfunctions and becomes uncontrollable, external observation methods are the only means of estimating its attitude. Specifically, compared with optical sensors, an ISAR has all-day and high-resolution imaging capabilities, which enable it to play a significant role in space target attitude estimation tasks [3,4]. The ISAR imaging principle dictates that the ISAR image is, in essence, a two-dimensional projection of the electromagnetic scattering properties of the target’s 3D structure onto the imaging projection plane (IPP) [5,6]. Leveraging this characteristic, researchers have developed a series of methods for estimating the attitude of space targets by combining ISAR images with the principle of projection imaging. According to the differences in the observation configurations of the stations, existing methods can be categorized into three types: (1) single-station ISAR attitude estimation, (2) multi-station ISAR attitude estimation, and (3) ISAR-optical joint attitude estimation.
The first category of methods estimates the attitude of space targets through long-term observation using single-station ISAR and the analysis of sequential imaging data. In [7,8], the watershed algorithm is employed to extract target contours from ISAR images. Subsequently, the attitude parameters are calculated by rotating the 3D model of the target while optimizing the similarity between theoretical contours and extracted contours. In [9], attitude estimation is achieved by minimizing the distance between the target’s key points and those extracted from the ISAR image. However, real-world ISAR images are frequently affected by noise interference and electromagnetic anisotropy, leading to inaccuracies in contour and key point extraction. To tackle this issue, Du et al. [10] propose an attitude estimation method that focuses on optimizing the energy accumulation value of ISAR image projection areas. This method obviates the need for contour and key point extraction and exhibits enhanced noise robustness. Notably, a significant drawback of all these aforementioned methods is their reliance on the target’s 3D model, which severely limits their effectiveness. To mitigate this dependence, Zhou et al. [11,12] use quadrilateral descriptors to identify rectangular components of targets in sequential ISAR images and solve attitude parameters using ISAR projection models. Although this approach mitigates feature extraction challenges to some extent, it still requires manual intervention and lacks full automation. The advancement of deep learning technology has offered a novel perspective on target feature extraction. Consequently, in [12,13,14,15], deep learning technology is introduced into the attitude estimation process of space targets, and intelligent methods are used to extract target features in ISAR images, which improves the automation of target feature extraction. Nevertheless, the single-station ISAR attitude estimation method still has limitations, such as long observation time and difficulty in effectively estimating the attitude of the spinning target.
The second category of methods leverages multi-station ISAR for the simultaneous observation of space targets, representing a significant advancement beyond single-station ISAR attitude estimation techniques. By capitalizing on the diverse observation perspectives created by the spatial distribution of multiple ISAR stations, this multi-station approach can significantly reduce the time required for attitude estimation and enable effective attitude determination even for spinning targets. Zhou et al. [16] propose a space target attitude estimation method based on multi-station ISAR observation. This method estimates the attitude of the target through three-station ISAR joint observation and achieves the synchronous estimation of the target component orientation and spin vector. Building on this work, Zhou et al. [17] further propose a method using bistatic spaceborne ISAR. This innovation overcomes the inherent limitations of ground-based ISAR observations, such as geographical constraints and restricted coverage, thereby expanding the scope of applicable scenarios. Notably, all the above methods require prior information about the lengths of the space target components, which greatly restricts their applicability.
The third category of methods utilizes both ISAR and optical sensors to simultaneously observe space targets, aiming to improve attitude estimation capability. Zhou et al. [18,19,20] propose a spinning target attitude estimation method based on single-station ISAR-optical joint observation. This method analyzes the target projection under single-station radar-optical joint observation and achieves, with a single station, attitude estimation performance that would otherwise require joint observation by at least three ISAR stations. However, this method has an obvious limitation: it relies on the idealized assumption that the spin vector of the target is perpendicular to the radar line of sight (LOS). In actual complex space exploration tasks, this assumption is often difficult to satisfy, which significantly restricts the engineering applicability of the method. To address this limitation, Du et al. [21] propose a spinning target attitude estimation method based on multi-station ISAR-optical joint observation. This method uses one optical sensor and two ISAR sensors to observe the target, does not require the ISAR to be co-located with the optical device, and does not constrain the spin direction of the target. Consequently, this method demonstrates broader applicability. However, because the imaging quality of optical sensors is highly sensitive to meteorological conditions and illumination, it is difficult to obtain effective observation data under cloudy and low-illumination conditions, which introduces a risk of failure for methods based on ISAR-optical joint observation.
To address the issues discussed above, an attitude estimation method for spinning space targets utilizing multistatic ISAR joint observation is proposed in this paper. This method employs three geographically distributed ISAR stations to simultaneously observe the target. First, a key point extraction network, namely the ISAR high-resolution network (ISAR-HRNet), is utilized to automatically extract the projection features of the key points of the space target from the ISAR images. Next, based on the range dimension projection features of the typical components of the space target, the lengths and orientations of these components are estimated. Then, an optimization function is established based on the azimuth dimension projection features of the typical components extracted from the ISAR images. Finally, the spin vector of the target is solved using an analytical method. Compared with existing works, the main contributions of this article are as follows.
  • Capability of Efficient Attitude Estimation for Spinning Targets: By deploying ISAR stations at three different locations to observe the target simultaneously, this method fully leverages the differences in observation perspectives from multiple stations, allowing accurate estimation of the attitude of spinning targets through a single observation, without the need for long-term target monitoring. This approach provides strong technical support for target surveillance in complex space environments.
  • Automated Feature Extraction: In this paper, we propose ISAR-HRNet, a network capable of achieving automated extraction and precise association of key points in the ISAR images of space targets. By using the ISAR-HRNet, the proposed method significantly reduces manual intervention in the attitude estimation process and improves the algorithm’s automation level.
  • Independent Target Model: The proposed method simultaneously optimizes the lengths and orientations of target components by leveraging the features extracted from ISAR images. Consequently, it eliminates the requirement for prior knowledge of the target’s 3D model, thereby significantly broadening the algorithm’s applicability across diverse scenarios.
  • Analytical Solution of Spin Vector: The analytical expression of the target’s spin vector optimization model is derived and established. Compared with existing algorithms, it effectively avoids becoming trapped in local optima during the iterative solution of the spin vector. This not only ensures the accuracy of the spin vector but also significantly shortens the estimation time.
The rest of the article is organized as follows: In Section 2, the joint observation model of multi-station ISAR for space targets is introduced, and the imaging geometry of the multi-station ISAR observation system and the motion modeling of the target during the observation time are elaborated in detail. Section 3 presents the proposed attitude estimation method, with a focus on the introduction of the ISAR-HRNet, instantaneous attitude estimation, and spin vector estimation. In Section 4, experimental results are provided to illustrate the effectiveness and robustness of the proposed method. Section 5 summarizes this paper and looks ahead to future work.

2. Fundamentals of Multi-Station ISAR Joint Observation for Space Targets

In this section, the fundamentals of the proposed algorithm are introduced. First, the imaging geometry of the multi-station ISAR observation system and the involved coordinate systems are introduced. On this basis, the motion model of spinning space targets is analyzed.

2.1. Imaging Geometry of Multi-Station ISAR Joint Observation System

To achieve the attitude estimation of a space target, we adopted a multi-station ISAR joint observation configuration. The multi-station ISAR joint observation system is composed of three ISAR stations located at different positions, and its imaging geometry is shown in Figure 1. When a space target enters the overlapping observable region of these three stations, they simultaneously observe the space target and obtain the imaging results from different perspectives. In order to describe the motion of the target uniformly, the orbital coordinate system (OCS) of the space target was established [22].
As shown in Figure 2, the origin of the OCS is located at the centroid O S of the space target, the X S -axis is tangent to the target orbit and points to the direction of motion, the Z S -axis points to the earth center, and the Y S -axis is determined by the right-hand rule. Due to the motion of the target in orbit, the line of sight (LOS) of the ISAR will change continuously during the imaging time, forming the relative motion between the space target and ISAR.
As shown in Figure 2, the LOS of the ISAR can be described by the LOS vector r in the OCS as follows:
$$\mathbf{r} = \left[\cos\theta_{\mathrm{LOS}}\cos\varphi_{\mathrm{LOS}},\ \cos\theta_{\mathrm{LOS}}\sin\varphi_{\mathrm{LOS}},\ \sin\theta_{\mathrm{LOS}}\right]^{T}$$
where $\mathbf{r}$ represents the LOS vector, and $\theta_{\mathrm{LOS}}$ and $\varphi_{\mathrm{LOS}}$ denote the elevation and azimuth angles of the LOS in the OCS, respectively. Generally, the LOS at the central moment of the imaging observation is referred to as the imaging LOS.
In the actual observation task, the LOS can be obtained from the observation information of the ISAR tracking system, and the LOS in the OCS can be obtained through coordinate transformation. In this paper, we calculate the LOS in the OCS based on the two-line orbital element (TLE) of the target and the position of the ISAR station [23].
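As a concrete illustration of Equation (1), the following minimal Python sketch (assuming NumPy; the function name is ours, not from the paper) converts the elevation and azimuth angles of the LOS in the OCS into the unit LOS vector:

    import numpy as np

    def los_vector(theta_los: float, phi_los: float) -> np.ndarray:
        """Unit LOS vector in the OCS from elevation (theta) and azimuth (phi), in radians."""
        return np.array([
            np.cos(theta_los) * np.cos(phi_los),   # X_S component
            np.cos(theta_los) * np.sin(phi_los),   # Y_S component
            np.sin(theta_los),                     # Z_S component
        ])

    # Example: LOS at 30 deg elevation, 45 deg azimuth
    r = los_vector(np.deg2rad(30.0), np.deg2rad(45.0))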

2.2. Motion Model of Spinning Space Targets

To characterize the motion of the spinning space targets during imaging intervals, the motion model of the spinning target is established. In this model, it is assumed that the space target rotates uniformly around a certain rotation axis within the imaging duration. Different spin modes can be simulated by adjusting the rotation axis and speed. Based on studies of Envisat, the spin period of the spinning target is generally around 160 s [24,25]. In contrast, the imaging time of the ISAR for low-orbit space targets, such as Envisat and TG-1, is several seconds. Therefore, the spin motion model can be used for the attitude estimation of low-orbit spinning targets.
According to the imaging principle, the ISAR achieves high range resolution by transmitting large-bandwidth signals and achieves azimuth resolution through the Doppler generated by the relative motion between the target and the ISAR [26]. In ISAR imaging, the IPP is determined by the range axis and the azimuth axis. The range axis is along the imaging LOS direction, and the azimuth axis is simultaneously perpendicular to the range axis and to the effective rotation vector (ERV) $\boldsymbol{\omega}_{\mathrm{eff}}$, i.e., the component of the motion vector between the target and the ISAR in the direction perpendicular to the LOS [27]. Different from three-axis stabilized space targets, the relative motion between spinning targets and the ISAR is composed of the orbital motion and the spin motion of the targets. Therefore, the IPP of a spinning target differs from that of a three-axis stabilized target. In order to accurately obtain the IPP of spinning targets, it is necessary to analyze the ERV of the spinning targets.
The ISAR imaging projection geometry of the spinning space targets is shown in Figure 3. During the imaging intervals, the LOS changes with the orbital motion of the target. This process can be equivalent to the turntable model in which the LOS does not change and the space target moves. Based on the turntable model, the target equivalent rotation vector ω LOS can be calculated as follows:
$$\boldsymbol{\omega}_{\mathrm{LOS}} = \frac{\arccos\left(\mathbf{r}_{\mathrm{start}}^{T}\mathbf{r}_{\mathrm{end}}\right)}{T_{\mathrm{CPI}}}\cdot\frac{\mathbf{r}_{\mathrm{start}}\times\mathbf{r}_{\mathrm{end}}}{\left\|\mathbf{r}_{\mathrm{start}}\times\mathbf{r}_{\mathrm{end}}\right\|}$$
where $T_{\mathrm{CPI}}$ is the coherent processing interval (CPI) for imaging, $\mathbf{r}_{\mathrm{start}}$ denotes the LOS at the start of the imaging interval, and $\mathbf{r}_{\mathrm{end}}$ refers to the LOS at the end of the imaging interval.
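The sketch below (NumPy assumed; the helper name is ours) mirrors Equation (2): the rotation angle of the LOS over the CPI sets the magnitude, and the normalized cross product of the start and end LOS sets the direction.

    import numpy as np

    def equivalent_rotation_vector(r_start: np.ndarray, r_end: np.ndarray, t_cpi: float) -> np.ndarray:
        """Equivalent rotation vector omega_LOS (Equation (2))."""
        cos_angle = np.clip(np.dot(r_start, r_end), -1.0, 1.0)  # guard against rounding
        angle = np.arccos(cos_angle)                             # total LOS rotation angle (rad)
        axis = np.cross(r_start, r_end)
        axis = axis / np.linalg.norm(axis)                       # unit rotation axis
        return (angle / t_cpi) * axis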
At the same time, the space target also has spin motion. Therefore, the synthetic motion vector ω tar of the target can be synthesized from the equivalent rotation vector ω LOS and the spin vector ω spin . The relationship of the motion vectors can be expressed as
$$\boldsymbol{\omega}_{\mathrm{tar}} = \boldsymbol{\omega}_{\mathrm{LOS}} + \boldsymbol{\omega}_{\mathrm{spin}}$$
$$\boldsymbol{\omega}_{\mathrm{tar}} = \omega_{\mathrm{tar}}\left[\cos\theta_{\mathrm{tar}}\cos\varphi_{\mathrm{tar}},\ \cos\theta_{\mathrm{tar}}\sin\varphi_{\mathrm{tar}},\ \sin\theta_{\mathrm{tar}}\right]^{T}$$
$$\boldsymbol{\omega}_{\mathrm{LOS}} = \omega_{\mathrm{LOS}}\left[\cos\theta_{\mathrm{LOS}}\cos\varphi_{\mathrm{LOS}},\ \cos\theta_{\mathrm{LOS}}\sin\varphi_{\mathrm{LOS}},\ \sin\theta_{\mathrm{LOS}}\right]^{T}$$
$$\boldsymbol{\omega}_{\mathrm{spin}} = \omega_{\mathrm{spin}}\left[\cos\theta_{\mathrm{spin}}\cos\varphi_{\mathrm{spin}},\ \cos\theta_{\mathrm{spin}}\sin\varphi_{\mathrm{spin}},\ \sin\theta_{\mathrm{spin}}\right]^{T}$$
where $\omega_{\mathrm{tar}}$, $\omega_{\mathrm{LOS}}$, and $\omega_{\mathrm{spin}}$ represent the rotation speeds of $\boldsymbol{\omega}_{\mathrm{tar}}$, $\boldsymbol{\omega}_{\mathrm{LOS}}$, and $\boldsymbol{\omega}_{\mathrm{spin}}$, respectively, and $\theta_{\mathrm{tar}}$, $\theta_{\mathrm{spin}}$, $\varphi_{\mathrm{tar}}$, and $\varphi_{\mathrm{spin}}$ refer to the elevation and azimuth angles of $\boldsymbol{\omega}_{\mathrm{tar}}$ and $\boldsymbol{\omega}_{\mathrm{spin}}$ in the OCS, respectively, which are defined in the same way as $\theta_{\mathrm{LOS}}$ and $\varphi_{\mathrm{LOS}}$.
According to the Doppler principle, the rotational motion perpendicular to the radar LOS induces range variations in target scattering centers along the LOS, leading to Doppler modulation in echoes. In contrast, rotational motion along the LOS causes no range changes, and hence no Doppler modulation. Thus, only the perpendicular motion component contributes to ISAR imaging, and the ERV of ISAR imaging can be written as
$$\boldsymbol{\omega}_{\mathrm{eff}} = \boldsymbol{\omega}_{\mathrm{tar}} - \left(\mathbf{r}^{T}\boldsymbol{\omega}_{\mathrm{tar}}\right)\mathbf{r}$$
$$\boldsymbol{\omega}_{\mathrm{eff}} = \omega_{\mathrm{eff}}\left[\cos\theta_{\mathrm{eff}}\cos\varphi_{\mathrm{eff}},\ \cos\theta_{\mathrm{eff}}\sin\varphi_{\mathrm{eff}},\ \sin\theta_{\mathrm{eff}}\right]^{T}$$
where $\omega_{\mathrm{eff}}$ represents the rotation speed of $\boldsymbol{\omega}_{\mathrm{eff}}$, and $\theta_{\mathrm{eff}}$ and $\varphi_{\mathrm{eff}}$ denote the elevation angle and azimuth angle of $\boldsymbol{\omega}_{\mathrm{eff}}$ in the OCS, respectively, which are defined in the same way as $\theta_{\mathrm{LOS}}$ and $\varphi_{\mathrm{LOS}}$.
We call the attitude of the target when it is three-axis stabilized the initial attitude, and the attitude of the target at the central moment of ISAR imaging the instantaneous attitude. Suppose that the target contains a total of $K$ scattering points, and the coordinates of the $k$-th scattering point in the OCS under the initial attitude are $\mathbf{B}_k = \left[x_k, y_k, z_k\right]^{T}$, $k = 1, 2, \ldots, K$. At the central moment of the imaging interval, due to the spin motion of the target, there is a rotation transformation relationship between the instantaneous attitude and the initial attitude of the space target, which can be expressed as
$$\left[x_k',\ y_k',\ z_k'\right]^{T} = \mathbf{R}_{\mathrm{init}}\left(\alpha,\beta,\gamma\right)\left[x_k,\ y_k,\ z_k\right]^{T}$$
where $\left[x_k', y_k', z_k'\right]^{T}$ are the coordinates of the $k$-th scattering point in the OCS under the instantaneous attitude, $\mathbf{R}_{\mathrm{init}}\left(\alpha,\beta,\gamma\right)$ refers to the coordinate transformation matrix, and $\alpha$, $\beta$, and $\gamma$ represent the roll angle, pitch angle, and yaw angle, respectively. $\mathbf{R}_{\mathrm{init}}$ can be calculated as follows:
$$\mathbf{R}_{\mathrm{init}}\left(\alpha,\beta,\gamma\right) = \underbrace{\begin{bmatrix}1 & 0 & 0\\ 0 & \cos\alpha & -\sin\alpha\\ 0 & \sin\alpha & \cos\alpha\end{bmatrix}}_{\mathbf{R}_{x}}\underbrace{\begin{bmatrix}\cos\beta & 0 & \sin\beta\\ 0 & 1 & 0\\ -\sin\beta & 0 & \cos\beta\end{bmatrix}}_{\mathbf{R}_{y}}\underbrace{\begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\ \sin\gamma & \cos\gamma & 0\\ 0 & 0 & 1\end{bmatrix}}_{\mathbf{R}_{z}}$$
where $\mathbf{R}_{x}$, $\mathbf{R}_{y}$, and $\mathbf{R}_{z}$ denote the rotation matrices around the $X$-axis, $Y$-axis, and $Z$-axis, respectively. Taking the central moment of imaging as the starting point of the slow time, the coordinates of the $k$-th scattering point of the target at $t_m$ are as follows:
$$\left[x_{k,m},\ y_{k,m},\ z_{k,m}\right]^{T} = \mathbf{R}_{\mathrm{tar}}\left(\theta_{m}\right)\left[x_k',\ y_k',\ z_k'\right]^{T}$$
where $\mathbf{R}_{\mathrm{tar}}\left(\theta_{m}\right)$ represents the rotation matrix corresponding to the synthetic motion of the target; $\theta_{m}$ is the angle of the target synthetic rotation motion from $t_m$ to the central moment of imaging, $\theta_{m} = \omega_{\mathrm{tar}} t_{m}$; $\omega_{\mathrm{tar}} = \left\|\boldsymbol{\omega}_{\mathrm{tar}}\right\|$ is the target synthetic rotational speed; and $m = -M/2, \ldots, M/2$ is the pulse index, where $M$ refers to the total number of pulses.
The rotation matrix $\mathbf{R}_{\mathrm{tar}}\left(\theta_{m}\right)$ can be expressed using the Rodrigues rotation formula [28] as
$$\mathbf{R}_{\mathrm{tar}}\left(\theta_{m}\right) = \cos\theta_{m}\,\mathbf{I} + \left(1-\cos\theta_{m}\right)\mathbf{n}\mathbf{n}^{T} + \sin\theta_{m}\,\mathbf{N}$$
where $\mathbf{n} = \left[n_x, n_y, n_z\right]^{T}$ represents the unit vector of $\boldsymbol{\omega}_{\mathrm{tar}}$, and $\mathbf{N}$ can be expressed as
$$\mathbf{N} = \begin{bmatrix}0 & -n_z & n_y\\ n_z & 0 & -n_x\\ -n_y & n_x & 0\end{bmatrix}$$
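A short NumPy sketch of Equations (12) and (13) is given below (the function name is ours); it builds the skew-symmetric matrix of the unit rotation axis and assembles the Rodrigues rotation matrix.

    import numpy as np

    def rodrigues_rotation(n: np.ndarray, theta: float) -> np.ndarray:
        """Rotation matrix R_tar(theta) about the unit axis n (Equations (12) and (13))."""
        n = n / np.linalg.norm(n)
        N = np.array([[0.0, -n[2], n[1]],
                      [n[2], 0.0, -n[0]],
                      [-n[1], n[0], 0.0]])        # skew-symmetric cross-product matrix
        return (np.cos(theta) * np.eye(3)
                + (1.0 - np.cos(theta)) * np.outer(n, n)
                + np.sin(theta) * N)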
According to the ISAR imaging projection principle, the theoretical projection position of the target’s k -th scattering point in the IPP can be expressed as
$$\begin{bmatrix} r_k\\ d_k \end{bmatrix} = \mathbf{P}_{\mathrm{ISAR}}\,\mathbf{R}_{\mathrm{init}}\left(\alpha,\beta,\gamma\right)\begin{bmatrix} x_k\\ y_k\\ z_k \end{bmatrix}$$
$$\mathbf{P}_{\mathrm{ISAR}} = \begin{bmatrix} \mathbf{r}^{T}\\ \dfrac{2\left(\mathbf{r}\times\boldsymbol{\omega}_{\mathrm{eff}}\right)^{T}}{\lambda} \end{bmatrix}$$
where $r_k$ and $d_k$ represent the projection of the $k$-th scattering point of the target in the range dimension and the azimuth dimension, respectively, $\mathbf{P}_{\mathrm{ISAR}}$ denotes the ISAR imaging projection matrix, and $\lambda$ is the wavelength of the transmitted signal. From Equations (14) and (15), it can be seen that $\mathbf{H}\left(\alpha,\beta,\gamma,\boldsymbol{\omega}_{\mathrm{tar}}\right) = \mathbf{P}_{\mathrm{ISAR}}\mathbf{R}_{\mathrm{init}}$ establishes the projection mapping relationship between the spinning target and its range-Doppler (RD) imaging result. For a three-axis stabilized target, since there is no spin motion, the motion vectors of the target satisfy
$$\boldsymbol{\omega}_{\mathrm{eff}} = \boldsymbol{\omega}_{\mathrm{LOS}}$$
$$\boldsymbol{\omega}_{\mathrm{tar}} = \boldsymbol{\omega}_{\mathrm{LOS}}$$
At this time, the synthetic motion rotation matrix and the ISAR imaging projection matrix of the target can be expressed as follows:
$$\mathbf{R}_{\mathrm{tar}}\left(\theta_{m}\right) = \mathbf{R}_{\mathrm{tar}}\left(\omega_{\mathrm{LOS}} t_{m}\right)$$
$$\mathbf{P}_{\mathrm{ISAR}} = \begin{bmatrix} \mathbf{r}^{T}\\ \dfrac{2\left(\mathbf{r}\times\boldsymbol{\omega}_{\mathrm{LOS}}\right)^{T}}{\lambda} \end{bmatrix}$$
Then, the projection mapping relationship between the target and the RD imaging result can be established by $\mathbf{H}\left(\alpha,\beta,\gamma,\boldsymbol{\omega}_{\mathrm{LOS}}\right) = \mathbf{P}_{\mathrm{ISAR}}\mathbf{R}_{\mathrm{init}}$.
So far, we have established an ISAR imaging projection model suitable for three-axis stabilized targets and spinning targets. The general ISAR imaging projection matrix can be expressed as
$$\mathbf{P}_{\mathrm{ISAR}} = \begin{bmatrix} \mathbf{v}_{\mathrm{range}}^{T}\\ \mathbf{v}_{\mathrm{doppler}}^{T} \end{bmatrix} = \begin{bmatrix} \mathbf{r}^{T}\\ \dfrac{2\left(\mathbf{r}\times\boldsymbol{\omega}_{\mathrm{tar}}\right)^{T}}{\lambda} \end{bmatrix}$$
where $\mathbf{v}_{\mathrm{range}}$ and $\mathbf{v}_{\mathrm{doppler}}$ represent the range dimension projection vector and the azimuth dimension projection vector, respectively. For the sake of convenience in expression, we only analyze the imaging projection model of a single ISAR station; the ISAR imaging projection model for the joint observation of three ISAR stations is presented in Section 3.
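The following NumPy sketch (helper names are ours, not from the paper) assembles the single-station projection model of Section 2: the ERV of Equation (7), the projection matrix of Equation (20), and the projection of a scattering point as in Equation (14).

    import numpy as np

    def effective_rotation_vector(r: np.ndarray, omega_tar: np.ndarray) -> np.ndarray:
        """ERV: component of omega_tar perpendicular to the LOS (Equation (7))."""
        return omega_tar - np.dot(r, omega_tar) * r

    def projection_matrix(r: np.ndarray, omega_eff: np.ndarray, wavelength: float) -> np.ndarray:
        """2x3 ISAR projection matrix P_ISAR (Equations (15) and (20)).
        Row 1: range projection vector (the LOS); row 2: Doppler projection vector."""
        v_range = r
        v_doppler = 2.0 * np.cross(r, omega_eff) / wavelength
        return np.vstack([v_range, v_doppler])

    def project_point(p_isar: np.ndarray, r_init: np.ndarray, point: np.ndarray) -> np.ndarray:
        """Range/Doppler projection [r_k, d_k] of one scattering point (Equation (14))."""
        return p_isar @ (r_init @ point)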

3. Attitude Estimation of Spinning Space Targets Utilizing Multistatic ISAR Joint Observation

The attitude of a spinning space target can be described by its instantaneous attitude and spin vector. The instantaneous attitude of a space target can be determined by the orientations of typical components such as the solar panels and the main body. After joint observation by the multi-station ISAR system, key point features are extracted from the high-resolution ISAR images of each station. Then, based on the range dimension features of the key points, the least squares method is used to estimate the orientations and lengths of the typical components, and thus, the instantaneous attitude of the space target is obtained. Finally, based on the azimuth dimension features of the key points, an optimization model for multi-station imaging projection is established to solve for the spin vector of the target.
The process of the proposed method is shown in Figure 4 and includes the following steps: (1) imaging processing; (2) projection matrix calculation; (3) feature extraction; (4) instantaneous attitude estimation; and (5) spin vector estimation. For the steps of ISAR imaging processing and projection matrix calculation, the corresponding processing methods can be found in [29,30,31,32,33,34]. We will not elaborate on them here.
To comprehensively elucidate the proposed attitude estimation method, this section is structured as follows. First, we provide a concise definition of the key points of space targets. Subsequently, we delve into the detailed introduction of the proposed key point extraction network, namely ISAR-HRNet. Following that, the methodologies for instantaneous attitude estimation and spin vector estimation of the target are discussed. The section concludes with a summary of the presented content.

3.1. Definition of Key Points

As analyzed in Section 2, the ISAR imaging of space targets satisfies the projection mapping relationship. Therefore, we can extract the projection features of typical components from the ISAR images and combine the imaging projection relationship to achieve the attitude estimation of space targets. As depicted in Figure 5, space targets usually consist of two key components, the solar panel and the main body, which can be used as typical features for attitude estimation.
In the OCS, the main body and the solar panel can both be represented by 3D vectors defined in the same way as LOS:
$$\mathbf{l}_{b} = L_{b}\left[\cos\theta_{b}\cos\varphi_{b},\ \cos\theta_{b}\sin\varphi_{b},\ \sin\theta_{b}\right]^{T}$$
$$\mathbf{l}_{p} = L_{p}\left[\cos\theta_{p}\cos\varphi_{p},\ \cos\theta_{p}\sin\varphi_{p},\ \sin\theta_{p}\right]^{T}$$
where $L_{b}$ and $L_{p}$ represent the lengths of the main body and the solar panel, respectively, and $\theta_{b}$, $\theta_{p}$, $\varphi_{b}$, and $\varphi_{p}$ are the elevation and azimuth angles of $\mathbf{l}_{b}$ and $\mathbf{l}_{p}$, which are defined in the same way as $\theta_{\mathrm{LOS}}$ and $\varphi_{\mathrm{LOS}}$.
As shown in Figure 5a, the main body and the solar panel can be described by four key points $p_q$, $q = 1, 2, 3, 4$: $p_1$ and $p_2$ are the two endpoints of the symmetry axis of the main body, and $p_3$ and $p_4$ denote the two endpoints of the symmetry axis of the solar panel. The positions of the four key points in the ISAR image are shown in Figure 5b. The red numbers in Figure 5b indicate the indices of the key points. Then, $\mathbf{l}_{b}$ and $\mathbf{l}_{p}$ can be expressed as
$$\mathbf{l}_{b} = \mathbf{B}_{p_1} - \mathbf{B}_{p_2}$$
$$\mathbf{l}_{p} = \mathbf{B}_{p_3} - \mathbf{B}_{p_4}$$
where $\mathbf{B}_{p_1}$, $\mathbf{B}_{p_2}$, $\mathbf{B}_{p_3}$, and $\mathbf{B}_{p_4}$ represent the coordinates of key points $p_1$, $p_2$, $p_3$, and $p_4$ in the OCS, respectively.
According to Equations (14) and (20), the projected lengths of l b and l p in the RD image can be written as
$$\begin{bmatrix} r_{b}\\ d_{b} \end{bmatrix} = \begin{bmatrix} \mathbf{v}_{\mathrm{range}}^{T}\\ \mathbf{v}_{\mathrm{doppler}}^{T} \end{bmatrix}\mathbf{l}_{b}$$
$$\begin{bmatrix} r_{p}\\ d_{p} \end{bmatrix} = \begin{bmatrix} \mathbf{v}_{\mathrm{range}}^{T}\\ \mathbf{v}_{\mathrm{doppler}}^{T} \end{bmatrix}\mathbf{l}_{p}$$
where $r_{b}$ and $r_{p}$ represent the projection lengths of the main body and the solar panel in the range dimension, and $d_{b}$ and $d_{p}$ represent the projection lengths of the main body and the solar panel in the azimuth dimension. Here, $r_{b}$ and $r_{p}$ can be expressed as
$$r_{b} = r_{p_1} - r_{p_2}$$
$$r_{p} = r_{p_3} - r_{p_4}$$
$$r_{p_q} = \mathbf{v}_{\mathrm{range}}^{T}\mathbf{B}_{p_q},\quad q = 1, 2, 3, 4$$
$$d_{p_q} = \mathbf{v}_{\mathrm{doppler}}^{T}\mathbf{B}_{p_q},\quad q = 1, 2, 3, 4$$
where $r_{p_q}$ and $d_{p_q}$ denote the range dimension projection position and the azimuth dimension projection position of the key point $p_q$ in the ISAR image. By leveraging Equations (25)–(30), the imaging projection relationship between the projection features of the key points in the ISAR image and the vectors of typical components can be established.
To achieve accurate attitude estimation, it is necessary to extract the key points of the typical components in the ISAR image. Specifically, by precisely extracting the projection positions of the key point $p_q$ in both the range and azimuth dimensions of the ISAR image, a two-dimensional feature vector, denoted as $\mathbf{g}_q = \left[r_{p_q}, d_{p_q}\right]^{T}$, can be obtained. In this paper, the ISAR-HRNet is used to extract key points, and the specific details are introduced in Section 3.2.
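As a small illustration of Equations (27)–(30), the sketch below (NumPy assumed; the function name and the dictionary output are ours) turns the four extracted key point feature vectors into the projected lengths of the two components. The values here are in image cells; conversion to physical range/Doppler units is assumed to be handled elsewhere.

    import numpy as np

    def component_projections(keypoints: np.ndarray) -> dict:
        """Projected lengths of the main body and solar panel from the key point
        feature vectors g_q = [r_pq, d_pq] (Equations (27)-(30)).
        keypoints: array of shape (4, 2), rows ordered p1..p4, columns (range, doppler)."""
        r_b = keypoints[0, 0] - keypoints[1, 0]   # body, range dimension
        d_b = keypoints[0, 1] - keypoints[1, 1]   # body, azimuth (Doppler) dimension
        r_p = keypoints[2, 0] - keypoints[3, 0]   # panel, range dimension
        d_p = keypoints[2, 1] - keypoints[3, 1]   # panel, azimuth (Doppler) dimension
        return {"r_b": r_b, "d_b": d_b, "r_p": r_p, "d_p": d_p}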

3.2. Key Point Extraction Using ISAR-HRNet

In terms of two-dimensional image key point extraction, deep learning-based methods have demonstrated superior performance. The lightweight high-resolution network (Lite-HRNet), through its parallel multi-resolution architecture, maintains high-resolution feature maps throughout the feature extraction process, effectively avoiding the problem of deep feature loss caused by downsampling in traditional networks. This characteristic has enabled the Lite-HRNet to be widely applied in key point extraction tasks, such as human pose estimation and animal behavior analysis [35,36,37].
However, compared with optical images in natural scenes, ISAR images have significant particularities, including inherent characteristics such as blurred edges, large dynamic range, lack of texture information, and image sparsity [38,39]. Moreover, due to the complex motion of the space target, ISAR images obtained at different observation times often have different degrees of geometric distortion, such as rotation and scaling [40,41]. These challenges put forward higher requirements for the robustness and accuracy of the key point extraction algorithm. Consequently, the traditional method based on the Lite-HRNet is difficult to directly apply to the key point extraction task of ISAR images.
To address the above issues, this paper proposes an improved key point extraction network for ISAR images based on the Lite-HRNet, namely, ISAR-HRNet. The ISAR-HRNet optimizes the structure of the Lite-HRNet and can achieve accurate extraction of key points in ISAR images. The following subsections elaborate on the ISAR-HRNet in detail from aspects such as network structure, design of feature extraction, and enhancement modules.

3.2.1. Structure of ISAR-HRNet

To improve the performance of key point extraction from the ISAR images of space targets, this paper adopts the Lite-HRNet as the backbone network and further improves the network on this basis, proposing the ISAR-HRNet. The improvements include the following: (1) adding the deformable convolution atrous spatial pyramid pooling (DC-ASPP) module in Stage 1 to strengthen the network’s feature extraction ability by expanding the receptive field of the feature map; (2) integrating the spatial and channel synergistic attention (SCSA) [42] and coordinate attention (CA) [43] into the shuffle block to construct the feature extraction block (FEB), thereby further enhancing the important features in the output of the ISAR-HRNet and improving the accuracy of key point extraction; and (3) adding a feature enhancement (FE) stage after Stage 4 to further strengthen the important features in the output of the ISAR-HRNet and boost key point extraction accuracy. The network structure of the ISAR-HRNet is shown in Figure 6.
As illustrated in Figure 6, the ISAR-HRNet incorporates a multi-branch architecture to retain high-resolution features and facilitate the fusion of features with varying scale resolutions. The network comprises four main stages, with the number of branches expanding incrementally across each stage. Each branch generates a feature map corresponding to a distinct resolution. As the network depth progresses, the number of branches escalates upon entering subsequent stages, accompanied by a corresponding halving of the feature map resolution. This hierarchical, progressive design enables the efficient extraction and integration of multi-level features, optimizing the network’s performance in handling complex ISAR images.

3.2.2. Deformable Convolution Atrous Spatial Pyramid Pooling

As the network structure of the ISAR-HRNet reveals, the output of Stage 1 will influence Stage 2, Stage 3, and Stage 4 through different branches and feature fusion. The feature extraction capacity of Stage 1 will directly affect the extraction performance of the key point information of the entire network. To enhance the feature extraction ability of the network, the DC-ASPP module is introduced in Stage 1. The structure of DC-ASPP is shown in Figure 7.
The different branches of ASPP are composed of convolution layers with distinct scales, and the receptive fields of feature maps of different branches vary [44]. Therefore, ASPP can generate a richer feature map of the receptive field, which is helpful for the positioning and extraction of key points. At the same time, in order to further enhance the feature extraction ability of the network, we embed the deformable convolution (DConv) module before the dilated convolution branch in ASPP [45]. Compared with the ordinary convolution with a fixed convolution sampling grid, DConv can dynamically adjust the shape of the convolution sampling grid by learning the offset. This enables the convolution sampling points to adaptively match the deformations of the ISAR images of the target under different motion states, overcome the influence of image sparsity, and better locate the position of key points [46].
By introducing DC-ASPP, the ISAR-HRNet can not only capture richer contextual information but also adaptively adjust the sampling positions of the convolutional kernels to extract the key point features in complex ISAR images more accurately.
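To make the idea of DC-ASPP more concrete, the following PyTorch sketch shows one possible arrangement of a deformable convolution feeding parallel dilated convolution branches. The branch count, dilation rates, channel widths, and class names are illustrative assumptions; the exact configuration follows Figure 7 and is not reproduced here.

    import torch
    import torch.nn as nn
    from torchvision.ops import DeformConv2d

    class DeformableBranch(nn.Module):
        """One DC-ASPP branch: a deformable 3x3 convolution followed by a dilated 3x3 convolution."""
        def __init__(self, channels: int, dilation: int):
            super().__init__()
            # A plain conv predicts the 2*3*3 sampling offsets for the deformable conv.
            self.offset = nn.Conv2d(channels, 2 * 3 * 3, kernel_size=3, padding=1)
            self.dconv = DeformConv2d(channels, channels, kernel_size=3, padding=1)
            self.dilated = nn.Conv2d(channels, channels, kernel_size=3,
                                     padding=dilation, dilation=dilation)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            x = self.dconv(x, self.offset(x))   # sampling grid adapts to image content
            return self.act(self.dilated(x))

    class DCASPP(nn.Module):
        """Sketch of a DC-ASPP block: parallel branches with different dilation rates,
        fused by a 1x1 convolution back to the input channel count."""
        def __init__(self, channels: int, dilations=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(DeformableBranch(channels, d) for d in dilations)
            self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

        def forward(self, x):
            return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

    # Example: a 32-channel feature map keeps its spatial size through the block
    # y = DCASPP(32)(torch.randn(1, 32, 64, 64))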

3.2.3. Feature Extraction Block

Each branch of the ISAR-HRNet is composed of shuffle blocks, and the structure of shuffle blocks is shown in Figure 8a. However, the shuffle block has two deficiencies. First, when calculating the weights of cross-resolution branches, average pooling is used to generate low-resolution feature maps, which are likely to cause the loss of feature details. Second, spatial information is ignored when calculating the cross-resolution branch weight and the same resolution branch weight, which reduces the key point positioning ability of the network. Therefore, we introduce the enhanced weighted module (EWM) and CA in the shuffle block for feature enhancement and construct the feature extraction block (FEB). The structure of the FEB is shown in Figure 8b.
In the FEB, the EWM is used to calculate the weights of feature maps of different resolution branches. The structure of the EWM is shown in Figure 9. In the EWM, the feature maps of each resolution branch are first unified to the same resolution through the mixed pooling module (MPM). The structure of the MPM is shown in Figure 10. The MPM uses different pooling branches to capture spatial context features, highlight local salient features, and retain global features. The features of different branches can be better preserved after MPM processing.
The feature maps processed by the MPM are further input into the SCSA module to calculate the spatial weights of the cross-resolution branches. The SCSA module consists of two parts: shareable multi-semantic spatial attention (SMSA) and progressive channel-wise self-attention (PCSA) [42]. SMSA captures the multi-semantic spatial information of different resolution branches through multi-scale, depth-shared 1D convolution, effectively integrating global context dependencies and multi-semantic spatial priors. On the other hand, PCSA uses the self-attention mechanism to calculate the similarities and contributions among channels, thus reducing the semantic differences between the features of different resolution branches in SMSA. Through the processing of SCSA, the local and global spatial semantic information of different resolution branches can be effectively extracted, enhancing the ability to locate the key points of the target.
To further improve the accuracy of feature extraction and the positioning accuracy of key points, the feature map after EWM processing is again enhanced by CA [43]. Since CA considers both the relationship between channels and location information, it not only captures cross-channel information but also contains direction and location sensitive information, which makes the ISAR-HRNet more accurate in locating the key point features of the target.

3.2.4. Feature Enhancement Stage

In Stage 4 of the ISAR-HRNet, interpolation is needed to upsample the different resolution branches, and the output feature map for key point positioning is obtained by fusion. However, feature map upsampling by interpolation only uses information from a small pixel neighborhood, making it difficult to effectively capture the semantic information contained in the output features of the different resolution branches. Therefore, we add an FE stage after Stage 4, which can aggregate context information over a larger receptive field, enhance the semantic consistency of cross-branch features, prevent information loss, and improve the positioning accuracy of key points.
Figure 11 shows the structure of the proposed FE stage. The FE stage uses content-aware reassembly of features (CARAFE) [47] to upsample the different resolution branches. CARAFE achieves more intelligent upsampling based on feature content and can use feature information more effectively than traditional interpolation-based upsampling methods. At the same time, CA is used to further enhance the ability to locate the target key point features, and finally, the key point heat map that integrates the semantic information of the four different resolution branches is obtained. The heat map not only contains rich semantic details but also more accurately represents the position information of the key points.
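Since the network outputs one heat map per key point, a simple way to read out the key point coordinates is a per-channel argmax, as sketched below (NumPy assumed). This is only an illustrative decoding step; the paper's exact post-processing, such as any sub-cell refinement, may differ.

    import numpy as np

    def heatmaps_to_keypoints(heatmaps: np.ndarray) -> np.ndarray:
        """Decode key point (range, azimuth) positions from per-key-point heat maps.
        heatmaps: array of shape (K, H, W), one channel per key point.
        Returns an array of shape (K, 2) holding the (row, column) of each peak."""
        k, h, w = heatmaps.shape
        flat_idx = heatmaps.reshape(k, -1).argmax(axis=1)      # peak index per channel
        rows, cols = np.unravel_index(flat_idx, (h, w))
        return np.stack([rows, cols], axis=1)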

3.3. Instantaneous Attitude Estimation

In this part, we estimate the orientations and lengths of the main body and the solar panel based on the range dimension features of the target key points and then obtain the instantaneous attitude at the imaging moment.
Based on Equation (25), the projection characteristics of the main body can be expressed as
$$\underbrace{\begin{bmatrix} r_{b,1}\\ r_{b,2}\\ r_{b,3} \end{bmatrix}}_{\mathbf{R}_{b}} = \underbrace{\begin{bmatrix} \mathbf{v}_{\mathrm{range},1}^{T}\\ \mathbf{v}_{\mathrm{range},2}^{T}\\ \mathbf{v}_{\mathrm{range},3}^{T} \end{bmatrix}}_{\mathbf{V}}\mathbf{l}_{b}$$
where $r_{b,1}$, $r_{b,2}$, and $r_{b,3}$ represent the range dimension projection lengths of the main body in the images obtained by the three ISAR stations, respectively. Generally speaking, the LOSs of the three ISAR stations are not coplanar, so $\mathbf{V}$ is a full-rank matrix, that is, $\mathrm{rank}\left(\mathbf{V}^{T}\mathbf{V}\right) = \mathrm{rank}\left(\mathbf{V}\right) = 3$, and $\mathbf{V}^{T}\mathbf{V}$ is invertible. Based on Equation (31), the orientation vectors of the main body and the solar panel can be estimated as follows:
$$\hat{\mathbf{l}}_{b} = \left(\mathbf{V}^{T}\mathbf{V}\right)^{-1}\mathbf{V}^{T}\hat{\mathbf{R}}_{b}$$
$$\hat{\mathbf{l}}_{p} = \left(\mathbf{V}^{T}\mathbf{V}\right)^{-1}\mathbf{V}^{T}\hat{\mathbf{R}}_{p}$$
where $\hat{\mathbf{l}}_{b}$ and $\hat{\mathbf{l}}_{p}$ are the estimated values of $\mathbf{l}_{b}$ and $\mathbf{l}_{p}$, and $\hat{\mathbf{R}}_{b} = \left[\hat{r}_{b,1}, \hat{r}_{b,2}, \hat{r}_{b,3}\right]^{T}$ and $\hat{\mathbf{R}}_{p} = \left[\hat{r}_{p,1}, \hat{r}_{p,2}, \hat{r}_{p,3}\right]^{T}$ denote the range dimension projection lengths of the main body and solar panel extracted from the different ISAR images.
Then, the lengths and orientations of the main body and the solar panel can be obtained:
$$\hat{L}_{b} = \left\|\hat{\mathbf{l}}_{b}\right\|_{2}$$
$$\hat{\mathbf{n}}_{b} = \hat{\mathbf{l}}_{b} / \hat{L}_{b}$$
$$\hat{L}_{p} = \left\|\hat{\mathbf{l}}_{p}\right\|_{2}$$
$$\hat{\mathbf{n}}_{p} = \hat{\mathbf{l}}_{p} / \hat{L}_{p}$$
Based on Equations (34) to (37), the instantaneous attitude of the space target and the lengths of the typical components can be obtained; the relevant procedure can be found in [1].
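A compact sketch of this least-squares step (Equations (31)–(37)) is shown below, assuming NumPy; the helper name is ours. It uses np.linalg.lstsq instead of the explicit normal-equation inverse, which gives the same estimate for a full-rank V but is numerically more stable.

    import numpy as np

    def estimate_component(v_range: np.ndarray, r_proj: np.ndarray):
        """Estimate a component vector from its range-dimension projections (Eqs. (31)-(37)).
        v_range: (3, 3) matrix V whose rows are the range projection vectors of the 3 stations.
        r_proj:  (3,) vector of measured range projection lengths (e.g. R_b or R_p), in range units."""
        l_hat, *_ = np.linalg.lstsq(v_range, r_proj, rcond=None)  # least-squares solution of V l = R
        length = np.linalg.norm(l_hat)                            # estimated component length
        orientation = l_hat / length                              # unit orientation vector
        return l_hat, length, orientation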

3.4. Spin Vector Estimation

In Section 3.3, the instantaneous attitude and typical component length of the space target are obtained based on the range dimension projection length of the ISAR image of the three ISAR stations. As for the spin vector of the target, it needs to be determined by the azimuth dimension projection lengths of the typical components in the ISAR images from the three ISAR stations.
According to the imaging projection principle, combined with Equations (25) and (26), we obtain
$$\mathbf{v}_{\mathrm{doppler},1}^{T}\begin{bmatrix} \mathbf{l}_{p} & \mathbf{l}_{b} & \mathbf{r}_{1} \end{bmatrix} = \begin{bmatrix} d_{p,1} & d_{b,1} & 0 \end{bmatrix}$$
Generally speaking, $\mathrm{rank}\left(\begin{bmatrix} \mathbf{l}_{p} & \mathbf{l}_{b} & \mathbf{r}_{1} \end{bmatrix}\right) = 3$ holds. Therefore, the azimuth dimension projection vector $\mathbf{v}_{\mathrm{doppler},1}^{T}$ of ISAR station 1 can be estimated as follows:
$$\hat{\mathbf{v}}_{\mathrm{doppler},1}^{T} = \begin{bmatrix} \hat{d}_{p,1} & \hat{d}_{b,1} & 0 \end{bmatrix}\begin{bmatrix} \hat{\mathbf{l}}_{p} & \hat{\mathbf{l}}_{b} & \mathbf{r}_{1} \end{bmatrix}^{-1}$$
where $\hat{d}_{p,1}$ and $\hat{d}_{b,1}$ denote the Doppler dimension projection lengths of the solar panel and the main body extracted from the ISAR image, and $\hat{\mathbf{v}}_{\mathrm{doppler},1}$ is the estimated value of $\mathbf{v}_{\mathrm{doppler},1}$.
It can be known from Equation (20) that the ERV of the target for ISAR station 1 can be expressed as
$$\hat{\boldsymbol{\omega}}_{\mathrm{eff},1} = \frac{\lambda}{2}\,\hat{\mathbf{v}}_{\mathrm{doppler},1}\times\mathbf{r}_{1}$$
where $\hat{\boldsymbol{\omega}}_{\mathrm{eff},1}$ is the estimated value of $\boldsymbol{\omega}_{\mathrm{eff},1}$.
Similarly, we can estimate the azimuth dimension projection vector and the ERVs for ISAR station 2 and ISAR station 3:
$$\hat{\mathbf{v}}_{\mathrm{doppler},2}^{T} = \begin{bmatrix} \hat{d}_{p,2} & \hat{d}_{b,2} & 0 \end{bmatrix}\begin{bmatrix} \hat{\mathbf{l}}_{p} & \hat{\mathbf{l}}_{b} & \mathbf{r}_{2} \end{bmatrix}^{-1}$$
$$\hat{\mathbf{v}}_{\mathrm{doppler},3}^{T} = \begin{bmatrix} \hat{d}_{p,3} & \hat{d}_{b,3} & 0 \end{bmatrix}\begin{bmatrix} \hat{\mathbf{l}}_{p} & \hat{\mathbf{l}}_{b} & \mathbf{r}_{3} \end{bmatrix}^{-1}$$
$$\hat{\boldsymbol{\omega}}_{\mathrm{eff},2} = \frac{\lambda}{2}\,\hat{\mathbf{v}}_{\mathrm{doppler},2}\times\mathbf{r}_{2}$$
$$\hat{\boldsymbol{\omega}}_{\mathrm{eff},3} = \frac{\lambda}{2}\,\hat{\mathbf{v}}_{\mathrm{doppler},3}\times\mathbf{r}_{3}$$
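The per-station ERV estimation of Equations (38)–(44) can be sketched as follows (NumPy assumed; the function name is ours). Solving the transposed linear system is equivalent to the explicit matrix inverse in Equation (39).

    import numpy as np

    def estimate_erv(l_p: np.ndarray, l_b: np.ndarray, r_i: np.ndarray,
                     d_p: float, d_b: float, wavelength: float) -> np.ndarray:
        """Estimate the ERV of station i from azimuth-dimension projections (Eqs. (38)-(44)).
        l_p, l_b : estimated panel / body vectors in the OCS
        r_i      : unit LOS vector of station i
        d_p, d_b : measured azimuth (Doppler) projection lengths of panel and body,
                   converted from image cells to Doppler units."""
        A = np.column_stack([l_p, l_b, r_i])          # [l_p  l_b  r_i], assumed full rank
        rhs = np.array([d_p, d_b, 0.0])
        v_doppler = np.linalg.solve(A.T, rhs)         # v_doppler^T A = rhs  <=>  A^T v = rhs
        return 0.5 * wavelength * np.cross(v_doppler, r_i)   # Equation (40)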
In the rotational motion vectors of the target, ω LOS is completely perpendicular to the radar LOS, and all its motion components contribute to ISAR imaging. However, ω spin contains both components perpendicular to and along the LOS. Only the former induces the Doppler effect and contributes to ISAR imaging, while the latter is termed an ineffective rotational vector (IRV), as it generates no Doppler information and does not contribute to ISAR imaging. Therefore, the ERV of the target is composed of ω LOS and the perpendicular LOS component of ω spin . Combining with Equation (7), the ERV can be expressed as
$$\boldsymbol{\omega}_{\mathrm{eff},i} = \boldsymbol{\omega}_{\mathrm{LOS},i} + \boldsymbol{\omega}_{\mathrm{spin}} - \left(\mathbf{r}_{i}^{T}\boldsymbol{\omega}_{\mathrm{spin}}\right)\mathbf{r}_{i}$$
where $\boldsymbol{\omega}_{\mathrm{eff},i}$, $\boldsymbol{\omega}_{\mathrm{LOS},i}$, and $\mathbf{r}_{i}$ are the ERV, the target equivalent rotation vector, and the LOS vector for ISAR station $i$, respectively.
Then, the IRV $\boldsymbol{\omega}_{\mathrm{non},i}$ and the ERV generated by the target spin, $\boldsymbol{\omega}_{\mathrm{eff},i}^{\mathrm{spin}}$, for ISAR station $i$ can be expressed as
$$\boldsymbol{\omega}_{\mathrm{non},i} = \left(\mathbf{r}_{i}^{T}\boldsymbol{\omega}_{\mathrm{tar}}\right)\mathbf{r}_{i}$$
$$\boldsymbol{\omega}_{\mathrm{eff},i}^{\mathrm{spin}} = \boldsymbol{\omega}_{\mathrm{eff},i} - \boldsymbol{\omega}_{\mathrm{LOS},i}$$
Finally, the spin vector of the target for ISAR station $i$ is estimated to be
$$\boldsymbol{\omega}_{\mathrm{spin}} = \boldsymbol{\omega}_{\mathrm{eff},i}^{\mathrm{spin}} + \boldsymbol{\omega}_{\mathrm{non},i}$$
As Equation (46) reveals, ω non , i is generated by the spin motion and aligns with the direction of LOS. The geometric relationships of the motion vectors of the target for ISAR station i are shown in Figure 12.
It can be known from Equation (48) that the spin vector can be obtained according to ω non , i and ω eff , i spin , where ω eff , i spin can be obtained using Equation (47). According to Equation (48), when ω non , i changes along the LOS, the estimation result of ω spin will change along the straight line h i . The vector formed by the target centroid O S to any point on h i may be the estimated result of the target spin vector ω spin , such as O S Q and O S Q in Figure 12. At the same time, ω non , i , ω eff , i spin , and h i jointly determine the solution plane of ω spin . Since ω non , i cannot be determined, the estimated value of ω spin can vary within the solution plane. Consequently, a single ISAR is insufficient to determine ω spin uniquely.
Inspired by [21], we estimate the spin vector of the target through three-station ISAR joint observation. As shown in Figure 13, each ISAR station determines a solution plane of the target spin vector $\boldsymbol{\omega}_{\mathrm{spin}}$. Because $\boldsymbol{\omega}_{\mathrm{spin}}$ must lie within the solution planes, and it is identical for all ISAR stations, the estimate of $\boldsymbol{\omega}_{\mathrm{spin}}$ must lie along the intersection line of the three stations' solution planes, and its estimated value $\overrightarrow{O_{S}\hat{Q}}$ should be determined by the intersection point $\hat{Q}$ of the lines $h_1$, $h_2$, and $h_3$.
Since there will inevitably be errors in the feature extraction process, the solution planes of the three ISARs may not intersect in a single line; the relationships of the motion vectors in this case are shown in Figure 14a. As shown in Figure 14b, the spin vector estimates determined by the three ISAR stations are denoted as $\overrightarrow{O_{S}\hat{Q}_{1}}$, $\overrightarrow{O_{S}\hat{Q}_{2}}$, and $\overrightarrow{O_{S}\hat{Q}_{3}}$, respectively, where $\hat{Q}_{1}$, $\hat{Q}_{2}$, and $\hat{Q}_{3}$ are the points on lines $h_1$, $h_2$, and $h_3$ that are closest to $\hat{Q}$.
We denote $\overrightarrow{O_{S}\hat{Q}_{\mathrm{opt}}}$ as the optimal estimate of $\boldsymbol{\omega}_{\mathrm{spin}}$, so $\overrightarrow{O_{S}\hat{Q}_{\mathrm{opt}}}$ should be simultaneously close to $\overrightarrow{O_{S}\hat{Q}_{1}}$, $\overrightarrow{O_{S}\hat{Q}_{2}}$, and $\overrightarrow{O_{S}\hat{Q}_{3}}$. The sum of the squared distances between $\overrightarrow{O_{S}\hat{Q}}$ and $\overrightarrow{O_{S}\hat{Q}_{1}}$, $\overrightarrow{O_{S}\hat{Q}_{2}}$, and $\overrightarrow{O_{S}\hat{Q}_{3}}$ can be expressed as
$$E\left(\hat{Q}\right) = \left\|\overrightarrow{O_{S}\hat{Q}} - \overrightarrow{O_{S}\hat{Q}_{1}}\right\|^{2} + \left\|\overrightarrow{O_{S}\hat{Q}} - \overrightarrow{O_{S}\hat{Q}_{2}}\right\|^{2} + \left\|\overrightarrow{O_{S}\hat{Q}} - \overrightarrow{O_{S}\hat{Q}_{3}}\right\|^{2} = \left\|\overrightarrow{\hat{Q}\hat{Q}_{1}}\right\|^{2} + \left\|\overrightarrow{\hat{Q}\hat{Q}_{2}}\right\|^{2} + \left\|\overrightarrow{\hat{Q}\hat{Q}_{3}}\right\|^{2}$$
Therefore, the estimation of O S Q ^ opt is transformed into an optimization problem to minimize E Q ^ as follows:
$$\hat{Q}_{\mathrm{opt}} = \arg\min_{\hat{Q}} E\left(\hat{Q}\right)$$
In fact, the problem amounts to finding the point $\hat{Q}_{\mathrm{opt}}$ with the minimum sum of squared distances to the three lines $h_1$, $h_2$, and $h_3$. In 3D space, only two points are needed to determine a straight line. Since $\boldsymbol{\omega}_{\mathrm{non},i}$ is along the direction of $\mathbf{r}_{i}$, according to Equation (48), two arbitrary points $\mathbf{p}_{s,i} = \left[x_{s,i}, y_{s,i}, z_{s,i}\right]^{T}$ and $\mathbf{p}_{e,i} = \left[x_{e,i}, y_{e,i}, z_{e,i}\right]^{T}$ on the straight line $h_i$ satisfy
$$\overrightarrow{O_{S}\,\mathbf{p}_{s,i}} = \boldsymbol{\omega}_{\mathrm{eff},i}^{\mathrm{spin}} + \kappa_{s,i}\,\mathbf{r}_{i}$$
$$\overrightarrow{O_{S}\,\mathbf{p}_{e,i}} = \boldsymbol{\omega}_{\mathrm{eff},i}^{\mathrm{spin}} + \kappa_{e,i}\,\mathbf{r}_{i}$$
where $\kappa_{s,i}$ and $\kappa_{e,i}$ are the adjustment coefficients for the two candidate values of $\boldsymbol{\omega}_{\mathrm{non},i}$ corresponding to $\overrightarrow{O_{S}\,\mathbf{p}_{s,i}}$ and $\overrightarrow{O_{S}\,\mathbf{p}_{e,i}}$. By adjusting the values of $\kappa_{s,i}$ and $\kappa_{e,i}$, different values of $\boldsymbol{\omega}_{\mathrm{non},i}$ can be obtained.
Mathematically, the straight line $h_i$ can be described by a parametric equation with a single parameter as follows:
$$\mathbf{p}_{l,i} = \mathbf{p}_{s,i} + \eta_{i}\,\mathbf{u}_{i}$$
$$\mathbf{u}_{i} = \mathbf{p}_{e,i} - \mathbf{p}_{s,i} = \left[x_{u,i},\ y_{u,i},\ z_{u,i}\right]^{T}$$
where $\eta_{i}$ is the linear adjustment coefficient of $h_{i}$. By changing its value, different points on $h_{i}$ can be obtained.
Therefore, the equations of the three lines h 1 , h 2 , and h 3 are, respectively, as follows:
$$\mathbf{p}_{l,1} = \mathbf{p}_{s,1} + \eta_{1}\,\mathbf{u}_{1}$$
$$\mathbf{p}_{l,2} = \mathbf{p}_{s,2} + \eta_{2}\,\mathbf{u}_{2}$$
$$\mathbf{p}_{l,3} = \mathbf{p}_{s,3} + \eta_{3}\,\mathbf{u}_{3}$$
Suppose that $\left(x_1, y_1, z_1\right)$, $\left(x_2, y_2, z_2\right)$, and $\left(x_3, y_3, z_3\right)$ are points on $h_1$, $h_2$, and $h_3$, respectively. Then, the three-line linear system can be expanded in the following way:
$$\begin{aligned}
x_{1} - x_{u,1}\eta_{1} &= x_{s,1}, & y_{1} - y_{u,1}\eta_{1} &= y_{s,1}, & z_{1} - z_{u,1}\eta_{1} &= z_{s,1},\\
x_{2} - x_{u,2}\eta_{2} &= x_{s,2}, & y_{2} - y_{u,2}\eta_{2} &= y_{s,2}, & z_{2} - z_{u,2}\eta_{2} &= z_{s,2},\\
x_{3} - x_{u,3}\eta_{3} &= x_{s,3}, & y_{3} - y_{u,3}\eta_{3} &= y_{s,3}, & z_{3} - z_{u,3}\eta_{3} &= z_{s,3}
\end{aligned}$$
Then, the point with the minimum sum of squared distances to $h_1$, $h_2$, and $h_3$ can be obtained from the vector $\mathbf{m}$ in Equation (59). The first three elements of $\mathbf{m}$ are the coordinates of the nearest point to be solved, and the last three elements are the linear adjustment coefficients [48]. Combined with Equations (55)–(57), they represent the three points on $h_1$, $h_2$, and $h_3$ that are most adjacent to $\hat{Q}_{\mathrm{opt}}$, that is,
$$\mathbf{m} = \left[\hat{x},\ \hat{y},\ \hat{z},\ \hat{\eta}_{1},\ \hat{\eta}_{2},\ \hat{\eta}_{3}\right]^{T}$$
$$\hat{Q}_{\mathrm{opt}} = \left(\hat{x},\ \hat{y},\ \hat{z}\right)$$
$$\hat{\mathbf{p}}_{l,1} = \mathbf{p}_{s,1} + \hat{\eta}_{1}\,\mathbf{u}_{1}$$
$$\hat{\mathbf{p}}_{l,2} = \mathbf{p}_{s,2} + \hat{\eta}_{2}\,\mathbf{u}_{2}$$
$$\hat{\mathbf{p}}_{l,3} = \mathbf{p}_{s,3} + \hat{\eta}_{3}\,\mathbf{u}_{3}$$
where $\hat{\mathbf{p}}_{l,1}$, $\hat{\mathbf{p}}_{l,2}$, and $\hat{\mathbf{p}}_{l,3}$ are the estimated values of $\hat{Q}_{1}$, $\hat{Q}_{2}$, and $\hat{Q}_{3}$, and $\mathbf{m}$ can be solved by Equations (64)–(66) as follows:
$$\mathbf{m} = \left(\mathbf{M}^{T}\mathbf{M}\right)^{-1}\mathbf{M}^{T}\mathbf{D}$$
$$\mathbf{M} = \begin{bmatrix}
1 & 0 & 0 & -x_{u,1} & 0 & 0\\
0 & 1 & 0 & -y_{u,1} & 0 & 0\\
0 & 0 & 1 & -z_{u,1} & 0 & 0\\
1 & 0 & 0 & 0 & -x_{u,2} & 0\\
0 & 1 & 0 & 0 & -y_{u,2} & 0\\
0 & 0 & 1 & 0 & -z_{u,2} & 0\\
1 & 0 & 0 & 0 & 0 & -x_{u,3}\\
0 & 1 & 0 & 0 & 0 & -y_{u,3}\\
0 & 0 & 1 & 0 & 0 & -z_{u,3}
\end{bmatrix}$$
$$\mathbf{D} = \left[x_{s,1},\ y_{s,1},\ z_{s,1},\ x_{s,2},\ y_{s,2},\ z_{s,2},\ x_{s,3},\ y_{s,3},\ z_{s,3}\right]^{T}$$
Then, the optimal estimated value of the target spin vector can be represented as
$$\hat{\boldsymbol{\omega}}_{\mathrm{spin}} = \overrightarrow{O_{S}\hat{Q}_{\mathrm{opt}}}$$
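The least-squares step of Equations (58)–(67) can be sketched as follows (NumPy assumed; the function name is ours). Per Equations (51)–(54), $\mathbf{p}_{s,i}$ can be taken as $\boldsymbol{\omega}_{\mathrm{eff},i}^{\mathrm{spin}}$ (i.e., $\kappa_{s,i}=0$) and $\mathbf{u}_{i}$ as $\mathbf{r}_{i}$, since the IRV lies along the LOS.

    import numpy as np

    def spin_vector_from_lines(p_s: np.ndarray, u: np.ndarray) -> np.ndarray:
        """Point closest to three lines in the least-squares sense (Eqs. (58)-(67)).
        p_s: (3, 3) array, row i is a point p_{s,i} on line h_i
        u:   (3, 3) array, row i is the direction vector u_i of line h_i
        Returns the optimal spin vector estimate O_S Q_opt (origin at the target centroid)."""
        eye = np.eye(3)
        M = np.vstack([np.hstack([eye, np.zeros((3, 3))]) for _ in range(3)])
        for i in range(3):
            M[3 * i:3 * i + 3, 3 + i] = -u[i]          # column for the line parameter eta_i
        D = p_s.reshape(-1)                             # stacked right-hand side
        m, *_ = np.linalg.lstsq(M, D, rcond=None)       # m = [x, y, z, eta_1, eta_2, eta_3]
        return m[:3]                                    # Q_opt, i.e. the estimated spin vector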

3.5. Summary

The proposed method can simultaneously obtain ISAR images of different perspectives through three-station ISAR joint observation. Combined with the ISAR imaging projection principle, the instantaneous attitude and spin vector of the target can be estimated. Firstly, the ISAR-HRNet is used to extract the key points of the target, and the range dimension features and azimuth dimension features of the key points are obtained. Then, the lengths and orientations of the target components are estimated based on the range dimension features of the key points. Finally, the spin vector is estimated based on the azimuth dimension features of the key points. The specific steps for attitude parameter estimation are given in Algorithm 1.
Algorithm 1: Attitude Estimation of Spinning Targets
1: Initialization. Initialize the LOS vectors $\mathbf{r}_1$, $\mathbf{r}_2$, and $\mathbf{r}_3$. Initialize the equivalent rotation vectors $\boldsymbol{\omega}_{\mathrm{LOS},1}$, $\boldsymbol{\omega}_{\mathrm{LOS},2}$, and $\boldsymbol{\omega}_{\mathrm{LOS},3}$. Extract the key point features $\left[r_{p_1}, d_{p_1}\right]^{T}$, $\left[r_{p_2}, d_{p_2}\right]^{T}$, $\left[r_{p_3}, d_{p_3}\right]^{T}$, and $\left[r_{p_4}, d_{p_4}\right]^{T}$ using the ISAR-HRNet;
2: Calculate $\hat{\mathbf{l}}_{b}$ and $\hat{\mathbf{l}}_{p}$ through Equations (32) and (33);
3: Calculate $\hat{L}_{b}$ and $\hat{\mathbf{n}}_{b}$ through Equations (34) and (35), and calculate $\hat{L}_{p}$ and $\hat{\mathbf{n}}_{p}$ through Equations (36) and (37);
4: Calculate $\hat{\boldsymbol{\omega}}_{\mathrm{eff},1}$, $\hat{\boldsymbol{\omega}}_{\mathrm{eff},2}$, and $\hat{\boldsymbol{\omega}}_{\mathrm{eff},3}$ through Equations (40), (43), and (44);
5: Calculate $\boldsymbol{\omega}_{\mathrm{eff},1}^{\mathrm{spin}}$, $\boldsymbol{\omega}_{\mathrm{eff},2}^{\mathrm{spin}}$, and $\boldsymbol{\omega}_{\mathrm{eff},3}^{\mathrm{spin}}$ through Equation (47);
6: Obtain the optimal estimate $\hat{\boldsymbol{\omega}}_{\mathrm{spin}}$ of the target spin vector based on Equations (59) to (67);
7: Return $\hat{\mathbf{n}}_{b}$, $\hat{\mathbf{n}}_{p}$, $\hat{L}_{b}$, $\hat{L}_{p}$, and $\hat{\boldsymbol{\omega}}_{\mathrm{spin}}$.

4. Experiments and Results

In order to verify the effectiveness of the proposed attitude estimation method, we designed and conducted a series of experiments. In Section 4.1, we explain the data used in the experiments. In Section 4.2, we analyze the effectiveness and robustness of the proposed method through experiments using simulated data. In Section 4.3, we verify the effectiveness of the ISAR-HRNet by using the publicly available measured ISAR data of space targets.

4.1. Data Description

Owing to the scarcity of measured ISAR data for space targets, this study uses simulated data for experimental verification. The physical optics (PO) method [49,50] is implemented to generate the ISAR echo of the target. The 3D model and two-line element (TLE) data of the target are sourced from publicly accessible space target databases [51,52]. Subsequently, based on the acquired 3D model and TLE data, after setting the locations of the ISAR stations, the ISAR echo is simulated via the PO method according to the imaging parameters. The detailed ISAR echo simulation process is illustrated in Figure 15.
TLE data are among the most commonly used data for orbit simulation of space targets. The first line of TLE data includes identification and time information, such as the target’s satellite catalog number, international designator, and element set epoch. The second line contains orbital parameters such as the orbit inclination, eccentricity, and right ascension of the ascending node [52]. Based on TLE data, we can accurately calculate the target’s orbital position in space within a given time period. In this paper, the TLE data of 6 targets are utilized for ISAR data simulation. The detailed parameters of the TLE data for each target are listed in Table 1.
Provided that multi-station time synchronization is satisfied, the spatial distribution of the stations also affects the attitude estimation performance. Generally, the condition number $C_V$ of $\mathbf{V}$ in Equation (32) can be used to evaluate the distribution of the multi-station observation system, and $C_V$ less than 500 can meet the task requirements [16]. In this paper, the deployment positions of the three ISAR stations employed in the simulation are configured as specified in Table 2 [21]. Based on the TLE data and the locations of the ISAR stations, Figure 16 shows the first visible arc of the different targets relative to the ISAR stations. It is worth noting that these arcs do not occur simultaneously; rather, the figure depicts only the geometric relationships among the targets and the ISAR stations within the Earth-centered, Earth-fixed (ECEF) coordinate system.
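A quick check of a candidate station geometry against the $C_V < 500$ criterion can be sketched as follows (NumPy assumed; the function name and threshold default are ours). Since the range projection vector of each station equals its LOS, the rows of V are simply the three unit LOS vectors.

    import numpy as np

    def station_geometry_ok(los_vectors: np.ndarray, threshold: float = 500.0) -> bool:
        """Check the multi-station geometry via the condition number of V.
        los_vectors: (3, 3) array whose rows are the unit LOS vectors of the three stations."""
        c_v = np.linalg.cond(los_vectors)   # 2-norm condition number of V
        return c_v < threshold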
The radar imaging parameters employed in the ISAR echo simulation are listed in Table 3. By utilizing diverse imaging parameters during the echo simulation, the generated echo data exhibit rich characteristics, which can effectively enhance the performance of the feature extraction network, so that our method can be applied to multi-station ISAR joint observation scenarios where each ISAR has different imaging parameters.
During the echo simulation process, the ideal positions of the key points in the IPP are calculated according to the imaging projection principle, followed by the automatic annotation of the key points. A total of 8 space targets with different structures are selected, and the examples of the 3D models and the corresponding ISAR images with annotated key points are shown in Figure 17, where the red numbers represent the index of the key points.
In this study, 3D models and TLE data of different space targets were combined to simulate the ISAR echo. Each space target was set to rotate around a specific axis, with the angle between the rotation axis and the $Z_S$-axis ranging from 0° to 30° and the rotational speed varying from 0.01 to 0.03 rad/s. Subsequently, based on these settings, echo simulation was carried out, and the key point extraction dataset was constructed. This dataset contains the echo data of 24,000 imaging apertures in total; specifically, the echo data of 3000 imaging apertures are included for each space target. To facilitate model training and evaluation, the dataset was partitioned at an 8:2 ratio, yielding a training set and a test set.

4.2. Validation of Simulated Data

To validate the effectiveness and robustness of the proposed algorithm, three experiments were conducted using simulated data. In the first experiment, simulations were performed based on the 3D model and TLE data of TG-1 to verify the effectiveness of the proposed method. Subsequently, the proposed algorithm was compared with existing methods to further demonstrate its superiority. Finally, the robustness of the proposed algorithm was thoroughly analyzed.

4.2.1. Effectiveness Validation

In this part, a three-station joint observation and imaging scenario was constructed based on the 3D model and actual orbit of TG-1 to validate the effectiveness of the proposed attitude estimation method for spinning targets. Specifically, the TLE data of TG-1 corresponds to TLE 6 in Table 1, and the geographical locations of the ISAR stations were set according to Table 2. Using TLE 6, the orbital data of TG-1 during the period from 20:00:00.000 Coordinated Universal Time (UTC) on 22 September 2016 to 02:00:00.000 UTC on 23 September 2016 were simulated. In the ECEF coordinate system, the geometric relationships between the four visible arcs of TG-1 and the three ISAR stations are shown in Figure 18.
For the experiment, the orbital data of ARC 6-1 was selected. When the target was in a three-axis stable state, the changes in the elevation angle and azimuth angle of the LOS vector of each ISAR station in the OCS are shown in Figure 19a–c. When the target was in a spinning state, TG-1 was set to rotate around the ZS-axis at a speed of 0.015 rad/s. By combining the orbital motion and spin motion of TG-1, the elevation angle and azimuth angle of the equivalent LOS in the OCS can be obtained, as shown in Figure 19d–f. Through comparative analysis, it can be seen that spin motion significantly changes the variation characteristics of the LOS, which in turn affects the results of ISAR imaging.
Next, three imaging apertures were selected from ARC 6-1, namely aperture 1, aperture 2, and aperture 3, and the center instants of each aperture were t1, t2, and t3, respectively. The corresponding LOS information is indicated by the dot markings in Figure 19, and the observed ISAR images corresponding to the three apertures are shown in Figure 20.
Subsequently, the key points were extracted from the ISAR images by the ISAR-HRNet, and the results are listed in Table 4. The data within the parentheses of each cell in Table 4 represent the number of range cells and the number of azimuth cells, respectively. It is worth mentioning that only part of the average offset greater than 0.5 cells was retained. As can be seen from Table 4, the average offset of the key points extracted by the ISAR-HRNet is within 3 cells, indicating that the ISAR-HRNet can accurately extract the key points of the target.
Based on the results of the key point extraction, the proposed method was employed to estimate the attitude of TG-1 in the three imaging apertures. The geometric relationships between the actual spin vector and the estimated spin vector in different apertures are illustrated in Figure 21.
Additionally, the estimation results and estimation errors of the different apertures are presented in Table 5. Specifically, the length errors of the typical components and the spin speed error are quantified by calculating the absolute difference between the estimated values and the true values, and the orientation errors of the typical components and the spin orientation error are determined by measuring the included angle between the estimated values and the true values.
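These error metrics can be reproduced with a few lines of code (NumPy assumed; the function names are ours): an absolute difference for lengths and spin speed, and the included angle between vectors for orientation errors.

    import numpy as np

    def length_error(est: float, true: float) -> float:
        """Absolute error of a length (or spin speed) estimate."""
        return abs(est - true)

    def orientation_error_deg(v_est: np.ndarray, v_true: np.ndarray) -> float:
        """Included angle (degrees) between an estimated and a true orientation vector."""
        cos_ang = np.dot(v_est, v_true) / (np.linalg.norm(v_est) * np.linalg.norm(v_true))
        return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))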
The attitude estimation results in Table 5 show that in different imaging apertures, all the length errors of typical components obtained by the proposed algorithm are less than 0.2 m. All the estimation errors of the orientation vectors of typical components are less than 1°. Moreover, the errors of the spin speed and spin direction vector are less than 0.0003 rad/s and 2°, respectively. These results fully verify the effectiveness of the proposed attitude estimation method.

4.2.2. Comparison Analysis

To comprehensively assess the performance of the proposed method, we compared it with seven alternative methods that differ in key point extraction, target motion modeling, and parameter solving strategy; the specific configurations are detailed in Table 6. Methods 1 to 5 use the ResNet key point extraction network (RKPEN) [17], key point R-CNN (KRCNN) [22], the parallel multi-scale key point extraction network (PMKN) [53], the stacked hourglass key point extraction network (SHKEN) [54], and Lite-HRNet [35], respectively, for key point extraction; all of them model the spin motion of the target and solve the spin parameters analytically. Method 6, Method 7, and the proposed algorithm all utilize the ISAR-HRNet for key point extraction. In terms of target motion modeling, Method 6 treats the target as three-axis stabilized, while Method 7 and the proposed algorithm both model the spin motion. Regarding the solution of the spin parameters, Method 6 and the proposed algorithm use the analytical method, while Method 7 uses an intelligent optimization algorithm.
We then conducted three sets of comparative experiments based on these eight configurations. First, we compared Methods 1 to 5 with the proposed method to examine the accuracy differences among the key point extraction networks. Second, using Method 6 as a control, we analyzed how the motion-modeling assumption affects attitude estimation for spinning targets. Finally, by comparing with Method 7, we assessed the advantage of the analytical method over the intelligent optimization algorithm in solving the spin parameters.
Firstly, we compared the accuracy of the different key point extraction methods on the test set. All key point extraction networks were trained in an environment equipped with a single RTX 5000 graphics card and CUDA 11.3. Since the training procedures of the comparison methods are detailed in the corresponding literature, they are not repeated here; instead, we summarize the training configuration of the proposed ISAR-HRNet. The batch size was set to 10, the number of training iterations to 150, the initial learning rate to 0.002, and the momentum factor and weight decay to 0.9 and 0.0001, respectively. With this configuration, training the model took a total of 15 h.
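A minimal PyTorch sketch of this training configuration is given below. The stand-in convolutional model, dummy tensors, and the heatmap-regression loss are assumptions for illustration only, and the 150 reported training iterations are interpreted here as epochs.

```python
import torch
import torch.nn as nn

# Stand-in model: the full ISAR-HRNet architecture is not reproduced here.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 4, kernel_size=3, padding=1),   # 4 output channels: one heatmap per key point
)

# Hyperparameters as reported in the text.
optimizer = torch.optim.SGD(model.parameters(), lr=0.002, momentum=0.9, weight_decay=1e-4)
criterion = nn.MSELoss()            # heatmap-regression loss, assumed

batch_size, num_epochs = 10, 150    # "150 training iterations" interpreted as epochs here

for epoch in range(num_epochs):
    # A real run would iterate over an ISAR key point dataset; dummy tensors keep this self-contained.
    images = torch.randn(batch_size, 1, 128, 128)
    target_heatmaps = torch.randn(batch_size, 4, 128, 128)
    optimizer.zero_grad()
    loss = criterion(model(images), target_heatmaps)
    loss.backward()
    optimizer.step()
```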
The statistical probability distributions of the average key point offsets for the different methods are shown in Figure 22. To aid the comparison, a red dashed line is drawn at an offset of 5 cells in each subfigure. The results show that all the methods keep the extraction offsets within 9 cells. The offsets of the comparison algorithms are mostly distributed within 7 cells, whereas the offsets of the ISAR-HRNet are largely concentrated within 5 cells. The means and variances of the extraction offsets of the different methods are shown in Figure 23; both the mean and the variance of the ISAR-HRNet offsets are smaller than those of the comparison algorithms. These results confirm the accuracy of the ISAR-HRNet in key point extraction.
Subsequently, we conducted attitude estimation experiments using the data from the three imaging apertures in ARC 6-1 to further analyze the attitude estimation performance of the different methods; the results are presented in Table 7. In terms of average offsets, the proposed algorithm remained within a small and stable range in both the range and azimuth dimensions. In contrast, although the comparison methods occasionally achieved more accurate extraction than the proposed algorithm in individual apertures, their extraction errors were larger overall and varied considerably across apertures. To compare the attitude estimation performance more intuitively, the most accurate result in each aperture is highlighted in Table 7. In aperture 1 and aperture 3, the proposed algorithm achieved the best attitude estimation because its feature extraction offsets were the smallest in both the range and azimuth dimensions. In aperture 2, however, Method 5 outperformed the proposed algorithm in instantaneous attitude estimation and Method 4 performed better in spin parameter estimation, because the range-dimension offset of Method 5 and the azimuth-dimension offset of Method 4 were smaller than those of the proposed algorithm; the underlying reasons are discussed in detail in Section 4.2.3. Taking the error statistics of the three apertures together, the proposed algorithm obtained consistently accurate estimates in every aperture, which verifies its effectiveness and stability.
Next, to assess the impact of target motion modeling on attitude estimation performance, we employed Method 6 and the proposed algorithm to estimate the attitude of the target using the multi-station joint observation ISAR data. Method 6 is applicable to both single-station and multi-station observation modes. In the single-station mode, the data from each ISAR station are processed independently and the attitude estimation framework described in [12] is used; in the multi-station mode, the instantaneous attitude estimation framework outlined in Section 3.3 is used to estimate the instantaneous attitudes of the three imaging apertures. To satisfy the long-term observation requirement of Method 6 (single-station mode), its attitude estimation used the data from all imaging apertures in ARC 6-1. For Method 6 (multi-station mode) and the proposed algorithm, the average attitude estimation errors over imaging apertures 1–3 were adopted as the final results. The attitude estimation results of the three methods are presented in Table 8.
As shown in Table 8, Method 6 (single-station mode) does not account for the attitude change caused by the spin motion, so its estimation model is mismatched and the resulting estimates deviate too much from the truth to meet the task requirements. Method 6 (multi-station mode) uses the simultaneous observations of the three ISAR stations and can therefore estimate the instantaneous attitude of the target; however, because the spin motion is still not modeled, the spin parameters cannot be estimated. The proposed algorithm incorporates the target's spin motion into the attitude estimation framework and can accurately estimate the lengths and orientations of the typical components as well as the spin vector, making it applicable to both spinning and three-axis stabilized targets.
Lastly, to investigate the influence of the parameter solving method on attitude estimation performance, a comparative experiment was conducted between Method 7 and our proposed method. Method 7 employs the chaotic grasshopper optimization algorithm (CGOA) [16] to estimate the spin vector. In the experiment, the attitude estimation was performed 1000 times based on the ISAR data of three apertures. The performance and computational times of the two methods are shown in Table 9. It should be noted that since the two methods only differ in the way of solving the spin vector parameters, their estimation accuracies for the target’s component length and orientation are identical. Therefore, the comparison is focused solely on the estimation performance of the target’s spin speed and spin axis orientation.
As shown in Table 9, Method 7 estimates the spin parameters with an intelligent optimization algorithm, which requires repeated iterative optimization and tends to converge to local optima, resulting in lower estimation accuracy and considerable time consumption. In contrast, the proposed algorithm evaluates a closed-form expression for the spin vector, so its parameter estimates are more accurate and are obtained far more quickly; its running time is about two orders of magnitude shorter than that of Method 7. The proposed method therefore shows clear advantages in both computational efficiency and estimation accuracy.
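To make the efficiency gap concrete, the toy sketch below solves an analogous geometric problem, finding the common direction of two solution planes in the spirit of Figure 14 (not the paper's actual Equations (39)–(44)), once in closed form via a cross product and once with a general-purpose iterative optimizer. The timing difference it prints illustrates why the analytical route is much faster than iterative search.

```python
import time
import numpy as np
from scipy.optimize import minimize

# Two plane normals; the sought direction lies in both planes (toy stand-in only).
n1 = np.array([0.3, -0.5, 0.8]); n1 /= np.linalg.norm(n1)
n2 = np.array([0.7, 0.2, -0.1]); n2 /= np.linalg.norm(n2)

# Analytical solution: the common direction is the normalized cross product.
t0 = time.perf_counter()
axis_closed_form = np.cross(n1, n2)
axis_closed_form /= np.linalg.norm(axis_closed_form)
t_analytic = time.perf_counter() - t0

# Iterative solution: minimize the squared projections onto both normals,
# with a penalty keeping the vector close to unit length.
def cost(v):
    return (v @ n1) ** 2 + (v @ n2) ** 2 + (v @ v - 1.0) ** 2

t0 = time.perf_counter()
res = minimize(cost, x0=np.array([1.0, 0.0, 0.0]), method="Nelder-Mead")
axis_iterative = res.x / np.linalg.norm(res.x)
t_iterative = time.perf_counter() - t0

angle = np.degrees(np.arccos(np.clip(abs(axis_closed_form @ axis_iterative), -1.0, 1.0)))
print(f"closed form: {t_analytic*1e6:.1f} us, iterative: {t_iterative*1e6:.1f} us, "
      f"angular difference: {angle:.3f} deg")
```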

4.2.3. Robustness Analysis

To analyze the robustness of the proposed method against feature extraction offsets, we added average offsets of different magnitudes to the true range-dimension and azimuth-dimension features of the target key points and performed a statistical analysis. Both the average range-dimension offset and the average azimuth-dimension offset ranged from 0 to 10 cells with a step size of 0.25 cells. For each combination of range and azimuth offsets, 1000 repeated trials were conducted for each imaging aperture, and the average length errors and orientation errors of the typical components, the spin speed errors, and the spin orientation errors were recorded. The results are shown in Figure 24, Figure 25 and Figure 26.

The study found no significant correlation between the length and orientation estimation errors of the typical components and the azimuth-dimension offsets, which is consistent with the theoretical derivation of Equations (32) and (33): the length and orientation estimates of the solar panels and the main body depend only on the range-dimension features. In contrast, the spin vector estimation error is affected by both the range-dimension and azimuth-dimension offsets, with the azimuth-dimension offset being the more influential, as predicted by the analysis of Equations (39)–(44). The azimuth-dimension offset directly affects the estimation accuracy of the spin rate and spin axis orientation, whereas the range-dimension offset introduces a transfer error between the target's orientation and size that affects them indirectly. It is worth noting that as the range-dimension offset increases, the influence of this transfer error reaches, and can even exceed, that of the azimuth-dimension offset, leading to relatively large attitude estimation errors. The experimental data in Table 7 also support these conclusions.
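The Monte Carlo procedure described above can be sketched as follows. Here, estimate_error is a placeholder for the paper's attitude solver (returning the scalar error metric of interest), and the random-sign perturbation model is an assumption about how the average offsets were injected.

```python
import numpy as np

def robustness_sweep(true_range_feats, true_azimuth_feats, estimate_error,
                     max_offset=10.0, step=0.25, n_trials=1000, seed=0):
    """Sweep mean range/azimuth feature offsets on a grid and average the resulting
    attitude-estimation error over n_trials random perturbations per grid point."""
    true_range_feats = np.asarray(true_range_feats, dtype=float)
    true_azimuth_feats = np.asarray(true_azimuth_feats, dtype=float)
    rng = np.random.default_rng(seed)
    offsets = np.arange(0.0, max_offset + step, step)
    mean_errors = np.zeros((offsets.size, offsets.size))
    for i, dr in enumerate(offsets):            # mean range-dimension offset (cells)
        for j, da in enumerate(offsets):        # mean azimuth-dimension offset (cells)
            errs = []
            for _ in range(n_trials):
                # random-sign perturbations with the prescribed mean magnitude (assumed model)
                r = true_range_feats + dr * rng.choice([-1.0, 1.0], size=true_range_feats.shape)
                a = true_azimuth_feats + da * rng.choice([-1.0, 1.0], size=true_azimuth_feats.shape)
                errs.append(estimate_error(r, a))
            mean_errors[i, j] = np.mean(errs)
    return offsets, mean_errors
```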
Combining these results with the key point extraction performance of the ISAR-HRNet gives a measure of the overall robustness of the proposed method. As shown in the regions enclosed by the red dotted frames in Figure 24, Figure 25 and Figure 26, when the feature offsets are less than 5 cells, the proposed algorithm achieves a size estimation error of less than 0.3 m, a component pointing error of less than 2°, and a spin vector estimation error of less than 8° in all three imaging apertures. Such accuracy can meet the requirements of most space target observation tasks.

4.3. Validation of Measured Data

To verify the effectiveness of the proposed method on measured data, we tested it on publicly available measured data [55,56]. However, since these data were acquired by a single-station ISAR and no observation LOS information is provided, the proposed method could not be applied for attitude estimation. This experiment therefore focuses on verifying the key point extraction performance of the ISAR-HRNet on measured data.
Specifically, the measured data used in the experiment were derived from the German tracking and imaging radar (TIRA), including the measured data of TG-1 and another space target. The processing flow of the measured data is shown in Figure 27. For the measured data of TG-1 [55], the video file was extracted frame by frame, and a total of 840 frames of images were obtained. Then, 750 of them were selected as training data, and the remaining 90 frames were used for key point extraction verification.
As for the measured data of the other satellite [56], noise with different signal-to-noise ratios (SNRs) was first added, and the data were then augmented by rotation, scaling, and flipping, yielding a total of 200 frames, of which 190 were used for training and the remaining 10 for verification. The key points in the measured data were labeled manually. Figure 28a–f show the key point labeling results for some of the measured data.
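A minimal sketch of such an augmentation pipeline is given below. The SNR range, the scale range, and the restriction to 90° rotations are assumptions for illustration; the key point coordinates are transformed together with the image.

```python
import numpy as np
from scipy.ndimage import zoom

def add_noise_at_snr(image, snr_db, rng):
    """Add white Gaussian noise scaled to the requested SNR (dB) relative to image power."""
    signal_power = np.mean(image.astype(float) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return image + rng.normal(0.0, np.sqrt(noise_power), image.shape)

def augment(image, keypoints, rng):
    """One randomly composed augmentation: noise, flip, 90-degree rotation, scaling.
    keypoints is an (N, 2) array of (row, col) positions transformed with the image."""
    img = add_noise_at_snr(image, snr_db=rng.uniform(5.0, 20.0), rng=rng)
    kps = np.asarray(keypoints, dtype=float).copy()
    h, w = img.shape
    if rng.random() < 0.5:                      # horizontal flip
        img = img[:, ::-1]
        kps[:, 1] = w - 1 - kps[:, 1]
    k = int(rng.integers(0, 4))                 # counter-clockwise rotation by k * 90 degrees
    img = np.rot90(img, k)
    for _ in range(k):                          # old (r, c) maps to (w - 1 - c, r)
        kps = np.stack([w - 1 - kps[:, 1], kps[:, 0]], axis=1)
        h, w = w, h
    scale = rng.uniform(0.9, 1.1)               # mild scaling; key points scale approximately
    img = zoom(img, scale, order=1)
    kps = kps * scale
    return img, kps

# Example usage on a dummy 512 x 512 image with four key points.
rng = np.random.default_rng(0)
image = rng.random((512, 512))
keypoints = np.array([[354.5, 288.0], [159.0, 241.0], [253.5, 368.5], [368.5, 184.5]])
aug_image, aug_keypoints = augment(image, keypoints, rng)
```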
To verify the effectiveness of the ISAR-HRNet, we carried out a key point extraction experiment on the test data and statistically analyzed the average extraction offsets of the key points. Figure 28g–i show the key point extraction results for three ISAR images in the verification data; visually, the extracted key points are located accurately. The average extraction offset of each key point is listed in Table 10.
From the statistical results in Table 10, it can be seen that the ISAR-HRNet can still accurately extract the key points of the target on the measured ISAR data, which fully proves the effectiveness of the ISAR-HRNet in the key point extraction of the measured data.

5. Discussion

The proposed method, leveraging the excellent performance of ISAR-HRNet, can accurately extract key points from ISAR images, providing reliable feature inputs for the estimation of target instantaneous attitude and spin vector. Additionally, the joint observation configuration of multi-station ISAR enables the method to possess the capability of estimating spin parameters of space targets, and the analytical spin parameter solving method ensures high computational efficiency. The experimental results in Section 4 fully validate the effectiveness and robustness of the method, demonstrating that it can accurately achieve attitude estimation of spinning targets and meet the task requirements of attitude estimation for most uncontrolled space targets.

6. Conclusions

Estimating the attitude of a spinning space target is beneficial for reentry observation and collision avoidance, which is of great importance for safeguarding space assets. To this end, this paper proposes a space target attitude estimation method based on multi-station ISAR joint observation. The method uses three ISARs to observe the space target simultaneously, employs the proposed ISAR-HRNet to automatically extract the key points of the ISAR images, and solves the instantaneous attitude and spin parameters of the target from the range-dimension and azimuth-dimension features of the key points. The effectiveness and robustness of the proposed method are verified in experiments with simulated data, and the key point extraction performance of the ISAR-HRNet is validated on measured data. In contrast to existing methods, the proposed method requires neither a 3D model of the target nor manual feature extraction. Furthermore, the analytical spin vector estimation improves the efficiency of attitude estimation while maintaining accuracy. The method is applicable to most spacecraft attitude estimation tasks.
However, because measured data are difficult to obtain, the attitude estimation experiments on space targets in this paper were conducted with simulated data. In addition, extremely low SNR and sparse apertures degrade image quality, reduce the accuracy of key point extraction, and in turn affect the precision of attitude estimation. In future work, we will carry out attitude estimation experiments on space targets using publicly available measured ISAR data and improve the proposed method to enhance its performance under extremely low SNR and sparse aperture conditions.

Author Contributions

Conceptualization, J.L., C.Y. and Y.Z.; Methodology, J.L.; Software, J.L. and C.X.; Visualization, J.H. and P.L.; Writing—original draft, J.L.; Writing—review and editing, J.L., J.H. and P.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the editors of Remote Sensing and the anonymous reviewers for their patience, helpful remarks, and useful feedback.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, C.Y.; Jiang, L.B.; Li, M.X.; Ren, X.Y.; Wang, Z. Slow-Spinning Spacecraft Cross-Range Scaling and Attitude Estimation Based on Sequential ISAR Images. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 7469–7485. [Google Scholar] [CrossRef]
  2. Liu, L.; Zhou, Z.B.; Zhou, F.; Shi, X.R. A New 3-D Geometry Reconstruction Method of Space Target Utilizing the Scatterer Energy Accumulation of ISAR Image Sequence. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8345–8357. [Google Scholar] [CrossRef]
  3. Guo, W.J.; Yang, Q.; Wang, H.Q.; Luo, C.G. Axis Estimation of Spaceborne Targets via Inverse Synthetic Aperture Radar Image Sequence Based on Regression Network. Remote Sens. 2024, 16, 4148. [Google Scholar] [CrossRef]
  4. Li, B.; Wang, J.L.; Fu, T.; Zhao, L.Z.; Zhang, S.; Lv, P. Estimating the Hotspot Area Observed by Earth Imaging Satellite via ISAR Image Sequence. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5101917. [Google Scholar] [CrossRef]
  5. Deng, J.Y.; Xie, P.F.; Zhang, L.; Cong, Y.L. ISAR-NeRF: Neural Radiance Fields for 3-D Imaging of Space Target from Multiview ISAR Images. IEEE Sens. J. 2024, 24, 11705–11722. [Google Scholar] [CrossRef]
  6. Gong, R.; Wang, L.; Wu, B.; Zhang, G.; Zhu, D.Y. Optimal Space-Borne ISAR Imaging of Space Objects with Co-Maximization of Doppler Spread and Spacecraft Component Area. Remote Sens. 2024, 16, 1037. [Google Scholar] [CrossRef]
  7. Lemmens, S.; Krag, H. Sensitivity of automated attitude determination from ISAR radar mappings. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS 2013), Maui, HI, USA, 10–13 September 2013. [Google Scholar]
  8. Lemmens, S.; Krag, H.; Rosebrock, J.; Carnelli, I. Radar mappings for attitude analysis of objects in orbit. In Proceedings of the 6th European Conference on Space Debris, Darmstadt, Germany, 22–25 April 2013. [Google Scholar]
  9. Sommer, S.; Rosebrock, J.; Cerutti-Maori, D.J.E.; Leushacke, L. Temporal analysis of envisat’s rotational motion. In Proceedings of the 7th European Conference on Space Debris, Darmstadt, Germany, 18–21 April 2017. [Google Scholar]
  10. Du, R.Z.; Zhou, Z.B.; Liu, L.; Bai, X.R.; Zhou, F. A New Attitude Estimation Method of Space Target Utilizing ISAR Image Sequence Under Low SNR. In Proceedings of the 2021 CIE International Conference on Radar (Radar), Haikou, China, 15–19 December 2021; pp. 57–61. [Google Scholar]
  11. Zhou, Y.J.; Zhang, L.; Wang, H.X.; Qiao, Z.J.; Hu, M.Q. Attitude estimation of space targets by extracting line features from ISAR image sequences. In Proceedings of the 2017 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Xiamen, China, 22–25 October 2017; pp. 1–4. [Google Scholar]
  12. Zhou, Y.J.; Zhang, L.; Cao, Y.H.; Wu, Z.H. Attitude Estimation and Geometry Reconstruction of Satellite Targets Based on ISAR Image Sequence Interpretation. IEEE Trans. Aerosp. Electron. Syst. 2019, 55, 1698–1711. [Google Scholar] [CrossRef]
  13. Wang, J.D.; Li, Y.C.; Du, L.; Song, M.; Xing, M. Joint Estimation of Satellite Attitude and Size Based on ISAR Image Interpretation and Parametric Optimization. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5103817. [Google Scholar] [CrossRef]
  14. Xie, P.F.; Zhang, L.; Du, C.; Wang, X.Q.; Zhong, W.J. Space Target Attitude Estimation from ISAR Image Sequences with Key Point Extraction Network. IEEE Signal Process. Lett. 2021, 28, 1041–1045. [Google Scholar] [CrossRef]
  15. Xie, P.F.; Zhang, L.; Ma, Y.; Zhou, Y.J.; Wang, X.Q. Attitude Estimation and Geometry Inversion of Satellite Based on Oriented Object Detection. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4023505. [Google Scholar] [CrossRef]
  16. Zhou, Y.J.; Zhang, L.; He, C.Y. Dynamic Estimation of Spin Spacecraft Based on Multiple-Station ISAR Images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 2977–2989. [Google Scholar] [CrossRef]
  17. Zhou, Y.J.; Xie, P.F.; Li, C.W.; Jian, M.; Zhang, L.; An, Z.W. Automatic Dynamic Estimation of On-Orbit Satellites Through Spaceborne ISAR Imaging. IEEE Trans. Radar Syst. 2023, 1, 34–47. [Google Scholar] [CrossRef]
  18. Zhou, Y.J.; Zhang, L.; Cao, Y.H.; Huang, Y. Optical-and-Radar Image Fusion for Dynamic Estimation of Spin Satellites. IEEE Trans. Image Process. 2020, 29, 2963–2976. [Google Scholar] [CrossRef] [PubMed]
  19. Zhou, W.T.; Liu, L.; Du, R.Z.; Wang, Z.; Shang, R.H.; Zhou, F. Three-Dimensional Reconstruction of Space Targets Utilizing Joint Optical-and-ISAR Co-Location Observation. Remote Sens. 2025, 17, 287. [Google Scholar] [CrossRef]
  20. Long, B.; Tang, P.; Wang, F.; Jin, Y.-Q. 3-D Reconstruction of Space Target Based on Silhouettes Fused ISAR-Optical Images. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5213919. [Google Scholar] [CrossRef]
  21. Du, R.Z.; Liu, L.; Bai, X.R.; Zhou, Z.B.; Zhou, F. Instantaneous Attitude Estimation of Spacecraft Utilizing Joint Optical-and-ISAR Observation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5112114. [Google Scholar] [CrossRef]
  22. Zhou, Z.B.; Jin, X.G.; Liu, L.; Zhou, F. Three-Dimensional Geometry Reconstruction Method from Multi-View ISAR Images Utilizing Deep Learning. Remote Sens. 2023, 15, 1882. [Google Scholar] [CrossRef]
  23. SGP4. Available online: https://www.mathworks.com/matlabcentral/fileexchange/62013-sgp4 (accessed on 18 April 2025).
  24. Pittet, J.-N.; Šilha, J.; Schildknecht, T. Spin motion determination of the Envisat satellite through laser ranging measurements from a single pass measured by a single station. Adv. Space Res. 2018, 61, 1121–1131. [Google Scholar] [CrossRef]
  25. Song, C.; Lin, H.-Y.; Zhao, C.-Y. Analysis of Envisat’s rotation state using epoch method. Adv. Space Res. 2020, 66, 2681–2688. [Google Scholar] [CrossRef]
  26. Li, J.S.; Zhang, Y.S.; Yin, C.B.; Xu, C.; Li, P.J.; He, J. A Novel Joint Motion Compensation Algorithm for ISAR Imaging Based on Entropy Minimization. Sensors 2024, 24, 4332. [Google Scholar] [CrossRef]
  27. Shao, S.; Liu, H.W.; Zhang, L.; Yan, J.K.; Wei, J.Q. Model-Data Co-Driven Integration of Detection and Imaging for Geosynchronous Targets with Wideband Radar. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5235616. [Google Scholar] [CrossRef]
  28. Bezerra, F.D.M.; Santos, L.A. A new look at the Euler-Rodrigues formula for three-dimensional rotation. Palest. J. Math. 2024, 13, 243–248. [Google Scholar]
  29. Guo, X.R.; Liu, F.K.; Huang, D.R. Migration through Resolution Cell Correction and Sparse Aperture ISAR Imaging for Maneuvering Target Based on Whale Optimization Algorithm-Fast Iterative Shrinkage Thresholding Algorithm. Sensors 2024, 24, 2148. [Google Scholar] [CrossRef]
  30. Yang, J.X.; Wang, Y. Sparse Aperture Inverse Synthetic Aperture Radar Imaging for Maneuvering Targets with Migration Through Resolution Cells Correction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 4446–4472. [Google Scholar] [CrossRef]
  31. Liu, F.K.; Huang, D.R.; Guo, X.R.; Feng, C.Q. Joint Range Alignment and Autofocus Method Based on Combined Broyden-Fletcher-Goldfarb-Shanno Algorithm and Whale Optimization Algorithm. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5214617. [Google Scholar] [CrossRef]
  32. Cai, J.; Martorella, M.; Chang, S.Q.; Liu, Q.H.; Ding, Z.G.; Long, T. Efficient Nonparametric ISAR Autofocus Algorithm Based on Contrast Maximization and Newton’s Method. IEEE Sens. J. 2021, 21, 4474–4487. [Google Scholar] [CrossRef]
  33. Xu, D.; Wang, X.; Wu, Z.X.; Fu, J.X.; Zhang, Y.H.; Chen, J.L.; Xing, M.D. Space Target 3-D Reconstruction Using Votes Accumulation Method of ISAR Image Sequence. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 9881–9893. [Google Scholar] [CrossRef]
  34. Zhou, Y.J.; Zhang, L.; Xing, C.; Xie, P.F.; Cao, Y.H. Target Three-Dimensional Reconstruction from the Multi-View Radar Image Sequence. IEEE Access 2019, 7, 36722–36735. [Google Scholar] [CrossRef]
  35. Yu, C.Q.; Xiao, B.; Gao, C.X.; Yuan, L.; Zhang, L.; Sang, N.; Wang, J.D. Lite-HRNet: A Lightweight High-Resolution Network. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 10435–10445. [Google Scholar]
  36. Kato, S.; Hotta, K.; Hatakeyama, Y.; Konishi, Y. Lite-HRNet Plus: Fast and Accurate Facial Landmark Detection. In Proceedings of the 2023 IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 8–11 October 2023. [Google Scholar]
  37. Pang, S.C.; Peng, R.R.; Dong, Y.K.; Yuan, Q.; Wang, S.T.; Sun, J.Q. JointMETRO: A 3D reconstruction model for human figures in works of art based on transformer. Neural Comput. Appl. 2024, 36, 11711–11725. [Google Scholar] [CrossRef]
  38. Liu, L.; Zhou, Z.; Li, C.; Zhou, F. An Enhanced Sequential ISAR Image Scatterer Trajectory Association Method Utilizing Modified Label Gaussian Mixture Probability Hypothesis Density Filter. Remote Sens. 2025, 17, 354. [Google Scholar] [CrossRef]
  39. Zhao, L.Z.; Wang, J.L.; Su, J.Y.; Luo, H.Y. Spatial Feature-Based ISAR Image Registration for Space Targets. Remote Sens. 2024, 16, 3625. [Google Scholar] [CrossRef]
  40. Ni, P.S.; Liu, Y.Y.; Pei, H.; Du, H.Z.; Li, H.L.; Xu, G. CLISAR-Net: A Deformation-Robust ISAR Image Classification Network Using Contrastive Learning. Remote Sens. 2023, 15, 33. [Google Scholar] [CrossRef]
  41. Lu, W.; Zhang, Y.S.; Yin, C.B.; Lin, C.Y.; Xu, C.; Zhang, X. A Deformation Robust ISAR Image Satellite Target Recognition Method Based on PT-CCNN. IEEE Access 2021, 9, 23432–23453. [Google Scholar] [CrossRef]
  42. Si, Y.Z.; Xu, H.Y.; Zhu, X.Z.; Zhang, W.H.; Dong, Y.; Chen, Y.X.; Li, H.B. SCSA: Exploring the synergistic effects between spatial and channel attention. Neurocomputing 2025, 634, 129866. [Google Scholar] [CrossRef]
  43. Hou, Q.B.; Zhou, D.Q.; Feng, J.S. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13708–13717. [Google Scholar]
  44. Sunandini, G.; Sivanpillai, R.; Sowmya, V.; Variyar, V.V.S. Significance of Atrous Spatial Pyramid Pooling (ASPP) in Deeplabv3+ for Water Body Segmentation. In Proceedings of the 2023 10th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 23–24 March 2023; pp. 744–749. [Google Scholar]
  45. Ma, P.F.; Ma, J.; Wang, X.J.; Yang, L.C.; Wang, N.N. Deformable convolutional networks for multi-view 3D shape classification. Electron. Lett. 2018, 54, 1373–1374. [Google Scholar] [CrossRef]
  46. Xue, R.H.; Bai, X.R.; Zhou, F. SAISAR-Net: A Robust Sequential Adjustment ISAR Image Classification Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5214715. [Google Scholar] [CrossRef]
  47. Wang, J.Q.; Chen, K.; Xu, R.; Liu, Z.W.; Chen, C.L.; Lin, D. CARAFE: Content-Aware ReAssembly of FEatures. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3007–3016. [Google Scholar]
  48. Nearest Approaches to Multiple Lines in N-Dimensional Space. Available online: https://www.crewes.org/Documents/ResearchReports/2010/CRR201032.pdf (accessed on 17 April 2025).
  49. Franceschetti, G.; Iodice, A.; Riccio, D. A canonical problem in electromagnetic backscattering from buildings. IEEE Trans. Geosci. Remote Sens. 2002, 40, 1787–1801. [Google Scholar] [CrossRef]
  50. Boag, A. A fast physical optics (FPO) algorithm for high frequency scattering. IEEE Trans. Antennas Propag. 2004, 52, 197–204. [Google Scholar] [CrossRef]
  51. NASA 3D Resources. Available online: https://nasa3d.arc.nasa.gov/models (accessed on 15 April 2025).
  52. Space-Track. Available online: https://www.space-track.org/ (accessed on 17 April 2025).
  53. Wang, C.Y.; Jiang, L.B.; Ren, X.Y.; Zhong, W.J.; Wang, Z. Automatic Instantaneous Attitude Estimation Framework for Spacecraft Based on Colocated Optical/ISAR Observation. IEEE Geosci. Remote Sens. Lett. 2024, 21, 3502005. [Google Scholar] [CrossRef]
  54. Lu, X.; Zhang, S.H.; Liu, A.F.; Li, R.Z. Automatic Instantaneous Attitude Estimation of Satellite Based on Optical and ISAR Image Fusion. In Proceedings of the 2024 9th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 12–14 July 2024; pp. 659–664. [Google Scholar]
  55. Monitoring the Re-Entry of the Chinese Space Station Tiangong-1 with TIRA. Available online: https://www.fhr.fraunhofer.de/en/businessunits/space/monitoring-the-re-entry-of-the-chinese-space-station-tiangong-1-with-tira.html (accessed on 1 April 2018).
  56. Sharper ISAR Images of Satellites and Other Space Objects. Available online: https://www.fhr.fraunhofer.de/en/sections/Radar-for-Space-Situational-Awareness-RWL/sharper-isar-images-of-satellites-and-other-space-objects-jb2019.html (accessed on 17 April 2025).
Figure 1. The imaging geometry of a multi-station ISAR joint observation system.
Figure 2. Definition of LOS.
Figure 3. ISAR imaging projection geometry of spinning space targets.
Figure 4. Flowchart of the proposed method.
Figure 5. Definition of typical components and key points of space targets. (a) 3D model of TG-1; (b) ISAR image of TG-1.
Figure 6. Network structure of the ISAR-HRNet.
Figure 7. Structure diagram of the DC-ASPP.
Figure 8. Structure of the shuffle block and FEB. (a) Shuffle block; (b) FEB.
Figure 9. Structure diagram of the EWM.
Figure 10. Structure diagram of the MPM.
Figure 11. Structure diagram of the FE stage.
Figure 12. Relationships of the motion vectors.
Figure 13. Geometric relationships of motion vectors under ideal conditions.
Figure 14. Geometric relationships of the solution planes with errors. (a) Geometric relationships of the solution planes; (b) estimated value of the spin vector.
Figure 15. The process of ISAR echo simulation.
Figure 16. Geometric relationships between the targets and the ISAR stations.
Figure 17. Examples of the 3D model and the ISAR images with annotated key points of the target. (a–h) 3D models of TG-1, Sentinel, CloudSat, CALIPSO, Meteor, Aqua, Jason 1, and COROT; (i–p) ISAR images of TG-1, Sentinel, CloudSat, CALIPSO, Meteor, Aqua, Jason 1, and COROT.
Figure 18. Geometric relationships between the visible arcs and ISAR stations.
Figure 19. Variations in LOSs and equivalent LOSs. (a–c) Variations in LOSs of ISAR1, ISAR2, and ISAR3; (d–f) variations in equivalent LOSs of ISAR1, ISAR2, and ISAR3.
Figure 20. ISAR images of the three imaging apertures. (a–c) ISAR images of ISAR1, ISAR2, and ISAR3 at t1; (d–f) ISAR images of ISAR1, ISAR2, and ISAR3 at t2; (g–i) ISAR images of ISAR1, ISAR2, and ISAR3 at t3.
Figure 21. Geometric relationships between the true spin vectors and the estimated spin vectors. (a–c) Geometric relationships in imaging aperture 1, aperture 2, and aperture 3.
Figure 22. Statistical probability distributions of the offsets. (a) RKPEN; (b) KRCNN; (c) PMKN; (d) SHKEN; (e) Lite-HRNet; (f) ISAR-HRNet.
Figure 23. Means and variances of the extraction offsets. (a) Means of the offsets; (b) variances of the offsets.
Figure 24. Analysis of attitude estimation errors of aperture 1. (a) The length error distribution of the solar panel; (b) the length error distribution of the main body; (c) the error distribution of the spin speed; (d) the orientation error distribution of the solar panel; (e) the orientation error distribution of the main body; (f) the orientation error distribution of the spin vector.
Figure 25. Analysis of attitude estimation errors of aperture 2. (a) The length error distribution of the solar panel; (b) the length error distribution of the main body; (c) the error distribution of the spin speed; (d) the orientation error distribution of the solar panel; (e) the orientation error distribution of the main body; (f) the orientation error distribution of the spin vector.
Figure 26. Analysis of attitude estimation errors of aperture 3. (a) The length error distribution of the solar panel; (b) the length error distribution of the main body; (c) the error distribution of the spin speed; (d) the orientation error distribution of the solar panel; (e) the orientation error distribution of the main body; (f) the orientation error distribution of the spin vector.
Figure 27. The processing flow of the measured data.
Figure 28. Measured data and key point extraction results. (a–c) Training data 1, data 2, and data 3 of TG-1; (d–f) training data 1, data 2, and data 3 of the satellite; (g–i) test data 1 and data 2 of TG-1 and test data 1 of the satellite.
Table 1. Parameters of the TLE data used.
TLE 1:
1 39150U 13018A 23092.27047355 .00001617 00000-0 23877-3 0 9995
2 39150 98.0352 163.0146 0022971 76.1631 284.2129 14.76472717535358
TLE 2:
1 44310U 19032A 23092.17633197 .00006723 00000-0 52032-3 0 9993
2 44310 44.9832 194.7597 0009778 146.1448 288.1752 15.02106084209691
TLE 3:
1 51102U 22004A 23092.58401987 .00000276 00000-0 11131-3 0 9990
2 51102 98.5957 152.8512 0500395 303.1456 52.2629 13.83927097 62353
TLE 4:
1 25544U 98067A 23091.10374725 .00020749 00000-0 37896-3 0 9992
2 25544 51.6419 350.4695 0007223 140.3984 6.4707 15.49341873389833
TLE 5:
1 48274U 21035A 23091.59408903 .00036645 00000-0 37919-3 0 9995
2 48274 41.4735 262.9651 0005222 273.2786 165.3212 15.63969261109859
TLE 6:
1 37820U 11053A 16266.35688463 .00025497 00000-0 24137-3 0 9991
2 37820 42.7662 24.7762 0015742 351.0529 104.2087 15.66280400285808
Table 2. Locations of the ISAR stations.
Radar Station | Position
ISAR1 | Xi’an (34.4 N, 109.5 E, 557 m)
ISAR2 | Zhengzhou (34.6 N, 113.5 E, 0 m)
ISAR3 | Taiyuan (38.8 N, 111.6 E, 1452 m)
Table 3. Imaging parameters of ISAR sensors.
Parameter | Value
Size of a Single Image | 512 × 512
Signal Frequency | 10, 12 and 14 GHz
Bandwidth | 2, 3 and 4 GHz
Pulse Repetition Frequency | 80 Hz
Table 4. Key point extraction results of simulation data.
Imaging Aperture | Key Point | Automatic Extraction (Cells) | Ground Truth (Cells) | Extraction Offset (Cells) | Mean Offset (Cells)
Aperture 1 | key point 1 | (354.5, 288.0) | (352.0, 288.0) | (2.5, 0) | (2.5, 1.0)
Aperture 1 | key point 2 | (159.0, 241.0) | (158.5, 240.5) | (0.5, 0.5) |
Aperture 1 | key point 3 | (253.5, 368.5) | (250.0, 367.5) | (3.5, 1) |
Aperture 1 | key point 4 | (368.5, 184.5) | (365.5, 183.0) | (3, 1.5) |
Aperture 2 | key point 1 | (118.0, 272.5) | (115.0, 272.0) | (3.0, 0.5) | (2.5, 2.0)
Aperture 2 | key point 2 | (401.5, 259.0) | (398.5, 255.0) | (3.0, 4.0) |
Aperture 2 | key point 3 | (151.0, 206.5) | (149.5, 204.5) | (1.5, 2.0) |
Aperture 2 | key point 4 | (201.5, 322.0) | (199.0, 321.0) | (2.5, 1.0) |
Aperture 3 | key point 1 | (251.5, 211.5) | (248.5, 211.5) | (3.0, 0) | (2.0, 1.0)
Aperture 3 | key point 2 | (280.0, 323.5) | (277.5, 323.0) | (2.5, 0.5) |
Aperture 3 | key point 3 | (418.5, 201.5) | (417.5, 198.5) | (1.0, 3.0) |
Aperture 3 | key point 4 | (89.0, 251.0) | (88.0, 251.0) | (1.0, 0) |
Table 5. Results of attitude estimation.
Imaging Aperture | Parameter | Estimation Value | Truth Value | Estimation Error
Aperture 1 | The length of the main body | 10.5142 m | 10.5400 m | 0.0258 m
Aperture 1 | The orientation vector of the main body | (0.7217, 0.6921, −0.0110) | (0.7317, 0.6816, 0) | 1.0428°
Aperture 1 | The length of the solar panel | 19.4929 m | 19.3400 m | 0.1529 m
Aperture 1 | The orientation vector of the solar panel | (−0.6826, 0.7307, −0.0060) | (−0.6816, 0.7317, 0) | 0.3532°
Aperture 1 | Spin speed | 0.0149 rad/s | 0.0150 rad/s | 0.0001 rad/s
Aperture 1 | Spin direction vector | (−0.0281, 0.0098, 0.9996) | (0, 0, 1) | 1.7053°
Aperture 2 | The length of the main body | 10.6125 m | 10.5400 m | 0.0725 m
Aperture 2 | The orientation vector of the main body | (−0.6178, 0.7862, −0.0118) | (−0.6282, 0.7781, 0) | 1.0137°
Aperture 2 | The length of the solar panel | 19.2016 m | 19.3400 m | 0.1384 m
Aperture 2 | The orientation vector of the solar panel | (−0.7804, −0.6253, −0.0065) | (−0.7781, −0.6282, 0) | 0.4286°
Aperture 2 | Spin speed | 0.0150 rad/s | 0.0150 rad/s | 0.0000 rad/s
Aperture 2 | Spin direction vector | (−0.0089, −0.0295, 0.9995) | (0, 0, 1) | 1.7658°
Aperture 3 | The length of the main body | 10.4118 m | 10.5400 m | 0.1282 m
Aperture 3 | The orientation vector of the main body | (−0.8198, −0.5726, −0.0082) | (−0.8206, −0.5716, 0) | 0.4755°
Aperture 3 | The length of the solar panel | 19.3552 m | 19.3400 m | 0.0152 m
Aperture 3 | The orientation vector of the solar panel | (0.5770, −0.8167, −0.0044) | (0.5716, −0.8206, 0) | 0.4574°
Aperture 3 | Spin speed | 0.0152 rad/s | 0.0150 rad/s | 0.0002 rad/s
Aperture 3 | Spin direction vector | (−0.0022, −0.0242, 0.9997) | (0, 0, 1) | 1.3924°
Table 6. Configuration of different methods.
Method | Key Point Extraction | Target Motion Modeling | Parameter Solving
Method 1 | RKPEN | spin | analytical method
Method 2 | KRCNN | spin | analytical method
Method 3 | PMKN | spin | analytical method
Method 4 | SHKEN | spin | analytical method
Method 5 | Lite-HRNet | spin | analytical method
Method 6 | ISAR-HRNet | three-axis stable | analytical method
Method 7 | ISAR-HRNet | spin | intelligent optimization algorithm
Ours | ISAR-HRNet | spin | analytical method
Table 7. Comparison of attitude estimation results based on different key point extraction methods.
Imaging Aperture | Method | Mean Offset (Cells, Cells) | Estimation Error of the Length of the Main Body (m) | Estimation Error of the Orientation Vector of the Main Body (°) | Estimation Error of the Length of the Solar Panel (m) | Estimation Error of the Orientation Vector of the Solar Panel (°) | Estimation Error of Spin Speed (rad/s) | Estimation Error of Spin Orientation Vector (°)
Aperture 1 | Method 1 | (3.0, 2.0) | 0.0326 | 1.3605 | 0.1993 | 0.4545 | 0.0001 | 3.1720
Aperture 1 | Method 2 | (3.5, 1.5) | 0.0371 | 1.5706 | 0.2301 | 0.5236 | 0.0001 | 2.8157
Aperture 1 | Method 3 | (3.0, 2.5) | 0.0289 | 1.1926 | 0.1747 | 0.3991 | 0.0001 | 3.3064
Aperture 1 | Method 4 | (3.5, 2.5) | 0.0344 | 1.4445 | 0.2116 | 0.4822 | 0.0001 | 3.6301
Aperture 1 | Method 5 | (4.0, 1.5) | 0.0389 | 1.6546 | 0.2424 | 0.5512 | 0.0001 | 2.5925
Aperture 1 | Ours | (2.5, 1.0) | 0.0256 | 1.0458 | 0.1532 | 0.3505 | 0.0000 | 1.7036
Aperture 2 | Method 1 | (3.5, 2.0) | 0.0989 | 1.3738 | 0.1890 | 0.5845 | 0.0000 | 2.0417
Aperture 2 | Method 2 | (2.5, 2.5) | 0.0751 | 1.0529 | 0.1447 | 0.4459 | 0.0000 | 2.2797
Aperture 2 | Method 3 | (3.0, 3.5) | 0.0840 | 1.1734 | 0.1613 | 0.4978 | 0.0001 | 3.1427
Aperture 2 | Method 4 | (4.5, 1.5) | 0.1292 | 1.7730 | 0.2442 | 0.7586 | 0.0000 | 1.5622
Aperture 2 | Method 5 | (2.0, 3.5) | 0.0648 | 0.9120 | 0.1253 | 0.3855 | 0.0001 | 3.0017
Aperture 2 | Ours | (2.5, 2.0) | 0.0721 | 1.0127 | 0.1391 | 0.4287 | 0.0000 | 1.7650
Aperture 3 | Method 1 | (2.5, 2.5) | 0.1545 | 0.5741 | 0.0171 | 0.5506 | 0.0003 | 3.0750
Aperture 3 | Method 2 | (3.0, 1.5) | 0.2058 | 0.7693 | 0.0232 | 0.7339 | 0.0004 | 2.1212
Aperture 3 | Method 3 | (3.5, 2.5) | 0.2186 | 0.8184 | 0.0247 | 0.7797 | 0.0004 | 2.7823
Aperture 3 | Method 4 | (2.5, 3.0) | 0.1673 | 0.6227 | 0.0186 | 0.5964 | 0.0003 | 3.8150
Aperture 3 | Method 5 | (2.5, 1.0) | 0.1628 | 0.6057 | 0.0180 | 0.5804 | 0.0003 | 1.7380
Aperture 3 | Ours | (2.0, 1.0) | 0.1288 | 0.4772 | 0.0141 | 0.4589 | 0.0002 | 1.3935
Table 8. Comparison of attitude estimation results based on different motion modeling.
Parameter | Method 6 (Single-Station Mode) | Method 6 (Multi-Station Mode) | Ours
Estimation error of the length of the main body (m) | 2.0267 | 0.0755 | 0.0755
Estimation error of the orientation vector of the main body (°) | 5.3569 | 0.8452 | 0.8452
Estimation error of the length of the solar panel (m) | 3.1493 | 0.1021 | 0.1021
Estimation error of the orientation vector of the solar panel (°) | 10.5735 | 0.4127 | 0.4127
Estimation error of spin speed (rad/s) | Not Available | Not Available | 0.0001
Estimation error of spin vector (°) | Not Available | Not Available | 1.6207
Table 9. Comparison of attitude estimation results based on different parameter solving methods.
Imaging Aperture | Method | Estimation Error of Spin Speed (rad/s) | Estimation Error of the Orientation of the Spin Vector (°) | Average Time Consumption (s)
Aperture 1 | Method 7 | 0.0023 | 2.1725 | 0.2996
Aperture 1 | Ours | 0.0001 | 1.7036 | 0.0025
Aperture 2 | Method 7 | 0.0012 | 2.5017 | 0.3253
Aperture 2 | Ours | 0.0000 | 1.7650 | 0.0028
Aperture 3 | Method 7 | 0.0026 | 1.7381 | 0.2828
Aperture 3 | Ours | 0.0002 | 1.3935 | 0.0020
Table 10. Key point extraction results of measured data.
Key Point | Mean Offset (Cells, Cells)
Key point 1 | (3.5, 4.5)
Key point 2 | (4.0, 4.5)
Key point 3 | (4.5, 3.0)
Key point 4 | (3.0, 5.0)