# Non-Cooperative Target Attitude Estimation Method Based on Deep Learning of Ground and Space Access Scene Radar Images


## Abstract


## 1. Introduction

## 2. Non-Cooperative Target Attitude Estimation Method Based on Machine Learning of Radar Images in GSA Scenes

**Lemma 1.**

## 3. Building of GSA Scene with Co-Vision from Space and Earth

#### 3.1. Methods and Ideas

#### 3.2. Method for Solving Orbital Maneuver

#### 3.2.1. Step 1: Alignment Maneuver of the Track Surface Intersection

**Algorithm 1.**

#### 3.2.2. Step 2: In-Plane Pursuit Orbital Maneuver

#### 3.3. Calculation Example

## 4. Attitude Estimation Based on Deep Network

#### 4.1. Brief Introduction

#### 4.2. Attitude Estimation Modeling

#### 4.3. Attitude Estimation Based on Deep Network

#### 4.3.1. ISAR Image Enhancement Based on UNet++

#### 4.3.2. Instantaneous Attitude Estimation Based on Swin Transformer

After the windows are divided in the L^{th} layer, they are re-divided in the (L+1)^{th} layer with an offset of half the window size, which allows the information in windows of different layers to interact. The output is then passed through another LN layer and fed into a multilayer perceptron (MLP). The MLP is a feedforward network that uses the GeLU function as its activation function, with the goal of completing the non-linear transformation and improving the fitting ability of the algorithm. The four successive stages use 3, 6, 12, and 24 attention heads, respectively. The residual connection added to each Swin Transformer module is shown as the yellow line in Figure 7. This module has two different structures that must be used in pairs: the first structure uses W-MSA, and the second uses SW-MSA. The output of each part as data passes through the module is given in Equations (19)–(22):
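The half-window shift between consecutive layers can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: a real Swin Transformer block applies multi-head self-attention inside each window and masks the wrapped-around regions created by the cyclic shift; the function names here are ours.

```python
import numpy as np

def window_partition(x, ws):
    """Split an (H, W, C) feature map into non-overlapping ws x ws windows (W-MSA)."""
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws, ws, C)

def shifted_window_partition(x, ws):
    """Cyclically shift by half the window size, then partition (SW-MSA).

    Each shifted window now straddles several of the previous layer's
    windows, so information can flow across window boundaries.
    """
    shifted = np.roll(x, shift=(-ws // 2, -ws // 2), axis=(0, 1))
    return window_partition(shifted, ws)

# Toy 8x8 feature map with 1 channel and window size 4.
fmap = np.arange(64, dtype=float).reshape(8, 8, 1)
regular = window_partition(fmap, 4)          # 4 windows of 4x4 (W-MSA layer)
shifted = shifted_window_partition(fmap, 4)  # windows offset by 2 pixels (SW-MSA layer)
```

Here the first shifted window covers rows/columns 2–5 of the original map, i.e., it draws pixels from all four regular windows of the preceding layer.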

#### 4.3.3. Network Training

## 5. Data Simulation Verification Results

#### 5.1. Basic Settings

#### 5.2. Data Generation and Processing

#### 5.3. Noise Robustness Analysis

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Huo, C.Y.; Yin, H.C.; Wei, X.; Xing, X.Y.; Liang, M. Attitude estimation method of space targets by 3D reconstruction of principal axis from ISAR image. Procedia Comput. Sci. **2019**, 147, 158–164.
- Du, R.; Liu, L.; Bai, X.; Zhou, Z.; Zhou, F. Instantaneous attitude estimation of spacecraft utilizing joint optical-and-ISAR observation. IEEE Trans. Geosci. Remote Sens. **2022**, 60, 5112114.
- Wang, J.; Du, L.; Li, Y.; Lyu, G.; Chen, B. Attitude and size estimation of satellite targets based on ISAR image interpretation. IEEE Trans. Geosci. Remote Sens. **2021**, 60, 5109015.
- Zhou, Y.; Zhang, L.; Cao, Y. Attitude estimation for space targets by exploiting the quadratic phase coefficients of inverse synthetic aperture radar imagery. IEEE Trans. Geosci. Remote Sens. **2019**, 57, 3858–3872.
- Zhou, Y.; Zhang, L.; Cao, Y. Dynamic estimation of spin spacecraft based on multiple-station ISAR images. IEEE Trans. Geosci. Remote Sens. **2020**, 58, 2977–2989.
- Wang, J.; Li, Y.; Song, M.; Xing, M. Joint estimation of absolute attitude and size for satellite targets based on multi-feature fusion of single ISAR image. IEEE Trans. Geosci. Remote Sens. **2022**, 60, 5111720.
- Wang, F.; Eibert, T.F.; Jin, Y.Q. Simulation of ISAR imaging for a space target and reconstruction under sparse sampling via compressed sensing. IEEE Trans. Geosci. Remote Sens. **2015**, 53, 3432–3441.
- Zhou, Y.; Wei, S.; Zhang, L.; Zhang, W.; Ma, Y. Dynamic estimation of spin satellite from the single-station ISAR image sequence with the hidden Markov model. IEEE Trans. Aerosp. Electron. Syst. **2022**, 58, 4626–4638.
- Kou, P.; Liu, Y.; Zhong, W.; Tian, B.; Wu, W.; Zhang, C. Axial attitude estimation of spacecraft in orbit based on ISAR image sequence. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2021**, 14, 7246–7258.
- Wang, Y.; Cao, R.; Huang, X. ISAR imaging of maneuvering target based on the estimation of time varying amplitude with Gaussian window. IEEE Sens. J. **2019**, 19, 11180–11191.
- Xue, R.; Bai, X.; Zhou, F. SAISAR-Net: A robust sequential adjustment ISAR image classification network. IEEE Trans. Geosci. Remote Sens. **2021**, 60, 5214715.
- Xie, P.; Zhang, L.; Ma, Y.; Zhou, Y.; Wang, X. Attitude estimation and geometry inversion of satellite based on oriented object detection. IEEE Geosci. Remote Sens. Lett. **2022**, 19, 4023505.
- Tang, S.; Yang, X.; Shajudeen, P.; Sears, C.; Taraballi, F.; Weiner, B.; Tasciotti, E.; Dollahon, D.; Park, H.; Righetti, R. A CNN-based method to reconstruct 3-D spine surfaces from US images in vivo. Med. Image Anal. **2021**, 74, 102221.
- Kim, H.; Lee, K.; Lee, D.; Baek, N. 3D Reconstruction of Leg Bones from X-Ray Images Using CNN-Based Feature Analysis. In Proceedings of the 2019 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Republic of Korea, 16–18 October 2019; pp. 669–672.
- Joseph, S.S.; Dennisan, A. Optimised CNN based brain tumour detection and 3D reconstruction. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. **2022**, 1–16.
- Ge, Y.; Zhang, Q.; Shen, Y.; Sun, Y.; Huang, C. A 3D reconstruction method based on multi-views of contours segmented with CNN-transformer for long bones. Int. J. Comput. Assist. Radiol. Surg. **2022**, 17, 1891–1902.
- Murez, Z.; Van As, T.; Bartolozzi, J.; Sinha, A.; Badrinarayanan, V.; Rabinovich, A. Atlas: End-to-End 3D Scene Reconstruction from Posed Images. In Proceedings of the European Conference on Computer Vision 2020, Glasgow, UK, 23–28 August 2020; pp. 414–431.
- Pistellato, M.; Bergamasco, F.; Torsello, A.; Barbariol, F.; Yoo, J.; Jeong, J.Y.; Benetazzo, A. A physics-driven CNN model for real-time sea waves 3D reconstruction. Remote Sens. **2021**, 13, 3780.
- Winarno, E.; Al Amin, I.H.; Hartati, S.; Adi, P.W. Face recognition based on CNN 2D-3D reconstruction using shape and texture vectors combining. Indones. J. Electr. Eng. Inform. **2020**, 8, 378–384.
- Tong, Z.; Gao, J.; Zhang, H. Recognition, location, measurement, and 3D reconstruction of concealed cracks using convolutional neural networks. Constr. Build. Mater. **2017**, 146, 775–787.
- Radenović, F.; Tolias, G.; Chum, O. Fine-tuning CNN image retrieval with no human annotation. IEEE Trans. Pattern Anal. Mach. Intell. **2019**, 41, 1655–1668.
- Afifi, A.J.; Magnusson, J.; Soomro, T.A.; Hellwich, O. Pixel2Point: 3D object reconstruction from a single image using CNN and initial sphere. IEEE Access **2020**, 9, 110–121.
- Space Based Space Surveillance SBSS. Available online: http://www.globalsecurity.org/space/systems/sbss.htm (accessed on 26 December 2022).
- Sharma, J. Space-based visible space surveillance performance. J. Guid. Control Dyn. **2000**, 23, 153–158.
- Ogawa, N.; Terui, F.; Mimasu, Y.; Yoshikawa, K.; Ono, G.; Yasuda, S.; Matsushima, K.; Masuda, T.; Hihara, H.; Sano, J.; et al. Image-based autonomous navigation of Hayabusa2 using artificial landmarks: The design and brief in-flight results of the first landing on asteroid Ryugu. Astrodynamics **2020**, 4, 89–103.
- Anzai, Y.; Yairi, T.; Takeishi, N.; Tsuda, Y.; Ogawa, N. Visual localization for asteroid touchdown operation based on local image features. Astrodynamics **2020**, 4, 149–161.
- Kelsey, J.M.; Byrne, J.; Cosgrove, M.; Seereeram, S.; Mehra, R.K. Vision-Based Relative Pose Estimation for Autonomous Rendezvous and Docking. In Proceedings of the 2006 IEEE Aerospace Conference, Big Sky, MT, USA, 4–11 March 2006; p. 20.
- Cinelli, M.; Ortore, E.; Laneve, G.; Circi, C. Geometrical approach for an optimal inter-satellite visibility. Astrodynamics **2021**, 5, 237–248.
- Wang, H.; Zhang, W. Infrared characteristics of on-orbit targets based on space-based optical observation. Opt. Commun. **2013**, 290, 69–75.
- Zhang, H.; Jiang, Z.; Elgammal, A. Satellite recognition and pose estimation using homeomorphic manifold analysis. IEEE Trans. Aerosp. Electron. Syst. **2015**, 51, 785–792.
- Yang, X.; Wu, T.; Wang, N.; Huang, Y.; Song, B.; Gao, X. HCNN-PSI: A hybrid CNN with partial semantic information for space target recognition. Pattern Recognit. **2020**, 108, 107531.
- Guthrie, B.; Kim, M.; Urrutxua, H.; Hare, J. Image-based attitude determination of co-orbiting satellites using deep learning technologies. Aerosp. Sci. Technol. **2022**, 120, 107232.
- Shi, J.; Zhang, R.; Guo, S.; Yang, Y.; Xu, R.; Niu, W.; Li, J. Space targets adaptive optics images blind restoration by convolutional neural network. Opt. Eng. **2019**, 58, 093102.
- De Vittori, A.; Cipollone, R.; Di Lizia, P.; Massari, M. Real-time space object tracklet extraction from telescope survey images with machine learning. Astrodynamics **2022**, 6, 205–218.
- Zhou, Z.; Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. UNet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans. Med. Imaging **2020**, 39, 1856–1867.
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical vision transformer using shifted windows. arXiv **2021**, arXiv:2103.14030.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is All You Need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017.

| Orbital element | Sat_Target | Sat_Obs |
|---|---|---|
| Epoch (UTC) | 27 November 2022 16:00:00.000 | 27 November 2022 16:00:00.000 |
| Ha (km) | 700 | 700 |
| Hp (km) | 600 | 350 |
| i (°) | 43 | 99 |
| Ω (°) | 290 | 10 |
| ω (°) | 100 | 0 |
| M (°) | 200 | 316.5 |
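The apogee/perigee heights in the table can be converted to the semi-major axis and eccentricity with the standard two-body relations. A minimal sketch, assuming the WGS-84 equatorial radius (the paper may use a slightly different Earth radius, and the function name is ours):

```python
RE = 6378.137  # assumed Earth equatorial radius (WGS-84), km

def elements_from_heights(ha_km, hp_km):
    """Recover semi-major axis a and eccentricity e from apogee/perigee heights."""
    ra, rp = RE + ha_km, RE + hp_km   # apogee and perigee radii
    a = (ra + rp) / 2.0               # semi-major axis, km
    e = (ra - rp) / (ra + rp)         # eccentricity
    return a, e

a_t, e_t = elements_from_heights(700.0, 600.0)  # Sat_Target: near-circular LEO
a_o, e_o = elements_from_heights(700.0, 350.0)  # Sat_Obs: slightly more eccentric
```

Both orbits come out near-circular (e below 0.025), consistent with the LEO geometry the scenario assumes.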

| Node | ${\mathbf{X}}^{0,0–4}$ | ${\mathbf{X}}^{1,0–3}$ | ${\mathbf{X}}^{2,0–2}$ | ${\mathbf{X}}^{3,0–1}$ | ${\mathbf{X}}^{4,0}$ |
|---|---|---|---|---|---|
| Number of convolution kernels | 32 | 64 | 128 | 256 | 512 |

| Parameter | Value |
|---|---|
| Carrier frequency | 15 GHz |
| Bandwidth | 1.6 GHz |
| Pitch angle | 0° |
| Accumulation angle | 5.1° |
| Distance points | 256 |
| Azimuth points | 256 |
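These radar settings imply the achievable image resolution via the standard ISAR relations (range resolution c/2B, cross-range resolution λ/2Δθ). A quick check, using textbook formulas rather than anything derived in this paper:

```python
import math

C = 299792458.0                 # speed of light, m/s

fc = 15e9                       # carrier frequency from the table, Hz
B = 1.6e9                       # bandwidth, Hz
d_theta = math.radians(5.1)     # accumulation angle, rad

range_res = C / (2 * B)                  # slant-range resolution, m
cross_res = (C / fc) / (2 * d_theta)     # cross-range resolution ~ lambda / (2 * d_theta), m
```

Both resolutions land near 0.1 m, which is fine enough to resolve structural features (solar panels, body edges) used for attitude estimation.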

| | z/° | y/° | x/° |
|---|---|---|---|
| Without image enhancement | 1.3867 | 1.2231 | 1.7971 |
| With image enhancement | 0.7302 | 0.7050 | 0.9579 |

| | | z/° | y/° | x/° |
|---|---|---|---|---|
| Label | | −27.0569 | −20.4094 | 20.2146 |
| Without image enhancement | Estimated value | −27.9102 | −21.9417 | 19.6320 |
| | Error | 0.8533 | 1.5322 | 0.5827 |
| With image enhancement | Estimated value | −27.1155 | −19.9481 | 19.5822 |
| | Error | 0.0587 | −0.4613 | 0.6325 |

| | | z/° | y/° | x/° |
|---|---|---|---|---|
| Label | | −13.5862 | −35.0809 | −14.4134 |
| Without image enhancement | Estimated value | −14.7988 | −36.1443 | −15.7292 |
| | Error | 1.2125 | 1.0634 | 1.3158 |
| With image enhancement | Estimated value | −13.5347 | −35.6672 | −14.7969 |
| | Error | −0.0515 | 0.5863 | 0.3835 |

| | SNR | z/° | y/° | x/° |
|---|---|---|---|---|
| Without image enhancement | 0 dB | 1.4405 | 1.4354 | 1.9950 |
| | 5 dB | 1.3620 | 1.3414 | 1.8249 |
| | 10 dB | 1.3100 | 1.3210 | 1.7834 |
| With image enhancement | 0 dB | 0.6179 | 0.8326 | 0.8755 |
| | 5 dB | 0.6193 | 0.8047 | 0.8147 |
| | 10 dB | 0.6163 | 0.8052 | 0.8047 |

| | SNR | | z/° | y/° | x/° |
|---|---|---|---|---|---|
| Label | | | 36.9615 | −36.9176 | −21.4118 |
| Without image enhancement | 0 dB | Estimated value | 37.9141 | −37.9030 | −18.1337 |
| | | Error | −0.9525 | 0.9853 | −3.2781 |
| | 5 dB | Estimated value | 37.4324 | −37.2864 | −19.9296 |
| | | Error | −0.4708 | 0.3687 | −1.4822 |
| | 10 dB | Estimated value | 37.6068 | −36.9699 | −19.4128 |
| | | Error | −0.6452 | 0.0522 | −1.9990 |
| With image enhancement | 0 dB | Estimated value | 37.0437 | −37.0494 | −21.7160 |
| | | Error | −0.0821 | 0.1318 | 0.3042 |
| | 5 dB | Estimated value | 36.8439 | −37.1574 | −21.9101 |
| | | Error | 0.1177 | 0.2398 | 0.4983 |
| | 10 dB | Estimated value | 36.9237 | −36.8922 | −21.3185 |
| | | Error | 0.0379 | −0.0254 | −0.0933 |

| | SNR | | z/° | y/° | x/° |
|---|---|---|---|---|---|
| Label | | | −10.3241 | −17.8594 | −41.7762 |
| Without image enhancement | 0 dB | Estimated value | −9.9148 | −18.7568 | −42.4823 |
| | | Error | −0.4093 | 0.8973 | 0.7061 |
| | 5 dB | Estimated value | −11.0111 | −20.0738 | −42.1863 |
| | | Error | 0.6869 | 2.2144 | 0.4101 |
| | 10 dB | Estimated value | −11.3889 | −19.4947 | −41.9493 |
| | | Error | 1.0647 | 1.6352 | 0.1731 |
| With image enhancement | 0 dB | Estimated value | −10.9262 | −17.8358 | −41.8396 |
| | | Error | 0.6021 | −0.0236 | 0.0634 |
| | 5 dB | Estimated value | −10.7412 | −17.5429 | −42.0876 |
| | | Error | 0.4171 | −0.3165 | 0.3114 |
| | 10 dB | Estimated value | −10.9565 | −17.6848 | −41.9112 |
| | | Error | 0.6323 | −0.1746 | 0.1350 |

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Hou, C.; Zhang, R.; Yang, K.; Li, X.; Yang, Y.; Ma, X.; Guo, G.; Yang, Y.; Liu, L.; Zhou, F.
Non-Cooperative Target Attitude Estimation Method Based on Deep Learning of Ground and Space Access Scene Radar Images. *Mathematics* **2023**, *11*, 745.
https://doi.org/10.3390/math11030745
