Communication

Non-Cooperative Target Ranging Based on High-Orbit Single-Star Temporal–Spatial Characteristics

1 Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China
2 School of Electronics and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(23), 11232; https://doi.org/10.3390/app142311232
Submission received: 24 October 2024 / Revised: 20 November 2024 / Accepted: 29 November 2024 / Published: 2 December 2024
(This article belongs to the Section Aerospace Science and Engineering)

Abstract

A visible light camera payload with star-sensitive functionality was installed to measure the distance between a non-cooperative target satellite and a high-orbit satellite. The rotation matrix was used to calculate the pointing vector from the optical axis of the satellite's star-sensitive camera to the target satellite. Imaging was performed at multiple positions, and a moving-window approach was used to establish two sets of equations relating the pointing vectors to the positions of the two satellites. To simplify the calculations, the target satellite's eccentricity was assumed to be small (0 to 0.001), allowing elliptical orbits to be approximated as circular. Additionally, short-interval (1 min) imaging measurements were taken, assuming a small inclination of the target satellite (0.0° to 0.4°). The resulting ranging model achieved high accuracy, with a ranging error of less than 5% of the actual distance.

1. Introduction

For situational awareness in space, perceptual measurements of threatening targets, such as space debris, are important safety measures [1,2,3]. In particular, distance measurements of non-cooperative threat targets are critical [4,5]. Visual ranging between two high-orbit satellites requires the host satellite to continuously track and measure the distance to the perceived non-cooperative target satellite [6].
Traditional visual ranging methods include monocular and binocular visual ranging. Monocular visual ranging typically uses point-correspondence calibration to obtain depth information from images [7,8]. This method deciphers the transformation relationship between corresponding points in different coordinate systems. Because point-correspondence calibration requires a predetermined camera position and orientation, any parameter change necessitates recalibration, limiting the method to fixed camera positions [9,10,11]. Cameras mounted on moving platforms (e.g., a camera mounted on a satellite) are therefore of limited applicability, because camera parameters change during motion and impede accurate ranging.
For spaceborne monocular cameras, continuous tracking of non-cooperative targets is necessary to keep the target at the center of the image [12]. However, this tracking means that a usable pixel displacement of the target on the image plane cannot be guaranteed. Moreover, because the process involves imaging with a single camera, a binocular model cannot be constructed [13], which restricts the application of ranging algorithms.
Binocular visual ranging utilizes pixel offsets of the same object on the two imaging planes to determine the distance mathematically from the camera focal length, the pixel offsets, and the actual distance between the two cameras [14,15,16]. The principle of binocular ranging is shown in Figure 1, where $D$ represents the required distance, $f$ denotes the camera focal length, $B$ indicates the center distance between the two cameras, $P$ represents a point on the object, and $O_1$ and $O_2$ are the optical centers of the two cameras. The projections of object point $P$ on the imaging planes of the two cameras are $P_1$ and $P_2$, respectively. By applying the principle of similar triangles ($\triangle PO_1O_2 \sim \triangle PP_1P_2$), the distance can be calculated using the formula $D = fB/(X_1 - X_2)$, where the focal length ($f$) and the center distance ($B$) can be obtained through calibration. Therefore, the distance can be obtained by determining the parallax $d = X_1 - X_2$ [17].
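As an illustration, the binocular formula above can be applied directly. The numeric values below are hypothetical examples, not taken from the paper:

```python
# Illustrative sketch of the binocular ranging formula D = f*B / (X1 - X2).
# All numeric values below are hypothetical examples, not from the paper.

def binocular_distance(f: float, B: float, x1: float, x2: float) -> float:
    """Distance from disparity: D = f*B / d, with parallax d = x1 - x2.

    f  -- focal length (same units as x1, x2, e.g. pixels)
    B  -- baseline between the two optical centers (e.g. meters)
    x1 -- image x-coordinate of the point in the left camera
    x2 -- image x-coordinate of the point in the right camera
    """
    d = x1 - x2  # parallax (disparity)
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    return f * B / d

# Example: f = 1000 px, baseline 0.5 m, disparity 25 px -> D = 20 m
print(binocular_distance(1000.0, 0.5, 412.0, 387.0))  # 20.0
```

Note that the result degrades rapidly as the disparity approaches zero, which is why this approach is impractical for a single spaceborne camera tracking a distant target.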
In this study, to overcome the limitations of previous methods, we utilized the characteristics of high-orbit satellites in the following ways: (1) spatially, they have low eccentricity and a small inclination angle, which simplifies the motion model of non-cooperative target satellites; (2) temporally, the short-term motion of the host satellite is used to slide and sample pointing vectors continuously to construct a set of equations; and (3) by combining the aforementioned small inclination with short sampling intervals, the target satellite's motion along the Z-axis can further be assumed to be negligible. Based on these considerations, a monocular visual distance measurement model for non-cooperative targets between high-orbit satellites is proposed.

2. Establishment of the Distance Measurement Model

To obtain monocular visual distance measurements between a host satellite and non-cooperative targets in high Earth orbit, a model for measuring the distance between satellite S and target satellite T needs to be established. Satellite S is assumed to be equipped with a star sensor camera capable of real-time tracking and pointing toward the target satellite T . The star sensor camera’s star-sensitive function provides the rotation matrix M from the celestial coordinate system (which aligns the Z-axis with the Earth’s rotation axis) to the star sensor coordinate system (the star sensor camera can obtain the rotation matrix M through star map matching) [18,19]. The rotation matrix M J 2000 from the star sensor coordinate system to the celestial coordinate system is calculated through matrix transposition, as shown in Equation (1).
$$M_{J2000} = M^{T} \tag{1}$$
In this paper, the superscript $T$ denotes the transpose operation, and the subscript $T$ or $t$ represents the target satellite. The pointing vector $v_T$ from the onboard camera's optical axis to the target satellite $T$ can be calculated from the rotation matrix $M_{J2000}$ (star sensor coordinate system to celestial coordinate system) using Equation (2).
$$v_{T} = M_{J2000}\, U \tag{2}$$
where $U$ represents the vector $[0, 0, 1]^{T}$. As shown in Figure 2, $[0, 0, 1]^{T}$ represents the unit vector in the z-direction of the camera within the star sensor coordinate system. This vector is transformed into the pointing vector of the camera axis to the target within the celestial coordinate system through the $M_{J2000}$ matrix.
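Equations (1) and (2) can be sketched as follows. The rotation matrix used here is an arbitrary example rotation, not flight data:

```python
import numpy as np

# Sketch of Equations (1)-(2): the star sensor outputs a rotation matrix M
# (celestial frame -> sensor frame); its transpose maps the boresight unit
# vector U = [0, 0, 1]^T back into the celestial (J2000) frame, giving the
# pointing vector toward the target. The example matrix is arbitrary.

def pointing_vector(M: np.ndarray) -> np.ndarray:
    """Return v_T = M_J2000 @ U with M_J2000 = M^T (Equations (1)-(2))."""
    M_J2000 = M.T                   # Equation (1): transpose inverts a rotation
    U = np.array([0.0, 0.0, 1.0])   # camera boresight in the sensor frame
    return M_J2000 @ U              # Equation (2)

# Example: a 90-degree rotation about the X axis
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
v = pointing_vector(M)
print(v)  # [0. 1. 0.]
```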
During the ranging process, both satellites are assumed to maintain their respective orbits. The positional changes that occur during the motion of the two satellites are illustrated in Figure 3, in which OXYZ represents the celestial coordinate system. Satellite $S$ moves through positions $S_1 \to S_2 \to S_3$ in its orbit while, simultaneously, target satellite $T$ moves through positions $T_1 \to T_2 \to T_3$ in its orbit. As satellite $S$ moves from $S_1$ to $S_2$, the coordinates of these positions are $S_1(x_{s1}, y_{s1}, z_{s1})$ and $S_2(x_{s2}, y_{s2}, z_{s2})$. When satellite $S$ is at positions $S_1$ and $S_2$, the corresponding positions of target satellite $T$ are $T_1(x_{t1}, y_{t1}, z_{t1})$ and $T_2(x_{t2}, y_{t2}, z_{t2})$. At position $S_1$, satellite $S$ points to target satellite $T$ at $T_1$; similarly, at position $S_2$, it points to $T$ at $T_2$. The corresponding pointing vectors are $v_1(v_{x1}, v_{y1}, v_{z1})$ and $v_2(v_{x2}, v_{y2}, v_{z2})$, which can be calculated using Equations (3) and (4), derived from Equations (1) and (2), respectively:
$$v_{1T} = M_{1,J2000}\, U \tag{3}$$
$$v_{2T} = M_{2,J2000}\, U \tag{4}$$
where $M_{1,J2000}$ and $M_{2,J2000}$ are the rotation matrices from the star sensor coordinate system to the celestial coordinate system at positions $S_1$ and $S_2$, respectively.
Assuming the unknown coefficients $k_1$ and $k_2$, we construct the vector equation for $v_1$, where satellite $S$ at position $S_1$ points to target satellite $T$ at position $T_1$, and perform a similar construction for $v_2$:
$$k_{1} v_{1} = \overrightarrow{S_{1}T_{1}} \tag{5}$$
$$k_{2} v_{2} = \overrightarrow{S_{2}T_{2}} \tag{6}$$
Assuming a short time interval between the two samples (e.g., 1 min) and a near-zero orbital inclination, the motion of target satellite $T$ along the Z-axis can be approximated as nearly zero (the high-orbit target follows a relatively stable orbit with minimal movement in the Z-axis direction) [20,21]. The variation along the Z-axis is illustrated in Figure 4, and Figure 5 shows the 24 h orbital Z-axis variation relative to the X- and Y-axes. Analysis at a 1 min sampling interval shows minimal Z-axis change during ranging (for a 0.4° inclination within a 600 km range, the Z-axis drift is roughly 0.2 km/min). Consequently, we assume Equation (7):
$$z_{t1} = z_{t2} \tag{7}$$
According to Kepler’s first law, the satellite’s orbit is elliptical [22,23], as shown in Equation (8), where a represents the semimajor axis, e indicates the eccentricity, r denotes the polar radius, and θ represents the true anomaly, as shown in Figure 6.
$$a(1 - e^{2}) = r\,(1 + e\cos\theta) \tag{8}$$
Based on the characteristics of high-orbit satellites, when the eccentricity $e$ is small, the elliptical orbit can be approximated as a circular orbit [24]. Additionally, when the inclination is small, the orbit's projection onto the XY plane forms a circle, as shown in Figure 5 and Figure 7. Therefore, an approximation can be made, as shown in Equation (9).
$$x_{t1}^{2} + y_{t1}^{2} = x_{t2}^{2} + y_{t2}^{2} \tag{9}$$
Furthermore, the variation in XYZ coordinates in the J2000 coordinate system can be calculated by converting the orbital elements to the J2000 coordinate system. First, the orbital elements are defined as follows: semi-major axis (a), eccentricity (e), inclination (i), right ascension of the ascending node (Ω), argument of periapsis (ω), and mean anomaly (M). Additionally, the eccentric anomaly (E) and true anomaly (f) are defined for the computational process. From Kepler’s equation, we can obtain the following:
$$M = E - e\sin E \tag{10}$$
where the eccentric anomaly (E) can be calculated iteratively from the mean anomaly (M). The satellite’s position in the J2000 coordinate system can be calculated [25]:
$$\begin{bmatrix} x_{J2000} \\ y_{J2000} \\ z_{J2000} \end{bmatrix} = \frac{a(1-e^{2})}{1+e\cos f} \begin{bmatrix} \cos\Omega\cos(\omega+f) - \sin\Omega\sin(\omega+f)\cos i \\ \sin\Omega\cos(\omega+f) + \cos\Omega\sin(\omega+f)\cos i \\ \sin(\omega+f)\sin i \end{bmatrix} \tag{11}$$
Equation (11) includes the exact expression for $z_{J2000}$. When the eccentricity ($e$) approaches 0, we have the following relationship:
$$z_{J2000} = a\sin i\,\sin(\omega + f) \tag{12}$$
In this case, a, i, and ω are constants, and the value of f ranges from 0 to 2π. Assuming a short time interval between two samples (e.g., 1 min), it can be considered that the two samples’ true anomaly (f) values are close to each other, leading to small changes in the Z-axis. This gives the expression in Equation (7).
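The iterative solution of Kepler's equation mentioned above can be sketched with a standard Newton iteration on Equation (10). The tolerance, iteration cap, and test values below are our own illustrative choices, not from the paper:

```python
import math

# Sketch of solving Kepler's equation M = E - e*sin(E) (Equation (10)) for
# the eccentric anomaly E by Newton iteration, as needed before Equation (11).
# Tolerance and iteration cap are illustrative choices.

def solve_kepler(M: float, e: float, tol: float = 1e-12, max_iter: int = 50) -> float:
    """Return eccentric anomaly E (rad) for mean anomaly M (rad), eccentricity e."""
    E = M  # good starting guess for small e (near-circular high orbits)
    for _ in range(max_iter):
        # Newton step on f(E) = E - e*sin(E) - M, with f'(E) = 1 - e*cos(E)
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# For a near-circular high orbit (e ~ 0.0005) the residual vanishes quickly
E = solve_kepler(1.0, 0.0005)
print(abs(E - 0.0005 * math.sin(E) - 1.0) < 1e-12)  # True
```

For the small eccentricities assumed in this paper, E stays very close to M, which is why the circular approximation of Equation (9) holds so well.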
From Equation (11), the following calculation can be performed:
$$x_{J2000}^{2} + y_{J2000}^{2} = a^{2}\cos^{2}(\omega+f) + a^{2}\cos^{2} i\,\sin^{2}(\omega+f) \tag{13}$$
Furthermore, when $i$ is small, we can approximate $\cos^{2} i \approx 1$, yielding Equation (14):
$$x_{J2000}^{2} + y_{J2000}^{2} = a^{2} \tag{14}$$
Thus, when the change in the true anomaly (f) is small, the equation can be written as in Equation (9).
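A minimal sketch of Equation (11) numerically confirms the near-circular approximation of Equations (13) and (14). The element values below are illustrative, loosely matching the high-orbit scenario of this paper:

```python
import math

# Sketch of Equation (11): orbital elements to a J2000 position vector.
# Element values below are illustrative (a near-geostationary orbit).

def elements_to_j2000(a, e, i, Omega, omega, f):
    """Position (x, y, z) in J2000 from orbital elements; angles in radians."""
    r = a * (1.0 - e * e) / (1.0 + e * math.cos(f))  # polar radius, Eq. (8)
    cO, sO = math.cos(Omega), math.sin(Omega)
    cwf, swf = math.cos(omega + f), math.sin(omega + f)
    ci, si = math.cos(i), math.sin(i)
    x = r * (cO * cwf - sO * swf * ci)
    y = r * (sO * cwf + cO * swf * ci)
    z = r * (swf * si)
    return x, y, z

# Small e and small i: x^2 + y^2 stays close to a^2 (Equations (13)-(14))
a = 42166.0  # km
x, y, z = elements_to_j2000(a, 0.0005, math.radians(0.2), 0.3, 0.4, 1.0)
print(abs(math.hypot(x, y) - a) < a * 0.001)  # True: within 0.1% of a
```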
Based on Equations (1), (3)–(7) and (9), the coefficient $k_2$ can be solved for, yielding two solutions whose absolute values are $d_{1}^{abs}$ and $d_{2}^{abs}$. Equation (15) is then used to select the minimum coefficient.
$$d = \min(d_{1}^{abs},\, d_{2}^{abs}) \tag{15}$$
As shown in Figure 3, when the two satellites move to position 2 (establishing equations using positions 1 and 2), the calculated coefficient $k_2$ for the pointing vector takes two values. This is because, according to Equations (7) and (9), the pointing vector intersects the surface of the cylinder on which target satellite $T$ is located at two points. The smaller coefficient corresponds to the observable target satellite; the other intersection point lies farther along the line of sight and cannot be the observed target. Thus, by selecting the minimum coefficient, the distance from satellite $S$ to target satellite $T$ can be determined using Equation (16).
$$d_{last} = d\,\sqrt{v_{x2}^{2} + v_{y2}^{2} + v_{z2}^{2}} \tag{16}$$
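The two-root selection of Equations (15) and (16) can be sketched as a line–cylinder intersection. This is a simplification: we assume the cylinder radius is already known, whereas in the paper it emerges from the full equation set; all numeric values are illustrative:

```python
import math

# Simplified sketch of Equations (15)-(16): the line S2 + k*v2 meets the
# cylinder x^2 + y^2 = R^2 at two points; the observable target corresponds
# to the smaller |k|. We assume the cylinder radius R is already known (in
# the paper it comes out of the full equation set); numbers are illustrative.

def range_to_cylinder(S2, v2, R):
    """Distance from S2 along v2 to the nearer intersection with the
    cylinder x^2 + y^2 = R^2 (Equations (15)-(16))."""
    xs, ys, _ = S2
    vx, vy, vz = v2
    # (xs + k*vx)^2 + (ys + k*vy)^2 = R^2  ->  quadratic A*k^2 + B*k + C = 0
    A = vx * vx + vy * vy
    B = 2.0 * (xs * vx + ys * vy)
    C = xs * xs + ys * ys - R * R
    disc = B * B - 4.0 * A * C
    if disc < 0:
        raise ValueError("line of sight misses the cylinder")
    k1 = (-B + math.sqrt(disc)) / (2.0 * A)
    k2 = (-B - math.sqrt(disc)) / (2.0 * A)
    d = min(abs(k1), abs(k2))                    # Equation (15)
    return d * math.sqrt(vx**2 + vy**2 + vz**2)  # Equation (16)

# Host at 42,166 km on the x-axis, looking outward at a 42,176 km cylinder
print(range_to_cylinder((42166.0, 0.0, 0.0), (1.0, 0.0, 0.0), 42176.0))  # 10.0
```

The second root here is the far side of the cylinder, tens of thousands of kilometers away, which is why taking the minimum reliably selects the observable target.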

3. Experimental Results

We used simulation software (STK 11.0) to establish a scenario with the host and target satellites to validate the algorithm proposed in this paper. First, we set the orbital parameters for the host satellite and the different target satellites. At each sampling time, the orbital parameters of the host satellite and its pointing vector toward the target satellite are known. The actual distance between the two satellites, obtained from the simulation software, serves as the ground truth. The algorithm itself has no knowledge of the target satellite's orbital parameters. We compared the distance calculated by the algorithm with the ground truth to analyze the algorithm's error.
The orbital parameters for the host satellite in the ranging experiment were as follows: semimajor axis, 42,166 km; eccentricity, 0.0005; and inclination, 5°. For the target satellite, the corresponding values were as follows: semimajor axis, 42,176 km; eccentricity, 0.0002; and inclination, 0.0°/0.1°/0.2°/0.3°/0.4°/1.0°.
Figure 8 shows the relationship between the calculated and actual distances at an inclination of 0°, and Figure 9, Figure 10, Figure 11 and Figure 12 display the error curves for inclinations of 0°, 0.2°, 0.4°, and 1.0°, respectively. Ranging accuracy within 600 km was better than 5% of the detection distance for inclinations between 0.0° and 0.4°. Furthermore, for inclinations between 0° and 1°, ranging accuracy within a 100 km range was guaranteed to be better than 5% of the detection distance.
The two “V” shapes occur because the ground track of the host satellite forms a figure-eight shape, resulting in two approaches and two retreat events between the two satellites. In Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14, we represent these two events with the symbols Move-1 and Move-2. Furthermore, we assume that the satellite (S) is an inclined geosynchronous satellite and that the target satellite (T) is a geosynchronous satellite with a small eccentricity (ranging from 0 to 0.001) and a small inclination (ranging from 0.0° to 0.4°). The ground track of the satellite (S) forms a figure-eight pattern, which is the trace created by the line connecting the satellite to the center of the Earth and its intersection with the Earth’s surface. The satellite (S) experiences two approaches and retreats from the target satellite (T), resulting in the red and black lines shown. In Figure 8, the black dots represent the calculated distance values from the model, the red dots represent the actual distances, and the green dots indicate the error.
As shown in Figure 13, under the first set of movements (Move-1), the error curves for different inclinations (0.0° to 0.4°) indicate a gradual reduction in error from 0.0° to 0.2°, followed by a gradual increase from 0.2° to 0.4°, with a maximum error of approximately 10 km at around 500 km. The ranging error is less than 2% of the measured distance. Figure 14 demonstrates that under the second set of movements (Move-2), the maximum error at a distance of 600 km is less than 30 km. The ranging error is less than 5% of the measured distance.

4. Discussion

The range model developed herein was successfully deployed on a satellite platform, as illustrated in Figure 15, for the distance measurement of high-orbit threats such as space debris. The system comprises a star-sensitive camera for target acquisition and outputting rotation matrices, a tracking turntable for real-time target tracking, and a field-programmable gate array (FPGA) for computational hardware. The algorithm operates on this hardware platform, leveraging the characteristics of the satellite’s motion and generating continuously changing pointing vectors, as well as its own position. This allows data acquisition at different time intervals. The model calculates the ranging results by continuously sampling two sets of data collected approximately 1 min apart. A sliding sampling method was adopted to compute results every 10 s to ensure real-time calculations.
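The sliding sampling scheme described above can be sketched as follows. The solver itself is omitted, and the pairing logic is a plausible reading of the 10 s sampling / 1 min window scheme rather than the flight implementation:

```python
from collections import deque

# Sketch of the sliding-window sampling described above: measurements arrive
# every 10 s, and each ranging solve pairs the newest sample with the one
# taken ~1 min earlier. The pairing logic is an illustrative reading of the
# scheme; the ranging solve itself (Section 2 model) is omitted.

SAMPLE_PERIOD_S = 10
WINDOW_S = 60

def sliding_pairs(samples):
    """Yield (old, new) sample pairs separated by WINDOW_S seconds."""
    buf = deque(maxlen=WINDOW_S // SAMPLE_PERIOD_S + 1)
    for s in samples:
        buf.append(s)
        if len(buf) == buf.maxlen:
            yield buf[0], buf[-1]  # 60 s apart; one result every 10 s

# With sample timestamps 0, 10, ..., 120 s, pairs are (0,60), (10,70), ..., (60,120)
pairs = list(sliding_pairs(range(0, 130, 10)))
print(pairs[0], pairs[-1])  # (0, 60) (60, 120)
```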
In Equation (7), an assumption is made that the target satellite’s Z-axis remains unchanged. If we further assume that the distance measurements from two observations are equal (satellite S measures the same distance at S1 and S2), then the distance can be calculated using Equations (17)–(19).
$$z_{s1} - z_{t1} = k_{1} v_{z1} \tag{17}$$
$$z_{s2} - z_{t2} = k_{2} v_{z2} \tag{18}$$
$$k_{1}\sqrt{v_{x1}^{2} + v_{y1}^{2} + v_{z1}^{2}} = k_{2}\sqrt{v_{x2}^{2} + v_{y2}^{2} + v_{z2}^{2}} \tag{19}$$
These equations rely heavily on the variation in the pointing-vector component $v_z$ to calculate the result. In practice, $v_z$ is a component of a normalized vector, and the change in $v_z$ between two observations appears only around the sixth decimal place. This introduces significant errors into the calculation and is challenging to implement in an FPGA.
The star map matching process for the star-sensitive camera is conducted in the celestial coordinate system. If all computational processes are shifted to the WGS84 coordinate system, it may be feasible to consider solving the trigonometric relationships through two observations. However, this would entail complex coordinate system transformations and could lead to significant errors in the case of extremely small angles.
In contrast, the proposed model is straightforward, requiring no coordinate system transformations and avoiding significant errors arising from small angles. Instead, errors arise from the assumptions within the equations, and refining these assumptions can further enhance precision.

5. Conclusions

No prior model is available for distance measurement in the continuous tracking process of high-orbit non-cooperative targets. Additionally, the real-time computational resources on the satellite hardware platform are limited. This necessitates the design of a simple yet effective computational model.
This model assumes that the high-orbit target satellite is relatively stationary, with a low eccentricity and minimal orbital inclination. The designed model was verified through both simulation and actual in-orbit testing. Within an inclination range of 0.0° to 0.4°, the ranging error within a 600 km range is less than 5% of the detection distance, whereas within an inclination range of 0.0° to 1.0°, the ranging error within a 100 km range is guaranteed to be less than 5% of the detection distance.

Author Contributions

Methodology, D.Z.; software, D.Z.; validation, Q.Z.; formal analysis, D.Z.; investigation, H.W.; resources, H.W.; data curation, D.Z.; writing—original draft preparation, D.Z.; writing—review and editing, Q.Z.; visualization, Q.Z.; supervision, H.W.; project administration, H.W.; funding acquisition, H.W. All authors have read and agreed to the published version of the manuscript.

Funding

We are grateful for the support of the Shaanxi Province Key Research and Development Plan.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kennewell, J.A.; Vo, B.-N. An overview of space situational awareness. In Proceedings of the 16th International Conference on Information Fusion (IEEE2013), Istanbul, Turkey, 9–12 July 2013; pp. 1029–1036. [Google Scholar]
  2. Lal, B.; Balakrishnan, A.; Caldwell, B.M.; Buenconsejo, R.S.; Carioscia, S.A. Global Trends in Space Situational Awareness (SSA) and Space Traffic Management (STM); Institute for Defense Analyses: Alexandria, VA, USA, 2022. [Google Scholar]
  3. Oltrogge, D.L.; Alfano, S. The technical challenges of better space situational awareness and space traffic management. J. Space Saf. Eng. 2019, 6, 72–79. [Google Scholar] [CrossRef]
  4. Adurthi, N.; Singla, P.; Majji, M. Mutual information based sensor tasking with applications to space situational awareness. J. Guid. Control Dyna. 2020, 43, 767–789. [Google Scholar] [CrossRef]
  5. Hilton, S.; Cairola, F.; Gardi, A.; Sabatini, R.; Pongsakornsathien, N.; Ezer, N. Uncertainty quantification for space situational awareness and traffic management. Sensors 2019, 19, 4361. [Google Scholar] [CrossRef]
  6. Zhang, S. High-orbit satellite magnitude estimation using photometric measurement method. In Proceedings of the MIPPR 2015: Remote Sensing Image Processing, Geographic Information Systems, and Other Applications (SPIE2015), Enshi, China, 31 October–1 November 2015; pp. 235–240. [Google Scholar]
  7. Gray, R.; Regan, D. Accuracy of estimating time to collision using binocular and monocular information. Vision. Res. 1998, 38, 499–512. [Google Scholar] [CrossRef]
  8. Guo, S.; Chen, S.; Liu, F.; Ye, X.; Yang, H. Binocular vision-based underwater ranging methods. In Proceedings of the IEEE International Conference on Mechatronics and Automation (ICMA, IEEE2017), Takamatsu, Japan, 6–9 August 2017; pp. 1058–1063. [Google Scholar]
  9. Cagenello, R.; Arditi, A.; Halpern, D.L.J.J.A. Binocular enhancement of visual acuity. J. Opt. Soc. Am. A 1993, 10, 1841–1848. [Google Scholar] [CrossRef] [PubMed]
  10. Deng, F.; Zhang, L.; Gao, F.; Qiu, H.; Gao, X.; Chen, J. Long-range binocular vision target geolocation using handheld electronic devices in outdoor environment. IEEE Trans. Image Process. 2020, 29, 5531–5541. [Google Scholar] [CrossRef]
  11. He, M.; Zhu, C.; Huang, Q.; Ren, B.; Liu, J. A review of monocular visual odometry. Vis. Comput. 2020, 36, 1053–1065. [Google Scholar] [CrossRef]
  12. Li, G.; Ge, R.; Loianno, G. Cooperative transportation of cable suspended payloads with Mavs using monocular vision and inertial sensing. IEEE Robot. Autom. Lett. 2021, 6, 5316–5323. [Google Scholar] [CrossRef]
  13. Tang, D.; Fang, Q.; Shen, L.; Hu, T. Onboard detection-tracking-localization. IEEE/ASME Trans. Mechatron. 2020, 25, 1555–1565. [Google Scholar] [CrossRef]
  14. Musch, D.C.; Niziol, L.M.; Gillespie, B.W.; Lichter, P.R.; Janz, N.K. Binocular measures of visual acuity and visual field versus binocular approximations. Ophthalmology 2017, 124, 1031–1038. [Google Scholar] [CrossRef] [PubMed]
  15. Qian, P.; Zhao, Z.; Chen, C.; Zeng, Z.; Li, X. Two eyes are better than one: Exploiting binocular correlation for diabetic retinopathy severity grading. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC, IEEE2021), Virtual, 1–5 November 2021; pp. 2115–2118. [Google Scholar]
  16. Xue, L.; Li, M.; Sun, A.; Jin, X.; Zhang, Y. Research on generalized error control mechanism of monocular vision ranging method. J. Phys. Conf. Ser. 2020, 1624, 052013. [Google Scholar] [CrossRef]
  17. Sun, X.; Jiang, Y.; Ji, Y.; Fu, W.; Yan, S.; Chen, Q.; Yu, B.; Gan, X. Distance measurement system based on binocular stereo vision. IOP Conf. Ser. Earth Environ. Sci. 2019, 252, 052051. [Google Scholar] [CrossRef]
  18. Gao, Y.; Sun, S.; Mao, X.; Wu, D.; Zhou, X. A new method of calibration for space debris target measurement. In Proceedings of the 3rd Asia-Pacific Conference on Image Processing, Electronics and Computers, Dalian, China, 14–16 April 2022; pp. 281–289. [Google Scholar]
  19. Lv, Q.; An, X.; Wang, D.; Wu, J. Research on error online calibration method of inertial/stellar refraction integrated navigation system. In Proceedings of the China Satellite Navigation Conference (CSNC 2021), Nanchang, China, 22–25 May 2021; Volume II, pp. 435–444. [Google Scholar]
  20. Zhang, J.; Zhao, S.; Yang, Y. Characteristic analysis for elliptical orbit hovering based on relative dynamics. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 2742–2750. [Google Scholar] [CrossRef]
  21. Nakasuka, S.; Funase, R.; Kimura, S.; Terui, F.; Yoshihara, K.; Yamamoto, T. On-orbit experiment of motion estimation and tracking of tumbling object in space. IFAC Proc. 2004, 37, 223–228. [Google Scholar] [CrossRef]
  22. Cariñena, J.F.; Rañada, M.F.; Santander, M.J.E.J.O.P. A new look at the Feynman ‘hodograph’ approach to the Kepler first law. Eur. J. Phys. 2016, 37, 025004. [Google Scholar] [CrossRef]
  23. Gingerich, O. Kepler then and now. Perspect. Sci. 2002, 10, 228–240. [Google Scholar] [CrossRef]
  24. Yu, K.C.; Sahami, K.; Denn, G.J.A.E.R. Student ideas about Kepler’s laws and planetary orbital motions. Astron. Ed. Rev. 2010, 9, 010108–010117. [Google Scholar] [CrossRef]
  25. Hugentobler, U.; Montenbruck, O. Satellite orbits and attitude. In Springer Handbook of Global Navigation Satellite Systems; Teunissen, P., Montenbruck, O., Eds.; Springer Nature: Cham, Switzerland, 2017; pp. 59–90. [Google Scholar]
Figure 1. Schematic of principle of binocular ranging.
Figure 2. Direction of the camera's optical axis.
Figure 3. Motion process and direction change of the two satellites. The blue line represents the orbit of the target satellite (T), the green line represents the orbit of the satellite (S), and OXYZ represents the celestial coordinate system.
Figure 4. High-orbit satellite at 0.161° inclination and change in Z-axis in the 24 h orbit (J2000 coordinate system). At short sampling intervals, such as 1 min, the variation in the Z-axis is approximated according to Equation (7).
Figure 5. High-orbit satellite at 0.161° inclination and the change in the 24 h orbit (J2000 coordinate system).
Figure 6. Schematic of Kepler's elliptical orbit.
Figure 7. Orbital projection on the XY plane at 0.161° inclination, as described by Equation (9).
Figure 8. Relationship between the calculated distance of the model and the real distance when the inclination is 0°.
Figure 9. Relationship between distance and error at an inclination of 0°.
Figure 10. Relationship between distance and error at an inclination of 0.2°.
Figure 11. Relationship between distance and error at an inclination of 0.4°.
Figure 12. Relationship between distance and error at an inclination of 1°.
Figure 13. Error curve in the first group (inclination 0.0°–0.4°).
Figure 14. Error curve in the second group (inclination 0.0°–0.4°).
Figure 15. Physical diagram of a ranging system. The system includes a star-sensitive camera, turntable, and calculation circuit to complete distance measurements between two objects, such as measuring the distance to space debris.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
