Abstract
Aiming at the relative pose estimation of non-cooperative targets in space traffic management tasks, a two-step pose estimation method based on spatially intersecting straight lines is proposed. It comprises three parts: (1) binocular vision is used to reconstruct the spatial straight lines, and the pose of the measured target in the measurement coordinate system is solved from the direction vectors of the lines and their intersection, yielding the initial value of the pose estimate; (2) the uncertainty of the spatial straight-line imaging is analyzed, an uncertainty description matrix of the line is constructed, and the line features are filtered accordingly; (3) the problems with current line distance measures are analyzed, a spatial line back-projection error is constructed in the parametric coordinate space, and the line imaging uncertainty is used to weight the projection error terms, establishing the optimization objective function of the pose estimation. Finally, a nonlinear optimization algorithm solves the above optimization problem iteratively to obtain high-precision pose estimation results. The experimental results show that the proposed two-step algorithm achieves high-precision and robust pose estimation for non-cooperative space targets: at a measurement distance of 10 m, the position accuracy reaches 10 mm and the attitude accuracy reaches 1°, which meets the pose estimation accuracy requirements of space traffic management.
1. Introduction
Relative pose estimation is one of the basic technologies of space on-orbit operations and traffic management tasks [1,2,3]. For estimating the relative pose between spacecraft, ground telemetry cannot provide high-precision 6-DoF (six-degree-of-freedom) parameters [4]. In recent years, pose estimation methods based on LiDAR (light detection and ranging) and on vision have been favored by researchers [5]. LiDAR is not restricted by lighting conditions; however, it is an active sensor, the data it collects are sparse, and its power consumption and mass are high, so it is difficult to obtain high-precision measurement results in tasks such as space traffic management. Compared with LiDAR-based methods, vision-based methods have the advantages of low cost, low power consumption, light weight, and full-field measurement, and are therefore widely used in aerospace missions [6].
For vision-based relative pose estimation of non-cooperative spacecraft, the main approaches are model-based, point-based, line-based, and deep learning-based methods [7]. Model-based methods combine the camera images with feature information from a geometric model of the observed target (such as its shape and size) and apply a pose estimation algorithm that takes the known model as a reference; however, such methods require prior knowledge of the target geometry [8,9], which greatly limits their scope of application. Deep learning-based methods train a network model offline for non-cooperative target pose estimation and then process the images captured by the camera online to estimate the relative pose in real time [10,11,12]. These methods are robust and insensitive to illumination, but they need a large amount of offline training data, which is not easy to obtain for non-cooperative targets. Point-feature-based methods extract feature points of the non-cooperative target, reconstruct them in 3D (three dimensions) from known geometric information or by stereo vision, and then apply a corresponding pose estimation algorithm [13,14,15]. Such algorithms are simple to implement and are therefore widely used for cooperative targets, but they are easily affected by illumination conditions and lack robustness. Line-feature-based methods use the lines and conic sections in the captured images, combined with stereo vision, to estimate the pose parameters. Compared with point-feature methods, they are more robust and have therefore received extensive attention in recent years. Line-feature-based pose estimation methods can be divided into linear methods and iterative methods. Linear methods directly use known 3D model features, or a 3D model obtained by stereo vision, to solve the PnL (Perspective-n-Line) problem, analogous to PnP (Perspective-n-Point) [16,17,18]. Iterative methods usually obtain an initial value with a linear method and then minimize an objective function iteratively to obtain the optimal pose estimate [19,20]. Pose estimation based on conic reconstruction mainly uses spatial analytic geometry and the perspective projection imaging model to solve the characteristic parameters of the spatial conic model in closed form [21]. In addition, line-based motion estimation algorithms are often used in tasks such as robot state estimation and aircraft ground simulation tests [22,23]. The main contributions of our work are as follows:
- (1) We propose a two-step relative pose estimation algorithm based on an uncertainty analysis of the line extraction. The method fully considers the uncertainty of the straight-line extraction to weight the error terms of the objective function, and is more robust in the space environment than feature-point-based methods.
- (2) We propose a novel distance error model for the reprojection of a line: instead of the conventional mean of the distances from the two endpoints of the detected line segment to the reprojected line, the error is defined as the distance between the two lines' representations in a parameter space, which is more reasonable in theory.
- (3) The proposed approach was verified in a ground test simulation environment and has been extensively evaluated in experiments and simulation analyses.
The rest of the paper is organized as follows: In Section 2, a new pose estimation method for non-cooperative targets based on line features is presented, which extracts the direction vectors and the intersection of the straight lines from the target images. In Section 3, the uncertainty of the straight-line extraction and the back-projection error are analyzed, and the uncertainty model and the straight-line error model are established. Section 4 discusses the experimental results, and Section 5 concludes the paper.
2. Principle of the Non-Cooperative Target Relative Pose Estimation
2.1. Problem Description of the Relative Pose Estimation
As shown in Figure 1, let the coordinate system of the measured target be $F_t$, and let the coordinate system of the binocular vision measurement unit be $F_c$, which in this paper is assumed to coincide with the left camera frame. Let $L_i$ denote a line feature on the target, $i = 1, \dots, n$, where $n$ is the number of lines; let $l_i^l$ and $l_i^r$ denote the images of the line in the left and right cameras, whose measurements are independent of each other; and let $t_k$ denote the moment at which the cameras collect data. The intrinsic and extrinsic calibration parameters of the binocular stereo vision measurement unit are $K$ and $[R_s \mid \vec{t}_s]$, respectively, and the calibration has been completed. The relative pose estimation problem based on straight-line features can then be described as a maximum a posteriori probability estimation problem, as follows.
Figure 1.
Schematic diagram of the relative pose estimation of a non-cooperative spacecraft.
That is, given the images $l_i^l$ and $l_i^r$ of the spatial straight lines $L_i$ on the measured object in the left and right cameras, estimate the rigid-body transformation between the coordinate system of the measured object and the coordinate system of the binocular measurement unit, together with the spatial straight lines:

$$\hat{X} = \arg\max_{X} P(X \mid Z) \qquad (1)$$

where $X = \{T_k, L_i\}$ represents the relative poses and the spatial lines of the target at the different times, $Z = \{l_i^l, l_i^r\}$ represents the spatial line features and their images in the cameras, and $\hat{X}$ represents the maximum a posteriori estimate of the relative pose.
2.2. Relative Pose Estimation between Spacecrafts
In order to solve the above relative pose estimation problem, this paper proposes a relative pose estimation algorithm that mainly includes two parts: (1) solving the initial value of the relative pose estimate; (2) iteratively optimizing the relative pose estimate. These two parts are described in detail below.
2.2.1. Initial Value Solution of the Relative Pose Estimation Based on the Intersection Vector
Theorem 1.
If it is known that there are two intersecting lines $L_1$ and $L_2$ in 3D space with images $l_i^l$ and $l_i^r$ in the left and right views, and the intrinsic and extrinsic parameters of the binocular stereo vision have been calibrated, then the transformation matrix of the visual measurement unit coordinate frame relative to the target coordinate frame can be solved [23].
Proof of Theorem 1.
As shown in Figure 2, let the left camera coordinate system be the visual measurement unit coordinate system. If the spatial line set of the target has been solved, a line $L_1$ whose direction vector $\vec{v}_1$ is arbitrarily selected serves as the x-axis of the target coordinate frame, so that $\vec{x} = \vec{v}_1 / \|\vec{v}_1\|$; then any non-parallel line $L_2$, whose direction vector is $\vec{v}_2$, is taken, and its cross product with $\vec{v}_1$ gives the y-axis direction vector of the target coordinate frame, as follows:
Figure 2.
Schematic diagram of the spatial straight-line perspective projection imaging.
$$\vec{y} = \frac{\vec{v}_1 \times \vec{v}_2}{\| \vec{v}_1 \times \vec{v}_2 \|} \qquad (2)$$

The z-axis direction vector of the target coordinate frame is obtained by the cross product of these two vectors:

$$\vec{z} = \vec{x} \times \vec{y} \qquad (3)$$
Therefore, we can construct a new matrix:

$$C = [\,\vec{x} \;\; \vec{y} \;\; \vec{z}\,] \qquad (4)$$

where $C$ collects the direction vectors of the linear features in the camera coordinate system. The direction vectors of the target coordinate axes represented in the target coordinate system form, according to the construction principle of the target coordinate system, the identity matrix $I$. If $R$ is assumed to represent the rotation transformation between the measurement unit coordinate system and the target coordinate system, then, according to the definition of the transformation between coordinate systems, we can write:

$$R\,I = C, \quad \text{i.e.,} \quad R = [\,\vec{x} \;\; \vec{y} \;\; \vec{z}\,] \qquad (5)$$
For the solution of the displacement vector $\vec{t}$, it can be obtained from the coordinates $P_t$ and $P_c$ of the straight-line intersection in the target coordinate frame and in the left camera coordinate frame (set as the measurement coordinate system), respectively:

$$\vec{t} = P_c - R\,P_t \qquad (6)$$

If the straight-line intersection in the target coordinate frame is taken as the origin of the target coordinate system, so that $P_t = \mathbf{0}$, the above equation reduces to:

$$\vec{t} = P_c \qquad (7)$$
□
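The initial-value construction above can be summarized in a few lines of code. The following is a minimal sketch (in Python/NumPy, not the authors' implementation), assuming the two line direction vectors and their intersection point have already been reconstructed in the left-camera frame:

```python
import numpy as np

def initial_pose_from_lines(v1, v2, p_c):
    """Initial pose from two intersecting 3D lines reconstructed in the
    left-camera frame: v1, v2 are the non-parallel direction vectors and
    p_c is their intersection, taken as the target-frame origin."""
    x = v1 / np.linalg.norm(v1)          # x-axis of the target frame
    y = np.cross(v1, v2)
    y /= np.linalg.norm(y)               # y-axis: normalized cross product, Eq. (2)
    z = np.cross(x, y)                   # z-axis completes the right-handed frame, Eq. (3)
    R = np.column_stack((x, y, z))       # rotation target -> camera, Eq. (5)
    t = np.asarray(p_c, dtype=float)     # translation: intersection point, Eq. (7)
    return R, t
```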
2.2.2. Nonlinear Optimization of the Pose Estimation
The initial relative pose between the measurement platform and the target was obtained by the solution in the previous section. Ideally, the direction vectors of the lines projected from the target coordinate system back to the camera coordinate system would equal the measured ones; because of errors, they are not equal. Therefore, for the attitude estimation, Equation (8) is adopted in this paper for a nonlinear optimization solution:

$$\hat{R} = \arg\min_{R} \sum_{i=1}^{n} \frac{1}{\sigma_{\theta i}^{l}} \left\| \vec{v}_i^{\,l} - R\,\vec{V}_i \right\|^2 + \frac{1}{\sigma_{\theta i}^{r}} \left\| \vec{v}_i^{\,r} - R_s R\,\vec{V}_i \right\|^2 \qquad (8)$$

where $R_s$ represents the binocular vision extrinsic rotation parameter, $R$ represents the rotation transformation matrix between the camera coordinate system and the target coordinate system, $\vec{v}_i^{\,l}$ and $\vec{v}_i^{\,r}$ represent the spatial straight-line direction vectors in the left and right camera coordinate systems, $\vec{V}_i$ represents the direction vector of the 3D straight line in the target frame, and $\sigma_{\theta i}$ is the orientation uncertainty.
For the displacement $\vec{t}$, the rotation transformation matrix $R$ is first obtained by solving Equation (8), with the initial value given by the initial-value solution method described above. Then the spatial straight lines are back-projected onto the imaging planes, and the bundle adjustment algorithm is used to solve Equation (9) to obtain the final pose estimate:

$$\hat{\vec{t}} = \arg\min_{\vec{t}} \sum_{i=1}^{n} \frac{1}{\sigma_{\rho i}^{l}} \, d\!\left(l_i^{\,l},\, \hat{l}_i^{\,l}(K_l, R, \vec{t}, L_i)\right)^2 + \frac{1}{\sigma_{\rho i}^{r}} \, d\!\left(l_i^{\,r},\, \hat{l}_i^{\,r}(K_r, R_s, \vec{t}_s, R, \vec{t}, L_i)\right)^2 \qquad (9)$$

wherein $n$ represents the number of straight lines in the 3D space, $L_i$ represents a spatial line on the non-cooperative target, $\hat{l}_i^{\,l}$ and $\hat{l}_i^{\,r}$ represent the projections of the 3D space straight line in the left and right cameras, $K_l$ and $K_r$ represent the intrinsic parameter matrices of the left and right cameras, $d(\cdot,\cdot)$ is the line distance defined in Section 3.3, and $\sigma_{\rho i}$ represents the line position uncertainty. The LM (Levenberg–Marquardt) optimization algorithm is used to solve the above two nonlinear optimization problems.
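As an illustration of the first (attitude) step, the following sketch refines the rotation with Levenberg–Marquardt using SciPy; the helper names and the inverse-uncertainty weights are our assumptions for exposition, not the authors' code:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_rotation(R0, dirs_cam, dirs_tgt, weights):
    """Minimize the uncertainty-weighted residuals between measured line
    directions in the camera frame (dirs_cam, n x 3) and rotated target-frame
    directions (dirs_tgt, n x 3) -- a sketch of the Eq. (8) step."""
    def residuals(rvec):
        R = Rotation.from_rotvec(rvec).as_matrix()
        res = dirs_cam - dirs_tgt @ R.T        # per-line direction residuals
        return (weights[:, None] * res).ravel()
    r0 = Rotation.from_matrix(R0).as_rotvec()  # start from the initial value
    sol = least_squares(residuals, r0, method="lm")  # Levenberg-Marquardt
    return Rotation.from_rotvec(sol.x).as_matrix()
```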
3. Line Features Reconstruction and the Uncertainty Analysis
3.1. Solving the 3D Space Straight Line and Its Intersection
According to the pose estimation algorithm proposed above, realizing the pose estimation of the non-cooperative target requires solving the 3D space straight-line equations and determining the uncertainty weights of the nonlinear optimization objective function. Therefore, this section first uses binocular vision to reconstruct the 3D space straight lines, then analyzes the uncertainty of the direction and position of the line imaging and constructs its uncertainty description matrix.
As shown in Figure 2, it is assumed that the intrinsic parameters of the two cameras and the extrinsic parameters between them have been calibrated. The images of a 3D space line in the two views can be expressed as:

$$l^l:\; a_l u + b_l v + c_l = 0, \qquad l^r:\; a_r u + b_r v + c_r = 0 \qquad (10)$$

The two back-projection planes $\pi_l$ and $\pi_r$ passing through the spatial straight line can be expressed as:

$$\pi_l = P_l^{\mathsf T}\, l^l, \qquad \pi_r = P_r^{\mathsf T}\, l^r \qquad (11)$$

If the left camera coordinate frame is taken as the reference coordinate frame, then in the reference frame a 3D space straight line can be expressed as the intersection of the two planes:

$$L:\; \begin{cases} \pi_l^{\mathsf T} X = 0 \\ \pi_r^{\mathsf T} X = 0 \end{cases} \qquad (12)$$

where $X = (x, y, z, 1)^{\mathsf T}$ is a homogeneous point on the line, and the projection matrices can be expressed as:

$$P_l = K_l\, [\,I \mid \mathbf{0}\,], \qquad P_r = K_r\, [\,R_s \mid \vec{t}_s\,] \qquad (13)$$

When a point $\tilde{m}$ lies on an image line, $l^{\mathsf T} \tilde{m} = 0$ with $\tilde{m} = P X$. According to Equations (10)–(13), we can obtain:

$$l^{l\,\mathsf T} P_l X = 0, \qquad l^{r\,\mathsf T} P_r X = 0 \qquad (14)$$

wherein $\tilde{m}$ represents the coordinates of the points on the straight-line image. Writing $\pi_l = (\vec{n}_l, d_l)$ and $\pi_r = (\vec{n}_r, d_r)$, the direction vector of the straight line in 3D space can be obtained by directly solving the following equations:

$$\vec{n}_l^{\mathsf T} \vec{v} = 0, \quad \vec{n}_r^{\mathsf T} \vec{v} = 0 \;\;\Rightarrow\;\; \vec{v} = \vec{n}_l \times \vec{n}_r \qquad (15)$$

Similarly, a point on the line can be obtained, which fixes the 3D line completely. The intersection of two such straight lines can then be obtained from the reconstructed 3D lines (for skew lines, the midpoint of the common perpendicular segment is taken). For a detailed solution, please refer to [24].
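A compact sketch of this plane-intersection reconstruction follows (illustrative Python/NumPy, with assumed conventions: image lines as coefficient vectors (a, b, c) and 3x4 projection matrices):

```python
import numpy as np

def triangulate_line(l_left, l_right, P_left, P_right):
    """Reconstruct a 3D line as the intersection of the two back-projection
    planes pi = P^T l (Eqs. (11)-(15))."""
    pi1 = P_left.T @ l_left                  # plane through line and left center
    pi2 = P_right.T @ l_right                # plane through line and right center
    n1, d1 = pi1[:3], pi1[3]
    n2, d2 = pi2[:3], pi2[3]
    v = np.cross(n1, n2)                     # direction orthogonal to both normals
    v /= np.linalg.norm(v)
    # One point on the line: least-squares solution of the two plane equations
    A = np.vstack((n1, n2))
    p0, *_ = np.linalg.lstsq(A, -np.array([d1, d2]), rcond=None)
    return v, p0                             # direction vector and a point on L
```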
3.2. Uncertainty Analysis of the Straight Line Extraction
As shown in Figure 3, the uncertainty of the straight-line feature extraction can be described by two components: the angle uncertainty and the radius (position) uncertainty of the extracted line.
Figure 3.
Uncertainty analysis of the straight lines.
3.2.1. Uncertainty Modeling of the Straight Line
If the size of the uncertainty region in the parameter space is assumed to be $\Delta\theta \times \Delta\rho$, then, according to the law of cosines, the angular uncertainty can be expressed as:

$$\Delta\theta = \arccos \frac{d_1^2 + d_2^2 - d_{12}^2}{2\, d_1 d_2} \qquad (16)$$

where $d_1$, $d_2$, and $d_{12}$ can be obtained by solving the inner tangents of the two ellipses, which, after rearranging, satisfy the general conic equation:

$$A x^2 + B x y + C y^2 + D x + E y + F = 0 \qquad (17)$$

In this formula, $A$, $B$, $C$, $D$, $E$, and $F$ are, respectively, the coefficients of the general equations of the ellipses describing the positioning uncertainty of the two feature points on the line; they can be obtained by modeling the positioning uncertainty of each point on the straight line as an ellipse [25]. Therefore, the direction uncertainty of the straight line can be expressed as:

$$\sigma_\theta = \Delta\theta \qquad (18)$$

The position uncertainty of the straight line can be defined as $\Delta\rho = |\rho_1 - \rho_2|$, wherein $\rho_1$ and $\rho_2$ are the distances from the origin to the two outer tangent lines; then, according to the distance formula from a point to a line, the position uncertainty of the straight line is:

$$\sigma_\rho = |\rho_1 - \rho_2| \qquad (19)$$

wherein $\rho_1$ and $\rho_2$ can be obtained by solving the outer tangents of the two ellipses in Equation (17).

Therefore, the uncertainty model of the linear projection in the 3D space can be modeled as a combination of the orientation uncertainty and the position uncertainty:

$$\Lambda = \begin{bmatrix} \sigma_\theta^2 & 0 \\ 0 & \sigma_\rho^2 \end{bmatrix} \qquad (20)$$
3.2.2. Linear Filtering based on the Uncertainty Size
Assume that the uncertainty covariance matrix of the 3D reconstruction of a spatial straight line is $\Sigma_L$. The images of the 3D space straight line in the two cameras are $l^l$ and $l^r$ with uncertainty models $\Lambda_l$ and $\Lambda_r$, and because they are independent of each other, the covariance matrix of the spatial line imaging can be represented as:

$$\Sigma = \begin{bmatrix} \Lambda_l & 0 \\ 0 & \Lambda_r \end{bmatrix} \qquad (21)$$

wherein $\Lambda_l$ and $\Lambda_r$ are given by Equation (20) for the left and right images, respectively. Moreover, according to Equations (11) and (12), the 3D space straight line can be written as a function $L = g(l^l, l^r)$. Therefore, by first-order error propagation, the covariance matrix of the straight-line solution in the 3D space can be expressed as:

$$\Sigma_L = J \, \Sigma \, J^{\mathsf T} \qquad (22)$$

wherein $J = \left[ \partial g / \partial l^l \;\; \partial g / \partial l^r \right]$ is the Jacobian of $g$ with respect to the image-line parameters.
According to Equation (22), the uncertainty covariance matrix of each straight-line feature is calculated, and the determinant of this matrix is then used to exclude the feature lines with large uncertainty, so as to determine the lines participating in the pose estimation and to improve the accuracy of the line-based non-cooperative target pose estimation.
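A minimal sketch of this determinant-based filtering (illustrative Python; the keep_ratio threshold is our assumption, not a value from the paper):

```python
import numpy as np

def filter_lines(cov_list, keep_ratio=0.75):
    """Rank reconstructed lines by the determinant of their uncertainty
    covariance (Eq. (22)) and keep the most certain fraction."""
    dets = np.array([np.linalg.det(C) for C in cov_list])
    order = np.argsort(dets)                 # small determinant = low uncertainty
    n_keep = max(2, int(keep_ratio * len(cov_list)))
    return order[:n_keep]                    # indices of the retained lines
```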
3.3. Line Reprojection Error Analysis
The reprojection error of a line is currently defined, in general, as the average distance from the two endpoints of the detected line segment to the reprojected line. As shown in Figure 4, this definition has the following problems:
Figure 4.
Schematic diagram of the straight-line distance error analysis.
- (1) When the detected line segment and the reprojected line are positioned as in (a) and (b), the two cases yield equal errors even though the relative configurations of the two lines are completely different.
- (2) When the detected line segment and the reprojected line are positioned as in (c) and (d), the position of the segment along the line directly affects the error even though the direction vector of the line remains unchanged.
Intuitively, the difference between two lines can be described more accurately by directly solving the "distance" between them. To solve the above problems, this paper transforms a straight line into a parameter space and defines the reprojection error of the straight line as the distance between two points in the parameter-space Cartesian coordinate system.
As shown in Figure 5, in the parameter-space Cartesian coordinate system, a line is represented as a point:

$$y = a x + b \;\longleftrightarrow\; (a, b) \qquad (23)$$
Figure 5.
Schematic diagram of the linear distance representation in the different coordinate systems.
Using the Cartesian coefficients $(a, b)$ to represent a line fails when the line is vertical or nearly vertical, since the slope $a$ becomes infinite or nearly infinite and cannot be represented in the $a$–$b$ parameter space. To solve this problem, we can use the polar-coordinate representation:

$$x \cos\theta + y \sin\theta = \rho \qquad (24)$$
Therefore, using the property that distance is independent of the coordinate system, the distance between two straight lines can be defined as the distance between their parameter points $(\rho_1, \theta_1)$ and $(\rho_2, \theta_2)$:

$$d(l_1, l_2) = \sqrt{(\rho_1 - \rho_2)^2 + (\theta_1 - \theta_2)^2} \qquad (25)$$
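A small sketch of this parameter-space distance (illustrative Python; the angle-wrapping step is our addition to keep the θ difference well defined):

```python
import numpy as np

def line_to_polar(l):
    """Map an image line a*x + b*y + c = 0 to polar parameters (rho, theta)
    of x*cos(theta) + y*sin(theta) = rho (Eq. (24)), avoiding infinite slopes."""
    a, b, c = l
    n = np.hypot(a, b)
    rho, theta = -c / n, np.arctan2(b, a)
    if rho < 0:                              # normalize so that rho >= 0
        rho, theta = -rho, theta + np.pi
    return rho, theta

def line_distance(l1, l2):
    """Line reprojection error as the distance between the two parameter
    points (Eq. (25))."""
    r1, t1 = line_to_polar(l1)
    r2, t2 = line_to_polar(l2)
    dt = np.arctan2(np.sin(t1 - t2), np.cos(t1 - t2))  # wrapped angle difference
    return np.hypot(r1 - r2, dt)
```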
Once the uncertainty model of the straight line and the reprojection error model of the straight line are obtained, they can be substituted into the nonlinear optimization objective function and solved iteratively by the nonlinear optimization algorithm to obtain an accurate pose estimate of the non-cooperative target. The flow of the non-cooperative target pose estimation algorithm, considering the straight-line uncertainty, is shown in Figure 6.
Figure 6.
Flow chart of the outer space non-cooperative target pose estimation algorithm.
4. Experiments and Analysis
To verify the feasibility, measurement accuracy, and robustness of the proposed non-cooperative pose estimation algorithm, a simulation analysis and physical test experiments were carried out, respectively.
4.1. Simulation and Analysis
MATLAB is used to generate the feature-point data on the lines by numerical simulation, and the binocular vision algorithm then processes the data offline to verify the performance of the different pose estimation algorithms. The simulation model of the target motion used in the experiment is shown in Table 1.
Table 1.
Simulation parameter setting of the target motion.
In this section, a mathematical simulation model is established according to the measurement requirements and the characteristics of the measurement target, and the Monte Carlo method is used for the simulation. The specific process of the numerical simulation is as follows. The 3D motion of four intersecting edge lines on a rigid body (of size 500 mm × 500 mm) is simulated; fixed noise is added to the intersection points between the lines, and the point-positioning uncertainty of the stereo vision 3D positioning system is simulated by Gaussian white noise with a mean of 0 and a variance of 20 mm. To simulate occlusion, several consecutive frames of the 3D point coordinate data are removed. The target moves along a fixed direction from 12,000 mm to 2000 mm. The outer space non-cooperative target is in a slow-spin instability state, with a spin angular velocity in the range 1~10°/s and a maximum nutation angle of 10°. Finally, the pose estimation algorithm based on line features proposed in this paper is evaluated and compared with the direct linear method [20], the feature point method [12], and the odometry method [15].
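For concreteness, the noisy line observations can be generated as in the following sketch (illustrative Python; the point count and the use of the stated noise level as a standard deviation are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_edge_points(p_start, p_end, n_pts=20, sigma=20.0):
    """Sample points along one simulated edge line of the target and add
    zero-mean Gaussian noise (sigma in mm, per the stated noise level)."""
    s = np.linspace(0.0, 1.0, n_pts)[:, None]
    pts = (1.0 - s) * p_start + s * p_end    # evenly spaced points on the edge
    return pts + rng.normal(0.0, sigma, pts.shape)
```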
4.1.1. Analysis of the Relationship between the Measurement Accuracy and the Measurement Frequency
According to actual research, the image acquisition frequency of typical spaceborne high-resolution cameras does not exceed 10 Hz. To verify the relationship between the pose estimation accuracy and the sampling frequency of the measured data, this paper uses simulation analysis to evaluate the pose measurement errors at data collection frequencies of 10 Hz and 2.5 Hz, respectively, and gathers statistics. The results are shown in Figure 7.
Figure 7.
Pose estimation accuracy at the different measurement frequencies. (a) Attitude measurement error at 10 Hz; (b) Position measurement error at 10 Hz; (c) Attitude measurement error at 2.5 Hz; (d) Position measurement error at 2.5 Hz.
Figure 7 shows the following results:
- (1) When the simulation data acquisition frequency is 10 Hz, the pose measurement results are shown in Figure 7a,b: the maximum measurement error of the attitude angle is less than 1° (3σ) and that of the position measurement is less than 2 mm (3σ), so the simulation results meet the pose measurement requirements.
- (2) When the data acquisition frequency is 2.5 Hz, the pose calculation results of the numerical simulation are shown in Figure 7c,d: the attitude measurement results show a divergent trend during the occluded interval, and the measurement error is larger. This is because the space non-cooperative target undergoes nonlinear motion while fewer data are collected, resulting in fewer constraints participating in the optimization.
- (3) For the position measurement, it can be seen from Figure 7b,d that the results are basically consistent with the theoretical set values. Comparing the position and attitude results, the attitude measurement is more sensitive to the 3D positioning accuracy, and the attitude accuracy decreases significantly compared with that at a sampling frequency of 10 Hz. This is mainly because the estimation lag error caused by the optimization algorithm increases significantly as the amount of measurement data decreases.
4.1.2. Algorithm Measurement Error Statistics and Analysis
To verify the advantages of the algorithm proposed in this paper over the direct linear method [20], the feature point method [12], and the visual odometry method [15], the mean absolute error was computed. The parameters of the simulation analysis were set according to Table 1, and each method was simulated 100 times to obtain the mean and variance. The results are shown in Figure 8.
Figure 8.
Statistics of the measurement accuracy of the different algorithms. (a) Attitude measurement error; (b) Position measurement error.
Figure 8 shows the following results:
- (1) Comparing the measurement results of the four methods, the visual odometry method has the worst pose estimation accuracy, reaching a maximum attitude angle error of 5.8° and a maximum position error of 37 mm. This is mainly because the visual odometry method accumulates error, so its error grows larger and larger over time.
- (2) In terms of attitude error, the accuracy of the proposed algorithm is comparable to that of the feature point method, at about 0.6° in attitude and 2.2 mm in position, but the proposed algorithm has a smaller attitude-angle variance. This is mainly because the straight-line features participating in the attitude calculation are selected according to the size of the line-reconstruction uncertainty.
- (3) In the attitude angle results, the errors of the pitch and yaw angles are large, and in the position results the z-axis error is large, which is mainly caused by the large uncertainty of vision measurement in the z-axis (depth) direction.
4.1.3. Comparison Experiment of the Algorithm Robustness
To verify the influence of image measurement noise on the relative pose estimation algorithm for non-cooperative targets, this paper uses simulation analysis for verification. We use $R$ and $\vec{t}$ to denote the true values, and $\hat{R}$ and $\hat{t}$ to denote the estimated pose. The errors $e_R$ and $e_t$ are defined as:

$$e_R = \arccos\!\left( \frac{\operatorname{tr}(\hat{R} R^{\mathsf T}) - 1}{2} \right), \qquad e_t = \| \hat{t} - \vec{t}\, \| \qquad (26)$$
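These error metrics can be computed as in the following sketch (illustrative Python; the geodesic rotation angle and Euclidean translation norm are assumed standard definitions):

```python
import numpy as np

def pose_errors(R_est, t_est, R_true, t_true):
    """Rotation error as the geodesic angle of R_est @ R_true.T (degrees) and
    translation error as the Euclidean norm (Eq. (26))."""
    cos_a = (np.trace(R_est @ R_true.T) - 1.0) / 2.0
    e_R = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))  # clip for safety
    e_t = np.linalg.norm(np.asarray(t_est) - np.asarray(t_true))
    return e_R, e_t
```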
For the experimental results to be statistically significant, we take the average of 100 repeated experiments under each parameter condition in this section.

To evaluate the influence of different noise levels on the pose estimation results, the number of feature points on the lines is set to 20 for the feature point method and the visual odometry method, with an error mean of 0 and a variance varied from 1 to 30 pixels; for the direct linear method and the proposed method, the number of feature lines is set to eight, and the eight lines are constructed directly from the above feature points.
Figure 9 shows the following results:
Figure 9.
The influence of the different image measurement noise levels on the measurement accuracy of the algorithm. (a) Attitude measurement error; (b) Position measurement error.
- (1) The measurement errors of the attitude and position increase with the feature extraction error, and the proposed algorithm has the highest accuracy among all the compared algorithms.
- (2) The proposed algorithm has a smaller error growth slope than the other algorithms, and therefore stronger robustness.
- (3) The position accuracy of the proposed algorithm is comparable to that of the feature point method, mainly because the two are theoretically equivalent at the same error level. However, the algorithm proposed in this paper filters the straight lines involved in the pose estimation, so its positioning accuracy is slightly higher.
4.2. Actual Experiment and Analysis
To verify the pose estimation algorithm for the outer space non-cooperative target proposed in this paper, a binocular vision pose estimation system was set up. The system mainly includes: two cameras with a resolution of 2048 × 2048 pixels and a pixel size of 0.0055 mm; a PC with 8 GB of memory and a clock frequency of 3.7 GHz; two zoom lenses (AF ZOOM-Nikon 24–85 mm/1:2.8-4D); and a spacecraft mock-up with a natural size of 1000 mm × 900 mm together with its motion simulator. The sampling frequency of the cameras is 10 Hz, and the algorithm runs in MATLAB 2017b.
4.2.1. Static Pose Estimation Experiment
Firstly, the binocular vision measurement system is calibrated. In this paper, the calibration algorithm, based on a planar target is adopted [26], and the calibration results are shown in Table 2.
Table 2.
Calibration results of the binocular vision.
Then, the feature point algorithm, the visual odometry algorithm, the direct linear algorithm, and the proposed algorithm are used to estimate the motion of the simulated spacecraft. The measurement results are shown in Table 3. The reference coordinate system of the pose parameters is the coordinate system of the simulated pose-setting device, and the transformation between it and the binocular vision measurement system is determined using third-party calibration equipment (such as a total station).
Table 3.
Statistics of the static pose estimation experimental results.
As shown in Table 3, except that its average error is close to that of the feature-point-based method, the algorithm proposed in this paper is more accurate in both average error and maximum error than the other algorithms, which indicates that it has better stability and accuracy.

Moreover, since the displacement between adjacent frames is large in the static experiment and the illumination of the target surface coating varies greatly, the odometry-based algorithm cannot obtain enough matched feature points to solve the pose at most moments.
4.2.2. Dynamic Pose Estimation Experiment
To verify the dynamic performance of the algorithm proposed in this paper, a linear motion unit is used: the relative attitude of the target remains unchanged, the target moves at a uniform speed from 12 m to 2 m, and the binocular camera captures the target images, as shown in Figure 10.
Figure 10.
Non-cooperative target motion acquisition images.
The algorithm proposed in this paper is used to estimate the relative pose between the camera and the target, and its error is calculated, as shown in Figure 11.
Figure 11.
Non-cooperative target pose solution results.
As can be seen from Figure 11, over the whole measurement process the position measurement errors of the proposed pose estimation algorithm are all less than 20.0 mm and the attitude angle measurement errors are all less than 2.5°, indicating that the proposed algorithm fully meets the accuracy requirements of rendezvous and docking with a non-cooperative target spacecraft.
Moreover, the actual measurement error is greater than the simulation error because changes in the illumination environment during the measurement process lead to larger line extraction errors. In addition, the system pose measurement involves unifying the global coordinate systems, which introduces further errors into the final results.
5. Conclusions
In this paper, binocular stereo vision is used to obtain the 3D space straight lines of a non-cooperative spacecraft, and 3D reconstruction of the spatial straight lines is used to perceive the 3D feature information of the target. Combined with the uncertainty analysis of the straight-line reconstruction, the accuracy of the features required for pose estimation is improved and the robustness of the algorithm is enhanced. Finally, from the parameters of the straight lines, combined with the intersection-vector algorithm and the nonlinear optimization algorithm, the position and attitude solution method is derived. The simulation and actual experiments show that the proposed algorithm can accurately estimate the relative pose of a spacecraft at ultra-close range.
Author Contributions
Conceptualization, Y.L. and X.X.; methodology, Z.M.; software, Y.L. and Y.Y.; validation, Y.L., Y.Y. and X.X.; formal analysis, Y.L.; investigation, Y.L.; resources, Y.L.; data curation, Y.Y.; writing—original draft preparation, X.X.; writing—review and editing, X.X.; visualization, Y.Y.; supervision, Y.L.; project administration, Z.M.; funding acquisition, Z.M. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the China Postdoctoral Science Foundation, grant number 2021M702078.
Data Availability Statement
Not applicable.
Acknowledgments
Thanks to Shanghai Agriculture and Rural Committee.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Liang, B.; Du, X.; Li, C.; Xu, W. Advances in Space Robot On-orbit Servicing for Non-cooperative Spacecraft. ROBOT 2012, 34, 242–256. [Google Scholar] [CrossRef]
- He, Y.; Liang, B.; He, J.; Li, S. Non-cooperative spacecraft pose tracking based on point cloud feature. Acta Astronaut. 2017, 139, 213–221. [Google Scholar] [CrossRef]
- Cougnet, C.; Gerber, B.; Heemskerk, C.; Kapellos, K.; Visentin, G. On-orbit servicing system of a GEO satellite fleet. In Proceedings of the 9th ESA Workshop on Advanced Space Technologies for Robotics and Automation ‘ASTRA 2006’ ESTEC, Noordwijk, The Netherlands, 28–30 November 2006. [Google Scholar]
- Ibrahim, S.K.; Ahmed, A.; Zeidan, M.A.E.; Ziedan, I.E. Machine Learning Methods for Spacecraft Telemetry Mining. IEEE Trans. Aerosp. Electron. Syst. 2018, 55, 1816–1827. [Google Scholar] [CrossRef]
- Woods, J.O.; Christian, J.A. Lidar-based relative navigation with respect to non-cooperative objects. Acta Astronaut. 2016, 126, 298–311. [Google Scholar] [CrossRef]
- Fan, B.; Du, Y.; Wu, D.; Wang, C. Robust vision system for space teleoperation ground verification platform. In Proceedings of the 32nd Chinese Control Conference, Xi’an, China, 26–28 July 2013. [Google Scholar]
- Opromolla, R.; Fasano, G.; Rufino, G.; Grassi, M. A review of cooperative and uncooperative spacecraft pose determination techniques for close-proximity operations. Prog. Aerosp. Sci. 2017, 93, 53–72. [Google Scholar] [CrossRef]
- Terui, F.; Kamimura, H.; Nishida, S.I. Motion estimation to a failed satellite on orbit using stereo vision and 3D model matching. In Proceedings of the 9th International Conference on Control, Automation, Robotics and Vision, Singapore, 5–8 December 2006. [Google Scholar]
- Terui, F.; Kamimura, H.; Nishida, S. Terrestrial experiments for the motion estimation of a large space debris object using image data. In Proceedings of the Intelligent Robots and Computer Vision XXIII: Algorithms, Techniques, and Active Vision, Boston, MA, USA, 23–25 October 2005. [Google Scholar]
- Sharma, S.; Beierle, C.; D’Amico, S. Pose estimation for non-cooperative spacecraft rendezvous using convolutional neural networks. In Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 3–10 March 2018. [Google Scholar]
- Sharma, S.; D’Amico, S. Neural network-based pose estimation for noncooperative spacecraft rendezvous. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 4638–4658. [Google Scholar] [CrossRef]
- Du, X.; He, Y.; Chen, L.; Gao, S. Pose estimation of large non-cooperative spacecraft based on extended PnP model. In Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), Qingdao, China, 3–7 December 2016. [Google Scholar]
- Li, Y.; Huo, J.; Yang, M.; Cui, J. Non-cooperative target pose estimate of spacecraft based on vectors. In Proceedings of the Chinese Control Conference (CCC), Guangzhou, China, 27–30 July 2019. [Google Scholar]
- Pan, H.; Huang, J.; Qin, S. High accurate estimation of relative pose of cooperative space targets based on measurement of monocular vision imaging. Optik 2014, 125, 3127–3133. [Google Scholar] [CrossRef]
- De Jongh, W.; Jordaan, H.; Van Daalen, C. Experiment for pose estimation of uncooperative space debris using stereo vision. Acta Astronaut. 2020, 168, 164–173. [Google Scholar] [CrossRef]
- Xu, C.; Zhang, L.; Cheng, L.; Koch, R. Pose Estimation from Line Correspondences: A Complete Analysis and a Series of Solutions. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1209–1222. [Google Scholar] [CrossRef] [PubMed]
- Wang, P.; Xu, G.; Cheng, Y. A novel algebraic solution to the perspective-three-line pose problem. Comput. Vis. Image Underst. 2020, 191, 102711. [Google Scholar] [CrossRef]
- Wang, P.; Xu, G.; Cheng, Y.; Yu, Q. A simple, robust and fast method for the perspective-n-point Problem. Pattern Recognit. Lett. 2018, 108, 31–37. [Google Scholar] [CrossRef]
- Mirzaei, F.M.; Roumeliotis, S.I. Globally optimal pose estimation from line correspondences. In Proceedings of the IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011. [Google Scholar]
- Li, X.; Zhang, Y.; Liu, J. A direct least squares method for camera pose estimation based on straight line segment correspondences. Acta Opt. Sin. 2015, 35, 0615003. [Google Scholar]
- Liu, Y.; Xie, Z.; Zhang, Q.; Zhao, X.; Liu, H. A new approach for the estimation of non-cooperative satellites based on circular feature extraction. Robot. Auton. Syst. 2020, 129, 103532. [Google Scholar] [CrossRef]
- He, Y.; Zhao, J.; Guo, Y.; He, W.; Yuan, K. PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features. Sensors 2018, 18, 1159. [Google Scholar] [CrossRef] [PubMed]
- Cui, J.; Li, Y.; Huo, J.; Yang, M.; Wang, Y.; Li, C. A measurement method of motion parameters in aircraft ground tests using computer vision. Measurement 2021, 174, 108985. [Google Scholar] [CrossRef]
- Wang, Y. Research on Motion Parameters Measuring System based on Intersected Planes. Harbin Inst. Technol. 2015. Available online: https://kns.cnki.net/KCMS/detail/detail.aspx?dbname=CMFD201601&filename=1015982210.nh (accessed on 18 September 2022).
- Cui, J.; Min, C.; Feng, D. Research on pose estimation for stereo vision measurement system by an improved method: Uncertainty weighted stereopsis pose solution method based on projection vector. Opt. Express 2020, 28, 5470–5491. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).